Reducing the severity of a traffic accident

Improving the system of preventive measures aimed at reducing the severity of road accidents is an urgent task. Road mortality is constantly increasing, and an integrated approach to creating safe road conditions is necessary. The purpose of this study is to analyze promising guardrail designs intended to prevent the uncontrolled exit of vehicles from the roadway, and to develop crash cushions. Guardrails should not only restrain vehicles but also be safe for road users and preserve their own elements after a collision. Our analytical studies have shown that, to reduce mechanical damage to vehicles and the severity of injuries to the driver and passengers, it is necessary to develop guardrails that absorb impact energy at the moment of contact between the car and the guardrail. The crash cushion design considered here provides a damping effect when a car strikes the guardrail while limiting the displacement of its elements relative to their original position. This is achieved by using several materials with different strength characteristics in the guardrail design, which contributes to its gradual destruction in several stages and smooth energy dissipation.

Introduction

An objective assessment of the level of traffic safety on roads indicates that constantly increasing traffic causes a large number of conflict situations and, as a result, has led to an increase in the number of road traffic accidents (RTAs). Driving on modern highways requires a high concentration of attention from drivers, constant improvement of their professional skills, and strict observance of traffic rules. At present, it is obvious that the growth of road injuries cannot be reversed by decisions at the legislative level alone. An increase in the number of administrative sanctions should be considered only a preventive measure (in most cases, it is a punishment for an already committed violation of traffic rules), often effective only in the presence of traffic police officers or within the coverage area of video recording cameras. The number of fatal accidents and serious injuries sustained in accidents can be reduced through an integrated approach to road safety. Strategies and programs for improving road traffic should include the following measures: reducing the risk of road traffic accidents, preventing accidents, reducing the number of injuries from road traffic accidents, and reducing the consequences of injuries by improving health care after road traffic accidents [1,2]. A report by the World Health Organization (WHO) indicates that road traffic deaths continue to increase, amounting to 1.35 million deaths per year, and injuries sustained in car accidents are the eighth leading cause of death [3]. At the same time, the risk of death due to road traffic accidents in low-income countries is still three times higher than in high-income countries. The highest rates are observed in Africa (26.6 cases per 100,000 people), and the lowest in Europe (9.3 cases per 100,000 people). According to official statistics, in the Russian Federation in 2018 there were more than 168 thousand accidents, in which 18,214 people died and 214,853 people received injuries of varying severity [4]. These grim statistics require the development of modern ways to prevent accidents on roads.
Legislative decisions aimed at reducing road traffic injuries have been made in a number of countries. The main goal of such decisions is "Vision Zero", i.e. the number of fatal accidents should be reduced to zero in the long term. In January 2018, the Government of the Russian Federation approved the Road Safety Strategy in the Russian Federation for 2018-2024, which stated that one of the main directions of its implementation is improving the road network in terms of road safety, including the development of road traffic management [5,6]. According to statistics, approximately every fourth accident is caused by an unintentional (uncontrolled) exit of a car from the roadway, which is characterized by serious damage to vehicles, deaths and personal injuries, as well as material damage to transported goods. One way to prevent such accidents is to install various guardrails along the road. Guardrails keep the car on the roadway, but in doing so the car receives mechanical damage, and since damping occurs mainly through deformation of the body, this is extremely dangerous for occupants, who experience severe overloads. At the same time, a deformed body can create additional difficulties in evacuating the driver and passengers from the damaged car. Given the high likelihood of serious injury to people and serious mechanical damage to the vehicle when hitting an obstacle (guardrail), it is necessary that, if a car leaves the road and strikes the guardrail, it is not thrown back into the traffic lane and the damage to the vehicle remains minimal [7]. In this work, we analyze the structures of modern guardrails and consider the possibility of using a crash cushion that reduces the severity of the consequences of an accident.

Discussion

Constantly increasing traffic volumes and permissible speed limits impose more stringent requirements on road infrastructure and the use of road safety systems. One way to improve road safety is to use road restraint systems. Guardrails not only reduce the number of road traffic accidents but also reduce their severity [8]. Considerable attention is paid to the choice of guardrails for the safe operation of roads and bridge structures and the safety of road users: drivers and passengers of motor vehicles, users of non-motorized and horse-drawn vehicles, pedestrians, and domestic and wild animals. Guardrails are considered a simple but at the same time very important element of road infrastructure. A sound choice of guardrail design can significantly reduce the number of road accidents and minimize the damage in those that do occur. One of the most important characteristics of a guardrail is its holding capacity (energy intensity), i.e. the ability of the guardrail to keep vehicles on the road or bridge structure while not allowing them to tip over or cross it. The holding capacity is divided into levels, each with its own range of impact energy; it is determined by the category of the road, the permissible speed and the group of road conditions [9]. Today, there are many scientific works by both foreign and domestic authors on the topic of guardrails. The design of domestic and foreign guardrails was studied by Badoyan N.Sh., Schepetova L.S. and Pugin K.G. [10]. The authors presented a detailed analysis of guardrails used in highway construction in many countries of the world.
The paper considers the theoretical and empirical aspects of guardrails. An analysis of traditional guardrails and new foreign developments is provided. A new type of gabion barrier, which has received almost no attention from researchers as a tool to protect against traffic accidents, is presented. To conduct an empirical assessment of the benefits of gabions in comparison with classical guardrails, the article presents a model, as well as software packages with which the effectiveness of gabions as safety barriers can be evaluated. Studies and experience of road and bridge construction in a number of countries, including domestic experience in recent years, have shown that gabion structures offer very broad capabilities and properties, such as efficiency, strength and durability, which are key in highway construction. The work of the British scientists G. Amato, F. O'Brien, B. Ghosh and C. Simms [11], who evaluated the potential of gabions as safety barriers on roads, should also be emphasized. To this end, the authors created a prototype gabion safety barrier, which during the study was subjected to iterative refinement and testing using crash tests in accordance with the European standard EN 1317 for containment level N1. The crash tests performed in the empirical part of the work revealed that a collision between a car and the gabion safety barrier led to a rollover of the vehicle and a rupture of the gabion net. Unlike classic guardrails, gabion structures have many advantages, thanks to which it is possible to reduce the risk of accidents, as well as the severity of the consequences for humans [12]. French scientists have conducted studies to assess the impact of longitudinal guardrails located on median strips and hard shoulders of toll roads on the severity of accidents involving vehicles moving off the road. The study was based on accidents involving injuries and property damage, recorded over 15 years on the French network of toll motorways with a length of about 2,000 km. When a vehicle left the roadway onto a hard shoulder, the longitudinal guardrail halved the risk of injury. The specific one-sided W-beam guardrail ("GS4") proved to be the best solution for cars, buses and trucks. This does not affect the feasibility of special guardrails for bridges or concrete barriers where a narrow working width is required. Longitudinal guardrails are important for the safety of road users, providing a "forgiving" infrastructure in the event a vehicle exits the road, provided there are very few motorized two-wheeled vehicles on the roadway [13,14]. Employees of Orenburg State University have conducted research on a safe guardrail called the Road Roller System, a new type of road safety system that disperses the force of an impact. Instead of a sturdy metal beam, this design uses many rollers. When a car strikes the roller guardrail, even at a right angle, the rollers redirect the impact force, converting energy that could otherwise be fatal into a much weaker action. Technically, the Road Roller System consists of sturdy steel pipes, between which plastic rollers rotate around their own axes. The rollers are bright yellow with reflective stripes. In a collision, the guardrail bends and works like a shock absorber, absorbing most of the impact, while the rotating rollers dissipate the car's inertia and change its emergency trajectory (turning the car parallel to the guardrail and smoothly returning it to the road).
As a result, the nature and extent of damage to the car, as well as the injuries to people, are reduced, and the likelihood of a rollover is minimized [15]. Australian scientists [16] studied the shock response of a portable water-filled barrier (PWFB), which has the potential to absorb impact energy and, therefore, mitigate the effects of an accident at low and moderate speeds. Modern studies of the shock and energy absorption capacity of water-filled barriers are limited due to the complexity of the fluid-structure interaction under dynamic impact. In that work, a new method for the interaction of a fluid and a structure was developed, based on a combination of smoothed-particle hydrodynamics and the finite element method. The phenomenon of water splash inside the PWFB was investigated to study the ability of water to absorb energy under dynamic loading. It was found that water plays an important role in energy absorption. The analysis presented in that article provides a platform for further research to optimize the PWFB. The effect of the amount of water on its energy-capturing ability was studied, and the results found practical application in the design of the PWFB. To reduce the severity of the consequences of an accident, the authors of [17] propose using torsion energy-capturing elements in guardrails, whose principle is based on the dissipation of impact energy through plastic torsion of metal rods. The choice of torsion elements is based on the combination of their positive qualities. Torsion energy-capturing elements have a specific energy consumption that exceeds the corresponding indicators of known shock absorbers. They can be placed in narrow gaps, are very easy to manufacture and simple to operate. Their power characteristic is practically independent of the speed of loading and of environmental parameters. In addition, the torsion energy-capturing element has another positive quality: a partially or fully deformed element can be repeatedly returned to its original position, restoring its energy-capturing ability.

Energy-capturing guardrail

The analysis makes it possible to formulate a number of requirements for modern guardrail designs. Guardrails should not only be safe for road users but should also preserve their elements after being hit. Guardrails can be considered safe if, when a vehicle contacts them [7]:
- guardrail elements do not penetrate the cabin;
- the car does not overturn, is not seriously damaged, and does not spin around after the collision;
- the overload per person and the deformation of the car cabin do not lead to serious injuries.
In addition, the design of the developed guardrails should be quite simple and capable of being mounted on existing road barriers or of replacing them completely. Moreover, the installation process should be extremely simple and very fast. It should also be borne in mind that this system is sacrificial, i.e. destroyed in a collision, so its cost must be kept to a minimum at the design stage. All of the above circumstances allow us to conclude that there is a need to develop an energy-capturing guardrail design which, if not completely eliminating undesirable consequences for road accident participants, at least reduces them to the lowest possible level. Let us consider the basic requirements for the materials used to manufacture such guardrails:
- high energy-capturing ability, i.e.
the material should have sufficient porosity or low density while being easily compacted on impact;
- suitability for various climatic conditions, i.e. the characteristics of such materials should vary only slightly with ambient temperature, humidity and sunlight intensity;
- the guardrail must be assembled from separate damping elements united by flexible connections within a section, allowing limited movement of the guardrail elements relative to their initial position under the hammering action of a car;
- low cost of these materials.
Given the above requirements, it can be assumed that no single material will satisfy all of them at once, so we suggest using a so-called sandwich construction, i.e. the construction should consist of the following materials (Fig. 1):
- a structural or "skeleton" material, which acts as a supporting structure and ensures the gradual, layer-by-layer destruction of the guardrail;
- an energy-capturing material, which absorbs the impact energy through compaction or destruction of the base layer;
- an external protective material, which should have increased abrasion resistance and anti-corrosion properties and perform the function of a protective element.

Fig. 1. 1 - external (metal) layer, 2 - frame (plastic), 3 - energy-capturing material.

To give the design the required characteristics, it is necessary to determine the most promising materials that can be used in structures of this type. Let us consider them in the sequence in which they are shown in Fig. 1. The external protective layer must have the properties described above; therefore, it is proposed to use specially formed sheet metal with a galvanized surface or a special sprayed plastic coating. The profile of the sheet metal, an undulating surface, can be obtained by conventional bending on a bending machine from ordinary galvanized sheet metal. The thickness of such a sheet should allow easy bending when forming the profile while retaining rigidity and strength sufficient to withstand external influences unrelated to the suppression of collision energy. It should be borne in mind that the sheet thickness should not exceed the thickness of the external body panels of the car, i.e. when a car collides with the guardrail, the guardrail's outer layer must deform first, and only then the body panels of the car. Based on these assumptions and on the Dassault Systèmes material databases, the most suitable option is galvanized sheet metal 0.25 mm thick. Next, it is necessary to determine the material that can be used to create the power structure, the so-called skeleton of the energy-capturing guardrail. For these purposes, it is most rational to use one of the types of plastics. So-called PVC plastic, being one of the most economical options, deserves prime attention. This material forms well, and many enterprises are engaged in its industrial processing; in particular, water-filled barriers are made of this material. The production base is well established; only molds will be required. The general view of a single cell, made on the honeycomb principle, is presented in Fig. 2.

Fig. 2. The structure of the "skeleton" part.
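To make the damping argument concrete, the following back-of-the-envelope sketch (ours, with assumed masses, speeds and crush distances, not values from the paper) compares the mean occupant overload for different effective deformation paths, using the elementary relations E = mv²/2 and a = v²/(2d) for idealized constant deceleration.

```python
# Illustrative estimate (not from the paper): how the effective crush
# distance changes occupant overload for the same impact energy.
G = 9.81  # m/s^2

def impact_energy_kj(mass_kg: float, speed_kmh: float) -> float:
    """Kinetic energy E = m*v^2/2 that the barrier must dissipate, in kJ."""
    v = speed_kmh / 3.6
    return 0.5 * mass_kg * v**2 / 1e3

def mean_overload_g(speed_kmh: float, crush_distance_m: float) -> float:
    """Mean deceleration a = v^2/(2*d), in units of g, assuming the impact
    energy is dissipated uniformly over the crush distance d."""
    v = speed_kmh / 3.6
    return v**2 / (2 * crush_distance_m) / G

mass, speed = 1500.0, 90.0  # assumed car mass [kg] and speed [km/h]
print(f"impact energy: {impact_energy_kj(mass, speed):.0f} kJ")
for label, d in [("rigid beam, body crush only", 0.4),
                 ("multi-stage energy-capturing cushion", 1.2)]:
    print(f"{label:38s} d = {d:.1f} m -> {mean_overload_g(speed, d):5.1f} g")
```

Tripling the effective deformation path cuts the mean overload by the same factor, which is precisely what the staged, layer-by-layer destruction of the sandwich construction aims to provide.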
The energy-capturing material should have low density and low hygroscopicity, and should not have an elastic component of deformation, so it is most rational to use fibrous materials, for example one of the types of Thermoisol. The general view of the structure of the proposed guardrail is presented in Fig. 3 [7].

Conclusion

Improving road safety solely by introducing administrative restrictions and imposing penalties for traffic violations is almost impossible. Consequently, an integrated approach is required, involving representatives of the legislature, specialists engaged in practical work in the road industry, and research and development organizations. Preventing RTAs, as well as reducing the severity of their consequences, is possible through the installation of guardrails of various purposes and designs. The analysis showed that a wide range of guardrail designs currently exists whose use makes it possible to keep the vehicle on the road, but these designs do not always account for the severity of the interaction between the car and the guardrail. The energy-capturing guardrail design considered in this work reduces the likelihood of severe injuries to road users in the case of an uncontrolled vehicle exit from the roadway. The damping properties of this guardrail provide smooth dissipation of impact energy when a car hits it, thereby reducing the loads experienced by road accident participants. Unlike most analogues, the energy-capturing guardrail design limits the displacement of its elements relative to their original position during shock contact with the vehicle.
Learning to Measure: Adaptive Informationally Complete Generalized Measurements for Quantum Algorithms

Many prominent quantum computing algorithms with applications in fields such as chemistry and materials science require a large number of measurements, which represents an important roadblock for future real-world use cases. We introduce a novel approach to tackle this problem through an adaptive measurement scheme. We present an algorithm that optimizes informationally complete positive operator-valued measurements (POVMs) on the fly in order to minimize the statistical fluctuations in the estimation of relevant cost functions. We show its advantage by improving the efficiency of the variational quantum eigensolver in calculating ground-state energies of molecular Hamiltonians with extensive numerical simulations. Our results indicate that the proposed method is competitive with state-of-the-art measurement-reduction approaches in terms of efficiency. In addition, the informational completeness of the approach offers a crucial advantage, as the measurement data can be reused to infer other quantities of interest. We demonstrate the feasibility of this prospect by reusing ground-state energy-estimation data to perform high-fidelity reduced state tomography.

I. INTRODUCTION

Quantum computing is a rapidly growing multidisciplinary field with a very clear objective: to understand if, and to what extent, it is possible to build computing machines able to perform tasks that are impossible for conventional (classical) computers. Theoretically, milestone discoveries such as Shor's and Grover's quantum algorithms hint toward a positive answer to this question. These algorithms, which exploit quantum properties of the processor, can in principle outperform all currently existing classical methods. In practice, however, the implementation of such protocols in the regimes of interest will most probably require the use of ideal fault-tolerant universal quantum computers. At the same time, because of the extreme fragility of quantum information storage and processing in the presence of environmental noise, error-correction techniques required to achieve fault tolerance are still experimentally in their infancy.
Universal fault-tolerant quantum computers, however, are not the only type of quantum machines able to tackle computationally hard problems. In fact, we can reformulate the main quantum computing research question and ask ourselves: what are the useful problems that quantum computers can solve more efficiently than their classical counterparts and, specifically, which subclasses of such problems are less demanding in terms of experimental requirements, given the current state-of-the-art quantum hardware? Note that this question has a different starting point, namely it focuses on our current, or near-future, technologies and devices, and aims at identifying, based on the current understanding, useful applications that may benefit from them. There are at least two classes of problems that satisfy the above requirements. The first class has a long-standing history, dating back to Feynman (1982) [1] and Manin (1980) [2], who pointed out that the simulation of quantum systems is hard on classical computers, while, under certain conditions, they can be efficiently investigated by means of other quantum systems [3]. In fact, this can be done using either digital quantum simulators, namely specific-purpose quantum computers [4-6], or analog quantum simulators [7-10], namely other equivalent but easier-to-control quantum systems. The second class of problems emerges when we lift the requirement of finding "exact" solutions to a given problem. Approximate near-term quantum devices might be able, e.g., to find better solutions to certain worst-case instances of non-deterministic polynomial-time hard (NP-hard) problems, or to find such approximate solutions faster. A final ingredient to move toward the existing approximate noisy quantum devices [11,12] is the combination of quantum and classical techniques to maximize performance. In this paper, we focus on variational quantum algorithms, which have emerged recently as the paradigm best suited to tackling the classes of problems identified above [13,14] with approximate quantum computing. Specifically, these protocols are implemented by preparing a parametrized N-qubit trial state on a quantum device, extracting some observable quantities with suitable measurements, and processing such measurement outcomes using a classical optimizer. The latter then returns the small changes that need to be implemented to prepare, in the next step, an updated trial wave function. This cycle is repeated many times until it converges to a quantum state from which the desired approximate solution can be extracted. This procedure can be used to solve problems in chemistry [15-18], for the design of new materials [19], and generally in every field of physics where one needs to extract the properties of many-body quantum correlated systems, e.g., interacting fermionic systems, which are typically hard to simulate on classical devices [20,21]. In this case, these algorithms go by the name of Variational Quantum Eigensolvers (VQE) [15,22,23]. In essence, the quantum processor is used to explore the exponentially large Hilbert space of the fermionic particles in order to find iteratively the ground state of the Hamiltonian, without solving the full diagonalisation problem. As an example, the knowledge of the ground state of a chemical compound as a function, e.g., of the nuclear coordinates allows one to extract crucial information such as the equilibrium bond length, bond angle, and dissociation energy.
Note that, at least in principle, a quantum computer with a few hundred qubits could already have the potential to solve useful quantum chemistry problems that are intractable on classical computers. The application of VQE has already been demonstrated in many proof-of-principle experiments [15,22,24-26]. However, a few major challenges still need to be overcome along the path to useful quantum advantage. On one hand, the classical optimization step associated with variational quantum algorithms can in general incur high computational costs because of the existence of many local minima or due to the problem of vanishing gradients [27]. Some possible solutions have been proposed, combining techniques borrowed from classical optimization theory with a careful design of the variational ansatz, such as the recently proposed ADAPT-VQE [28] and oo-VQE [29], and of the associated cost function [30]. On the other hand, the so-called measurement problem arises from the very high cost in terms of the number of observations that are typically needed to reconstruct the properties of interest, and specifically the expectation value of the Hamiltonian, on the quantum states constructed by variational means. In fact, as the size of the problem approaches the regime in which the VQE could compete with classical methods, the current approaches would lead to prohibitive requirements to reach the desired degree of accuracy [21,31-33]. In this work, we tackle the second problem by presenting a novel adaptive method that considerably alleviates the demands on the number of measurements, thus paving the way for an increase of the affordable problem sizes in experimental realisations. On a fundamental level, our approach introduces a new perspective on how to improve the overall observable-reconstruction strategy in VQE, and possibly in variational algorithms in general, by leveraging informationally complete quantum measurements. Before introducing our protocol, however, we describe the measurement problem in more detail in the next section and briefly mention the main approaches that have been proposed in the literature to tackle it.

II. THE MEASUREMENT PROBLEM

One of the most prominent differences between classical and quantum methods concerns the way in which information is extracted at the end of the execution of the algorithm. In a typical situation, the quantum circuit prepares an N-qubit quantum state $|\psi\rangle$ that is used to compute the expectation value of an operator, $\langle O \rangle = \langle \psi | O | \psi \rangle$. Generally, it is not possible to measure O directly in its eigenbasis. For instance, if we are interested in finding the ground state of the Hamiltonian H, measuring in its eigenbasis requires solving the problem itself in advance. The standard measurement protocol, henceforth named the Pauli method, consists in writing the operator as a linear combination of K Pauli strings, $O = \sum_k c_k P_k$, where the $P_k$ are tensor products of single-qubit Pauli operators. The expectation value of the operator is therefore obtained in terms of the weighted sum of K expectation values, $\langle O \rangle = \sum_k c_k \langle P_k \rangle$. Unfortunately, this method leads to a suboptimal measurement scheme, as the variance of O is the sum of the weighted variances of the individual operators $P_k$. More precisely, the error $\epsilon$ in the estimation is given by

$\epsilon^2 = \sum_k c_k^2 \, \mathrm{Var}(P_k) / S_k,$

where $\mathrm{Var}(P_k) = \langle P_k^2 \rangle - \langle P_k \rangle^2$ is the variance of $P_k$ and $S_k$ is the number of measurements, i.e., wave-function collapses, used to estimate term k [32]. Interestingly, under such a measurement scheme, even exactly prepared ground states do not enjoy the zero-variance property, so that statistical energy fluctuations always remain finite and large. This constitutes a major source of problems for variational state preparation, where circuit parameters are optimized to minimize the expectation value of the energy.
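To make the Pauli method and the error formula above concrete, here is a minimal numerical sketch of ours (not from the paper): it estimates $\langle O \rangle$ for a made-up two-qubit operator by projectively sampling each Pauli term separately and accumulating the weighted variances.

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])

# Toy 2-qubit operator O = sum_k c_k P_k (coefficients are made up).
terms = [(0.5, np.kron(Z, Z)), (-0.3, np.kron(X, I2)), (0.2, np.kron(I2, X))]

# Some fixed 2-qubit state |psi>.
psi = np.array([0.6, 0.0, 0.0, 0.8])

def sample_pauli(P, psi, shots, rng):
    """Projectively measure Pauli string P on |psi>, `shots` times."""
    vals, vecs = np.linalg.eigh(P)                 # eigenvalues are +/-1
    probs = np.abs(vecs.conj().T @ psi) ** 2       # Born rule
    outcomes = rng.choice(vals, size=shots, p=probs / probs.sum())
    return outcomes.mean(), outcomes.var(ddof=1) / shots

S_k = 10_000                                       # shots per term
est, var = 0.0, 0.0
for c, P in terms:
    m, v = sample_pauli(P, psi, S_k, rng)
    est += c * m                                   # <O> = sum_k c_k <P_k>
    var += c**2 * v                                # eps^2 = sum c_k^2 Var/S_k
exact = sum(c * (psi @ P @ psi) for c, P in terms)
print(f"estimate {est:.4f} +/- {np.sqrt(var):.4f}  (exact {exact:.4f})")
```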
Given its significance, several efforts have been put forward to mitigate this problem. One simple strategy, henceforth named the grouped Pauli method, aims at identifying all the Pauli strings that can be measured simultaneously from the same data set [15]. While this does not solve the issue, it reduces the computational overhead of the procedure. Promising approaches also involve the usage of a classical machine-learning engine to perform an approximate reconstruction of the quantum state [34] using only the basis states defined by the $P_k$ [35], or classical shadows of a quantum state [36-38]. Other approaches based on grouping of commuting terms, effective measurement scheduling and optimized qubit tomography have been described in Refs. [39-48]. In the context of quantum state tomography with generalized quantum measurements, neural-network-assisted adaptive methods have also been proposed [49]. In the following, we show how a related idea can be applied to fully general observable-reconstruction tasks and gradient-based measurement learning with effective sampling costs. It is also worth recalling that, in a more general scenario in which fault-tolerant architectures are available, optimal strategies for obtaining expectation values with Heisenberg-limited precision are known, based on quantum phase estimation [50]. Intermediate solutions between the standard and the quantum-phase-estimation-like sampling regimes are also possible, leveraging trade-offs between sample complexity and quantum coherence [51,52].

In this work, we present an algorithm for efficient observable estimation that exploits generalized quantum measurements, integrating three important components: a hybrid quantum-classical Monte Carlo, a method to navigate generalized-measurement space toward efficient measurements, and a recipe to combine different estimations of the observable of interest. The result is a procedure in which the optimal measurement of an operator average is learnt in an adaptive fashion with no measurement overhead.

III. ADAPTIVE MEASUREMENT SCHEME

In this section, we explain our adaptive measurement scheme. In a nutshell, the idea is to use parametric informationally complete positive operator-valued measures (IC POVMs), which can in principle be used to estimate any expectation value of our choice. We first introduce a hybrid Monte Carlo approach, which bypasses the need for tomographic reconstructions of quantum states from the IC data. We then describe how, by using parametric families of POVMs, the measurement settings can be optimized to yield low statistical errors in the estimation of the target expectation values. With respect to the second point, special attention must be devoted to achieving the desired POVM optimization without incurring additional overheads in terms of, e.g., the number of repetitions (also named shots in the following) of the state-preparation-and-measurement routine. As we explain in the following, an adaptive method, that is, an on-the-fly optimization, serves this purpose. In brief, the key is to use the IC data obtained with one given POVM twice.
First, we use them to produce an estimation of the mean of the observable. Second, the same set of results can also be employed to find a better POVM for the next experiment. The collection of intermediate estimators of the target observable, each constructed along the process with a different POVM, is finally integrated together so as to minimize the overall statistical uncertainty. As a result of this strategy, the measurement-learning procedure improves over the initial POVM (which turns out to be already quite efficient, as shown in Sec. IV) with no additional measurement costs. The scheme is illustrated and summarized in Fig. 1.

Fig. 1. The ansatz prepares a state $|\psi(\vec{\theta})\rangle$ (green box) for which the mean of some observable O must be evaluated. Our algorithm is an efficient measurement subroutine in this process. It relies on parametric informationally complete POVMs (purple box) implemented with ancillary qubits (red box); these are explained in detail in App. A. Initially, we perform $S_1$ measurements using the POVM corresponding to parameters $\mathbf{x}_1$ and obtain $S_1$ outcomes $\mathbf{m}_1, \ldots, \mathbf{m}_{S_1}$. The measurement data are post-processed efficiently on a classical device (blue box) twice, with two different goals. First, we estimate the mean of the observable, $\bar{O}_1$, and the corresponding error of the estimation, $\hat{V}_1$, as explained in Sec. III A. Second, we calculate the gradient of the estimation variance, $\nabla_{\mathbf{x}} \mathrm{Var}(\omega_{\mathbf{m}})$, in POVM parameter space, and thus find a better POVM for iteration 2 (see Sec. III B and App. B). At every step t, the variables $\bar{O}$ and $\hat{V}$ integrate all the estimations for $t' \le t$ while minimising the overall statistical error (see Sec. III C and App. D). The process is repeated iteratively until $\hat{V}$ is below some desired threshold.

It is important to stress that the method does not require any approximations whatsoever. In fact, it is completely agnostic to the nature of the operator O to be measured, as long as it is given in terms of a linear combination of products of single-qubit observables (e.g., Pauli strings). While the algorithm is rather general, its performance is strongly dependent on the weight of such products (the number of non-identity single-qubit operators in every term), as we explain later, which makes quantum chemistry with low-weight fermion-to-qubit mappings, such as Bravyi-Kitaev [53] and the one recently introduced in Ref. [54], an ideal use case. Moreover, it should be mentioned that the methodology relies on the use of one ancillary qubit for every system qubit. However, the ancillary qubits remain in the ground state until the measurement stage, and the procedure only requires an increase in the circuit depth that is independent of the system size (i.e., a single layer of two-qubit gates that can all be executed in parallel). Yet, the efficiency of the method when applied to quantum chemistry problems is comparable to that of state-of-the-art methods that require an additional circuit depth linear in the number of qubits [41] and, at the same time, it provides informationally complete (IC) data useful for purposes beyond energy estimation.

To ease the explanation of the algorithm, we present its three main components separately. We first introduce the hybrid quantum-classical Monte Carlo sampling for the estimation of expectation values of operators in Sec. III A. We then show in Sec. III B how to estimate the gradient in the space of POVMs without additional measurements, using only efficient classical post-processing. Lastly, in Sec. III C, we illustrate how to integrate all the data obtained from different POVMs to estimate mean values while minimising statistical fluctuations.
A. Hybrid quantum-classical Monte Carlo sampling

Our proposed algorithm relies on single-qubit (minimal) IC POVMs, which can be realised by applying a two-qubit gate between a system qubit and an ancillary one, the latter in a known state, and then measuring both qubits in the computational basis. In practice, this means that the ancillary qubits are initialised along with all the other qubits in the device (e.g., prepared in the ground state $|0\rangle$) and no operations are applied to them until the measurement stage. The implementation of these POVMs on current quantum computers has recently been demonstrated experimentally on IBM Quantum devices [55,56]. By definition, one such POVM is represented by four linearly independent positive operators $\{\Pi_i > 0,\ i = 0, \ldots, 3\}$ adding up to identity, $\sum_i \Pi_i = I$, and spanning the space of linear operators on the Hilbert space $\mathcal{H}$ of the system qubit. Each of these operators, usually called effects, is associated with one of the four possible outcomes of the two-qubit measurement, with $\mathrm{Tr}[\rho \Pi_i]$ being the probability of outcome i on the quantum state $\rho$ of the target qubit. It is important to note that different qubit-ancilla unitaries generally lead to different POVMs. Hence, by parametrising these unitaries, we can parametrize the corresponding family of POVMs (see App. A).

Let us consider the N-qubit case, with local and not necessarily identical POVMs associated with each qubit. The four effects associated with qubit i are denoted by $\Pi^{(i)}_m$, with m running from 0 to 3. The outcome of an experiment in which all qubits are measured via these local POVMs is a string $\mathbf{m} = (m_1, \ldots, m_N)$, where $m_i \in \{0, \ldots, 3\}$. The probability of such an outcome given an N-qubit state $\rho$ is $p_{\mathbf{m}} = \mathrm{Tr}[\rho\, \Pi_{\mathbf{m}}]$, with $\Pi_{\mathbf{m}} = \Pi^{(1)}_{m_1} \otimes \cdots \otimes \Pi^{(N)}_{m_N}$.

As explained in previous sections, in VQE realisations one typically needs to measure an operator O that can be decomposed in terms of K Pauli strings, $O = \sum_k c_k P_k$ (we assume $c_k \in \mathbb{R}$, as is customary, although our results can be easily generalized to complex-valued coefficients). Given that each of the local POVMs is IC, we can express the Pauli operator acting on each qubit i in string k in terms of the effects, $\sigma^{(i)}_{k_i} = \sum_m b^{(i)}_{k_i m} \Pi^{(i)}_m$, so that

$O = \sum_{\mathbf{m}} \omega_{\mathbf{m}} \Pi_{\mathbf{m}}, \qquad \omega_{\mathbf{m}} = \sum_k c_k \prod_i b^{(i)}_{k_i m_i}.$

The above expression seems useless at first sight: we transform a representation of O in terms of K terms $c_k P_k$ into one with possibly $4^N$ terms $\omega_{\mathbf{m}} \Pi_{\mathbf{m}}$. However, the expectation value of the operator now reads

$\langle O \rangle = \sum_{\mathbf{m}} \omega_{\mathbf{m}}\, p_{\mathbf{m}},$

where $p_{\mathbf{m}}$ is the probability of obtaining outcome $\mathbf{m}$. In other words, the mean value of the operator is the average of $\omega_{\mathbf{m}}$ over the probability distribution $\{p_{\mathbf{m}}\}$, $\langle O \rangle = \langle \omega_{\mathbf{m}} \rangle_{\{p_{\mathbf{m}}\}}$. This observation enables a very different strategy for estimating $\langle O \rangle$ as compared to the standard Pauli method introduced in Sec. II. Instead of evaluating each of the $p_{\mathbf{m}} = \langle \Pi_{\mathbf{m}} \rangle$ via repeated sampling and, once all these mean values are known with high enough precision, calculating $\langle O \rangle = \sum_{\mathbf{m}} \omega_{\mathbf{m}} \langle \Pi_{\mathbf{m}} \rangle$, which would be infeasible given the aforementioned exponential number of terms, we can resort to a Monte Carlo approach.
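As an illustration of these definitions, the following sketch (ours; it assumes the standard tetrahedral SIC POVM as a concrete IC POVM, whereas the paper's parametrised family is described in its App. A) builds the four single-qubit effects and solves for the coefficients $b_m$ that expand each Pauli operator in them.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
PAULIS = {"I": I2,
          "X": np.array([[0, 1], [1, 0]], dtype=complex),
          "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
          "Z": np.diag([1.0 + 0j, -1.0])}

# Tetrahedral SIC POVM: Pi_m = (I + s_m . sigma) / 4, an assumed standard
# choice of initial IC POVM (the paper's parametric family is in its App. A).
TETRA = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
effects = [(I2 + sum(s[a] * PAULIS[p] for a, p in enumerate("XYZ"))) / 4
           for s in TETRA]
assert np.allclose(sum(effects), I2)               # POVM completeness

def expand_in_effects(op, effects):
    """Coefficients b_m such that op = sum_m b_m * Pi_m (possible since IC)."""
    A = np.column_stack([E.reshape(-1) for E in effects])  # 4x4, full rank
    return np.linalg.solve(A, op.reshape(-1)).real

for name, op in PAULIS.items():
    print(name, np.round(expand_in_effects(op, effects), 3))
```

Note that the identity expands with all four coefficients equal to one, the property used below to bound the weights $\omega_{\mathbf{m}}$.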
In Monte Carlo integration, one can evaluate integrals over high-dimensional domains efficiently by randomly sampling points within the domain and averaging their image through a suitable function. Similarly, in our case, we can exploit the fact that $p_{\mathbf{m}}$ is the probability of the measurement yielding outcome $\mathbf{m}$ to calculate $\langle O \rangle$ in a similar manner, that is, using the quantum computer to sample values of $\mathbf{m}$ and a classical one to calculate the corresponding $\omega_{\mathbf{m}}$, hence bypassing the need to evaluate the mean values $\langle \Pi_{\mathbf{m}} \rangle$. More precisely, the strategy is to repeat the measurement S times using the local POVMs to sample from the probability distribution $\{p_{\mathbf{m}}\}$, resulting in a sequence of outcomes $\mathbf{m}_1, \ldots, \mathbf{m}_S$, and compute

$\bar{O} = \frac{1}{S} \sum_{s=1}^{S} \omega_{\mathbf{m}_s}, \qquad \omega_{\mathbf{m}_s} = \sum_k c_k \prod_i b^{(i)}_{k_i m_{s,i}},$   (4)

where each $\omega_{\mathbf{m}_s}$ can be calculated in polynomial time on a classical computer. This estimator converges to $\langle O \rangle = \sum_{\mathbf{m}} \omega_{\mathbf{m}} p_{\mathbf{m}}$ as $\mathrm{Var}(\omega_{\mathbf{m}})/S$, where $\mathrm{Var}(\omega_{\mathbf{m}})$ is the variance of $\omega_{\mathbf{m}}$ over the probability distribution $\{p_{\mathbf{m}}\}$, hence possibly providing accurate estimations even when the sum in Eq. (4) only involves a number of terms $S \ll 4^N$. Crucially, this method estimates the weighted average of all the Pauli strings $P_k$ simultaneously, regardless of whether they commute or not, by exploiting IC data, yet circumventing any costly tomographic reconstruction of quantum states. In addition, in this Monte Carlo approach, the variance naturally takes into account the covariance between all these parallel measurements. In other words, the quantity $(\langle \omega_{\mathbf{m}}^2 \rangle_{\{p_{\mathbf{m}}\}} - \langle \omega_{\mathbf{m}} \rangle_{\{p_{\mathbf{m}}\}}^2)/S$, which can be estimated efficiently from the data, accounts for the total statistical error. As we explain next, our strategy is to iteratively search for POVMs that minimize this error.

Importantly, the previous result holds for any operator O, that is, the same sequence of outcomes $\mathbf{m}_1, \ldots, \mathbf{m}_S$ can be used to estimate, using only classical post-processing, any expectation value. However, not all expectation values can be estimated with the same precision. In particular, note that the products $\prod_i b^{(i)}_{k_i m_i}$ can in principle result in variances scaling exponentially in N. This is so because, generally, each coefficient $b^{(i)}_{k_i m_i}$ can have an absolute value different from, and also larger than, one. Hence, in worst-case scenarios, the absolute value of such products, and therefore of the $\omega_{\mathbf{m}}$ defined in terms of linear combinations of them, can scale unfavorably with the system size (see App. C for a concrete example). This limitation can be overcome for fermionic problems by using fermion-to-qubit mappings such as the Bravyi-Kitaev (BK) [53] and especially the one recently proposed by Jiang et al. in Ref. [54] (to which we refer as the JKMN mapping), which lead to Pauli strings with logarithmic weight (that is, such that fermionic creation/annihilation operators are mapped onto Pauli strings with at most a logarithmic number of non-identity Pauli operators). Since the terms $b^{(i)}_{0 m_i}$, corresponding to the decomposition of the identity, are always equal to one (recall that $\sum_m \Pi^{(i)}_m = I^{(i)}$), these mappings guarantee that the products contain at most a logarithmic number of factors different from one, and hence that the $\omega_{\mathbf{m}}$ remain polynomially bounded. In any case, it should be clarified that using other mappings does not necessarily imply an unfavorable scaling of the algorithm, as there may nevertheless exist POVM parameters for which the method is efficient. In fact, as we show in Sec. IV, the adaptive strategy that we present in what follows finds POVMs for which the algorithm outperforms the Pauli and grouped Pauli methods in evaluating the ground-state energy of molecular Hamiltonians using the parity [57] and Jordan-Wigner (JW) mappings as well.
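A self-contained toy version of the estimator of Eq. (4) for two qubits follows (ours; tetrahedral effects assumed, outcomes sampled exactly from the Born probabilities rather than from hardware).

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.diag([1.0 + 0j, -1.0])
PAULI = {"I": I2, "X": SX, "Z": SZ}

# Assumed tetrahedral SIC effects Pi_m = (I + s_m . sigma)/4 on every qubit.
TETRA = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / 3**0.5
effects = [(I2 + s[0]*SX + s[1]*SY + s[2]*SZ) / 4 for s in TETRA]
A = np.column_stack([E.reshape(-1) for E in effects])
b = {k: np.linalg.solve(A, op.reshape(-1)).real for k, op in PAULI.items()}

# Toy observable O = 0.5*ZZ - 0.3*XI + 0.2*IX and a fixed 2-qubit state.
terms = [(0.5, "ZZ"), (-0.3, "XI"), (0.2, "IX")]
psi = np.array([0.6, 0, 0, 0.8], dtype=complex)
rho = np.outer(psi, psi.conj())

# Born probabilities p_m = Tr[rho (Pi_m1 (x) Pi_m2)] over all 16 outcomes.
outcomes = list(product(range(4), repeat=2))
p = np.array([np.trace(rho @ np.kron(effects[m1], effects[m2])).real
              for m1, m2 in outcomes])

def omega(m):
    """omega_m = sum_k c_k prod_i b^{(i)}[k_i, m_i]  (the Eq. 4 weights)."""
    return sum(c * b[ps[0]][m[0]] * b[ps[1]][m[1]] for c, ps in terms)

S = 20_000
idx = rng.choice(len(outcomes), size=S, p=p / p.sum())
w = np.array([omega(outcomes[i]) for i in idx])
exact = sum(c * np.trace(rho @ np.kron(PAULI[ps[0]], PAULI[ps[1]])).real
            for c, ps in terms)
print(f"estimate {w.mean():.4f} +/- {w.std(ddof=1)/S**0.5:.4f} (exact {exact:.4f})")
```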
Regarding the method proposed in Ref. [54], it should be mentioned that our Monte Carlo approach, given in Eq. (4), offers some advantages over the latter. On the one hand, it bypasses the classical overhead needed for tomographic reconstructions. On the other hand, and more importantly, our approach does not disregard the covariance-induced statistical errors in the estimation of the average resulting from parallel measurements. These points are discussed in more detail in App. C.

B. Classical gradient estimation for POVM optimization

Modification of the POVM results in a different probability distribution $\{p_{\mathbf{m}}\}$, as well as different weights $\omega_{\mathbf{m}}$, and hence a potentially different $\mathrm{Var}(\omega_{\mathbf{m}})$. This can be exploited to devise an adaptive algorithm in which the measurement of O is optimized over the space of POVMs, that is, by finding one that minimizes the variance $\mathrm{Var}(\omega_{\mathbf{m}})$. We now propose a classical post-processing routine to navigate the space of POVMs toward low-variance ones. Essentially, besides using the outcomes obtained with the current POVM to construct an estimation of the target observable, the same set of data is also employed in a classical routine to assess the variance of other POVMs that have not previously been implemented on the quantum processor. This procedure is explained in detail in the following.

Suppose that we want to evaluate the Monte Carlo variance $\mathrm{Var}(\omega_{\mathbf{r}})$ for a new POVM defined in terms of local POVMs with effects $\{\Gamma^{(i)}_r\}$,

$\mathrm{Var}(\omega_{\mathbf{r}}) = \langle \omega_{\mathbf{r}}^2 \rangle_{\{q_{\mathbf{r}}\}} - \langle O \rangle^2,$   (5)

where the $\omega_{\mathbf{r}}$ are given by the $b_{kr}$ matrices corresponding to these local POVMs, $q_{\mathbf{r}} = \mathrm{Tr}[\rho\, \Gamma_{\mathbf{r}}]$, and $\Gamma_{\mathbf{r}} = \Gamma^{(1)}_{r_1} \otimes \cdots \otimes \Gamma^{(N)}_{r_N}$. The second term in Eq. (5) is the squared mean $\langle O \rangle^2$, which does not depend on the POVM. The first term, i.e. the second moment of $\omega_{\mathbf{r}}$ over the probability distribution $\{q_{\mathbf{r}}\}$, is the one that we must minimize. Suppose further that we have already run some experiments on the quantum computer with another IC POVM given by the effects $\{\Pi^{(i)}_m\}$. Since the latter is informationally complete, the new effects can be decomposed as $\Gamma^{(i)}_r = \sum_m d^{(i)}_{rm} \Pi^{(i)}_m$, where the $d^{(i)}_{rm}$ are real numbers. Inserting these decompositions into the expression for the second moment, we obtain

$\langle \omega_{\mathbf{r}}^2 \rangle_{\{q_{\mathbf{r}}\}} = \Big\langle \sum_{\mathbf{r}} \omega_{\mathbf{r}}^2 \prod_i d^{(i)}_{r_i m_i} \Big\rangle_{\{p_{\mathbf{m}}\}}.$   (6)

This last expression is also calculated in a hybrid Monte Carlo manner. More precisely, we can reuse the strings $\mathbf{m}_1, \ldots, \mathbf{m}_S$ obtained from the measurements on the quantum computer (sampled from the probability distribution $\{p_{\mathbf{m}}\}$) to estimate the variances of other POVMs by calculating, for each $\mathbf{m}_s$, the corresponding $\sum_{\mathbf{r}} \omega_{\mathbf{r}}^2 \prod_i d^{(i)}_{r_i m_{s,i}}$ classically. Note, however, that this last sum cannot always be computed efficiently, since it generally contains $4^N$ terms (both positive and negative) and involves products $\prod_i d^{(i)}_{r_i m_i}$ that can scale exponentially in N.

Fig. 2 (d-f). Final POVM effects in the gradient optimization process, when starting from SIC POVM 2, for a sample of 20 realisations from the data set of (a). Every POVM effect is mapped onto the three-dimensional unit-radius ball in a similar way as single-qubit states are mapped onto the Bloch ball. In particular, the point $\vec{r} = (r_x, r_y, r_z)$, $|\vec{r}| \le 1$, is associated with the effect $\Pi(\vec{r}) = (|\vec{r}|\, I + \vec{r} \cdot \vec{\sigma})/2$ (note the difference with the Bloch-ball representation of quantum states; see App. A). In the figure, the color indicates the qubit to which an effect corresponds, while the symbol identifies the effect itself among the possible four. The black symbols locate the initial effects, common to all realisations and qubits. Each panel presents the projection of the ball onto a different plane. The clustering of points with equal color and symbol reveals that all realisations reach approximately the same optimal measurement. However, the result of the optimization is different for every qubit. Moreover, starting with SIC POVM 1 instead leads to a very different measurement (see [58]).
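The single-qubit sketch below (ours, with the same assumed tetrahedral POVM as above) illustrates the sample-reuse idea behind Eq. (6): outcomes collected with effects $\Pi_m$ are reweighted by the decomposition coefficients $d_{rm}$ to estimate the second moment that a rotated candidate POVM $\Gamma_r$ would have yielded, with no new quantum measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.diag([1.0 + 0j, -1.0])
TETRA = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / 3**0.5

def sic_effects(U=I2):
    """Tetrahedral SIC effects, optionally conjugated by a unitary U."""
    return [U @ ((I2 + s[0]*SX + s[1]*SY + s[2]*SZ) / 4) @ U.conj().T
            for s in TETRA]

def weights(op, effs):
    """omega_r with op = sum_r omega_r * effs[r]."""
    A = np.column_stack([E.reshape(-1) for E in effs])
    return np.linalg.solve(A, op.reshape(-1)).real

rho = np.array([[0.8, 0.3], [0.3, 0.2]], dtype=complex)   # assumed test state
cur = sic_effects()                                        # implemented POVM
p = np.array([np.trace(rho @ E).real for E in cur])
samples = rng.choice(4, size=200_000, p=p / p.sum())

theta = 0.2                                                # candidate POVM:
U = np.cos(theta/2)*I2 - 1j*np.sin(theta/2)*SY             # small y-rotation
new = sic_effects(U)
w_new = weights(SZ, new)                                   # omega_r for O = Z

# d[r, m] with Gamma_r = sum_m d[r, m] * Pi_m (both POVMs are IC).
A = np.column_stack([E.reshape(-1) for E in cur])
d = np.array([np.linalg.solve(A, G.reshape(-1)).real for G in new])

# Reuse old samples: <omega_r^2>_q = < sum_r d[r, m] omega_r^2 >_p  (Eq. 6).
reused = (d[:, samples] * (w_new**2)[:, None]).sum(axis=0).mean()
q = np.array([np.trace(rho @ G).real for G in new])
print(f"reweighted estimate {reused:.4f}   exact {np.dot(q, w_new**2):.4f}")
```

In the full algorithm, finite differences of such reweighted second moments over the POVM parameters provide the gradient for the descent step described next.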
To ensure the feasibility of the procedure, we use a gradient-descent approach for the optimization of the POVMs; in such a case, each partial derivative perturbs a single parameter, so only one local POVM changes and only one factor in each product $\prod_i d^{(i)}_{r_i m_i}$ is non-trivial, making the sum computable efficiently. For concreteness, suppose that we use the effects $\{\Pi^{(i)}_m\}$ corresponding to the point $\mathbf{x}$ in the POVM parameter space (see App. A) on the quantum computer and obtain S samples with which we can estimate the second moment $\langle \omega_{\mathbf{m}}^2 \rangle$. We can approximate the partial derivative of the second moment with respect to one of the parameters (for instance, the k-th) as

$\partial_{x_k} \langle \omega_{\mathbf{m}}^2 \rangle \approx \big(\langle \omega_{\mathbf{r}}^2 \rangle - \langle \omega_{\mathbf{m}}^2 \rangle\big)/h,$   (7)

where $\langle \omega_{\mathbf{r}}^2 \rangle$ is the estimated second moment corresponding to the POVM whose coordinates $\mathbf{x}'$ in parameter space fulfill $x'_k = x_k + h$ and $x'_j = x_j$ for $j \ne k$ (we denote the corresponding effects by $\{\Gamma^{(i)}_r\}$). Using this method, all the partial derivatives can be calculated using classical post-processing, in polynomial time, of the same samples $\mathbf{m}_1, \ldots, \mathbf{m}_S$ obtained from the quantum computer. Once the gradient has been estimated, we can identify a new POVM with a smaller expected variance than the previous one. We detail the gradient-based optimization used in this work in App. B.

C. On-the-fly optimization

An important aspect of the algorithm is that we do not need to first optimize the POVM (until it reaches a small-enough variance) before starting to estimate the expected value of the observable. The intermediate POVMs used in the process are also IC, so they can be used for the estimation of $\langle O \rangle$ as well. The strategy is to use the intermediate mean values obtained with every fixed choice of the POVM to calculate a weighted average. As we show below, the latter is designed in a way that minimizes the resulting variance of the overall estimation. The whole procedure can be carried out iteratively as the algorithm progresses, thus effectively making use of all measurement results obtained during the intermediate POVM optimization steps for the reconstruction of $\langle O \rangle$. The above procedure can be recast as an iterative algorithm as follows:

1. Initialize two variables, $\bar{O}$ and $\hat{V}$, as $\bar{O} \leftarrow \bar{O}_1$ and $\hat{V} \leftarrow \hat{V}_1$.

2. At the end of each iteration $t \in (2, \ldots, T)$ of the POVM optimization, update them as

$\bar{O} \leftarrow \frac{\bar{O}\,\hat{V}_t + \bar{O}_t\,\hat{V}}{\hat{V} + \hat{V}_t}, \qquad \hat{V} \leftarrow \frac{\hat{V}\,\hat{V}_t}{\hat{V} + \hat{V}_t}.$

At any point along the process, we have an estimated mean $\bar{O}$ with estimated standard error $\hat{V}^{1/2}$ that minimizes the overall error given the input data and can easily be updated with new estimates. It is important to stress that this iterative mixing of the outcomes is unbiased, as we prove in App. D.
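A compact sketch of this bookkeeping (ours; the update rule is the standard minimum-variance combination of independent unbiased estimates, matching the recursion above) follows.

```python
import numpy as np

class RunningEstimate:
    """Inverse-variance combination of per-POVM estimates (Obar_t, Vhat_t).

    Each batch t contributes an unbiased mean with estimated variance; the
    running pair stays the minimum-variance unbiased combination of all
    batches seen so far."""
    def __init__(self, mean: float, var: float):
        self.mean, self.var = mean, var

    def update(self, mean_t: float, var_t: float) -> None:
        self.mean = (self.mean * var_t + mean_t * self.var) / (self.var + var_t)
        self.var = self.var * var_t / (self.var + var_t)

# Assumed per-iteration estimates: variance shrinks as the POVM improves.
rng = np.random.default_rng(3)
true_value = -1.137
vars_t = [0.04, 0.02, 0.01, 0.005]
means_t = [true_value + rng.normal(0, v**0.5) for v in vars_t]

est = RunningEstimate(means_t[0], vars_t[0])
for m, v in zip(means_t[1:], vars_t[1:]):
    est.update(m, v)
print(f"combined: {est.mean:.4f} +/- {est.var**0.5:.4f} (true {true_value})")
```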
IV. NUMERICAL SIMULATIONS

In this section, we present the results of the numerical experiments that were run to test the feasibility and performance of our algorithm. Section IV A illustrates the effect of the adaptive measurement. Section IV B presents a more in-depth analysis of the performance. Finally, in Sec. IV C we demonstrate an important feature of our approach: the IC data used for the estimation of the energy can be reused for other purposes. All the data used in this manuscript are available on Zenodo [59]. The source code used to generate the results is available online [60].

A. Energy measurement learning

We start by measuring the ground-state energy of the H2, LiH and H2O molecules. For the characterization of each system, we use different numbers of molecular orbitals. The basis set used for H2 is 6-31G [61-64]. The fermionic Hamiltonians are transformed into qubit Hamiltonians using the Bravyi-Kitaev (BK) [53] and JKMN [54], the Jordan-Wigner (JW), and the parity [57] mapping transformations. The latter has an intrinsic property, deriving from the conservation of the numbers of spin-up and spin-down electrons, that reduces the number of required qubits by two [57]. We also leverage different symmetries present in each system to further reduce the qubit count [57]. For LiH and H2O we also freeze the core orbitals, allowing us to exclude another two spin orbitals from our calculation (refer to the table in Fig. 6 for more details on the Hamiltonians and qubit reductions considered). Each of these molecular Hamiltonians is mapped into qubits using one or more of the aforementioned techniques, hence producing several qubit Hamiltonians with varied numbers of qubits, which are then used to simulate the energy measurement process in a VQE experiment near convergence.

Fig. 2 (a-c). Estimation error for the ground-state energy of H2 (a), LiH (b), and H2O (c) with different measurement methods, with a total of $S = 10^6$ shots (for the Pauli and grouped Pauli methods, we use the same number of shots, $\lfloor 10^6/K \rfloor$, on every Pauli string, so the total number of shots is in fact $S = K \lfloor 10^6/K \rfloor$; this represents a deficit of at most 0.1% in the total number of shots in the examples considered). The ground state is approximated by optimising a VQE ansatz. The estimation error is the absolute difference between the simulation results and the exact value for the optimized ansatz. The points represent the average error over 100 realisations and the error bars show a 95% confidence interval obtained using bootstrapping. For H2, our algorithm offers little improvement, but the difference in performance becomes clearer with the other two molecules. Note that the two initial POVMs yield slightly different results, with SIC POVM 1 generally outperforming SIC POVM 2. We also note that, in the cases involving more qubits, such as the 14-qubit H2O molecule with the BK mapping, the measurement optimization has not fully converged for $S = 10^6$ shots, so the difference with respect to the other methods is expected to increase for larger S, potentially reaching chemical accuracy earlier.

We proceed as follows. First, for each qubit Hamiltonian H, we numerically approximate the ground state with the hardware-efficient ansatz $|\psi(\vec{\theta})\rangle$ introduced in Ref. [15]. This generates a trial wave function by combining repetitive layers of single-qubit $R_y$ gates and entangling blocks composed of two-qubit operations [controlled-NOT (CNOT) gates]. The single-qubit rotations are parametrized with a set of angles (also known as variational parameters) that are iteratively updated, with the help of a classical optimization routine, in order to minimize the energy expectation value. Once we have the optimal parameters for which the variational form $|\psi(\vec{\theta}_{\mathrm{opt}})\rangle$ approximates the ground-state wave function, we calculate the corresponding exact expected energy $\langle E \rangle = \langle \psi(\vec{\theta}_{\mathrm{opt}})|H|\psi(\vec{\theta}_{\mathrm{opt}})\rangle$. We then simulate different energy-evaluation methods as a function of the number of state preparations (shots), $\bar{E}(S)$, and compute the corresponding errors $|\bar{E}(S) - \langle E \rangle|$. We also calculate the estimated statistical error for each approach, that is, the estimated error when the exact value $\langle E \rangle$ is not available (for the gradient-descent algorithm, this error is given by $\hat{V}^{1/2}$ as defined in Sec. III C). These quantities are depicted in Fig. 2 (a-c) for three selected examples.
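For concreteness, here is a minimal statevector sketch of ours, in plain numpy rather than any particular quantum SDK, of a hardware-efficient ansatz of the kind described above: alternating layers of single-qubit $R_y$ rotations and a ladder of CNOT gates.

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, q, n):
    """Apply a single-qubit gate to qubit q of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(np.tensordot(gate, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def apply_cnot(state, ctrl, targ, n):
    """Flip qubit targ within the ctrl = 1 subspace (a CNOT gate)."""
    psi = state.reshape([2] * n).copy()
    idx1 = [slice(None)] * n
    idx1[ctrl] = 1
    ax = targ - 1 if targ > ctrl else targ    # target axis after indexing ctrl
    psi[tuple(idx1)] = np.flip(psi[tuple(idx1)], axis=ax).copy()
    return psi.reshape(-1)

def hardware_efficient_ansatz(thetas, n, layers):
    """R_y layer + CNOT ladder, repeated; thetas has (layers+1)*n angles."""
    state = np.zeros(2**n)
    state[0] = 1.0
    thetas = np.asarray(thetas).reshape(layers + 1, n)
    for layer in range(layers):
        for q in range(n):
            state = apply_1q(state, ry(thetas[layer, q]), q, n)
        for q in range(n - 1):                # entangling block
            state = apply_cnot(state, q, q + 1, n)
    for q in range(n):                        # final rotation layer
        state = apply_1q(state, ry(thetas[layers, q]), q, n)
    return state

psi = hardware_efficient_ansatz(np.linspace(0, 1, 12), n=4, layers=2)
print(np.round(psi[:8], 3), "norm:", np.linalg.norm(psi).round(6))
```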
The effect of the measurement learning is that the error decreases faster than $S^{-1/2}$, especially for small S. This is a consequence of the fact that, after each batch of runs, the next POVM used in the sequence is in principle more efficient (i.e. leads to a smaller variance) than the previous one. Importantly, even if the starting efficiency is lower than that of other methods, our algorithm eventually takes over and reaches better accuracy at lower cost. Moreover, as we discuss in detail in the next subsection, even the use of Eq. (4) with the initial POVM, without optimization, tends to give better performance than the Pauli and the grouped Pauli methods as the size of the problem increases. The results also reveal that $\hat{V}^{1/2}$, as introduced in Sec. III C, gives the correct estimation of the statistical error in the evaluation of the energy [67].

Fig. 4. Number of shots $S^*$ required to achieve a target estimated error of $\epsilon_{\mathrm{tar}} = 0.5$ mHa for H chains as a function of the number of qubits N. The qubit Hamiltonian is obtained using the JKMN mapping. For each method and molecule, we use up to $S_{\mathrm{lim}} \approx 10^6$ runs, as in Fig. 3. If the average estimated error with $S_{\mathrm{lim}}$ shots, $\epsilon_{\mathrm{lim}} = \langle \hat{V}^{1/2} \rangle$ (where $\langle \cdot \rangle$ represents the average over realisations), is still larger than $\epsilon_{\mathrm{tar}}$, we estimate the required number of shots by assuming a scaling $\sim S^{-1/2}$, that is, we use $S^* = S_{\mathrm{lim}}\, \epsilon_{\mathrm{lim}}^2 / \epsilon_{\mathrm{tar}}^2$. While this procedure saves us considerable computing time, it also overestimates the number of measurements needed by our algorithm: indeed, the convergence of our method to $\epsilon_{\mathrm{tar}}$ is faster than $\sim S^{-1/2}$ unless it has already converged to the optimal POVM (see Fig. 2). Thus, these results are to be regarded as an upper bound on the total measurement cost of the learning POVM method. The curves depict least-squares fits to the data with functions of the form $S^* = a N^b$. The corresponding values of the exponent b for each method are reported in Table I. Note that the values found for the Pauli and grouped Pauli methods are consistent with the ones reported in Ref. [41]. Moreover, the performance of our algorithm is similar to that of the state-of-the-art method proposed in Ref. [41], especially for the lower values of N, for which the overestimation of $S^*$ is less significant.

The learning process is also illustrated in Fig. 2 (d-f), where we depict graphically the result of the optimization in terms of a geometric representation of the effects akin to the Bloch sphere for single-qubit states (see App. A for details). We only include the results for one example in the paper, but the results for all the Hamiltonians analysed in this work, as well as their animated versions, are available online [58]. Interestingly, while the optimization eventually converges, and different realisations with the same initial condition lead to the same minimum (modulo small fluctuations), the two initial conditions considered here (see App. C) result in different optimal POVMs with slightly different performance. This suggests the potential existence of better initial conditions than those explored here. This subject will be considered in future work.

Table I. The results of Fig. 4, as well as their counterparts using other fermion-to-qubit mappings, are fitted to a function of the form $S^* = a N^b$; the table contains the corresponding exponents. The exponent $b \approx 6$ of the Pauli method, as well as the mild reduction to $b \approx 5.6$ offered by grouped Pauli, are consistent with the values reported in Ref. [68] for other molecules. The POVM-based method without optimization already outperforms these results, with $b \approx 4.8$ using the JKMN mapping [54]. The adaptive strategy results in a considerably smaller exponent, $b \approx 3.3$. Interestingly, a similar scaling is achieved also for the JW mapping; while that mapping leads to Pauli strings with weight O(N), the adaptive strategy is able to find POVMs for which the measurement process is efficient.
B. Performance and scaling

While the previous results illustrate the working principles of the algorithm with three molecular Hamiltonians, we now turn our attention toward the analysis of its performance. In Fig. 3 and in App. E, we collect the errors of similar estimations for several other Hamiltonians corresponding to the same molecules (under different qubit-reduction schemes) for a total number of measurements $S \approx 10^6$, from which it can be seen that our algorithm is advantageous in almost all cases, and particularly for LiH and H2O. Note that, since our algorithm is adaptive and the error decreases faster than $S^{-1/2}$, in contrast to the other methods, the advantage of the former would potentially increase for larger $S$. This is especially the case for the larger problems, for which the POVM-learning algorithm is further from convergence (and the error in the energy from chemical accuracy) at $S = 10^6$ shots.

In order to study the performance of the algorithm for larger Hamiltonians, we analyse the number of measurements required to reach an accuracy of 0.5 mHa as a function of the number of qubits for hydrogen chains with an increasing number of atoms; arguably, this figure of merit is more informative of the usefulness of the approach in real applications, in which one is interested in determining the ground-state energy within some fixed accuracy, rather than obtaining the best performance for a fixed number of shots. Due to limitations in computational power, we run our simulations for a limited number of measurements and extrapolate the total number required for such precision (see Fig. 4 for results using the JKMN mapping and its caption for a detailed explanation). Even though this method overestimates the actual number of shots needed by our algorithm, we see a considerable improvement with respect to the Pauli and grouped Pauli methods. Interestingly, the bare hybrid quantum-classical Monte Carlo method without optimization, despite yielding higher errors for the small sizes considered here, also shows a more favorable scaling than the former methods. To provide a more quantitative evaluation, we further fit each set of results to a function of the form $aN^b$, as sketched below. We report the corresponding values of the exponent $b$, also including those for other fermion-to-qubit mappings, in Table I. Note that, while other mappings are added for completeness, the optimal performance of our algorithm is expected with the mapping from Ref. [54] (see Sec. III A), as confirmed by the results. Importantly, we can see that our method thus benefits from two improvements: the Monte Carlo approach results in a considerable reduction in the exponent, followed by a second scaling improvement stemming from the learning strategy. The result is an overall efficiency comparable to state-of-the-art methods [41,54].
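For reference, the power-law fits $S_{\rm tar} = aN^b$ reported in Table I amount to an ordinary least-squares fit in log-log space; the sketch below illustrates the procedure on synthetic data.

```python
import numpy as np

# Fit S_tar = a * N**b via linear least squares on log S = log a + b log N.
N = np.array([4, 6, 8, 10, 12])                # qubit counts (synthetic)
S_tar = 50.0 * N.astype(float) ** 3.3          # synthetic data with b = 3.3
b, log_a = np.polyfit(np.log(N), np.log(S_tar), deg=1)
print(f"a = {np.exp(log_a):.1f}, b = {b:.2f}")  # recovers a = 50.0, b = 3.30
```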
C. Exploiting informationally complete data

Further numerical experiments demonstrate that the IC data collected for the estimation of the energy can indeed be reused for other purposes. As explained in Sec. III A, the same IC outputs can be post-processed to calculate any expectation value of our choice, the only limitation being that, as it is reasonable to expect, the optimization procedure targeting a particular observable may worsen the estimation of other specific quantities. In what follows, rather than focusing on particular additional observables, we consider an arguably more costly task: state tomography. More precisely, we address the reconstruction of all the $k$-qubit density operators in the system for all $k \le K$. Reduced tomography has recently attracted some interest in the quantum information literature for diverse purposes [39-47, 56, 69].

We thus proceed in a similar manner as in the previous subsections. We approximate the ground states by training VQE ansätze and then estimate the energy using the adaptive algorithm. The resulting data are then used to reconstruct all the $k$-qubit reduced density matrices using likelihood maximisation. In particular, for every subset of $k$ qubits in the system, we marginalise the outcomes over the subset and then use the algorithm introduced in Ref. [70] to reconstruct the density operator. Since we must integrate IC data from $T$ different POVMs in the likelihood maximisation procedure, we define a collective POVM with $T \times 4^N$ effects $\{\Xi^{(t,m)} = \Pi^{(t)}_m S_t/S,\ t \in [1, T]\}$, where the index $t$ indicates the POVM optimization step, $S_t$ represents the number of measurements carried out in iteration $t$, and $S = \sum_t S_t$ [71]. Once a $k$-qubit density matrix $\rho_{\rm tomo}$ is reconstructed, we compute its infidelity $\bar F(\rho_{\rm tomo}, \rho_{\rm exact}) = 1 - F(\rho_{\rm tomo}, \rho_{\rm exact})$, where $F(\rho_{\rm tomo}, \rho_{\rm exact}) = \mathrm{Tr}[\sqrt{\sqrt{\rho_{\rm tomo}}\,\rho_{\rm exact}\sqrt{\rho_{\rm tomo}}}\,]^2$ is the quantum fidelity, with respect to the exact state $\rho_{\rm exact}$ (obtained by tracing out all other qubits in the trained VQE ansatz). In Fig. 5, we show the resulting average $k$-wise infidelity for the ground states of two molecules, H2 and LiH, as a function of $k$, with and without gradient-based POVM optimization. We note that the density matrices can be reconstructed with high fidelity from the same data that was used for the estimation of the energy. Moreover, the comparison between these two methods reveals that the optimization of the POVM with respect to the precision in the estimation of the energy also improves the fidelity of the reconstructed density matrices, by up to an order of magnitude. In all cases, however, the infidelity increases with $k$, as expected.
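For completeness, the infidelity defined above can be evaluated in a few lines of NumPy/SciPy. This is a generic implementation of the Uhlmann fidelity, not code from the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def infidelity(rho_tomo, rho_exact):
    """1 - F, with Uhlmann fidelity F = (Tr sqrt(sqrt(r) s sqrt(r)))^2."""
    s = sqrtm(rho_tomo)
    fidelity = np.real(np.trace(sqrtm(s @ rho_exact @ s))) ** 2
    return 1.0 - fidelity

# Identical states give (numerically) zero infidelity.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)
print(infidelity(rho, rho))  # ~0.0
```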
V. DISCUSSION AND CONCLUSIONS

We introduce an algorithm for efficient observable estimation that exploits informationally complete generalized quantum measurements, integrating three important components: a hybrid quantum-classical Monte Carlo, an efficient method to navigate POVM space toward low-variance measurements, and a recipe to combine different estimations of the observable of interest. The result is a procedure in which an optimized measurement of an operator average is learnt in an adaptive fashion with no measurement overhead. Consequently, the overall measurement cost is drastically reduced with respect to the initial POVM considered. This is particularly interesting for real applications, considering that the initial SIC POVMs used already offer a significant improvement over other widely used methods, such as grouped Pauli. Importantly, the method does not require any exponentially scaling classical or quantum computations, although it does involve a modest polynomial classical overhead.

We have illustrated the potential of the approach with several proof-of-principle numerical experiments by reconstructing the ground-state energies of several molecular Hamiltonians. Importantly, our simulations suggest that this adaptive method exhibits scaling performance comparable to that of the most efficient measurement-reduction techniques in the current literature. While confirming this calls for a more thorough analysis and further simulations, possibly including more general operators than molecular Hamiltonians, it is also important to point out that there is still substantial room for improvement in our algorithm, especially in the parametrisation of the POVMs and in the gradient-descent-based update schedule. It might also be interesting to investigate, in future work, the potential use of classical machine-learning methods to enhance the measurement-adaptation step [49].

Our algorithm also offers some other intrinsic advantages. Being completely agnostic to the nature of the qubit Hamiltonian, and inspired not by quantum chemistry but by quantum information alone, the proposed procedure may find interesting applications beyond VQE calculations. The method is also formally exact, as no approximations are made at any point, except for using the estimated variances as proxies for the actual ones. Moreover, the informationally complete data produced during the measurement process for a particular observable can in principle be reused to calculate many other properties of the underlying quantum state, including its tomographic reconstruction. We provide evidence of the feasibility of this prospect by performing high-fidelity reduced state tomography with no additional measurements.

In this paper, we have only considered the task of estimating a given observable for a fixed quantum state. This typically represents a single step of, e.g., a VQE calculation. In perspective, one could, however, easily integrate our proposed method as a subroutine of the whole ansatz-optimization method. In such a case, it might actually be helpful to use the optimal POVM from the previous VQE step, or a slight modification of it, as the starting point of the measurement optimization on the updated ansatz, given that the trial wave function should undergo relatively small changes between consecutive iterations. While this stands as a hypothesis for now, it might reduce the average number of steps required to adjust the POVM settings, hence leading to an even larger reduction in the measurement costs associated with the overall ansatz-optimization process.

Admittedly, our contribution presents a drawback: it requires twice as many qubits. However, it is important to discuss what this entails in practice. The ancillary qubits used for the implementation of the POVM are initialised in the ground state, along with the rest of the qubits in the device, but no operations are applied on them until the measurement stage. Hence, the algorithm introduced here does not require the entanglement of 2N qubits throughout the whole computation, which would amplify the detrimental effect of decoherence. Instead, the additional N qubits should be regarded as nothing other than part of the measurement apparatus. In addition, we note that the method offers a significant advantage over a simpler use of the additional qubits, such as executing two grouped Pauli iterations in parallel, which, albeit cutting the total run time by up to a factor of 2, would not improve the scaling of that method.
Finally, it is worth discussing some relevant aspects regarding its implementation on real hardware, especially on near-term quantum computers. On the one hand, in devices with limited connectivity, additional SWAP gates may be required in order to enable the interaction between system and ancillary qubits. Importantly, the topology of most currently existing platforms enables the additional SWAP gates, when needed, to be parallelized in such a way that the measurement circuit preserves its size independence. This highlights a favorable aspect of the algorithm: since only a constant-depth measurement circuit is required (namely, the application of a two-qubit gate instead of a single-qubit one, and perhaps some SWAP gates if the connectivity so requires, for every system qubit), the measurement process itself is not expected to introduce significant decoherence effects with respect to applying standard Pauli measurements. Moreover, the commonly used readout noise-reduction techniques, such as the algorithms integrated in Qiskit or any other error-mitigation strategies that would be used for basic Pauli measurements, can be used here to correct the outcome statistics. While a proper assessment of the performance of the method under real noise conditions, as well as of possible specific noise-mitigation strategies, is beyond the scope of this work, these considerations suggest that the ideas introduced here can play an important role in enabling the first useful applications of quantum computing for quantum chemistry, so far estimated to require prohibitive computing times.

Appendix A: POVM parametrisation

As stated in the main text, the algorithm relies on parametrized, informationally complete POVMs implemented through the application of two-qubit unitaries with ancillary qubits, followed by projective measurements in the computational basis. To explain the parametrisation used in this work, it is easier to start by identifying the POVM characterizing one such measurement when applying an arbitrary unitary gate $U$ between some system qubit $q$ in state $\rho$ and an ancilla $a$ in state $|0\rangle\langle 0|$. Since the two qubits are eventually measured projectively in the computational basis, there are four possible outcomes $(b_q, b_a)$ with $b_q \in \{0, 1\}$ (and similarly for $b_a$). Each outcome occurs with probability $p_{(b_q,b_a)} = \langle b_q b_a| U (\rho \otimes |0\rangle\langle 0|) U^\dagger |b_q b_a\rangle$. Writing $U = \sum_{ijkl} u^{ij}_{kl} |ij\rangle\langle kl|$, this expression becomes $p_{(b_q,b_a)} = \sum_{kk'} u^{b_qb_a}_{k0} (u^{b_qb_a}_{k'0})^* \langle k'|\rho|k\rangle = \mathrm{Tr}[\,|\pi_{(b_q,b_a)}\rangle\langle\pi_{(b_q,b_a)}|\,\rho\,]$, where we have defined $|\pi_{(b_q,b_a)}\rangle = \sum_k (u^{b_qb_a}_{k0})^* |k\rangle$. Hence, the corresponding POVM is given by the set of effects $\{\Pi_i = |\pi_i\rangle\langle\pi_i|,\ i \in [0, 3]\}$, where we have relabelled the outcomes using $i = 2b_q + b_a$.
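To make the map from $U$ to the effects concrete, here is a small NumPy sketch (not the authors' code) that extracts the four rank-1 effects from the two ancilla-in-$|0\rangle$ columns of a two-qubit unitary. The basis ordering, with row and column index $2b_q + b_a$, is an assumption that must be matched to the convention of the simulator in use.

```python
import numpy as np

def povm_from_unitary(U):
    """Four rank-1 effects of the single-qubit POVM realised by applying
    the two-qubit unitary U to (system qubit, ancilla in |0>) and then
    measuring both qubits in the computational basis. Index convention:
    row/column index is 2*b_q + b_a (assumed; match your simulator)."""
    effects = []
    for i in range(4):  # outcome label i = 2*b_q + b_a
        # |pi_i> has components (u^{b_q b_a}_{k0})^*, k in {0, 1};
        # columns 0 and 2 are the ancilla-in-|0> columns of U.
        pi = np.conj(np.array([U[i, 0], U[i, 2]]))
        effects.append(np.outer(pi, pi.conj()))
    return effects

# Sanity check with a random unitary: the effects resolve the identity.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(A)
assert np.allclose(sum(povm_from_unitary(Q)), np.eye(2))
```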
The previous calculation suggests our strategy for the POVM parametrisation: parametrize the unitary $U$ and compute the resulting POVM. The following observations are important. Firstly, not all the components $u^{b_qb_a}_{kl}$ are relevant for the measurement, as the initial state of the ancilla deems those with $l = 1$ irrelevant (provided that $U$ is unitary). Secondly, global phases on $|\pi_i\rangle$ have no effect on the resulting operator $\Pi_i$, so we are free to choose them such that the components $u^{b_qb_a}_{00}$ are real. The relevant components $u^{b_qb_a}_{k0}$ are then the components of two orthonormal vectors in $\mathbb{C}^4$, which we may call $u_0$ and $u_1$ in what follows. Before we proceed any further, let us count the total number of available degrees of freedom. On the one hand, we have four real numbers whose squares add up to one for $u_0$, which amounts to 3 degrees of freedom. For $u_1$, we have four complex numbers with three constraints (one for normalisation and two for the orthogonality with $u_0$), which results in 5 degrees of freedom. In total, we need 8 parameters per system qubit.

Our parametrisation for single-qubit POVMs thus consists of 8 real numbers $x = (x_0, \ldots, x_7)$, with $x_i \in (0, 1)$, $\forall i$ (in practice, we constrain the values further, see App. B). We start by using the first three of these to produce the set of angles $(\pi x_0, \pi x_1, 2\pi x_2)$, which identify (uniquely) a point on a 3-sphere $S^3$ with unit radius embedded in $\mathbb{R}^4$. The corresponding Euclidean coordinates in the embedding space are four real numbers whose squares add up to one, hence generating the components $u^{b_qb_a}_{00}$. Defining $u^{b_qb_a}_{10}$ from the other five parameters is slightly more involved. To guarantee that the vector $u_1$ is orthogonal to $u_0$, we construct it as a linear combination of orthonormal vectors orthogonal to $u_0$, that is, $u_1 = \sum_i z_i u^\perp_i$; the orthonormal basis $\{u^\perp_i\}$ can be found by means of Gram-Schmidt orthonormalisation. The components $z_i$, which must also be normalised, are determined by the remaining parameters: once again, we define a list of angles $(\pi x_3, \ldots, \pi x_6, 2\pi x_7)$ and calculate the Euclidean coordinates of the corresponding point on $S^5$. These six real numbers $\{r_i,\ i \in [0, 5]\}$ are then used to define three components $\{z_k = r_{2k} + i r_{2k+1}\}$. The result of this procedure is a vector $u_1 \in \mathbb{C}^4$ whose components can be identified with $u^{b_qb_a}_{10}$. Finally, we must find two more vectors $u_2, u_3 \in \mathbb{C}^4$ to complete the missing terms $u^{b_qb_a}_{k1}$ in the definition of the unitary. This can be done by using Gram-Schmidt orthonormalisation once more.

Once the unitary $U$ is defined, we can not only calculate the corresponding set of effects $\{\Pi_i\}$, but also implement it in a given circuit. Indeed, algorithms to find the circuit decomposition of a unitary $U$ are known and readily implemented in Qiskit [72] (also, note that any two-qubit gate can be decomposed into at most three CNOT gates). Admittedly, this methodology is more complicated than simply parametrising arbitrary two-qubit gates $U$ and then calculating the corresponding POVM. However, as discussed above, our procedure avoids the use of unnecessary or redundant parameters, which could make the POVM optimization harder. Nevertheless, it is likely that other parametrisations, more suitable for the adaptive optimization algorithm, exist. These refinements, as well as improvements to the gradient descent protocol (see App. B), will be the subject of future work.
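One possible reading of this construction is sketched below; the helper functions and their conventions (in particular, the hyperspherical-coordinate ordering) are our assumptions, not the authors' implementation.

```python
import numpy as np

def sphere_coords(angles):
    """Euclidean coordinates of a point on the unit n-sphere given its
    hyperspherical angles (returns len(angles) + 1 coordinates)."""
    coords, s = [], 1.0
    for a in angles:
        coords.append(s * np.cos(a))
        s *= np.sin(a)
    coords.append(s)
    return np.array(coords)

def columns_from_params(x):
    """Orthonormal u0 (real) and u1 (complex) in C^4 from x in (0,1)^8."""
    x = np.asarray(x, dtype=float)
    # u0: point on S^3 from the angles (pi x0, pi x1, 2 pi x2).
    u0 = sphere_coords([np.pi * x[0], np.pi * x[1],
                        2 * np.pi * x[2]]).astype(complex)
    # Orthonormal basis of the complement of u0 (Gram-Schmidt via QR).
    q, _ = np.linalg.qr(np.column_stack([u0, np.eye(4)]))
    basis = q[:, 1:]  # three orthonormal vectors orthogonal to u0
    # Normalised complex coefficients z from a point on S^5.
    r = sphere_coords([np.pi * x[3], np.pi * x[4], np.pi * x[5],
                       np.pi * x[6], 2 * np.pi * x[7]])
    z = r[0::2] + 1j * r[1::2]
    return u0, basis @ z

u0, u1 = columns_from_params(np.full(8, 0.3))
assert np.isclose(np.linalg.norm(u0), 1) and np.isclose(np.linalg.norm(u1), 1)
assert np.isclose(np.vdot(u0, u1), 0)
```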
Appendix B: Gradient descent protocol

Along the measurement process, we iteratively update the POVM parameters as well as the number of shots per experiment. In particular, we gradually increase the number of shots in order to have more precise estimations of the second moment as the POVM parameters approach a minimum and, consequently, the gradient decreases in magnitude. In this section, we briefly outline the protocol used in our numerical experiments. As explained in the main text, the POVM-based measurements allow us to estimate the gradient $\nabla_x \langle\omega^2\rangle_m$ classically from the outcomes of an experiment run with the POVM corresponding to parameters $x_t$, where $t$ labels the iteration (for the finite-difference partial derivatives $\partial_{x_k}\langle\omega^2\rangle_m \approx (\langle\omega^2\rangle_r - \langle\omega^2\rangle_m)/h$, we use $h = 10^{-3}$). With these elements, we determine the POVM to be used in the $(t+1)$-th iteration through

$x_{t+1} = x_t - \nu\, \nabla_x \langle\omega^2\rangle_m / \max |\nabla_x \langle\omega^2\rangle_m|$,

where $|\nabla_x \langle\omega^2\rangle_m|$ is to be understood as the set of absolute values of the components of $\nabla_x \langle\omega^2\rangle_m$. Hence, $\nu$ is the magnitude of the largest change, in absolute value, of the POVM parameters. It should also be mentioned that, to avoid numerical instabilities, we further constrain every parameter to lie within $[\delta, 1-\delta]$, with $\delta = 0.05$. We start our simulations with $S_1 = 1000$ shots, and we use $\nu = 0.05$. Every three iterations, we update $S_{t+1} = S_t + 1000$ and $\nu \to \nu/1.2$. Hence, as the algorithm approaches the minimum, we obtain more precise estimations of the gradient (larger $S_t$) and we make smaller changes to the parameters (smaller $\nu$). This parameter-updating schedule is rather heuristic and still leaves room for improvement. Designing a more theory-driven approach, or using more sophisticated optimization techniques, will be the subject of future work.
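A minimal sketch of this schedule, with a placeholder in place of the classically estimated gradient, could look as follows.

```python
import numpy as np

def update_params(x, grad, nu, delta=0.05):
    """One gradient step: the largest parameter change has magnitude nu,
    and all parameters stay inside [delta, 1 - delta]."""
    step = nu * grad / np.max(np.abs(grad))
    return np.clip(x - step, delta, 1 - delta)

rng = np.random.default_rng(1)
x = rng.uniform(0.05, 0.95, size=8)  # POVM parameters for one qubit
nu, shots = 0.05, 1000
for t in range(1, 13):
    grad = rng.normal(size=8)  # placeholder for the estimated gradient
    x = update_params(x, grad, nu)
    if t % 3 == 0:
        shots += 1000  # more precise gradient estimates near the minimum
        nu /= 1.2      # and smaller changes to the parameters
```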
Appendix C: Symmetric IC POVMs as initial measurements and correlated estimators

In the absence of prior knowledge about the state of the qubit register, it is desirable to use a so-called symmetric informationally complete POVM (SIC POVM) on every system qubit. Symmetric here means that its single-qubit effects, when rescaled as $\tilde\Pi_i = 2\Pi_i$, yield a set of projectors $\{\tilde\Pi_i : \tilde\Pi_i^2 = \tilde\Pi_i\}$ fulfilling $\mathrm{Tr}[\tilde\Pi_i \tilde\Pi_j] = (2\delta_{ij} + 1)/3$, $\forall i, j$. Hence, the projectors $\{\tilde\Pi_i\}$ form a regular tetrahedron in the Bloch sphere. In this work, we have considered two different SIC POVMs as initial conditions for the adaptive algorithm. The first one is the classic example of a single-qubit SIC POVM, defined in terms of the projectors onto four states whose Bloch vectors point at the vertices of a regular tetrahedron. The second SIC POVM used in this paper was considered by Jiang et al. [54] and is another standard setting [73-75]. In order to use them in our algorithm, we must first find the parameters $x$ of each of them in the POVM space (see App. A). This can be done numerically; the resulting parameters are reported in the computer code accompanying this paper [60].

It is worth discussing some properties of this second SIC POVM when used in our hybrid quantum-classical Monte Carlo algorithm, Eq. (4). In this case, all the coefficients $b$ take values in $\{-\sqrt{3}, \sqrt{3}\}$, $\forall k > 0$ (for $k = 0$, these are equal to one, since the effects add up to the identity). This, in turn, has interesting implications. Let us consider the statistical error in the estimation of the expectation value of an observable given by a single Pauli string $P_k$ with weight $l$, that is, only $l$ Pauli operators in $P_k$ are different from the identity. In this case, the variance of the Monte Carlo is given by $\mathrm{Var}(\omega_m) = 3^l - \langle P_k\rangle^2 \le 3^l$. Hence, if $S$ measurements are performed, the variance of the estimator $\bar P_k$ is $\mathrm{Var}(\bar P_k) \le 3^l/S$. This is indeed consistent with Ref. [54]. While we can reuse the IC data from the quantum computer to calculate the expectation value of other Pauli strings $P_{k'}$ with similar statistical error (assuming they have the same weight $l$), we must take into account that the resulting estimators $\bar P_k$ and $\bar P_{k'}$ can be correlated. In practice, this means that, if we are to use them to calculate the expectation value of an operator defined in terms of a linear combination of Pauli strings, $O = \sum_k c_k P_k$, the variance of the estimator $\bar O = \sum_k c_k \bar P_k$ depends on the potentially non-zero covariance between distinct terms, so we cannot assume that $\mathrm{Var}(\bar O) = \sum_k |c_k|^2 \mathrm{Var}(\bar P_k)$. The estimation based on the Monte Carlo method, Eq. (4), naturally takes these correlations into account when accounting for the statistical error of the approach, hence yielding the correct estimation. This is important for two reasons. On the one hand, it provides a meaningful assessment of how far the algorithm is from reaching the required accuracy at any given point of its execution. On the other hand, since the Monte Carlo variance is the quantity that our adaptive strategy seeks to minimize, the algorithm presented here can potentially find POVMs for which the negative impact of these correlations on the estimated mean is reduced.

FIG. 6. (Left) Error in the estimation of the energy of an optimal VQE circuit for all the Hamiltonians reported in the table on the right. Every column compares the results for one molecule (H2, LiH, and H2O) with different measurement methods, with a total of $S = 10^6$ shots. Each row corresponds to a different mapping. Points represent the average error over 100 realisations and the error bars show a 95% confidence interval obtained using bootstrapping. We note that, in some cases, especially those involving more qubits, like the 14-qubit H2O molecule with the BK mapping, the measurement optimization has not fully converged for $S = 10^6$, so one would expect more notable differences with respect to the other methods for larger $S$. (Right) A table of the various combinations of molecule, mapping, basis and qubit-reduction techniques considered, with the corresponding number of qubits $N$. TQR is the two-qubit reduction for the parity mapping, Z2 refers to qubit reductions due to discrete symmetries [57] and CF denotes core freeze.

Appendix D: Sequential and one-step mixing equivalence

In this section we prove that the sequential estimation mixing presented in the main text is unbiased. To show this, let us first compute the unbiased one-step mixing estimation. Suppose that, after the different experiments have been run, we are left with a set of $T$ estimated means $\{\bar O_t\}$ and variances $\{\bar V_t\}$. We would like to find a set of weights $\{\alpha_t > 0\}$, with $\sum_t \alpha_t = 1$, that minimizes the variance $\bar V_T = \sum_t \alpha_t^2 \bar V_t$ of $\bar O_T = \sum_t \alpha_t \bar O_t$. To do so, we can introduce a Lagrange multiplier $\lambda$ and define

$\mathcal{L} = \sum_t \alpha_t^2 \bar V_t - \lambda \Big(\sum_t \alpha_t - 1\Big), \qquad (D1)$

so that $\partial_\lambda \mathcal{L} = 0$ imposes the constraint $\sum_t \alpha_t = 1$. From $\partial_{\alpha_t} \mathcal{L} = 0$ we obtain $\alpha_t = \lambda \bar\rho_t/2$, where we have defined $\bar\rho_t \equiv 1/\bar V_t$ to ease the presentation, as inverse variances will appear throughout. Using now $\sum_t \alpha_t = 1$ yields $\lambda = 2/\sum_i \bar\rho_i$ and $\alpha_t = \bar\rho_t/\sum_i \bar\rho_i$. Hence, we arrive at

$\bar O_T = \frac{\sum_{t=1}^T \bar\rho_t \bar O_t}{\sum_{t=1}^T \bar\rho_t}, \qquad \frac{1}{\bar V_T} = \sum_{t=1}^T \bar\rho_t. \qquad (D2)$

To assess the result of the sequential algorithm, note that the recurrence $\tilde V_t = \tilde V_{t-1} \bar V_t/(\tilde V_{t-1} + \bar V_t)$ in the second step is equivalent to $\tilde\rho_t = \tilde\rho_{t-1} + \bar\rho_t$, with $\tilde\rho_t \equiv 1/\tilde V_t$. Iterating, we obtain $\tilde\rho_T = \sum_{t=1}^T \bar\rho_t$, which is the rightmost term in Eq. (D2). Similarly, the recurrence for the mean, $\tilde O_t = (\bar O_t \tilde V_{t-1} + \tilde O_{t-1} \bar V_t)/(\tilde V_{t-1} + \bar V_t)$, reads $\tilde O_t = (\bar O_t \bar\rho_t + \tilde O_{t-1} \tilde\rho_{t-1})/\tilde\rho_t$. Iterating once again, we obtain the expression for $\bar O_T$ in Eq. (D2). Hence, both estimations are equivalent.
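The equivalence can also be checked numerically in a few lines (again a sketch, not code from the accompanying repository):

```python
import numpy as np

# Sequential mixing of T estimates reproduces one-step
# inverse-variance weighting (the claim proved above).
rng = np.random.default_rng(42)
O = rng.normal(size=5)             # per-step estimated means
V = rng.uniform(0.1, 1.0, size=5)  # per-step estimated variances

# One-step mixing: weights proportional to inverse variances.
rho = 1.0 / V
one_step_mean = np.sum(rho * O) / np.sum(rho)
one_step_var = 1.0 / np.sum(rho)

# Sequential mixing via the recurrences for the mean and the variance.
m, v = O[0], V[0]
for t in range(1, len(O)):
    m = (O[t] * v + m * V[t]) / (v + V[t])
    v = v * V[t] / (v + V[t])

assert np.isclose(m, one_step_mean) and np.isclose(v, one_step_var)
```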
Neuroprotective Potential of Pituitary Adenylate Cyclase Activating Polypeptide in Retinal Degenerations of Metabolic Origin

Pituitary adenylate cyclase-activating polypeptide (PACAP1-38) is a highly conserved member of the secretin/glucagon/VIP family. The repressive effect of PACAP1-38 on the apoptotic machinery has been an area of active research, conferring a significant neuroprotective potential onto this peptide. A remarkable number of studies suggest its importance in the etiology of neurodegenerative disorders, particularly in relation to retinal metabolic disorders. In our review, we provide short descriptions of various pathological conditions (diabetic retinopathy, excitotoxic retinal injury and ischemic retinal lesion) in which the remedial effect of PACAP has been well demonstrated in various animal models. Of all the pathological conditions, diabetic retinopathy seems to be the most intriguing, as it develops in 75% of patients with type 1 and 50% of patients with type 2 diabetes, with concomitant progression to legal blindness in about 5%. Several animal models have been developed in recent years to study retinal degenerations, and among these, the glaucoma and age-related retinal degeneration models closely recapitulate the human conditions. PACAP neuroprotection is thought to operate through enhanced cAMP production upon binding to PAC1-R. However, the underlying signaling network that leads to neuroprotection is not fully understood. We observed that (i) PACAP is not equally efficient in the above conditions; (ii) in some cases more than one signaling pathway is activated; (iii) the coupling of PAC1-R and signaling is stage-dependent; and (iv) PAC1-R is not the only receptor that must be considered to interpret the effects in our experiments. These observations point to a complex signaling mechanism that involves alternative routes besides the classical cAMP/protein kinase A pathway to evoke the outstanding neuroprotective action. Consequently, the possible contribution of the other two main receptors (VPAC1-R and VPAC2-R) will also be discussed. Finally, the potential medical use of PACAP in some retinal and ocular disorders will also be reviewed. By taking advantage of today's low-cost synthesis technologies, PACAP may serve as an alternative to the expensive treatment modalities currently available for ocular or retinal conditions.

Introduction

Neuropeptides have a fundamental role in the maturation of the nervous system, and their functional consequences appear in countless biological mechanisms, both in physiological and in pathological conditions. Peptides may act as neurotransmitters, neuromodulators or neurohormones; therefore, their function in neuronal development/regeneration may confer crucial protective roles during pathological conditions (Strand, 2003;Casini, 2005;Cervia and Casini, 2013). The biological effects of PACAP are mediated by three types of G-protein coupled receptors which have seven transmembrane domains (PAC1-R, VPAC1-R, VPAC2-R, see below). PACAP binds to the pituitary adenylate cyclase-activating polypeptide type I receptor (PAC1-R) with approximately 100-fold higher affinity than vasoactive intestinal peptide (VIP), while both peptides have similar affinities for VPAC1-R and VPAC2-R. These receptors are widely distributed in the central and peripheral nervous system (Laburthe et al., 2007). The variable effects of PACAP are due to the activation of diverse signal transduction pathways, and their outcomes depend on which receptor types have been activated.
Adenylate cyclase (AC), phospholipase C (PLC) and Ca2+ are the main effectors in the signal transduction mechanisms of PACAP (Spengler et al., 1993;Pisegna and Wank, 1996). PAC1-R and VPAC1-R are coupled to AC, which leads to cyclic adenosine 3′,5′-monophosphate (cAMP) level elevations and the subsequent activation of protein kinase A (PKA), which in turn could activate the mitogen-activated protein kinase (MAPK) pathway. Both receptor types are also coupled to PLC, which leads to the stimulation of Ca2+ mobilization and the activation of the protein kinase C (PKC) pathway. The VPAC2-R subtype also seems to activate the AC signaling pathway. Beyond the receptor types, the activation of different pathways depends on the ligands, the tissue type, and the stage of development (Filipsson et al., 1998;Basille et al., 2000;Vaudry et al., 2000). PACAP and its receptors are present in the central nervous system (CNS) and in peripheral organs of mammals (Arimura and Shioda, 1995;Vaudry et al., 2009). In the CNS it behaves as a neurotransmitter or neurotrophic factor and is expressed in the hippocampus, cerebellum, hypothalamus and in several brainstem nuclei (Hannibal, 2002;Lee and Seo, 2014). Several studies have discussed its neuroprotective effects in neurodegenerative conditions such as stroke, ischemic brain injury, Alzheimer's disease and Parkinsonism (Wang et al., 2008;Atlasz et al., 2010;Han et al., 2014;Matsumoto et al., 2016). Studies have revealed the expression of PAC1-R in the conjunctiva, while PACAP/PAC1-R show higher expression in the lacrimal glands, in the cornea and in the retina (Wang et al., 1995;Elsas et al., 1996). In the retina, the nerve cell bodies in the ganglion cell layer (GCL), some amacrine cells and horizontal cells show PACAP immunopositivity (Denes et al., 2014). PAC1-R is strongly expressed in the GCL and in the inner nuclear layer (INL), and shows lower expression in the outer and inner plexiform layers (OPL, IPL) as well as in the outer nuclear layer (ONL) (Seki et al., 1997). To date, several studies have described the significant neuroprotective potential and neurotrophic effects of PACAP in relation to retinal metabolic disorders. Although its physiological action is incompletely elucidated, this peptide exerts neuroprotective and trophic actions by regulating cell survival and death, not only during the development and maturation of the nervous system but also in pathological conditions. Although its pivotal roles in retinal metabolic disorders have been extensively investigated, the mechanisms are still not well understood, and further signal transduction pathways may await to be revealed. The primary aims of the present review are to summarize our knowledge about PACAP action in the retina in various physiological and pathological conditions (diabetic retinopathy, excitotoxic retinal injury and ischemic retinal lesion) and to discuss the potential signal transduction pathways in the context of its protective action. Particularly, we pay special attention to (i) the lack of PACAP in the retina and supplementation of PACAP during early postnatal development; (ii) PAC1-R subtypes in the retina and their possible involvement in the neuroprotective events; and (iii) the role of PACAP in mobilizing the immune system, both white blood cells and chemical messengers, to achieve retinal neuroprotection. Finally, we summarize the synergistic and diverging pathways through which PACAP acts and achieves functional improvement in concerted action with other neuropeptides.

PACAP CONTRA RETINAL DEGENERATION WITH METABOLIC ORIGINS

As we mentioned above, the physiological role of PACAP in the adult retina is not well established.
Clearly, an emerging theory is that the lack of endogenous PACAP would accelerate age-related degeneration (Reglodi et al., 2018). PACAP deficiency mimics aspects of age-related pathophysiological changes, including increased neuronal vulnerability and systemic degeneration accompanied by increased apoptosis, oxidative stress, and inflammation, thus mimicking early aging. In support of this theory, it has been proven recently that endogenous PACAP has a protective effect during retinal inflammation. Experiments with PACAP knockout (KO) mice revealed that intraperitoneal injection of lipopolysaccharide (LPS) induced markedly more serious eye inflammation in PACAP KO mice than in the wild-type group. During the process of inflammation, phosphorylated protein kinase B (pAkt) and phosphorylated glycogen synthase kinase-3 (pGSK) levels decreased in PACAP KO mice, while cytokines (sICAM-1, JE, TIMP-1) were elevated (Vaczy et al., 2018).

INVOLVEMENT OF PACAP IN RETINAL CELL DEVELOPMENT AND AGING

In the CNS, numerous extrinsic and intrinsic factors contribute to the formation of mature tissue by the precise regulation of the appropriate number and distribution of neurons. Neuropeptides influence many developmental processes of the CNS in a regulated way (Casini, 2005). In the developing retina, progenitor cells proliferate and differentiate into various retinal cell types as a result of numerous regulated cell cycle processes, developing into the final multi-layered structure of the retina. In postnatal (P6, P9) rat retinas, PACAP treatment modulates cell death by activation of the cAMP-PKA pathway (Silveira et al., 2002). Njaine et al. (2010) have investigated the exact timing and role of PACAP and its receptors in cell generation in the developing rat retina. PAC1-R is expressed as early as E16 during development, while VPAC1-R and VPAC2-R are expressed later, but are then present at all other stages. Upon receptor activation, PACAP treatment exerted an anti-proliferative effect through phosphorylation of CREB in cyclin D1-expressing retinal progenitor cells. Moreover, PACAP receptor activation led to a decreased level of cyclin D1 mRNA, which decreased further upon combined treatment with PACAP and the cAMP-degradation inhibitor IBMX. These findings have shown that PACAP has control over a subpopulation of progenitor cells and modulates cell proliferation in the developing retinal tissue (Njaine et al., 2010). Interestingly, PACAP shows both pro- and anti-apoptotic effects on postnatal retinal development in rat models. Caspase activity analysis has shown dose- and stage-dependent effects of PACAP on developmental apoptosis in rat retinas. Intravitreal injection of PACAP from postnatal day 1 (P1) to P7 induces apoptosis during the early stage of retinogenesis. When 100 pmol PACAP was injected, it increased caspase 3/7 activity at P1, P3, and P5, but had no effect at P7. At P3, treatment repressed caspase 3/7 activity 18 h after the intravitreal injection; however, activity increased 24 h post-injection. Apparently, PACAP treatment did not exert anti-apoptotic effects in P1, P5, and P7 rat retinas (Nyisztor et al., 2018). These findings warn us to re-evaluate PACAP action cautiously, always taking the timing and concentrations into account, especially in development. Unfortunately, not much is known about the functions of this peptide in mature retinas.
Aging, experienced as a loss of function, is accompanied by functional and morphological changes in retinal tissues (Gao and Hollyfield, 1992;Curcio and Drucker, 1993;Ramirez et al., 2001;Kovacs-Valasek et al., 2017). PACAP KO mice show accelerated age-related changes compared to wild-type retinas. Altered structural changes included enhanced loss of ganglion cells and sprouting of rod bipolar cell dendrites into the ONL in aging PACAP KO mice. The protein kinase C (PKC) α level in rod bipolar cells was reduced in this condition. In contrast, glial fibrillary acidic protein (GFAP) levels increased in the absence of endogenous PACAP. At the same time, PAC1-R was upregulated in the retinas of PACAP-deficient young adult mice. Surprisingly, the authors did not find differences in the histological structure of young adult PACAP KO and wild-type mice (Kovacs-Valasek et al., 2017). These results suggest that PACAP contributes to maintaining the biochemical balance within neurons and glial cells. Thus, in the absence of this peptide, aging processes (e.g., reactive oxygen species formation) may gain strength earlier than in animals with normal PACAP levels.

PACAP RECEPTOR TYPES EXPRESSED IN RETINA

In the retina, the presence of four PAC1-R isoforms has been verified during postnatal development. The Null isoform showed no impressive changes at early stages (P1 to P5), but then manifested a decline from P5 to P15. Null message levels fell almost to zero in early adulthood. The Hip isoform had a similar expression pattern. The Hiphop1 isoform showed one prominent peak at P10. The Hop1 splice variant did not change much between P1 and P5, but thereafter showed a significant increase at P10, P15, and P20. This seems to be the major isoform during adult life. Depending on the type of PAC1-R isoform, PACAP can induce precursor cells to exit the cell cycle through activation of the Null isoform (Lu and DiCicco-Bloom, 1997) or can promote proliferation in neuroblasts if they express the Hop isoform (Lu et al., 1998). Interestingly, the expression of both the Hip and Hop1 isoforms displays a sudden increase at P10, prior to eye opening. Due to technical difficulties, PAC1-R-bearing retinal cells could not be sorted by their respective isoforms (isoform-specific antibodies are currently not available). Based on these experimental results, a subsequent study investigated the exact time period of the isoform shift between postnatal days 5 and 10. The transcript level of Hip mRNA decreased from P6 through P9, while the Hop1 expression level did not display any changes until P10. Consequently, a Hip/Hop1 isoform shift occurs between P6 and P8, which could alter PACAP functions in the postnatal rat retina (Denes et al., 2014). In contrast to PAC1-R, the expression level of the VPAC1-R receptor did not change during postnatal retinal development, though both the mRNA and the protein could be detected at all selected time points. A similar scenario was found in the case of VPAC2-R. Therefore, these receptors appear to be expressed in the newborn as well as in the adult retina, with similar intensity at both the message and the protein level (Lakk et al., 2012).

RETINAL PATHOLOGIES AND PACAP

Retinal diseases fall into two main categories: inherited disorders and problems of metabolic origin. Both conditions have attracted substantial research interest. According to our PubMed survey, approximately 4,000 papers have been published in the last 10 years dealing with the former and about 3,000 with the latter.
Approximately half of the papers deal with either diagnostic advances or treatment options. Below we summarize some of the experimental results regarding the three most common conditions: diabetic retinopathy, excitotoxic retinal injury and ischemic retinal conditions.

Diabetic Retinopathy and PACAP

Diabetes is a multifactorial metabolic disorder resulting from several pathological metabolic processes, with increasing morbidity statistics worldwide. In 2017, 425 million adults lived with diabetes, and the affected population is projected to rise to 629 million by 2045 (International Diabetes Federation, 2017). Diabetic retinopathy (DR) is a microvascular complication of diabetes and the leading cause of vision loss (Cheung et al., 2010;Antonetti et al., 2012). DR is also considered a chronic inflammatory disorder; low-grade inflammation has been observed in the retinas of both diabetic animals and human patients (Krady et al., 2005;Kern, 2007;Zeng et al., 2008). Patients with type 1 diabetes have a higher risk of DR than those with the type 2 disease (Yau et al., 2012). DR has two distinguishable stages depending on the presence of neovascularization: an earlier non-proliferative phase characterized by abnormalities in the microvasculature, which could progress into a proliferative phase with macular neovascularization (Cheung et al., 2010). The pathogenesis of DR includes increased polyol and hexosamine pathway activation, higher production of advanced glycation end-products and the activation of PKC pathways. These altered signaling mechanisms could result in oxidative stress and chronic inflammation. Retinal microglial cells become activated and migrate into the subretinal space in several retinopathies, including DR (Zeng et al., 2000, 2008). The activation of microglia induced by hyperglycemia has been associated with the early development of DR, and occurs as early as electroretinographic modifications (Gaucher et al., 2007;Kern, 2007). Cytokines released by activated microglia were shown to contribute to neuronal cell death (Krady et al., 2005). They stimulate the production of cytotoxic substances, such as tumor necrosis factor α (TNFα) and reactive oxygen species (ROS), proteases and even excitatory amino acids, which may induce neuronal degeneration. Leukocyte-mediated retinal cell apoptosis is among the earliest pathological manifestations of DR and results in the formation of acellular, occluded capillaries, microaneurysms, and vascular basement membrane thickening (Kern and Engerman, 1995). Macrophages have long been known to play a major role in the pathogenesis of proliferative vitreoretinal disorders. In human DR, all types of macrophages could be detected regardless of clinical history and duration of the disease (Esser et al., 1993). The consequences of vascular occlusions contribute to neurodegeneration and dysfunction of the retina (Frank, 2004;Cheung et al., 2010;Giacco and Brownlee, 2010). The neuroprotective effects of PACAP in this pathological condition are complex, because they have structural, physiological and synaptic aspects, as evidenced by many papers in this field (Table 1). In a rat model, intravitreal injection of PACAP ameliorated the structural changes of the retina in streptozotocin-induced DR. This treatment attenuates neuronal cell loss in the GCL, reduces cone cell degeneration and preserves normal dopaminergic amacrine cell numbers compared to untreated diabetic retinas. These findings have demonstrated the significant neuroprotective effect of PACAP and its therapeutic potential in DR.
In their latest study, Maugeri et al. (2019) have provided evidence that PACAP1-38 protects not only neurons but also the retinal pigmented epithelium, both in vivo and in vitro. In another study, intraocular PACAP injection attenuated retinal injury by increasing the levels of the anti-apoptotic p-Akt, extracellular signal-regulated kinase (p-ERK1/2), PKC and B-cell lymphoma 2 (Bcl-2) proteins, while the levels of the pro-apoptotic phosphorylated p38MAPK and activated caspase-3, -8, and -12 were decreased. As a result, PACAP treatment significantly decreased apoptotic cell numbers compared to untreated diabetic rats and attracted a number of unidentified immune cells into the retina through the inner limiting membrane. At the same time, electron microscopic analysis found altered synaptic structures in the diabetic retinas, in contrast to PACAP-treated diabetic groups, where more bipolar ribbon synapses appeared in the inner plexiform layer, indicating higher levels of synapse retention (Szabadfi et al., 2016). Giunta and his colleagues have described that MAPK transcript levels were modified in the retina of diabetic rats during the early stages, and the levels of PACAP, VIP and their receptors were all significantly downregulated compared to non-diabetic rats (Giunta et al., 2012). At the same time, PACAP treatment increased PAC1-R expression in the retina, sometimes even in cells where PAC1-Rs are normally not present.

TABLE 1. In vivo studies on the protective effects of PACAP in diabetic retinopathy (reference; study aim; findings).
- Giunta et al., 2012; expression changes of PACAP, VIP and their receptors in the retina of streptozotocin-induced diabetic rats; the expression of the peptides and their receptors decreased after induction of diabetes, and intravitreal PACAP38 injection restored the diabetic changes in Bcl-2 and p53 expression to non-diabetic levels.
- Szabadfi et al., 2012; highlights the protective effects of PACAP in diabetic retinopathy; PACAP ameliorated structural changes in DR, attenuated neuronal cell loss and increased the levels of the PAC1-receptor and tyrosine hydroxylase.
- D'Amico et al., 2015; the effects of PACAP in the hyperglycemic retina are mediated by modulation of HIF expression; in diabetic rats, HIF-1α and HIF-2α expression decreased after intraocular PACAP administration, while HIF-3α was downregulated in retinas of STZ-injected rats and increased after PACAP treatment.
- Szabadfi et al., 2016; analysis of the synaptic structure and proteins of PACAP-treated diabetic retinas after intravitreal PACAP administration; in the PACAP-treated diabetic retinas, more bipolar ribbon synapses were found intact in the inner plexiform layer than in DR animals, and the degeneration of bipolar and ganglion cells could be ameliorated by PACAP treatment.
- D'Amico et al., 2017; protective role of PACAP through IL-1β and VEGF expression in rat diabetic retinopathy; PACAP reduced IL-1β expression and downregulated VEGF and VEGFRs in STZ-treated animals.
- Maugeri et al., 2019; the effect of PACAP-38 against high-glucose damage is mediated by EGFR phosphorylation in the retina.

Unfortunately, there are no data available regarding VPAC1-R and VPAC2-R involvement in the PACAP response in the retina. However, VIP and PACAP have been shown to cooperate in functional studies using other disease models (Schratzberger et al., 1998;Ganea and Delgado, 2003;Abad et al., 2016).

Excitotoxic Retinal Injury and PACAP

Excitotoxic retinal injury in animal models mimics the changes associated with elevated intraocular pressure that causes glaucoma in humans.
Several studies have examined the neuroprotective effect of PACAP in excitotoxic retinal injuries. Under normal conditions, glutamate is a neurotransmitter in the retina; in high concentrations, however, it causes excessive stimulation of glutamate receptors and leads to excitotoxicity. In animal models of excitotoxic retinal injury, monosodium glutamate (MSG) treatment is used in vivo to model this pathological condition. MSG injection has caused severe degeneration in neonatal rat retinas (Tamas et al., 2004;Atlasz et al., 2009). If PACAP was injected unilaterally into the vitreous body of neonatal rat eyes prior to MSG treatment, the MSG-induced degeneration became less pronounced. PACAP was applied in two different concentrations (1 and 100 pmol) to examine the dose dependency of PACAP treatment in excitotoxic retinal injury. After MSG treatment, the thickness of the entire retina was reduced by more than half, and the reduction was due especially to the degeneration of the inner layers. Retinas of rats treated with 100 pmol PACAP showed significantly less damage than the retinas of animals treated with 1 pmol PACAP. These findings described how PACAP could significantly attenuate the degeneration of the retina and underlined the importance of the dose-dependent effects of PACAP (Tamas et al., 2004). In another study, two different forms of PACAP (PACAP1-27, PACAP1-38) and their antagonists (PACAP6-38, PACAP6-27) were tested in excitotoxic injury. The thickness of the retina was significantly reduced, much of the IPL disappeared, the GCL and the INL cells intermingled and the ONL cells were swollen. In this investigation, the PACAP1-38- and PACAP1-27-treated groups showed a retained retinal structure, and the INL and GCL remained well separated. The two isoforms of PACAP showed the same degree of neuroprotection after MSG treatment. The application of the two PACAP antagonists after MSG injection did not ameliorate the MSG-induced retinal degeneration and led to a pronounced degeneration of the rat retina. In these experiments, the degeneration of the inner retinal layers was ameliorated by PACAP treatment. Note that the PAC1-R distribution in the retina corresponds to the location of the protective effect, because it shows the highest expression in the INL and in the GCL, and the lowest in the ONL and OPL. Another study examined the molecular background of the signal transduction pathways underlying the neuroprotective effect of PACAP in MSG-induced retinal injury. The authors found, using rat models, that MSG inhibits the production of anti-apoptotic molecules (phospho-PKA, phospho-Bad, Bcl-xL and 14-3-3 proteins). PACAP treatment attenuates these effects by inducing the activation of the anti-apoptotic pathway through phosphorylation of PKA and Bad and by increasing the levels of Bcl-xL and 14-3-3 proteins (Racz et al., 2007). These results highlighted that PACAP has a retinoprotective effect in glutamate-induced injuries by reducing pro-apoptotic pathways while inducing anti-apoptotic signaling. Interestingly, an enriched environment surrounding the experimental animals has also been shown to provide a strong protective effect. A combination of enriched environment and PACAP treatment, however, did not further improve the protective effect, suggesting that these two treatments may utilize the same pathway for protection (Kiss et al., 2011).
Retinal Ischemic Conditions and PACAP

Retinal ischemia, as well as ischemia-reperfusion, causes inflammation which leads to injury progression, though inflammation usually helps in neuronal repair. These conditions contribute to excess ROS production, increase intracellular calcium levels and initiate mitochondrial damage. In addition, MAPKs, nuclear factor κB (NFκB) and hypoxia-inducible factor 1α (HIF1α) are also activated when ischemic conditions elicit inflammation (Rayner et al., 2006;Wang et al., 2014;Kovacs et al., 2019). In the bilateral common carotid artery occlusion (BCCAO) model, PACAP activated one of the most important cytoprotective pathways, PI3K-Akt, and suppressed the p38 MAPK and JNK pathways, just like PARP inhibitors (Mester et al., 2009). Furthermore, a neurotrophic agent with a similar mode of action, ciliary neurotrophic factor (CNTF), a member of the IL-6 family (Wen et al., 2012), has also been tested in the form of intravitreal injection in preclinical studies. Using 12 animal models from 4 different species, researchers described a strong neuroprotective effect on photoreceptors and ganglion cells in the retina (Tao et al., 2002;Pease et al., 2009;Flachsbarth et al., 2014;Lipinski et al., 2015). The effect of PACAP fragments has also been tested extensively in this model (Werling et al., 2014). The rationale for this study was that the bioavailability and fast degradation of PACAP limit its therapeutic use, and therefore scientific attention has been drawn to shorter fragments, especially those in which the C-terminus is truncated (Bourgault et al., 2011;Dejda et al., 2011). Therefore, it was necessary to test whether shorter PACAP fragments (4-13, 4-22, 6-10, 6-15, 11-15, and 20-31) have any effect on retinal lesions caused by chronic retinal hypoperfusion. Since the N-terminal fragments show a high similarity to the structure of VIP, and the 4-13 domain shows high selectivity for PAC1-R, the prospect of creating a short and effective peptide fragment with a neuroprotective potential similar to PACAP seemed very promising. However, the authors came to the conclusion that the natural form of the peptide, PACAP1-38, is the most effective in retinal ischemia, and the 38-amino-acid form of the peptide cannot be replaced by another fragment or another member of the peptide family (Werling et al., 2014). It has also been shown that PACAP mediates functional recovery after 14 days of intraocular treatment (Danyadi et al., 2014), probably through downregulation of VEGF production and glutamate release (D'Alessandro et al., 2014).

COMMON, SYNERGISTIC AND DIVERGING PATHWAYS OF PACAP SIGNALING TO ACHIEVE FUNCTIONAL IMPROVEMENT

In the next few paragraphs, we aim to summarize the pathways activated, directly or indirectly, by PACAP receptors (Figure 1). Unfortunately, most studies do not provide evidence as to which PACAP receptors are involved in the processes described below. Nevertheless, all the available data point to a critical function of PACAP in neuroprotection.

Downregulation of Vascular Endothelial Growth Factor (VEGF)

Vascular endothelial growth factor (VEGF), a dimeric glycoprotein, functions as a mitogen by stimulating the proliferation and migration of endothelial cells. It is also responsible for the formation of new blood vessels (Ferrari and Scagliotti, 1996). The receptors of this signal molecule (VEGF-receptor 1, VEGF-R1, and VEGF-receptor 2, VEGF-R2) have tyrosine kinase domains and contribute to angiogenesis (Yancopoulos et al., 2000;Rahimi, 2006).
Among retinal cell types, mainly astrocytes, Müller glia cells, the retinal pigment epithelium (RPE) and pericytes produce VEGF (Chalam et al., 2014). The VEGF expression level is increased under low oxygen concentrations through the induction of hypoxia-inducible factor 1 (HIF-1) expression. Hypoxia-inducible factors (HIFs) are modulators in hypoxia and cause endothelial cell transmigration across the RPE in the eye. These endothelial cells contribute to new vessel formation under VEGF control (Wang et al., 1995;Kaur et al., 2008;Skeie and Mullins, 2009). Elevated VEGF production leads to angiogenesis in order to supply tissues under hypoxic conditions (Kim et al., 2015). However, the newly generated blood vessels scatter light and thus, instead of contributing to better vision, they actually deteriorate visual acuity. Studies have described diverse effects of PACAP on VEGF expression levels. Both PACAP and VIP are able to modulate HIF and VEGF expression during diabetic macular edema. VEGF expression is increased during hyperglycemic insult compared to control conditions. This effect can be ameliorated by PACAP or VIP treatment, which could reduce the expression of VEGF and its receptors. Conversely, in another study, unrelated to diabetes, intravitreal treatment with PACAP increased VEGF expression levels in rats after bilateral common carotid artery occlusion. Although the results appear contradictory at first, at the biological level this finding further demonstrates how profoundly protective PACAP is. In the extreme hypoxia of carotid artery occlusion, the only survival strategy is more capillaries, which PACAP can also provide by an adaptive switch in its signaling bias. Nevertheless, the anti-VEGF effects of PACAP are clearly beneficial in patients suffering from DR conditions (Gabriel, 2013).

Downregulation of c-Jun and p38 Kinases

c-Jun N-terminal protein kinase (JNK) and p38 kinase are members of the MAPK superfamily, and they regulate apoptotic signaling pathways in cells (Estus et al., 1994;Ham et al., 1995;Mesner et al., 1995). JNK can have both pro- and anti-apoptotic effects (Ham et al., 1995;Xia et al., 1995;Lenczowski et al., 1997). In experiments using sodium arsenite (NaAsO2) to trigger neuronal apoptosis, both p38 kinase and JNK3 were upregulated and c-Jun phosphorylation was induced. The results showed that p38 kinase and JNK inhibitors attenuated apoptosis in cortical neurons and established the differences between JNK isoforms, which contributed differently to the apoptotic processes (Namgung and Xia, 2000). It has also been described that intravitreal PACAP treatment decreased JNK and p38 activation and increased the activation of ERK1/2 and AKT in hypoperfused rat retinas. In MSG-induced retinal degeneration, PACAP treatment attenuated the activation of JNK and caspase 3 and increased the level of phospho-Bad (Racz et al., 2006). Likewise, the same group demonstrated that PACAP treatment decreased the expression and activation of pro-apoptotic p38 in diabetic rat retinas.

Synergism With Other Peptidergic Mechanisms

The therapeutic potential of different neuropeptides has been confirmed in numerous animal models of human diseases. These substances deserve prominent attention in the development of peptide-based therapeutic strategies for vision-threatening diseases. The effectiveness of the neuropeptide somatostatin (SST) has been described in various pathological conditions of the retina.
SST is an important neuromodulator, and its immunoreactivity occurs mainly in the GABAergic amacrine cells of the retina (Feigenspan and Bormann, 1994;van Hagen et al., 2000). SST levels are downregulated at the early stage of DR (Carrasco et al., 2007). Topical administration of SST and its analogs has a preventive effect against retinal neurodegeneration in STZ-induced diabetes. It has been established that SST treatment inhibits extracellular glutamate accumulation, glial activation and electroretinographic (ERG) abnormalities, and that it modulates the pro-apoptotic/survival signaling pathways in experimental diabetes (Hernandez et al., 2013). Octreotide (OCTR) is a synthetic SST analog which, for example, in an ischemia/reperfusion injury study reduced cell loss, retinal thickness changes and ROS formation and inhibited NF-κB p65 activation. These findings demonstrated that OCTR application has a neuroprotective and antioxidant effect in ischemic injury of the retina (Wang et al., 2015). In another investigation, OCTR reduced hypoxia-induced activation of STAT3 and HIF1 levels in retinal explants (Mei et al., 2012). OCTR and another SST analog (Woc4D) decreased neovascularization in the mouse model of oxygen-induced retinopathy (Higgins et al., 2002). A metabolomic analysis revealed the roles of PACAP, substance P (SP) and OCTR in ex vivo mouse models of retinal ischemia. These ex vivo results show a synergistic action of the above-mentioned peptides. All treatments reduce VEGF overexpression, cell death and glutamate release, and they modulate pro-survival pathways by restoring IP3 signaling, cAMP levels and the PIP2/PIP3 ratio in ischemia-induced retinal damage. It has also been demonstrated, in ischemia-related oxidative stress, that PACAP and SP treatments help to cope with this condition, and OCTR also contributes to the preventive effect in pathological processes (D'Alessandro et al., 2014). Takuma et al. have investigated the effect of an enriched environment on memory impairments in PACAP-deficient mice. This environment ameliorated the memory impairments in knockout mice after 4 weeks, and its beneficial effects were also observed when the mice were returned to a standard environment after 2 weeks. The results showed that the levels of brain-derived neurotrophic factor (BDNF), phospho-ERK, phospho-CaMKII and the N-methyl-D-aspartate receptor subtype 2B (NR2B) in the hippocampus increased in an enriched environment, and these factors are responsible for the ameliorating effect of this environment on memory dysfunction. In PACAP−/− mice, however, these increased expression levels disappeared after 2 weeks when the animals were returned to standard housing, so in the absence of PACAP the long-lasting ameliorating effects of the enriched environment could not be verified (Takuma et al., 2014). An in vitro examination by Ogata and his colleagues compared the morphological effects of PACAP and BDNF on primary cultures of hippocampal neurons. Both PACAP and BDNF increased neurite length and numbers to a similar degree, while PACAP increased only the axon length, but not the branching. Interestingly, the use of the PACAP6-38 antagonist blocked both the PACAP- and BDNF-induced increases in axon length, suggesting that these two peptides may act through the same intracellular signal transduction machinery and that PACAP antagonists can interfere effectively with BDNF signaling (Ogata et al., 2015).

Divergence in PACAP Receptor Signaling: How Are Immune Elements Recruited to Damaged Tissue Sites?

It has been demonstrated that immune cells express functional PACAP receptors.
However, PAC1-R plays only a minor role in the immune response, whereas VPAC1-R and VPAC2-R signaling evoke diverging effects. The former is constitutively expressed on macrophages, while the latter is inducible and is particularly strongly affected by LPS (Abad et al., 2016). While VPAC1-R is thought to act mainly as an inhibitor of the immune response, VPAC2-R is able to accelerate inflammatory processes by initiating the production of several cytokines, most prominently IL-6 and IL-10. Additionally, D'Amico et al. (2017) have provided evidence that both IL-1β and VEGF levels are modified in diabetic rat retinas after PACAP administration. In peripheral organs PACAP also activates T-lymphocytes. In PACAP KO mice, however, PACAP treatment failed to reduce neutrophil infiltration into organs, indicating that other, indirect downstream PACAP signaling is also essential in this system (Martinez et al., 2005). VPAC1 and VPAC2 receptor mRNA levels, but not PAC1-R, were transiently induced in retinas 1 week following diabetes induction (Giunta et al., 2012). In the same diabetic condition, immune cells were attracted to the retina through the inner limiting membrane, which resulted in a strengthening of IL-6 but not tumor-necrosis factor (TNF) α immunoreactivity in retinal ganglion cells. The reason for this difference is currently unknown, and research is needed to clarify the underlying signaling routes. It is even more interesting that TNFα is dramatically increased in glaucoma and ischemia (Martinez et al., 2005). Therefore, it seems evident that not all microcircuitry-related disorders share identical immune cell recruitment pathways. This immune response may enhance the degeneration of the damaged cells. That, however, may be beneficial, since when a protective signal like PACAP appears, it may hasten the clearance of the dying elements, help to rearrange the neural connections and maintain the integrity of the remaining cells, restoring function as quickly as possible.

DISCUSSION

Our review highlights the importance of PACAP and some other neuropeptides in retinal degenerative diseases of metabolic origin. Neuropeptides, with their wide range of signaling potential, could modulate the pathological pathways of retinal diseases through converging signal pathways. The question arises why these potentials are neglected in drug development and subsequent clinical trials. One of the difficulties of using natural peptides as protective agents is their relatively short half-life (in some cases shorter than 1 min). The solution to this problem is to modify these peptides at their N and/or C termini in order to prevent degradation (acetylation, cyclization, N- and/or C-terminal modification, PEGylation, D-amino acid substitution, etc.). In the case of PACAP, the half-life can exceed 4 h after some modifications (Mathur et al., 2016); a short numerical illustration of what such an extension means in practice is given below. Another potential problem of using peptides as therapeutic agents is their limited passage through the blood-brain barrier (Banks et al., 1993; Banks and Kastin, 1996). In the case of the retina there is no need for systemic administration, since the peptides can be injected into the vitreous body and must only pass through the inner limiting membrane of the retina. Indeed, it has been shown for PACAP that it reaches the inner retinal layers after intravitreal injection (Werling et al., 2017). At the same time, one of the mobilized downstream signals in the pathogenesis, VEGF, is intensively targeted by different anti-VEGF therapies (Gabriel, 2013).
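As a quick numerical illustration of the half-life argument above, the following sketch assumes simple first-order (exponential) elimination; the kinetic model and the 30 min time point are illustrative assumptions, not data from the cited studies.

```python
# Fraction of an injected peptide remaining after time t, assuming simple
# first-order (exponential) elimination -- an assumption for illustration
# only; real ocular pharmacokinetics are more complex.

def fraction_remaining(t_min: float, half_life_min: float) -> float:
    """Return the fraction of peptide left after t_min minutes."""
    return 0.5 ** (t_min / half_life_min)

# Native peptide with a ~1 min half-life vs. a modified analog with ~4 h:
for label, t_half in [("native (t1/2 = 1 min)", 1.0),
                      ("modified (t1/2 = 240 min)", 240.0)]:
    print(label, f"after 30 min: {fraction_remaining(30.0, t_half):.2e}")
# native:   ~9.3e-10  (essentially gone)
# modified: ~9.2e-01  (about 92% still present)
```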
While anti-VEGF therapies are expensive, the synthesis and modification of peptides like PACAP are cost-effective, so they may provide alternatives to the treatments available today in various retinal conditions, particularly in the case of DR. It would also be reasonable to consider combinations of modified neuropeptides, which could effectively counteract pathological retinal metabolic conditions. As discussed above, there are a number of candidates to be included in such a mixture. In order to effectively protect every retinal cell type and layer, we suggest trying the combination of modified BDNF, CNTF, OCTR, and PACAP. Together, these substances satisfy the following criteria: (i) under normal conditions their native form is present in the retina in low concentration; (ii) each retinal cell type has a receptor for at least one of the four peptides; (iii) the signal transduction pathways behind the retinal receptors of these substances do not counteract or cross each other's action; and (iv) none of them causes unwanted side effects even when given in higher concentrations. Considering that anti-VEGF drugs cost over 500 million pounds in Great Britain alone in 2015 (Hollingworth et al., 2017), alternatives are definitely needed, especially in low- and medium-income countries (Shanmugam, 2014). Clinical trials with combinations of the above substances could be envisioned based on the results achieved in animal models in research laboratories.

AUTHOR CONTRIBUTIONS

All authors read and approved the final manuscript. RG wrote the manuscript and supervised the manuscript production. EP wrote the manuscript. VD gave expert advice and provided critical feedback.
A Novel Method of CD31-Combined ABO Carbohydrate Antigen Microarray Predicts Acute Antibody-Mediated Rejection in ABO-Incompatible Kidney Transplantation

Graphical Abstract

Isohemagglutinin assays employing red blood cells (RBCs) are the most common assays used to measure antibody titer in ABO-incompatible kidney transplantation (ABOi KTx). However, the ABO antigens expressed on RBCs are not identical to those of the kidney, and antibody titers do not always correlate with clinical outcome. We previously reported that CD31 was the main protein linked to ABO antigens on kidney endothelial cells (KECs), which differs from the proteins on RBCs. We developed a new method to measure antibody titer using a microarray of recombinant CD31 (rCD31) linked to ABO antigens (CD31-ABO microarray). Mass spectrometry analysis suggested that rCD31 and native CD31 purified from human kidney carry similar ABO glycans. To confirm the clinical utility of the CD31-ABO microarray, a total of 252 plasma samples from volunteers, hemodialysis patients, and transplant recipients were examined. In transplant recipients, any initial IgG or IgM antibody intensity >30,000 against the donor blood type in the CD31-ABO microarray showed higher sensitivity, specificity, positive predictive value, and negative predictive value for AABMR compared to isohemagglutinin assays. Using the CD31-ABO microarray to determine antibody titer specifically against the ABO antigens expressed on KECs may therefore allow a more precise prediction of AABMR following ABOi KTx.

INTRODUCTION

In most countries, a paired donation program to circumvent the immunological challenge of ABO incompatibility is precluded by law. Therefore, waiting for a deceased ABO-compatible donor under long-term dialysis therapy is a valuable option for a kidney transplant candidate with an ABO-incompatible (ABOi) living donor. Recent cohort studies have shown no significant difference in patient and graft survival between ABOi kidney transplantation (KTx) and ABO-compatible (ABOc) KTx (1)(2)(3)(4)(5). However, recent meta-analyses have shown lower patient and graft survival in ABOi KTx than in ABOc KTx (6,7). In ABOi KTx, over-immunosuppression, which can lead to life-threatening infections, may cause the lower patient survival (6,7). In addition, acute antibody-mediated rejection (AABMR) due to anti-A or -B antibodies (Abs) contributes to the lower graft survival (6,7). Ab titers against the donor blood group antigen may be a predictor of AABMR following ABOi KTx, and desensitization therapy tailored to the Ab titer may avoid over-immunosuppression (8). However, an acceptable Ab titer against the donor blood group antigen that prevents AABMR has not been defined in ABOi KTx. In addition, the desensitization therapy protocol varies from institution to institution, and the method used to measure Ab titer is not unified. Technological advances in HLA laboratory testing have undoubtedly improved the sensitivity and specificity of HLA Ab assessment. Multiple methodologies, such as the complement-dependent cytotoxicity test, flow cytometry, and Luminex-based technology, are available for HLA Ab testing. The understanding of complement-fixing (C1q, C3d, etc.) Abs and of IgG subclasses of HLA Abs has become widespread. In contrast, Ab testing against ABO antigens in ABOi organ transplantation is still primitive. Isohemagglutinin assays employing red blood cells (RBCs) are the most common assays used to measure Ab titer in ABOi KTx. However, ABO blood group antigens expressed on RBCs are not identical to those of the kidney due to the different proteins linked to the ABO carbohydrate antigens (9).
Ab epitopes against ABO blood group antigens may differ between RBCs and endothelial cells (10). In some cases, Ab titers do not correlate with clinical outcome: AABMR does not occur in some patients with high Ab titers, and vice versa (11)(12)(13). A method to determine Ab titer specifically against the ABO blood group antigens expressed on kidney endothelial cells (KECs) is necessary to prevent over-immunosuppression and to precisely predict AABMR following ABOi KTx. Pecam1 (CD31) is the most abundant protein linked to ABO blood group antigens on KECs, in contrast to Band3, which is mainly expressed on RBCs (9). Here, a new method was developed to measure Ab titer using a microarray of CD31 linked to ABO carbohydrate antigens (CD31-ABO microarray), which mimics the ABO blood group antigens on KECs. This novel method may precisely predict AABMR following ABOi KTx.

Sample and Data Collection

A total of 252 plasma samples were collected. Volunteers (n = 120) donated blood samples at the Japan Red Cross blood center. Approval for this study was obtained from the Japanese Red Cross Institutional Review Board (authorization number 28J0001). Samples were donated without personal identifiers. The only available demographic factor for these samples was the ABO blood type. The other plasma samples were collected from patients undergoing hemodialysis (n = 80) and from recipients (n = 52) who received ABOi KTx at the Niigata University Medical and Dental Hospital, Nagoya Daini Red Cross Hospital, and Hokkaido University Hospital, Japan. All participants in this study were Japanese. All transplantations were living-donor KTx. Clinical and laboratory information was extracted from electronic databases and patients' medical records. Transplant recipients were divided into two groups: patients without AABMR (-) and with AABMR (+) due to anti-A or -B Abs after ABOi KTx. The study was performed in accordance with the guidelines of the Declaration of Helsinki, subsequent to approval by the hospital's Institutional Ethics Committee (authorization number 2018-0311).

Anti-ABO Ab Isohemagglutinin Titers

Titration of anti-A and anti-B Abs was performed using the test tube method, as described in detail in the Supplementary Methods.

Immunosuppression for ABOi KTx

Immunosuppression therapy was performed according to the protocol at each institution, as described in detail in the Supplementary Methods. Plasma exchange or double-filtration plasmapheresis was performed before ABOi KTx to decrease Ab titers. Splenectomy was performed on the day of ABOi KTx before 2003, and rituximab was used instead of splenectomy after 2004. Calcineurin inhibitors, methylprednisolone, mycophenolate mofetil, and basiliximab were given for induction therapy, with the exception of a few cases.

AABMR Diagnosis

There were no recipients with preformed donor-specific anti-human leukocyte antigen (HLA) Abs in this cohort. Whenever rejection was clinically suspected, an episode biopsy was performed. The rejection diagnosis was made by the pathologist at each institution. AABMR due to anti-A or -B Abs was diagnosed based on the pathological findings of ABMR (Banff19) when anti-donor HLA Abs were not detected at the time of rejection.

Preparation of Recombinant CD31 Containing ABO Carbohydrate Antigens

Recombinant CD31 proteins (rCD31) containing ABO carbohydrate antigens were produced in glycogene-modified human embryonic kidney (HEK293) cells.
H-type glycan-expressing cells were established by overexpression of α1,2-fucosyltransferase (FUT1) in HEK293 cells; the resulting cells were designated HEK293H. A-type glycan- and B-type glycan-expressing cells were established by overexpression of α1,3-N-acetylgalactosaminyltransferase (GT-A) and α1,3-galactosyltransferase (GT-B) in HEK293H, respectively, and were designated HEK293A and HEK293B. The cDNA encoding the extracellular domain of CD31 was amplified by polymerase chain reaction using the following primers: forward, 5ʹ-aagcttcaggATGCAGCCGAGGTGGGCCCA-3ʹ, including the HindIII site, and reverse, 5ʹ-gcggccgcTTCTTCCATGGGGCAAGAATGA-3ʹ, including the NotI site, with cDNA derived from human umbilical vein endothelial cells as a template. An approximately 1.8 kb DNA fragment was amplified and subcloned into the pCRII-blunt vector (Life Technologies). After confirmation of the correct sequence using a Genetic Analyzer 3130xl (Applied Biosystems), the HindIII-NotI fragment was inserted into the pcDNA3.1n-F expression vector, which had been modified from pcDNA3.1n(+) (Life Technologies) by introducing the sequence encoding DYKDDDDK and a termination codon. The resulting plasmid, designated pcDNA3.1n-CD31-F, was transfected into HEK293H, HEK293A, and HEK293B cells using Lipofectamine LTX (Life Technologies) to produce rCD31 with a FLAG tag at the C-terminus in the culture medium. After 48-72 h of incubation at 37°C, each medium was collected and rCD31 was purified using an anti-FLAG M2 agarose affinity gel (Sigma-Aldrich). The culture medium (300 ml) was mixed with a 500 μL suspension of anti-FLAG M2 agarose affinity gel and rotated slowly at 4°C for several hours. After centrifugation, the gel was washed 2-5× with PBS containing 0.01% Tween-20, and rCD31 was eluted from the affinity gel using a FLAG peptide (Sigma-Aldrich). The protein concentration of purified rCD31 was determined using a NanoDrop LITE spectrophotometer (Thermo Scientific), and the preparations were designated H-CD31, A-CD31, and B-CD31, respectively (Figure 1A).

Preparation of CD31 Proteins From Human Kidneys

Kidney tissues were obtained, with informed consent, from patients who underwent surgical nephrectomy due to renal carcinoma at the Niigata University Medical and Dental Hospital. Proteins were extracted from normal kidney cortices of patients with different ABO blood types, and CD31 proteins were purified as described in detail in the Supplementary Materials and Methods. Protein extracts were incubated with Dynabeads protein G (VERITAS) pre-bound to an anti-CD31 Ab (Santa Cruz Biotechnology). The Dynabeads were thoroughly washed with lysis solution and eluted with sodium dodecyl sulfate (SDS) sample buffer. The eluates were separated by SDS-polyacrylamide gel electrophoresis (PAGE). SDS-PAGE gel pieces containing CD31 protein with a molecular mass of approximately 130 kDa were excised for mass spectrometry (MS).

Mass Spectrometry Analyses of CD31 Glycopeptides

Identification of N-glycosylated Asn sites and site-specific analysis of the glycan compositions and structures of both rCD31 and the CD31 proteins from human kidneys were conducted using the IGOT (14) and Glyco-RIDGE (15) methods, respectively. CD31 proteins were digested with Lysyl endopeptidase and trypsin. The resulting glycopeptides were either deglycosylated or enzymatically treated to remove sialic acids. The deglycosylated or desialylated glycopeptides were analyzed using a nano-flow liquid chromatography-coupled Orbitrap Fusion Tribrid mass spectrometer (Thermo Scientific).
Desialylated glycopeptides were analyzed for their site-specific glycan compositions and partial structures using the Glyco-RIDGE method. For further experimental details, refer to the Supplementary Methods.

Microarray of CD31 Linked to ABO Carbohydrate Antigens (CD31-ABO Microarray)

The CD31-ABO microarray was produced as described previously (details in the Supplementary Methods) (16). rCD31 proteins containing ABO carbohydrate antigens (H-CD31, A-CD31, and B-CD31) were dissolved at a concentration of 0.1 mg/ml in a spotting solution (Matsunami Glass) and spotted onto epoxysilane-coated glass slides (Schott) in triplicate using a non-contact microarray printing robot (Microsys4000, Genomic Solutions). The glass slides were incubated at 25°C overnight to allow immobilization, washed with probing buffer, and incubated with the blocking reagent at 20°C for 1 h. Finally, the glass slides were washed with TBS containing 0.02% NaN3 and stored at 4°C until use. Human plasma (80 µL/well) was diluted 100-fold with probing buffer and incubated on the CD31-ABO microarray at 20°C overnight. After washing twice with 100 µL/well probing buffer, 1 μg/ml Cy3-conjugated goat anti-human Fc (Jackson ImmunoResearch: 109-165-098) or Cy3-conjugated goat anti-human IgM (Jackson ImmunoResearch: 109-165-043) was added and incubated at 20°C for 1 h. Fluorescence images were acquired using an evanescent-field-activated fluorescence scanner, Bio-REX Scan200 (Rexxam). The fluorescence signal of each spot was quantified using Array Pro Analyzer version 4.5 (Media Cybernetics), and background values were subtracted. Background values were obtained from an area without immobilized samples (Figure 1B).

Anti-A and B Ab Levels

Statistical Analysis

Continuous variables are expressed as the mean ± standard deviation, and categorical variables are expressed as N and percentages. A Mann-Whitney U-test or Student's t-test was used to compare two groups of continuous variables, and a chi-square test was used to compare categorical data. The diagnostic potential of the CD31-ABO microarray was determined by calculating the receiver operating characteristic (ROC) curve, plotted to evaluate the sensitivity and specificity for predicting AABMR after ABOi KTx. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were used to investigate its accuracy as a diagnostic tool for AABMR after ABOi KTx.

ABO Glycan Analysis of rCD31 and Human Kidney CD31 by MS

We successfully produced rCD31 containing ABH glycans in HEK293 cells in vitro and used it for the microarray. We needed to know whether the rCD31 in this microarray carried ABH glycans similar to those of native human kidney. Using the Glyco-RIDGE method, the glycan compositions of two core glycopeptides (VLENSTK, including Asn-453, and EGKPFYQMTSNATQAFWTK, including Asn-551) derived from rCD31 proteins purified from the culture media of HEK293H, HEK293A, and HEK293B cells were assigned and compared (Supplementary Figures S1, S2). The glycan composition of each signal is shown in Supplementary Figure S1 as XYZ, corresponding to the numbers of Hex, HexNAc, and dHex (fucose) on the tri-mannosyl core (Man3GlcNAc2 = 000). Compositions containing multiple fucoses are shown in blue. These glycopeptides are presumed to carry blood type glycans, since at least one fucose is attached at the non-reducing terminus. Compositions characteristic of, or increased in, blood type A or B are marked with triangles in each spectrum.
In Supplementary Figure S2, the generation of blood type glycans is clearly suggested. For example, composition 232, significant in type H, appears to shift to 242 in type A and to 332 in type B, suggesting the generation of type A and type B antigens, respectively. CD31 prepared by immunoprecipitation from the normal parts of human kidney extracts, followed by SDS-PAGE, was analyzed in the same way as rCD31 (Supplementary Figure S3 and Supplementary Results). Taken together, the rCD31 used for the CD31-ABO microarray carried the glycopeptide (VLENSTK) conjugated to the blood group H, A, and B glycans, and CD31 derived from human kidney contained the same glycopeptide, which was strongly suggested to carry blood group H, A, and B glycans.

Anti-A and B Abs in Volunteer and Hemodialysis Populations

The Ab levels measured using the CD31-ABO microarray are shown in Tables 1, 2. The microarrays specifically detected anti-A and anti-B Abs. Anti-A and -B Ab levels were not significantly different between the volunteer and hemodialysis populations. Both anti-A and -B IgG Ab levels were significantly higher in the type O population than in the type B and type A populations, respectively (p < 0.01). However, anti-A and -B IgM Ab levels were not significantly different between the type O and type B, and the type O and type A populations, respectively. We analyzed the same samples using the isohemagglutinin method (Tables 1, 2), which showed a similar trend to the CD31-ABO microarray. Anti-A and -B Abs were compared between the two methods. Ab titers determined by the isohemagglutinin method and Ab levels determined by the CD31-ABO microarray were roughly correlated in the volunteer and hemodialysis populations (Figures 2, 3). However, Ab levels in the CD31-ABO microarray varied even among samples with the same isohemagglutinin titer. Table 3 shows the patient characteristics of the two groups, divided by the occurrence of AABMR after ABOi KTx. There were no significant differences, except for Ab removal therapy before ABOi KTx. Ab titers against the donor blood type measured using the isohemagglutinin method before desensitization therapy and on the day of ABOi KTx were not significantly different between the two groups (data not shown). The median post-operative day at diagnosis of AABMR was 5 (range: 0-19).

Prediction of AABMR After ABOi KTx Using Anti-A and B Abs Measured by the CD31-ABO Microarray

The area under the receiver operating characteristic (ROC) curve (AUC) indicated significant prognostic power for AABMR after ABOi KTx using the initial Ab levels measured by the CD31-ABO microarray, except for anti-B IgG Ab (Figures 4A,B). The prognostic power of the CD31-ABO array was better than that of the isohemagglutinin assay (Figures 4C,D). Table 4 compares the prognostic power for AABMR at several cut-offs, suggesting that the CD31-ABO microarray had higher prognostic power for AABMR than the isohemagglutinin method. Any initial IgG or IgM Ab level against the donor blood type >30,000 in the CD31-ABO microarray showed high sensitivity, specificity, PPV, and NPV (a minimal sketch of how such cutoff-based metrics and the ROC AUC can be computed is given below). After excluding the patients in whom rituximab was not used, these significant results were also seen in the rituximab-based protocol patients (Table 4). To investigate whether Ab levels in the CD31-ABO microarray predict AABMR after ABOi KTx more accurately than the isohemagglutinin method, the initial anti-A and -B Abs of the samples obtained before desensitization therapy were compared (upper Figures 5, 6).
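As promised above, here is a minimal Python sketch of how the cutoff-based diagnostic metrics (sensitivity, specificity, PPV, NPV) and the ROC AUC used in this section can be computed. The antibody intensities and outcomes in the demo are hypothetical placeholders, not data from this study.

```python
import numpy as np

def cutoff_metrics(intensity, aabmr, cutoff=30000):
    """Sensitivity/specificity/PPV/NPV for 'intensity > cutoff' as an AABMR predictor."""
    intensity = np.asarray(intensity)
    aabmr = np.asarray(aabmr, dtype=bool)
    pred = intensity > cutoff
    tp = np.sum(pred & aabmr); fp = np.sum(pred & ~aabmr)
    fn = np.sum(~pred & aabmr); tn = np.sum(~pred & ~aabmr)
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp), "NPV": tn / (tn + fn)}

def roc_auc(intensity, aabmr):
    """AUC via the Mann-Whitney U statistic (ties counted with half weight)."""
    aabmr = np.asarray(aabmr, dtype=bool)
    pos = np.asarray(intensity)[aabmr]
    neg = np.asarray(intensity)[~aabmr]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical example: 6 recipients with AABMR and 10 without.
ab = [65000, 48000, 31000, 29000, 54000, 41000, 9000, 12000, 8000,
      15000, 33000, 7000, 11000, 22000, 5000, 14000]
outcome = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(cutoff_metrics(ab, outcome))   # sensitivity 5/6, specificity 9/10, ...
print(round(roc_auc(ab, outcome), 3))
```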
In A-incompatible KTx, anti-A IgG Ab levels by microarray were significantly higher in the AABMR (+) group than in the AABMR (-) group (median: 54721 vs. 10211, p < 0.001). Ten of the 12 patients with AABMR (83.3%) had anti-A IgG Ab levels >30,000 in the CD31-ABO microarray; in contrast, only 1 of the 17 patients without AABMR (5.9%) had anti-A IgG Ab levels >30,000 (upper Figure 5A). Anti-A IgM Ab levels in the CD31-ABO microarray were significantly higher in the AABMR (+) group than in the AABMR (-) group (median: 14277.5 vs. 5887, p = 0.03). No patient in the AABMR (-) group had anti-A IgM Ab levels >30,000; in contrast, 4 of the 12 patients with AABMR had anti-A IgM Ab levels >30,000 in the microarray (upper Figure 5B). Eight of the 12 patients with AABMR had anti-A IgM Ab levels <30,000 in the CD31-ABO microarray. However, six of these samples had anti-A IgG Ab levels >30,000 by microarray; these anti-A IgG Abs probably induced AABMR in these patients (yellow circles in upper Figure 5B). Taken together, 10 of the 12 patients with AABMR (83.3%) in A-incompatible KTx had initial anti-A IgG or IgM Ab levels >30,000, as shown by the CD31-ABO microarray. When we analyzed the predictive value for AABMR in the rituximab-based protocol patients, 6 of 6 patients (100%) had anti-A IgG or IgM Ab levels >30,000 in A-incompatible KTx (lower Figures 5A,B). Figure 6 shows the anti-B Abs in patients undergoing B-incompatible KTx. Anti-B IgG Ab levels in the CD31-ABO microarray were significantly higher in the AABMR (+) group than in the AABMR (-) group (median: 16378 vs. 1970, p = 0.047). Anti-B IgM Ab levels in the CD31-ABO microarray were also significantly higher in the AABMR (+) group than in the AABMR (-) group (median: 18058 vs. 3481, p = 0.021). No patient in the AABMR (-) group had anti-B IgG or IgM Ab levels >30,000 in the CD31-ABO microarray (upper Figures 6A,B). Three of the 5 patients with AABMR (60.0%) in B-incompatible KTx had initial anti-B IgG or IgM Ab levels >30,000, as shown by the CD31-ABO microarray. When we analyzed the predictive value for AABMR in the rituximab-based protocol patients, 3 of 4 patients (75%) had anti-B IgG or IgM Ab levels >30,000 in B-incompatible KTx (lower Figures 6A,B). Samples obtained after ABOi KTx were also investigated (Supplementary Figures S7, S8). The timing of plasma sample collection differed in each case. In patients without AABMR, the samples were collected within 1 month after ABOi KTx. In patients with AABMR, plasma samples were collected when AABMR was clinically suspected (Supplementary Figure S8). However, there were no significant differences between the AABMR (+) and AABMR (-) groups in the levels of anti-A and -B Abs examined by either the isohemagglutinin or the CD31-ABO microarray method. Supplementary Figure S9 shows how the Ab levels changed before and after ABOi KTx. Ab titers determined by the isohemagglutinin method before desensitization were not significantly different between AABMR (+) and AABMR (-). However, the CD31-ABO microarray showed that the Ab levels were significantly higher in AABMR (+) than in AABMR (-) before desensitization therapy. As described above, Ab titers after ABOi KTx were not significantly different between AABMR (+) and AABMR (-) with either of the two methods.

DISCUSSION

To evaluate the risk of AABMR in patients undergoing ABOi transplants, anti-A or -B Ab titers are required.
There are several methods to measure anti-A and -B Ab titers, such as the tube test assay (17), the column agglutination technique (18), flow cytometry (19,20), and the solid-phase red cell adherence technique (21). In these methods, the reaction of Abs against RBCs is used to determine anti-A or -B Ab titers. ABO blood group antigens are expressed on both RBCs and KECs. RBCs are used as targets for investigating anti-A or -B Ab titers because of their convenience and availability before ABOi KTx. Initial anti-A or -B Ab titers against RBCs are a good predictor of AABMR in ABOi KTx (22), suggesting that the ABO blood group antigens are similar between RBCs and KECs. However, CD31 is the major protein linked to ABO carbohydrate antigens in human KECs and is different from the proteins expressed on RBCs (9). Ab epitopes against ABO blood group antigens are thought to differ between RBCs and endothelial cells (10). An Ab-removal-free protocol has been reported in ABOi KTx when anti-A or -B Ab titers are below 64-fold, resulting in no AABMR (23). In contrast, anti-A or -B Ab-induced AABMR and thrombotic microangiopathy in ABOi KTx remain critical issues (24), and heavier immunosuppression is required. To clarify the risk of AABMR and avoid infectious events due to over-immunosuppression after ABOi KTx, the real reaction of anti-A or -B Abs against the ABO blood group antigens on KECs needs to be known. In the present study, a method to evaluate anti-A and -B Abs that react with the ABO blood group antigens expressed on KECs was developed. rCD31 proteins containing ABO carbohydrate antigens were used to form the CD31-ABO microarray. The ABO glycans of the rCD31 used for the CD31-ABO microarray and of CD31 derived from normal human kidney were compared by MS analysis, which suggested that the CD31-ABO microarray mimics the ABO blood group antigens on human KECs. Anti-A and -B Ab titers were roughly correlated between the isohemagglutinin and CD31-ABO microarray methods. However, there was great variability in the anti-A and -B Ab levels in the CD31-ABO microarray among patients who had the same Ab titer by the isohemagglutinin method. The desensitization therapy contents were not significantly different between the two groups of patients with and without AABMR, except for Ab removal. Although the isohemagglutinin Ab titers using RBCs were not significantly different between the two groups, the patients who suffered from AABMR had significantly higher Ab levels in the CD31-ABO microarray. The sensitivity of the CD31-ABO microarray for predicting AABMR was not high in B-incompatible KTx when the cut-off Ab level was >30,000. However, no patient in B-incompatible KTx without AABMR had anti-B Ab levels >30,000 in the CD31-ABO microarray (the specificity for predicting AABMR was 100%). In this study, we found that the initial Ab levels measured by the CD31-ABO microarray were the most important factor for predicting AABMR after ABOi KTx. Ab levels examined by the CD31-ABO microarray were low in the samples obtained when AABMR was clinically suspected. After ABOi KTx, it is possible that anti-A or -B Abs reacted with the ABO antigens on graft endothelial cells and were absorbed. The absorption of anti-A or -B Abs could affect plasma Ab levels determined using the CD31-ABO microarray more strongly than those determined by the isohemagglutinin method, because of its specificity for KECs. Thus, the CD31-ABO microarray might not be a useful tool for diagnosing AABMR from samples obtained after ABOi KTx. There are limitations to the present study.
We do not routinely examine blood group A subtypes. However, 99.8% of Japanese people of blood type A belong to A1 (25,26). The ABOi KTx cohort in this study consisted of a heterogeneous population who received different immunosuppressive protocols. However, the desensitization therapy protocol for ABOi KTx varies from institution to institution, and we deliberately examined the new CD31-ABO microarray method on varied patients reflecting the real situation of ABOi KTx. The number of samples obtained from patients with ABOi KTx was small, especially for B-incompatible KTx. To elucidate the value of the CD31-ABO microarray for predicting AABMR in ABOi KTx, further examination using more samples is required. Samples from the day of ABOi KTx were not stored and could not be investigated by the CD31-ABO microarray. It is important to know the Ab levels to which desensitization therapy should reduce antibodies before ABOi KTx. A multi-center study using the CD31-ABO microarray is currently ongoing to determine whether AABMR can be avoided after ABOi KTx and how far Ab levels should be decreased before ABOi KTx. In conclusion, a novel method to investigate anti-A and -B Abs was developed using mimics of the ABO blood group antigens on KECs. This method may identify the precise risk of AABMR after ABOi KTx in advance. As large meta-analyses of ABOi KTx have shown, graft and patient survival in ABOi KTx are significantly worse than in ABOc KTx (6,7). These analyses point to two issues in ABOi KTx: AABMR and infectious events. Based on the results of the CD31-ABO microarray, we will be able to strengthen or reduce desensitization therapy, resulting in decreased numbers of both AABMR and infectious events.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Institutional Ethics Committee of Niigata University (authorization number 2018-0311). The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

MT, KS, YN, MO, YM, KH, KT, and YT collected data and samples. HT, TS, and HN performed research (developed the CD31-ABO array and analyzed antibody levels). AT and HK performed research and mass spectrometry analysis. TA, MK, and TU performed research (measured isohemagglutinin antibody titers). MT and YY performed research (purified CD31 from human kidney). MT, HT, TS, and HK analyzed data and prepared the manuscript.

FUNDING

This study was supported by JSPS KAKENHI Grant Number 17K11195.
Seasonal variation of nocturnal temperatures between 1 and 105 km altitude at 54° N observed by lidar

Temperature soundings are performed by lidar at the mid-latitude station of Kühlungsborn (Germany, 54° N, 12° E). The profiles cover the complete range from the lower troposphere (~1 km) to the lower thermosphere (~105 km) through the simultaneous and co-located operation of a Rayleigh-Mie-Raman lidar and a potassium resonance lidar. Observations were made during 266 nights between June 2002 and July 2007, each of 3-15 h length. This large and unique data set provides comprehensive information on the altitudinal and seasonal variation of temperatures from the troposphere to the lower thermosphere. The remaining day-to-day variability is strongly reduced by harmonic fits at constant altitude levels, and a representative data set is achieved. This data set reveals a two-level mesopause structure with an altitude of about 86-87 km (~144 K) in summer and ~102 km (~170 K) during the rest of the year. The average stratopause altitude is ~48 km throughout the whole year, with temperatures varying between 258 and 276 K. From the fit parameters, amplitudes and phases of annual, semi-annual, and quarter-annual variations are derived. The annual component is largest, with amplitudes of up to 30 K at 85 km, while the quarter-annual variation is smallest, at less than 3 K at all altitudes. The lidar data set is compared with ECMWF temperatures below about 70 km altitude and with reference data from the NRLMSISE-00 model above. Apart from the temperature soundings, the aerosol backscatter ratio is measured between 20 and 35 km. The seasonal variation of these values is presented here for the first time.

Introduction

Temperature is one of the most fundamental and important quantities for describing the Earth's atmosphere. Its altitudinal and seasonal variations are a result of radiative, chemical, and dynamical effects (cf., e.g., Garcia, 1989). Radiative forcing has the main effect on the background temperature structure; e.g., the middle atmosphere is heated by the absorption of solar UV radiation by O2 and O3 (Mlynczak and Solomon, 1993) and cooled by the emission of infrared radiation by CO2 (Andrews et al., 1987). Dynamical forcing produces major deviations from radiative equilibrium in the middle atmosphere. Direct energy deposition by wave dissipation is complemented by momentum deposition, slowing down or reversing the zonal wind. This results in a strong meridional circulation, which is connected with upwelling (summer) and downwelling (winter) above the poles and mid-latitudes, i.e. with adiabatic cooling and heating (Lindzen, 1981; Holton, 1982). Exothermic chemical reactions provide an additional heat source for the atmosphere, while airglow in the mesopause region and chemiluminescence act as energy sinks (Riese et al., 1994). Detailed and highly resolved observations of the temperature structure are strongly needed for the examination of the atmosphere's energy budget, dynamics and chemistry. Temperature data sets can be used for the validation of General Circulation Models (GCMs), etc. State-of-the-art GCMs often cover the whole range from the troposphere to the (lower) thermosphere, reflecting the importance of vertical coupling for the description of the atmospheric state. In contrast, many observational techniques can only be used in a limited altitude range. Comparisons therefore require the combination of different techniques with their individual limitations. For example,
radiosondes provide a high altitude resolution (∼100 m) in the troposphere and lower stratosphere, but are often limited to one or two profiles per day and to altitudes below ∼30 km. Disturbing effects of natural variability can often be reduced by multi-year averaging. Satellite soundings are becoming increasingly important as they partly cover extended altitude ranges from the stratosphere to the lower thermosphere (e.g. Shepherd et al., 2001; Mertens, 2001; Xu et al., 2007). They provide a global view, but due to orbit constraints they need long data series to overcome the drawbacks of zonal averaging and/or local time coverage. Their vertical resolution is often in the range of 3-4 km, i.e. worse than that of many GCMs. Systematic errors might occur due to non-local thermodynamic equilibrium, especially in the summer mesopause region at mid and high latitudes (Kutepov et al., 2006), or due to difficulties in the calculation of geometric altitudes (Sica et al., 2008). GPS radio occultations provide good local time and zonal coverage, but are limited to the troposphere and (low/mid) stratosphere (e.g. Gobiet et al., 2005). Various types of rocket soundings are performed to measure temperatures in the mesosphere at several locations (e.g. Lübken, 1999; Lübken et al., 2004). The lack of temporal resolution can be avoided by combining different data sets at a single location. Unfortunately, rocket soundings are still sparse, especially at mid-latitudes, and typically provide only snapshots of the atmosphere (e.g. Hirota, 1984; Kubicki et al., 2006). Lidar observations are likewise limited to a few fixed locations. Depending on the particular technique, typical altitude ranges are 0-30 km, 30-80 km, or 80-110 km (cf., e.g. Hauchecorne et al., 1991; Yu and She, 1995; Wickwar et al., 1997; Leblanc et al., 1998; States and Gardner, 2000; Friedman and Chu, 2007). Time-resolved observations allow the identification of gravity waves and tides that average out in the nightly or daily mean. Typical altitude resolutions of ∼1 km are fine compared to satellite data. In this paper we describe the temperature structure at Kühlungsborn (Germany, 54° N, 12° E). Although this is a mid-latitude site, it is still influenced by polar phenomena like Noctilucent Clouds (NLC) (cf. Gerding et al., 2007b) or Sudden Stratospheric Warmings (SSW). This makes an important difference to more equatorward stations like Observatoire d'Haute Provence, France at 44° N (Hauchecorne et al., 1991), Fort Collins, Colorado at 41° N, or Urbana, Illinois at 40° N (States and Gardner, 2000), even if they are separated by only a few degrees in latitude. The temperature structure at 54° N provides an important benchmark for general circulation models as well as substantial information for the retrieval of atmospheric parameters from satellite soundings. As our site is located at the edge of the Noctilucent Cloud/Polar Mesospheric Cloud (NLC/PMC) existence region, it provides a reference point for the understanding of ice particle generation and potential NLC/PMC trends. The seasonal variation of gravity wave activity has recently been published by Rauthe et al. (2008) for part of the data described here. We present a summary of more than 1850 h of data. Through the use of three different scattering types and the combination of two lidars, an altitude range between 1 and 105 km is covered with identical temporal and spatial resolution.
To the best of our knowledge this is the first comprehensive temperature data set covering the whole range from the troposphere to the lower thermosphere. In the following we first give an update on our lidar systems. In Sect. 3 we present the temperature observations for the period June 2002 to July 2007. A harmonic fit of the temperature variation is calculated for each single altitude bin. By this the natural variability is blanked out and a more representative data set is obtained (Sect. 4). Of particular interest are the temperatures and altitudes of the mesopause and stratopause, which are presented in Sect. 5. In Sect. 6 we compare our data with the most recent MSIS climatology (NRLMSISE-00, cf. Picone et al., 2002) and ECMWF analyses. In the last section we discuss our results and compare them with other ground-based and space-based observations.

Description of the lidar systems

At the Leibniz-Institute of Atmospheric Physics we combine a Rayleigh-Mie-Raman (RMR) lidar (Alpers et al., 2004) and a potassium resonance lidar (K lidar) (von Zahn and Höffner, 1996) to achieve temperature profiles from the troposphere to the lower thermosphere. The RMR lidar uses the well-known Rayleigh temperature retrieval (hydrostatic integration of the density profile; a minimal numerical sketch of this integration is given below) and the rotational Raman method. Our lidar systems and the combination of the methods are described by Alpers et al. (2004). Below we show an example of a temperature sounding from June 2005 and describe recent updates of the setup. The temperature profiles at Kühlungsborn are a combination of potassium resonance temperatures (∼85-105 km), Rayleigh temperature profiles in two altitude ranges using separate telescopes and detectors (∼44-85 km and 34-46 km), aerosol-corrected Rayleigh temperature profiles (∼22-33 km), and rotational Raman temperatures (∼1-25 km). Figure 1 (left) shows an example of a temperature profile on 20 June 2005 with 1 km vertical resolution after 1 h of integration (22:30-23:30 UT). The individual methods are displayed in different colours. The potassium lidar covers the range from the top of the profile down to 85 km altitude due to the limited extension of the K layer in summer (cf. Eska et al., 1998). The Mesosphere-Rayleigh channel provides temperatures below 86 km, using the K lidar observations at 88 km as a start value for the density integration. This channel measures down to 44 km altitude. The data at 48 km are used as a start value for the Stratosphere-Rayleigh channel, yielding data between 46 and 22 km altitude. Below 34 km this channel is corrected for additional aerosol backscatter (see below). The rotational Raman temperatures are used below 25 km. The error bars on the particular profiles denote the uncertainty of the photon count statistics. The typical statistical uncertainty is about ±2-3 K. In this study we only use data points with an uncertainty of less than ±10 K. We concentrate on nightly averages calculated from hourly profiles like the example in Fig. 1. The statistical uncertainty of the nightly means is strongly reduced compared to the single profiles and depends on the length of the sounding. In the following we partly neglect the statistical uncertainty since it is small compared to the natural variability (typically 4-10 K depending on altitude and season). The right part of Fig. 1 shows in detail the profiles in the troposphere and lower stratosphere. A simultaneous temperature profile from a co-located radiosonde launch is presented for comparison (red line).
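Before turning to the radiosonde comparison, here is the minimal sketch of the hydrostatic Rayleigh integration referred to above, assuming hydrostatic equilibrium and the ideal gas law: a relative density profile is integrated downward from a start temperature (here supplied by the K lidar). The constant-gravity simplification, the function name, and the isothermal self-check profile are illustrative assumptions, not details of the actual retrieval.

```python
import numpy as np

R_S = 287.06          # specific gas constant of dry air [J kg^-1 K^-1]
G = 9.5               # illustrative constant gravity [m s^-2] for ~50-90 km
                      # (a real retrieval uses altitude-dependent g(z))

def rayleigh_temperature(z, rho_rel, t_top):
    """Downward hydrostatic integration of a *relative* density profile.

    z        : altitudes in m, ascending
    rho_rel  : relative density (arbitrary scale; the scale factor cancels)
    t_top    : start temperature at z[-1], e.g. from the K lidar [K]
    """
    rho_rel = np.asarray(rho_rel, float)
    t = np.empty_like(rho_rel)
    t[-1] = t_top
    p = rho_rel[-1] * R_S * t_top          # pressure in the same arbitrary units
    for i in range(len(z) - 2, -1, -1):    # integrate downward (trapezoid rule)
        p += 0.5 * (rho_rel[i] + rho_rel[i + 1]) * G * (z[i + 1] - z[i])
        t[i] = p / (rho_rel[i] * R_S)      # ideal gas: T = p / (rho * R_s)
    return t

# Self-check with an isothermal atmosphere (should return ~200 K everywhere):
z = np.arange(50e3, 90e3, 1000.0)
h = R_S * 200.0 / G                        # scale height of a 200 K atmosphere
rho = np.exp(-z / h)
print(rayleigh_temperature(z, rho, 200.0)[:3])   # -> approx [200., 200., 200.]
```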
The lidar integration time covers approximately the flight time from the ground to the tropopause, whereas the stratospheric part of the radiosonde profile is observed after the lidar integration time. The tropospheric temperatures agree nearly perfectly, except for the lowest data point of the lidar profile. The gradients in the tropopause region are partly underestimated by the lidar due to its coarser altitude resolution. In the stratosphere the in-situ and remote-sensed data agree well, taking into account the increasing horizontal and temporal distance between the lidar and the drifting radiosonde (max ∼30 km, 1 h). Hydrostatic temperature retrievals from elastic backscatter are affected by the presence of aerosols, as the backscatter is then no longer proportional to the density. The mesosphere and upper stratosphere are normally anticipated to be aerosol-free. However, in the lower stratosphere there is a background aerosol layer that might extend well above 20 km. To correct for any additional aerosol backscatter in the 532 nm wavelength channel used for the Rayleigh temperature retrieval, we simultaneously observe the 608 nm N2 vibrational Raman backscatter. This is used to calculate the backscatter ratio R of total to molecular backscatter on a nightly mean basis (a minimal sketch of this calculation is given after this paragraph block). We have corrected every elastic backscatter profile by applying R to yield the true density profile needed for the temperature retrieval. Figure 1 shows the Rayleigh temperatures with and without aerosol correction as solid and dotted green lines, respectively. After the correction, the lidar-observed temperature profile shows perfect agreement with the radiosonde observation. The aerosol-induced bias is largest in the lowest Rayleigh channels (up to ∼7 K) and remains significant up to about 31 km. The aerosol correction based on observed R profiles is available since February 2004. Before that date, an aerosol correction based on an empirical average R profile was applied (Alpers et al., 2004). The N2 vibrational Raman backscatter is observed up to ∼50 km altitude. It is taken as a measure of the molecular backscatter after normalization to the 532 nm signal at an aerosol-free altitude. The latter contains both molecular and aerosol backscatter (i.e. total backscatter). The normalization is performed at about 34 km altitude to avoid errors due to the decreasing photon statistics above. The atmosphere is assumed to be aerosol-free above this altitude, in good agreement with other observations (e.g. Vaughan and Wareing, 2004). The aerosol correction provides a data set of backscatter ratios in a height region that is rarely covered by regular aerosol soundings from lidars and satellites. Through our regular N2 Raman soundings we obtained for the first time an extensive ensemble of backscatter ratios in the mid-stratosphere. The data set covers 213 lidar observations since 11 February 2004, each of more than 3 h duration. Figure 2 shows the seasonal variation of the backscatter ratio R in the altitude range 20-35 km. Between March and October only small seasonal variations of R occur. After smoothing across ±30 days some variability remains, but this is mostly due to a high night-to-night variability (not shown here) and less due to periodic seasonal variations. Between December and February and in August/September an increase of the backscatter ratio in the 20-30 km range is obvious. Above 30 km, again, any potential seasonal variation remains hidden behind variability on the scale of days.
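A minimal sketch of the backscatter-ratio calculation referred to above, assuming range-corrected 532 nm (elastic) and 608 nm (N2 Raman) signals on a common altitude grid; the function names and the exact normalization details are illustrative assumptions, not the operational processing chain.

```python
import numpy as np

def backscatter_ratio(z, s532, s608, z_norm=34e3):
    """Backscatter ratio R = total / molecular backscatter.

    s532 : range-corrected elastic signal (molecules + aerosol)
    s608 : range-corrected N2 vibrational Raman signal (molecules only)
    R is normalized to 1 at z_norm, assumed aerosol-free (here ~34 km).
    """
    ratio = np.asarray(s532, float) / np.asarray(s608, float)
    i_norm = np.argmin(np.abs(np.asarray(z) - z_norm))
    return ratio / ratio[i_norm]

def aerosol_corrected_density(s532, r):
    """Remove the aerosol contribution: the corrected signal is proportional
    to molecular number density and can be fed into the hydrostatic
    temperature retrieval sketched earlier."""
    return np.asarray(s532, float) / np.asarray(r, float)
```

With this correction, an aerosol layer with R = 1.1 would otherwise inflate the apparent density, and hence bias the retrieved temperature, by roughly 10% locally; dividing by R restores a density profile proportional to the molecular one.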
Independent of season, the backscatter ratio decreases with altitude up to the upper end of our observation range (here: 34 km, with R=1.0 by definition). As a rule of thumb we observe a backscatter ratio R=1.1 at 21 km, R=1.06 at 23-24 km, R=1.04 at 25 km, and R=1.02 around 28 km. Above 30 km we mostly observed R<1.01.

Seasonal variation of temperatures from observed data

The lidar observations at Kühlungsborn were performed throughout the five years whenever weather conditions allowed. 266 nights of lidar operation between June 2002 and July 2007 are used for this study, each with at least 3 h and up to 15 h of combined operation of the K lidar and the RMR lidar. This data set covers all seasons, but with fewer soundings in winter due to bad weather conditions. Figure 3 gives an overview of the distribution of nights throughout the seasons. Each nightly mean profile shows the temperature in colour coding. The figure also provides a histogram with the number of observations per night (e.g. 1 on 7, 9 and 10 January, and 2 on 15 January, . . .). Only a few periods of about two weeks without soundings can be found in Fig. 3. Nevertheless, several dates have been sampled more than once within the five years. Almost all profiles reach an altitude of 100 km, and most of the data extend up to 105 km and above. Only in summer do some gaps between 100 and 105 km exist, due to the limited extension of the K layer (Eska et al., 1998). Above 105 km the number of data points decreases and the seasonal coverage gets worse. Therefore we limit all further studies of the mean state and seasonal variations to the height range of 1-105 km. Some general features of the temperature structure above Kühlungsborn can be identified from Fig. 3: i) most obvious is the cold summer mesopause between 85 and 90 km; ii) part of the winter data shows a comparatively warm stratopause; iii) during winter the night-to-night variability in the stratopause region and mesosphere is high. We will address these topics in more detail later. To reduce the night-to-night variability we have calculated monthly mean profiles, averaging e.g. all January nightly mean observations of the different years into a single profile. Figure 4 and Table 1 show the monthly mean profiles and their standard deviations (based on nightly averages). First we concentrate on absolute temperatures. In the lower stratosphere, between about 10 and 30 km, two seasons can be distinguished: the winter season covers the months November to February and is characterized by a temperature decrease with altitude. During the rest of the year there is partly a small range with nearly constant temperatures up to about 25 km, but a generally positive gradient between the tropopause and 30 km altitude. In several months (independent of season) a small temperature inversion is visible above the tropopause. The stratopause in the monthly means is always slightly below 50 km, with the exception of January (43 km) and November (52 km). For the mesopause altitude the monthly means reveal two states: between May and August the temperature has a pronounced minimum at 86/87 km. The mesopause temperature drops down to ∼144 K in June/July. Compared to this, May and August are about 10 and 15 K warmer, respectively. During the rest of the year the mesopause altitude is slightly above 100 km, and is thus partly not covered by the monthly means. We point out here that the transition between both periods is very fast (see below), i.e. a mixed state does not exist in terms of monthly means.
Earlier observations describe two local temperature minima in spring and autumn seasonal averages, when data from the two mesopause states are mixed. These situations are described as double temperature minima or a double mesopause (cf., e.g. She et al., 1993; Berger and von Zahn, 1999). Our monthly mean profiles do not show a double mesopause. Mesopause observations within individual nights are described later. The standard deviations of the nightly means also have a distinct altitudinal and seasonal variation (Fig. 4). The variability is generally larger in winter and smaller in summer. The strongest seasonal effect occurs in the upper stratosphere and lower mesosphere, where the standard deviation varies by a factor of ∼4-10 (cf. Rauthe et al., 2008). The high variability in winter is mostly due to planetary wave activity and inter-annual variability. The altitudinal variation also depends on season. Between January and May the standard deviation initially decreases with altitude around the stratopause and increases again above, so that enhanced variability in the mesosphere can be found. Additionally, there is an obvious decrease of the standard deviation in the upper mesosphere in November and December. The high variability in the range between 70 and 80 km is due to so-called mesospheric inversion layers (MIL) (see, e.g., the review by Meriwether and Gerrard, 2004) and strong gravity and tidal waves, which remain in the nightly mean profiles even after integration over up to ∼15 h. Further analysis of gravity wave signatures in our lidar-observed temperatures is presented by Rauthe et al. (2006, 2008). The analysis of MIL is outside the scope of this paper.

Harmonic fit of temperatures

The night-to-night variability makes it difficult to extract an undisturbed, representative temperature structure for our location, even with our extended data base. We compare the seasonally dependent night-to-night and inter-annual variability for different altitudes (30, 65, 74, 87 km) in Fig. 5. The individual nightly mean temperatures are given as single dots. Additionally, the figure shows smoothed and fitted temperature data. We have chosen a Hanning filter with ±30 d width to eliminate all variations on scales shorter than about one month. A representative mean temperature is calculated by harmonic fits. Harmonic fits inherently provide a periodic seasonal variation. The harmonic fit presented here uses first an annual variation due to the changing illumination from the sun. A semi-annual harmonic is added due to dynamic processes, like the observed stratospheric warmings, that are of different phase than the solar irradiance. A quarter-annual component has been added to identify the characteristics of the remaining variations. It will be shown later that this harmonic is only of minor importance. In summary, we use a harmonic equation of the form

T(z, t) = A_0(z) + \sum_{i=1}^{3} A_i(z) \cos\big(\omega_i (t - \varphi_i(z))\big),

with A_0(z) the mean temperature at altitude z, t the time in days, and A_i(z) and \varphi_i(z) the amplitudes and phases of the annual (i=1), semi-annual (i=2) and quarter-annual (i=3) variations. \omega_i = 2\pi / P_i describes the frequency, with P_i the period (in days) of the different variations; a minimal least-squares implementation of this fit is sketched below. At 30 km the night-to-night variability of temperatures is generally small (standard deviation ∼3-5 K, cf. Fig. 4). Only during the winter season (October-March) does the temperature deviate by up to 25 K from the mean due to planetary wave activity. The temperatures vary strongly with season, with a peak-to-peak value of ∼25 K.
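The promised minimal least-squares implementation of the harmonic fit: the model is linearized into a cosine/sine pair per harmonic, and the amplitudes A_i and phase maxima φ_i are recovered from the fitted coefficients. The demo time series is synthetic; nothing here reproduces the actual Kühlungsborn data.

```python
import numpy as np

PERIODS = np.array([365.25, 365.25 / 2, 365.25 / 4])  # annual, semi-, quarter-annual [d]

def fit_harmonics(t_days, temps):
    """Least-squares fit of T(t) = A0 + sum_i A_i cos(w_i (t - phi_i)).

    t_days : days of observation (nightly means, irregular sampling)
    temps  : nightly mean temperatures at one altitude bin [K]
    Returns A0, amplitudes A_i [K] and phases phi_i [d] (day of maximum).
    """
    t_days = np.asarray(t_days, float)
    w = 2 * np.pi / PERIODS
    cols = [np.ones_like(t_days)]                     # constant term A0
    for wi in w:                                      # cos/sin pair per harmonic
        cols += [np.cos(wi * t_days), np.sin(wi * t_days)]
    m = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(m, np.asarray(temps, float), rcond=None)
    a0, ab = coef[0], coef[1:].reshape(3, 2)          # (cos, sin) coefficients
    amp = np.hypot(ab[:, 0], ab[:, 1])                # A_i = sqrt(a^2 + b^2)
    phase = np.arctan2(ab[:, 1], ab[:, 0]) / w % PERIODS  # day of maximum
    return a0, amp, phase

# Synthetic demo at one altitude bin (NOT the observed data):
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 5 * 365.25, 266))              # 266 nights in five years
truth = 200 + 25 * np.cos(2 * np.pi / 365.25 * (t - 20))  # annual cycle, max at day 20
print(fit_harmonics(t, truth + rng.normal(0, 4, t.size)))
# -> A0 ~ 200 K, A1 ~ 25 K with phase ~ day 20; A2, A3 near zero
```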
In the lower mesosphere at 65 km the seasonal variation is strongly reduced compared to 30 km, i.e. it is the smallest of all altitudes below 100 km. As a result, the seasonal variation is partly lower than the night-to-night variation at this altitude. The latter is strongest in winter, related to mesospheric cooling events accompanying stratospheric warmings. The average temperature at 65 km decreases continuously by about 10 K between June and September, followed by a fast increase of 10 K until the end of October. The Hanning-filtered data set shows some strong variation between October and January, but it obviously does not represent a mean state; it is affected by the high variability and by poor sampling due to weather conditions (the smoothing scheme is sketched below). The harmonic fit eliminates these signatures of the smoothed time series, while the observed nightly averages are still nicely reproduced. In other words, the fitted time series can be regarded as the best representation of the data and as a mean state in a climatological sense. At 74 km a distinct seasonal variation is again obvious. The peak-to-peak value is about 30 K, with the minimum in summer, as expected for the upper mesosphere. The night-to-night variability is generally higher than below, but still about four times higher in winter than in summer. Again, the smoothed time series shows some variations that are partly due to poor sampling in winter, while the harmonic fit represents the winter data nicely. The largest seasonal variation is observed in the region around 87 km altitude. On average it is up to about 50 K, with the difference between extreme values being much higher (nearly 100 K). In April and May the temperatures decrease rapidly and are minimal in high summer in June/July. Conversely, in August a fast increase of the temperatures at 87 km is observed. The cold summer season is much shorter than the winter. In winter the night-to-night variability at 87 km is about as large as in the mid-mesosphere, whereas the summer variability is increased compared to lower altitudes. The harmonic fit reproduces not only the general behaviour of the temperatures at 87 km altitude, but also the fast transitions in April and August as well as the temperature minimum in June/July. The temperature minimum of ∼144 K occurs nearly at summer solstice, around day 169 (18 June). The fitted data allow us to determine the slopes of the temperature changes at a particular altitude. The seasonal variation is more or less asymmetric at all altitudes displayed in Fig. 5. At 30 and 74 km the temperature change in spring is slower than the autumn change (about 0.15 K/d and −0.2 K/d in spring, −0.2 K/d and 0.25 K/d in autumn, respectively). At 87 km the spring and autumn slopes are similar (∓0.5 K/d). This slope results in a temperature decrease (increase) of ∼20 K within a single month. The fast temperature change leads to the formation (back-formation) of the summer mesopause within a couple of days in spring (autumn), without a double mesopause structure in the monthly means. At 65 km the slope is different due to the low seasonal variation. Here the average temperature decrease between mid-May and mid-September is −0.1 K/d, followed by an increase until November. The change of the slope in September/October is too fast to be captured by the harmonic analysis. For the other seasons the fit reveals nearly constant temperatures at 65 km. We will examine the amplitudes and phases of the different harmonics in more detail later.
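For completeness, here is a sketch of the ±30 d Hanning smoothing used for comparison with the harmonic fit, written as a kernel-weighted running mean on a circular day-of-year axis for irregularly sampled nightly means. The exact windowing used by the authors may differ; this is an assumption-laden illustration.

```python
import numpy as np

def hanning_smooth(doy, values, half_width=30.0):
    """Hanning-weighted running mean over day-of-year (circular in season).

    Weights w(dt) = 0.5 * (1 + cos(pi * dt / half_width)) for |dt| <= half_width,
    removing variations on scales shorter than about one month.
    doy    : day-of-year of each nightly mean (all years folded together)
    values : nightly mean temperatures at one altitude bin [K]
    """
    doy = np.asarray(doy, float)
    values = np.asarray(values, float)
    smoothed = np.empty(366)
    for d in range(366):
        dt = (doy - d + 182.5) % 365.0 - 182.5        # circular day difference
        w = np.where(np.abs(dt) <= half_width,
                     0.5 * (1 + np.cos(np.pi * dt / half_width)), 0.0)
        smoothed[d] = np.sum(w * values) / np.sum(w) if w.sum() > 0 else np.nan
    return smoothed
```

Unlike the harmonic fit, this smoother follows whatever the (possibly poorly sampled) data do locally, which is exactly why it shows the spurious October-January wiggles discussed above.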
Figure 6 shows (a) the fitted climatological mean temperature structure for the whole range between 1 and 105 km at Kühlungsborn and (b) the difference between the observed and fitted temperature structure. The data of Fig. 6a are tabulated in the supplement of this publication (http://www.atmos-chem-phys.net/8/7465/2008/acp-8-7465-2008-supplement.pdf) for intervals of 10 d and 3 km; the altitude-dependent fit parameters are described below. The fit results from annual, semi-annual, and quarter-annual harmonic fits as described above. The dipole structure in summer, with a warm summer stratopause and a cold mesopause, is clearly visible. The temperature in the stratosphere has a clear annual variation with a minimum in winter. Above the stratopause the amplitude of the seasonal variation decreases and the phase reverses, producing a nearly isothermal layer around 65 km. In the upper mesosphere the isolines of temperature clearly show the asymmetry in the seasonal variation, with a faster autumn transition. A downward progression of the cold phase is indicated between 55 and 105 km. The prominence of the warm winter stratopause is strongly reduced compared to the single observations displayed in Fig. 3 (265 K, compared to about 300 K in single nights), but the temperature increase from autumn to winter is still clearly visible. In Sect. 5 we will examine the altitudes and temperatures of the stratopause and mesopause as the most remarkable features of the temperature structure above Kühlungsborn.

Figure 6b shows the difference between the observed nightly mean profiles (cf. Fig. 3) and the harmonic fit. The differences are smoothed in time by a ±10 day Hanning filter to yield a better visualization. In general the harmonic fit nicely represents the general features of the observed temperature structure. There are neither altitude ranges nor seasons with a bias between the pure observations and the fit. For most of the seasons the difference is less than 5 K at all altitudes. Only in winter do larger differences occur. As the differences are both negative and positive, they are mostly due to natural night-to-night and year-to-year variability, i.e. planetary wave activity. In December and January a slight cold bias of the fitted temperatures in the stratopause region and a warm bias in the mesosphere exist, indicating that the highly variable temperature structure in the stratopause region cannot be described in full detail with our harmonic analysis.

Amplitudes and phases of the annual, semi-annual, and quarter-annual components vary strongly with altitude (Fig. 7 and Table 2). The annual component dominates at nearly all altitudes. It is largest at 85 km altitude, with an amplitude of nearly 28 K (i.e. a peak-to-peak value of 56 K). Other maxima are found in the lowermost mesosphere, the mid-stratosphere, and the troposphere. The semi-annual component is strongest around 43 km and 87 km. The quarter-annual component is always small, and its amplitude does not exceed 3 K. Therefore, annual and semi-annual variations are generally sufficient to describe the seasonal variation at all altitudes. In Fig. 7b we have plotted the phase maxima (i.e. the days where the temperature is maximal) for every altitude where the amplitude of the fit exceeds 1 K. In the other regions the phases cannot be estimated exactly, and the particular harmonic component is of only minor importance. The annual component in the upper mesosphere has a phase maximum in winter due to the higher winter temperatures.
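The altitude dependence of the fit parameters shown in Fig. 7 amounts to repeating the fit for every altitude bin and masking phases where the amplitude falls below 1 K. A minimal sketch of this bookkeeping, reusing fit_harmonics from the earlier sketch (array names and shapes are our assumptions):

```python
import numpy as np

def harmonics_vs_altitude(t, T_grid, min_amp=1.0):
    """Fit every altitude bin separately and collect A_i(z) and phi_i(z).

    T_grid : nightly means with shape (n_nights, n_altitudes); gaps as NaN.
    Phases are masked (NaN) where the amplitude is below min_amp (1 K),
    mirroring the criterion used for the phase plot.
    """
    n_alt = T_grid.shape[1]
    amps = np.full((n_alt, 3), np.nan)
    phases = np.full((n_alt, 3), np.nan)
    for j in range(n_alt):
        good = np.isfinite(T_grid[:, j])
        _, harmonics = fit_harmonics(np.asarray(t, float)[good], T_grid[good, j])
        for i, (A, phi) in enumerate(harmonics):
            amps[j, i] = A
            phases[j, i] = phi if A >= min_amp else np.nan
    return amps, phases
```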
Around 65 km there is a fast phase shift of about 4 months due to the summer dipole character of the stratopause and mesopause. Below 55 km the annual component is always in the summer phase, with only slight shifts on the order of one month. At the tropopause a distinct phase jump of one month is observed, resulting in a phase maximum near the end of July. The tropospheric and stratospheric phases can be explained by the influence of the sun. The tropospheric weather reacts with a delay of about one month to the changing solar elevation. The stratosphere is more radiatively controlled, especially by the absorption of light by ozone; its temperature phase therefore changes synchronously with the sun. The winter phase maximum in the upper mesosphere is independent of the solar irradiation. It is well known that the mean temperatures in the upper mesosphere are more dynamically than radiatively controlled. The phase maxima of the semi-annual component are, e.g., at the end of March around 85 km and at the end of June around 45 km. In the stratopause region the phase is clearly due to the two maxima in stratopause temperatures in summer and winter. In the region between 80 and 90 km there is no direct geophysical reason for the phase maximum; here the phasing produces the fast temperature change in spring and autumn. The quarter-annual component is always small, as described above. However, its phase is strongly coupled to the phase of the semi-annual fit. This gives reason for the assumption that the quarter-annual variation has no geophysical cause of its own but occurs as a higher harmonic. A remarkable similarity also exists in the phase velocities of the different harmonics: nearly always negative (i.e. downward) phase progressions are found.

Seasonal variation of stratopause and mesopause

The continuous set of temperature profiles between 1 and 105 km allows us to observe the stratopause and mesopause with the same instrumental technique. Even though our observations also cover the tropopause region, we have to acknowledge that other data sets, like radiosonde climatologies, provide a much better resolution at these heights. Therefore we concentrate on the stratopause and mesopause, with their temperatures and altitudes shown in Fig. 8. The temperature extrema calculated from the nightly mean profiles are plotted as single data points in black. Nightly means are still affected by different kinds of waves (gravity, tidal, planetary), which reduces, e.g., the comparability with other data sets and models. Therefore we also plotted the fitted temperature field (red line) and interpret only these values as stratopause and mesopause. Additionally, the identification of the stratopause is limited to the altitude range 30-60 km and that of the mesopause to 75-105 km in order to avoid false interpretations. Within these ranges the absolute temperature extrema are interpreted as stratopause/mesopause. It should be noted that especially the mesopause cannot always be identified from our data set (individual points and fit), as temperatures in winter partly decrease up to the top of the profiles.

The altitude of the stratopause is nearly constant throughout the year. It varies between 47 and 49 km without any clear seasonal cycle. The altitude of the temperature maximum in individual nights might differ by up to ∼10 km from the "climatological" stratopause. The differences are largest in winter and are connected with stratospheric disturbances and polar vortex shifts.
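The extremum search just described (absolute maximum in 30-60 km for the stratopause, absolute minimum in 75-105 km for the mesopause, with winter profiles where the temperature still decreases at the top treated as unidentified) can be written compactly. A sketch under these stated assumptions:

```python
import numpy as np

def find_extremum(z, T, z_lo, z_hi, kind):
    """Absolute temperature extremum within an altitude window (km, K)."""
    m = (z >= z_lo) & (z <= z_hi)
    zw, Tw = z[m], T[m]
    i = np.argmax(Tw) if kind == "max" else np.argmin(Tw)
    return zw[i], Tw[i]

def stratopause(z, T):
    return find_extremum(z, T, 30.0, 60.0, "max")

def mesopause(z, T):
    zp, Tp = find_extremum(z, T, 75.0, 105.0, "min")
    # A minimum sitting at the top of the window suggests temperatures are
    # still decreasing there (typical in winter): treat as unidentified.
    top = z[(z >= 75.0) & (z <= 105.0)].max()
    if np.isclose(zp, top):
        return np.nan, np.nan
    return zp, Tp
```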
In contrast to the altitude of the stratopause, its temperature has a clear semi-annual cycle, with a winter maximum due to the stratospheric disturbances already mentioned above. The summer stratopause (∼275 K) is slightly warmer than the winter stratopause (∼266 K). The spring and autumn minima in stratopause temperatures are also not identical, with autumn temperatures more than 5 K lower. Looking at the seasonal variation of stratopause temperatures, maxima occur around the solstices, while minima occur before (spring) and after (autumn) the equinoxes.

Mesopause altitudes vary by about 15 km throughout the year. This variation is by far not harmonic but can be represented by a bi-stable state, with the mesopause at the lower level for about 120 days between May and August and at the higher level (∼102 km) for the rest of the year. Within the summer months a slight altitude decrease from 87 to 85 km is observed, caused by the temperature decrease (shrinking) in the upper mesosphere during summer. The transition between the upper and lower mesopause levels is nearly instantaneous, and an intermediate state is practically non-existent. Only within single nights is the temperature minimum found between 90 and 95 km, but here effects of gravity and tidal waves have to be taken into account, which might remain in the data even after some hours of observations. The temperature of the mesopause changes more continuously, especially in summer when the mesopause is low. The temperature minimum of ∼144 K occurs around day 169, i.e. a few days before summer solstice. The transition to the upper-level period of the mesopause is smooth, and the temperatures during that time are more variable. Between September and April the mesopause temperature shows no distinct seasonal variation but is on average roughly constant at ∼170 K, with high night-to-night variability. Overall, the temperature of the mesopause changes throughout the year by ∼30 K, i.e. much less than the temperature variation at about 87 km.

Comparison with climatologies and analyses

The temperature profiles observed above Kühlungsborn by lidar are compared with other data both on a climatological and on an event basis. In the following we compare our data with the most recent reference atmosphere NRLMSISE-00 (Picone et al., 2002) above ∼65 km and with the meteorological analyses from the European Centre for Medium-Range Weather Forecasts (ECMWF) below. The ECMWF model assimilates various observations from satellites, balloons, and ground-based stations, thereby providing the most representative data set for the troposphere and stratosphere. Above that region, observations and model results have been summarized in the NRLMSISE-00 climatology. Recently we have also compared our data with temperature data of the FTS instrument onboard the ACE-SCISAT satellite and found differences especially in the summer mesopause region (Sica et al., 2008). In this region a known bias exists in many satellite measurements of temperature (cf. Kutepov et al., 2006). For further comparisons we refer to the discussion section of this paper.

In Fig. 9 the differences between the lidar temperature profiles and the combined NRLMSISE-00/ECMWF data set are plotted. For the lidar data the result of the harmonic fit (see Fig. 6) is taken as reference. ECMWF analyses are available for every single day at 00:00, 06:00, 12:00, and 18:00 UT.

Fig. 9. Difference of fitted lidar observations and reference data sets. Above the yellow line NRLMSISE-00 is used as reference, below ECMWF analyses.
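How such a difference field can be assembled is illustrated below. The splice altitude of 65 km follows the description above; the grid names and the use of linear interpolation (np.interp, which requires ascending altitude grids) are our own assumptions, not the authors' implementation.

```python
import numpy as np

def combined_reference(z, z_ecmwf, T_ecmwf, z_msis, T_msis, z_split=65.0):
    """Merge ECMWF (below z_split km) and NRLMSISE-00 (above) into one
    reference profile on the lidar altitude grid z."""
    z = np.asarray(z, float)
    return np.where(z < z_split,
                    np.interp(z, z_ecmwf, T_ecmwf),
                    np.interp(z, z_msis, T_msis))

def lidar_minus_reference(T_lidar_fit, T_ref):
    """Difference profile of the kind plotted in Fig. 9."""
    return np.asarray(T_lidar_fit, float) - np.asarray(T_ref, float)
```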
We have selected the 00:00 UT profiles for all nights to get the most unbiased comparison. From NRLMSISE-00 the climatological data set for our location is used. The NRLMSISE-00 temperatures are mostly higher than the lidar observations, by up to ∼25 K (typically 10 K). Only for a short period after summer solstice does the reference atmosphere show lower temperatures than our observations in the mesopause region. In summary, the temperatures around 87 km are too high in the reference data set, and the phase of the seasonal variation is shifted by a couple of days. In the stratopause region the large temperature difference in winter is most obvious. Especially at the beginning and end of winter the lidar-observed temperatures are higher than the ECMWF analyses. This is partly due to some suppression of stratospheric warming events in the analysis data set, as also revealed by direct comparison of individual ECMWF profiles with simultaneous lidar data (not shown). On the other hand, the lidar-observed temperatures may be biased towards higher values due to incomplete sampling in winter. Nevertheless, this effect is reduced by the harmonic analysis (cf. Fig. 6) and will be further reduced by additional soundings in future winters. Overall, the difference in winter stratopause temperatures is a combined effect of an underestimation of stratospheric disturbances in the ECMWF data set and an overestimation due to incomplete lidar sampling. In the other periods and height ranges no obvious bias exists, and the differences are mostly around 5 K or less.

The lidar-MSIS differences for the region below about 70 km (not shown here) are partly lower than the lidar-ECMWF differences. Especially in the winter stratopause region the NRLMSISE-00 data show only ∼5 K lower temperatures. On the other hand, the NRLMSISE-00 is warm-biased in the upper stratosphere by up to ∼7 K for the rest of the year. The differences between lidar and NRLMSISE-00 decrease towards the tropopause region. The general picture is comparable to the results of Schöch et al. (2008) for 69°N, with their differences about twice as large as at 54°N.

Discussion

In this section we first provide a general discussion of our methods. In the second part we compare our results with other observations based on lidar and satellite soundings.

General discussion

The temperature data set presented here contains 266 observations (more than 1850 h), covering all seasons of the year with reasonable resolution. Each observation is longer than 3 h, and only profiles extending from the troposphere to the lower thermosphere are taken into account. By the averaging procedure, gravity waves of 3 h, and partly up to about 12 h, period are smoothed out. The harmonic fit additionally removes effects of planetary waves, which arise especially in the winter season. We have shown recently, by comparison with noctilucent clouds around 83 km, that our temperature profiles are free from significant systematic errors, at least in the upper mesosphere and mesopause region (Gerding et al., 2007b). The profiles are also corrected for the effect of aerosols, which would otherwise induce an increasing bias below about 30 km (Gross et al., 1997; Faduilhe et al., 2005). Behrendt et al. (2004) describe a lidar system using the rotational Raman technique also well above 30 km altitude, i.e. including the whole aerosol layer.
For typical aerosol conditions above 20 km both methods should yield correct temperature profiles, with the statistical uncertainty of the rotational Raman temperatures being higher than the uncertainty of the data presented here. Furthermore, our method requires less laser power and/or telescope area.

Our observations support the concept of the two-level (bi-stable) mesopause, with a lower altitude in summer and a higher level during the rest of the year (e.g. She and von Zahn, 1998). As a minor difference, the start of the summer period in our observations (2002-2007) is about two weeks later than in the earlier soundings (1996/1997) presented by She and von Zahn (1998). We will come back to this topic in Sect. 7.2. A double mesopause structure with an inversion layer in between is reported by States and Gardner (2000) for nighttime monthly means at 40°N, based on 2 years of lidar soundings. Our data set reveals no indications of a double mesopause, even in spring and autumn, as the temperature change around 87 km altitude is fast enough to establish or abolish the lower mesopause state within a few weeks.

The most important drawback of the data set presented here is the limited diurnal coverage. Some effects of diurnal tides on the true daily mean may remain, as the data presented here are obtained only during the night. States and Gardner (2000) describe some major differences in the temperature structure during day and night as measured by their lidar between 80 and 105 km at 40°N. In general, the profiles covering the whole day are warmer than the nightly means below ∼91 km and colder between 91 and 100 km (by ∼5 K and ∼3 K, respectively). By this the whole temperature structure in the mesopause region is changed, and the season with a low mesopause level is shortened. A recent study by Yuan et al. (2008) for the month of April reveals higher temperatures during the night above 88 km, and slightly lower temperatures (less than 4 K compared to full-diurnal data) between 84 and 88 km. The K lidar at our station was also run during day and night for at least part of the period described here (cf. Fricke-Begemann and Höffner, 2005). We have evaluated the daily (24 h) means and the nightly means separately. For the month of June, for example, we found slightly lower temperatures in the 24 h means compared to nighttime-only means in the whole range between 85 and 95 km, e.g. by ∼1 K in the 90 km region (not shown here). This average temperature profile might still be biased towards the nighttime mean, as only about one third of all sounding hours were obtained during daylight. For the region below 85 km the RMR lidar at our site has too low a signal-to-noise ratio during daylight to yield continuous temperature profiles. In order to present a data set with the same time base for the whole altitude range, we have limited this study to the nighttime soundings. The bias induced by this method is small, as the tidal effect at 54°N is small compared to that at the latitudes of 40/41°N, where the observations of States and Gardner (2000) and Yuan et al. (2008) were performed (e.g. Forbes, 2002, 2003).

In summer the elastic backscatter received by the RMR lidar can be affected by additional NLC aerosol scattering between 80 and 85 km. This would result in strong warm/cold biases at the lower and upper edges of the NLC, respectively (Gerding et al., 2007a). To avoid this error we have carefully removed all NLC signatures in single profiles with β(532) > 0.1·10⁻¹⁰ m⁻¹ sr⁻¹.
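A sketch of how such a screening can be implemented is given below. Whether single altitude bins or whole profiles were rejected is not specified in the text, so blanking individual bins in the 80-85 km range is our assumption; the threshold is the one quoted above.

```python
import numpy as np

BETA_NLC = 0.1e-10  # m^-1 sr^-1, threshold for beta(532) quoted in the text

def remove_nlc(z, T, beta532, z_lo=80.0, z_hi=85.0, threshold=BETA_NLC):
    """Blank temperature bins where NLC backscatter exceeds the threshold."""
    T = np.array(T, dtype=float, copy=True)
    contaminated = (z >= z_lo) & (z <= z_hi) & (beta532 > threshold)
    T[contaminated] = np.nan
    return T
```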
The remaining error induced by very weak and sporadic NLC on the nightly mean temperature profiles is much weaker than the statistical uncertainty and is therefore negligible (Gerding et al., 2007b).

Comparison with other observations

There are only very few comparable data sets of aerosol backscatter ratios or similar quantities above 20 km. Fromm et al. (2003) compile a data base of SAM II, SAGE II, and POAM II/III aerosol profiles. For our latitude they find no distinct seasonal variation but typically a lower aerosol load inside the vortex than outside. Our data, with increasing backscatter ratios during the central winter period, also suggest some causal connection with the position of the vortex. Unfortunately, the data base of Fromm et al. (2003) shows no results above 30 km, and even above 25 km the observational error becomes increasingly important. Other lidar soundings from mid-latitudes confirm our results of higher backscatter ratios in winter, even if the absolute numbers are slightly smaller than ours (Vaughan and Wareing, 2004). The aerosol soundings demonstrate the necessity of correction methods for Rayleigh temperature retrievals in the 30 km range and below; otherwise the atmospheric temperature may be underestimated.

Comparisons of our temperature observations with other experimental data sets are limited to partial height ranges, as other soundings typically do not cover the whole range from the troposphere to the lower thermosphere. Temperature soundings in the mesopause region have been performed at our site since July 1996. Results of the first year of observations are presented by She and von Zahn (1998). The authors describe a seasonal temperature variation generally similar to our results. Nevertheless, there are differences in the mesopause temperature and in the amplitudes and phases of the particular harmonic components. The summer mesopause temperatures decrease by about 10 K from 1996/1997 to 2002-2007 (this data set), whereas the mesopause altitude remains unchanged. The reason for this temperature decrease remains open, as we cannot distinguish from our data set between a general trend, a solar-cycle dependency, and larger-scale variability. The increase of the amplitude of the annual variation in the 95 km region (3 K to 10 K for 1996/1997 and 2002-2007, respectively) might also be due to a long-term variation, as there is some tendency for increasing amplitudes in this range if we form subsets of our database (not shown here). Minor differences occur in the phases of the harmonic fit and in the summer mesopause altitude. These can be explained by the different lengths of the data sets. We note here again that She and von Zahn (1998) describe only one complete annual cycle (88 observations between July 1996 and August 1997), while our soundings cover about 5 years (266 soundings, June 2002-July 2007).

Xu et al. (2007) examine the global mesopause structure with temperature profiles from the SABER instrument onboard the TIMED satellite for the period February 2002 to February 2006. They find the nighttime summer mesopause at the latitude of Kühlungsborn around 83 km at temperatures of 145-150 K, i.e. 3-4 km lower than the mesopause in our observations, but at nearly the same temperature. This mesopause altitude difference between lidar and SABER data is confirmed by Xu et al. (2006) for the latitude of 41°N. SABER as well as our lidar data show a decrease of mesopause altitude during the summer. During the other seasons the SABER mesopause is around 100 km at temperatures of ∼180 K.
Again, our observations show a higher mesopause, but the mesopause temperature observed by the lidar is about 10 K lower. Both data sets agree on the slower transition between the summer and normal states in spring compared to autumn. The SABER mesopause altitudes are in agreement with the model results of the TIME-GCM (Xu et al., 2007), but the summer mesopause temperatures in the model are as low as ∼130 K, which is about 15 K below both our lidar and the SABER observations. Such low temperatures and the low mesopause altitude would strongly affect phenomena like Polar Mesospheric Summer Echoes and noctilucent clouds. However, from our simultaneous lidar soundings we found generally good agreement between temperatures and NLC: NLC appeared at the lower edge of the supersaturated altitude range, and no NLC were observed during periods that were too warm (Gerding et al., 2007b,a). The deficiencies of the TIMED/SABER temperature data in the polar and mid-latitude summer mesopause region are at least partly due to the non-LTE retrieval, as revealed by Kutepov et al. (2006).

The signatures of the Quasi-Biennial Oscillation (QBO) and the Semi-Annual Oscillation (SAO) in the TIMED/SABER data between 48°S and 48°N have also been examined. The amplitudes and phases of the SAO in the mesosphere near 48°N are similar to the results presented here. The smaller SAO amplitudes near 85 km and near 45 km are most probably due to the latitudinal differences; in other words, the occurrence of stratospheric warmings should decrease with decreasing latitude. We note also that our study of seasonal variations shows a dominating annual oscillation at nearly all altitudes. The dominance of the annual variation is confirmed by the studies of She et al. (1995) and Leblanc et al. (1998) for nighttime soundings in the range 30-105 km and by Chen et al. (2000) for day and night soundings between 80 and 105 km. The latter report differences between the amplitudes of annual and semi-annual variations, using diurnal means or nightly means, that are typically less than 25%. In general, the amplitudes at Ft. Collins (41°N) are slightly smaller than at our location for both the annual and the semi-annual variation (Chen et al., 2000). Similar numbers are obtained from the combined soundings at Ft. Collins and the Observatoire de Haute Provence (44°N), covering the range 30-105 km (She et al., 1993; Leblanc et al., 1998). There an additional maximum in the semi-annual component around 60 km has been found, which is visible neither in our data at 54°N nor in the SABER data at 44°N. Both studies show phases of the annual component similar to our observation. The phase of the semi-annual variation at 41/44°N is shifted by a couple of days towards earlier times. The studies of She et al. (1995) and Leblanc et al. (1998) additionally show a remarkable reversal in the sign of the phase velocity around 80 km that is not found at our location. In general we interpret the differences in amplitudes (decreasing with decreasing latitude) and phases as latitudinal differences in the temperature structure due to the residual pole-to-pole circulation.

For the stratopause region, annual and semi-annual components have been derived from SAGE II satellite-based observations for different latitude bands (Burton and Thomason, 2003). The most suitable latitude band is centred around 60°N. There the annual amplitude is about twice as large as in our results, while the semi-annual component is only half.
This difference can be explained by the large integration range of the SAGE II analysis, covering about 20° in latitude. Phases are again similar to our observations.

Comparison with other latitudes reveals the position of our site at the edge between polar and mid-latitudes. Above, we have compared our results mainly with observations between 40°N and 44°N, where several lidars exist. Towards the north the observational sites become sparse. Nevertheless, a comparison can be made for different specific features of the middle atmosphere. The summer mesopause temperature decreases from mid-latitudes towards the North Pole. At ∼40°N the summer mesopause temperature is about 167 K, as recently published from full-diurnal data by Yuan et al. (2008) for the period May 2002 to April 2006, i.e. comparable to our sounding period. Our data reveal nighttime temperatures of ∼144 K, while Lübken (1999) reports 129 K for 69°N. Recent observations by Höffner and Lübken (2007) show summer mesopause temperatures as low as ∼120 K at 78°N. The same studies show that the differences between summer and winter mesopause temperatures increase with latitude. The differences are about 10 K, 25 K, and 60 K at ∼40°N, 54°N, and 78°N, respectively. The increasing differences are mostly due to decreasing summer mesopause temperatures, while the winter mesopause temperatures are more constant at ∼175-190 K. The low summer mesopause temperatures are due to a wave-driven upwelling that is much stronger in polar regions than at lower latitudes. As mentioned before, the Kühlungsborn latitude of 54°N is at the edge of the polar region, with the slope of summer mesopause temperatures being steeper towards lower latitudes (∼1.7 K/deg) than towards higher latitudes (∼1.2 K/deg). The date of lowest temperatures also shows some remarkable variation with latitude. While the lowest temperatures at ∼40°N and also at 54°N (this site) are reached around or shortly before summer solstice (cf. the fit results given by Leblanc et al., 1998; States and Gardner, 2000), they appear at polar latitudes 1-2 weeks after summer solstice (Lübken, 1999; Höffner and Lübken, 2007). It remains an open question why the minimal temperatures are reached earlier in mid-latitudes, as the upwelling, to the best of our knowledge, starts in polar regions.

Stratopause temperatures show only small variations with latitude. Summarizing the observations at 41/44°N, Leblanc et al. (1998) present a summer stratopause temperature of 280-290 K. This is about 10 K higher than the observations at 54°N (this data set) and at 69°N (Schöch et al., 2008). At all latitudes winter stratopause temperatures are influenced by stratospheric disturbances, resulting in only slightly lower temperatures compared to the particular summer data.

Conclusions and summary

We have described the temperature structure of the Earth's atmosphere at 54°N and its seasonal variation in the whole altitude range between 1 and 105 km (troposphere to lower thermosphere). The study is based on about six years of observations, i.e. 266 nights of 3-15 h sounding time (nearly 1900 h in total). We have compiled a unique data set of uninterrupted profiles observed by the same technique (lidar). For the first time, temperature profiles have been combined from Raman, Rayleigh, and resonant backscatter lidars at the same location, providing comparable spatio-temporal resolution over the whole altitude range.
Seasonal temperature variations at mid-latitudes have been published before (e.g. She and von Zahn, 1998; Leblanc et al., 1998). In our study we have strongly extended the existing data set and have concentrated on the latitude of 54°N. This latitude range is of particular importance as it connects polar phenomena like Sudden Stratospheric Warmings (SSW) and noctilucent clouds (NLC) with mid-latitudes. In summer the mesopause temperature is as low as ∼144 K, i.e. the existence of ice particles is possible for a couple of weeks and in a small altitude range between about 85 and 90 km. In winter, temperature profiles are also affected by stratospheric warmings and mesospheric coolings. Due to these transient phenomena, the standard deviation of the nightly mean profiles in January is as large as 20 K at 40 km altitude, and the winter stratopause temperature is about as high as in summer. Regarding the vertical temperature structure, our study generally confirms the findings of Leblanc et al. (1998), who combined observations from different stations to obtain a temperature field between the stratosphere and the lower thermosphere. Nevertheless, there are distinct differences due to our more "polar" location.

We have used harmonic fits of the temperature field at each altitude bin to reduce effects of natural variability and incomplete sampling of the seasonal temperature distribution. Overall, the harmonic fit of annual, semi-annual, and quarter-annual variations nicely reproduces the observed temperature structure. Typically, the differences between observed and fitted temperatures are less than 5 K and are due to the above-mentioned natural inter-annual and day-to-day variability. The influence of SSW on the temperature profiles is reduced, but the result of the harmonic analysis still shows a semi-annual variation in stratopause temperatures; the semi-annual amplitude there is about as large as that of the annual variation, which has its maximum in summer. Comparing the amplitudes of the different harmonics, the annual variation is always dominating, reaching nearly ±27 K in the mesopause region. The quarter-annual variation is smallest, with amplitudes of less than 3 K. In the MLT (mesosphere/lower thermosphere) region (∼70-100 km) the amplitudes of the annual and semi-annual variations are larger than observed at mid-latitude stations around 40°N (Leblanc et al., 1998), but much lower than, e.g., observed near 90 km at polar latitudes of 78°N (Höffner and Lübken, 2007). The annual variation is driven by the residual pole-to-pole circulation, which has its largest effects in the polar regions. Accordingly, the slope of mesopause temperatures with latitude is larger equatorward of our site than in the poleward direction. This again demonstrates the importance of our observations at the edge of the polar region. From the harmonic analysis we obtain a downward propagation of temperature changes, with a phase velocity of about −0.4 km/d between 45 and 90 km altitude (semi-annual) and between 70 and 90 km altitude (annual). This reveals the general importance of waves for the seasonal temperature variation. Additionally, the phase jump of the annual component around 65 km marks the transition between the radiatively driven stratosphere (warm summer) and the dynamically driven MLT (cold summer). The temperature variation at stratopause heights is roughly symmetric about the solstices.
The stratopause temperature has a semi-annual cycle with maxima in winter and summer (266 K and 276 K, respectively) and minima in spring and autumn (263 K and 257 K, respectively). The average stratopause altitude is nearly constant throughout the year, varying only between 47 and 49 km. The mesopause has been identified at temperatures as low as 144 K in summer and about 170 K in winter. The mesopause altitude varies between ∼102 km in winter and 86/87 km in summer, with a transition phase of about two weeks or less. This sharp transition allows us to identify a "summer season" (indicated by a low mesopause) with a length of about 120 days. The temperature minimum of the summer mesopause is observed around day 169, i.e. nearly at summer solstice. This is comparable to measurements near 40°N, but in contrast to the observation of a 1-2 week phase shift at higher latitudes. Our data set reveals some discrepancies compared with the most recent NRLMSISE-00 reference atmosphere and with satellite observations. The MSIS temperatures are generally too high in the whole range between 70 and 105 km and for nearly all seasons. The available satellite observations show the largest discrepancies in the summer mesopause region, where they typically provide too low temperatures and a too low mesopause, which are, e.g., not in agreement with local observations of noctilucent clouds. The largest drawback of our own temperature profiles is the limited daily coverage. We currently aim for complete daylight capability for the range 30-105 km to avoid potential influences of solar tides on the mean temperatures.
Patterns and Universals of Mate Poaching Across 53 Nations: The Effects of Sex, Culture, and Personality on Romantically Attracting Another Person's Partner

As part of the International Sexuality Description Project, 16,954 participants from 53 nations were administered an anonymous survey about experiences with romantic attraction. Mate poaching, romantically attracting someone who is already in a relationship, was most common in Southern Europe, South America, Western Europe, and Eastern Europe and was relatively infrequent in Africa, South/Southeast Asia, and East Asia. Evolutionary and social-role hypotheses received empirical support. Men were more likely than women to report having made and succumbed to short-term poaching across all regions, but differences between men and women were often smaller in more gender-egalitarian regions. People who try to steal another's mate possess similar personality traits across all regions, as do those who frequently receive and succumb to the poaching attempts of others. The authors conclude that human mate-poaching experiences are universally linked to sex, culture, and the robust influence of personal dispositions.

The sexuality measure used (the Sexy Seven; Schmitt & Buss, 2000) assesses dimensions including Exclusivity (whether one is promiscuous and adulterous), Gender Orientation (masculinity and femininity), Sexual Restraint (abstinence and prudishness), Erotophilic Disposition (obscenity, indecency, and lust), Emotional Investment (love and romance), and Sexual Orientation (homosexuality and heterosexuality).

Archival measures. Several archival data sets were used in this article. Gross domestic product per capita (GDP) and the Gender Development Index (i.e., the degree to which men and women differ in the achievement of basic human capabilities, including health, longevity, education, and a decent standard of living) were obtained from the United Nations Development Programme (2001). National sex ratios and the percentage of women in government were obtained from the United Nations Statistics Division (2001).

When people feel romantic desire toward another person, they often act in special ways in hopes of attracting that person. They might try to enhance their appearance by wearing attractive clothes, engage in lively conversation and try to present themselves in a positive light, or make an attempt at derogating the romantic competition in order to improve their relative standing (Schmitt & Buss, 1996; Tooke & Camire, 1991). Occasionally, the romantic competition has a decided head start. The desired partner may be regularly dating another person or may have just embarked on a new romantic relationship. The object of affection may be married, recently engaged, or currently living with a partner. Trying to attract someone who is already in a romantic relationship is known as mate poaching, and it is a process filled with many special challenges and potential pitfalls.

One of the central challenges of mate poaching is that many of the more effective tactics of general romantic attraction seem to backfire in the context of mate poaching, especially those that involve derogating competitors. Instead of using direct tactics, many mate poachers are forced to use indirect means of gaining romantic favor, such as giving furtive glances, slowly invading the target's social networks, and planting subtle seeds of dissatisfaction within the existing relationship. Some tactics do appear to work well in mate poaching, such as men's use of status- and resource-related tactics (Schmitt, 2002).
Almost all mate-poaching tactics must be used with caution, however. The use of openly flirtatious poaching tactics, for example, can stir the wrath of the target's current partner and would likely be seen as inappropriate by the larger community. Indeed, many people feel that the entire process of mate poaching is unethical at its core, especially those who have felt the sting of losing a romantic partner at the hands of a friend or colleague. Despite the prohibitive difficulties associated with mate poaching, recent evidence suggests that poaching does occur, with most people reporting that they have experienced poaching-related attraction in one form or another.

In practice, mate poaching often takes the form of a short-term sexual seduction. Short-term mate poachers seek to elicit only a brief adulterous desertion by the already-mated partner. At times, the mate poacher may desire a more enduring relationship defection, perhaps even the establishment of a new, long-term alliance with the mating target. In most cases, the short-term and long-term targets of mate poachers regard romantic attempts by prospective suitors as either mildly flattering or, at worst, unwelcome attention. However, mate poaching also can result from active and explicit solicitations made by those sexually or emotionally unsatisfied with their current relationships (Glass & Wright, 1985; Grosskopf, 1983). In these cases, it is the poaching targets themselves who actively seek out would-be poachers, either enticing a singular night of adulterous passion or safely securing a more permanent marital replacement (Schmitt & Shackelford, 2003). Whether short-term or long-term, unwelcome or solicited, mate poaching typically involves an intricate web of social deception, interpersonal conflict, and intense emotionality (Shackelford, 1997; Shackelford & Buss, 1996; Shackelford, LeBlanc, & Drass, 2000).

Although much is known about romantic attraction, infidelity, and the emotion of betrayal in isolation (e.g., Buss, 2000; Moore, 1995; Tennov, 1999; Tooke & Camire, 1991; Walters & Crawford, 1994), only recently has there been a concerted effort to understand how each interacts within the unique context of mate poaching (Bleske & Shackelford, 2001). Also at issue has been whether mate poaching is a distinct evolutionary strategy or whether poaching-related attraction simply follows from more general adaptive desires and basic human mating strategies (Schmitt & Shackelford, 2003). In this article, we extend this line of research by examining the psychology of mate poaching from a cross-cultural perspective. We explore the patterns and universals of poaching experiences across 53 nations, representing five continents, 28 languages, and 12 islands. We identify the pancultural and region-specific traits associated with being a mate poacher and with being a popular target of mate poachers. We also test several evolutionary and social-role hypotheses about the effects of sex and culture on romantically attracting someone else's partner. We begin by reviewing what is known about the frequency of mate poaching.

How Often Do People Engage in Mate Poaching?

It has been argued that mate poaching has been a recurrent and perhaps frequent form of romantic attraction over human evolutionary history. Because behavior does not fossilize, it is difficult to know with absolute certainty whether, and to what extent, ancestral humans actually engaged in poaching.
One useful window into our evolutionary past is to look at behavioral regularities among "traditional" cultures that still practice foraging (Brown, 1991; Cronk, 1999), the hunting-and-gathering lifestyle that was prevalent for 99% of human history (Lee & Daly, 1999). Among foraging cultures that exist today, there is some evidence that mate poaching occurs relatively frequently. Marital infidelity rates, for example, tend to be considerable, with at least "occasional" extramarital sex taking place in over 70% of traditional cultures (Broude & Greene, 1976). It has been suggested that many of these infidelities could be the result of strategic short-term mate poaches. Among more developed societies, the occurrence rate of infidelity (defined as the percentage of people who have ever been unfaithful) is also appreciable, ranging from 20% to 75% depending on age, type of relationship, and relationship duration (Blumstein & Schwartz, 1983; Thompson, 1983; Wiederman, 1997). Infidelity prevalence rates, such as the percentage of people who have been unfaithful in the past year, must by definition be somewhat lower than occurrence rates but are still considerable, ranging between 10% and 25% (Blumstein & Schwartz, 1983; Laumann, Gagnon, Michael, & Michaels, 1994). Infidelity rates such as these are observed despite the fact that extramarital sex within modern societies is usually met with more social disapproval than it is within most foraging cultures (Frayser, 1985; Pasternak, Ember, & Ember, 1997).

Another window into the historical occurrence and prevalence of short-term mate poaching is to look at the reproductive consequences of infidelity. Studies of cuckoldry rates, the rates at which men are deceived into raising offspring that are genetically not their own, range from 0.7% (i.e., Switzerland; see Sasse, Müller, Chakraborty, & Ott, 1994) to around 30% (i.e., southeast England; see Philipp, 1973), though most estimates place the value between 10% and 15% in modern populations (see Cerda-Flores, Barton, Marty-Gonzalez, Rivas, & Chakraborty, 1999; Macintyre & Sooman, 1991). Cuckoldry rates are somewhat lower among the foraging cultures that have been studied, ranging between 2% and 9% around the world (see Baker & Bellis, 1995; Neel & Weiss, 1975). Cuckoldry rates of this magnitude suggest that short-term poaching likely pays some reproductive dividends and has done so throughout humans' foraging past. Alongside other evidence indicating that short-term mating and sperm competition are integral parts of the basic human mating system (Barash & Lipton, 2001; Birkhead, 2000; Shackelford & LeBlanc, 2001; Shackelford et al., 2002; Smith, 1984), it seems a plausible scenario that short-term mate poaching occurred with some regularity during our ancestral past.

There is reason to suspect that long-term mate poaching also occurred throughout human evolutionary history. It has been speculated that spousal deaths caused by warfare, birthing difficulties, and sickness (common occurrences in foraging cultures) would have produced a recurring need for people to remarry during their adult lifetimes. Because many of the most valuable partners would already be mated, reentering the mating market would, for many people, have meant trying to attract and retain someone who was already mated. The reproductive advantages to those willing and able to woo away another's partner in these instances may have been considerable.
Moreover, because most cultures include an equal number of men and women, the mating system of polygyny (i.e., the predominant system of foraging peoples, whereby some men have more than one wife; Foley, 1996; Frayser, 1985) would have exacerbated the problem of finding a long-term mate for many men, forcing some men to engage in long-term mate poaching as a necessary sexual strategy. The human tendency toward serial monogamy, the cyclical practice of marriage, divorce, and remarriage (Fisher, 1987), would have provided further reproductive opportunities to those capable of poaching away the most valuable mates.

Although both theoretical rationale (e.g., adaptive problems of widowhood, polygyny, and sperm competition) and indirect pieces of evidence (e.g., infidelity, cuckoldry, and remarriage rates) suggest that poaching has been, and continues to be, a recurrent form of human mating, the direct evidence of short-term and long-term mate poaching is limited. On the basis of responses from a small sample of American college students, previous research found that most people admit to having attempted to poach someone in the past, with men (64%) more likely than women (49%) to report having made short-term poaching forays. Similar poaching occurrence rates were found in an older community sample (60% vs. 38% for men and women, respectively). Over 80% of both men and women reported that they or a past partner had received a poaching attempt. Subjective perceptions of poaching attempts made on oneself may be less veridical than self-reported attempts made by oneself, but it nevertheless seems clear that most people in the study had experienced mate poaching in some form. Nearly half of the college-age men and women who had received a poaching attempt in the past admitted that they had "gone along" with or succumbed to the mate poacher. Similar levels of infidelity occurrence were observed in the community sample. Revealing that one has been unfaithful is, of course, a highly undesirable admission; as a result, these are probably underestimates of the occurrence of true poaching successes. Perhaps most compelling, 15% of people currently in romantic relationships reported that their current relationship directly resulted from mate poaching, either because they poached their current mate or because they were poached into the relationship by their current mate. Because these rates are based only on people's current romantic partnerships, the actual occurrence rates of effective long-term mate poaching may be well above 15%. Finally, around 3% of current relationships resulted from both partners having poached each other out of previous relationships, a comparatively infrequent form that may be termed the "co-poached" relationship. Overall, this research painted a portrait of human mating replete with poaching-related experiences.

Despite the high occurrence of mate poaching in these studies, it remains unknown whether such experiences are limited to a set of small and peculiar American samples (the samples were limited to the Midwest region of the United States) or whether poaching-related attraction occurs with similar regularity across different cultures. Given the apparent frequency and functionality of mate poaching (as evidenced in studies of human infidelity, cuckoldry, and remarriage), an evolutionary psychology perspective might anticipate mate poaching to be universal across cultures.
There is no reason to expect that poaching is the primary or most common form of mating for all people, but the potential adaptive advantages for individuals in certain situations may have been large enough for mate poaching to have become a pancultural form of romantic attraction. Moreover, if mate poaching does exist across all cultures, an evolutionary perspective would be interested in whether it constitutes a distinct evolutionary strategy or whether it follows as a consequence of more generalized mating adaptations. If mate poaching is a distinct strategy, it should show evidence of "special design" across cultures (Gangestad, 2001; Gaulin, 1997; Williams, 1966). For example, if there are theoretical reasons for expecting people within certain environmental situations (e.g., an unbalanced sex ratio) to functionally benefit from mate poaching, and people in those conditions are significantly more likely than others to try, and to succeed, at mate poaching, this would provide partial (though not complete) evidence that mate poaching might be a distinct mating strategy. Such evidence could indicate that humans have psychological adaptations that take in specific information about the environment (both the physical environment and one's own personal characteristics) and then adjust mate-poaching behavior in functionally specific ways. Thus, just as possessing the personal attribute of physical attractiveness (e.g., high levels of bodily symmetry) may lead some men to functionally pursue a short-term mating strategy (Gangestad & Simpson, 2000), being in a culture with an unbalanced sex ratio or possessing particular personality traits may functionally evoke mate-poaching behavior. We next review what is known about the specific personal characteristics of mate poachers.

What Type of Person Engages in Mate Poaching?

Previous research in American samples found that people who more frequently attempt to poach another's romantic partner score higher on certain personality trait scales. Using a measure of the Big Five dimensions of personality (Goldberg, 1992) and the Sexy Seven dimensions of sexuality (Schmitt & Buss, 2000), this research found that mate poachers described themselves as especially disagreeable, unconscientious, unfaithful, and erotophilic (see Fisher, Byrne, White, & Kelley, 1988). It has been speculated that the lack of empathy associated with disagreeableness (Graziano & Eisenberg, 1997) and the immorality associated with low conscientiousness (Hogan & Ones, 1997) are key ingredients in the causal etiology of poaching, perhaps serving as psychological "releasing factors" for mate-poaching attempts (see also Foster, Shrira, Campbell, & Stone, 2002).

People who were especially successful at mate poaching also scored high on certain personality trait scales. Those who reported success at poaching described themselves as relatively open to new experiences and reported being sexually attractive, relationally unfaithful, sexually unrestrained (not celibate), and erotophilic. The finding that successful mate poachers find it comfortable to talk about sex (i.e., high erotophilia) suggests that open conversations and curiosity about sexual matters may be a key milieu for successful mate-poaching endeavors.

People who frequently received mate-poaching attempts (i.e., those who are common targets of poaching) also possessed certain traits.
This research found that frequent targets of mate poachers described themselves as more extraverted, open to experience, attractive, unfaithful, and loving than other people did. The combination of extraversion and openness may provide a special opportunity to poachers, as already-mated partners who are highly social and open to new ideas would be more likely to interact with those looking to poach. The fact that attractive and loving people were common targets of poaching was unsurprising, given that these attributes are universally desired in potential mates (Buss, 1989). The finding that unfaithful people are common targets suggests that mate poachers are functionally selective in choosing to attract those who are likely to succumb to poaching forays.

People who have succumbed to poaching attempts (i.e., those who have been unfaithful) described themselves as disagreeable, unconscientious, neurotic, unfaithful, erotophilic, and unloving. Similar to mate poachers, unfaithful people display a lack of empathy and morality in their personality, combined with high neuroticism and erotophilia. Again, the finding that both successful mate poachers and those who are successfully poached find it comfortable to talk about sex (i.e., high erotophilia) suggests that curiosity and openness about sexual matters are potential catalysts for successful mate-poaching endeavors.

How Important Is Culture to Mate Poaching?

The studies and findings reported thus far (mate-poaching frequencies, sex differences in mate poaching, and the personality traits of poachers and poaching targets) were primarily based on responses from college students in the United States. We attempted to replicate these findings across 10 major regions of the world using multiple college student and community samples from 53 individual nations. The 10 world regions were North America (represented by 3 nations), South America (5 nations), Western Europe (8 nations), Eastern Europe (11 nations), Southern Europe (6 nations), the Middle East (3 nations), Africa (7 nations), Oceania (2 nations), South/Southeast Asia (4 nations), and East Asia (4 nations). In addition to replicating the features of mate-poaching psychology identified in previous research, we tested four hypotheses about patterns and universals of mate poaching across world regions.

Hypothesis 1

Our first hypothesis was as follows: Proportionately more men than women will attempt and succumb to short-term mate poaching across all world regions; proportionately more women than men will receive and be successful at short-term mate poaching across all world regions.

Sexual strategies theory (Buss & Schmitt, 1993) postulates that sex differences in human reproductive biology have led to fundamental differences in men's and women's sexual psychology (see also Symons, 1979; Trivers, 1972). In particular, because men need not invest as much as women to produce viable offspring (women are minimally required to invest in gestation, placentation, and lactation), men can reap greater reproductive benefits than women can from mating with multiple partners. It is not the case that all men are indiscriminate maters at all times. Men can be very discriminating when choosing a long-term marriage partner, for example (Kenrick, Sadalla, Groth, & Trost, 1990), and many men choose long-term mating as their primary sexual strategy (Gangestad & Simpson, 2000).
However, sexual strategies theory further postulates that when men actively seek short-term mates, they do so with less discriminating tastes than women do and that, on average, men will spend more effort seeking short-term mates than women will (see also Schmitt & International Sexuality Description Project, 2003). To understand the adaptive value of multiple mating for men, consider that one man can produce as many as 100 offspring by indiscriminately mating with 100 women in a given year, whereas a man who is monogamous will tend to produce only one child during that same period. In contrast, whether a woman mates indiscriminately with 100 men or more reservedly with one man, she will still tend to produce only one child in a given year. This profound difference in the potential reproductive benefits of promiscuous or indiscriminate sex leads to the hypothesis that men, more than women, will seek multiple mates. Short-term mate poaching would help to achieve this fundamental adaptive goal of men's short-term mating strategy, and so men are predicted to attempt more short-term mate poaches than women; and, when in a relationship, men are predicted to succumb more often to short-term mate poaches directed at themselves. Sexual strategies theory (Buss & Schmitt, 1993) makes it clear that women can reap adaptive benefits from occasional short-term mating (see also Gangestad, 2001; Greiling & Buss, 2000; Hrdy, 1981). However, women's short-term sexual strategy appears to be focused more on selectively obtaining men of high status and genetic quality than on obtaining numerous partners in high quantity (Gangestad & Thornhill, 1997; Schmitt, Shackelford, Duntley, Tooke, & Buss, 2001; Smith, 1984). Because men will attempt more short-term mate poaching, it is also predicted that women will report receiving more short-term mate-poaching attempts. Also, because men will succumb more to short-term poaching, it is predicted that women will report more success in their short-term poaching attempts.

Hypothesis 2

Our second hypothesis was as follows: World regions with more demanding environments will have lower rates of short-term mate poaching.

According to strategic pluralism theory (Gangestad & Simpson, 2000), humans possess a menu of alternative mating strategies (see also Belsky, 1999; Chisholm, 1996; Gross, 1996; Thiessen, 1994). Which strategy is followed depends in part on local environmental conditions. When local environments are demanding and the difficulties of rearing offspring are high, for example, the adaptive need for biparental care increases. Because both men and women are needed to raise offspring successfully in more demanding environments, Gangestad and Simpson (2000) argued that the importance of fidelity and heavy family investment should correspondingly increase in such environments: "In environments where male parenting qualities are needed and valued, women should be less likely to engage in short-term mating and extra-pair mating. In response to this, men should devote greater effort to parental investment" (Gangestad & Simpson, 2000, p. 585). If true, this suggests that in cultures with more demanding environments (e.g., fewer resources), rates of short-term mate poaching, an index of infidelity, should be lower. Conversely, in cultures with abundant resources, short-term mate poaching should be more common in both occurrence and prevalence.
Hypothesis 3

Our third hypothesis was as follows: World regions with more men than women will have higher rates of mate poaching by men, whereas regions with more women than men will have higher rates of mate poaching by women. Operational sex ratio can be defined as the relative number of men to women in the local mating pool (Guttentag & Secord, 1983; Pedersen, 1991). In most cultures, women tend to slightly outnumber men because of men's higher mortality rate (Daly & Wilson, 1988). Nevertheless, significant variation exists in sex ratios across cultures and within cultures when viewed over historical time (Guttentag & Secord, 1983). According to Pedersen (1991), when men outnumber women, women become a more valued resource over which men compete with greater-than-average intensity (see also Guttentag & Secord, 1983). When women noticeably outnumber men, on the other hand, women become more competitive over access to relatively scarce men. Because an excess of one sex exacerbates the problem of finding a mate, the heightened intrasexual competition associated with imbalanced sex ratios may accentuate rates of mate poaching. In regions where men outnumber women, for example, men should report higher rates of mate-poaching attempts. In regions where women outnumber men, in contrast, women should report higher rates of mate-poaching attempts.

Hypothesis 4

Our fourth hypothesis was as follows: Sex differences in short-term mate poaching should be larger in regions with traditional sex-role ideologies and smaller in regions with liberal sex-role ideologies (as indexed by women's political and economic equality). According to the social structural theory of Eagly and Wood (1999), men and women do not possess adaptations that are specifically designed to cause sex differences in sexuality, including short-term mating tendencies (see also Wood & Eagly, 2002). Instead, Eagly and Wood (1999) assumed that humans have evolved the tendency to have different social structures for men and women and that any "differences in the minds of men and women arise primarily from experience and socialization" (p. 414) once in those different social roles. Thus, when men and women differ, it is because they have received dissimilar socialization experiences, particularly those experiences associated with a society's bifurcated social and gender roles (Eagly, 1987; Kasser & Sharma, 1999; Maccoby, 1998). The degree to which men and women are forced to inhabit dissimilar social roles, and eventually develop psychological differences, is something that can vary across cultures (see Williams & Best, 1990). From this social structural perspective, sex differences in short-term mate poaching, if they exist, are likely produced by social-role differences, especially the different economic and family tasks that men and women perform (Eagly & Wood, 1999; Wood & Eagly, 2002). This social structural perspective generates the following hypothesis: In regions where women are more socially restricted in terms of politics and economics, sex differences in short-term mate poaching should be larger. Within regions that possess more "modern" or progressive sex-role ideologies, where women have greater access to power and money and are able to make their own decisions, women are allowed to explore a wider array of roles.
Both men and women enjoy less burdensome and gender-constraining social structures in regions with modern sex-role ideologies (Williams & Best, 1990), and "when men and women occupy the same specific social role, sex differences . . . tend to erode" (Eagly & Wood, 1999, p. 413). Thus, sex differences in short-term mate poaching should be smaller, or perhaps even absent, in regions with more gender equality.

Samples

The research reported in this article is a result of the International Sexuality Description Project (ISDP; Schmitt et al., in press), a collaborative effort of over 100 social, behavioral, and biological scientists. Fifty-six nations composed the full span of ISDP cultures. In 3 of these nations (Fiji, India, and Jordan), mate-poaching experiences were not assessed. The current data set therefore included samples from 53 nations. Collaborators were asked to administer an anonymous nine-page survey to at least 100 men and 100 women. Some nations, such as the United States and Canada, included many convenience samples, and so the national sample size was much larger than 200. As seen in Table 1, several national samples failed to reach the designated sample size of 100 men and 100 women. Because of the small sample sizes for several individual nations, and because individual nations used varying poaching assessment formats (see below), the 53 nations were collapsed into 10 basic world regions when conducting key statistical analyses. The 10 world regions included North America (N = 1,470 men, 2,553 women), South America (N = 445 men, 591 women), Western Europe (N = 1,084 men, 1,862 women), Eastern Europe (N = 1,207 men, 1,550 women), Southern Europe (N = 497 men, 836 women), the Middle East (N = 503 men, 552 women), Africa (N = 684 men, 548 women), Oceania (N = 315 men, 446 women), South/Southeast Asia (N = 300 men, 359 women), and East Asia (N = 563 men, 589 women). For each world region, at least 200 participants (100 men and 100 women) were included, providing the necessary statistical power (when setting 1 − β = .90 and α = .05, and when looking for effects moderate in size; Cohen, 1988; see the sketch below) for evaluating regional variation in sex differences. In addition, these 10 world regions have proven useful in previous studies of romantic attachment, sexual desire, and human mating strategies (Schmitt et al., in press; Schmitt & ISDP, 2003), and nations within each region were, on average, more similar to one another in mate poaching than nations from different regions were.

Note (Table 1). The nations of Jordan, Fiji, and India also were a part of the International Sexuality Description Project (ISDP), but the mate-poaching measure was not administered to those samples. Most ISDP samples were composed of college students; some included members of the community. All samples were convenience samples. In several samples, a between-subjects design was used: half the participants were administered the short-term version of the mate-poaching measure, and the other half were administered the long-term version (indicated in the Design column by "Between"). Some samples were administered both short-term and long-term versions of the mate-poaching measure (indicated by "Within"). Some samples were administered only short-term or long-term formats. Finally, some nations contained a mix of assessment formats (indicated by "Mixed"). Further details on sampling methods within each culture are available from the authors.
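As a rough check on the power claim above, the following minimal sketch (not from the original article) computes the required sample size per group with statsmodels, assuming a two-tailed two-sample t test on a moderate effect of d = 0.50 as the reference analysis:

```python
# Minimal power sketch: how many participants per sex are needed to detect
# a moderate standardized mean difference (d = 0.50; Cohen, 1988) at
# alpha = .05 with power (1 - beta) = .90, using a two-sample t test.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.50,  # Cohen's d for a "moderate" effect
    alpha=0.05,        # two-sided Type I error rate
    power=0.90,        # 1 - beta
    ratio=1.0,         # equal numbers of men and women
)
print(f"Required n per group: {n_per_group:.1f}")  # ~86, so 100 per sex is ample
```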
Participants in most samples were recruited as volunteers; some received course credit for participation, and others received a small monetary reward. All samples were administered an anonymous self-report survey; most surveys were returned via sealed envelope or a drop box. Return rates for college student samples were relatively high (around 95%), although this number was lower in some cultures. Return rates for community samples were around 50%. Further details on the sampling and assessment procedures within each of the world regions and national samples are provided elsewhere (Schmitt et al., in press; Schmitt & ISDP, 2003) and are available from David P. Schmitt.

Procedure

All participants were provided with a brief description of the study, including the following written instructions:

This questionnaire is entirely voluntary. All your responses will be kept confidential and your personal identity will remain anonymous. No identifying information is requested on this survey, nor will any such information be added later to this survey. If any of the questions make you uncomfortable, feel free not to answer them. You are free to withdraw from this study at any time for any reason. This series of questionnaires should take about 20 minutes to complete. Thank you for your participation.

The full instructional set provided by each collaborator varied, however, and was adapted to fit the specific culture and type of sample. Details on incentives and cover stories used across samples are available from David P. Schmitt.

Measures

Translation procedures. Researchers from nations where English was not the primary language were asked to conduct a translation/back-translation procedure and administer the ISDP measures in their native language. This process typically involved the primary collaborator translating the measures into the native language of the participants and then having a second bilingual person back-translate the measures into English. Differences between the original English and the back-translation were discussed, and mutual agreements were made as to the most appropriate translation. In general, this is regarded as more of an "etic" approach to cross-cultural psychology (Church, 2001). This procedure attempts to balance the competing needs of making the translation meaningful and naturally readable to the native participants while preserving the integrity of the original measure and its constructs (Brislin, 1980). As seen in Table 1, this process resulted in the survey being translated into 26 different languages. Samples from Ethiopia, Hong Kong, Morocco, and the Philippines were administered the survey in English, but certain terms and phrases were annotated to clarify what were thought to be confusing words for the participants. The translation of the ISDP survey into the Flemish dialect of Dutch used only a translation procedure, as this involved minor word-variant changes from the original Dutch. Pilot studies were conducted at several testing sites to clarify translation and comprehension concerns.

Demographic measure. Each sample was first presented with a demographic measure entitled Confidential Personal Information. This measure included questions about sex (male, female), age, sexual orientation (heterosexual, homosexual, bisexual), current relationship status (e.g., married, cohabiting, dating one person exclusively, not currently involved with anyone), and socioeconomic status (lower, lower middle, middle, upper middle, upper).
Mate-poaching inventory. All participants were presented with one of two versions of a questionnaire entitled the Anonymous Romantic Attraction Survey (ARAS). The ARAS asked a series of questions about personal experiences with romantic attraction and mate poaching. One version of the ARAS asked about short-term mate attraction experiences (i.e., brief affairs, one-night stands), and the other version asked about long-term mating experiences (i.e., potential marital relationships). Each rating scale on the questionnaire asked participants to describe their experiences with a specific attraction behavior. For the frequency of attempting poaching behaviors, rating scale values ranged from 1 (never) to 7 (always). Intermediate values were labeled rarely, seldom, sometimes, frequently, and almost always. For the degree of success in mate poaching, rating scales ranged from 1 (not at all successful) to 7 (very successful). An intermediate value of 4 (moderately successful) also was provided. These particular frequency and degree anchors tend to maximize the interval-level quality of rating-scale data (Spector, 1992).

Seven items from the ARAS were relevant to the present study. The first ARAS question asked about the frequency with which participants have attempted to mate poach: "Have you ever tried to attract someone who was already in a romantic relationship with someone else for a short-term sexual relationship with you?" The second question asked, "If you have tried to attract someone who was already in a relationship for a short-term sexual relationship with you, how successful have you been (if you have never tried, skip this question)?" The third question asked about the participants' experiences with others trying to take them away from past mating partners: "While you were in a romantic relationship, have others tried to attract you as a short-term sexual partner?" A fourth item asked, "While you were in a romantic relationship, if others attempted to obtain you as a short-term sexual partner, how successful have they been (if others have never tried, skip this question)?" As noted in Table 1, some participants received versions of the ARAS in which they were asked about these four items in the context of short-term poaching and some in the context of long-term mate poaching; in a few samples, participants received both short-term and long-term versions. Finally, all participants were asked three questions about their current relationship status: (a) "Are you currently in a romantic relationship?" (b) "Are you in a romantic relationship right now with a partner whom you attracted away from someone else?" and (c) "Are you in a romantic relationship right now with a partner who attracted you away from someone else?" After all three questions, participants were asked to circle either a "Yes" or a "No" option.

Personality traits. All samples were administered the Big Five Inventory (BFI) of personality traits (Benet-Martínez & John, 1998). The 44-item English BFI was constructed to allow quick and efficient assessment of five personality dimensions: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness (Benet-Martínez & John, 1998).
Example items from the BFI include "I see myself as someone who is outgoing, sociable" (Extraversion), "I see myself as someone who is helpful and unselfish with others" (Agreeableness), "I see myself as someone who is a reliable worker" (Conscientiousness), "I see myself as someone who worries a lot" (Neuroticism), and "I see myself as someone who is curious about many different things" (Openness). Self-report ratings for each item were made on a scale from 1 (disagree strongly) to 5 (agree strongly). This self-report measure was used because of its ease of administration and its brevity and because it has proven useful for cross-language and cross-cultural research (Benet-Martínez & John, 1998).

Sexuality attributes. Most samples were administered a measure of the "Sexy Seven" sexuality attributes (Schmitt & Buss, 2000). The Sexy Seven Measure asks participants to rate themselves compared with others they know (using a 9-point scale ranging from 1 = extremely inaccurate to 9 = extremely accurate) on a list of 67 sexually connotative adjectives. The Sexy Seven scales that are scored from these self-ratings include Sexual Attractiveness (including facets of beauty and seduction), Relationship Exclusivity (whether one is promiscuous and adulterous), Gender Orientation (masculinity and femininity), Sexual Restraint (abstinence and prudishness), Erotophilic Disposition (obscenity, indecency, and lust), Emotional Investment (love and romance), and Sexual Orientation (homosexuality and heterosexuality).

Archival measures. Several archival data sets were used in this article. Gross domestic product per capita (GDP) and the Gender Development Index (i.e., the degree to which men and women differ in the achievement of basic human capabilities, including health, longevity, education, and a decent standard of living) were obtained from the United Nations Development Programme (2001). National sex ratios and the percentage of women in government were obtained from the United Nations Statistics Division (2001).

Frequency of Mate Poaching Across Cultures

The frequency of mate poaching was examined in multiple ways. We examined both the occurrence (i.e., has it ever happened?) and the prevalence (i.e., how often has it happened?) of mate-poaching experiences. This included whether and to what extent one has attempted a mate poach, whether and to what extent one has been successful at mate poaching, whether and to what extent one has received a mate poach, and whether and to what extent one has succumbed to a past mate-poach attempt. We present the complete profile of mate-poaching experiences for short-term and long-term poaching and for men and women, separately. For comparative purposes, the results in Tables 2-10 follow the basic analysis strategy used by Schmitt and Buss (2001).

Have you attempted to attract someone who was already in a relationship? Similar to Schmitt and Buss (2001), we first examined the occurrence of mate poaching by identifying the percentage of participants who responded above 1 (1 = never, 2 = rarely, 3 = seldom, 4 = sometimes, etc.) to the ARAS item, "Have you ever tried to attract someone who was already in a romantic relationship with someone else for a short-term sexual relationship with you?" (the sketch below illustrates how the occurrence and prevalence indexes are computed). Using this categorization strategy, Schmitt and Buss (2001) found that approximately 60% of men and 40% of women reported having made at least some attempt at short-term mate poaching. As shown in Tables 2 and 3, this finding was replicated in the North American region (62.1% for men, 39.9% for women).
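The two indexes are simple to compute from raw ratings; the following minimal sketch (hypothetical responses, not the ISDP data) shows both:

```python
# Minimal sketch of the two frequency indexes used throughout the Results:
# occurrence = the percentage of respondents scoring above 1 ("never") on
# the 1-7 scale; prevalence = the mean rating on that same scale.
import numpy as np

ratings = np.array([1, 1, 2, 1, 4, 1, 3, 1, 2, 1])  # illustrative 1-7 responses

occurrence = 100 * np.mean(ratings > 1)  # ever attempted a poach, in percent
prevalence = ratings.mean()              # how often, on the 1-7 scale

print(f"Occurrence: {occurrence:.1f}%")  # 40.0% responded above "never"
print(f"Prevalence: {prevalence:.2f}")   # 1.70, i.e., below "rarely" (2)
```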
The occurrence of attempting a short-term mate poach was significantly higher for men than for women across all regions, supporting Hypothesis 1. Table 2 displays the magnitude of sex differences in short-term poaching occurrence using the phi (φ) statistic. Sex differences are considered small if phi exceeds ±0.10, moderate if phi exceeds ±0.30, and large if phi exceeds ±0.50 (Cohen, 1988). For all world regions, sex differences in short-term mate-poaching attempts were small to moderate in magnitude, confirming the hypothesis that men expend more mating effort on short-term mateships than women do (Buss & Schmitt, 1993; Schmitt & ISDP, 2003). Exceptions to the seemingly high occurrence of short-term poaching included the finding that only 29.5% of East Asian males had ever engaged in short-term poaching and that less than 30% of women from the Middle East, Africa, South/Southeast Asia, and East Asia reported having made a short-term poaching attempt. Overall, though, more than 50% of men and 30% of women from around the world responded above 1 on the short-term form of this scale. From these data, we conclude that most college-age men and around a third of college-age women across cultures have engaged in at least some short-term mate poaching.

We examined mean levels on this ARAS scale as an index of the prevalence of mate-poaching attempts. Using this analysis strategy (see Table 3), the prevalence of short-term mate poaching overall for men was 2.32 (SD = 1.47), whereas the average woman rated only 1.68 (SD = 1.10). This would suggest that most men "rarely to seldom" engage in short-term mate-poaching attempts, whereas most women average below "rarely" on this scale. Despite these relatively low averages, however, the mean level of short-term mate seeking for men was significantly higher than for women across all world regions, again supporting Hypothesis 1. We used the d statistic to evaluate the magnitude of mean differences between men and women. Differences using the d statistic are considered small if d exceeds ±0.20, moderate if d exceeds ±0.50, and large if d exceeds ±0.80 (Cohen, 1988; both effect-size statistics are illustrated in the sketch below). The largest sex differences in the prevalence of short-term poaching attempts were found in South America (d = .61) and Southern Europe (d = .60); the smallest occurred in South/Southeast Asia (d = .32) and East Asia (d = .32). Most regions exhibited sex differences close to the worldwide average (d = .43). From these mean-level analyses, we conclude that among college-age men and women, there is a significant and moderately sized sex difference in the prevalence of seeking already-mated partners for short-term sexual experiences.

As noted earlier, some participants completed a long-term mating version of the ARAS. Schmitt and Buss (2001) found that around 55% of men and women reported that they had made at least some attempt at long-term mate poaching. As seen down the right-hand side of Table 2, similar percentages of long-term mate poaching were reported across every ISDP world region, although the rates were somewhat lower in South/Southeast Asia. Across all cultures, women's occurrence rates were slightly higher for long-term poaching (43.6%) than for short-term poaching (34.9%), χ²(1, N = 9,883) = 160.04, p < .001, φ = .13. This was not true for men.
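Both effect-size statistics are easy to reproduce; the following minimal sketch (illustrative counts, plus the worldwide means and SDs reported above) computes phi from a 2 × 2 sex-by-occurrence table and Cohen's d from group means:

```python
# Minimal sketch of the two effect sizes used in Tables 2-10: phi for sex
# differences in occurrence (a 2x2 table) and Cohen's d for sex differences
# in prevalence (a standardized mean difference).
import math

# 2x2 counts (illustrative): rows = men/women, columns = ever poached yes/no
a, b = 62, 38   # men: yes, no
c, d = 40, 60   # women: yes, no
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(f"phi = {phi:.2f}")  # 0.22, a small-to-moderate sex difference

# Cohen's d from group means/SDs (worldwide values above; equal n assumed)
m_men, sd_men, m_women, sd_women, n = 2.32, 1.47, 1.68, 1.10, 100
sd_pooled = math.sqrt(((n - 1) * sd_men**2 + (n - 1) * sd_women**2) / (2 * n - 2))
cohens_d = (m_men - m_women) / sd_pooled
print(f"d = {cohens_d:.2f}")  # ~0.49 with these inputs
```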
It is interesting to note that, although East Asian women were conspicuously low on short-term poaching (14.9%), they were close to the overall average on long-term mate poaching (33.5%). As with the prevalence of short-term poaching attempts, the mean levels of long-term poaching were less than substantial. The average man rated 2.42 (SD = 1.42) on the 1-7 frequency scale, and the average woman rated only 1.94 (SD = 1.21). This would suggest that most men "rarely to seldom" engage in long-term mate-poaching attempts, whereas women average just below "rarely" on this scale. It is interesting to note that the difference between the prevalence of women's short-term and long-term poaching attempts was significant, t(9881) = 10.87, p < .001, d = .22. The difference between men's short-term and long-term poaching attempts was one third the size, t(7063) = 2.77, p < .01, d = .07. Most regions exhibited sex differences in long-term mate-poaching attempts close to the worldwide average (d = .33), although in Oceania the difference was negligible (d = .02) and in Africa the sex difference was large (d = .75). From these mean-level analyses, we conclude that among college-age men and women, there is a significant and small to moderately sized sex difference in the prevalence of seeking already-mated partners for long-term mating experiences.

Have you successfully attracted someone who was already in a relationship? A second point of interest was whether participants from each region had successfully attracted someone who was already in a relationship. We examined the occurrence of whether our participants had ever successfully mate poached by asking, "If you have tried to attract someone who was already in a relationship for a short-term sexual relationship with you, how successful have you been (if you have never tried, skip this question)?" Responses greater than 1 (1 = not at all successful) were interpreted as indicating that the participant had been at least somewhat successful at poaching away a past partner (again, some participants received the long-term version of this question).

The percentages in Table 4 are based only on the responses of people who have attempted a mate poach in the past. These data do not represent base rates of infidelity or serial monogamy, per se. Rather, they represent the relative mate-poaching efficacy of those subgroups of men and women that have targeted mates for poaching in the past. Among North American men, for example, 62.1% had attempted a short-term mate poach (see Table 2). According to Table 4, 84.1% of those men who had attempted short-term poaching achieved some level of success. Thus, 52.2% of men in the ISDP North American sample (i.e., 84.1% of 62.1%) had successfully engaged in short-term mate poaching (this chaining of rates is illustrated in the sketch below). For women, about 33.7% admitted to having ever successfully engaged in a short-term mate poach. Overall, there were few sex differences in the occurrence of successful short-term mate poaching. We examined the mean levels on this ARAS scale as an index of the prevalence of short-term mate-poaching success. The average man rated 3.86 (SD = 1.94) on the 1-7 frequency scale, and the average woman rated 4.38 (SD = 2.13; see Table 5). This would suggest that most men and women who have attempted a short-term mate poach have been "moderately" successful at it.
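The unconditional success rate quoted above is simply the product of two reported proportions; a minimal sketch of the arithmetic:

```python
# Minimal sketch of the rate chaining used above: the overall rate of
# successful short-term poaching equals the attempt rate multiplied by the
# success rate among attempters (North American men, Tables 2 and 4).
attempt_rate = 0.621            # ever attempted a short-term poach (Table 2)
success_given_attempt = 0.841   # at least some success, among attempters (Table 4)

overall_success_rate = attempt_rate * success_given_attempt
print(f"{overall_success_rate:.1%}")  # 52.2% of all men in the sample
```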
It is interesting to note that, in most cases, women were more successful than men at short-term mate poaching, although this difference was significant only in North America, Western Europe, Eastern Europe, and Oceania. An exception to this trend was found in Africa, where men reported higher success rates. Still, from these mean-level analyses, we conclude that there is a trend for the prevalence of successful short-term mate poaching to be higher in women than in men. This trend may be seen as supporting Hypothesis 1, in that women's greater effectiveness in short-term poaching may come as a result of men's greater interest in short-term mateships (Buss & Schmitt, 1993; Schmitt & ISDP, 2003).

The occurrence and prevalence of successful long-term mate poaching were similar to those of short-term poaching. However, in most cases, there were no sex differences in reports of long-term mate-poaching success. An exception to this trend was found in South America, where men reported a higher occurrence (φ = .18) and prevalence (d = .24) of successful long-term mate poaching than women did. It is interesting to note that the difference between the prevalence of women's short-term and long-term poaching success was significant, t(3602) = 5.74, p < .001, d = .20. The difference between men's short-term and long-term poaching success was nonsignificant and was one fourth the magnitude of the difference for women, t(3838) = 1.03, ns, d = .05. From these mean-level analyses, we conclude that there is a trend for the occurrence and prevalence of successful long-term mate poaching to be similar in men and women, but for women, success at short-term mate poaching is noticeably greater than success at long-term mate poaching.

Has anyone tried to attract you while you were already in a relationship? A third point of interest was whether participants from each region had experienced someone trying to poach them while they were in a past relationship. Responses greater than 1 (1 = never) to the question, "While you were in a romantic relationship, have others ever attempted to obtain you away from your partner for a short-term sexual relationship?" indicated that the participant had at some point received a short-term mate-poaching attempt. Schmitt and Buss (2001) found that nearly 80% of men and women had received a mate-poaching attempt. As seen in Table 6, the worldwide occurrence of receiving a mate-poaching attempt was about 70% for men and women from most world regions. These short-term poaching percentages appeared somewhat higher in Western cultures (e.g., the Americas and all of Europe) than in African and Asian cultures.

Few sex differences were observed in the occurrence of receiving either short-term or long-term mate-poaching attempts. In Oceania, women were more likely than men to report receiving long-term poaching attempts, whereas in South/Southeast Asia men were more likely than women to report receiving long-term poaching attempts. Sex differences in receiving short-term mate-poaching attempts were nonsignificant within regions, although the worldwide occurrence of receiving short-term poaching attempts was higher for women than for men. The failure to find sex differences in the reception of short-term poaching attempts within regions would seem to conflict with men's self-reported tendency to make more short-term poaching attempts. However, men are more likely, in general, to perceive sexual interest from the opposite sex (Abbey, 1982).
This may be an adaptive vigilance that leads men to be hypersensitive to short-term mating possibilities (Haselton & Buss, 2000). Consequently, men in this study may have subjectively overestimated the short-term poaching attempts made by women. At the same time, this particular perceptual bias would not necessarily lead men to overestimate the relatively objective rates at which they made short-term poaching forays. Thus, it is possible for men to accurately report higher rates than women do in making short-term poaching attempts, while men overestimate the short-term interest of women and report similar perceptions of receiving short-term poaching attempts.

We examined the mean levels on this ARAS scale as an index of the prevalence of receiving mate-poaching attempts. The average man rated 2.78 (SD = 1.55) and the average woman rated 2.98 (SD = 1.64), t(11181) = −6.39, p < .001, d = −.12 (see Table 7). This would suggest that most men and women "rarely to seldom" receive short-term mate-poaching attempts. In contrast to occurrence rates, the prevalence of receiving short-term mate-poaching attempts did display some sexual differentiation across regions. In most regions, women reported significantly higher prevalence rates of receiving short-term poaching attempts, supporting Hypothesis 1. Only in East Asia were men significantly more likely to report receiving short-term poaching attempts. As noted above, it was possible that women would not report receiving more short-term attempts, due in part to men's potential hypersensitivity to sexual interest by women (Haselton & Buss, 2000). In addition, women from four world regions (North America, Western Europe, the Middle East, and Oceania) reported higher prevalence rates of receiving long-term poaching attempts. From these mean-level analyses, we conclude that among college-age men and women, there is some evidence that women report receiving more attempts at mate poaching, particularly in the context of short-term poaching, than men do.

Have you succumbed to a mate-poaching attempt when someone tried to attract you away from a previous partner? We examined the occurrence of whether participants had ever been successfully poached from a past relationship by asking, "If others have attempted to obtain you as a short-term sexual partner, how successful have they been (if others have never tried, skip this question)?" Responses greater than 1 (1 = not at all successful) were interpreted as indicating that the participant had been at least somewhat successfully poached away from a past partner (again, some participants received the long-term version of this question). We chose this analysis strategy because it was used in a previous study in which 50% of men and 35% of women had succumbed to a short-term poaching attempt (Schmitt & Buss, 2001). As seen in Table 8, around 60% of men and 45% of women worldwide reported that they had succumbed to a short-term mate poach at some point in their past. For long-term poaching, over 60% of men and 50% of women reported that they had succumbed to a poaching attempt at some point in their lives. These percentages are based only on the responses of people who have received a mate-poaching attempt. These data do not represent base rates of infidelity or serial monogamy.
Rather, they may represent the relative susceptibility of those subgroups of men and women that have been targeted by mate poachers. Among North American men, for example, 74.5% had received a short-term mate-poaching attempt (see Table 6). According to Table 8, 63.3% of those men who had received an attempted short-term poach went along with it. Thus, 47.2% of men in the ISDP North American sample (i.e., 63.3% of 74.5%) had ever engaged in a short-term affair as a result of poaching. For women, about 32% admitted to having gone along with a short-term mate poach. These percentages are in line with other studies of infidelity among college-age individuals and dating couples from North America (see Wiederman, 1997; Wiederman & Hurd, 1999).

The occurrence rates of going along with a short-term mate poach were significantly higher for men than for women across all cultures, supporting Hypothesis 1 (see Table 8). For long-term poaching, men reported significantly more success in South America, Western Europe, Eastern Europe, Africa, Oceania, and South/Southeast Asia. The primary difference between the occurrence of succumbing to short-term and long-term poaching was among women, with more women succumbing to long-term poaching (54.4%) than to short-term poaching (45.0%), χ²(1, N = 6,925) = 101.03, p < .001, φ = .12.

We examined the mean levels on this ARAS scale as an index of the prevalence of succumbing to mate-poaching attempts. The average man rated 2.86 (SD = 1.91) and the average woman rated 2.10 (SD = 1.58), t(7867) = 19.09, p < .001, d = .40 (see Table 9). This suggests that most men who receive short-term poaching attempts "seldom" succumb to the poachers, whereas women "rarely" succumb when they receive short-term mate-poaching attempts. Within each of the 10 ISDP world regions, men reported significantly higher prevalence rates of succumbing to short-term mate-poaching attempts, again supporting Hypothesis 1. In long-term mate poaching, men were more likely to succumb only in Western Europe and Eastern Europe. From these mean-level analyses, we conclude that, among college-age men and women, there is evidence that men succumb to short-term poaching attempts more frequently than women do.

Is your current romantic relationship the result of mate poaching? When participants were asked about their current relationship status, over half of the men and women reported being in a romantic relationship. This is typical of college-student samples (e.g., Buss, Larsen, Westen, & Semmelroth, 1992). Of those participants who reported that they were currently in a romantic relationship, around 12% of men and 8% of women reported that their current relationship resulted from their having attracted their current partner away from someone else (see Table 10). These data provide a relatively clear and direct estimate of recent long-term mate-poaching success, suggesting that around 10% of current relationships result from mate poaching. We also asked whether participants had been lured away from a past partner into their current relationship. About 14% of women and 10% of men reported that they had been poached into their current romantic relationship. Finally, the percentage of relationships that resulted from both partners poaching each other into the relationship (i.e., a copoach) varied from a low of 1.7% in South America to a high of 7.7% in South/Southeast Asia. On the basis of the current ISDP findings, we conclude that the occurrence of mate poaching is a cultural universal.
Although the overall prevalence of mate poaching ranged from only "rarely" to "seldom," in every region of the world sampled by the ISDP, at least one-fifth of the sample had engaged in mate-poaching behavior, and most of those who attempted mate poaching had achieved at least some level of success. In addition, men universally reported succumbing to short-term mate poaches more than women did. Perhaps the most compelling testament to poaching frequency, however, was the finding that around 15% of people currently in a romantic relationship admitted that the relationship directly resulted from mate poaching: successful poaching either by oneself or on oneself. We turn next to the personal characteristics of mate poachers and their targets.

Personal Characteristics and Mate-Poaching Experiences Across Cultures

We related participants' recollections of poaching-attraction experiences to self-reported personal characteristics. Few differences emerged between short-term and long-term poaching correlations. In general, the relationships between personality and mate poaching were stronger in short-term poaching, but across both forms of poaching, the same set of personality and sexuality variables was involved. As a result, we focus on the relationship between personal characteristics and mate poaching after collapsing across temporal context. The results in Tables 11-15 represent partial correlations between mate poaching and personal characteristics, after controlling for the effects of nation within each world region. Nation was statistically controlled in order to rule out confounding influences within each region. If one nation had particularly high levels of both extraversion and mate poaching, for example, failing to control for nation would artificially produce a positive correlation between extraversion and mate poaching within the general world region (a computational sketch of this nation-partialing appears below).

What type of person tries to poach another's partner? We compared responses to the ARAS item, "Have you ever tried to attract someone who was already in a romantic relationship with someone else for a short-term sexual relationship with you?" with measures of personality traits and sexuality attributes that were deemed relevant to mate poaching in previous studies (Schmitt & Buss, 2001; Schmitt & Shackelford, 2003). All comparisons were between raw scores on continuous scales, and some participants received the long-term version of this scale. As displayed in Table 11, people who more often attempt mate poaching possess similar personality traits across regions. On a measure of the Big Five personality traits (Benet-Martínez & John, 1998), mate poachers tended to describe themselves as extraverted and disagreeable. Extraversion, sometimes called surgency, is the degree to which one is active, assertive, and talkative (Ashton, Lee, & Paunonen, 2002; Watson & Clark, 1997). Agreeableness refers to whether one is generous, gentle, and empathetic (Graziano & Eisenberg, 1997). Schmitt and Buss (2001) found that mate poachers also were low on conscientiousness, a trait linked to low morality and lack of will. This association was apparent, though somewhat less consistent, across the world regions of the ISDP. On a measure of the Sexy Seven sexual dimensions (Schmitt & Buss, 2000), Schmitt and Buss (2001) found that mate poachers described themselves as unfaithful and erotophilic. In the present ISDP study, mate poachers displayed these same sexual attributes. Indeed, these associations were strong and significant for both men and women across all world regions.
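The nation-partialing described above amounts to removing nation means before correlating; the following minimal sketch (hypothetical variable names and simulated data, not the ISDP data set) shows the logic:

```python
# Minimal sketch of the partial correlations in Tables 11-15: each trait and
# each poaching index is residualized on nation membership within a region,
# and the residuals are then correlated, removing nation-level confounds.
import numpy as np

rng = np.random.default_rng(0)
n = 300
nation = rng.integers(0, 3, n)                # 3 nations within one region
extraversion = rng.normal(0, 1, n) + nation   # nation shifts both variables,
poaching = rng.normal(0, 1, n) + nation       # inducing a spurious correlation

def residualize(y, groups):
    """Remove group means: equivalent to regressing y on nation dummies."""
    out = y.astype(float)
    for g in np.unique(groups):
        out[groups == g] -= y[groups == g].mean()
    return out

raw_r = np.corrcoef(extraversion, poaching)[0, 1]
partial_r = np.corrcoef(residualize(extraversion, nation),
                        residualize(poaching, nation))[0, 1]
print(f"raw r = {raw_r:.2f}, nation-partialed r = {partial_r:.2f}")
# The raw r is inflated by nation differences; the partialed r is near zero.
```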
Mate poachers were sexually unfaithful; apparently, they do not ask others to do what they would not do themselves. Mate poachers also were erotophilic, scoring high in lust, perversion, and indecency (Fisher et al., 1988; Schmitt & Buss, 2000).

What type of person successfully poaches another person's partner? As displayed in Table 12, the psychological traits of people who reported that they have successfully poached away another's partner are somewhat consistent across cultures. People who reported having poaching success scored higher on openness to experience and sexual attractiveness, and they scored lower on relationship exclusivity. There was also a tendency for successful mate poachers to describe themselves as sexually unrestrained. Among women, but not men, it was common across regions for successful mate poachers to be erotophilic in disposition. The finding that people who are unfaithful and erotophilic tend not only to practice mate poaching but also to be successful at it when they attempt it suggests that these two traits are integral to the psychology of mate poaching. The finding that attractive individuals tend to be more successful at mate poaching demonstrates that some of the same processes of general romantic attraction may be operating in the context of mate poaching. For women seeking short-term mates, and for men seeking long-term or short-term mates, physical attractiveness is highly desired (Buss & Schmitt, 1993; Schmitt & Buss, 1996). Thus, the psychological adaptations of women and men that influence general mate selection appear to be relevant to mate poaching as well. In sum, the psychological traits of successful mate poachers were both universal and, in some ways, region-specific. Successful mate poachers tended to be open and sexually attractive, as well as unfaithful to their own relationship partners. Those who were successful in poaching also tended to be sexually unrestrained, and erotophilia was pronounced among women. Several of these cross-culturally pervasive linkages of personality, sexuality, and mate poaching fell short of statistical significance, but the gross pattern of correlations was similar across all 10 world regions.

What type of person is targeted by mate poachers? People who reported receiving frequent mate-poaching attempts tend to have an extraverted personality. Only men from South/Southeast Asia failed to display this linkage. The relationship between receiving mate-poaching attempts and openness to experience was less consistent across regions, however. Mate-poaching targets were higher on openness in several "Western" cultures, corroborating the finding that those high in sensation seeking, a trait corresponding to high levels of extraversion and openness to experience, are more susceptible to risky sexual behavior involving multiple sexual partnerships (Zuckerman, 1994). Male targets in the Middle East, South/Southeast Asia, and East Asia, and female targets in Africa, Oceania, and South/Southeast Asia, however, did not score significantly higher in openness. Similar to the findings of Schmitt and Buss (2001), across all world regions, people who received frequent mate-poaching attempts described themselves as sexually attractive. This makes sense in that men and women often seek physical attractiveness in potential romantic partners (Buss, 1989; Schmitt & Buss, 1996). Frequent targets of mate poachers also described themselves as sexually unfaithful. Apparently, mate poachers around the world are attuned to the probability of success when they choose poaching targets. Finally, targets of mate poaching described themselves as having an erotophilic disposition.
Being willing to talk openly about sex and sexual deviance appears to be a universal attractant to would-be mate poachers.

What type of person is successfully poached away? As displayed in Table 14, the psychological traits of people who reported they have been poached away from a past partner are not as consistent across regions as are the traits of mate poachers. Similar to the traits of mate poachers, those who have succumbed to poaching attempts tended to be disagreeable. These links were not significant for men or women in Western Europe or the Middle East, however. Contrary to the findings of Schmitt and Buss (2001), the correlation between low conscientiousness and going along with a mate-poaching attempt was nonsignificant for both men and women in South America, the Middle East, South/Southeast Asia, and East Asia. People who reported having succumbed to a mate-poaching attempt scored lower on relationship exclusivity than other people did, a finding that provides universal convergent validity to the ARAS scale. People who reported succumbing to poachers also reported more erotophilia across most cultures. Finally, Schmitt and Buss (2001) found that those who had gone along with a mate poach were lower on the Emotional Investment scale of Schmitt and Buss's (2000) Sexy Seven Measure of sexuality attributes. This pattern largely failed to replicate and was evident only in North America, among men from South America and Oceania, and among women from South/Southeast Asia and East Asia.

In sum, the psychological profiles of mate poachers and mate-poaching targets were similar across most cultures. Mate poachers tended to be extraverted and disagreeable, as well as unfaithful and erotophilic. Those who were common targets of poaching reported high levels of extraversion and openness and described themselves as sexually attractive, unfaithful, and erotophilic. These cross-culturally pervasive linkages of personality, sexuality, and mate poaching suggest that the psychology of mate poaching has universal qualities that are not limited to North America's specific sexual culture.

Hypothetical Links Between Culture and Mate Poaching

Men's occurrence and prevalence of short-term mate-poaching attempts were positively correlated across world regions, r(8) = .92, p < .001. Women's occurrence and prevalence of short-term mate-poaching attempts also were positively correlated, r(8) = .96, p < .001, as were the magnitudes of sex differences in the occurrence and prevalence of short-term mate-poaching attempts, r(8) = .84, p < .001 (these region-level correlations treat the 10 world regions as cases, so df = N − 2 = 8; see the sketch below). We collapsed poaching indicators across sex and temporal context and created overall poaching scales for Poaching Attempts, Poaching Success, Poaching Received, and Poaching Succumbed. Again, most forms of mate poaching were highly correlated across cultures. The sociocultural criterion variables used in this study were largely unrelated, though GDP per capita and economic gender equity were significantly associated, r(8) = .73, p < .01. The complete intercorrelation matrix of all predictor and criterion variables is available from David P. Schmitt.

Hypothesis 1. According to sexual strategies theory (Buss & Schmitt, 1993), men desire multiple mating partners more than women do, with men's strategy of short-term mate poaching serving as a key avenue for obtaining multiple partners. We found that proportionately more men than women across all regions of the ISDP had attempted short-term mate poaches (see Table 2) and that proportionately more men than women had succumbed to short-term mate-poaching attempts (see Table 8).
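For readers unfamiliar with the r(8) notation, the following minimal sketch (illustrative regional numbers, not the ISDP values) reproduces the form of these region-level tests:

```python
# Minimal sketch of a region-level correlation: with the 10 world regions
# as cases, a Pearson correlation has df = N - 2 = 8, hence "r(8)".
import numpy as np
from scipy import stats

gdp_per_capita = np.array([29, 7, 24, 6, 17, 9, 2, 21, 3, 15])     # $1,000s
succumb_rate = np.array([62, 48, 60, 45, 55, 47, 38, 58, 40, 52])  # % ever

r, p = stats.pearsonr(gdp_per_capita, succumb_rate)
df = len(gdp_per_capita) - 2
print(f"r({df}) = {r:.2f}, p = {p:.4f}")  # a strong positive association here
```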
There was some evidence that women report more success when they attempt short-term mate poaching (see Table 5), a further indication that men more easily succumb to short-term mate-poaching attempts than women do. There also were indications that women report receiving short-term poaching attempts more than men do (see Table 7), although this finding was limited to North America, Western Europe, Eastern Europe, the Middle East, Africa, Oceania, and East Asia. Most findings from this study supported Hypothesis 1 and, by implication, the broader theory that men desire multiple mates more than women do (see Schmitt & ISDP, 2003).

Hypothesis 2. According to strategic pluralism theory (Gangestad & Simpson, 2000), biparental care of children and marital fidelity should become more important in regions with high environmental stress. One potential indicator of environmental stress is scarcity of resources. The frequency of short-term mate poaching, therefore, should be lower in regions with fewer resources. The per capita GDP for each world region was related to several indexes of short-term mate poaching. Although GDP was not related to attempts at short-term poaching, GDP was correlated positively with the regional occurrence of successful short-term mate poaching in women, r(8) = .58, p < .05. GDP also was significantly correlated with the rate at which women succumb to short-term poaching attempts, r(8) = .75, p < .01. In men, this latter relationship fell just short of statistical significance, r(8) = .53, p = .057. The prevalence of women succumbing to short-term mate poaches also was positively related to GDP, r(8) = .73, p < .01. After collapsing across men and women, the relationship between GDP and the occurrence of succumbing to short-term mate-poaching attempts reached statistical significance, r(8) = .76, p < .01. A scatterplot of this bivariate relationship across world regions is displayed in Figure 1. As predicted by strategic pluralism theory (Gangestad & Simpson, 2000), it appears that men and women in cultural regions with fewer resources tend not to engage in successful short-term mate poaching. (Tests for curvilinearity revealed no significant associations between predictors and criteria.)

Women appeared to be slightly more affected by scarcity of resources, with sex differences in the occurrence of successful short-term mate poaching, r(8) = −.56, p < .05, and in succumbing to short-term mate poaching, r(8) = −.57, p < .05, decreasing as regional resources increased. The negative correlation between resources and sex differences in short-term poaching also was evident in the occurrence, r(8) = −.78, p < .01, and prevalence, r(8) = −.55, p < .05, of self-reported poaching attempts. Thus, it appears that increased levels of resources lead to smaller sex differences in short-term poaching, primarily owing to the associated increase in women's short-term poaching.

We also related short-term mate-poaching experiences to the socioeconomic status of men and women within each of the 10 world regions of the ISDP. Of particular interest were the participants' reports of the socioeconomic status in which they were raised. As seen in Table 15, men who reported making more attempts at short-term poaching tended to come from a higher socioeconomic background, r(6684) = .05, p < .001. Although small in magnitude, this significant finding was present within the specific regions of North America, Western Europe, the Middle East, and Africa.
This trend also was present among women, r(9330) = .02, p < .05, including within the specific regions of Southern Europe, South/Southeast Asia, and East Asia. As predicted, therefore, those with fewer resources tended to engage in less short-term mate poaching. Men's and women's rates of successful short-term poaching and reception of short-term poaching attempts, as well as men's succumbing to short-term poaching, also were significantly related to socioeconomic status in the predicted direction. Overall, these individual-level findings provide further support for Hypothesis 2 and for the broader theory of strategic pluralism (Gangestad & Simpson, 2000).

Hypothesis 3. According to theories concerning human sex ratios (Pedersen, 1991), as the ratio of men to women becomes unbalanced in a culture, the pressure of finding a suitable mate becomes greater on the more populous sex. Thus, in regions with more women than men (what is traditionally referred to as a low sex ratio), it was expected that women would be more likely to engage in mate poaching. We found this to be the case, with the occurrence of women's short-term, r(8) = −.63, p < .05, and long-term, r(8) = −.62, p < .05, mate-poaching attempts negatively correlating with the average sex ratio across world regions. We also found that the regional prevalence of women's short-term poaching attempts was negatively correlated with sex ratio. However, we did not find that men's mate poaching increased with sex ratio. Instead, men's poaching rates were negatively associated with sex ratio across most indexes of poaching. Contrary to sex-ratio theory, therefore, men in cultures with a surplus of men reported fewer poaching attempts and poaching successes. Figure 2 portrays the regional levels of long-term mate-poaching attempts (after collapsing across men and women) related to sex ratios across world regions. Overall, regardless of whether short-term or long-term poaching was considered, poaching rates tended to increase as the percentage of women increased across regions. As a result, sex-ratio theory was only partially confirmed in this study.

Hypothesis 4. According to social structural theory (Eagly & Wood, 1999; Wood & Eagly, 2002), women's greater access to political and economic power should be associated with smaller sex differences in sexuality. Several of the ISDP findings support this hypothesis. For example, sex differences in the occurrence of short-term mate-poaching attempts tended to be smaller in regions with greater gender equality, as assessed by the Gender Development Index, r(8) = −.85, p < .001. The relationship between gender equality and sex differences in mate poaching appeared to result primarily from women's increased poaching behavior in egalitarian regions. Men's mate-poaching experiences tended to decrease in some instances. For example, the prevalence of men's long-term poaching attempts was negatively correlated with gender egalitarianism, r(8) = −.62, p < .05. For women, long-term poaching attempts were positively correlated with gender egalitarianism, r(8) = .45, p < .10, though this association was only marginally significant. Women's access to greater political power, as indexed by the percentage of women in parliament, was associated with increased poaching by women. Men's poaching, however, also increased with women's increased access to political power.
For example, the occurrence of women's short-term poaching attempts was positively correlated with political equality, r(8) = .72, p < .01, as was the prevalence of short-term poaching attempts, r(8) = .72, p < .01. For men, the occurrence of short-term poaching attempts was positively correlated with political equality, r(8) = .50, p < .10, though again this association was only marginally significant. Overall, social structural theory was largely supported in this study.

Discussion

Mate-poaching experiences can have important social consequences for all those involved, including retributional violence, social ostracism, cuckoldry, jealousy, and relationship dissolution (Schmitt & Buss, 2001; Schmitt & Shackelford, 2003). Unfortunately, mate poaching is often cloaked in secrecy, making it difficult to study with the research methods currently available to social scientists. Even so, the present findings, based on anonymous self-report surveys administered to 16,954 people around the world, yield three fundamental conclusions. First, mate poaching is a cultural universal, at least across the 10 world regions of the ISDP. Many people from North America, South America, Western Europe, Eastern Europe, Southern Europe, the Middle East, Africa, Oceania, South/Southeast Asia, and East Asia report that they have attempted, received, and occasionally succumbed to the experience of poaching. Second, mate poachers and their targets possess the same basic personality traits across all world regions, with extraversion, agreeableness, openness, and erotophilia serving as the primary correlates of mate poaching. Third, mate-poaching experiences are associated with aspects of culture in ways that support several evolutionary theories of human mating. Each of these findings, along with associated limitations, is addressed more fully below.

Frequency of Mate Poaching

Across the 10 world regions of the ISDP, around 60% of men and 40% of women admit that they have tried to poach someone else's partner, either for the purpose of having a short-term sexual relationship or for the purpose of forming a new long-term mating alliance. Among those who have attempted to poach, the occurrence of successful poaching was substantial (often over 80%). The prevalence of short-term and long-term mate-poaching attempts ranged only from "rarely" to "seldom," but the prevalence rating of success among those who have attempted poaching centered on the midpoint (i.e., "moderate success") of the scales used in this study. Nearly 70% of people report that someone has tried to poach them, and around 50% of those who have been tempted by a would-be mate poacher have succumbed to that attempt. Given that a single poaching attempt can cause significant discord in a romantic relationship and that merely one poaching success can result in severe social and reproductive consequences, the current findings suggest that the problem of mate poaching has far-reaching relevance. Whether in the form of brief short-term desertions or permanent long-term defections, it appears that mate poaching is a culturally universal human experience, one that is undoubtedly related to the strong feelings of jealousy, rage, and betrayal that have coevolved as part of the human condition (Buss, 2000; Shackelford & Buss, 1996; Shackelford et al., 2000). Mate poaching sometimes leads to positive outcomes as well.
In almost every region we studied, around 10% of romantic relationships were the result of mate poaching, and around 3% were the result of two people poaching one another out of their old relationships and into a new mateship. Mate poaching, it appears, can lead to the successful development of new romantic partnerships. How long these poaching-based relationships will last is an important question for future research. In this study, we can gain some insight into this question by examining the personal characteristics of mate poachers.

Personality of Mate Poaching

The personal characteristics of those who poach and those who are targets of poaching conform to a consistent pattern across most world regions. Those who attempt to poach another's partner are especially extraverted, disagreeable, unconscientious, unfaithful, and erotophilic. As documented in previous studies, mate poachers appear to possess certain personality traits (i.e., assertiveness combined with the tendency to be unempathetic) that are indicative of narcissism (see Foster et al., 2002) and may reflect a heritable or ecologically evoked orientation to short-term mating more generally (Bailey, Kirk, Zhu, Dunne, & Martin, 2000; MacDonald, 1998; Rowe, 2002). Among those who attempt to poach, the most successful mate poachers are those who are open to experience, sexually attractive, unfaithful, and erotophilic. The finding that those with high sexual attractiveness are more successful would seem to confirm that the mate preferences involved in general romantic attraction (Buss & Schmitt, 1993) are operative in the psychology of human mate poaching as well. Common targets of mate poaching express high levels of extraversion, openness, attractiveness, unfaithfulness, and erotophilia. Those who succumb to mate poachers are particularly disagreeable, unconscientious, unfaithful, erotophilic, and, in Western cultures, unloving. This heuristic guide to the psychology of mate poachers and those who are poached should be useful to future studies in which individual differences in mate poaching and their implications are more fully explored. As a whole, these results suggest that mate poaching is an important and, at least in some ways, psychologically distinct form of romantic attraction. It also can be concluded that, although mate poaching leads to new relationships, the personality traits of those who engage in and succumb to mate poaching (i.e., disagreeableness, unfaithfulness, and erotophilia) suggest that these new relationships may not be long lasting.

Culture of Mate Poaching

We tested four hypotheses about the cultural patterns and universals of human mate poaching. Hypothesis 1 was strongly supported. Proportionately more men than women pursue short-term mate poaching across all ISDP regions. This is true when assessed in terms of both the occurrence and the prevalence of short-term mate poaching. Men also disproportionately succumb to women's short-term poaching attempts. Whether assessed with occurrence or prevalence rates, the ISDP findings confirm that men seek and go along with short-term mate poaching more than women do, precisely as predicted by sexual strategies theory (Buss & Schmitt, 1993). Two other findings provide support for Hypothesis 1. First, women tend to report receiving more short-term poaching attempts than men do, though this sex difference is limited to prevalence rates in North America, Western Europe, Eastern Europe, the Middle East, Africa, and Oceania.
Given the tendency for men to overperceive the sexual interests and intentions of women (Haselton & Buss, 2000), this finding provides a reasonable level of support for Hypothesis 1. Second, women in several world regions report significantly more success than men do when pursuing short-term poaches. Again, this is not a universal finding and is limited to prevalence rates in the regions of North America, Western Europe, Eastern Europe, and Oceania. Overall, this portrait of short-term poaching confirms that men seek out short-term mateships more than women and buttresses the more general hypothesis that men possess psychological adaptations that give rise to the desire for multiple sexual partners (Buss & Schmitt, 1993; Schmitt & ISDP, 2003). Hypothesis 2 was partially supported. Across some, but not all, measures of short-term mate poaching, regions with fewer resources tend to have lower rates of short-term mate poaching. These findings support the view that humans might possess environmentally sensitive adaptations that influence mating strategy. When in resource-poor environments, it appears that humans pursue more long-term, monogamous mating strategies. When in resource-rich environments, in contrast, short-term strategies that include mate-poaching behaviors are more common, a finding that fits with the functional view of strategic pluralism theory (Gangestad & Simpson, 2000). Still, given the correlational methodology of the present investigation, this conclusion must be considered tentative. Future longitudinal studies showing shifts in poaching behavior that correspond with concurrent shifts in resources across cultures would bring much-needed convergent support for this hypothesis. The associations among environmental indicators and mate-poaching behaviors in this study seem to run counter to some well-established findings of attachment researchers. Several studies have shown that children from poor, unstable, and high-stress home environments tend to develop insecure parent-child attachment styles (Belsky, 1999), attachment styles that presumably give rise to insecure romantic attachment orientations in adulthood (Belsky, Steinberg, & Draper, 1991). These insecure adult attachments are thought to share many of the basic features of short-term mating strategies (Kirkpatrick, 1998), including earlier and more prolific reproduction (Chisholm, 1996). Thus, an attachment perspective would expect that people from resource-poor regions (i.e., low GDP) would exhibit more short-term mate poaching, not less as was evident in the current ISDP investigation. A recent study by Barber (2003) may shed light on these conflicting pieces of evidence. Barber documented across 85 nations that national levels of GDP were negatively related to teen birth rates. Thus, resource-poor environments were associated with higher rates of early reproduction, precisely as predicted from the attachment perspective. However, resource-poor environments (i.e., lower levels of GDP) also were associated with lower nonmarital or single-mother birth rates. Thus, as cultural regions possessed greater resources, Barber (2003) found that rates of women giving birth without being married (i.e., more short-term mating) actually increased, precisely as predicted by strategic pluralism theory (Gangestad & Simpson, 2000).
An integrated explanation of these findings and those of the current study may reside in the idea that environmental resource levels affect different components of short-term mating strategies in different ways. The early reproduction component of short-term mating (e.g., high teen birth rates) appears to be activated or evoked by exposure to low resource levels. The adult components of short-term mating (e.g., high single-parenthood and more prevalent short-term poaching) appear to be activated by high resource levels. Future studies looking at these variables both within and across cultures, particularly those that study changes over time, will be needed to fully address this issue. Hypothesis 3 received partial support (Pedersen, 1991). As the number of women exceeds the number of men across regions (i.e., low sex ratios), women are more likely to engage in mate poaching. This is true for both short-term and long-term poaching among women. As the number of men exceeds the number of women, however, men are not more likely to engage in mate poaching. Instead, men's poaching rates are negatively associated with sex ratio. Why does an excess of women, but not men, lead to more poaching by both sexes? One speculation is that shifts in sex ratio drive mating systems as a whole, not just the mating psychology of one sex. When women are abundant and men are a scarce resource, men may be able to command more promiscuity on the part of women, and the entire mating system (for both men and women) may shift toward promiscuity. Regardless of whether men's or women's short-term or long-term poaching is considered, mate poaching (a form of promiscuity) tends to increase overall when the percentage of women increases across regions. Barber (2003) found similar results, with male-biased sex ratios correlating negatively with both teen birth rates and single-parent rates across 85 nations. Hypothesis 4 was largely supported (Eagly & Wood, 1999). As women's access to resources increases across regions, women's rates of short-term poaching increase and sex differences in short-term mate poaching are reduced. In many cases, men's short-term poaching also increases, but to a lesser extent than women's. Women's greater access to political power is equally associated with increases in women's and men's poaching-related behavior and, as a result, sex differences in poaching are generally unrelated to the prevalence of women in parliament across regions. Apparently, greater political gender equity does not always result in attenuated sex differences. Instead, it appears to accentuate both men's and women's poaching-related behavior. Overall, the influence of culture on human mate poaching appears to be profound. Although proportionately more men than women seek short-term mate poaches across all regions of the ISDP, this effect is tempered by several cultural factors. When in resource-rich environments, for example, both men and women tend to engage in more short-term mate poaching. When women gain access to greater resources, women especially tend to engage in more short-term mate poaching and, as a result, the magnitude of sex differences in seeking and succumbing to short-term poaches is attenuated (though never eliminated). Finally, when the number of women is greater than the number of men in a region, people tend to engage in long-term and short-term mate poaching at higher rates as the entire mating system moves toward sexual promiscuity.
In science, the most valued studies often are those that directly contrast competing theories and are able to rule out one hypothesis in favor of another. In the present study, the most consistent finding was that men more than women seek and succumb to short-term mate poaching across all regions of the ISDP. However, all theories of human mating subjected to testing in this study received at least some empirical support. As a result, we are left with the relatively unsatisfying conclusion that mate-poaching experiences are predictable from several theoretical perspectives, none of which is conspicuously superior to the others. Perhaps in future investigations, additional measures and variables can be used that will better determine whether one of these competing theories is superior to the others.

Limitations and Future Research Directions

This study is limited in several ways. Five particular concerns lead us to interpret our results with caution. First, the samples included in this study are composed mostly of undergraduate students. A number of studies suggest that many undergraduates do form long-term mating relationships, with roughly 50% being in enduring relationships at any one point in time (Buss et al., 1992). A case can be made that issues of mate poaching are more prevalent for young adults than for other samples. Future research nevertheless should explore mate-poaching frequency and personality among older, more diverse, and more committed samples. A number of studies suggest that men are most jealous and vigilant about potential poachers when married to young and attractive women (Buss & Shackelford, 1997), suggesting that young married couples might be an ideal sample to study issues of mate-poaching psychology. On the other hand, actual rates of infidelity appear to increase among women in the mid-30s (Baker & Bellis, 1995), suggesting that sexual desertions (which may reflect successful short-term mate-poaching attraction) are more common in later stages of adulthood. Studies of different age samples could explore these important developmental and life span dimensions of the mate-poaching experience. Second, participants were not asked about the quality or satisfaction of their previous or current relationships. It seems likely that the quality of romantic relationships would be a determining factor in making mate-poaching attempts. The percentages of men and women who are in unsatisfying relationships, however, may vary cross-culturally with marriage customs, degree of matrilocality, and economic conditions. These extraneous factors, therefore, may be associated with the regional and sex differences found in the current study. Although the precise connections among these factors may be difficult to determine, it will be especially important for future investigations of mate poaching to assess the quality or satisfaction people have with their current relationships. A third shortcoming of this study is that some samples may have been especially unrepresentative of their region. In the ISDP samples from Africa, for example, most participants were college students. College students are probably unrepresentative of African populations, perhaps more so than for other world regions. In addition, several nations from the full ISDP were not administered the mate-poaching questions from the ISDP survey. Nations such as Jordan, India, and Fiji would have added more variability to our regional database and improved the testing of evolutionary and social-role theories.
Future research that includes larger and truly representative samples from a wider range of cultural regions is needed to more accurately relate United Nations databases to profiles of mate poaching. A fourth limitation of the current study involves the region-level evaluation of the current set of hypotheses. Indeed, even the use of national indexes such as GDP and other United Nations indicators is less than ideal for testing many of the theories presented in this article. Although GDP certainly reflects some degree of environmental demand, it is not a measure of the demanding nature of environments, in situ. It is simply a regional average that may have only limited relationships with an individual participant's family history and socioeconomic condition. We did measure individual socioeconomic status, and this was related in predictable ways to poaching behavior. However, we feel the current analyses should be considered merely a first step in theory testing and development concerning the patterns and universals of human mate poaching across cultures. A final limitation of the current study is its reliance on self-report methodology. When comparing the scores of different cultures on mate-poaching scales, any observed differences may be due not only to a real cultural disparity in some aspect of poaching, but also to inappropriate translations, biased sampling, or the nonidentical response styles of people from different cultures (Grimm & Church, 1999). In this study, it was assumed that reported perceptions of mate poaching were reasonably veridical representations of actual mate-poaching experiences. The many universal personality correlates suggest that our key concepts were being similarly measured across languages. Fully establishing veridicality would be an extraordinarily difficult task, given that mate poaching is often conducted in secret, making observational studies almost impossible to conduct. Still, in-depth interviews of successful mate poachers, as well as those who have been lured by mate poaching, may be one step toward providing convergent evidence of the current results. Perhaps assessing mate-poaching reactions in laboratory experiments (e.g., Schmitt, Couden, & Baker, 2001) or capitalizing on social psychological principles such as contrast effects (e.g., Kenrick, Neuberg, Zierk, & Krones, 1994) would help to establish the validity of the sex and regional differences in mate poaching found in this study. Cross-cultural studies in which more specific poaching behaviors are assessed, such as whether people have kissed someone who was already in a relationship, would further allow for clearer measurement equivalence across cultures and languages. Although the current study is the broadest investigation ever undertaken to reveal this hidden side of human romance, the clandestine complexity of mate poaching leaves much work to be done.
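The region-level analyses reported above reduce to bivariate correlations computed across the 10 ISDP world regions (hence the 8 degrees of freedom in r(8)). As a minimal illustration of this kind of test, the sketch below computes a Pearson correlation in Python; the regional scores are hypothetical placeholders, not ISDP data.

```python
# Minimal sketch: a region-level Pearson correlation with df = n - 2 = 8,
# as in the r(8) statistics reported above. All values are hypothetical.
from scipy.stats import pearsonr

# Hypothetical scores for 10 world regions.
political_equality = [0.41, 0.55, 0.62, 0.38, 0.47, 0.71, 0.33, 0.58, 0.66, 0.50]
poaching_prevalence = [0.52, 0.60, 0.68, 0.40, 0.49, 0.75, 0.35, 0.61, 0.70, 0.55]

r, p = pearsonr(political_equality, poaching_prevalence)
print(f"r({len(political_equality) - 2}) = {r:.2f}, p = {p:.3f}")
```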
Biological significance of promoter hypermethylation of p14/ARF gene: Relationships to p53 mutational status in Tunisian population with colorectal carcinoma

One of the most important pathways frequently affected in colorectal cancer is the p53/MDM2/p14ARF pathway. We aim to determine the methylation pattern of p14/ARF in relation to the mutational status of p53. This correlation was studied to investigate whether these alterations could be considered a predictive factor of prognosis in colorectal cancer and whether they can be useful in early-stage diagnosis. Statistical analyses show that p14/ARF hypermethylation was correlated with rectal location (p = 0.004), primary TNM stage (p = 0.016), and advanced Astler-Coller stage (p = 0.024). RT-PCR revealed that 31 % of patients did not express p14/ARF mRNA or expressed it at a very low level. A high concordance between CpG hypermethylation and these low levels (p < 0.005) was shown. In addition, our analyses demonstrate that patients with a mutation in the p53 gene lack protein expression (p < 0.005). This category with negative expression of p53 had shorter survival (p < 0.005). On the one hand, the MSP pattern of p14/ARF was correlated with a lack of p53 expression (p = 0.007). We found that the p53/p14ARF pathway was frequently deregulated among our patients. In our study, we demonstrate that hypermethylation of p14/ARF occurs early during CRC tumorigenesis. However, we did not find a correlation between p14/ARF and survival. These results suggest that the p14/ARF methylation pattern may constitute a predictive factor of early-stage CRC but cannot be considered a prognostic factor. On the other hand, because of the reversibility of the methylation mechanism, it may be appropriate to target the demethylation of p14/ARF to develop new drugs for CRC.

Introduction

Extensive molecular analyses have revealed that colorectal carcinogenesis is characterized by a multistep process of genetic and epigenetic alterations. The p53/MDM2/p14ARF pathway is usually affected in colorectal carcinogenesis. Indeed, these proteins are actively involved in apoptosis, which represents a principal physiological control mechanism. Any alteration affecting one of these molecules could lead to abnormal cell survival and start the carcinogenesis process. p53 is a key regulator of cell cycle checkpoints; it plays an important role in inducing cell death after DNA damage or under conditions of cellular stress [1]. The prevalence of p53 mutations in colorectal cancer is highly variable among different series and may be estimated at 40 to 60 % of patients [2,3]. Its expression is maintained at a very low level in normal cells [4]. However, it has been demonstrated that mutations in the p53 gene increase the half-life of the protein, which is associated with its overexpression in the nucleus [5]. Furthermore, the main cellular function of the MDM2 oncoprotein is to control the level of p53 through an autoregulatory feedback loop. In cancers, MDM2 overexpression deregulates this feedback, and the interaction between MDM2 and p53 is blocked [6]. Recently, the p14/ARF protein has been investigated as an intermediate in the regulation of the MDM2/p53 pathway. This protein has also been identified as a tumor suppressor promoting the rapid degradation of MDM2 and leading to p53 stabilization and its nuclear accumulation [6,7].
In fact, p14/ARF binds and blocks MDM2 to inhibit the nucleocytoplasmic shuttling of p53 and induces its nuclear retention, production, and activation [8][9][10]. Furthermore, it acts upstream of p53 and is subject to negative feedback regulation, which suggests that p53 mutations or its inactivation by MDM2 amplification are often accompanied by overexpression of p14/ARF [11]. P53-positive tumors are also likely to have sustained epistatic mutations such as MDM2 amplification or p14/ARF loss or inactivation [12]. Nuclear import and export are features of both p53 and MDM2, such that the absence of nuclear p53 is associated with tumors of poor prognosis [13]. Many analyses suggest that p14/ARF influences the subcellular localization of MDM2 [14]. Consequently, the localization of these proteins and the relationship between their levels of expression are likely to be important in many forms of carcinogenesis. Previous studies have examined p14/ARF mRNA expression in breast cancers, with evidence suggesting altered expression and an association with p53 [15,16]. Moreover, the literature describes other processes by which the p14/ARF gene can be inactivated in many cancers, such as deletion, promoter hypermethylation, or mutations [17]. In colorectal cancer (CRC), p14/ARF inactivation was shown to be the result of promoter hypermethylation [18]. The promoter region is rich in CpG dinucleotides; methylation of the cytosine residues at these CpG islands plays an important role in the inactivation of gene expression. The transcription of p14/ARF can thus be deregulated by hypermethylation [18]. Transcriptional silencing of the p14/ARF gene through CpG hypermethylation of the promoter is an important event in the genetic regulation of cancers and is thought to be associated with the carcinogenesis process [18]. This epigenetic mechanism occurs in many cancers and was mainly studied in glioma and bladder cancers [17,19]. To date, only a few recent studies have been published concerning hypermethylation and loss of expression of p14/ARF in colorectal cancer [7,20]. The complexity and the close relationship between p53 and p14/ARF prompted us to describe their mutational profiles and expression in Tunisian colorectal cancer. In our study, we aimed to determine the p14/ARF expression level and its promoter methylation pattern in relation to the mutational status of p53. First, we analyzed the relationship between the epigenetic profiles and mutation status of the p14/ARF and p53 genes and clinicopathological parameters. Next, we investigated whether the promoter methylation and the mRNA expression, respectively of p14/ARF and p53, were predictors of disease progression and prognosis in Tunisian colorectal cancer patients.

Patient specimens

We conducted a retrospective study from 1995 to 2011 regarding patients with CRC diagnosed in the laboratory of Pathology, Mongi Slim Hospital, Tunis. The individuals had neither gastrointestinal diseases nor a history of tumor. In the 167 cases included in this study, samples were taken not only from the tumoral area but also from the margin, corresponding to distant resection tissue that was histologically free from precancerous lesions and cancer. The data collected for all patients included sex, age, tumor localization, TNM stage, and Astler-Coller stage. For DNA and RNA extraction, representative frozen samples of tumoral mucosa (112) and paraffin-embedded tissues (55) were obtained from the files of the 167 patients with CRC.
The patient group included 83 women and 84 men. The mean age of the Tunisian patients (at the time of tissue collection) was 57 years. On histological examination, the tumor locations comprised 99 colon and 68 rectum cases. Furthermore, the pathologic classification of tumors was made according to the international TNM staging system: we identified 30 cases in primary stage (stages I and II) and 137 in advanced stage (stages III and IV). Concerning the Astler-Coller stage, we found 53 cases at early Astler-Coller stages and 114 at advanced ones.

DNA and RNA extraction

Genomic DNA was extracted from 20 mg of paraffin-embedded and frozen samples of tumoral mucosa. The samples were treated using the Wizard SV Genomic DNA Purification System according to the manufacturer's instructions (Promega, Madison, WI). The concentration of the DNA was measured with a spectrophotometer. Total RNA was extracted with TRIZOL reagent (Invitrogen) according to the manufacturer's instructions. After purification, RNA was dissolved in DEPC-treated water. The cDNA was synthesized with M-MLV Reverse Transcriptase (Invitrogen) and stored at −20°C until use.

Sodium bisulfite modification of DNA and methylation-specific PCR of p14/ARF

Two micrograms of genomic DNA from each sample were bisulfite-modified using the EZ DNA Methylation kit (ZYMO Research, Orange, CA) according to the manufacturer's instructions. After treatment, the resulting bisulfite-modified DNA was eluted in 10 μL of the kit elution buffer and stored at −20°C. Two microliters of the bisulfite-modified DNA from each sample were amplified independently using the U- and M-specific primers in a 25-μL total reaction volume (Table 1). Each PCR reaction contained a final concentration of 0.4 mM of each primer (SGS, Köping, Sweden), 0.5 mM dNTPs, 1× PCR buffer (Promega), 1.5 mM MgCl2 (Promega), and 0.04 units of Taq polymerase (Promega). The PCR products were checked by on-chip electrophoresis.

RT-PCR for detection of p53 and p14/ARF mRNA expression

Total RNA was reverse-transcribed with M-MLV reverse transcriptase (Invitrogen), and the resulting cDNA was used in the PCR reactions. PCR primers for p53 and β-actin are listed in Table 1. RT-PCR of p53 was conducted with an initial step of 5 min at 95°C followed by 40 cycles of 15 s at 95°C and 1 min at the annealing temperature Tm (°C). cDNA integrity was confirmed by β-actin-specific PCR analysis. The RT-PCR products were checked by on-chip electrophoresis. The amplified bands were 379 bp for p53, 207 bp for p14/ARF, and 581 bp for β-actin.

Analyses of p14/ARF and p53 amplification products by on-chip electrophoresis

Migration of the p14/ARF and p53 PCR products was performed by on-chip electrophoresis using DNA 1000 LabChip kits, prepared with gel-dye mix and pressurized; a marker solution and a DNA 1500 ladder were then added. For this process, 1 μL of each PCR reaction was added into one of the 11 sample wells of a prepared chip. After vortexing, the chip was placed in the BioRad Experion bioanalyzer. The electrophoresis of samples lasted approximately 30 to 40 min. Fragment analysis was conducted using BioRad Experion software, and an overlay of two electropherograms was used to compare PCR patterns derived from tumor and normal mucosa. Differences in the peak patterns of the overlaid electropherograms were evaluated, and two runs were used for each patient.
A 2-μL volume of p53 PCR product was denatured in 5 μL of formamide and incubated for 10 min at 95°C. Mutations of p53 were identified after on-chip electrophoresis by the presence of one or two extra bands migrating above or below the normal single-stranded products. Occasionally, mutated bands were detected between the single- and double-strand bands, which may be caused by the formation of normal-mutated heterodimers. Samples with a mobility shift were confirmed by sequencing using the Sanger method.

Immunohistochemistry of p53 protein

Serial sections of 4-μm thickness were cut from formalin-fixed, paraffin-embedded samples, incubated in an oven at 37°C overnight, deparaffinized, and rehydrated. The slides were immersed in citrate buffer (pH = 6.0) in a microwave for 2-5 min to unmask the epitopes and then kept at room temperature for 20 min, followed by a Tris wash for 5 min. The sections on the slides were incubated with peroxidase block to inhibit endogenous peroxidase activity. After washing twice in Tris, the sections were incubated with anti-p53 antibody (1:50, Vision Biosystems) at room temperature for 1 h. They were then incubated with post-primary block for 30 min. Expression was assessed after incubation of the sections at room temperature with the peroxidase-labeled DAKO EnVision System for 30 min, using DAB as a chromogen for 20 min. After washing with distilled water, the sections were counterstained with hematoxylin. The reaction was considered positive when nuclear staining for p53 was observed.

Statistical analyses

The relationships between p14/ARF and p53 gene status and the different clinicopathological variables were assessed using the χ2 test. The odds ratio was obtained by unconditional logistic regression analysis. Survival curves were computed according to the Kaplan-Meier method. All p values cited were two-sided, and p values of <0.05 were judged statistically significant. SPSS software, version 17.0, was used for the analyses.

Results

Analyses of methylation status and mRNA expression of p14/ARF: correlation with clinicopathological data

Of 167 patients with CRC, 120 (71.8 %) cases were unmethylated (U), 33 (19.7 %) methylated (M), and 14 (8.5 %) both methylated and unmethylated (MU) at the same time (Fig. 1). Statistical analysis showed that the MSP pattern was correlated with location (p = 0.04), Astler-Coller stage (p = 0.024), and TNM stage (p = 0.016). In fact, we found that 65 % of the U phenotype was seen in the colon, whereas the M and MU phenotypes were distributed equivalently between colon and rectum. Regarding prognostic factors, the M and MU bands were correlated with primary TNM stage (stages I and II; Table 2) and with advanced Astler-Coller stage (stages C and D; p = 0.024). However, we did not find any statistical association between the p14/ARF MSP pattern and the other clinicopathological criteria. We also examined the expression of p14/ARF using RT-PCR. Among 167 early lesions with available cDNA, 101/120 colorectal adenomas that were U at p14/ARF expressed high levels of p14/ARF mRNA, whereas 23/52 adenomas with the M pattern and 10/52 with the MU pattern expressed no or very little p14/ARF mRNA, demonstrating a close correlation of transcriptional loss with p14/ARF hypermethylation (p < 0.005; Fig. 2; Table 3). No correlation was found with clinicopathological features.
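As a minimal illustration of the statistical procedures described above (the authors used SPSS 17.0), the sketch below runs a χ2 test of association and a Kaplan-Meier fit in Python. The counts and survival times are hypothetical placeholders, and scipy/lifelines are assumed substitutes rather than the tools used in the study.

```python
# Minimal sketch of a chi-square association test (e.g., methylation
# pattern vs. tumor location) and a Kaplan-Meier survival estimate.
# All numbers below are illustrative placeholders, not the study's data.
from scipy.stats import chi2_contingency
from lifelines import KaplanMeierFitter

#            colon  rectum
table = [[78, 42],   # unmethylated (U)
         [23, 24]]   # methylated (M or MU)
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")

# Hypothetical follow-up times (months) and death indicators (1 = died).
months = [12, 30, 45, 60, 24, 80, 55, 66, 40, 72]
died = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]
kmf = KaplanMeierFitter()
kmf.fit(months, event_observed=died)
print(f"median survival: {kmf.median_survival_time_} months")
```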
Analyses of mRNA expression, mutational status and immunostaining of p53 in CRC: correlation with clinicopathological data

The samples were considered negative when they were positive for β-actin and negative for p53 (Fig. 2). In our set, 109 cases (65.3 %) showed positive expression of p53, while 58 cases (34.7 %) were negative (Table 3). The p53 exons 5, 6, 7 and 8 were successfully amplified in all cases, giving expected PCR fragments of 266, 160, 180, and 230 bp, respectively. After SSCP analyses, among the 167 cases of CRC, 17.4 % (29/167) were found to harbor altered p53. Furthermore, 11 cases showed alterations in exon 5, 5 in exon 6, and 14 in exon 7, whereas no mutation was found in exon 8 (Fig. 3). Of these 29 cases, 19 (65.5 %) were transitions and 10 (34.5 %) transversions. All samples mutated in the SSCP analyses were confirmed by sequencing (Fig. 3a, b; Table 4). No significant association was detected in our statistical analyses between p53 mutational status or p53 mRNA expression and the clinicopathological variables. The comparison between p53 mRNA expression and p53 alteration revealed an association between the variables (p < 0.005). In fact, we found that 75.9 % (22) of the patients who had a mutation in the p53 gene lacked p53 expression (Table 5). For the immunoreactions, there were no significant differences with respect to clinicopathological characteristics between weak, moderate, or strong staining. Therefore, these three groups were classified as positive in the following analyses. In total, 57 (34.1 %) of the 167 tumors showed positive immunoreactivity for p53. The immunoreactivity of p53 was invariably confined to the nucleus; normal mucosa samples were negative for p53 expression (Fig. 4a), whereas the tumoral area was positive (Fig. 4b). According to the clinicopathological variables, no significant association was detected in our statistical analyses except for TNM stage: p53 was associated with advanced TNM stage, and we found a loss of p53 expression (86.4 %) in stages III and IV (p = 0.037; Table 2). We found a significant association between p53 expression and mutational status (p < 0.005; Table 6). Indeed, we observed that patients with negative expression of p53 demonstrated the absence of a mobility shift in the DNA by SSCP.

Relationship between the p14/ARF methylation and p53 analyses

Statistical results did not show any association between the methylation pattern of p14/ARF and the mutational status of p53. Interestingly, combined analyses of p53 mRNA expression and p14/ARF methylation pattern showed a significant association. In fact, we found that the majority of cases with M (26 cases) and MU (13 cases) patterns of p14/ARF were correlated with a lack of p53 expression (p = 0.007; Table 3).

The relationship between the alterations in p14/ARF and p53 and patient survival

The Kaplan-Meier survival curve for p14/ARF showed that there was no correlation between its methylation pattern and disease-specific survival (p = 0.41; Fig. 5a). For p53 mRNA expression, we found that patients with negative expression of p53 had shorter survival than patients with positive expression of p53 (p < 0.001; Fig. 5b).

Discussion

p14/ARF is considered a tumor suppressor protein; its inactivation by hypermethylation has been extensively described in many cancers [4,11]. The frequency of its promoter hypermethylation varies in different tumor types [21].
This alteration is particularly intriguing in view of the interplay between p14/ARF and its impact on the p53 pathway during tumorigenesis. The methylation status of p14/ARF in colorectal cancer remains unclear and had until now not been studied in the Tunisian population. Therefore, to elucidate the implication of p14/ARF in colorectal carcinogenesis, the MSP pattern, mRNA expression, and different clinicopathological data were studied. In this study, methylation of the p14/ARF gene occurred in 28.2 % of cases. This level varies between 22 and 50 % among colorectal cancer patients; the difference may be due to the diversity of populations [22][23][24]. The MSP pattern of p14/ARF is habitually represented by two entities: the U and M patterns. Interestingly, in a recent paper we published [18], a new result was reported showing the presence of U and M bands in the same sample, which indicates a hemi-methylated pattern (MU) [18]. This pattern represents 8.5 % of our set. These bands were analyzed in terms of intensity, and we found that the majority of cases showed a greater intensity of the methylation band. This result suggests a physiological and progressive transition from the U pattern towards promoter hypermethylation. Comparing the p14/ARF MSP pattern with clinicopathological parameters showed an association between the p14/ARF hemi-methylated pattern and rectal CRC site (p = 0.04). Indeed, our analysis revealed a similar distribution of p14/ARF methylation between the colon (48.5 %) and the rectum (51.5 %). Moreover, the hemi-methylated pattern was generally observed in the rectum (64.3 %). However, Burri et al. demonstrated that methylation of the p14/ARF gene was significantly more frequent in right-sided than in left-sided tumors [25]. In this frame, Lee et al. showed no statistically significant association between the MSP pattern and colorectal cancer site [23]. Based on the prognostic parameters, we did not detect any association between hypermethylation and the survival rate. This result is similar to those of many previous studies in the literature [21,26]. Conversely, we found that methylation of the p14/ARF promoter gene was associated with the stage of the disease. In fact, in the primary TNM stages of our tumors (stages I and II), the M and hemi-methylated statuses were seen at 57.6 and 100 %, respectively (p = 0.016). Furthermore, the infiltrative growth of the tumor (p = 0.024; Astler-Coller C and D) was associated with the M and hemi-methylated patterns of the promoter gene. These results showed that the inactivation of p14/ARF was associated with early stages of the tumor, which were also characterized by small diameter and absent or rare metastatic lymph nodes. Despite their primary stages, these tumors were correlated with an infiltrative growth process. These results indicate that these patterns are associated with very aggressive tumors. Dominguez et al. [27] also reported a significant correlation between methylation of the p14/ARF gene and poor prognosis in breast, colon, and bladder carcinomas.
Thus, our results and those of the literature indicate that the methylation process constitutes the major mechanism of p14/ARF inactivation and could be used as a biomarker for CRC [18]. P14/ARF is a candidate for hypermethylation, with loss and inactivation of its protein. It contains a documented CpG island which can be silenced by this genetic and epigenetic alteration. However, few works have evaluated the methylation of p14/ARF in association with its expression. For further comprehension of this loss, we related it to regulation at the level of mRNA expression. Therefore, we conducted a specific analysis of p14/ARF mRNA expression by RT-PCR and evaluated its impact on tumor genesis and prognosis in our cohort. Interestingly, and according to our results, a high concordance was shown between CpG hypermethylation and low levels of the p14/ARF mRNA pattern (p < 0.005). Consequently, our data, confirmed by others in the recent literature, suggest that epigenetic regulation by promoter hypermethylation is the predominant mechanism involved in the deregulation of p14/ARF and may contribute to silencing of p14/ARF mRNA expression in CRC patients [28][29][30]. Moreover, no association was found between p14/ARF mRNA expression and patient survival. According to these results, we can conclude that the inactivation of tumor suppressor genes by aberrant hypermethylation is a fundamental process involved in the progression of many malignant tumors, including gastrointestinal cancer [31,32]. After rigorous validation by RT-PCR and immunohistochemistry, a molecular signature associated with p53 mutant phenotypes in this subset of CRC was identified. Approximately the same values were found for the p53 transcriptional (mRNA) and translational (protein) profiles in relation to the p53 mutation status. In fact, we note that mutant forms of p53 have two distinct expression phenotypes, namely positive or negative. This could be the result of different mutations in the p53 gene: activating or inactivating mutations. With regard to the p53 wild type, the lack of expression (39 and 36 cases) is probably associated with the involvement of epi-mutations (p53 promoter hypermethylation), as such data have recently been reported in the literature. Therefore, we found that p53 mutation and its abnormal expression may affect the occurrence and development of CRC in synergy. It was reported that the p53 gene is mutated in 50 % of human cancers. The gain of oncogenicity or the loss of tumor suppressor function of p53 is due to two alterations: its inactivation through missense mutations or its overexpression by transcription of the p53 mutant form. These alterations are considered metastatic signatures in CRC [33]. They contribute to tumor aggressiveness and result in poor survival [34][35][36][37]. The present study demonstrates an association between p53-negative mRNA expression and poor survival in our cohort. The genomic instability associated with p53 mRNA overexpression is a cause of a high-risk phenotype, aggressive progression, and early death, as reported in previous works [32,38]. Inactivation of p14/ARF is predicted to reduce the aberrant p53 protein resulting from mutation of p53 [32]. Several studies have reported a high frequency of p14/ARF promoter methylation in tumors without TP53 mutations [4,38].
An inverse correlation between TP53 mutations and epigenetic inactivation of p14/ARF in CRC does not always hold true [39]. Conversely, the p53/p14/ARF axis is considered the major pathway involved in the regulation of cell proliferation, apoptosis, and DNA repair [40,41]. Although these two proteins are mechanistically interdependent, this complex (p53/p14/ARF) was frequently deregulated, as shown by the strong association between p53 expression and p14/ARF methylation. In fact, we showed that the M (78.8 %) and hemi-methylated (92.2 %) patterns were observed in tumor samples with a lack of p53 expression. This result is logical, as p14/ARF methylation causes loss of p14/ARF functions and induces its absence from the nucleolus. As a result, it cannot bind to MDM2, resulting in MDM2 liberation. In this case, MDM2 acts as an oncogene, degrades p53 by ubiquitinylation, and blocks the normal cell cycle. In the literature, controversial results between p53 expression and p14/ARF inactivation have been observed not only in CRC but also in gastric and lung carcinomas [5,[42][43][44][45]. Eischen et al. [46] reported that control of p53 by p14/ARF occurs under specific stressful conditions and that its effects on p53 functions may depend on the p53/p14ARF pathway in some tumor types. If one gene is abnormal, the p53/p14/ARF pathway function is blocked. In conclusion, we found that the p53/p14/ARF pathway was frequently deregulated in our patients. Herein, we demonstrate that hypermethylation of p14/ARF occurs early during CRC tumorigenesis. However, we did not find any correlation between p14/ARF and survival. These results suggest that the p14/ARF methylation pattern may constitute a predictive factor of early-stage CRC, but it cannot be considered a prognostic factor. Finally, simultaneous assessment of p14/ARF methylation and abnormal expression of p53 may work as a biological indicator for early diagnosis of colorectal cancer, which may provide a theoretical basis for genetic intervention in clinical practice. With regard to our results, several promising prospects open up in modern oncology: (1) p14ARF methylation may be considered a powerful biomarker in early colorectal cancer diagnosis. (2) Knowing that the methylation process is a reversible phenomenon, demethylation could be considered as a targeted therapy.
Influence of cavity preparations and restorative procedures on stress distribution by finite element method

Floriano J. C. Bello, Carlos A. Cimini Jr1, Rodrigo C. Albuquerque2, Walison A. Vasconcellos3

Restorative Dentistry, Federal University of Minas Gerais; 1Department of Mechanical Engineering, School of Engineering, Federal University of Minas Gerais, Belo Horizonte, Minas Gerais; 2Department of Restorative Dentistry, Federal University of Minas Gerais, Belo Horizonte, Minas Gerais; 3Department of Dentistry, Estadual University of Montes Claros, Montes Claros, Minas Gerais, Brazil

INTRODUCTION

Dental surgeons frequently encounter difficulties in their practice, accompanied by doubts regarding the most appropriate therapeutic course to follow. A controversial matter and subject of considerable doubt is the treatment of teeth that have undergone extensive structural loss due to decay lesions and cavity formation [1,2]. It should be taken into consideration that the loss of structural integrity induces changes in biomechanical properties and influences the capacity to assimilate and distribute the occlusal loads along the structures involved in functional and parafunctional activities [3]. Literature reports such as those of Reeh, Douglas and Messer [4] state that endodontic treatment reduces the resistance of the dental element by only 5%, while cavity preparation results in a decrease of 20%; the mesiodistal occlusal cavity (MDO) reduces the resistance of the same group of teeth by 63%. Therefore, professionals should select an appropriate technique that minimizes the wear on the healthy dental structure and also induces minimum stress on the remaining structure; such a technique will decrease the fracture risk and permit carrying out restorations with a high long-term clinical success index. Thus, on selecting from the techniques and materials available in the market, it should be considered that none of these materials can replace the efficiency of the dental tissue in reestablishing the intimate and balanced relationship among the biological, mechanical, functional and aesthetic parameters. Several methodologies have been employed for investigating teeth and restorations subjected to the action of loads, among which the finite element method is the technique of choice [5,6,9-12]. The efficiency of this method is demonstrated by the good agreement of results obtained by numerical analysis. Considering that the loss of dental structural integrity induced by decay, cavity preparation, and endodontic access alters the biomechanical properties and influences the capacity of assimilation and distribution of occlusal loads, the purpose of this study was to determine the influence of cavity preparations and restorative procedures on the stress distribution of the upper incisor through the three-dimensional finite element method.

MATERIALS AND METHODS

In this study, the finite element method was employed with the software ANSYS, version 5.7. The geometric three-dimensional models were obtained by using the anatomy of the right upper central incisor as presented by Wheeler [13].
Nine finite element models were developed. In model 1, a healthy, decay-free tooth was modeled using enamel, coronary and radicular dentin, pulp, and cortical and cancellous bone. In models 2, 3, 4 and 5, dentin and enamel were removed from the tooth of model 1 to simulate the interproximal cavity preparations and endodontic access. The teeth in models 6, 7, 8 and 9 were restored with composite resin. Stress distribution analysis was carried out for the following cases:

1. Healthy tooth control - Model 1 [Figure 1]
2. Tooth with two conservative and extensive interproximal cavity preparations - Model 2 [Figure 2]
3. Tooth with endodontic access preparation - Model 3 [Figure 3]
4. Tooth with extensive interproximal cavity preparation and endodontic access - Model 4
5. Tooth with two conservative and extensive interproximal cavity preparations and endodontic access - Model 5
6. Restoration of model 2 with composite resin - Model 6
7. Restoration of model 3 with composite resin - Model 7
8. Restoration of model 4 with composite resin - Model 8
9. Restoration of model 5 with composite resin - Model 9

A large number of structures were used for analysis, considering conditions closely related to real life. After the preparation of the models, the materials (dental structures and/or restorative materials) of each volume of the models and their mechanical properties (Poisson's ratio and Young's modulus) were determined as shown in Table 1. The materials were considered homogeneous and isotropic, presenting a linear elastic behavior. Structure discretization was carried out by the generation of a network of finite elements formed by a set of subspaces called 'elements'. Tetrahedral elements with 10 nodes, called 'SOLID 92', were used. Table 2 presents the number of elements, nodes, and degrees of freedom of the models. The models were subjected to a static load of 100 N with an inclination of 45° at a distance of 2.0 mm from the incisal edge of the palatal tooth surface. To prevent displacement, the geometric models were immobilized by constraining the nodes on the upper portion of the cortical bone as well as the cortical bone nodes facing contiguous teeth, thus leaving the models free in the vestibulo-lingual direction.

RESULTS AND DISCUSSION

The stress distribution pattern (von Mises) of the models studied enabled us to conclude that the cavity preparations and restorative procedures present three significant areas of stress concentration relative to the healthy tooth: the areas of conservative interproximal cavity preparation, extensive interproximal cavity preparation, and endodontic access cavity. A summary of the maximal von Mises stresses is presented in Table 3.
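The model comparisons below are based on the von Mises equivalent stress, which condenses the six Cartesian stress components into a single scalar. As a minimal illustration (separate from the original ANSYS workflow), the following Python sketch computes the von Mises stress at a node; the stress values are hypothetical.

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the Cartesian stress components."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Hypothetical stress state (MPa) at a node near a cavity margin.
print(f"sigma_vM = {von_mises(12.0, -3.0, 1.5, 4.0, 0.5, 2.0):.1f} MPa")
```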
Concerning the conservative interproximal cavity preparation area, relative to the healthy tooth (8.3 MPa), the von Mises stress concentration increased by 80% in model 2 (14.9 MPa) and by 99% in model 5 (16.5 MPa). Therefore, endodontic access exacerbates the stress concentration in this area. In the extensive interproximal cavity preparation area, the healthy tooth demonstrated a maximal von Mises stress concentration of 10.7 MPa, which was significantly different compared to those in models 2, 4 and 5. The maximal von Mises stresses in models 2 (25.0 MPa) and 4 (29.3 MPa) represent increases of 134% and 174%, respectively, while model 5 (27.8 MPa) represents a 160% increase compared to the healthy tooth. With regard to the endodontic access cavity preparation area, the healthy tooth demonstrated a maximal von Mises stress concentration of 11.3 MPa. Models 3, 4 and 5 showed a significant increase in the stress concentration (maximal von Mises stress values of 24.4 MPa, 24.6 MPa and 25.7 MPa, respectively) compared to the healthy tooth. However, no significant variation was observed among them, indicating that the different interproximal cavity preparations do not affect the stress concentration in this area. The models restored with composite resin (models 6, 7, 8 and 9) exhibited a decrease in the stress in these areas on the order of ca. 28%; this shows the importance of the restorative procedure for the functional reestablishment of the tooth. It is worth indicating that the tooth/restoration interface was considered ideal in this study; that is, the tooth/restoration interface showed perfect adhesion, which is difficult to achieve in clinical practice. The main objective of restorative dentistry is to reestablish the biomechanical, functional and aesthetic principles of natural dentition through restorations that can withstand the masticatory load and the thermal variations that they are subjected to, along with a long life for the dental element. The application of a load onto a dental element can result in important structural modifications that may, in some cases, alter its morphology. The substitution of the dental structure by restorative materials, such as composite resins, leads to a considerable change in the biomechanical properties of the tooth. Consequently, it is important to understand these alterations. This study shows that the removal of healthy dental structure in cavity preparation alters the stress distribution pattern and renders the dental element more susceptible to fracture. The load assimilation capacity of teeth is improved after restoration. In 1989, Reeh, Douglas and Messer [4] demonstrated that endodontic procedures such as preparation of the endodontic access cavity, instrumentation, and filling affect only 5% of the relative rigidity of the tooth. The occlusal cavity preparation affects the relative rigidity of the tooth by 20%. The largest rigidity loss has been reported for the removal of the integrity of the marginal edge; MDO cavity preparation caused an average loss of 63% in tooth resistance. Magne and Douglas [14] observed an alteration in the biomechanical behavior of the anterior dentition. Endodontic procedures affected the resistance of the anterior dental structure more significantly, while class III cavity preparations are less harmful to the dental structure. In our study, the extensive interproximal cavity preparation presented the maximum stress concentration, followed by the
endodontic access and the conservative interproximal cavity.

CONCLUSIONS

Considering the results obtained with the methodology used in this study, the following conclusions can be drawn:
• Among the cavity preparation procedures, the maximum stress concentrations are associated with extensive interproximal preparations.
• A second cavity preparation implied alterations in the stress distribution induced by the first preparation, as distinct areas of the dental element exhibit higher or lower stress concentrations.
• Restoration with composite resin improved the load assimilation capacity of the dental element, indicating the importance of the restorative treatment.

Figure 2: External surface mesh of the studied models and applied load at the cross section. Figure 3: Graphical representation of the distribution of von Mises stresses developed on the dental structure, compared among the designed models.
Vibration induced memory effects and switching in ac-driven molecular nanojunctions

We investigate bistability and memory effects in a molecular junction weakly coupled to metallic leads, with the latter being subject to an adiabatic periodic change of the bias voltage. The system is described by a simple Anderson-Holstein model and its dynamics is calculated via a master equation approach. The controlled electrical switching between the many-body states of the system is achieved due to the polaron shift and Franck-Condon blockade in the presence of strong electron-vibron interaction. Particular emphasis is given to the role played by the excited vibronic states in the bistability and hysteretic switching dynamics as a function of the voltage sweeping rates. In general, both the occupation probabilities of the vibronic states and the associated vibron energy show hysteretic behaviour for driving frequencies in a range set by the minimum and maximum lifetimes of the system. The consequences on the transport properties for various driving frequencies and in the limit of DC bias are also investigated.

Introduction

Quantum switching, bistability and memory effects provide potential applications for molecular electronics [1,2,3,4]. Recent scanning-tunneling microscopy (STM) experiments [5,6,7,8,9] have shown bistability and multistability of neutral and charged states. Random and controlled switching of single molecules [10,11,12], as well as conformational memory effects [6,9,13,14], have been recently investigated. Other groups have observed memory effects in graphene [15,16,17] and carbon nanotubes [18,19,20]. Motivated by the experimental achievements, several groups [21,22,23,24,25,26,27] have attempted to theoretically explain these striking features invoking a strong electron-vibron coupling. In Ref. [21] charge-memory effects have been investigated in a polaron-modeled system using the equation-of-motion method for the Green's functions in the strong tunnel coupling regime. Similarly, in Ref. [23] these effects are associated with a polaron system treated within a simple mean-field approach. However, the hysteresis effects in Ref. [23] may be an artefact of the mean-field approximation, as pointed out by Alexandrov and Bratkovsky [28]. In Ref. [24] memory effects have been found in a polaron-modeled system taking the quantum dot as a d-fold-degenerate energy level weakly coupled to the leads and accounting for attractive electron-electron interactions. However, here a multiply degenerate energy level (d > 2) is required. In contrast, in Ref. [26], again the situation of weak coupling to the leads but with repulsive electron-electron interaction is considered. In this work, bistability, charge-memory effects and switching between charged and neutral states of a molecular junction have been explained within the framework of a polaron model, where an electronic state is coupled to a single vibronic mode. These features have been associated with the asymmetric voltage drop across the junction and the interplay between the time scales of voltage sweeping and of the quantum switching rates between metastable states in the strong electron-vibron coupling regime. In the weak tunnel coupling limit, a perturbation theory in the tunneling amplitude between the molecule and the leads is appropriate to describe electronic transport. In particular, such a perturbative treatment is valid if the tunneling-induced level width Γ is small enough compared to the thermal energy k_B T.
The lowest order in this expansion leads to sequential tunneling, which corresponds to the incoherent transfer of a single electron from a lead onto the molecule or vice versa. Moreover, it is known from transport theory that sequential tunneling is dominant as long as the dot electrochemical potential (i.e., the difference E_N − E_{N−1} between eigenvalues of the many-body Hamiltonian corresponding to states with particle number differing by unity) is located between the Fermi energies of the leads. A strong electron-vibron coupling can in turn qualitatively affect the sequential tunneling dynamics [29,30,31,32,33,34]. For strong coupling, the displacements of the potential surfaces for the molecule in a charged or neutral configuration are large compared to the quantum fluctuations of the nuclear configuration in the vibrational ground state. As a result, the overlap between low-lying vibronic states is exponentially small. This leads to a low-bias suppression of the sequential transport known as Franck-Condon (FC) blockade, which in turn is responsible for the bistability effects in [26]. In this paper we extend and improve the ideas of Ref. [26]. Specifically, we include the time dependence of the bias voltage explicitly, and derive a time-dependent master equation for the reduced density matrix of a single-level molecule coupled to a vibrational mode and weakly coupled to metallic leads. Moreover, we relax the assumption of fast relaxation of vibrons into their ground states and discuss the role played by the vibronic excited states in the switching dynamics. As in Ref. [26], we find that controlled electrical switching between metastable states is achieved due to the polaron shift and Franck-Condon blockade in the presence of strong electron-vibron interaction. Moreover, we find that the hysteresis effects can be observed in the switching dynamics only if the time scale of variation of the external perturbation, T_ex, is constrained to a specific range set by the minimum, τ_min, and maximum, τ_max, charge lifetimes of the system as a function of the applied bias. With λ being the dimensionless electron-vibron coupling, it holds τ_min ∼ Γ^{-1}, τ_max ∼ Γ^{-1} e^{λ^2}. Hence, a strong electron-vibron coupling (λ ≳ 1) is a necessary condition for the opening of this time scale window and thus of hysteresis. Such a large dimensionless electron-vibron coupling is not rare in conjugated molecules with soft torsional modes (e.g., biphenyl with different substituents, azobenzene), which have been experimentally proven to behave as conformational switches [12,13]. Very large reorganization energies (of the order of 1 eV) attributed to a polaron effect have also been observed in STM single-atom switching devices [5]. Also in this case the electron-phonon coupling should be large (λ ≳ 1) to justify the bistability. Outside this range the averaging over multiple charging events in the slow driving case, or over multiple driving cycles in the fast case, removes the hysteresis. The paper is organized as follows: In Section 2 the model Hamiltonian of a single-level molecule coupled to a vibronic mode is introduced. A polaron transformation is employed to decouple the electron-vibron interaction Hamiltonian and obtain the spectrum of the system. In Section 3 we derive equations of motion for the reduced density matrix for the case in which the leads are subject to an adiabatic bias sweep.
The time-dependent master equation is solved in the limit of weak coupling to the leads and important time-scale relations are derived. In Sections 4, 5 and 6, our main results on the memory effects are presented and analyzed for a sinusoidal perturbation of period $T_{\mathrm{ex}} = 2\pi/\omega$. In Section 4 the lifetimes of the many-body states of the system are calculated. We show that, for the case of asymmetric voltage drop across the junction, at small bias voltages a bistable configuration is achieved which plays a significant role in the hysteretic dynamics of the system. Bistability can involve also vibronic excited states of the system. In Sections 5 and 6 we give an explanation of the hysteretic behavior of the system in terms of characteristic time scales, in particular, the interplay between the time scale $T_{\mathrm{ex}}$ of variation of the external perturbation and that of the dynamics of the system set by $\tau_{\mathrm{switch}} \sim \tau_{\min} \sim \Gamma^{-1}$. Section 5 focuses on the regime $\omega \sim \Gamma$, while Section 6 addresses $\omega \ll \Gamma$. In the latter case the features observed in Ref. [26] can be successfully reproduced. In Section 7, the consequences on the transport properties in the DC limit are presented as a special case. Finally, we conclude in Section 8.

Model Hamiltonian

We consider a simple Anderson-Holstein model where the Hamiltonian of the central system is described as
$$\hat H_{\mathrm{sys}} = \hat H_{\mathrm{mol}} + \hat H_{\mathrm{vib}} + \hat H_{\mathrm{e\text{-}v}},$$
where $\hat H_{\mathrm{mol}}$ represents a spinless single molecular level modeled by the Hamiltonian
$$\hat H_{\mathrm{mol}} = (\varepsilon_0 + eV_g)\, \hat d^\dagger \hat d,$$
where $\hat d^\dagger$ ($\hat d$) is the creation (annihilation) operator of an electron on the molecule, $\varepsilon_0$ is the energy of the molecular level, and $V_g$ accounts for an externally applied gate voltage. For simplicity we assume a spinless state describing the molecular level with strong Coulomb interaction where only one excess electron is taken into account. The spin degeneracy would not qualitatively change the results of the paper. The vibron Hamiltonian can be written as
$$\hat H_{\mathrm{vib}} = \hbar\omega_0\, \hat a^\dagger \hat a,$$
where $\hat a^\dagger$ ($\hat a$) creates (annihilates) a vibron with energy $\hbar\omega_0$. Finally, the electron-vibron interaction Hamiltonian is expressed as
$$\hat H_{\mathrm{e\text{-}v}} = g\, \hat d^\dagger \hat d\, (\hat a^\dagger + \hat a),$$
where $g$ is a coupling constant.

Polaron transformation

In order to decouple the electron-vibron interaction Hamiltonian, we apply the canonical polaron unitary transformation [35]. Explicitly, we set $\tilde H_{\mathrm{sys}} = e^{\hat S} \hat H_{\mathrm{sys}} e^{-\hat S}$, where
$$\hat S = \lambda\, \hat d^\dagger \hat d\, (\hat a^\dagger - \hat a),$$
with $\lambda = g/\hbar\omega_0$ as the dimensionless coupling constant. The transformed form of the electron operator is $\tilde d = \hat d \hat X$, where $\hat X = \exp[-\lambda(\hat a^\dagger - \hat a)]$. In a similar way, the vibron operator is transformed as $\tilde a = \hat a - \lambda\, \hat d^\dagger \hat d$. Now the transformed form of the system Hamiltonian reads
$$\tilde H_{\mathrm{sys}} = \varepsilon\, \hat d^\dagger \hat d + \hbar\omega_0\, \hat a^\dagger \hat a,$$
where $\varepsilon = \varepsilon_0 + eV_g - g^2/\hbar\omega_0$ is the polaron energy with polaron shift $\varepsilon_p = g^2/\hbar\omega_0$. The polaron eigenstates of the system are $|n, m\rangle_1 := e^{-\hat S} |n, m\rangle$, where $n$ denotes the number of electrons on the molecular quantum dot, while the quantum number $m$ characterizes a vibrational excitation induced by the electron transfer to or from the dot.

Sequential tunneling

We analyze the transport properties of the system in the limit of weak coupling to the leads. The Hamiltonian of the full system is expressed as
$$\hat H = \hat H_{\mathrm{sys}} + \hat H_T + \sum_\alpha \hat H_\alpha,$$
where $\alpha = s, d$ denotes the source and the drain contacts, respectively. The tunneling Hamiltonian is given by
$$\hat H_T = \sum_{\alpha\kappa} \left( t_\alpha\, \hat c^\dagger_{\alpha\kappa} \hat d + \mathrm{h.c.} \right),$$
where $\hat c^\dagger_{\alpha\kappa}$ ($\hat c_{\alpha\kappa}$) creates (annihilates) an electron in lead $\alpha$. The coupling between molecule and leads is parametrized by the tunneling matrix elements $t_s$ and $t_d$. Here, we consider the weak coupling regime so that the energy broadening $\Gamma$ of molecular levels due to $\hat H_T$ is small, i.e., $\Gamma \ll \hbar\omega_0, k_B T$, and a perturbative treatment of $\hat H_T$ in the framework of rate equations is appropriate.
For simplicity, we assume that the tunneling amplitude t s/d of lead s/d is real and independent of the momentum κ of the lead state. In addition, we consider a symmetric device with t s = t d . Finally, the time dependent lead Hamiltonian is described byĤ The above equation describes the lead Hamiltonian of noninteracting electrons with dispersion relation ε κ . The timevarying chemical potential ∆µ α (t) of lead α depends on the applied bias voltage, and yields a κ-independent shift of all the single-particle levels. Time dependent master equations for the reduced density matrix In this section, we briefly derive the equation of motion for the reduced density matrix (RDM) of the molecular junction accounting for the time-dependence, Eq. (12), of the lead HamiltonianĤ α (t). We restrict to the lowest nonvanishing order in the tunneling Hamiltonian. Nevertheless, due to the explicit time dependance in the leads Hamiltonian, this work represents an extension of previous studies on similar systems (see e.g., Refs. [33,34,36,37,38,39,40,41,42,43,44,45,46,47,48,49]). The method is based on the well known Liouville equation for the time evolution of the density matrix of the full system consisting of the leads and the generic quantum dot. To describe the electronic transport through the molecule, we solve the Liouville equation for the reduced density matrixρ red (t) = Tr leads {ρ(t)} in the interaction picture, where the trace over the leads degrees of freedom is taken. In the above equation,Ĥ I T (t) is the tunneling Hamiltonian in the interaction picture to be calculated as below: where ζ α (t) = t t0 ∆µ α (t ′ )dt ′ . We make the following approximations to solve the above equation: (i) The leads are considered as reservoirs of noninteracting electrons in adiabatic thermal equilibrium. Note that this implies that the time scale of variation of the external perturbation has to be large compared to the relaxation time scale of the reservoirs (cf. Eq. (19) below). We assume the coupling between system and reservoirs has been switched on at time t = t 0 and consider a factorized initial condition. Thus at times t ≥ t 0 it holds ρ I (t) = ρ I where the correction in the tunnelling Hamiltonian drops in the second order master equation (see Eq. (16)). Here ρ s/d = 1 Z s/d e −β(Ĥ s/d (t)−µ s/d (t)N s/d ) denotes the thermal equilibrium grandcanonical distribution of lead s/d, Z s/d is the partition function, β the inverse of the thermal energy,N s/d the electron number operator, and µ s/d (t) = µ 0 +∆µ s/d (t) is the time dependent chemical potential of lead s/d which depends on the applied bias voltage. Note that the levels shift is taken into account by the time-dependent perturbation ∆µ s/d (t), while the change in chemical potential is taken into account accordingly via the chemical potential µ s/d (t) so that the net positive or negative charge accumulation in the leads is avoided. Conventionally, we take the molecular energy levels as a fixed reference and let the bias voltage drop across the source and drain contacts through the Fermi energies as [52] where 0 ≤ η ≤ 1 describes the symmetry of the voltage drop across the junction. Specifically, η = 0 corresponds to the most asymmetric situation, while η = 1/2 represents the symmetric case. In addition, we consider a sinusoidally-varying bias voltage, i.e., where ω is the frequency of the driving field. (ii) Since we assume weak coupling of the molecule to the leads, we treat the effects ofĤ T perturbatively up to second order. 
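To make the driving and voltage-drop conventions concrete, the following minimal Python sketch implements the sinusoidally driven chemical potentials and the Fermi function entering the rates below. It is an illustration only, not the authors' code: the helper names, the overall sign convention for which contact carries the drop at η = 0, and the example amplitude eV₀ = 40ħω₀ are assumptions made here; the values k_BT = 0.2ħω₀ and ω = 0.002ω₀ follow the parameters quoted with Figure 1.

```python
import numpy as np

# Minimal sketch (assumptions as noted above); energies in units of hbar*omega0.

def bias(t, eV0=40.0, omega=0.002):
    """Sinusoidal bias e*Vb(t) = e*V0*sin(omega*t), cf. the drive in the text."""
    return eV0 * np.sin(omega * t)

def chem_potentials(t, eta=0.0, mu0=0.0, **kw):
    """Source/drain chemical potentials for voltage-drop asymmetry eta.
    eta = 0 is the fully asymmetric case, eta = 1/2 the symmetric one; which
    contact moves at eta = 0 is a convention chosen here, not taken from the paper."""
    eVb = bias(t, **kw)
    return mu0 - eta * eVb, mu0 + (1.0 - eta) * eVb

def fermi(E, kT=0.2):
    """Fermi function f(E - mu), with E measured from the chemical potential."""
    return 1.0 / (1.0 + np.exp(np.clip(E / kT, -500, 500)))

# Example: at the positive-bias maximum with eta = 0 the source stays pinned.
mu_s, mu_d = chem_potentials(t=np.pi / (2 * 0.002), eta=0.0)
print(mu_s, mu_d)   # -> 0.0 and 40.0 (the whole drop sits on the drain contact)
```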
Accounting for the time-evolution as in Eq. (14) of the leads creation/annihilation operators, we find: In the derivation of the above equation we have used the relation: is the Fermi function, and the cyclic property of the trace. By summing over κ we obtain the generalized master equation (GME) for the reduced density matrix in the forṁ where the correlation function F α (t − t ′ , µ 0 ) of lead α [see Appendix A] has, in the wide band limit, the following form: Here D α is the density of states of lead α at the Fermi level. (iii) Since we are interested in the long-term dynamical behavior of the system, we set t 0 → −∞ in Eq. (17). Furthermore, we replace t ′ by t− t ′′ . We then apply the Markov approximation, where the time evolution ofρ I red is taken only local in time, meaning we approximateρ red (t − t ′′ ) ∼ρ red (t) in Eq. (17). In general the condition of time locality requires that [50] Here we defined from Eq. (17) together with Eq. (18), Γ α = 2π |t α | 2 D α as the bare transfer rates and Γ = α Γ α as the tunneling-induced level width. Notice that the validity of the Markov approximation, justified in this case, is crucially depending by the order of the current cumulant and the order of the perturbation expansion in the tunnelling coupling [51]. Finally, the condition of adiabatic driving Eq. (19) allows to approximate ζ α (t)−ζ α (t−t ′′ ) = ∆µ α (t)t ′′ . Taking into account these simplifications, the generalized master equation (GME) for the reduced density matrix acquires the forṁ where Since the eigenstates |n, m 1 ofĤ sys are known, it is convenient to calculate the time evolution ofρ I red in this basis. For a generic quantum dot system, this projection yields a set of differential equations coupling diagonal (populations) and offdiagonal (coherences) components of the RDM. For the simple Anderson-Holstein model Eq. (1) coherences and populations are, however, decoupled. In the sequentialtunneling regime, the master equation for the occupation probabilities P m n = 1 n, m|ρ red |n, m 1 of finding the system in one of the polaron eigenstates assumes the forṁ where the inequality Γ ≪ ω 0 ensures the applicability of the secular approximation, i.e., the separation between the dynamics of populations and coherences. In the numerical treatment of these equations we truncate the phonon space. Convergence is reached already with 40 excitations. In Eq. (21) the coefficient Γ m ′ →m n ′ →n denotes the transition rate from |n ′ , m ′ 1 into the many body state |n, m 1 , while Γ m→m ′ n→n ′ describes the transition rate out of the state |n, m 1 to |n ′ , m ′ 1 . Taking into account all possible single-electrontunneling processes, we obtain the incoming and outgoing tunneling rates, in the wide band limit, as where the terms describing sequential tunneling from and to the lead α are proportional to the Fermi functions , respectively. Notice that the integrations over energy and time introduce the explicit time dependance in the Fermi functions. The factor F mm ′ = | m|X|m ′ | 2 is the Franck-Condon matrix element which can be calculated, withX defined in Section 2.1, explicitly using Appendix C. The sum rules m F mm ′ = m ′ F mm ′ = 1 are well satisfied because of the completeness of each vibrational basis set |0, m and |1, m ′ 1 . This factor describes the wave-function overlap between the vibronic states participating in the particular transition. 
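Since this Franck-Condon factor governs all the rates used below, a short numerical sketch may be helpful. It evaluates F_{mm'} through the standard closed form in terms of associated Laguerre polynomials, consistent with the expression quoted in Appendix C; the helper name fc_factor and the truncation at 60 vibronic levels are choices made here for illustration, not the authors' code.

```python
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre

def fc_factor(m, mp, lam):
    """Franck-Condon factor F_{mm'} = |<m|X|m'>|^2 for dimensionless shift lam,
    via the standard associated-Laguerre closed form (cf. Appendix C)."""
    lo, hi = min(m, mp), max(m, mp)
    d, x = hi - lo, lam**2
    return np.exp(-x) * x**d * factorial(lo) / factorial(hi) \
        * eval_genlaguerre(lo, d, x)**2

lam = 5.0
# Ground-to-ground transitions are exponentially suppressed (FC blockade):
print(fc_factor(0, 0, lam))       # e^{-25} ~ 1.4e-11
# F_{1 m'} vanishes exactly at m' = lam^2 = 25 (L_1^{24}(25) = 0), anticipating
# the plateaus discussed in Section 4:
print(fc_factor(1, 25, lam))      # 0.0
# Sum rule sum_{m'} F_{m m'} = 1, truncated at 60 vibrons (ample for lam = 5):
print(sum(fc_factor(3, mp, lam) for mp in range(60)))   # ~ 1.0
```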
It contains essential information about the quantum mechanics of the molecule and significantly influences the transport properties of the single-molecule junction. Within the rateequation approach, the (particle) current through lead α is determined by and it is in general time dependent. Moreover, differently from the stationary case, in general I L (t) = −I R (t). The charge is though not accumulating on the dot since, for the average quantities it holds I L,av = −I R,av , as it can be easily proved considering that the average charge on the dot oscillates with the same period T ex of the driving bias. Finally, in the DC limit ω → 0 the relation I L (t) = −I R (t) holds as the fully adiabatic driving allows to reach the quasi-stationary limit at all times. Lifetimes and bistability of states In this section, we show that when the bias voltage drop is asymmetric across the junction, upon sweeping the bias, one can tune the lifetime of the neutral and charged states to achieve a bistable system. The lifetime of a state is obtained by calculating the switching rate of that state. The lifetime τ nm of a generic quantum state |n, m 1 is given by the sum of the rates of all possible processes which depopulate this state, i.e., and it defines, at least on a relative scale, the stability of the state |n, m 1 . Thus, at finite bias voltage, the inverse lifetime of the 0-particle mth vibronic state is given by the relation In a similar way, the inverse lifetime of the 1-particle and mth vibronic state is expressed as A consequence of Eqs. (27) and (28) is that, due to the characteristic features of the Franck-Condon matrix elements, in the strong electron-vibron coupling regime, the tunneling with small changes in m − m ′ is suppressed exponentially. Hence only some selected vibronic states contribute to the tunneling process. However, tunneling also depends on the bias voltage and temperature through the Fermi function. To proceed further, let us focus first on the lifetime of the 0-and 1-particle ground states for the case of fully asymmetric coupling of the bias voltage to the leads, i.e., η = 0: One can see from Eq. (29) that if in the considered parameters range is ε + m ′ ω 0 ≫ µ 0 , i.e., f (ε + m ′ ω 0 − µ 0 ) → 0, then the second term in the bracket is negligible. The first term is nonzero at large positive bias, while at large negative bias it remains negligible. In a similar way one can analyze the behavior of τ −1 10 in which the first term on the r.h.s. of Eq. (30) will be dominating at large negative bias. In order to understand the mechanism of this process the energy-level scheme for the relevant transitions in a coordinate system given by the particle number N and the grandcanonical energy E −µ 0 N shown in Figure 1. We choose V g = 0 and µ 0 = 0. Moreover, the polaron energy levels are at resonance with the 0-particle states for our chosen set of parameters: we set ε p = ε 0 and hence ε = 0. Then the only transitions allowed at zero bias are ground state ↔ ground state transitions. At finite bias also transitions involving excited vibronic states become allowed. In particular, at V b = 0 it follows from Eqs. (29), and (30) that whereas In practice the asymptotic behaviors are already reached at e|V b |/ ω 0 ∼ 2λ 2 as observed in Figure 1(b). Note that τ max and τ min set the maximum and minimum achievable lifetimes which, due to τ max /τ min ∼ e λ 2 , can differ by several orders of magnitude for λ > 1. 
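As a numerical companion to Eqs. (29) and (30), the sketch below evaluates the inverse lifetimes of the two vibronic ground states across the bias window and then checks the time-scale ordering τ_min < T_ex < τ_max for the parameters of Figure 1. The voltage-drop sign convention, the helper names, and the vibron-space truncation are assumptions of this illustration.

```python
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre

# Parameters as quoted with Fig. 1: lam = 5, eps = 0, kT = 0.2,
# Gamma_s = Gamma_d = 0.006 (all in units of hbar*omega0).

def fc(m, mp, lam):
    lo, hi, x = min(m, mp), max(m, mp), lam**2
    return np.exp(-x) * x**(hi - lo) * factorial(lo) / factorial(hi) \
        * eval_genlaguerre(lo, hi - lo, x)**2

def fermi(E, kT=0.2):
    return 1.0 / (1.0 + np.exp(np.clip(E / kT, -500, 500)))

def inv_lifetimes(eVb, lam=5.0, Gs=0.006, Gd=0.006, eps=0.0, M=60):
    """Inverse lifetimes of |0,0> (charging) and |1,0> (discharging), eta = 0."""
    mu = {"s": 0.0, "d": eVb}            # full drop assumed on the drain contact
    t00 = sum(G * fc(0, mp, lam) * fermi(eps + mp - mu[a])
              for a, G in (("s", Gs), ("d", Gd)) for mp in range(M))
    t10 = sum(G * fc(mp, 0, lam) * (1.0 - fermi(eps - mp - mu[a]))
              for a, G in (("s", Gs), ("d", Gd)) for mp in range(M))
    return t00, t10

for eVb in (-60.0, 0.0, 60.0):
    print(eVb, inv_lifetimes(eVb))
# Near eVb = 0 both rates are ~ Gamma*exp(-lam^2): the bistable region.

# Time-scale window for hysteresis, tau_min < T_ex < tau_max:
Gamma, lam, omega = 0.012, 5.0, 0.002
print(1 / Gamma, 2 * np.pi / omega, np.exp(lam**2) / Gamma)
# -> roughly 83 < 3142 < 6e12: the sinusoidal drive sits inside the window.
```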
Note also that near zero bias the lifetimes are so long that the system never likes to charge or discharge and a bistable situation is reached. A selective switching, however, can occur upon sweeping the bias voltage. Hence τ min also sets the time scale for switching: τ min ∼ τ switch . Analogously, we can explain the behavior of the lifetimes of the excited states (see Figure 2). It follows that in the considered parameters range, in general, the 0-particle The blue thin line represents the inverse lifetime of the 0-particle state (n = 0), while the thick dashed red line refers to the 1-particle state (n = 1). The asymmetry parameter is η = 0 and we fix the zero of the energy at the leads chemical potential at zero bias: µ0 = 0. The energy of the molecular level is ε0 = 25 ω0. The electron-vibron coupling constant is λ = 5 yielding a polaron shift εp = ε0. Finally, the thermal energy is kBT = 0.2 ω0, the frequency of the driving field is ω = 0.002ω0, and Γs = Γ d = 0.006ω0. vibronic states are stable at large enough negative bias voltage, while the 1-particle vibronic states are stable at large positive bias. There is, however, an interval of bias voltage, the so-called bistable region, where both states |1, m ′ 1 and |0, m 1 are metastable for not too large m and m ′ , as shown in Figure 2. Moreover, m steps are observed in the inverse lifetimes τ −1 nm (see Figures 2(b-f)) because for certain values of the coupling constant λ some of the FC factors F mm ′ vanish or are exponentially small such that the additional channels opened upon increasing the bias voltage do not have pronounced contribution. For instance, the FC factor for the first excited vibronic state can be described as which vanishes for m ′ = λ 2 . That is why a plateau around eV b / ω 0 = 25 in Figure 2(b) is observed for our chosen parameters. Analogously, using Eq. (57), one can find (cf. Appendix D) that F 2m ′ has two minima at Hence two plateau can be observed (see Figure 2(c)) around eV b / ω 0 = 20 and eV b / ω 0 = 31. Similar arguments can be extended to explain the steps in the inverse lifetimes of higher excited states. This also implies that the bias window for bistability shrinks for excited states and even disappears for large enough m. It follows that the major contribution in bistability is coming from low excited vibronic states. Note that the bistability of the many body states is crucial for the hysteresis and hence memory effects which is discussed in the next section. Finally, a closer inspection of Figure 2 reveals that the minimum of the inverse lifetime increases with the vibronic quantum number m. This effect can be understood easily by analyzing the minimum of the inverse lifetime for each particle state. For example the minimum of the inverse lifetime for the 0-particle vibronic ground state is, cf Eq. (33), whereas for the 0particle vibronic first excited state is From Eqs. (33) and (36), one can conclude that τ −1 . A similar explanation can be extended to the higher excited states. For gate voltages such that eV g > 0, the 1-particle vibronic excited states are becoming unstable faster than the 0-particle states (see Figure 3(a)-(c)), while for large negative gate (eV g < 0), the 0-particle states are getting unstable fast (see Figure 3(d)-(f)). In order to explain this effect, we analyze the shift of the inverse lifetime of the 0-particle vibronic first excited state, τ −1 01 , as follows: The maximum of the inverse lifetime for V g = 0 is whereas the minimum is given by Eqs. 
(37) and (38) imply that both minimum and maximum of τ −1 01 shift by an equal amount and the condition of the bistability region can be tuned by setting V g . Quantum switching and hysteresis Neutral and charged (polaron) states correspond to different potential energy surfaces and transitions between low-lying vibronic states are strongly suppressed in the presence of strong electron-vibron interaction. This leads to the bistability of the system. Upon applying an external voltage, one can change the state of this bistable system obtaining under specific conditions hysteretic chargevoltage and current-voltage curves. Here it is crucial to point out that only if the time scale of variation of the external perturbation is shorter than the maximum lifetime but longer than the minimum lifetime of the system hysteresis can be observed, i.e., τ min ∼ τ switch < T ex < τ max . Due to τ max > T ex , the system stays in the stable state during the sweeping until the sign of the perturbation changes, the former stable state becomes unstable and, due to T ex < τ min , a switching to the new stable state can occur. In this section we now consider the situation when ω ∼ Γ, i.e., T ex ∼ τ switch while in Section 6 the regime ω ≪ Γ, i.e., T ex ≫ τ switch is addressed. In Figures 4 and 5 we present the populations of the electronic states, P n = m P m n , as well as of the vibronic states, P m = n P m n , respectively. Specifically, in Figure 4(a)-(b), we have plotted the populations of the 0and 1-particle electronic states as a function of normalized bias voltage, where hysteresis loops can be seen. In Figure 4(c), instead, we have shown the population of the 0-particle electronic state as a function of time. The latter can be used to determine the time τ switch of switching between the neutral and charged states. In a similar way, the sweeping time T ex of the bias voltage can be calculated using Figure 4(d). By comparison of these two time scales, it is apparent that the switching time is of the same order as the sweeping time and much shorter than the lifetime in the bistable region (see Figure 1). The relation τ switch ≈ T ex also explains why the switching between the neutral and charged state is on average never complete (P 0 oscillates between 0.2 and 0.8). In Figure 5, the populations of the vibronic states as a function of the normalized bias voltage are shown, while in Figure 6 the populations of the different vibronic states resolved for different charges have been plotted. Clearly not only the vibronic ground states (which were considered in Ref. [26]) show hysteretic behavior but the vibronic excited states also exhibit these interesting features. Furthermore, inspection of these figures reveals that even after relaxation on the stable limit cycle, the vibronic excited states are highly populated in the non-stationary case in contrast to the stationary case ω → 0 (see e.g., Figures 15 and 17) where the population of the excited states is strongly suppressed. Finally, while the general trend is a reduction of the population, the higher the excitation and the populations are negligible for m ≈ 40, an interesting behaviour can be recognized in the form of the limit cycles. Namely, upon sweeping the bias we find that, for m ≫ 8 the probability grows at large biases, it stays essentially constant for m ≈ 8 and it decreases at larger biases for m < 8. The interpretation of this behaviour is still unclear to us. 
All these observation confirm, though, that it is natural to take into account the vibronic excited states in the dynamics of the system. I − V characteristics The hysteretic behavior of the bistable system is also reflected in the current as a function of normalized bias (see Figure 7) where a hysteresis loop (single loop) is observed in the current calculated both at the left and the right lead. Interestingly, the left and the right currents differ by more than a sign, in contrast to the stationary case. This behavior is understandable again in terms of relaxation time scales. In fact, for voltages |V b | outside the bistable region the system relaxes to the stationary regime on a time scale τ switch . Though, since the driving time T ex has the same order of magnitude, the stationary regime cannot be reached. Yet, no net charge accumulation occurs since I L,av = −I R,av . In Figure 8, we plot the left time dependent current as a function of the normalized bias for different values of the electron-vibron coupling constant. An inspection of this figure reveals that the width of the hysteresis loop decreases and shifts from zero bias upon decreasing the coupling constant λ. This feature can be understood by observing that for λ = 5 the polaron shift ε p does not longer compensate the energy of the molecular level ε 0 , and hence the polaron energy ε = 0. In other words, the system is no longer behaving symmetrically upon exchange of the sign of the bias voltage. If we consider e.g. the case λ = 1 is, for V g = µ 0 = 0, ε/ ω 0 = 24. In turn this implies that τ −1 00 (V b = 0) ∼ 0 and τ −1 10 (V b = 0) ∼ Γ s + Γ d , i.e., the region around zero bias is no longer bistable as for the case λ = 5. Hence the dot is preferably empty at zero bias. Switching however can be reached upon increasing V b in the region around eV b ∼ ε. Overall however the bistability region has shrunk. Similar considerations apply to the other considered values of λ. Vibron energy In this section, we illustrate the role played by the vibronic energy in the hysteretic behavior of the system. The vibron energy of the whole system can be expressed as where the trace is taken over the system degrees of freedom. The normalized vibronic energy as a function of normalized bias voltage is depicted in Figure 9(a), where hysteretic loops are also observed. The value of the vibronic energy, together with the observation that the probability distribution is relatively flat over the excitations (see Fig. 6) ensures that, depending on the bias, between 10 and 20 vibronic excited states are considerably populated. Further insight in the dynamics of the system is obtained by considering the correlation between the vibronic energy and the charge occupation. The vibron energy associated with the 0-particle state is determined by the relation withρ 0 =ρ red |0, m 11 0, m|. In Figure 9(b), the normalized vibronic energy as a function of normalized bias voltage for the 0-particle configuration has been plotted. The hysteresis loop resembles that of Figure 4(a) implying a direct correlation between the vibronic energy and the population of the neutral state i.e., the more the neutral state is occupied the higher is the associated vibronic energy. Qualitatively the result can be explained as follows: transitions from the charged to the neutral states are predominantly involving low energy charged states and highly excited neutral states. 
Due to energy conservation and asymmetric bias drop these transitions are confined to the large negative biases where the highly excited neutral states show also a long life time. This situation remains roughly unchanged during the up sweep of the bias until the symmetric condition is obtained at high positive bias and the charged excited states are maximally populated. Finally, the bistability around zero bias explains the hysteresis. The analytical expression for the vibronic energy of the 1-particle state is given by withρ 1 =ρ red |1, m 11 1, m|. The normalized average vibron energy as a function of normalized bias voltage for the 1-particle configuration is sketched in Figure 9(c), where we can observe a hysteresis loop resembling that of Figure 4(b). In conclusion, the vibron energies also show hysteretic behavior, in analogy to the population-voltage and currentvoltage curves, in the non-stationary limit. Testing lower driving frequencies When lowering the driving frequency ω (ω ≪ Γ) of the external perturbation, we choose ω = 2 × 10 −6 ω 0 , our model displays features similar to those presented in Ref. [26]. In more detail, we show the population of the electronic states as a function of normalized bias and time in Figure 10(a)-(b), Figure 10(c), respectively, whereas in Figure 10(d) the normalized bias as a function of time is shown. In this case the population-voltage curve is slightly different from Figure 4 because the transition between 0 and 1 occurs more abruptly as a function of V b and it is complete. Indeed, for the parameter chosen in Figure 10 is In other words the frequency is small compared to the charge/discharge rate. The system thus follows adiabatically the changes of the bias voltage and only switches at those values of the bias where τ n0 ∼ τ switch . The time-dependent left current as a function of normalized bias is shown in Figure 11(a) giving two loops, one for positive bias sweeping and the other for negative sweeping. The right current is shown in Figure 11(b). Due to the extremely low frequency the currents substantially fulfill the quasi-stationary relation I L (t) = −I R (t) associated to a fully adiabatic regime. In Figure 12, we present the populations of the vibronic states and hysteretic loops are visible. Vibronic states with quantum numbers up to λ all display nonvanishing populations, much less than in the case T ex ≈ τ switch . 7 The DC-case (ω → 0) In this section, we consider the limit (ω → 0) of DCbias as a special case of the master equation presented in the previous section and compare the results. Even if the system still exhibits the bistable properties discussed in Section 4 (they are in fact not related to the sweeping time of the bias) the hysteretic behavior cannot be observed anymore. In Figure 13, we present the population of the electronic states for gate voltage V g = 0. At large negative bias the system is empty, while at large positive bias it is charged. The system makes transitions from the 0-to 1particle state near zero bias. Analogously, in Figure 14, the population of electronic states as a function of normalized bias is depicted for gate voltage eV g / ω 0 = 8. Due to a finite ε, the transition 0 → 1 occurs at positive bias voltages. Moreover, the populations of the vibronic states are sketched in Figure 15 for gate voltage V g = 0, which clearly shows that, for the considered parameters, only the vibronic ground state and first excited state are populated, whereas the populations of higher excited states are very small. 
This is in contrast to the non-stationary case where the excited states are highly populated (see Figure 5). In a similar way, the populations of the vibronic states for gate voltage eV g / ω 0 = 8 are presented in figure 16 where higher excited states also get populated. Finally, in Figures 17 and 18 we show the populations of the 0- Fig. 13. (Color online). Population of (a) the 0-particle electronic state, (b) the 1-particle electronic state. The value of gate voltage is Vg = 0, and the frequency of the driving field is ω ≪ 1/τmax. The other parameters are the same as used in Figure 2. and 1-particle vibronic states for gate voltages V g = 0 and eV g / ω 0 = 8, respectively, which basically provide the same information as mentioned before. I − V characteristics for the DC-case In the DC-case the analytical expression for the current remains the same as given by Eq. (24) taking into account a time independent bias. Let us first discuss the situation when the 0-and 1-particle states are in resonance, ε = ε 0 − ε p = 0 and V g = 0. In this particular case, an interesting behavior of the I − V characteristics with two opposite current peaks around zero bias can be observed (see Figure 19). In order to understand the mechanism of this process, we consider the source current which can be expressed in the form At V b = 0 only ground to ground state transitions are open and P 0 0 = P 0 1 = 1 2 . Hence, from Eq.(42) one deduces that in this region the current is zero. At large positive bias, i.e., V b → ∞, the current is zero because the system is in a 1-particle stable state and no new transition channel is available. For finite bias, the behavior of the Franck-Condon factor F mm ′ is of importance. In particular, it suffices to investigate the classically allowed transitions as determined by the Franck-Condon parabola [32,53]. The minimum of the parabola is for m = m ′ ∼ λ Moreover, F mm ′ attains the maximal values for F mm ′ = F m0 or F mm ′ = F 0m ′ and m or m ′ of the order of λ 2 . Hence Fig. 17 describes a threshold effect. The populations P m 1 of the 1-particle states are mirror symmetric with respect to the bias inversion (not shown). Analogously, we can analyze in the same way as above the current peak in Figure 20 which occurs at eV b ∼ ε for gate voltage eV g / ω 0 = 8. Conclusions In conclusion, we analyzed the quantum switching, bistability and memory effects in a single level system within the framework of the polaron model, where the electronic state is weakly coupled to metallic leads under AC-bias and strongly coupled to a vibrational mode. We showed that the bistability arises if the quantum switching between neutral and charged states involved is suppressed, e.g., due to Franck-Condon blockade. In the case of an asymmetric junction, the neutral and charged states can be unstable at one polarity but stable at the other polarity of bias voltage. Under an appropriate choice of parameters, the stability regions of the two states overlap, which results in a bistable region in a certain interval of bias voltage. Taking into account non-stationary effects, in particular the interplay between time scales of variation of the external perturbation and the switching time of the system, we demonstrated electrically controlled hysteretic behavior of the system. Furthermore, we showed that vibronic states and average vibron energies also show hysteretic behavior like the ones shown by the population-voltage and current-voltage curves. At the end, we also discussed the case of a DC-bias. 
In this case the population-voltage and current-voltage curves get single valued. Interestingly, one can observe current peaks in the I − V characteristics of the system when given vibronic channels contribute to transport. Moreover, we found that in the AC-case the vibronic excited states can be highly populated, while in the stationary case the population of the excited states is strongly decreased. port of Kohat University of Science & Technology, Kohat-26000, Khyber Pakhtunkhwa, Pakistan. We thank D. A. Ryndyk for useful discussions. A Calculation of the correlator F α (t − t ′ , µ 0 ) Here we calculate the correlation function F α (t − t ′ , µ 0 ) of lead α in the wide band limit. From Eq. (16) we can write: where D α is the constant density of states of lead α. To simplify the above equation, we use the following relation: The first term leads to the result The simplified form of the second part in (44) reads: Due to symmetry the cosine component of the integral vanishes. One can further use the following relation: Putting all together, the correlation function gets the final form (in the wide band limit) as This function characterizes the correlation which exists on average between events where a lead electron is destroyed at time t ′ and another is created at time t. It thus provides very important information about the time scales which control the relaxation dynamics of the leads. B Evaluation of an integral In order to solve Eq. (20) and obtain the populations of the many-body states, one needs to evaluate the following integral: Substituting the correlation function F t ′′ , µ α (t) using Eq. (49) in the above equation, we obtain To simplify the above relation, one can use the following formula: Using Eq. (52), we can write Eq. (51) as After some calculations, we obtain where µ α (t) = µ 0 + ∆µ α (t). C Evaluation of transition matrix elements of the electron operator To determine the transition rates, we need to calculate the matrix elements r|d|s = e − 1 2 |λ| 2 F (λ, m, m ′ ) , where |r and |s represent the eigenstates given by Eq. (9). The function F (λ, m, m ′ ) determines the coupling between states with a different vibronic number of excitations with effective coupling λ and is expressed as [42,54] F (λ, m, m ′ ) = where m min/max = min/max(m, m ′ ). The coefficient F mm ′ in Eqs. (27) and (28) is defined as F mm ′ = e −λ 2 F 2 (λ, m, m ′ ). D Expression for the FC factor F 2m ′ Using Appendix C the expression for F 2m ′ follows to be
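The closed-form expression announced at the end of Appendix D is not legible in this copy of the text. As a hedged substitute, the short check below evaluates F_{2m'} numerically (same Laguerre closed form as above, an assumption consistent with Appendix C) and locates its two interior minima for λ = 5; they fall at m' = 20 and m' = 31, matching the plateau positions quoted in Section 4.

```python
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre

lam = 5.0
x = lam**2

def F(m, mp):
    lo, hi = min(m, mp), max(m, mp)
    return np.exp(-x) * x**(hi - lo) * factorial(lo) / factorial(hi) \
        * eval_genlaguerre(lo, hi - lo, x)**2

vals = {mp: F(2, mp) for mp in range(5, 45)}
mins = [mp for mp in range(6, 44)
        if vals[mp] < vals[mp - 1] and vals[mp] < vals[mp + 1]]
print(mins)   # -> [20, 31]: near-zeros of L_2^{m'-2}(lam^2) behind Fig. 2(c)
```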
2012-09-19T14:55:12.000Z
2012-05-22T00:00:00.000
{ "year": 2012, "sha1": "569f04a42b6995677728eea23e72cef22270d4fc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1205.4927", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "569f04a42b6995677728eea23e72cef22270d4fc", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
270854906
pes2o/s2orc
v3-fos-license
Heterogeneity of fibroblast activation protein expression in the microenvironment of an intracranial tumor cohort: head-to-head comparison of gallium-68 FAP inhibitor-04 (68Ga-FAPi-04) and fluoride-18 fluoroethyl-L-tyrosine (18F-FET) in positron emission tomography-computed tomography imaging

Background: Cancer-associated fibroblasts (CAFs) within the tumor microenvironment (TME) can interact with tumor parenchymal cells to promote tumor growth and migration. Fibroblast activation protein (FAP) expressed by CAFs can be targeted with positron emission tomography (PET) tracers, but studies on FAP expression patterns in intracranial tumors remain scarce. We aimed to evaluate FAP expression patterns in intracranial tumors with gallium-68 FAP inhibitor-04 (68Ga-FAPi-04) and immunohistochemical staining and to observe the interactions between CAFs and tumor cells with a head-to-head comparison of 68Ga-FAPi-04 and fluoride-18 fluoroethyl-L-tyrosine (18F-FET) for PET quantification analysis.
Methods: We prospectively enrolled 22 adult patients with intracranial mass lesions. 68Ga-FAPi-04 and 18F-FET PET-computed tomography (PET/CT) brain imaging was performed before surgery. The maximal tumor-to-brain ratio (TBRmax), metabolic tumor volume (MTV), and total lesion tracer uptake (TLU) were obtained, and different thresholds were used for 68Ga-FAPi-04-positive lesion delineation owing to the lack of relevant guidelines. The MTV and TLU ratios of the two tracers were calculated. Linear regression was applied to assess the differential efficacy of the semiquantitative PET parameters.
Results: A total of 22 patients with a mean age of 50±13 years (range, 27-69 years) were enrolled. Heterogeneous patterns of 68Ga-FAPi-04 uptake [median maximal standardized uptake value (SUVmax) = 3.8; range, 0.1-19.1] were found. More malignant tumors, including brain metastasis, glioblastoma, and medulloblastoma, generally exhibited more significant 68Ga-FAPi-04 uptake than did the less malignant tumors, while the SUVmax and TBRmax exhibited nonsignificant differences across the three intracranial lesion groups of primary brain tumor, brain metastasis, and noncancerous disease (SUVmax: P=0.092; TBRmax: P=0.189). Immunohistochemistry staining showed different stromal FAP expression status in the various intracranial lesions. In 15 patients with positive 68Ga-FAPi-04 intracranial tumor uptake, the MTVFAPi:MTVFET ratio had differential efficacy in various types of intracranial tumors [95% confidence interval (CI): 0.572-7.712; P=0.027], and further quantification analyses confirmed the differential ability of the MTVFAPi:MTVFET ratio (95% CI: −0.045 to 11.013, P=0.052; 95% CI: 0.044-17.903, P=0.049; 95% CI: −1.131 to 30.596, P=0.065) with different isocontour volumetric thresholds.
Conclusions: This head-to-head study demonstrated heterogeneous FAP expression in intracranial tumors. The FAP expression volume percentage in tumor parenchyma may therefore offer benefit with respect to differentiating between intracranial tumor types.
Introduction

Intracranial tumors exhibit substantial heterogeneity, which can be attributed to both tumor parenchymal cells and benign cells in the tumor microenvironment (TME). Gliomas, the most common central nervous system tumors, exemplify this, demonstrating pronounced intratumoral and intertumoral heterogeneity (1-3). Interactions between glioma cells and adjacent TME cells in the tumor stroma occur through various mechanisms, promoting tumor pathophysiological behaviors such as proliferation, migration, and angiogenesis (4,5).
Positron emission tomography (PET) imaging with radio-labeled amino acid tracers such as fluoride-18 fluoroethyl-L-tyrosine (18F-FET) can contribute to brain tumor grading, differential diagnosis, prognostication, treatment planning, and monitoring (6-8). The Response Assessment in Neuro-Oncology (RANO) working group endorses this imaging modality as a valuable complement to magnetic resonance imaging (MRI) in all stages of glioma management (9). Although these radio-labeled amino acid tracers have appreciable efficacy, they primarily target L-type amino acid transporters (LATs), which are transmembrane proteins expressed in tumor cells (10,11).
Cancer-associated fibroblasts (CAFs), capable of secreting growth factors and inflammatory cytokines, are critically involved in the interactions between tumor cells and stromal cells (12,13). Fibroblast activation protein (FAP), a transmembrane glycoprotein, may be overexpressed by CAFs in the TME (14,15). Radio-labeled fibroblast activation protein inhibitors (FAPis) have demonstrated efficacy in imaging CAF activity across a range of solid tumors with satisfactory results. As there is significant FAP accumulation in the stroma of malignant tumors and satisfactory tissue contrast, FAP-targeted imaging has efficacy in malignant tumor detection, tumor delineation, metastatic lymph node recognition, tumor staging and restaging, and radiotherapy planning (16-18). Research suggests that FAP-targeted imaging can influence treatment decisions after the detection of extra-lymph node metastasis in breast carcinomas (19). FAPi imaging also aids in the differentiation of malignant transformation of pancreatic intraductal papillary mucinous neoplasms (20). While there is some evidence indicating that certain types of gliomas overexpress FAP (21), data on FAP expression in other intracranial tumors have not been well established in the literature and should be pursued further. Moreover, the correlation between the volume of FAP in the TME and the degree of malignancy in various intracranial tumors is of considerable interest.
This prospective, head-to-head study applied gallium-68 FAPi-04 (68Ga-FAPi-04) and 18F-FET PET-computed tomography (CT) imaging to patients with intracranial tumors before surgery to investigate the spectrum of FAP expression. Immunohistochemical staining was used to examine the patterns of FAP expression. A quantification analysis of PET parameters was employed to characterize the relationship between FAP expression volume in the tumor stroma and the degree of malignancy.

Ethical approval

This study was conducted according to the Declaration of Helsinki (as revised in 2013). Ethical approval of our previously written study protocol and consequent analytical design was obtained from the Ethics Committee of Huashan Hospital, Fudan University (No.
2021-891), and informed consent was obtained from all individual participants. The study protocol was not registered on a public platform and did not involve any interventional procedures.

Study design

Adult preoperative patients with intracranial mass lesions were enrolled. Anatomic MRI obtained after symptom occurrence was collected and reviewed for initial diagnosis by outpatient neurosurgeons in Huashan Hospital before admission for further investigation and treatment. Patients who had already received treatment, including surgery, radiosurgery, radiotherapy, or chemotherapy, were excluded from the study. Enrolled adult patients with intracranial tumors received 68Ga-FAPi-04 and 18F-FET PET/CT brain imaging before surgery from October 2022 to March 2023 in the Neurosurgery Department of Huashan Hospital, Fudan University. Semiquantitative imaging parameters and immunohistochemical staining were obtained for the diagnostic evaluation of this head-to-head study. Written informed consent from patients for dual-tracer PET/CT imaging and follow-up analysis was obtained.

Imaging protocols

The 68Ga-FAPi-04 and 18F-FET tracers were synthesized in the Department of Nuclear Medicine & PET Center of Huashan Hospital, Fudan University.
For 18F-FET PET/CT imaging, patients fasted for a minimum of 4 hours prior to imaging. A 20-minute static scan was conducted in 3-dimensional (3D) mode with a Biograph mCT Flow Edge 128 PET/CT system (Siemens Healthineers, Erlangen, Germany) 20 minutes after intravenous bolus injection of 18F-FET (185±17.0 MBq). Attenuation correction was performed using low-dose CT (tube current = 150 mAs, voltage = 120 kV, acquisition = 64×0.6 mm, convolution kernel = H30s, slice thickness = 5 mm, interslice gap = 1.5 mm) prior to the emission scan. Post acquisition, PET images were reconstructed using the ordered subset expectation maximization (OSEM) algorithm with a Gaussian filter and a full width at half maximum of 3.5 mm at the center of the field of view.
In 68Ga-FAPi-04 PET/CT imaging, 30 minutes after intravenous bolus injection of 68Ga-FAPi-04 (185±29.2 MBq), a 30-minute static scan was conducted in 3D mode with a uMI510 PET/CT (United Imaging, Shanghai, China). Attenuation correction was similarly performed using low-dose CT prior to the emission scan. PET images were also reconstructed using the OSEM algorithm with a Gaussian filter and the same full width at half maximum after acquisition.

Image analysis

PET/CT images were analyzed with a syngo.via workstation (Siemens Healthineers). Two experienced nuclear medicine physicians (W.Z. and T.H., with over 6 and 13 years of experience, respectively) performed blinded 68Ga-FAPi-04 and 18F-FET PET/CT positive-lesion judgement and lesion delineation before surgical treatment.
Structural MRI was read before PET/CT lesion delineation. For 18F-FET PET/CT imaging, the mean standardized uptake value (SUVmean) of the brain background was measured in a crescent-shaped area encompassing both gray and white matter on the hemisphere contralateral to the lesion. Subsequently, 1.6 times the background SUVmean was used for lesion delineation, and the maximal standardized uptake value (SUVmax), metabolic tumor volume (MTV), and total lesion tracer uptake (TLU) were obtained. The maximal tumor-to-brain ratio (TBRmax) was calculated by dividing the intracranial lesion SUVmax by the background SUVmean.
For 68Ga-FAPi-04 imaging, the background SUVmean was measured similarly to that of 18F-FET PET. Lesion SUVmax and TBRmax were measured. Owing to the lack of guidelines for FAPi-positive lesion delineation and the extremely low uptake of the brain background, a series of isocontour volumetric thresholds, including 20%, 30%, 40%, and 50% of the lesion SUVmax, were used for the MTVFAPi and TLUFAPi measurements. The different MTVFAPi:MTVFET and TLUFAPi:TLUFET ratios were calculated for further analysis.
Pathological diagnosis was completed in the Pathology Department of Huashan Hospital, Fudan University. Characteristic surgical resection slices were selected for FAP immunohistochemistry staining as per the FAP kit instructions (Abcam, Cambridge, UK). Briefly, heat-mediated antigen retrieval with buffer was applied before FAP immunohistochemical staining. Tissue sections were incubated at 4 ℃ overnight with a 1:250 anti-fibroblast activation protein antibody (RRID: AB_207178; Abcam, Cambridge, UK). Streptavidin-biotin complex was used for incubation before staining and visualization.
FAP immunohistochemical staining scores were assigned by two independent pathologists who were blinded to the patients' clinical and PET/CT imaging analysis results. The scoring method was applied as previously described (22), with 0 indicating complete absence or very minimal FAP staining in less than 1% of the evaluation area, 1 indicating weak FAP immunohistochemical staining in 1% to 10% of the evaluation area, 2 indicating moderate FAP immunohistochemical staining in 11% to 50% of the evaluation area, and 3 indicating strong FAP immunohistochemical staining in over 50% of the evaluation area.

Statistical analysis

Descriptive statistics are expressed as the mean and standard deviation or the median and range. The t-test and one-way analysis of variance were used to compare continuous variables. The Wilcoxon signed-rank or Kruskal-Wallis test was performed if the variables were not normally distributed. Linear regression analysis was applied to investigate the relationship between pathological diagnosis and PET parameters. The variance inflation factor was used to control for multicollinearity. Intraclass correlation coefficients (ICCs) for PET parameter measurements and FAP immunohistochemical scoring were assessed, and the results were classified as poor (less than 0.2), fair (0.21-0.4), moderate (0.41-0.6), good (0.61-0.8), and very good (0.8-1.0). All statistical analyses were performed with Stata version 17 (StataCorp, College Station, TX, USA). In all analyses, P<0.05 was considered to indicate a statistically significant difference.
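As a sketch of how these semiquantitative parameters can be computed from an SUV image volume, the following Python fragment applies the two delineation rules described above (1.6 × background SUVmean for 18F-FET; 20-50% isocontours of lesion SUVmax for 68Ga-FAPi-04). The array names, the synthetic Gaussian "lesion", and the voxel volume are hypothetical; the clinical analysis in this study was performed on a syngo.via workstation, not with this code.

```python
import numpy as np

def fet_metrics(suv, bg_mean, voxel_ml):
    """18F-FET rule: lesion = voxels above 1.6 x background SUVmean."""
    mask = suv > 1.6 * bg_mean
    return {"SUVmax": suv.max(), "TBRmax": suv.max() / bg_mean,
            "MTV": mask.sum() * voxel_ml,            # metabolic tumor volume (ml)
            "TLU": suv[mask].sum() * voxel_ml}       # total lesion tracer uptake

def fapi_metrics(suv, voxel_ml, thresholds=(0.2, 0.3, 0.4, 0.5)):
    """68Ga-FAPi-04 rule: isocontours at 20-50% of the lesion SUVmax."""
    out = {}
    for t in thresholds:
        mask = suv > t * suv.max()
        out[f"MTV@{int(t*100)}%"] = mask.sum() * voxel_ml
        out[f"TLU@{int(t*100)}%"] = suv[mask].sum() * voxel_ml
    return out

# Toy example with a synthetic Gaussian 'lesion' in a 21^3 voxel block:
z, y, x = np.mgrid[-10:11, -10:11, -10:11]
suv = 8.0 * np.exp(-(x**2 + y**2 + z**2) / 30.0)
print(fet_metrics(suv, bg_mean=1.0, voxel_ml=0.064))
print(fapi_metrics(suv, voxel_ml=0.064))
# The paper's MTV_FAPi : MTV_FET ratio is then simply
# fapi_metrics(...)['MTV@20%'] / fet_metrics(...)['MTV'].
```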
Patient characteristics

This study enrolled 22 patients, including 14 men and 8 women, with a mean age of 50±13 years (range, 27-69 years). Consecutive 68Ga-FAPi-04 and 18F-FET PET/CT imaging scans were performed with an interval of at least 24 hours for each patient in our cohort. Surgical treatment was performed within 8 days after dual-tracer PET/CT and included 20 surgical resections and 2 stereotactic biopsies. Primary brain tumor, brain metastasis, and noncancerous disease were observed. Eight specific histological diagnoses were identified according to the 2021 World Health Organization (WHO) Classification of Tumors of the Central Nervous System, which included WHO grade 4 glioblastoma isocitrate dehydrogenase wild type (IDH-wt), metastatic carcinoma, oligodendroglioma WHO grade 2 IDH-mutant (IDH-mu) and 1p/19q-codeleted, noncancerous lesions, WHO grade 4 astrocytoma IDH-mu, WHO grade 3 astrocytoma IDH-mu, WHO grade 4 medulloblastoma not otherwise specified (NOS), and WHO grade 1 ganglioglioma. Demographic details of the cohort are provided in Table 1.

Analysis of patient-based PET semiquantitative parameters

ICCs showed very good interreader agreement for both 18F-FET and 68Ga-FAPi-04 PET/CT parameter measurements (ICC >0.95, P<0.001; ICC >0.87, P<0.001), and the results of reader one (W.Z.) were used for analysis.
Out of the 16 68Ga-FAPi-04-positive patients, 15 with surgically treated intracranial tumors were included in the quantitative analysis; the corresponding results are summarized in Table 2 and Figure 4, and the scatter plot matrix of PET parameters and pathological diagnosis is provided in Figure 5.

FAP immunohistochemistry and 68Ga-FAPi-04 PET imaging analysis

The FAP immunohistochemical scores from the two pathologists were in satisfactory agreement (ICC >0.92; P<0.001). In primary brain tumors, moderate FAP expression in the tumor stroma and perivascular areas could be observed in most of the patients with WHO grade 4 glioblastoma IDH-wt. Mild FAP expression was observed in the tumor stroma and perivascular areas in patients with WHO grade 4 and grade 3 astrocytoma IDH-mu. Scant-to-mild FAP expression was found in the tumor stromal areas of patients with low-grade gliomas, including oligodendroglioma and ganglioglioma. Mild FAP expression was found in the tumor stroma of the patient with medulloblastoma but not in the perivascular region. In brain metastasis, significant tumor stroma, perivascular, and small-vessel epithelial expression was found. In patient 13, who had a noncancerous lesion, moderate FAP expression was present in the stromal cells in inflamed regions, the perivascular region, the small-vessel epithelium, and the gliotic region; meanwhile, mild FAP expression was observed in the small-vessel epithelium and gliotic region in patient 10 (Figure 6). The lesion median SUVmax was 0.4 (range, 0.2-1.3), 4.0 (range, 3.2-5.8), and 10.4 (range, 8.9-19.1) in patients with FAP immunohistochemical scores from 1 to 3, respectively. The lesion FAPi SUVmax was significantly correlated with the FAP immunohistochemical score (r²=0.70; P<0.001) (Figure 2).
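The correlation reported above can be reproduced computationally as follows. Note that the per-patient arrays below are invented stand-ins for illustration only, loosely inspired by the quoted medians and ranges; they are not the study data.

```python
import numpy as np

# Hypothetical per-patient values (NOT the study cohort), one SUVmax per
# patient grouped by immunohistochemical score 1, 2, or 3:
score = np.array([1, 1, 1, 2, 2, 2, 2, 3, 3, 3])
suvmax = np.array([0.2, 0.4, 1.3, 3.2, 4.0, 4.6, 5.8, 8.9, 10.4, 19.1])

r = np.corrcoef(score, suvmax)[0, 1]
print(f"r^2 = {r**2:.2f}")   # prints r^2 for these stand-ins; the paper
                             # reports r^2 = 0.70 on its actual cohort
```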
Discussion

Despite the distinct characteristics of FAP expression across a wide range of carcinomas, limited evidence exists regarding its expression patterns in intracranial tumors. Our study found heterogeneous uptake of 68Ga-FAPi-04 across our cohort, with generally significant uptake noted in patients with brain metastasis, glioblastoma, and medulloblastoma. The immunohistochemical results showed varied FAP expression patterns in primary tumors, brain metastasis, and noncancerous disease. In FAPi-positive intracranial tumors, the MTVFAPi:MTVFET ratio, a semiquantitative parameter which can be used to reflect the quantified CAF activity relative to tumor parenchyma, showed promising differential potential in intracranial tumors.
Limited research on FAP in gliomas suggests there to be increased FAPi uptake in high-grade gliomas (21). Other analyses indicate that FAPi uptake in high-grade gliomas partly correlates with tumor perfusion but not with the cell density of these gliomas (23). Our study expanded the scope to an intracranial tumor cohort, and FAP expression status was generally associated with the degree of tumor malignancy, in accordance with previous investigations. Immunohistochemical analysis indicated that the differences in FAP expression were mainly present in areas of the TME, including the perivascular region. These patchy and spot-like FAP expressions in the intracranial tumor stroma correlated with 68Ga-FAPi-04 uptake. Based on this, we hypothesized that FAP expression in the perivascular stromal region is associated with tumor lesion perfusion, while the cellular density status of the intracranial tumor is not correlated with its FAP expression status, which would explain the absence of a relationship between FAPi uptake and the apparent diffusion coefficient of the tumor.
Radio-labeled amino acid tracers have satisfactory diagnostic efficacy in brain tumors (9,24). 18F-FET PET imaging delineated a wide range of intracranial tumors in our cohort well, even some small lesions. However, TME cell-targeting tracers can offer new possibilities for evaluating the heterogeneities of intracranial tumors in addition to tumor cell-targeted imaging. The TME consists of a range of benign cells that interact with tumor cells, influencing growth, migration, and recurrence (25-27). FAP expression has been observed in most epithelial neoplasms (28). Due to the unique classification and characteristics of the central nervous system (29), moderate FAP expression was present in most of the patients with WHO grade 4 glioblastoma IDH-wt in our cohort. These patients typically have a very poor prognosis, and the expression pattern of FAP could provide valuable data for future FAP-targeted theranostic methods, serving as a complement to classical treatment procedures.
Lesion delineation is vital for devising a treatment plan. Radio-labeled amino acid tracer imaging can be used to outline lesions (30). There is evidence supporting the potential of FAP-targeted imaging in the differentiation and delineation of intracranial malignant tumors. It is reasonable to suggest that FAPi imaging can complement current imaging protocols for intracranial malignancies. Besides FAPi imaging, comprehensive analysis of anatomic MRI and amino acid PET imaging is necessary to exclude occasional benign lesions with FAP expression. The costs of FAP-targeted imaging are similar to those of widely clinically used tracers, and the potential benefits of noninvasive recognition of the degree of malignancy and evidence-guided treatment planning via FAP-targeted imaging should be considered.
There is currently no standard for brain lesion delineation in FAP-targeted imaging. Considering the characteristics of brain tumors and the FAP expression status in the normal brain background, we applied 20% of the lesion SUVmax as the threshold for the initial observation in the 68Ga-FAPi-04 PET analysis. Further observations were investigated with 30%, 40%, and 50% of the lesion SUVmax being used as thresholds. This strategy was employed to control the influence of delineation fluctuation owing to the extremely low uptake of the brain background. Given the significant tumor-to-brain ratio of FAPi uptake, a 20% isocontour volumetric threshold is reasonable for lesion delineation. There is a growing number of studies on 68Ga-FAPi uptake in noncancerous diseases, including autoimmune disease, cardiovascular disease, and wound healing (31-34). The results of this varied research indicate the potential of FAPi in the molecular imaging of inflammation. In our cohort, the participants with noncancerous lesions also had 68Ga-FAPi-04 uptake, suggesting that caution should be taken in the diagnosis of positive 68Ga-FAPi-04 lesions. Beyond classical semiquantitative parameters, characteristic time-activity curve patterns from dynamic 18F-FET PET imaging could help to differentiate between glioma subtypes and exclude noncancerous lesions. Other imaging parameters should be considered for participants for whom the diagnosis is unclear.
Our study involved certain limitations which should be addressed. First, our head-to-head study included a relatively small sample size, and the results of this exploratory investigation need to be further confirmed with a more robust patient cohort. Second, this initial study mainly concentrated on cohort-level 68Ga-FAPi-04 and 18F-FET PET/CT imaging analysis; the combination of anatomic MRI parameters with those examined in our study would complement these results. More investigations, including histopathological validation of the different FAPi imaging threshold-based lesion delineations and imaging quantification analysis combined with anatomic MRI features, will doubtlessly contribute to furthering our understanding of the effects of CAFs in the intracranial TME.
Conclusions

Our study found heterogeneities in 68Ga-FAPi-04 uptake in this intracranial tumor cohort, with tumors of greater malignancy generally having elevated FAP expression, although the differences among the primary brain tumor, brain metastasis, and noncancerous disease groups were statistically nonsignificant. Immunohistochemical analysis revealed a diversity of FAP expression patterns. For patients with intracranial tumors showing positive 68Ga-FAPi-04 uptake, the MTVFAPi:MTVFET ratio demonstrated potential in differentiating between various intracranial tumors, suggesting that the interactions between intracranial tumor parenchyma and CAF cells in the TME are distinct across various types of tumorigenesis.
2024-07-01T15:04:45.403Z
2024-06-27T00:00:00.000
{ "year": 2024, "sha1": "41220680fec75f259633f8f5f4dc0e46794e5d3d", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.21037/qims-24-82", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e57b412601c758282199b28c795c0da67fca21e5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
245852533
pes2o/s2orc
v3-fos-license
Behaviour of One-Way Reinforced Concrete Cantilever Slabs with Openings

In some concrete structures, openings are provided to meet various utility requirements. These openings can affect the strength of structural members, so the behavior of reinforced concrete (RC) cantilever slabs containing openings is the subject of this study. Opening shape, number, and size are the main variables studied in this research. Five RC cantilever slabs were cast and tested: one without openings and four with openings. A significant effect of openings on the behavior of these slabs was found: the ultimate load decreased from 39 kN to 24.7 kN, while the deflection at ultimate load decreased from 67 mm to 35 mm.

Introduction
In order to pass sewage pipes, internet lines, and water supplies, it has become necessary to make openings in the slabs and roofs of concrete structures. These openings may sometimes be large, such as those for elevators or emergency stairs, and may be placed at a critical location in the structure for reasons related to the architectural design, requiring special construction measures. Openings in concrete structures may cause many problems, including reduced stiffness and resistance of the structural member, increased deflection, and the development of many cracks around the openings. This complex behavior of the structural member arises from the reduced area of concrete in the cross-section [1-4]. Boon [5] conducted an experimental study on one-way reinforced concrete (RC) slabs containing openings, strengthened with additional reinforcement. He observed that, for slabs without additional reinforcement, the ultimate load decreased by about 37% relative to the slab without an opening, while for slabs designed with additional reinforcement around the opening, the ultimate load decreased by 26-34%. Afefy and Fawzy [6] carried out a study on one-way RC slabs with openings using various strengthening techniques: NSM (near-surface mounted) reinforcement, ECC (engineered cementitious composites), and EB-CFRP (externally bonded carbon fiber reinforced polymer). They observed that the strengthened slabs gave about 50% higher resistance than the unstrengthened slabs. Al-Hafiz [7] conducted an experimental study on one-way RC slabs containing openings, using steel plates as the strengthening technique. The main variables were the thickness of the steel plates (2, 4, and 6 mm) and the thickness of the slabs (40, 60, and 80 mm). He observed that as the thickness of the steel plate increases, the loss of resistance due to the opening becomes small. The authors of [8] tested ten RC slabs, all containing openings except one reference slab. The dimensions of the openings varied, and several layers of CFRP strips were used to strengthen the slabs. They observed that increasing the opening area from 5% to 20% decreased the ultimate load by about 7%, and that using three layers of CFRP instead of one increased the ultimate load by up to 9%. In general, openings lead to a decrease in resistance; therefore, it is necessary to compensate for this decrease by taking special measures.
Because of the lack of research related to cantilever slabs, and because variations in the shape and number of openings have not been studied in previous research, this work aims to study the behavior of cantilever slabs containing openings of different numbers and shapes.

Experimental Program
Five RC cantilever slab models were cast and tested up to failure. All models had the same dimensions: length 2100 mm, width 600 mm, and thickness 140 mm. All models had the same reinforcement details, with main steel bars of 10 mm and secondary steel bars of 6 mm. Only one model was without an opening (solid slab), whereas the other four had openings. The edge of each opening was 150 mm from the center of the interior support. The models differed in the shape and number of openings, as listed in Table 1, where SS refers to the solid slab, SCO to the slab with a circular opening, SSO to the slab with a square opening, SDCO to the slab with double circular openings, and SDSO to the slab with double square openings. It is worth mentioning that the experimental program and materials tests were conducted at the laboratory of the civil engineering department of the University of Baghdad.

Concrete
Concrete was delivered by a mixer truck at the work site to avoid differences in the resistance of the models due to batching errors; the target cube crushing strength was 27 MPa at 28 days after casting.

Steel bar reinforcement
Two diameters of steel reinforcement were used: main longitudinal rebars of 10 mm and secondary distribution rebars of 6 mm. The yield stress was 610 MPa for the 10 mm rebars and 515 MPa for the 6 mm rebars.

Test Setup and Procedure
The load was applied at the free end of the model as a knife-edge load through a load cell with a maximum capacity of 500 kN. A linear variable differential transformer (LVDT) was placed under the load to measure the free-end deflection during the test. A roller support was placed under the slab at a distance of 850 mm from the free end, while the other end was supported by a steel frame acting as a fixed support. Figure 1 shows the layout of the reinforcing steel and the specimen setup of a typical tested slab, while Figure 2 shows the steel reinforcement distribution of the tested slabs.

Test Results and Discussion
The results of the tested slabs showed that slabs with one opening exhibited less degradation in strength than slabs with double openings, compared with the solid slab. Generally, slabs with circular openings showed better strength than the companion slabs with square openings. It is thus noticed that the shape of the opening has a significant effect on slab strength.

Conclusion
Openings in concrete structures have a significant effect on resistance: the ultimate load drop ranged from 8.97% to 36.58% relative to the reference slab (SS). The number of openings had a significant effect. Comparing SSO and SDSO, the SDSO model containing two openings showed a 36.58% decrease in resistance relative to the reference slab, while the SSO model showed a decrease of 17.45%. Comparing SDCO and SCO, it is also noticed that the decrease for the SDCO model containing two openings amounted to 31.28% relative to the SS slab.
From this it appears that an increase in the number of openings leads to a significant decrease in the ultimate load. The shape of the openings also affects the ultimate load: comparing SSO and SCO, the decrease in ultimate load was 8.97% for the SCO model and 17.43% for the SSO model, and similarly for SDCO and SDSO the ultimate load drops were 31.28% and 36.58%, respectively. The larger decrease in resistance of slabs with square openings is due to the sharp edges at the corners of the openings, which promote the development of more cracks.
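A quick back-of-envelope script can check the consistency of the reported reductions. Only the SS and SDSO ultimate loads (39 kN and 24.7 kN) are stated explicitly in the text; the other loads below are back-calculated from the reported percentage drops and should be treated as illustrative assumptions, so small rounding differences against the reported values are expected.

```python
# Ultimate loads of the tested slabs, in kN. SS and SDSO are from the text;
# SCO, SSO, and SDCO are assumed values back-calculated from the reported
# percentage drops (8.97%, 17.45%, 31.28%).
ultimate_loads_kN = {
    "SS":   39.0,   # solid reference slab
    "SCO":  35.5,   # single circular opening (assumed)
    "SSO":  32.2,   # single square opening (assumed)
    "SDCO": 26.8,   # double circular openings (assumed)
    "SDSO": 24.7,   # double square openings
}

p_ref = ultimate_loads_kN["SS"]
for slab, p_u in ultimate_loads_kN.items():
    drop = 100.0 * (p_ref - p_u) / p_ref
    print(f"{slab}: Pu = {p_u:5.1f} kN, drop = {drop:5.2f} %")
```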
2022-01-11T20:05:25.710Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "37518b061fafe022237295f98c8f8b3003fc2991", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1755-1315/961/1/012043", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "37518b061fafe022237295f98c8f8b3003fc2991", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
260177693
pes2o/s2orc
v3-fos-license
The effects of amino acid substitution of spike protein and genomic recombination on the evolution of SARS-CoV-2

Over more than three years of the 2019 novel coronavirus disease (COVID-19) pandemic, multiple variants and novel subvariants have emerged successively, outcompeted earlier variants, and become predominant. The sequential emergence of variants reflects the evolutionary process of mutation-selection-adaptation of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Amino acid substitutions, insertions, and deletions in the spike protein alter the antigenicity, transmissibility, and pathogenicity of SARS-CoV-2. Early in the pandemic, the D614G mutation conferred advantages over previous variants and increased transmissibility, and it also laid a conserved background for subsequent substantial mutations. The role of genomic recombination in the evolution of SARS-CoV-2 has raised increasing concern with the occurrence of novel recombinants such as Deltacron, XBB.1.5, XBB.1.9.1, and XBB.1.16 in the late phase of the pandemic. Co-circulation of different variants and co-infection in immunocompromised patients accelerate the emergence of recombinants. Surveillance of SARS-CoV-2 genomic variation, particularly spike protein mutation and recombination, is essential to identify ongoing changes in the viral genome and antigenic epitopes, and thus to support the development of new vaccine strategies and interventions.

Introduction
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a sister clade of SARS-CoV (Coronaviridae Study Group of the International Committee on Taxonomy of Viruses et al., 2020), has posed a global public health threat since its initial outbreak in December 2019 (Hu et al., 2020). On May 5, 2023, the World Health Organization (WHO) declared the end of the 2019 novel coronavirus disease pandemic as a Public Health Emergency of International Concern. At that time, the WHO reported a total of 765,222,932 cases and 6,921,614 deaths worldwide. Consistent with other coronaviruses, the genome of SARS-CoV-2 is a single-stranded positive-sense RNA of approximately 30,000 nucleotides, with replication mediated by RNA-dependent RNA polymerase (RdRP) (V'kovski et al., 2020; Li et al., 2020b). The 5'-terminus of the SARS-CoV-2 genome contains two open reading frames (ORFs), while the 3'-terminus contains the genes for the four major structural proteins in the following order: spike protein, envelope protein, membrane protein, and nucleocapsid protein (Bai et al., 2021). Despite the presence of error-correction enzymes, which contribute to a relatively high replication fidelity compared to other RNA viruses, SARS-CoV-2 still undergoes significant mutation (Robson et al., 2020; Domingo et al., 2021; Perales, 2021). The nucleotide mutation rates of SARS-CoV-2 are estimated to be 6.677 × 10⁻⁴ and 8.066 × 10⁻⁴ substitutions per site per year for the whole genome and the spike protein, respectively. Amino acid mutations in the spike protein play a crucial role in the evolution of SARS-CoV-2. The spike protein, which forms a trimeric fusion protein on the surface of the coronavirus, exhibits a crown-like appearance and serves as an ideal target for inducing neutralizing antibodies and protective immunity (Kang et al., 2021; Tian et al., 2021).
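To put the mutation rates quoted above in perspective, a rough calculation of the expected number of substitutions per year follows. It assumes the rates are expressed per nucleotide site per year and uses the Wuhan-Hu-1 reference lengths (29,903 nt genome; 3,822 nt spike gene); both assumptions are ours, not the cited study's.

```python
# Rough expected substitution counts implied by the quoted per-site rates.
GENOME_LEN_NT = 29_903   # Wuhan-Hu-1 reference genome length
SPIKE_LEN_NT = 3_822     # spike gene length in the same reference

rate_genome = 6.677e-4   # substitutions / site / year (whole genome)
rate_spike = 8.066e-4    # substitutions / site / year (spike)

print(f"whole genome: ~{rate_genome * GENOME_LEN_NT:.1f} substitutions/year")
print(f"spike gene:   ~{rate_spike * SPIKE_LEN_NT:.1f} substitutions/year")
```

This works out to roughly 20 genome-wide substitutions per year, consistent with the commonly cited figure of about two substitutions per month for SARS-CoV-2.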
The spike protein is composed of S1 and S2 subunits, and the receptor-binding domain (RBD) in the spike interacts with the human angiotensin-converting enzyme 2 (ACE2) receptor when activated, allowing the virus to enter cells (Conceicao et al., 2020; Hoffmann et al., 2020b; Zhang et al., 2021a). Mutations in the spike protein, particularly in the RBD, have altered spike-ACE2 recognition, resulting in viral immune escape and the failure of neutralizing antibodies (Magazine et al., 2022; Chen et al., 2023). Spike proteins are classified as open and closed forms according to the up and down conformations of the RBD, and mutations in the spike may change the RBD conformation (Walls et al., 2020; Wrapp et al., 2020). The D614G mutation, the substitution of amino acid D (Asp) by G (Gly), is conserved across all major variants (Wassenaar et al., 2022) and became predominant in the spike protein during the early stage of the pandemic (Chang et al., 2020). The D614G mutation has been shown to enhance furin proteolysis capacity by 50 times (Gobeil et al., 2021). Notably, the Omicron variant harbors more than 60 substitutions, deletions, and insertions, of which 15 rare mutations are found in the spike (Ma et al., 2022b). The spike protein of Omicron predominantly adopts closed conformations (Calvaresi et al., 2023), potentially leading to the failure of nearly all anti-spike monoclonal antibodies (Focosi and Casadevall, 2022; Turelli et al., 2022). In addition to point mutations in the spike protein, viral genomic recombination is common among coronaviruses (Yewdell, 2021), especially during the late pandemic phase when different variants co-circulate. According to the US Centers for Disease Control and Prevention (CDC), the most prevalent circulating strains in the US as of May 13, 2023, were XBB.1.5 (61.5%), XBB.1.9.1 (10.0%), and XBB.1.16 (9.4%) (Ma et al., 2023). The frequent occurrence of recombination makes it challenging to predict the effectiveness of vaccines targeting the spike protein, and recombination may confer altered transmissibility, virulence, and immune escape properties on the virus (Focosi and Maggi, 2022; Carabelli et al., 2023). The evolution of SARS-CoV-2 within the population follows the mutation-selection-adaptation theory of Darwinian evolution (Goldman, 2021; Figure 1). In this context of hypermutation, both innate and adaptive host immune responses drive mutation selection (Thorne et al., 2021), as we have previously discussed. The virus evolves to adapt to external selection pressures, and antigenic drift occurs as mutations gradually accumulate, affecting the virus's immunogenicity (Bano et al., 2021; Shapira et al., 2023). Antigenic drift facilitates viral evasion of the host immune response, particularly by affecting antibody neutralization, resulting in viral resistance to previous infection and vaccination (Cao et al., 2022c; Planas et al., 2023; Qu et al., 2023). The evolutionary trend tends to lower the pathogenicity but increase the transmissibility of variants, resulting in long-term retention of the virus in human hosts (Magiorkinis, 2023). In this review, we provide an overview of SARS-CoV-2, summarize the characteristic amino acid mutations in the spike protein, particularly in novel variants, discuss recent recombination events, and propose future perspectives on viral evolution and intervention strategies.

2. An overview of SARS-CoV-2

2.1. Nomenclature and timeline of SARS-CoV-2
Several nomenclatures have been introduced for SARS-CoV-2 according to the genetic relatedness of sequences, including GISAID, the Year-Letter (NextStrain) nomenclature, and the Phylogenetic Assignment of Named Global Outbreak LINeages (Pango lineage) (Rambaut et al., 2020). The GISAID nomenclature system is based on marker mutations within eight high-level phylogenetic groups, from the early split of S and L, to the further evolution of L into V and G, later of G into GH, GR, and GV, and more recently of GR into GRY. The Year-Letter nomenclature consists of the year in which the clade emerged and a capital letter restarting at A each year, for example 19A, 19B, 20A, 20B, 20C, and 20I. The Pango lineage system uses an alphabetical prefix and a numerical suffix to identify descendants and contains phylogenetic, genetic, and epidemiological information. The first letters represent the lineage label of the variant, in order from A to Z, then AA to AZ, BA to BZ, and so on. The subsequent numbers, separated by periods, indicate the branches of lineages. When a branch already carries three numeric suffixes, a new letter is used as the lineage label, assigned in alphabetical order; for example, C.1 is a branch of B.1.1.1 (O'Toole et al., 2022). Recombinant variants are named in a uniform nomenclature beginning with "X." To promote surveillance and research, the WHO categorized SARS-CoV-2 variants into three specific classes: variants of concern (VOCs), variants of interest (VOIs), and variants under monitoring (VUMs). VOCs are variants with high mutation and transmission rates. To date, Alpha, Beta, Gamma, Delta, and Omicron are the known VOCs, and each has become dominant in turn, globally or regionally. The Alpha variant (B.1.1.7) was discovered in the UK in September 2020 (du Plessis et al., 2021; Galloway et al., 2021). It proved to be highly transmissible and infectious, and became prevalent a few months later (Davies et al., 2021; Volz et al., 2021). The Beta variant (B.1.351) was first reported in South Africa in October 2020, and the Gamma variant (P.1) was first identified in travelers from Brazil in January 2021 (Fujino et al., 2021). The Delta variant (B.1.617.2) was isolated in India (Mlcochova et al., 2021) and quickly became the most prevalent variant worldwide in June 2021 (Mahase, 2021). The Omicron variant (B.1.1.529/BA sublineages) was first discovered in Botswana, southern Africa, in November 2021, and rapidly outcompeted other VOCs upon its emergence. Five major sublineages of Omicron, BA.1, BA.2, BA.3, BA.4, and BA.5, have been identified so far. Most recently, a series of novel Omicron subvariants have emerged, such as BA.2.75, BF.7 (Scarpa et al., 2023a), Deltacron (Kreier, 2022), XE (Rahimi and Bezmin Abadi, 2022b), XF (Chakraborty et al., 2022), BQ.1, BQ.1.1, XBB (Imai et al., 2023), XBB.1, XBB.1.5, and XBB.1.16 (Harris, 2023), and they have raised increasing concern. The timeline of the emergence of variants is illustrated in Figure 2A.
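The Pango aliasing rule described earlier in this section (C.1 as a branch of B.1.1.1) can be made concrete with a toy expansion function. The two-entry alias table below is a stand-in for the official table maintained by the Pango team; this is a minimal sketch, not production lineage-handling code.

```python
# Toy Pango-style alias handling. Real analyses should use the official
# alias table; this map is illustrative only.
ALIASES = {"C": "B.1.1.1"}

def expand(lineage: str) -> str:
    """Replace an aliased prefix with its full form, e.g. C.1 -> B.1.1.1.1."""
    prefix, _, rest = lineage.partition(".")
    full = ALIASES.get(prefix, prefix)
    return f"{full}.{rest}" if rest else full

def parent(lineage: str) -> str:
    """Drop the last numeric suffix of the expanded name."""
    return expand(lineage).rsplit(".", 1)[0]

print(expand("C.1"))   # B.1.1.1.1
print(parent("C.1"))   # B.1.1.1  (C.1 is a branch of B.1.1.1)
```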
Entry pathways of SARS-CoV-2 and hypotheses for VOCs
Two described entry pathways of SARS-CoV-2, through the cell membrane or through endosomes (Figure 2B), have been reviewed in detail previously (Shang et al., 2020; Hoffmann et al., 2020b; Rahbar Saadat et al., 2021; Jackson et al., 2021b; Lim, 2023). The two entry pathways differ in that S2' cleavage occurs either at the plasma membrane, by the transmembrane protease serine protease 2 (TMPRSS2) [such as in nasal epithelial cells, the lungs, and the bronchial branches, where TMPRSS2 is highly co-expressed with ACE2 (Lukassen et al., 2020; Sungnak et al., 2020)], or within the cell, by endolysosomal cathepsins such as cathepsin L (Bestle et al., 2020; Shang et al., 2020). The proteolytic site between the S1 and S2 subunits of the spike protein, known as the furin cleavage site (FCS), is cleaved by the host protease furin (Lavie et al., 2022). This cleavage is essential for the entry pathway and membrane fusion (Bestle et al., 2020; Hossain et al., 2021; Johnson et al., 2021; Peacock et al., 2021; Lavie et al., 2022). Optimization of the FCS has been shown to facilitate cell-cell fusion and thereby improve infectivity (Hoffmann et al., 2020a), increase transmissibility (Peacock et al., 2021), and promote pathogenesis. Multiple hypotheses have been proposed to explain the origin of VOCs (Mallapaty, 2022): (1) circulation in areas with limited genomic sequencing capacity; (2) circulation within animal hosts followed by spillover to humans; and (3) evolution in immunosuppressed hosts with chronic infection. In some regions, the limited capacity for genomic sequencing has resulted in a lack of testing of asymptomatic patients. It has been observed that asymptomatic carriers exhibit higher levels of antiviral immunity and lower levels of inflammation compared to symptomatic individuals (Yang et al., 2020b; Le Bert et al., 2021; Ma et al., 2022a). This immunological profile may create an environment conducive to viral evolution under immune pressure. There is evidence supporting the hypothesis of an animal host origin, with white-tailed deer (Hale et al., 2022; Marques et al., 2022) and farmed mink (Koopmans, 2021; Lu et al., 2021) identified as stable animal reservoirs for SARS-CoV-2. These variants have the potential to infect animals and accumulate mutations within animal reservoirs; the virus may then undergo further evolution, giving rise to new subvariants that can spill over to humans. The hypothesis of chronic infection in immunodeficient hosts is widely accepted in many scenarios. Chronic infection in such individuals is associated with ACE2 affinity, immune evasion, and optimization of viral packaging (Choi et al., 2020; Kemp et al., 2021; Harari et al., 2022; Wilkinson et al., 2022). This process drives the mutation profiles of the virus and enhances its fitness (Ghafari et al., 2022; Hill et al., 2022). Extensive immune escape has been observed in SARS-CoV-2 infections in immunocompromised hosts, such as patients with advanced HIV disease (Cele et al., 2022).

(Figure 1: Process of mutation-selection-adaptation in SARS-CoV-2 evolution.)

Spike protein mutations produce antigenic drift
Mutation profiles of the variants of concern (VOCs) exhibit certain overlapping patterns while also assuming distinct roles in the process of viral evolution, suggesting an underlying evolutionary resemblance among these variants. Notably, a common early substitution, D614G, is shared by all five VOCs and has been shown to significantly augment the binding affinity of the viral spike protein to the ACE2 receptor, consequently amplifying viral pathogenicity (Alkhatib et al., 2021; Wang P. et al., 2021; Zhang et al., 2021b; Venkatakrishnan et al., 2022).
Moreover, the substitution P681H has been identified in Alpha (Lubinski et al., 2022), Gamma (Fujino et al., 2021), and Omicron (Tian et al., 2022), and has been demonstrated to enhance viral cell entry. Conversely, the substitution P681R, occurring at the same position, has been observed to augment the replication capacity and pathogenicity of the Delta variant (Mlcochova et al., 2021; Saito et al., 2021; Liu et al., 2022). These mutations accumulate in a stepwise manner, progressively modifying the antigenic epitopes of the virus and ultimately leading from genetic drift to antigenic drift.

Spike mutations in current VOCs
For the Alpha variant (B.1.1.7), of the eight mutations in the spike protein, D614G, Del H69/V70 (an amino acid deletion at sites 69 and 70 of the spike protein), N501Y, and P681H are the most meaningful. The D614G mutation has been found to confer a fitness advantage by promoting efficient replication in primary airway cells, thereby increasing virulence and transmission (Hou et al., 2020; Korber et al., 2020; Ozono et al., 2021; Zhou et al., 2021). It also leads to alterations in spike conformation and enhanced FCS cleavage (Gobeil et al., 2021; Nguyen et al., 2021). However, it has also been observed that the D614G mutation renders the virus more susceptible to monoclonal antibodies by increasing epitope exposure, suggesting that it does not impede vaccine effectiveness (Hou et al., 2020; Weissman et al., 2020; Yurkovetskiy et al., 2020; Ozono et al., 2021). Del H69/V70 is associated with diagnostic test failure for probes targeting the spike protein, known as spike gene target failure (SGTF) (Bal et al., 2021). SGTF has been utilized as a reliable proxy for monitoring the prevalence of the B.1.1.7 variant (Bal et al., 2021; Borges et al., 2021; Kidd et al., 2021). N501Y has been shown to enhance the binding of the spike protein to human ACE2 receptors, potentially expanding the host range of SARS-CoV-2 (Starr et al., 2020; Chan et al., 2021; Zahradník et al., 2021; Wang et al., 2022c). P681H, located adjacent to the FCS, has been found to enhance the efficiency of FCS cleavage during virus entry into cells.

(Figure 2: Timeline, structure, and entry pathways of SARS-CoV-2. (A) The chronological order of the emergence of major SARS-CoV-2 variants. (B) The two pathways by which SARS-CoV-2 enters cells: the endosome pathway and the membrane pathway. ACE2, angiotensin-converting enzyme 2; TMPRSS2, transmembrane protease serine protease 2; S1, subunit 1 of the spike protein; FP, fusion peptide, responsible for membrane fusion; S1/S2, furin cleavage site between the S1 and S2 subunits of the spike protein; S2', another proteolytic site in subunit 2 of the spike protein.)

The Gamma variant (P.1) carries 12 mutations in the spike protein, including K417T, N501Y, and E484K (Faria et al., 2021). These three mutations collectively enhance the affinity of the spike protein for ACE2 receptors, thereby increasing the transmissibility of the Gamma variant. E484K is also associated with reduced neutralization by antibodies (Faria et al., 2021; Cele et al., 2021; Greaney et al., 2021; Wibmer et al., 2021).
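The overlap between VOC mutation profiles discussed in this section lends itself to simple set arithmetic. The profiles below are deliberately partial, containing only the mutations named in the surrounding text, so the output (D614G as the universally shared change) illustrates the point rather than reproducing full lineage definitions.

```python
# Illustrative, deliberately partial spike-mutation profiles assembled from
# the mutations named in the surrounding text.
voc_spike = {
    "Alpha": {"D614G", "Del H69/V70", "N501Y", "P681H"},
    "Gamma": {"D614G", "K417T", "N501Y", "E484K"},
    "Delta": {"D614G", "L452R", "T478K", "E484Q", "P681R"},
}

print("shared by all:", set.intersection(*voc_spike.values()))  # {'D614G'}

names = sorted(voc_spike)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"{a} & {b}: {sorted(voc_spike[a] & voc_spike[b])}")
```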
The Delta variant (B.1.617.2) harbors several mutations previously reported in other VOCs, including L452R, T478K, E484Q, D614G, and P681R in the spike protein. These mutations partly explain the rapid global spread of the Delta variant upon its emergence. The L452R mutation has been found to increase infectivity, modestly reduce susceptibility to neutralizing antibodies, and enhance viral fusogenicity, thereby promoting virus replication (Motozono et al., 2021). E484Q exhibits a similar reduced sensitivity to vaccine-induced neutralizing antibodies as L452R, but the two lack synergistic effects when combined (Motozono et al., 2021). Similar to P681H in Alpha, P681R in Delta increases FCS cleavage, resulting in enhanced transmissibility (Mlcochova et al., 2021; Saito et al., 2021; Wibmer et al., 2021). Studies have revealed that the Delta spike is more stable and binds ACE2 with higher affinity than the wild-type spike (Gomari et al., 2023). As discussed above, the evolution of pre-Omicron SARS-CoV-2 variants primarily centered on recurrent mutations at key residues of the spike protein, including D614, N501, P681, K417, and E484. However, with the emergence of the Omicron variant and its sublineages, the landscape has undergone a significant shift. The Omicron variant harbors over 30 spike mutations, with 15 of them occurring in the RBD (Kumar et al., 2021). Figure 3 illustrates the mutation profiles of the VOCs. In general, Omicron exhibits several distinctive characteristics compared to previous VOCs, including enhanced transmissibility, reduced antibody neutralization (resulting in lower vaccine effectiveness), altered tissue tropism, relatively lower pathogenicity, and an increased likelihood of reinfection. The higher transmissibility may be attributable to altered viral affinity for the ACE2 receptor. Multiple experimental observations have demonstrated that the binding affinity between the RBD of the spike protein and ACE2 is significantly higher for Omicron than for the wild type (Kumar et al., 2021; Abeywardhana et al., 2022; Cui et al., 2022; Hong et al., 2022). The mutations T478K, Q493R, Q498R, and N501Y collectively contribute to the increased binding affinity through electrostatic effects (Kumar et al., 2021; Abeywardhana et al., 2022). However, another study found that Omicron exhibits binding affinity to ACE2 comparable to that of wild-type SARS-CoV-2 and weaker than that of the Delta variant. This discrepancy may stem from differences in the surface plasmon resonance methodologies employed in the studies, necessitating further research. The sublineages of Omicron display variations in their ACE2 affinity, with BA.2 exhibiting the highest affinity, followed by BA.3, BA.1, BA.2.75, and BA.5 (Abeywardhana et al., 2022). Furthermore, Omicron variants exhibit reduced sensitivity to neutralizing antibodies induced by triple-dose inactivated vaccines. Reports indicate that neutralizing activity against Omicron variants is lost in 90% of immunization serum samples and 43% of convalescent serum samples. In contrast to pre-Omicron variants, which primarily exploit TMPRSS2 for cell entry (Hoffmann et al., 2020b), Omicron variants have a propensity for entering nose and throat cells deficient in TMPRSS2 via the cathepsin-mediated endosomal pathway (Hui et al., 2022; Meng et al., 2022; Willett et al., 2022; Zhao et al., 2022).
This shift in cell entry tropism from the membrane pathway to the endosomal pathway reduces the capacity of Omicron to fuse infected cells and form syncytia, resulting in lower pathogenicity (Willett et al., 2022). Omicron's propensity to infect the upper respiratory tract limits its clinical manifestations and lowers disease severity. From a structural standpoint, compared with Delta, Omicron has an inconsistent distribution of electrostatic potential and a geometric reorganization in the FCS of the spike protein. This structural divergence contributes to Omicron's reduced fusogenicity and, consequently, lower pathogenicity (Fantini et al., 2022). Moreover, the Omicron variant possesses an enhanced capacity for immune evasion, leading to reinfection of individuals (Chavda et al., 2022; Xia et al., 2022). For pre-Omicron variants, infection-induced protective immunity has limited efficacy against BA.4 and BA.5, but it demonstrates a strong effect in preventing reinfection by BA.1 and BA.2 (Altarawneh et al., 2022). Notably, the combinatorial mutations in the spike protein appear to have a synergistic effect on the characteristics of Omicron, further complicating its mutation profile. Preliminary findings suggest that certain mutations in Omicron form three distinct clusters, wherein the mutations seem to work in concert to compensate for the detrimental effects of any individual mutation. Two mutations, N501Y and Q498R, collectively increase the affinity of a variant for the ACE2 receptor by nearly 20-fold (Bate et al., 2022).

BA.2.75
Differing from BA.2, BA.2.75 carries 9 additional mutations in the spike protein (147E, W152R, F157L, I210V, G257S, D339H, G446S, N460K, and an R493Q reversion mutation) (Sheward et al., 2022; Kurhade et al., 2023; Qu et al., 2023). BA.2.75 exhibits enhanced resistance to neutralization compared to BA.2 but falls short of the BA.4/5 variant (Saito et al., 2022; Cao et al., 2022b). The G446S and N460K mutations are primarily responsible for the increased resistance of BA.2.75 to neutralizing antibodies (Wang et al., 2022a), while the R493Q mutation reduces neutralization resistance. Furthermore, the spike protein of BA.2.75 demonstrates significantly higher affinity for ACE2, and the N460K mutation enhances S processing.

BA.4.6
BA.4.6, a sublineage of BA.4, carries two additional mutations in the spike protein (R346T and N658S) and was initially identified in the US and UK (Hachmann et al., 2022). This subvariant exhibits a notable ability to evade neutralizing antibodies induced by infection or vaccination, with titers lower than those of BA.5 by a factor of 2 to 2.7 (Hachmann et al., 2022; Wang et al., 2022b; Planas et al., 2023).

BF.7
The BF.7 variant (also known as BA.5.2.1.7) is a derivative of BA.5 that has gained attention since the beginning of 2022, particularly in Asia (Kelleni, 2023; Pan et al., 2023; Scarpa et al., 2023a). Compared to BA.5, BF.7 carries an additional R346T mutation in the RBD and shares an identical N-terminal domain (NTD) (Scarpa et al., 2023a). The R346T mutation has been associated with an enhanced ability of the virus to evade neutralizing antibodies generated by vaccines or previous infection (Akif et al., 2023). However, R346T does not greatly increase the affinity of BF.7 for ACE2 (Scarpa et al., 2023a).
Although it shows enhanced resistance to neutralization, BF.7 appears to be less virulent, with a low evolutionary rate of 5.62 × 10⁻⁴ substitutions/site/year compared with other Omicron subvariants (Scarpa et al., 2023a).

CH.1.1
CH.1.1 carries several additional mutations (including K444T, L452R, and F486S) in the RBD of the spike protein. CH.1.1 does not pose a significant threat to pandemic control. Antiviral drugs (remdesivir, molnupiravir, nirmatrelvir, and ensitrelvir) remain effective against CH.1.1, and an additional dose of bivalent mRNA vaccines may be beneficial in preventing CH.1.1 infection.

3.2.6. XBB and XBB.1.5
The XBB variant carries 9 additional changes in the RBD and 5 additional changes in the NTD compared to its progenitor BA.2 (Imai et al., 2023). The R346 position is a critical mutation site (harboring R346T/S/I) that leads to increased immune evasion from neutralizing antibodies (Cao et al., 2021). Similar to BQ.1 and BQ.1.1, the XBB lineage exhibits an exceptionally strong ability to evade antibodies. BQ and XBB subvariants have rendered all authorized antibodies ineffective, with titers against BQ and XBB significantly lower (Chakraborty et al., 2023). A cohort study in Singapore revealed that protection against XBB reinfection was lower and waned more rapidly than protection against BA.4 or BA.5 reinfection in previously vaccinated Omicron-infected individuals (Tan et al., 2023), further indicating greater immune evasion by XBB. XBB.1.5 has a substantial growth advantage over BQ.1.1 and XBB.1, and became the predominant strain in the US by January 2023.

(Figure 4: Illustration of RBD conformations of the spike protein complexed with ACE2 receptors. There are two RBD conformations, "up" and "down"; when the RBD is in the "up" conformation, the spike protein is open to the ACE2 receptor. The trimeric spike protein is indicated by chains in three colors (purple, green, and blue), and three ACE2 receptors are indicated in yellow, gray, and pink. The complexes are from RCSB.org (7KNE, 7KNH, and 7KNI for one, two, and three "up" RBDs, respectively).)

Spike mutations in RBD conformation
SARS-CoV-2 infection is partially controlled by the conformation of the spike protein RBD. The RBD, located in the S1 subunit of the extracellular domain of the spike, is responsible for interacting with ACE2 receptors and has been shown to be an important molecular determinant of the COVID-19 pandemic (Shang et al., 2020). The RBD exists in two different conformations: up for receptor binding and down for immune evasion. Accordingly, spikes are also in open and closed conformations. Compared with the closed-form spike protein, an open form with an up RBD conformation leads to infection more rapidly (Yin et al., 2022) and binds antibodies more easily (Berger and Schaffitzel, 2020; Yin et al., 2022). Figure 4 illustrates the different up and down conformations of the spike protein complexed with ACE2 receptors. In the early phase of the pandemic, the D614G substitution adjacent to the NTD subdomain led to more open, and thus receptor-accessible, conformations of the spike compared with the wild type (Benton et al., 2021; Gobeil et al., 2021; Mansbach et al., 2021; Zhang et al., 2021a). The D614G substitution conferred on the virus an adaptation advantage and higher transmissibility, facilitating the acquisition of further mutations and the formation of the variants of concern (Korber et al., 2020; Zhang et al., 2020; Plante et al., 2021).
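The up/down picture above invites a toy calculation of how per-RBD dynamics translate into spike "openness." The sketch below assumes the three protomers flip independently, which is a deliberate simplification (real RBD motions are coupled); it is meant only to illustrate why even a modest per-RBD "up" probability yields a substantial fraction of receptor-accessible trimers.

```python
# Toy relation between the per-RBD "up" probability and the chance that a
# trimeric spike is receptor-accessible (at least one RBD up), assuming the
# three protomers flip independently. Illustrative only.
def p_open(p_up: float, n_rbd: int = 3) -> float:
    return 1.0 - (1.0 - p_up) ** n_rbd

for p in (0.1, 0.3, 0.5):
    print(f"P(up per RBD) = {p:.1f} -> P(spike open) = {p_open(p):.3f}")
```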
It is shown that the conformations of the Alpha, Beta, and Delta spikes are predominantly open and that binding of ACE2 increases membrane fusion (Calvaresi et al., 2023). In contrast, the substitutions of the Omicron spike result in a predominantly closed conformation that may allow it to evade antibodies (Calvaresi et al., 2023). Other studies show that the mutations in the RBD of Omicron may promote a conformational change from "down" to "up" and thus increase engagement of ACE2 (Hossen et al., 2022; Ye et al., 2022). This may be due to mutations that reduce the protein-protein interaction affinity of the RBD with its neighboring domains (Singh et al., 2022). Glycosylation is another way to affect the RBD conformation and thus change the spike open state. The SARS-CoV-2 spike gene encodes 22 N-linked glycan sequons per protomer, and the trimeric spike protein displays 66 N-linked glycosylation sites. The glycosylated spike has a higher barrier to opening and energetically favors the down state over the up state (Pang et al., 2022). Inhibition of protein N-glycosylation has been shown to block SARS-CoV-2 infection (Casas-Sanchez et al., 2021). The glycosylation sites also facilitate immune evasion by shielding specific epitopes from antibody neutralization (Watanabe et al., 2019). It has been observed that proximal glycosylation sites (N165, N234, and N343) shield the receptor-binding sites on the SARS-CoV-2 spike, especially when the RBD is in the "down" conformation (Watanabe et al., 2020). Sztain et al. (2021) revealed that the N-glycan at position N343 facilitates RBD opening and plays a gating role in the spike protein open state. Although the spike surface is substantially shielded by N-glycans, it presents regions that are vulnerable to neutralizing antibodies, such as in the RBM, NTD, and S2 subunit (Chi et al., 2020; Tortorici et al., 2020; Cerutti et al., 2021). Mutations in the spike may affect glycosylation: for example, P681H and P681R, found in Alpha and Delta respectively, decreased O-glycosylation, which potentially increases furin cleavage and may influence viral infectivity (Zhang et al., 2021c).

Recombinant mutations complement variants with new properties
Recombination, a frequently observed evolutionary mechanism in coronaviruses, plays a significant role in the genetic diversity and evolution of these viruses. For example, lineage 5 of Middle East respiratory syndrome coronavirus (MERS-CoV), which caused the MERS-CoV outbreak in South Korea and mass infections in Saudi Arabia in 2015, is putatively a recombinant virus of groups 3 and 5 of clade B, or lineages 3 and 4 (Wang et al., 2015; Sabir et al., 2016). The measurement of recombination versus de novo mutation (R/M) provides insight into the relative impact of these two sources of variation (Patiño-Galindo et al., 2021). In SARS-CoV-2, the R/M ratio is 0.00264 (Turakhia et al., 2022), while in MERS it is estimated to be 0.25-0.31 (Patiño-Galindo et al., 2021), indicating a low level of recombination in the early stage of the SARS-CoV-2 pandemic. However, as co-infections and mutation accumulation increase within the population, recombination is expected to play a more prominent role in generating functional genetic diversity (Kim et al., 2020).

Co-circulation of variants provides a basis for recombination
Recombination occurs when genetically distinct SARS-CoV-2 variants co-infect the same host during co-circulation (Figure 5A).
This process leads to the emergence of recombinant viruses with new properties, such as increased transmissibility or virulence (Li et al., 2020a). Recombination occurs frequently in the later phase of a pandemic (Varabyou et al., 2021). Turakhia et al. (2022) developed a method called Recombination Inference using Phylogenetic PLacEmentS (RIPPLES) to detect recombination in pandemic-scale phylogenies. By analyzing a 1.6-million-sample tree, they identified 589 recombination events, indicating that approximately 2.7% of sequenced SARS-CoV-2 genomes have detectable recombinant ancestry (Turakhia et al., 2022). The distribution of recombination breakpoints across the SARS-CoV-2 genome is not uniform, with a higher incidence toward the 3' end than the 5' end, consistent with previous analyses of other human coronaviruses (Patiño-Galindo et al., 2021; Müller et al., 2022). Recombination events often lead to genetic alterations near the breakpoints, and the specific breakpoints vary across the genome (Bolze et al., 2022). For example, a recombinant virus containing genetic material from the Alpha (B.1.1.7) and Epsilon (B.1.429) variants was detected in New York, with recombinant mutations found in the spike, nucleocapsid, and ORF8 coding regions (Wertheim et al., 2022). In the US, nine recombination events between the Delta (AY.119.2) and Omicron (BA.1.1) variants have been reported, with the breakpoint located between the NTD and RBD of the spike protein (Lacek et al., 2022a). These recombinants can produce hybridized spike proteins containing characteristic amino acids from both Delta and Omicron (Lacek et al., 2022a). The co-circulation of different variants highlights the importance of ongoing genomic surveillance, with particular attention to recombinants (Jackson et al., 2021a). Figure 5B illustrates different patterns of recombination.

Co-infection in the immunocompromised population accelerates recombination
Co-infection is common in the later phase of the pandemic. For example, a 17-year-old Portuguese female was reported to be co-infected with two SARS-CoV-2 lineages belonging to distinct clades, differing by six variants (Pedro et al., 2021). Similar co-infection events have been observed, such as B.1.1.28 co-infecting with either B.1.1.248 or B.1.91 lineages (da Silva Francisco et al., 2021), and GH co-infecting with GR clades (Samoilov et al., 2021). In the US, out of 29,719 SARS-CoV-2-positive samples sequenced from November 2021 to February 2022, 20 co-infections were identified (Lacek et al., 2022b). In Brazil, nine co-infection events (0.61%) were identified in samples investigated from May 2020 to April 2021, although this is likely an underestimate due to sample limitations. Recombination has been found to occur more frequently in immunodeficient individuals at high risk of severe disease (Perez-Florido et al., 2023).

(Figure 3: Representation of the SARS-CoV-2 spike protein, showing amino acid mutations in the VOCs Alpha, Beta, Gamma, Delta, and Omicron. Amino acid mutations are colored orange for Alpha, yellow for Beta, purple for Gamma, green for Delta, red for Omicron, and blue for mutations shared by two or more VOCs. The spike protein structure complexed with the ACE2 receptor is from RCSB.org (7KNE). The VOC mutations are based on data from covariants (https://covariants.org; 20I for Alpha, 20H for Beta, 20J for Gamma, 21A for Delta, and 21L for Omicron).)
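Breakpoint detection of the kind discussed above can be illustrated with a minimal single-breakpoint search over aligned toy sequences. Real tools such as RIPPLES operate on pandemic-scale phylogenies rather than raw pairwise comparisons, so this is a conceptual sketch only.

```python
# Minimal single-breakpoint inference: given aligned parent sequences A and B
# and a putative recombinant child, choose the crossover position k that
# minimizes mismatches when the child is read as A[:k] + B[k:].
def best_breakpoint(parent_a: str, parent_b: str, child: str) -> int:
    n = len(child)
    best_k, best_mm = 0, n + 1
    for k in range(n + 1):
        mm = sum(c != a for c, a in zip(child[:k], parent_a[:k]))
        mm += sum(c != b for c, b in zip(child[k:], parent_b[k:]))
        if mm < best_mm:
            best_k, best_mm = k, mm
    return best_k

A = "AAAAAAAAAA"
B = "CCCCCCCCCC"
child = "AAAACCCCCC"                  # toy recombinant, crossover after position 4
print(best_breakpoint(A, B, child))   # 4
```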
Immunodeficient individuals are considered incubators for punctuated evolutionary events, possibly due to their vulnerability to chronic and co-infections (Rockett et al., 2022). For instance, a recombinant variant of B.1.160 and Alpha was isolated from a patient with lymphoma who was chronically infected for 14 months. The patient was initially infected with B.1.160, followed by concurrent Alpha infection, and eventually the recombinant variant emerged (Burel et al., 2022).

Intra-variant recombination in Omicron major subvariants
Recombination occurs in the five major sublineages of Omicron. BA.1, a descendant lineage of B.1.1, is phylogenetically distinct from the other VOCs and VOIs. It caused the fourth epidemic wave in South Africa (Lino et al., 2022; Saxena et al., 2022; Tian et al., 2022). Spike gene sequencing reveals that the BA.1 subvariant shares nine common amino acid mutations with most VOCs in the spike protein (three more than BA.2) (Ou et al., 2022; Tian et al., 2022), suggesting that Omicron may derive from a recombinant origin involving these VOCs. Three additional Alpha-associated mutations (Del 69, Del 70, and Del Y144) are found in BA.1 but not in BA.2, as BA.1 is phylogenetically closer to Alpha than the other variants (Kumar et al., 2021; Ou et al., 2022). Reverse mutations were also found among some dominant mutations (frequency > 95%) in BA.1 (Ou et al., 2022). Taken together, these observations support a role for Alpha in Omicron evolution. Along with BA.1, BA.2 and BA.3 were also isolated in South Africa. BA.2 has caused increased global infection, hospitalization, and mortality (Fonager et al., 2022; Rahimi and Bezmin Abadi, 2022a).

(Figure 5: Illustration of recombination in co-infected cells and different recombination patterns. (A) When different variants co-infect an individual, recombinant variants with altered properties can emerge. (B) BA.3 is putatively a recombinant of BA.1 and BA.2, with the breakpoint probably lying in the spike protein-coding gene. BA.4 is putatively a recombinant of BA.2 and BA.5, with the breakpoint probably lying in the M protein-coding gene. XD and XF are recombinants of Delta and BA.1, with breakpoints in the spike protein-coding gene/ORF3a and the NSP3 protein-coding gene, respectively. XE is a recombinant of BA.1 and BA.2, with the breakpoint in the NSP6 protein-coding gene. XBB.1.5 is a recombinant of BJ.1 and BA.2.75, with the breakpoint probably lying in the S1 subunit of the spike protein-coding gene. M, membrane protein; ORF3a, open reading frame 3a; NSP, non-structural protein.)

BA.3 is likely a recombinant derivative of BA.1 and BA.2, because BA.3 has a genome similar to BA.1 and BA.2 in the NTD region of the spike protein (Viana et al., 2022). A study revealed that BA.3 shares its main mutations with BA.1 and BA.2 and seems to have originated later, thus to some extent corroborating the possibility of recombination. BA.4 and BA.5 were subsequently identified as Omicron lineages in South Africa. They are estimated to have originated in mid-December 2021 and early January 2022, respectively (Viana et al., 2022). Their most recent common ancestor is estimated to have originated in mid-November 2021, coinciding with the emergence of BA.2. It is worth noting that BA.4 and BA.5 are genomically close to BA.2, and both have spike proteins similar to BA.2. It is estimated that BA.4 and BA.5 likely evolved independently from the common ancestry of the BA.2 subvariant.
Compared with BA.2, BA.4 and BA.5 carry the extra mutations Del 69-70, L452R, and F486V, plus the wild-type amino acid at position Q493 (Ou et al., 2022). BA.4 and BA.5 share mutational profiles from the 5'-UTR to the envelope protein but differ distinctly from the membrane protein to the 3'-UTR. This mutation pattern suggests a breakpoint between E and M, which is possible evidence of a recombination event.

Inter-variant recombination between Delta and Omicron
Recombination events raised more concern when Omicron quickly outcompeted the Delta pandemic. Co-circulation of Delta and Omicron provided a solid basis for recombinant variants, and there is growing concern that this recombination potential could eventually result in mutations that confer enhanced transmissibility and immune escape properties on the virus. On January 7, 2022, scientists detected a Delta-Omicron recombinant genome and informally named it "Deltacron" (Kreier, 2022); however, it was later determined to be laboratory contamination (Kreier, 2022). On March 9, the WHO declared the detection of such recombinants in different regions around the world and designated this Deltacron as a VUM (Farheen et al., 2022; Maulud et al., 2022). Generally, Deltacron refers to the AY.4/BA.1 recombinant, named XD, which consists of a full-length spike protein of Omicron on a backbone of Delta (Mahase, 2022; Wang C. et al., 2022). According to the Chinese Center for Disease Control and Prevention, of the 36 amino acid mutations found in the spike protein, 27 are present in BA.1 and 5 in AY.4, while 4 are present in both (Wang and Gao, 2022). Structural analysis of the Deltacron recombinant spike suggests that its hybrid content optimizes viral binding to the host cell membrane (Colson et al., 2022a,b). Consequently, this novel recombined virus shows increased transmission (Chakraborty et al., 2022; Hosch et al., 2022). The Deltacron recombinant also has the potential to escape neutralization by monoclonal antibodies: although Delta (AY.45) and BA.1 are sensitive to Sotrovimab neutralization, an AY.45-BA.1 recombinant, with its breakpoint located adjacent to the Sotrovimab binding site, is resistant (Duerr et al., 2023). Deltacron shows higher transmissibility but lower clinical severity (Moisan et al., 2022). As recombination did not emerge on a large scale or show its power until the appearance of Deltacron, the advent of Deltacron is regarded as a "gray rhino" event rather than a "black swan" event. Besides Deltacron (the recombinant of AY.4 and BA.1, also known as XD), the UK Health Security Agency recognized two similar recombinants, XE and XF (Chakraborty et al., 2022). The XE recombinant contains genomic elements from the Omicron BA.1 and BA.2 subvariants (Rahimi and Bezmin Abadi, 2022b). The breakpoint lies in the NSP6 protein-coding region of the genome, with 11,537 bp of the BA.1 genome before the break site and the BA.2 genome after it (Chakraborty et al., 2022). XE appears to be roughly 10% more transmissible than its parent variant BA.2 (Basky and Vogel, 2022). The XF variant contains the genomes of NSP1 to NSP3 from the Delta variant, with the breakpoint at site 5,386 and the rest of the genome from the Omicron BA.1 variant (Chakraborty et al., 2022). XBB, nicknamed Gryphon, is the most recent recombinant and is regarded as the first observed SARS-CoV-2 variant to increase its fitness through recombination rather than substitutions.
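The XE and XF descriptions above amount to a piecewise mapping from genome coordinate to parental lineage. The sketch below encodes that mapping under a single-breakpoint approximation, using the coordinates quoted in the text (treated here as approximate and not independently verified).

```python
# Piecewise genome-coordinate -> parental-lineage mapping for the
# recombinants described above, under a single-breakpoint approximation.
# XE: BA.1 up to ~11,537, then BA.2; XF: Delta up to ~5,386, then BA.1.
RECOMBINANTS = {
    "XE": [(11_537, "BA.1"), (float("inf"), "BA.2")],
    "XF": [(5_386, "Delta"), (float("inf"), "BA.1")],
}

def parent_at(recombinant: str, position: int) -> str:
    """Return the parental lineage contributing the given genome position."""
    for end, lineage in RECOMBINANTS[recombinant]:
        if position <= end:
            return lineage
    raise ValueError("position beyond genome")

print(parent_at("XE", 10_000))   # BA.1
print(parent_at("XE", 20_000))   # BA.2
print(parent_at("XF", 25_000))   # BA.1
```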
XBB derives from two BA.2 sublineages: BJ.1 (BA.2.10.1) and BM.1.1.1 (BA.2.75) (Scarpa et al., 2023b). XBB and its first descendant XBB.1 are both evolutionarily close to BA.2 genomes (Scarpa et al., 2023b), suggesting that BA.2 acts as their progenitor. The breakpoint lies between positions 22,901 and 22,939, in the middle of the RBD (Scarpa et al., 2023b). This mutation profile likely contributes to the greater immune evasion capabilities of XBB compared with the earlier Omicron variant BA.2 (Imai et al., 2023). The pathogenicity of XBB.1 is comparable to, or even lower than, that of BA.2.75. Although XBB subvariants exhibit enhanced fusogenicity and substantial immune evasion in the elderly population, the fusion inhibitors EK1 and EK1C4 can potently block XBB or XBB.1.5 spike protein-mediated fusion and viral entry (Xia et al., 2023a).

Overall characteristics of emerging recombinants
As a whole, the novel recombinant subvariants demonstrate a higher transmission rate and relatively greater resistance to antibodies compared with earlier variants (Brandolini et al., 2023; Faraone et al., 2023). In January 2023, there was a rapid increase in the prevalence of XBB.1.5 in the United States (Callaway, 2023). According to the World Health Organization (WHO), XBB.1.5 accounted for 23-86% of circulating variants throughout the country (XBB.1.5 Updated Risk Assessment, 24 February 2023). However, these recombinant variants do not significantly increase the severity of disease or cause clinical exacerbation (Karyakarte et al., 2023). XBB.1.5 does not carry mutations associated with potential changes in pathogenicity, such as P681R (Mlcochova et al., 2021; Saito et al., 2021). It is important to note that most vaccines are developed based on the spike protein, and the emergence of recombinant variants may pose a risk of vaccine failure. Therefore, it is crucial to consider potential new subvariants in the development of novel strategic vaccines.

Outlook for SARS-CoV-2 evolution and interventional strategies
Various factors drive viral evolution (Moelling, 2021), including RNA polymerase exchanging accuracy for efficiency (Yewdell, 2021), the selective pressures exerted by the host immune system (Milne et al., 2021; Thorne et al., 2021), chronic infection in other species followed by spillover to humans (Hale et al., 2022; Marques et al., 2022), and prolonged co-infection in immunodeficient hosts (Ou et al., 2022; Rockett et al., 2022). These factors contribute to the mutation-selection-adaptation process of SARS-CoV-2 evolution. Continuous evolution of SARS-CoV-2 has led to the rapid and simultaneous emergence of multiple variants that exhibit a growth advantage over previously circulating variants (Wolf et al., 2022). During the evolution of SARS-CoV-2, the spike gene is the only gene under strong positive selection, while other genes show only weak or temporary positive selection (Lu et al., 2023); thus, spike mutations contribute substantially to its evolution. The mutational process is dynamic, and the mutation spectrum of SARS-CoV-2 may come to resemble more closely that of other animal sarbecoviruses (Bloom et al., 2022). Here we propose several interventional strategies.

1. Genomic surveillance of SARS-CoV-2, specifically of the spike gene and genomic recombination, is of utmost importance in recognizing its evolutionary trend. Efforts have been made to promote genomic monitoring. Dadonaite et al.
(2023) developed a novel deep mutational scanning (DMS) platform for mapping the effects of spike protein mutations on immune evasion and viral infectivity (Xia et al., 2023b). Saldivar-Espinoza et al. (2023) developed a SARS-CoV-2 Mutation Portal that provides access to a database of SARS-CoV-2 mutations. Sathyaseelan et al. (2023) developed CoVe-tracker (SARS-CoV-2 evolution tracker; https://project.iith.ac.in/cove-tracker/) for quick surveillance of newly emerging mutations, variants, and lineages, to facilitate the understanding of viral evolution, transmission, and disease epidemiology. Huang et al. (2023) developed a genomic surveillance framework and a dynamic community-based variant dictionary tree, which enables early detection and continuous investigation of SARS-CoV-2 variants. Outbreak.info is a platform for scalable and dynamic surveillance of SARS-CoV-2 variants and mutations, relying on shared virus sequences from the GISAID Initiative (Gangavarapu et al., 2023; Tsueng et al., 2023).

2. As the recently expanding Omicron subvariants are capable of immune evasion from most existing neutralizing antibodies, it is imperative to explore broad-spectrum antivirals to combat the emerging variants. Resistance to monoclonal antibody neutralization is dominated by single amino acid substitutions in spike protein epitopes (Cox et al., 2022). Currently, most therapeutic neutralizing antibodies and promising vaccine candidates are designed to target the RBD or use the RBD as the sole antigen (Yang et al., 2020a, 2021; Dai et al., 2022; Han et al., 2022). A novel group of neutralizing antibodies and vaccines targeting the S2 subunit of the spike [such as the fusion peptide (FP), heptad repeats 1 and 2 (HR1-HR2), and the stem helix (SH)] may become the next generation of therapeutic strategies. For example, COV44-62 and COV44-79 were identified as anti-FP antibodies and showed considerable neutralizing capacity (Dacon et al., 2022).

3. Strategies should be implemented to prevent long-term SARS-CoV-2 infection and to limit the spread of emerging, neutralization-resistant variants in immunocompromised patients (Gonzalez-Reiche et al., 2023). The evolutionary rate of SARS-CoV-2 in chronically infected individuals has been found to be twofold higher than that around the globe (Chaguza et al., 2023). This persistent intrahost evolution may accelerate antigenic alteration and lead to the emergence of genetically distinct subvariants (Smith and Ashby, 2022; Ahmadi et al., 2023; Chaguza et al., 2023). Bendall et al. (2023) observed a tight transmission bottleneck that would limit the development of highly mutated VOCs in the transmission chain of acutely infected individuals, further suggesting that selection during long-term infection in immunocompromised patients may drive SARS-CoV-2 VOC evolution (Braun et al., 2021; Wilkinson et al., 2022). Surveillance by sequencing is recommended for (i) patients carrying SARS-CoV-2, (ii) patients suspected of reinfection, and (iii) patients who are immunocompromised (Landis et al., 2023).

4. Vaccination of large populations is a valuable measure for decreasing mortality. However, vaccination alone cannot slow the pace of viral evolution toward immune evasion, and therefore vaccine protection against severe and fatal outcomes for COVID-19 patients may not be assured (Van Egeren et al., 2023). Current herd immunity and BA.5 vaccine boosters may not efficiently prevent infection by Omicron convergent variants (Cao et al., 2022a).
However, this may result from the decreased pathogenicity that these mutations confer on SARS-CoV-2, and vaccination against SARS-CoV-2 still efficiently decreases the case fatality rate.

Summary and conclusion
In the process of SARS-CoV-2 evolution, external and internal pressures drive the selection of randomly occurring mutations, with the retention of favorable mutations leading to adaptation. SARS-CoV-2 exhibits an evolutionary trajectory characterized by increased transmissibility, reduced virulence, and enhanced immune escape, enabling its long-term persistence within the population. The mutation patterns observed in pre-Omicron variants primarily manifest at recurrent amino acid sites within the spike protein, affecting the RBD conformation and glycosylation sites and consequently altering antigenicity. The emergence of Omicron, however, introduced a multitude of novel mutations, resulting in a substantial increase in transmissibility and immune evasion. Remarkably, the severity and clinical manifestations in patients did not escalate further, mainly owing to Omicron's tropism for the upper respiratory tract. These changes observed in Omicron are attributed to ongoing viral evolution. The appearance of the recombinant variant XBB and its subsequent descendants since August 2022 likely stems from the co-circulation of multiple variants and co-infection in immunocompromised patients during the later stage of the pandemic. Although novel recombinant variants such as XBB.1.5 and XBB.1.16 demonstrate a considerable transmission advantage and outcompete their predecessors, they do not exhibit a significant increase in disease severity and display relatively moderate antibody escape. Although SARS-CoV-2 is no longer regarded as a Public Health Emergency of International Concern, its evolution persists. We strongly recommend enhanced surveillance of the viral genome, particularly in immunocompromised patients, the development of therapeutics targeting domains beyond the RBD, and the promotion of widespread vaccination.

Author contributions
GC and LF: conceptualization. LF, JX, YZ, JF, and JS: data collection. LF, JX, and GC: writing-original draft preparation. LF, JX, GC, YZ, JF, WL, and JS: writing-review and editing. All authors have read and agreed to the published version of the manuscript.

Funding
This work was funded by the National Natural Science Foundation of China (82041022) and the Shanghai Commission of Science and Technology (20JC1410200 and 20431900404).

Acknowledgments
The authors acknowledge the use of Biorender.com to create Figures 1, 2, and 5.

Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Incubation environment impacts the social cognition of adult lizards

Recent work exploring the relationship between early environmental conditions and cognition has shown that incubation environment can influence both brain anatomy and performance in simple operant tasks in young lizards. It is currently unknown how it impacts other, potentially more sophisticated, cognitive processes. Social-cognitive abilities, such as gaze following and social learning, are thought to be highly adaptive as they provide a short-cut to acquiring new information. Here, we investigated whether egg incubation temperature influenced two aspects of social cognition, gaze following and social learning, in adult reptiles (Pogona vitticeps). Incubation temperature did not influence the gaze following ability of the bearded dragons; however, lizards incubated at colder temperatures were quicker at learning a social task and faster at completing that task. These results are the first to show that egg incubation temperature influences the social cognitive abilities of an oviparous reptile species and that it does so differentially depending on the task. Further, the results show that the effect of incubation environment was not ephemeral but lasted long into adulthood. It could thus have potential long-term effects on fitness.

Introduction

Environmental change is increasingly impacting habitats worldwide, creating novel challenges for the animals living there [1]. Genetic adaptation can be slow and, therefore, one of the first responses that an animal can make in the face of environmental change is behavioural [2]. Cognitive abilities are likely to play a major role in behavioural adaptation as they influence how an animal perceives, stores and uses information from the surrounding environment [3]. Reptiles are especially interesting in this context because of their dependence on behavioural thermoregulation [4] and their pattern of reproduction, which is reliant on environmental sources of heat for the maintenance of embryonic development. Viable incubation temperatures for reptiles are wide-ranging and are known to impact upon many aspects of offspring phenotype, inter alia growth rate, sex determination and many aspects of behaviour (for a review see [5]).
Recent work exploring the relationship between early environmental conditions and cognition has shown evidence that incubation temperature can influence brain anatomy [6] and performance in various learning tasks [7][8][9] in young Eastern three-lined skinks (Bassiana duperreyi). However, it is currently unknown whether incubation temperature impacts upon other aspects of cognition, and for how long it might influence cognitive processes. Social cognition encompasses all cognitive processes involved in acquiring knowledge from another individual and is typically studied in two categories, social intelligence and social learning [10]. One classic test of social intelligence, i.e. intelligence applied to the social world [11], is the use of gaze following, which refers to an animal's ability to follow the direction of another individual's gaze [12]. Such a skill is considered highly adaptive as it can alert the observer to essential information, such as the presence of a food source or predator. Two different modes of gaze following are typically observed. The first, gaze following into distant space, is taxonomically widespread [12][13][14][15] and is likely to be controlled by a socially facilitated orienting response that can be modified through experience [13]. By contrast, geometric gaze following, which requires following gaze behind a visual barrier, is considered more complex as it entails an assessment of the difference in visual perception between the cue-giver and the observer [14]. Thus, investigating two modes of gaze following will provide further insight into the level of complexity at which egg incubation temperature may influence social cognition. Gaze following into the distance has been demonstrated in the red-footed tortoise [15], while geometric gaze following has never been investigated in reptiles and has only been observed in primates, corvids and canids [16][17][18][19][20][21]. Social learning, in its broadest sense, can be considered as 'learning that is influenced by observations of, or interaction with, another animal (typically a conspecific) or its products' [22]. Social learning is thought to be adaptive as it offers an individual a short-cut to novel information [23][24], such as potential resources [25], and can aid in learning novel foraging techniques [26]. Social learning is thought to be particularly advantageous when the costs of asocial learning are high [27], such as in areas of high predation or dwindling resources (e.g. [28]). Social learning is also positively correlated with asocial learning abilities [24] and it has been suggested that they are controlled by the same mechanisms [29][30][31]. Egg incubation temperature is known to influence associative learning in oviparous reptiles [6][7][8][9][32] and thus it is possible that these effects could extend to social learning abilities. The long-term effects of egg incubation temperature on cognitive traits in oviparous reptiles are currently unknown. Typically, experiments use animals that are a few days or a few weeks old [6][7][8][9][32]. However, it remains unclear how long these differences last. In the only study to investigate this so far, we observed differences in the development of behavioural traits in bearded dragons (Pogona vitticeps) as a result of incubation temperature, with animals from the hot group initially appearing bolder than those in the cold group. However, these differences did not last into adulthood [33].
It is, therefore, essential to investigate the long-term impact of incubation environment on offspring phenotype. Incubation temperature is known to influence the development of behavioural traits and the learning ability of oviparous reptiles [6][7][8][9][32][33], traits that are intrinsically linked to social cognition [3]. Thus, to begin to explore the association between incubation environment and social cognition, this study tested the impact that incubation environment has on gaze following into the distance, geometric gaze following and the social learning abilities of adult bearded dragons.

Material and methods

Thirteen eggs were randomly assigned to two incubation conditions, the 'hot group' (n = 7) incubated at an average temperature of 30 ± 3°C, and the 'cold group' (n = 6) incubated at an average temperature of 27 ± 3°C (fully described in [33]). The eggs were incubated in multiple plastic boxes with a vermiculite substrate and kept moist. Bearded dragons do not have temperature-dependent sex determination when incubated at optimal temperatures [34] and hence, after incubation, we had an even split of sexes between incubation groups (hot group: four males and three females; cold group: three males, three females). Once hatched, the animals were housed in similar environments and maintained under standard conditions. Bearded dragons were socially housed in heated vivariums (145 × 48 × 60 cm) with conspecifics from the same incubation temperature regime. The average temperature of the room was maintained at 29°C and all lizards received the same feeding regime. All bearded dragons had experimental experience [33] but had no previous experience with video experiments. The animals were at least one year old at the time of testing for both experiments (see below) and were all considered to be sexually mature.

Gaze following

To assess gaze following, the lizards were placed facing towards a computer monitor within a familiar arena containing a barrier on one side (figure 1a). The experimental set-up consisted of a square arena measuring 73 × 73 cm with 19 cm high walls. A visual barrier (41 × 16 cm) was placed so that the lizard could see the screen but the barrier obstructed the lizard's view of one side of the arena. A computer monitor positioned at one end of the arena was used to present video stimuli to the observer animal (figure 1a). Bearded dragons have been shown previously to respond to videos of conspecifics [35]. Prior to the onset of the experiment, all lizards were habituated to the arena. Each habituation trial lasted 10 minutes. During this time, the lizards had access to the entire arena, including the barrier. Mealworms were placed inside the arena and the animals were considered habituated if they readily explored and ate all the food on two consecutive trials. All the lizards were habituated within two trials. During the experimental trials, the observer animal was placed facing towards the screen. If the observer was not looking at the screen (classed as at least one eye facing towards the screen) or the lizard moved off before the video was played, then the trial was terminated and repeated on another day. Exactly 5 s after the bearded dragon had been placed in the arena, a video was presented. The video footage, described in more detail below, showed an unfamiliar female bearded dragon doing one of four things: looking up, looking to the side, looking to the side behind a barrier, or looking straight ahead (the control).
The gaze movements were recorded by presenting a favoured food in a specific position (above or to the side) relative to the demonstrator bearded dragon. The average length of the video clips was 2.33 ± 1.23 s and a different video was used for each trial. Once the clip was finished, the demonstrator bearded dragon remained on the screen and continued facing in the same direction. All trials were recorded on a video camera (Sony HDR-CX22OE) and were analysed using VLC media player.

Look up into distant space

The observer lizard was considered to be looking up if the dragon looked up by either extending its head and neck upwards or by turning its head so one eye was directed upwards in the 5 s following the stimulus presentation.

Look sideways into distant space

Each video showed a demonstrator lizard, positioned in the centre of the screen, looking to the side by moving its head either to the left or the right (direction counterbalanced across animals) without tilting its head upward. The observer lizard was considered to be looking sideways if its head moved sideways in either direction in the 5 s following the stimulus presentation. Bearded dragons have binocular vision, meaning that in order to follow the gaze of a conspecific the lizard could move its head to the side in either direction.

Looking sideways behind the barrier

Each video showed a demonstrator lizard looking sideways behind a barrier. This meant that the observer lizard had to reposition itself in order to follow the gaze of the demonstrator lizard. The lizard was considered to have moved past the barrier if it moved so that its head was positioned so that it could clearly follow the gaze of the demonstrator lizard, or if it climbed over the barrier, within the 1 min trial time. The video stimuli used in this condition presented identical information to those used in the look sideways into distant space condition; as such, the specific videos used for the two conditions were counterbalanced across subjects. The side was also counterbalanced (and the barrier was moved accordingly) across subjects.

Control: stationary demonstrator

To control for the influence of the presence of a conspecific, we included a condition in which the video showed a demonstrator lizard facing towards the observer without shifting its gaze direction. When presented with the control condition, observers were expected to show fewer shifts in their gaze in the 5 s following stimulus presentation and to move around the barrier less often than in the geometric gaze following condition. Each lizard received one session of three trials a day, repeated over four days. When an animal moved before the start of the video, the trial was repeated the next day; hence it took five days to test all lizards. Each trial was separated by an inter-trial interval of at least 10 min, during which time the lizard was returned to its enclosure. Each animal received 12 trials in total, three trials per condition. The order of the trials was counterbalanced between animals and across sessions.

Data and statistical analysis

All trials were coded from video recordings. If the lizard responded within the appropriate time period (5 s after the start of the video for gaze following into the distance and 1 min for moving around the barrier), it was coded as 1; if it did not, it was coded as 0. These scores were used as the dependent variable in a general linear model, with incubation temperature and test condition as fixed factors, along with the interaction between them. Individual was used as a random factor.
We also used time to respond to gaze as a dependent variable in the same model to assess whether incubation influenced the time it took the subject to respond to the demonstrator's gaze. Ten per cent of the data was analysed by a second individual who was blind to condition, and the inter-observer reliability was excellent (Cohen's k = 0.913, p = 0.001). All statistical analyses were carried out using Minitab (v. 17).

Social learning

To investigate social learning, we used a bi-directional control procedure [23] in which the lizards observed a video of an unfamiliar lizard opening a sliding door with its foot or with a sliding head movement to receive food behind it (figure 1b). After watching the video, the animals were given access to the sliding door and had 5 min to open the door themselves. Each lizard was presented with 10 trials. The experimental arena (120 × 41.5 × 51 cm; length × width × height) was divided into two parts by the test apparatus (figure 1b): the test area (where the subjects were located) and the demonstration area (where a computer screen was located; figure 1b). The test apparatus was a wooden board (41.5 × 51 cm) with a horizontally sliding door with vertical bars in front of the hole (12 × 12 cm). The door could be opened to either the left or the right side. The sides of the arena were opaque and the floor was lined with newspaper. All testing was recorded with a digital camera (Panasonic HC-V100) on a tripod positioned above the arena. All animals were habituated to the arena in a similar manner to the previous experiment. During the experiment the lizards received up to two trials per day, with a total of 10 trials per animal. At the onset of each trial the lizard was placed in the arena for 30 s. They then watched a demonstration video lasting 11 s in which they observed an unknown female demonstrator opening a horizontally sliding door to either the left or the right side, using a specific head movement (see [23] for full details). If the lizard moved away from the demonstration area prior to the start of the video, the lizard was placed in front of the screen before the video started. The observer animals saw either a demonstration in which the door was opened in a rightward direction or in a leftward direction (a mirrored version of the stimulus). Each lizard was pseudo-randomly assigned either leftward or rightward opening demonstrations; this was counterbalanced across incubation conditions. After the video presentation, the observer lizard was moved to the test area and placed behind a screen while the experimenter set up the trial (approx. 15 s). After the screen was raised, the lizard had five minutes to open the sliding door to access a reward of a mealworm located behind the door. The trial was terminated when the lizard successfully opened the door and ate the reward, or after five minutes had passed. Lizards were returned to their vivarium between trials. The lizards were considered to have successfully opened the door when the door was moved enough, to either the right or left side, to create a visible gap (see electronic supplementary material, video S1 of a successful trial). The time taken before a successful opening was measured, as was the latency before attempting to open the door; latency ended when the subject first moved to make contact with the door.
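The gaze-following analysis described above was run in Minitab; purely as an illustrative sketch, an equivalent workflow could look like the following in Python, where the file name and the columns lizard, temp, condition, latency_s, coder1 and coder2 are hypothetical stand-ins rather than the authors' actual data layout:

```python
# Illustrative sketch of the gaze-following analysis (hypothetical column
# names; the original analysis was performed in Minitab, not Python).
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import cohen_kappa_score

trials = pd.read_csv("gaze_trials.csv")  # hypothetical file: one row per trial

# Binary coding: 1 if the lizard responded within the 5 s window, else 0.
trials["response"] = (trials["latency_s"] <= 5).astype(int)

# Incubation temperature, condition and their interaction as fixed effects,
# with individual lizard as the random (grouping) factor.
model = smf.mixedlm("response ~ temp * condition",
                    data=trials, groups=trials["lizard"]).fit()
print(model.summary())

# Inter-observer reliability on the double-coded 10% of trials.
double = trials.dropna(subset=["coder1", "coder2"])
print("Cohen's kappa:", cohen_kappa_score(double["coder1"], double["coder2"]))
```

The same model can be refitted with response time as the dependent variable, mirroring the latency analysis reported above.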
To investigate whether differences in motivation could account for any differences in performance between the two groups, we assessed motivation by recording the total number of head and claw interactions with the door prior to its opening and by measuring the latency to approach the door. A control condition was not included in this experiment as Kis et al. [23] showed that, over the course of 10 trials, bearded dragons were unable to complete this task without first observing a conspecific succeed.

Data and statistical analysis

For the door opening, the data for the hot and the cold groups were compared using an independent t-test assuming unequal variances. A general linear model was used to test whether there was a difference in the speed of social learning between the hot and the cold groups. The time taken to open the door was used as the dependent variable, temperature was used as a fixed factor, and the trial number was used as a covariate. Ten per cent of the videos were second coded and the correlation of results was excellent (ρ(13) = 0.752, p = 0.003). All statistical analyses were carried out using Minitab (v. 17).

Gaze following

Bearded dragons followed the gaze direction of the stimulus animal significantly more than they looked in that direction during the control trials, both when looking upwards (F(1,23) = 10.89, p = 0.003; figure 2a) and when looking sideways (F(1,23) = 21.49, p = 0.001; figure 2b). Egg incubation temperature did not influence their propensity to follow gaze into the distance (looking upwards: F(1,23) = 1.44, p = 0.242; looking sideways: F(1,23) = 0.29, p = 0.597). Lizards incubated at colder temperatures, however, were quicker to respond in general in the looking upwards but not the looking sideways condition (looking upwards: F(1,24) = 5.59, p = 0.027; looking sideways: F(1,34) = 0.53, p = 0.975; electronic supplementary material, figures S1 and S2). There was no evidence of geometric gaze following in the bearded dragons; individuals were equally as likely to move around the barrier on a control trial as they were when they observed the stimulus animal looking behind the barrier (F(1,23) = 1.44, p = 0.242; figure 2c); this was also unaffected by incubation temperature (F(1,23) = 1.24, p = 0.277).

Social learning

Although the 'cold group' opened the door more times than the 'hot group' (figure 3a), this difference was not significant (t(8) = −1.79, p = 0.111). However, over the course of 10 trials, the cold group completed the task significantly quicker than the hot group (figure 4).

Discussion

Our findings reveal that egg incubation environment impacts upon some aspects of social cognition in adult bearded dragons. We found that the lizards were able to follow the gaze of a conspecific into distant space, but that both incubation groups performed similarly in this task, and neither group was able to follow the gaze of a conspecific around a barrier. By contrast, there was an effect of incubation temperature on social learning, with the cold-incubated animals performing significantly faster over time than those incubated at a warmer temperature. Gaze following into the distance is thought to be controlled by an innate orienting response, which can be modulated by experience [13,18]. This is generally considered to be a relatively simple mechanism and, as such, it may not be surprising that no differences were observed in performance of this task.
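Stepping back to the statistical methods for a moment: the door-opening comparisons described in the analysis section above admit a similar illustrative sketch (again, the file and column names are hypothetical stand-ins, and the original analyses were run in Minitab):

```python
# Illustrative sketch of the social-learning comparisons (hypothetical column
# names; the original analyses were performed in Minitab, not Python).
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

doors = pd.read_csv("door_trials.csv")  # hypothetical file: one row per trial

# Successful openings per lizard, compared between incubation groups with an
# independent t-test assuming unequal variances (Welch's t-test).
opens = doors.groupby(["lizard", "temp"], as_index=False)["opened"].sum()
cold = opens.loc[opens["temp"] == "cold", "opened"]
hot = opens.loc[opens["temp"] == "hot", "opened"]
print(stats.ttest_ind(cold, hot, equal_var=False))

# Time to open the door as the dependent variable, incubation temperature as
# a fixed factor and trial number as a covariate.
model = smf.ols("open_time_s ~ temp + trial", data=doors).fit()
print(model.summary())
```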
However, the results did suggest that lizards from the cooler incubation group looked up more rapidly than the hot-incubated animals. By contrast, geometric gaze following is considered cognitively complex because, at a minimum, it requires either learning how barriers impair vision [14] and using that information appropriately or, potentially, forming a mental representation of the demonstrator's visual perspective [13]. Here, at least under these conditions, the bearded dragons were unable to use this information, irrespective of their incubation environment. It is not clear whether this lack of effect represents a true lack of ability; however, the lizards moved around the barrier in 40% of all trials, suggesting that neophobia was not the reason for the failure in this task. We found a significant influence of egg incubation temperature on social learning. Over the course of the experiment, the cold-incubated animals opened the door significantly faster than the hot-incubated animals. The nature of this difference supports the idea that the contrast observed between the groups in this experiment may be the result of differences in associative learning abilities caused by incubation environment. It therefore provides further evidence for the idea that associative learning mechanisms may underpin social learning. This is supported by research in a number of areas. The medial cortex is thought to play a central role in reptile learning [36], and recent work has revealed that incubation temperature positively influences the density of neurones found in this region of the brain in hatchling Eastern three-lined skinks [6], though it remains unclear whether other brain areas are also impacted by the manipulation, and to what extent this observation applies to other species. Further, previous research has observed differences in associative learning abilities as a result of incubation environment in other species [6][7][8][9][32]. Taken together, the results add to the idea that associative processes play a crucial role in social learning abilities. There was a difference in the number of interactions with the door between the conditions, with the cold-incubated animals interacting with the door more than the hot-incubated animals. This suggests that there may be a difference in motivation between the two groups. However, there was no difference in willingness to approach the door. This, in combination with recent work with the same animals, which revealed no difference in food motivation between the two groups [37], makes it unlikely that the observed differences were the result of motivation; rather, they appear to reflect differences in social learning ability. All previous research in this area has used very young animals [6][7][8][9][32] and it was unclear whether the observed differences in cognitive ability persisted during ontogeny. Our previous work revealed that incubation environment impacted upon the development of behavioural traits but that these did not differ when the animals were adults [33]. Although most studies deal with short-term effects of incubation conditions [5], there is evidence that long-term (over a year) survival in snapping turtles (Chelydra serpentina) is affected by incubation temperature [38], which implies an effect on individual fitness.
Although the effect of egg incubation temperature on social cognition in young bearded dragons is unknown, the results are the first to reveal that incubation environment can influence the cognitive ability of adult reptiles. The results come from captive animals; however, they suggest that if this behaviour were seen under natural conditions it would be likely to have a profound impact upon individual fitness. The mechanisms that underlie temperature-dependent differences in the phenotypes of oviparous reptiles remain poorly understood [5,36]. One intriguing idea suggests that incubation environment may 'select' for traits that are adaptive to the specific environment into which the animal is born [33]. Therefore, a cooler environment may produce animals that are better adapted to survival in that temperature profile, and vice versa. Further research is required to test these ideas. If so, variation in the sensitivity of oviparous reptiles to external environmental factors may provide a behavioural buffer that allows individuals to better cope with heterogeneous and changing environments.

Ethics. This experiment had approval from the College of Science ethical committee at the University of Lincoln (COSREC-2014-05), and the work was carried out in accordance with the relevant guidelines and regulations of the UK.

Data accessibility. We provide supporting data in the supplementary information.

Authors' contributions. All authors contributed to experimental design. H.S. and M.v.G. ran the experiments. All authors contributed to writing the manuscript. All authors have approved the manuscript and agree to be held accountable for the contents of this work.
Mathematical Approach to the Platonic Solid Structure of MS2 Particles

Bacteriophage MS2 is a viral particle whose symmetrical capsid consists of 180 copies of asymmetrical coat proteins with triangulation number T = 3. The mathematical theorems in this study show that the phage particles in three dimensions (3D) might be an icosahedron, a dodecahedron, or a pentakis dodecahedron. A particle with 180 coat protein subunits and T = 3 requires some geometrical adaptations to form a stable regular polyhedron, such as an icosahedron or a dodecahedron. However, mathematical reasoning applied to electron micrographs of phage MS2 shows that the 180 coat proteins are packed in an icosahedron. The mathematical analysis of electron micrographs in this study may be a useful tool for surveying the Platonic solid structure of a phage or virus particle before performing 3D reconstruction.

When an icosahedron is unfolded onto a flat sheet [4], its hexagon units consist of 6 regular triangles. A convex angle can be formed by 5 angles of regular triangles, but not by 6 angles, which form a plane. The 20 triangular faces created by the vertices of 12 pentagonal cones and 30 edges can form an icosahedron.

The coat protein of MS2 forms a shell that protects the phage nucleic acid and acts as a translational repressor [5]. The tertiary structure of the coat proteins is asymmetrical. A particle with nucleotides packed by asymmetrical proteins requires a low free energy to achieve a stable condition. Packing as a helix or a regular polyhedron is a way of obtaining a symmetrical solid from asymmetrical subunits. In icosahedral particles, proteins are packed on the faces and directed to the vertices, and the particles become symmetrical [3] [6]. In the MS2 capsid, one triangle of the icosahedron contains 3 asymmetrical subunits [6] [7]. This study uses mathematical analysis to identify the reasonable Platonic solids for packing a symmetrical capsid with asymmetrical subunits. Results show that MS2 particles with 180 coat proteins and triangulation number T = 3 might form an icosahedron, a dodecahedron, or a pentakis dodecahedron.

The overall shape of MS2 is spherical, but it is difficult to see the three-dimensional (3D) figure in two-dimensional (2D) electron microscopy (EM) images before 3D reconstruction. Therefore, this study also introduces a mathematical method to predict a particle's 3D solid figure from 2D EM images before performing 3D reconstruction.

Theory and Calculation

The 3D structure of MS2 should be an isohedron or a regular polyhedron if the particle is to pack asymmetrical proteins into a stable symmetrical structure. The capsid of phage MS2 contains 180 identical copies of a coat protein with a T = 3 isohedral (such as icosahedral) shell [3] [7]. MS2 particles may be constructed with 60 triangles, 45 quadrilaterals, 36 pentagons, or 30 hexagons. This study examines particle construction using the following theorems and mathematical analysis.

Theorem 1. If a polyhedron has 180 subunits, where the 180/n polygons each have n sides, n should be a positive integer equal to or less than 5.

Proof. If a polyhedron is constructed with n-sided polygons, the polyhedron has 180/n faces and the number of edges should be (180/n) × n ÷ 2 = 90.
By the Euler theorem, V + F − E = 2, where V, F, and E denote the number of vertices, faces, and edges, respectively. Thus V = 2 + E − F = 2 + 90 − 180/n = 92 − 180/n. A vertex of a solid angle needs at least three faces (a trimer), so 3V ≤ 2E = 180 and V ≤ 60. Then 92 − 180/n ≤ 60, which gives 180/n ≥ 32 and n ≤ 5.625. Thus, the maximum of n is 5, where the polygons are pentagons, quadrilaterals, or triangles. Therefore, a polyhedron with 180 structure protein subunits should not be constructed with 30 hexagons.

Theorem 2. If a polyhedron with 180 subunits is constructed with 36 pentagons, the polyhedron should not be an isohedron or a regular polyhedron.

Proof. If 36 pentagons could construct an isohedron or a regular polyhedron, the number of faces would be 36. If every two adjacent faces form a dihedral angle along a shared edge, then the number of edges = 36 × 5 ÷ 2 = 90. By the Euler theorem, V = 2 + E − F = 2 + 90 − 36 = 56. The shell of an isohedron or a regular polyhedron consists of m-polymer units, and y such m-polymers make up the 36 faces; that is, my = 36. The solution of the three simultaneous equations yields no positive integer m, which is a contradiction. Therefore, this theorem suggests that 36 pentagons cannot form an isohedron or a regular polyhedron.

Theorem 3. If a polyhedron with 180 subunits is constructed with an odd number of quadrilaterals, the polyhedron should not be an isohedron or a regular polyhedron.

Proof. Suppose that an isohedron or a regular polyhedron can be constructed with an odd number, 2n + 1, of quadrilaterals. The number of faces is then 2n + 1, and the number of edges is (2n + 1) × 4 ÷ 2 = 4n + 2. By the Euler theorem, V = 2 + E − F = 2 + (4n + 2) − (2n + 1) = 2n + 3. The number of vertices is odd, which is a contradiction because an isohedron or a regular polyhedron is symmetrical, and the number of vertices should be even. Thus, an odd number (such as 45) of quadrilaterals cannot form an isohedron or a regular polyhedron.

According to Theorems 1, 2, and 3, 45 quadrilaterals, 36 pentagons, or 30 hexagons (polygons with more than 3 edges) cannot form an isohedron or a regular polyhedron with 180 subunits. Only 60 triangles can form a regular polyhedron for packing 180 subunits.

Lemma 4. If a convex solid angle is constructed with n equilateral triangles, n should be 3, 4, or 5.

Proof. In 3D models, a convex solid angle has at least 3 faces. For a solid angle, the total angle with 3 equilateral triangular angles is 180°, the total angle with 4 equilateral triangular angles is 240°, and the total angle with 5 equilateral triangular angles is 300°. The total angle with 6 equilateral triangular angles is 360°, which is a full (cyclic) angle. For a convex solid angle constructed with n equilateral triangles, n should therefore be 3, 4, or 5. A vertex whose angles sum to 360° or more is flat or concave.

Theorem 5. If a polyhedron is constructed with 60 identical equilateral triangles, the polyhedron is not an isohedron or a regular polyhedron.

Proof. If a polyhedron is constructed with 60 identical equilateral triangles, as in Figure 1, then vertex Q in Figure 1 consists of 6 identical equilateral triangles. Lemma 4 shows that vertex Q is not a convex solid angle. Instead, vertex Q may belong to one of the following cases. 1) Vertex Q is a concave solid angle. This hypothesis is not true because the polyhedron is convex. 2) Point Q is at the center of hexagon PQ′P′O′RO. Vertex O and pentagon PQRST form a regular pentagonal cone. A dihedral angle forms between triangles OPQ and OQR on line OQ. Thus, polygon PQ′P′O′RO is not a regular hexagon, and the hypothesis is not true. 3) Vertex Q lies on a dihedral angle. Vertex Q′ and pentagon QP′T′S′R form a regular pentagonal cone. Line O′Q is on the dihedral angle between triangles O′P′Q and O′QR.
On the other hand, a dihedral angle forms between triangles OPQ and OQR on line OQ, as in (2). Therefore, two dihedral angles would appear at vertex Q, and the hypothesis is not true.

Theorem 6. A polyhedron becomes an icosahedron when the solid angles of the hexamers of the polyhedron spread to become flat and the connecting lines between the vertices of 2 nearby pentagonal cones become the ridges of dihedral angles.

Proof. As Figure 1 shows, the distances between the vertices of any 2 pentagonal cones in Figure 2 are the same. Spreading and flattening the solid angles of the hexamers in the pentagonal cone O-O′Q′VUR′ forms dihedral angles on lines OO′, OQ′, OV, OU, and OR′. These lines become the edges of pentagonal cones, and vertex O has equilateral triangular faces. The vertices of the 12 pentagonal cones have the same number of edges. Therefore, the polyhedron becomes an icosahedron.

Theorem 7. A polyhedron becomes a dodecahedron when the solid angles of the pentamers of the polyhedron spread out and become flat. When the solid angles of the pentamers in the polyhedron of Figure 1 spread out and become flat (Figure 3), finally, the polyhedron becomes a dodecahedron.

Mathematical Theorems in Packing a Stable Particle

Theorems 1, 2, and 3 show that 45 quadrilaterals, 36 pentagons, or polygons with 6 or more edges cannot form an isohedron or a regular polyhedron with 180 subunits. Only 60 triangles can form a regular polyhedron for packing 180 subunits. In the MS2 particle, the way to pack 180 coat proteins is to form an isohedron or a regular polyhedron with 60 identical triangles (Lemma 4). The 3D polyhedron constructed with 60 identical equilateral triangles in Figure 1 is not an isohedron or a regular polyhedron (Theorem 5). That is, a polyhedron consisting of 60 identical equilateral triangles is not a convex isohedron. Therefore, a T = 3 MS2 phage with 180 coat protein subunits might become an icosahedron [3] only if the solid angles of hexamers in the polyhedron of 60 identical equilateral triangles spread to become flat and the connecting lines between the vertices of 2 nearby pentagonal cones become the ridges of dihedral angles (Theorem 6). Theorem 7 shows that the MS2 particle model may form a dodecahedron when the solid angles of pentamers spread out and become flat. The 3D structure of MS2 [3] consists of symmetrical units that lie between 2 threefold axes and 1 fivefold axis. However, a particle with 180 coat protein subunits and T = 3 requires some geometrical adaptations to pack a stable regular polyhedron, including an icosahedron or a dodecahedron (Theorems 6 and 7).

An array of hexamers is the basic unit for generating an icosahedron [4]. A hexagon consisting of 6 regular triangles cannot form a solid angle, but lies flat (Lemma 4). Removing a regular triangle from the triangles in the hexagon forms a pentagonal cone. The vertex of the pentagonal cone becomes one of the 12 solid angles of an icosahedron, and the 20 triangular faces form parts of hexagons [8].

For MS2 capsids following the principle of quasi-equivalence [8], the triangulation number T = h² + hk + k² in the hexagon net is 3, where h and k are nonnegative integers on the original hexagonal net that cannot both be zero simultaneously, and the capsid has 180 structural subunit proteins (S = 60T = 180) and 32 morphological units (M = 10T + 2 = 32).
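As a quick numerical cross-check (an aside, not part of the original derivation), the vertex, edge, and face counts of the three candidate solids and the Caspar-Klug counts for T = 3 can be verified in a few lines of Python:

```python
# Sanity checks on the counts used in the theorems above (illustrative only).

# (vertices, edges, faces) of the three candidate solids.
solids = {
    "icosahedron": (12, 30, 20),
    "dodecahedron": (20, 30, 12),
    "pentakis dodecahedron": (32, 90, 60),
}
for name, (v, e, f) in solids.items():
    assert v - e + f == 2  # Euler's theorem: V + F - E = 2
    print(f"{name}: V={v}, E={e}, F={f}")

# Caspar-Klug counts for a T = 3 capsid such as MS2.
T = 3
print("structural subunits S =", 60 * T)      # 180 coat proteins
print("morphological units M =", 10 * T + 2)  # 32
```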
T values may be 1, 3, 4, 7, 9, 12, 13, 16, 19, 21, 25, etc. [1], although T = 2 and 6 also appear [9] [10]. When T > 1, the morphological units appear as pentamers or hexamers. In an assembly pathway, 5 dimers converge into a pentamer, and twelve pentamers are linked together with free dimers, creating a complete particle [2]. According to Theorems 6 and 7, a T = 3 particle can form a regular polyhedron with geometrical degeneration. Although the numbers of subunit proteins vary, the particle morphology is quasi-equivalent.

This study hypothesizes that regular Platonic solids allow a single type of asymmetric subunit to assemble into a well-defined spherical structure [7]. The asymmetrical unit contains 3 subunits, designated A, B, and C, in an icosahedral particle of phage MS2. Pairwise interactions between the monomers form dimers. The capsid contains 2 types of dimers: one at the quasi-twofold axis composed of subunits A and B, and the other at the icosahedral twofold axis consisting of 2 C subunits [7]. Therefore, the capsid is effectively constructed from 90 dimers [7]. The theorems above indicate that T = 3 phage MS2 particles with 180 protein subunits may be icosahedral, dodecahedral, or pentakis dodecahedral (dual semiregular solid) particles. Therefore, further studies with EM images are necessary to determine whether the Platonic solid of MS2 is an icosahedron or a dodecahedron.

Mathematical Reasoning with Electron Micrographs of MS2 Particles

In electron micrographs [11], it is difficult to distinguish between regular hexagons and hexagon-like dodecagons in the projections of an icosahedron. Both regular hexagons and regular decagons appear in EM images, though regular decagons appear spherical (Figure 4). However, unequilateral hexagons with large obtuse angles, rather than narrow obtuse angles, appear in EM images of MS2 in twofold views (Figure 5).

The 5 regular Platonic solids are the tetrahedron, octahedron, cube, dodecahedron, and icosahedron. The icosahedron has a common symmetry with the dodecahedron, and the octahedron is similar to the cube [7]. Most studies recognize the bacteriophage MS2 as an icosahedral particle [1]-[3] [6] [12] [13], though an MS2 coat protein mutant corresponds to T = 3 octahedral particles [9]. The main difference in the subunit packing between the octahedral and icosahedral arrangements is close to the fourfold and fivefold symmetry axes [7].
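The permitted T values quoted at the start of this passage follow directly from the triangulation formula T = h² + hk + k² introduced above; a minimal enumeration (an illustrative aside, not from the paper):

```python
# Enumerate Caspar-Klug triangulation numbers T = h^2 + h*k + k^2
# for small nonnegative h and k (not both zero).
t_values = sorted({h * h + h * k + k * k
                   for h in range(6) for k in range(6)
                   if (h, k) != (0, 0)})
print(t_values[:11])  # [1, 3, 4, 7, 9, 12, 13, 16, 19, 21, 25]
```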
The mathematical reasoning from Theorems 6 and 7 shows that T = 3 phage MS2 particles may be icosahedral, dodecahedral, or pentakis dodecahedral particles. The projections of icosahedral particles in phage MS2 EM images exhibit unequilateral hexagons in twofold views, regular hexagons in threefold views, and regular decagons in fivefold views (Figure 4). The projections of a dodecahedron are unequilateral hexagons in twofold views, hexagon-like dodecagons in threefold views, and regular decagons in fivefold and asymmetrical views. The projections of pentakis dodecahedra are regular decagons in twofold, threefold, and fivefold views (Table 1). In fivefold views, the icosahedron, dodecahedron, and pentakis dodecahedron models show the same projection, a regular decagon. In threefold views, the 3 models have different projections: regular hexagons from icosahedra, hexagon-like dodecagons from dodecahedra, and regular decagons from pentakis dodecahedra. In twofold views, the projections of icosahedra and dodecahedra are unequilateral hexagons (Figure 6). In these unequilateral hexagons, 4 vertices have the same angle, unlike the other 2 obtuse angles. The 2 large obtuse angles in the unequilateral hexagons of the projections from icosahedra and dodecahedra are 138° and 116°, respectively [14]. The obtuse angles of unequilateral hexagon projections from icosahedra are thus much larger than those from dodecahedra. EM images of the MS2 particles show the projections of icosahedral particles, since the obtuse angles of the unequilateral hexagons are 138°. The T = 3 phage MS2 particles with 180 coat protein subunits [3] therefore form icosahedra. This suggests that the solid angles of hexamers in the polyhedral particles spread to become flat, and the connecting lines between the vertices of 2 nearby pentagonal cones become the ridges of dihedral angles, during the packing of the coat protein subunits in MS2 (Theorems 6 and 7). A few viral particles are dodecahedra [4] [15]. In nature, spreading of the solid angles of hexamers in a polyhedron seems to be easier and more common than spreading of pentamers. Most viruses and phages form icosahedral particles instead of dodecahedral or pentakis dodecahedral particles.

Mathematical Analysis of Particle Electron Micrographs for 3D Reconstruction

Although the 3D structure reconstructed from EM images shows the Platonic solid of a particle, the wrong order of 3D reconstruction might yield a false solid figure [16]. It is easy to recognize an icosahedron or a dodecahedron in the Platonic solid of a particle using EM images and mathematical analysis. Following a primary survey of the Platonic solid of the particle, the 3D reconstruction can be performed confidently with the right symmetrical order [10] [14]. The mathematical analysis of EM images is a useful primary survey before performing the 3D reconstruction of a polyhedral particle.

Conclusion

In conclusion, a viral particle with 180 coat protein subunits and T = 3 requires some geometrical adaptation to form a stable regular polyhedron, such as an icosahedron or a dodecahedron. The mathematical analysis of MS2 particles reveals the EM projections of icosahedral particles. The MS2 particles are confirmed to be icosahedra.
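As a numerical aside (not part of the original paper), the 138° and 116° obtuse angles quoted above match the dihedral angles of the regular icosahedron and dodecahedron, which can be checked from their standard closed forms:

```python
import math

# Dihedral angles of the regular icosahedron and dodecahedron.
icosa = math.degrees(math.acos(-math.sqrt(5) / 3))   # ~138.19 degrees
dodeca = math.degrees(math.acos(-1 / math.sqrt(5)))  # ~116.57 degrees
print(f"icosahedron: {icosa:.2f} deg, dodecahedron: {dodeca:.2f} deg")
```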
Figure 2. An icosahedron forms from a polyhedron after the solid angles of hexamers degenerate and the connecting lines (solid lines) between the vertices of 2 nearby pentagonal cones become the ridges of dihedral angles.
Figure 3. A dodecahedron forms from a polyhedron when the solid angles of pentamers (solid lines) spread out and become flat.
Figure 4. Micrograph of phage MS2 in twofold (a), threefold (b) and fivefold (c) views. Note particles attaching (d) and unattaching (e) to a pilus of E. coli. The bar represents 10 nm.
Figure 6. Projections of an icosahedron (a) and a dodecahedron (b) from a twofold view.
Table 1. Projections of polyhedra from various views.
Behavior therapy as a theoretical approach to group counseling

Received Sep 30, 2021; Revised Oct 20, 2021; Accepted Nov 29, 2021

Through group counseling, clients are given a safe and comfortable place to convey the problems they are experiencing, which can help them in alleviating these problems. Each problem is discussed with other group members, who may also have the same problem. Counselors, as group leaders, can use various relevant approaches to realize the goals of group counseling, one of which is behavior therapy. Behavior therapy focuses on observable behaviors and the learning experiences that promote change. The research approach employed is library research: the development of this manuscript was based on a survey of several publications from research articles on behavior therapy. The author used a thematic analysis to draw basic conclusions concerning counselors' use of behavior therapy as a theoretical approach to group counseling. This paper is intended to serve as literature for counselors who wish to use a behavior therapy approach in group counseling.

Introduction

Group counseling can alleviate client problems in a group format while prioritizing the principle of confidentiality for each group member. The group leader (counselor) focuses the group on different individuals and their problems; the members then try to help each other under the leader's guidance (Beck, 2016). The group leader will sometimes play a dominant role by directing the session to be more productive (Jacobs, Schimmel, Masson, & Harvill, 2015). Another view states that group counseling is a type of counseling in which a small group of people meets to discuss, interact, and explore problems with each other and with the group leader. Group therapy aims to provide a safe and comfortable environment on campus for students to solve their problems. Members gain insight into their own ideas and actions, and offer advice and support to others (Berg, Landreth, & Fall, 2017; Pérusse, Goodnough, & Lee, 2009). Group counseling helps group members alleviate the problems they experience. Each group member discusses them together while maintaining the principle of confidentiality, and group leaders also have a significant role in the success of group counseling.

Most group members do not require substantial personality reconstruction, and their problems are usually related to developmental life goals. The emphasis in group counseling is on identifying internal sources of strength, which are growth oriented. Group members may face situational crises and short-term conflicts, overcome personal or interpersonal challenges, navigate life transitions, or try to change self-defeating patterns. Groups provide the empathy and support needed to cultivate the trust that allows people to share and discuss their difficulties. Group members are provided with help in honing their current interpersonal problem-solving skills so that they will be better equipped to tackle similar problems in the future (Corey, 2015). In addition, clients with obstacles to building interpersonal relationships can be helped by group counseling (Young et al., 2016). To realize the goals of group counseling, it is necessary to choose an approach appropriate to the problems experienced by group members, because each approach has its own focus.
Observable behaviors, the causes of present behavior, learning experiences that promote change, adapting treatment tactics to specific clients, and rigorous testing and evaluation are all priorities for behavior therapy counselors. Behavior therapy has been used to treat a wide range of psychiatric illnesses in a variety of client groups. This method has successfully treated anxiety disorders, depression, substance misuse, eating disorders, domestic violence, sexual issues, and hypertension. Developmental disabilities, mental illness, special education and education, community psychology, clinical psychology, rehabilitation, business, self-management, sports psychology, health-related behavior, and gerontology all use behavioral techniques (Corey, 2012). In comparison to other types of techniques utilized in group counseling, behavior therapy has received less attention in the literature. In this study, the meaning, goals, and principles of group counseling with behavior therapy are examined.

Method

The method used is library research. The construction of this manuscript was based on a review of various research papers relevant to behavior therapy. To arrive at basic conclusions about counselors' use of behavior therapy as a theoretical approach to group counseling, the author performed a thematic analysis.

Result and Discussion

Group Counseling Based on Behavior Therapy

"Which is better, group counseling or individual counseling?" is a common question. This is a difficult question to answer because people and situations are so diverse. Sometimes one or the other is the best option, while at other times a combination of individual and group counseling is best. Generally speaking, groups can be very beneficial. Some people prefer group counseling because it provides feedback from others and allows them to learn more by listening rather than talking. Teenagers often prefer group counseling to individual treatment because they are more likely to talk to other adolescents than to adults. For people who are stuck in the mourning process, groups have proven to be very helpful (Humphrey, 2009; Jacobs, et al., 2015; Worden & Winokuer, 2011).

Clients are taught self-management skills and new coping behaviors, as well as how to restructure their thoughts, in the group-based behavior approach. After completing their group experience, clients can apply these skills to take control of their lives, deal with current and future challenges, and function successfully. Many groups are designed to give clients more power and freedom in specific areas of their lives (Corey, 2012). Group members are homogeneous in the behavior therapy approach, which means that there are many different types of groups aimed at behavioral change, or groups that mix behavioral and cognitive approaches for a specific demographic. In today's world, structured groups with a psychoeducational focus are very popular. In the practice of behavioral groups, at least five main approaches can be used: (1) social skills training groups, (2) psychoeducational groups with specific topics, (3) stress management groups, (4) multimodal group treatment, and (5) mindfulness- and acceptance-based behavioral therapy in groups (Corey, 2012).

The Purpose of Group Counseling Based on Behavior Therapy

The goal of group counseling is to help people prevent and correct problems and to achieve certain goals, which could be educational, career-related, social, or personal.
Group counseling emphasizes the interpersonal communication of conscious ideas, feelings, and behaviors within a short time span. Members define the content and goals of counseling groups, which are frequently problem-oriented (Corey, 2015). Process objectives and outcome goals are two forms of group counseling/therapy goals established by behavioral researchers. Process objectives refer to the goals linked with the group process. Process goals, for example, might assist members in becoming more comfortable in the group, increasing their openness to the group, and learning to engage with members in a more productive manner. Some educators believe that focus groups should be about what's going on in the "here and now," and that external issues should be avoided. Interaction, member criticism, and confrontation take up a lot of time with this strategy. While focusing on process goals in group therapy can be beneficial, we believe it should not be the primary focus of any therapeutic group. Individual concerns and outcome goals should be prioritized (Jacobs, et al., 2015).

Outcome objectives are those that have to do with behavioral improvements in members' lives, such as acquiring a job, enhancing interpersonal connections, staying calm, or feeling more self-assured. Therapy groups that concentrate on members' concerns are far more beneficial than groups that concentrate on member interactions. Leaders that prioritize outcome goals encourage their members to concentrate on problems that are at or below depth level 6 on the depth chart (Jacobs, et al., 2015).

In behavioral therapy, goals are quite important. Behavioral therapy's overall purpose is to expand personal choices and create new learning environments. Early in the therapeutic process, the client, with the support of the therapist, develops particular treatment goals. Although both evaluation and therapy are carried out, a formal assessment is carried out prior to treatment to identify the behavior that needs to be changed. The extent to which the identified goals are being met is determined through ongoing therapy evaluation. It is critical to devise methods for assessing progress toward objectives that are based on empirical evidence (Corey, 2012). The active role of clients in deciding on their therapy is emphasized in contemporary behavior therapy. The therapist helps the client come up with clear, quantifiable objectives. Client and counselor goals must be explicit, concrete, understandable, and agreed upon. The counselor and client address the goal-related behavior, the conditions that demand adjustment, the nature of the sub-goals, and a strategy for achieving these objectives. Setting therapeutic goals necessitates a dialogue between the client and the counselor, which culminates in a contract that drives the therapy process. Goals are changed by the behavioral therapist and the client as needed during the therapy process (Corey, 2012). The current trend in behavior therapy is to establish processes that give clients more power and freedom. People's skills are improved in behavior therapy so that they have more possibilities for responding. People are free to choose from previously unavailable options as they overcome burdensome practices that limit choice (Corey, 2012).
Characteristics of behavior therapy

Behavioral group therapy includes a number of distinguishing features that set it apart from other group therapies. Behavioral practitioners are distinguished by their meticulous attention to criteria and measures. Conducting behavioral evaluations, clearly laying out collaborative treatment goals, developing specialized treatment procedures suited to a particular condition, and evaluating the outcomes of behavior group therapy are all unique elements of behavioral group therapy. Behavior therapists utilize short-term and time-limited therapies to help members solve problems and learn new abilities (Corey, 2012). The characteristics of behavior therapy are listed below (Corey, 2012).

First, behavior therapy is founded on the scientific method's concepts and processes. To assist clients in changing their maladaptive behavior, experimentally derived learning principles are utilized in a methodical manner. Behavioral practitioners are distinguished by their methodical devotion to precision and empirical evaluation. Behavioral therapists formulate treatment goals in tangible language so that their interventions may be replicated. The client and therapist agree on the treatment goals. The therapist evaluates problem behaviors and the conditions that support them throughout therapy. The effectiveness of the assessment and treatment procedures is assessed using research methods. The therapy method chosen must be proven to be effective. In a nutshell, behavioral concepts and methods are stated plainly, empirically tested, and continually revised.

Second, rather than investigating probable historical determinants, behavior therapy focuses on the client's current problems and the circumstances that influence them. The focus is on the precise factors that influence current functioning as well as the factors that could alter performance. Understanding the past can sometimes provide important information about environmental occurrences that are relevant to current behavior. Behavioral practitioners use a method called functional assessment, or "behavioral analysis," to look at present environmental events that perpetuate problematic behavior, and they help clients create behavior change by modifying those environmental events.

Third, clients in behavior therapy are expected to take an active role in their treatment by taking specific measures to address their issues. Rather than simply talking about their situation, people must act to change it. Clients learn and practice coping skills, as well as role-play new behaviors, both during and outside of treatment sessions. The therapeutic tasks that the client completes in their daily lives, sometimes known as homework, are an important aspect of this approach. Learning is viewed as the foundation of behavior therapy, which is an action-oriented and educational approach. To replace old and maladaptive behaviors, clients acquire new and adaptive ones.

Fourth, this method assumes that change can happen even if the underlying dynamics are not understood. Behavioral practitioners work under the assumption that behavior change can happen before or after self-understanding, and that behavioral change can lead to further self-understanding. Knowing that one has a problem and knowing how to remedy it are two different things. Insight into, and understanding of, one's difficulties might provide an incentive to change.
Fifth, the emphasis is on evaluating overt and covert behavior, identifying issues, and assessing change. The target problem is directly assessed by observation or self-monitoring. Therapists consider their clients' culture to be an important aspect of their social environment, which includes a social support network that is relevant to the goal behavior (Tanaka-Matsumi, Higginbotham, & Chang, 2002). A detailed assessment and evaluation of the interventions utilized, to establish whether the technique resulted in behavior change, is critical to the behavioral approach. Individual and group interaction: there are three fundamental learning principles; attitude, for instance, is an emotional reaction to social stimuli, so the stimulus serves as both a reinforcer and an elicitor. These principles govern social phenomena such as group cohesion, attraction, persuasion, prejudice, and intergroup connections. Learning the rich human repertoire is a social interaction process that must be mastered (Shulman, 2015; Staats, 1996). Counselor as a Leader in Behavior Therapy The leader's style or role will always be determined by the group's objectives (Jacobs et al., 2015). The most effective group leaders are adaptable (Gladding, 2008). The fundamental point of contention in the leadership argument appears to be how active, deliberate, and structured leaders should be. In the past, many group counselors were hesitant to advise students that they needed to be more active and assertive. In the 1960s, there was a similar dispute, when counselors argued the respective merits of directive and non-directive counseling. Most counselors now encourage their students to be more engaged and direct in their individual therapy sessions (Jacobs et al., 2015). The active leadership approach is best for most groups when it comes to group counseling. We are firm believers in what we previously stated: people don't mind being led if it is done correctly. The majority of people in most groups require some form of structure, organization, and direction. In fact, the majority of members demand and desire leadership. This is particularly true at schools, hospitals, prisons, mental health facilities, and rehabilitation centers, as well as with groups that deal with issues like divorce, abuse, incest, and addiction (Jacobs et al., 2015). Behavioral group leaders work as teachers, encouraging members to learn and practice skills in small groups that they may use in their daily lives. In most groups, group leaders take an active, directing, and supportive role in the group, applying their understanding of behavioral concepts and skills to problem solving. With their involvement in setting agendas, designing homework, and teaching new skills and behaviors, they exemplify active participation and teamwork. The leader carefully observes and evaluates behavior in order to determine the conditions that are related to a specific problem as well as those that will aid change. Members of behavioral groups identify abilities that they lack or wish to improve (Corey, 2012). Therapeutic procedures in behavior therapy A strength of the behavioral approach is the development of specific therapeutic procedures that have been proven effective in objective ways. Behavioral therapy practitioners can incorporate into their treatment plan any technique that can be demonstrated to change behavior, drawing on various techniques regardless of their theoretical origin. 
Behavioral therapists should not limit themselves to methods derived from learning theory. Group leaders who function within a behavioral framework can develop techniques from a variety of theoretical points of view. Behavioral practitioners use a brief, active, directive, structured, collaborative model of psychoeducational therapy, which relies on empirical validation of its concepts and techniques (Bilderbeck et al., 2016; Brown, 2018; Delgadillo et al., 2016). Leaders keep track of group members' progress by collecting data before, during, and after all interventions. This method gives ongoing feedback on therapeutic progress to both the group leader and the members. This form of accountability is now demanded by many organizations and community institutions (Corey, 2012). The group structure is ideal for assertiveness and social skills training (Wilson, Jacob, & Powell, 2011). In behavioral groups, relaxation treatments, behavioral exercises, modeling, training, meditation, and mindfulness approaches are frequently integrated. In group settings, where people meditate in the presence of others, the sense of mindfulness is enhanced (Corey, 2012). Behavioral therapists' therapeutic processes are tailored to each individual client rather than being pulled from a "bag of tricks." In their interventions, therapists are frequently highly inventive. Applied behavior analysis,
2022-06-02T15:13:23.892Z
2021-11-30T00:00:00.000
{ "year": 2021, "sha1": "4244186dedc68e83d6bc0875a34c2e7820c6b107", "oa_license": "CCBY", "oa_url": "https://jurnal.iicet.org/index.php/jpgi/article/download/1286/888", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cc3662941e15fbcc79425cf4df067dbd0afb006c", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
56813212
pes2o/s2orc
v3-fos-license
Cost-Effectiveness Analysis of Automated Auditory Brainstem Response and Otoacoustic Emission in Universal Neonatal Hearing Screening Background: During the last decade, the rapid expansion of universal neonatal hearing screening (UNHS) has brought into focus questions about the most appropriate screening technology for this indication. Objectives: The main aim of this study was to examine the cost-effectiveness of automated auditory brainstem response (AABR) and otoacoustic emissions (OAE) in universal neonatal hearing screening programs. Methods: This economic study was performed in Iran. A decision tree model was applied for economic evaluation of the AABR and OAE devices used in UNHS. The main inputs of our model included the prevalence of hearing loss in Iran, device sensitivity, specificity and cost per case, as well as the definite diagnosis of each newborn. Upon collection, these inputs were analyzed with TreeAge economic analysis software. Sensitivity analysis was conducted upon examining the probability of uncertainty concerning the inputs. Results: For a one-year period and a one-million population of newborns, the UNHS entails a cost of $3,310,700 and detects 4,650 newborns with hearing loss, using the AABR device. However, if the OAE device is used, the cost will be $3,414,100 and 3,850 newborns with hearing loss will be detected. Consequently, the AABR device costs $103,400 less than the OAE device, and detects 800 more cases than the OAE device. Sensitivity analysis results revealed that the prevalence rate or the costs of the gold standard had no effect on displacing the dominant technology. Conclusions: In this study, it was found that the AABR is the cost-effective alternative compared to OAE. AABR dominates OAE, because it has lower expected costs and higher effectiveness. Background Hearing impairment in infants is a particularly serious obstacle to their optimal development and education, including language acquisition. According to a range of studies and surveys conducted in different countries, around 0.5-6 in every 1,000 neonates and infants have congenital or early childhood onset sensorineural deafness or severe-to-profound hearing impairment (1). In Iran, the prevalence of hearing loss is 5 in 1,000 live births on average (2). Deaf and hearing-impaired children often experience delayed development of speech, language and cognitive skills, which may result in slow learning and difficulty progressing in school (1). There is scientific evidence to suggest that early identification (three-six months) and administration of appropriate intervention at or before six months of age provides children with impaired hearing with the opportunity to develop normal speech and language. As a result, many countries have implemented neonatal hearing screening programs (3-10). The rationale for implementing universal neonatal hearing screening programs is that they can detect more deaf infants, providing a greater opportunity for them to experience normal language development, while providing overall benefits in terms of reducing disability and improving the health and well-being of the children (11). There are two main screening interventions generally available to a number of healthcare systems worldwide. These interventions are based on electrophysiological methods: otoacoustic emissions (OAE), and automated auditory brainstem response (AABR) (1). Both AABR and OAE are non-invasive, rapid screening tests. 
OAE measures sounds that are produced by the cochlea in response to acoustic stimulation, and AABR measures electroencephalographic waveforms in response to clicks (12-15). Factors such as limited funding, workforce shortage and the inadequate provision of follow-up and support services have prevented the implementation of neonatal hearing screening programs in the vast majority of developing countries (16). Kemper et al. (17) conducted a survey entitled "A Cost-Effectiveness Analysis of Newborn Hearing Screening Strategies." The main objective of their study was to compare two screening strategies: universal screening, and targeted screening. In their two-stage procedure, OAE and AABR were the applied devices, respectively. In the present research, however, the main objective was to compare AABR and OAE devices for implementing universal newborn hearing screening under a one-stage procedure. In Iran, hearing screening is conducted by implementing a universal strategy, and OAE is the most applied device. Hence, this study aimed to compare the cost-effectiveness of this device and that of AABR in performing universal newborn hearing screening. We aimed to find out why OAE is still the most applied device in conducting UNHS when AABR is apparently more accurate and cost-effective in the long run. During the last decade, the rapid expansion of universal neonatal hearing screening (UNHS) programs has brought into focus questions about the most appropriate screening technology for this indication. The high prevalence of hearing loss, its subsequent burden on the health system, and the ethical issues surrounding its delayed diagnosis have necessitated the implementation of UNHS programs. However, due to the limited resources of the health system, and the possible associated outcomes and costs that these devices may have, we sought to perform a cost-effectiveness analysis (CEA), as each of these devices may have extra benefits for the UNHS program. Eventually, it may be used as a tool for evidence-informed policymaking in the field of UNHS in Iran, and for optimizing resources to control hearing loss and its resultant burden. To our knowledge, this was the first formal study to focus on the economic evaluation of screening programs for hearing impairment in Iranian newborns. Objectives The high prevalence of hearing loss and its subsequent burden on the health system and the ethical issues surrounding its delayed diagnosis have necessitated the implementation of UNHS programs. However, due to the limited resources of the health system, and the possible associated outcomes and costs that these devices may have, we sought to perform a CEA, as each of these devices may have extra benefits for the UNHS program. The main objective of this study was to examine the cost-effectiveness of AABR and OAE in UNHS programs. Furthermore, it may be used as a tool for evidence-informed policy-making in the field of UNHS in Iran, and for optimizing resources to control hearing loss and its subsequent burden. Methods We applied a decision tree model with a time horizon of one year to economically evaluate the AABR and OAE devices used in UNHS. Our perspective was that of the health care system, and we only considered the direct costs. We defined effectiveness as the number of neonates with hearing loss whose hearing status has been correctly detected upon using either of the devices. In general, the cost-effectiveness of these two devices was analyzed based on the annual birth rate statistics. 
The diagnostic accuracy of the two devices was derived from an up-to-date and high-quality study (i.e., Heidari et al.'s systematic review and meta-analysis in 2016), and newborn screening and definite diagnosis costs were derived from hearing screening centers in Iran. In other words, this study is not primary research (like a cohort); rather, it is considered a secondary study. Rationale of the Model In this model, we assumed a one-million cohort population of neonates, who were screened during the first 24 hours of birth, using one of the AABR or OAE devices in a single stage, and without loss to follow-up (decision node). These devices identify the screened neonates as normal or abnormal. This detection may be true or false, and its probability depends on the prevalence of hearing loss, and the sensitivity and specificity of the devices. Here, hearing loss was defined as permanent congenital bilateral hearing loss exceeding 35 dB, presuming that the screening has been performed by an audiologist; therefore, no error occurs due to the operator's insufficient skills (chance node). The newborns detected as positive (whether true or false) by the clinical auditory brainstem response (ABR) device, as the gold standard, are considered to be definitely diagnosed. An audiologist performs this test, and the model presumes that its accuracy is 100%. The remaining newborns, whose results are negative (whether true or false), are discharged and not followed up (terminal node). Each device has four branches and end nodes, and their expected cost is determined as follows:
- Branch A/A': the cost of screening and definite diagnosis of newborns with true positive results (hearing loss present and detected) is included under this branch.
- Branch B/B': the cost of screening of newborns with false negative results (hearing loss missed) is included under this branch.
- Branch C/C': the cost of screening and definite diagnosis of newborns with false positive results (normal hearing, flagged as abnormal) is included under this branch.
- Branch D/D': the cost of screening of newborns with true negative results (normal hearing, correctly passed) is included under this branch.
The total costs of these four branches indicate the total cost of each device in NHS. Our expected effectiveness for each device was calculated by multiplying the number of newborns entering the model by the prevalence, and by device sensitivity. Model Inputs The main inputs of this model include the prevalence of hearing loss in Iran, device sensitivity and specificity, the cost of screening, and the cost of definite diagnosis of each newborn. Upon collection, these inputs were analyzed with TreeAge economic analysis software. The data related to device sensitivity and specificity were collected through a recent systematic review and meta-analysis. This study was based on the Cochrane Institute's standard method for diagnostic accuracy studies. Only one study had been conducted to analyze the sensitivity and specificity of the OAE, which was meta-analyzed in the systematic review by Heidari et al. (18). No study was found investigating the sensitivity and specificity of AABR devices. Given that sensitivity and specificity are among the technical specifications of the devices and they are not affected by geographical and local environmental factors, it seems that meta-analysis studies conducted in other countries can be generalized to similar studies in Iran. 
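As a rough illustration of the four-branch calculation described above, the following Python sketch (the naming and structure are ours, not the paper's TreeAge model) computes the expected direct cost and effectiveness of a single screening device from the prevalence, sensitivity, specificity, and unit costs:

```python
# Minimal sketch of the four-branch decision tree (branches A-D) for one
# screening device; all identifiers are illustrative, not from the paper.

def screening_outcomes(n_newborns, prevalence, sensitivity, specificity,
                       screen_cost, gold_standard_cost):
    """Expected counts, direct costs, and effectiveness for one device."""
    affected = n_newborns * prevalence
    healthy = n_newborns - affected

    tp = affected * sensitivity     # branch A: screening + definite diagnosis
    fn = affected - tp              # branch B: screening only (case missed)
    tn = healthy * specificity      # branch D: screening only
    fp = healthy - tn               # branch C: screening + definite diagnosis

    cost = ((tp + fp) * (screen_cost + gold_standard_cost)
            + (fn + tn) * screen_cost)
    return {"TP": tp, "FN": fn, "FP": fp, "TN": tn,
            "cost": cost,
            "effectiveness": tp}    # correctly detected newborns with loss
```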
Furthermore, to extract relevant data on the prevalence of hearing loss, we focused mainly on high-quality, up-to-date studies with large sample sizes conducted in Iran. Thus, we searched the most important domestic databases, such as Magiran, SID, and IranMedex, using the following keywords: 'hearing loss', 'newborn' and 'prevalence'. The economic analysis in this study was conducted from the perspective of the healthcare system on evaluating cost-effectiveness. In this study, the cost of the newborns' screening and the cost of definite diagnosis of newborns' hearing ability were calculated based on the sources of cost used in hearing screening and definite diagnosis, and not based on the costs in private clinics. To determine the costs, the sources of costs were identified first, and then the amount of each source was quantified and evaluated. Only the direct costs were considered to identify the sources. The unit cost was determined in two steps: in the first step, the unit cost of each of the devices was outlined for screening; and in the second step, the unit cost of the gold standard was outlined. In these two steps, cost findings include the costs of device purchase, repair and maintenance, annual depreciation, location, consumer products, required infrastructure, employees' salaries and wages, human resources training, overhead costs, taxes and other direct costs. Based on these costs and the variables presented in Table 1, the unit cost per newborn was estimated. Through contacting five audiology equipment manufacturers either by phone or in person, posing as a customer, we obtained information about each device's cost, lifespan, and salvage value across the country. The remaining sources of cost and the variables presented in Table 2 were designed in the form of a questionnaire. Fifteen experienced audiologists employed in centers offering active UNHS programs completed the questionnaire. Eventually, after collecting the questionnaires, the Delphi method was applied to analyze and summarize them. Since the costs were calculated based on the currency of Iran, the exchange rate of 36,350 Iranian Rial (IRR) was used to convert the costs into U.S. dollars. In this study, an attempt was made to examine the direct costs of human resources. These costs covered the salary and benefits of an audiologist and/or a technician or a secretary, and they did not require training. Moreover, location, overhead and infrastructure costs were not taken into account, because the devices are now portable and the screening test can be performed at the mother's bedside or in the newborn's special bed during the first 24 hours of life, before the mother is discharged from the hospital. Since there is no manufacturing company in Iran that recycles scrapped devices, zero was assigned to the salvage value of the devices. Finally, upon examining the probability of uncertainty concerning the inputs, particularly cost data and the prevalence rate of hearing loss, sensitivity analysis was conducted in view of the maximum and minimum values of these parameters (with the assumption of keeping the other parameters constant). Sensitivity and Specificity Based on the systematic review and meta-analysis conducted by Heidari et al. 
(18) on the sensitivity and specificity of AABR and OAE devices compared to the ABR device (as the gold standard), the pooled sensitivity and specificity of the AABR device were reported to be 0.93 and 0.97, respectively. These figures were 0.77 and 0.93 for the OAE device, respectively (Figure 1). The Prevalence of Hearing Loss and the Annual Birth Rates in Iran Based on the study conducted by Firoozbakht et al. (2), the prevalence of congenital hearing loss in Iran varies from two to eight in 1,000 live births, which has been estimated to be five in 1,000 live births on average. The annual birth rate in Iran has been estimated to be one million on average (19). In general, between 2,000 and 8,000 (a mean of 5,000) newborns are born with permanent congenital hearing loss in Iran annually. Costs Neonatal screening with the OAE device costs between $1.6 and $2.2. This figure is between $2.3 and $2.9 for the AABR device. On the other hand, the definite diagnosis of a newborn, using the ABR device, costs between $19.2 and $22. Mainly, the average cost per newborn screening, using the OAE and AABR devices, was estimated at $1.9 and $2.6, respectively, and it was estimated at $20.6 for the definite diagnosis. Cost-Effectiveness Analysis According to the decision tree and the data presented in Table 3, if hearing screening is performed in a one-million cohort population of newborns (considering the annual birth rate), using the OAE device, it will entail the following probable costs and outcomes: 1) 925,350 newborns with normal hearing will be detected correctly, with a cost of $1,758,165. 2) 69,650 healthy newborns will be falsely detected as having hearing loss. With respect to the cost of screening and the cost of the gold standard for 69,650 newborns, it will cost $1,567,125. 3) 3,850 newborns with hearing loss will be correctly detected; taking into account the cost of the gold standard for this number, it will cost $86,625. 4) 1,150 newborns with hearing loss will be falsely detected as healthy. In addition to a cost of $2,185, they will eventually enter a delayed stage of intervention, followed by its subsequent complications. If universal hearing screening is carried out with the AABR device in the same population, it will entail the following costs and outcomes: 1) 965,150 newborns with normal hearing will be detected correctly, with a cost of $2,509,390. 2) 29,850 healthy newborns will be falsely detected as having hearing loss. With regard to the cost of screening and the cost of the gold standard, it will cost $692,520. 3) 4,650 newborns with hearing loss will be correctly detected; taking into account the cost of the gold standard for this number, it will cost $107,880. 4) 350 newborns with hearing loss will be falsely detected as healthy. In addition to a cost of $910, they will eventually enter a delayed stage of intervention, followed by its subsequent complications. The universal NHS entails a cost of $3,310,700, and detects 4,650 newborns with hearing loss for a one-year period and a one-million population of newborns, using the AABR device. If the OAE device is used, the cost will rise to $3,414,100, and 3,850 newborns with hearing loss will be diagnosed. Collectively, the AABR device costs $103,400 less than the OAE device, and detects 800 more cases compared to the OAE device. 
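Feeding the sketch from the Rationale of the Model section the inputs reported here (prevalence 5 per 1,000; AABR: sensitivity 0.93, specificity 0.97, $2.6 per screen; OAE: sensitivity 0.77, specificity 0.93, $1.9 per screen; definite diagnosis $20.6) reproduces the stated totals:

```python
# Base-case run with the paper's reported inputs.
aabr = screening_outcomes(1_000_000, 0.005, 0.93, 0.97, 2.6, 20.6)
oae = screening_outcomes(1_000_000, 0.005, 0.77, 0.93, 1.9, 20.6)

print(f"AABR: ${aabr['cost']:,.0f} for {aabr['effectiveness']:,.0f} detected")
# AABR: $3,310,700 for 4,650 detected
print(f"OAE:  ${oae['cost']:,.0f} for {oae['effectiveness']:,.0f} detected")
# OAE:  $3,414,100 for 3,850 detected
print(f"AABR saves ${oae['cost'] - aabr['cost']:,.0f} "
      f"and detects {aabr['effectiveness'] - oae['effectiveness']:,.0f} more")
# AABR saves $103,400 and detects 800 more
```

Since AABR has both the lower expected cost and the higher effectiveness, it dominates OAE and no incremental cost-effectiveness ratio needs to be computed.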
Thus, according to the results, the AABR device imposes fewer costs and has greater effectiveness. Sensitivity Analysis Bearing in mind the minimum prevalence rate, the AABR device is $115,760 less costly than the OAE device, and detects 320 more affected newborns compared with the OAE device. If the maximum prevalence rate is taken into account in the model, the AABR, compared with the OAE device, costs $91,040 less and detects 1,280 more affected newborns. Upon considering the minimum and maximum costs related to the gold standard, the difference between the cost of the two devices is $48,800 and $158,000, respectively, in favor of the AABR device. Under similar circumstances, the AABR can detect 800 more newborns with hearing loss compared to the OAE device. Considering the minimum cost of screening with the OAE device or the maximum screening cost with the AABR device, the difference between the cost of the two devices in screening favors the OAE device, by $196,600. Nevertheless, it can detect 800 fewer newborns with hearing loss compared to the AABR device. As there was no vagueness surrounding the diagnostic accuracy of the results of the devices, this parameter did not undergo sensitivity analysis. Discussion Based on the findings, the unit cost of screening per newborn with the AABR was higher compared to the OAE device. Moreover, if NHS is performed among the live population of newborns over a year, the prevalence of hearing loss will decline in Iran. Therefore, in addition to the high diagnostic accuracy of AABR compared to OAE, and the fact that it entails less cost, the AABR device may prevent delayed interventions in 800 newborns and the subsequent complications that may ensue. The number of false positive results (i.e., the newborns who were healthy but falsely detected as cases) was far smaller in the AABR method than in the OAE method, imposing fewer costs (direct, indirect and intangible), and less stress and anxiety, on the newborns' families. In this study, effectiveness was defined as the percentage of newborns whose hearing status was correctly detected by each of the two devices. This effectiveness was grounded on the diagnostic accuracy of the devices. In other countries, three studies have been conducted on the economic analysis of these devices for screening. Although they have defined effectiveness by the number of referred cases, their conclusions are in line with those obtained in this research (20-22). According to Lin et al. (20), in addition to the lower direct medical and intangible costs of the AABR compared to the OAE device, the number of referred false positives was also significantly smaller. Vohr et al. (21) stated that although the unit cost of newborn screening is slightly higher in the AABR technique than in the OAE technique, its referred cases are fewer. Likewise, Lemons et al. (22) believe that the AABR is a good substitute for the OAE, as it entails fewer referred cases and lower total costs per screened newborn. Sensitivity analysis results showed that the minimum or maximum prevalence rate of hearing loss had no effect on displacing the dominant technology, and that the AABR device was associated with lower costs and greater effectiveness. The minimum and maximum costs of the gold standard also indicated that the AABR costs less and has greater effectiveness. 
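The one-way sensitivity analysis over the reported prevalence range for Iran (2 to 8 per 1,000 live births) can be replicated the same way, holding all other inputs fixed; the output below matches the figures stated in the text:

```python
# One-way sensitivity analysis over the reported prevalence range.
for prevalence in (0.002, 0.005, 0.008):
    a = screening_outcomes(1_000_000, prevalence, 0.93, 0.97, 2.6, 20.6)
    o = screening_outcomes(1_000_000, prevalence, 0.77, 0.93, 1.9, 20.6)
    print(f"prevalence {prevalence}: AABR saves ${o['cost'] - a['cost']:,.0f}, "
          f"detects {a['effectiveness'] - o['effectiveness']:,.0f} more")
# prevalence 0.002: AABR saves $115,760, detects 320 more
# prevalence 0.005: AABR saves $103,400, detects 800 more
# prevalence 0.008: AABR saves $91,040, detects 1,280 more
```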
Upon considering the minimum costs of the OAE or the maximum costs of the AABR, the screening procedure employing the AABR is associated with higher costs and effectiveness. Under such circumstances, determining the cost-effective device depends on the threshold that specifies how much the detection of a newborn with hearing loss before the age of three months is valued in a country. In this model, it was estimated that if the UNHS program were conducted with the AABR device for a year, the health system would incur a cost of approximately $3,310,700. However, the health system may have to incur far greater costs to efficiently cover this program, as many issues such as equity, access to health services, and the limitations of this study remain to be solved. Study Limitations This analysis was based on a one-million cohort population of annual births, which overlooked the loss to follow-up. Here, it was assumed that the newborns were screened only once during their first 24 hours of birth, and the clinical ABR device only confirmed the rejected cases (positive cases). However, in reality, newborns may be screened many times before a definite diagnosis is reached; and usually, such a diagnosis is achieved through multiple tests. In this study, the focus was chiefly on the direct costs, and the indirect service-related costs were not considered. Hence, the estimate presented in this study for the unit cost may not be representative of the real costs. Here, the numbers of correctly detected cases have been employed as criteria for effectiveness, and the outcomes following the definite diagnosis and undetected cases have not been investigated. Such outcomes may include the final effect of hearing loss on language and speech development, communication skills, emotional development and academic advances. The outcomes of the two groups should have been outlined by considering a wider time horizon, and then a more realistic CEA of the hearing screening should have been undertaken. In this model, we presumed that an audiologist with the necessary skills performed the screening. Thus, we managed common errors that might have occurred due to the operator's insufficient skills. In reality, insufficiently skilled operators may perform the screening, as there may be a lack of skilled audiologists. Hence, the error created by the operator can affect the screening results. We recommend conducting further studies in which costs are considered from the public's viewpoint, with a wider time horizon, and employing quality of life as a measure of effectiveness. Furthermore, we recommend designing a model for neonatal screening that reflects the operator's error, loss to follow-up, and the other aforementioned limitations. Conclusions The AABR device is a non-invasive, rapid, safe and simple technology that can be employed in UNHS programs. In case of a shortage of skilled and expert workforce, its use can be easily taught to other personnel. The high sensitivity and specificity of this device, compared to that of the OAE device, not only reduces the number of falsely referred cases, but also detects a greater percentage of newborns with hearing loss. Eventually, better clinical effectiveness may be achieved. Furthermore, considering the annual birth rate, the prevalence rate of hearing loss, and the high diagnostic accuracy of this device in the long run, it can be stated that this device imposes lower costs than the OAE device. 
In conclusion, if the required infrastructure is provided for UNHS programs, the aforementioned technology can be used as a cost-effective tool in such programs.
2018-12-25T14:14:23.574Z
2016-11-02T00:00:00.000
{ "year": 2016, "sha1": "a1e0d69b8bb9c4f62cf5a8522d90214c43d3cd14", "oa_license": "CCBYNC", "oa_url": "http://ijp.neoscriber.org/cdn/dl/361fd27a-42c3-11e7-9917-bbfff8b05fd8", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "3fda486b78ebe9fe8a88266defa14c09fdcdd7fa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
210987640
pes2o/s2orc
v3-fos-license
Promotional Effect of Cu, Fe and Pt on the Performance of Ni/Al2O3 in the Deoxygenation of Used Cooking Oil to Fuel-Like Hydrocarbons: Inexpensive Ni-based catalysts can afford comparable performance to costly precious metal formulations in the conversion of fat, oil, or greases (FOG) to fuel-like hydrocarbons via decarboxylation/decarbonylation (deCOx). While the addition of certain metals has been observed to promote Ni-based deCOx catalysts, the steady-state performance of bimetallic formulations must be ascertained using industrially relevant feeds and reaction conditions in order to make meaningful comparisons. In the present work, used cooking oil (UCO) was upgraded to renewable diesel via deCOx over Ni/Al2O3 promoted with Cu, Fe, or Pt in a fixed-bed reactor at 375 °C using a weight hourly space velocity (WHSV) of 1 h−1. Although all catalysts fully deoxygenated the feed to hydrocarbons throughout the entire 76 h duration of these experiments, the cracking activity (and the evolution thereof) was distinct for each formulation. Indeed, that of the Ni-Cu catalyst was low and relatively stable, that of the Ni-Fe formulation was initially high but progressively dropped to become negligible, and that of the Ni-Pt catalyst started as moderate, varied considerably, and finished high. Analysis of the spent catalysts suggests that the evolution of the cracking activity can be mainly ascribed to changes in the composition of the metal particles. In the regenerated Ni-Pt formulation, the dearth of signals can be explained by residual coke blocking the sites responsible for low-temperature CO adsorption. Introduction Interest in renewable energy sources has increased considerably, mainly due to concerns related to the climate change caused by the atmospheric accumulation of greenhouse gases resulting from fossil fuel use [1]. A promising alternative to the fossil fuels used in the transportation sector is the production of biofuels (e.g., biodiesel, green diesel, and biokerosene) from renewable feedstocks, including vegetable oils and animal fats [2,3]. Moreover, to improve the economics of these biofuels and avoid disrupting the food supply, attention has shifted to low-cost inedible feedstocks, including used cooking oil (UCO), which is also known as yellow grease (YG) [4-6]. This particular waste stream is both abundant and inexpensive (~$463/ton), with ca. one million tons being produced annually in the U.S. alone [7]. Scheme 1. Deoxygenation routes for tristearin as a model compound representing triglycerides (blue shading) and concomitant reactions confounding oxygen-bearing deoxygenation products (red shading). In recent years, the production of fuel-like hydrocarbons via deCOx has been intensively investigated as a way to avoid the large amounts and pressures of hydrogen, as well as the problematic sulfide catalysts required by HDO, since the hydrogen requirements of deCOx are lower and these reactions proceed over simply supported metal catalysts [14]. Although the majority of deCOx studies have focused on Pd and Pt catalysts, the high price of these metals has spurred the search for alternatives. Saliently, inexpensive Ni-based catalysts can provide comparable results to Pd and Pt formulations in the deCOx of FOG to hydrocarbons [15,16]. 
Since the high activity of Ni in C-C hydrogenolysis can decrease the carbon yield and the hydrogen efficiency of deCOx processes, the incorporation of a second metal has been investigated as a means to modify the electronic and geometric properties of Ni to ultimately improve its activity and selectivity [17]. Indeed, Ni/Al2O3 promoted with Cu or Pt can afford near quantitative diesel yields in the conversion of both model and realistic lipid feeds to fuel-like hydrocarbons [13,18]. The promotion effect displayed by these bimetallic catalysts is in large part attributed to the ability of Cu and Pt to facilitate NiO reduction at relatively low temperatures, since metallic Ni sites constitute the active site for the deCOx reaction. Moreover, Pt addition also curbs the adsorption of CO on the catalyst surface, helping to avoid catalyst inhibition by any CO evolved via decarbonylation and the catalyst coking resulting from the disproportionation of CO via the Boudouard reaction [13]. Supported Ni catalysts promoted with Fe have also afforded promising results in the conversion of model and realistic lipid feeds to hydrocarbons [17], the promoting effect of Fe being attributed to the synergy between nickel sites possessing the ability to activate hydrogen and iron sites with strong oxophilicity [17,19]. Indeed, since Fe has a higher oxygen affinity than Ni, oxygen vacancies within iron oxide species can facilitate the adsorption and subsequent activation of oxygenates. Specifically, H2 activated through its facile dissociative adsorption on Ni sites can spill over to neighboring Fe sites onto which the oxygen atoms of C=O groups are adsorbed, subsequent hydrogenation leading to deoxygenation products. 
In addition, the formation of Ni-Fe alloys with Fe-rich surfaces disrupts the adjacency of Ni atoms, a geometric effect known to suppress C-C hydrogenolysis, which requires Ni ensembles. Cu is also known to decrease the C-C hydrogenolysis activity of Ni through the same geometric effect [20]. In order to develop practicable catalytic deCOx technology for the conversion of FOG to fuel-like hydrocarbons, it is necessary to study the most promising catalysts using industrially relevant feeds and reaction conditions. In addition, in order to make meaningful comparisons between the performance of different catalysts, measurements must be made at a steady state. Against this backdrop, the present work investigated the conversion of UCO to renewable diesel via deCOx over supported Ni catalysts promoted with Cu, Fe or Pt. The performance of these formulations was tested in a fixed bed reactor using industrially relevant reaction conditions for 76 h of time on stream (TOS), as previous work has shown that catalysts of this type require >48 h of TOS to attain a steady state [11]. In addition, the analysis of the fresh, spent, and regenerated catalysts was undertaken in an effort to understand the distinct performance displayed by these formulations. Catalytic Deoxygenation of UCO Over Ni/Al2O3 Promoted with Cu, Fe, and Pt The composition of the UCO employed in this work is shown in Table A1 within Appendix A. The feed is mostly triolein (~95%) with a small amount (~5%) of oleic acid. This feed was upgraded in a fixed bed reactor using a WHSV of 1 h−1 and a reaction temperature of 375 °C (see Section 3.3) in order to investigate and compare the relative effect of Cu, Fe and Pt promotion on the performance of Ni/Al2O3 in the conversion of UCO to diesel-like hydrocarbons. The results of the gas chromatography-mass spectrometry (GC-MS) analysis of the liquid products collected at representative times on stream are summarized in Figure 1, and are presented in more detail in Appendix A (Tables A2-A4), while the gaseous products are shown in Figure 2. In addition, a blank (sans catalyst) run was performed using an identical set of conditions in order to assess the extent of thermal (as opposed to catalytic) contributions to UCO conversion and diesel yield. The GC-MS analysis of the liquid products obtained in this blank run (see Table A5 in Appendix A) revealed the vast majority (>79%) of the products to be fatty acids and monolein stemming from the thermal conversion of triolein. In addition, the amount of hydrocarbons obtained was <21%, and olefins represented the vast majority of hydrocarbon products irrespective of TOS, which is unsurprising in the absence of a hydrogenation catalyst. Thus, it can be concluded that under the experimental conditions employed, thermal contributions to the conversion of UCO to diesel-like hydrocarbons are relatively minor. Remarkably, complete deoxygenation occurs over all catalysts tested (see Figure 1 and Tables A2-A4), the concentration of diesel-like (C10-C20) hydrocarbons in the reaction products being >82% irrespective of both catalyst and TOS, heavier (C21-C35) hydrocarbons comprising the remainder of the product mixtures. Whereas heavier hydrocarbons stem from the deoxygenation of long-chain ester intermediates [21], diesel-like hydrocarbons are the result of the deoxygenation of the triglycerides and the fatty acids constituting the UCO feed (as well as of the cracking of heavier hydrocarbons) [11,18]. 
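As a toy illustration of how such a liquid-product distribution is binned into the diesel-like and heavy fractions discussed above (the composition values below are invented, not the GC-MS data of Tables A2-A4):

```python
# Illustrative only: binning a product composition by carbon number into
# the diesel-like (C10-C20) and heavy (C21-C35) fractions, plus the
# C10-C14 fraction used later in the text as a proxy for cracking activity.

composition = {10: 1.2, 11: 0.8, 14: 1.5, 15: 2.1, 16: 3.0,
               17: 55.4, 18: 20.5, 20: 1.5, 22: 8.0, 24: 6.0}  # carbon no.: %

diesel_range = sum(v for c, v in composition.items() if 10 <= c <= 20)
heavies = sum(v for c, v in composition.items() if 21 <= c <= 35)
cracking_proxy = sum(v for c, v in composition.items() if 10 <= c <= 14)

print(f"diesel-like (C10-C20): {diesel_range:.1f}%")     # 86.0%
print(f"heavy (C21-C35): {heavies:.1f}%")                # 14.0%
print(f"light diesel (C10-C14): {cracking_proxy:.1f}%")  # 3.5%
```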
It is also worth noting that the Ni-Pt catalyst afforded liquid product mixtures comprised solely of diesel-like hydrocarbons after 24 h on stream. In contrast, the Ni-Cu and Ni-Fe catalysts showed lower diesel amounts along with a higher yield of heavier (C21-C35) hydrocarbons (particularly at ≥48 h on stream) in their liquid products, which suggests that Ni-Pt disfavors the production of long-chain ester intermediates and/or favors cracking reactions. A closer look at the individual components of the liquid products in the diesel range, namely, C10, C11, C14, C15, and C16 (C12 and C13 could not be determined due to the interference of the reaction solvent) provides valuable insights vis-à-vis the changes in selectivity that take place as the reaction progresses over each catalyst (see Tables A2-A4). While the vast majority of the feed comprised triolein and oleic acid, the most abundant product obtained over all the catalysts is heptadecane (C17), suggesting that the reaction proceeds mainly via deCOx as opposed to HDO, which would afford octadecane (C18). Nevertheless, C18 is also produced in significant amounts, which indicates that HDO also occurs. Whereas the amount of C17 and C18 produced over the Ni-Cu and Ni-Fe catalysts remains relatively stable throughout, the corresponding values drop considerably beyond 28 h on stream over the Ni-Pt formulation. Tellingly, the amount of lighter (C10-C14) diesel-like hydrocarbons, which is the result of (and, thus, a proxy for) cracking activity, remains low and fairly stable, drops significantly, and increases considerably with TOS over the Ni-Cu, Ni-Fe and Ni-Pt catalysts, respectively. Differences in the evolution of the incondensable gas products observed over each catalyst (see Figure 2) provide insights that are both consistent and complementary to those drawn from the composition of the liquid product mixtures. Briefly, whereas H2 represents the reaction atmosphere, CO and CO2 are produced from glycerides and fatty acids via decarbonylation and decarboxylation, respectively. 
Butane, propane, and ethane are produced through the internal chain cracking of glycerides, fatty acids, and long-chain hydrocarbons, while propane can also stem from the triglyceride backbone, and its cracking can afford additional ethane and methane. Lastly, methane is also produced from the methanation of COx as well as from the cracking of glycerides, fatty acids, and long-chain hydrocarbons via terminal carbon loss, the main chain shortening mechanism according to a previous report [22]. With this in mind, the first thing worth noting is the small amount of COx detected. Indeed, the amount of COx is practically negligible in the gaseous products evolved over both the Fe- and Pt-promoted catalysts (see Figure 2b,c), indicating that the entirety of these gases is converted to methane and/or remains adsorbed on the surface of these formulations. In contrast, a small amount of CO2 is detected in the gaseous products evolved over the Cu-promoted catalyst (see Figure 2a), particularly after a brief induction period observed in the first hours of the experiment. Parenthetically, this induction period has been observed in previous work and attributed to the accumulation of CO2 on the catalyst surface as alumina-bound carbonates [22]. While CO2 eventually breaks through and is detected in the gaseous products, CO remains undetected, likely indicating its full conversion to methane, or to CO2 and coke via the Boudouard reaction, or its strong and irreversible adsorption on the surface of the Ni-Cu catalyst (see Section 2.3). The amount of ethane, propane, and butane in the gaseous products is also telling. Over both the Cu- and Fe-promoted catalysts, the amount of these gases detected at the beginning of the reaction is practically negligible, gradually increasing over the first 24 h on stream (see Figure 2a,b). Beyond this point, the amount of these gases evolved over the Cu-promoted catalyst remains relatively stable for the remainder of the experiment, whereas it becomes negligible once again towards the end of the run over the Fe-promoted formulation. In contrast, the amount of C2-C4 gaseous products evolved over the Pt-promoted catalyst is both higher and constant throughout the entirety of the run (see Figure 2c). Nevertheless, the amount of ethane, propane, and butane is always smaller than that of methane irrespective of both catalyst and TOS, the amount of methane detected being particularly informative. Whereas the quantity of methane evolved over the Ni-Cu catalyst is both small (<0.6%) and stable (see Figure 2a), the latter only applies to the end of the experiment involving the Ni-Fe formulation (see Figure 2b). At the beginning of the run performed over the Fe-promoted catalyst, the amount of methane in the gaseous products varies from 71% at t = 0 h to 64% at t = 3 h (results not shown), before dropping precipitously to 4% at t = 4 h and then more gradually to reach stability around 0.4% at t = 54 h. In stark contrast, the amount of methane in the gaseous products evolved over the Ni-Pt catalyst varies widely and can be as high as ~75% at t = 24, 48, and 72 h (see Figure 2c). All of these trends indicate that while the Ni-Cu and Ni-Fe catalysts retain their deoxygenation activity within the time period investigated, cracking activity either remains constant or declines with TOS, consistent with results reported in other studies [11,17,23]. 
Although the Ni-Pt catalyst also retains its deoxygenation activity throughout the entire experiment, cracking reactions remain prevalent during the entirety of the run. In short, a comparison of the results obtained with 20% Ni-5% Cu/Al2O3, 20% Ni-5% Fe/Al2O3, and 20% Ni-0.5% Pt/Al2O3 catalysts suggests that Cu- and Fe-promoted catalysts are preferable to Ni-Pt formulations. Indeed, the latter is rendered disadvantageous by its higher price and cracking activity, which would reduce the cost and carbon efficiency of a process designed to convert UCO to diesel-like hydrocarbons. Characterization of Fresh and Spent Catalysts The textural properties of the catalysts used in this study are compiled in Table 1. The surface area, pore volume, and pore size of all catalysts fall in very narrow ranges, which is consistent with their total metal loadings and particle sizes (vide infra) and the fact that all catalysts were prepared using the same alumina support. These results indicate that the effects of differences in these properties on catalyst performance should be minimal. Figure 3 includes the X-ray diffractograms of the catalysts employed in this study. Since diffractograms were acquired using the fresh catalysts in their oxidized form (catalysts were subjected to XRD after the calcination in air constituting the final step of their preparation; see Section 3.1), the fact that all Ni detected is present as NiO is unsurprising. Indeed, the three diffractograms display several peaks (at 37.2°, 43.3°, 62.9°, 75.4°, and 79.4°) assigned to NiO [24]. The fact that diffraction peaks attributed to Fe3O4 and to Ni-Fe alloy phases [17,19,25,26] are absent from the diffractogram corresponding to 20% Ni-5% Fe/Al2O3 (Figure 3a) is unsurprising, since these phases would only be expected in reduced (as opposed to oxidized) catalysts. Peaks associated with Fe2O3 are also absent from this diffractogram, indicating that Fe is highly dispersed. As previously reported [18], the fact that peaks at 35.5° and 38.7° corresponding to a CuO phase [27] are not observed in the diffractogram of 20% Ni-5% Cu/Al2O3 (Figure 3b) can be similarly attributed to the high dispersion of the Cu phase [24,28]. Likewise, no distinct Pt-related features (peaks or peak shifts) can be observed in the diffractogram corresponding to 20% Ni-0.5% Pt/Al2O3 (Figure 3c). 
The temperature-programmed reduction (TPR) profiles shown in Figure 4 clearly illustrate that the three catalysts employed in this study display very different reduction behavior. As discussed in a previous report [18], the TPR profile for 20% Ni-5% Cu/Al2O3 shows four distinct reduction events: (1) a sharp peak at 180 °C attributed to the reduction of copper oxide [24,29]; (2) a broader but well-defined peak with a maximum at 360 °C assigned to the reduction of a NiO-CuO phase [30]; (3) a shoulder with a local maximum at 460 °C signaling the reduction of NiO [31]; and (4) a weak and broad signal around 690 °C indicating the reduction of nickel aluminate (NiAl2O4) [32]. The TPR profile for 20% Ni-5% Fe/Al2O3 also shows four (but less distinct) reduction events, namely: (1) a small signal with a maximum at 235 °C corresponding to large (10-50 nm) NiO ensembles (vide infra); (2,3) a very large and broad peak ranging from 260 to 675 °C with a maximum at 350 °C, commingling the reduction of nickel and iron oxides (leading to the formation of a Ni-Fe alloy) [17,19]; and (4) a high-temperature tail of the latter peak, assigned to NiAl2O4 reduction. 
Lastly, as discussed in a recent report [13], the TPR profile for 20% Ni-0.5% Pt/Al2O3 also displays several reduction events, including (1) a small and broad peak between 300 and 350 °C attributed both to the reduction of surface Pt and of large NiO particles in close proximity to Pt [13,33]; (2) an intense and well-defined signal with a maximum at 460 °C assigned to the Pt-assisted reduction of smaller NiO particles [13]; and (3) a broad peak above 500 °C with a high-temperature (>700 °C) shoulder attributed to the reduction of NiO and NiAl2O4, respectively. Given that Ni-based formulations used in the deoxygenation of FOG to fuel-like hydrocarbons are known to be particularly susceptible to coking [1], the spent catalysts were subjected to thermogravimetric analysis (TGA) in air, the resulting profiles being shown in Figure 5. The TGA profiles indicate that the total mass loss displayed by the spent catalysts follows the trend Ni-Fe (3.0%) < Ni-Cu (7.6%) < Ni-Pt (10.8%). In addition, the temperature at which mass loss takes place is also noteworthy, since mass loss events <400 °C can be attributed to strongly adsorbed reactants, intermediates and products (or soft coke) and mass loss events >400 °C can be assigned to more recalcitrant carbonaceous deposits (graphitic or hard coke). Tellingly, albeit the majority of the weight loss displayed by all spent catalysts takes place below 400 °C, the Ni-Pt formulation also shows distinct and significant weight loss above this temperature. The increased coking observed on the Ni-Pt catalyst is consistent with its considerably higher cracking activity, which is evinced by the copious amounts of methane produced by this formulation (see Section 2.1). 
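A rough sketch of the soft/hard coke split described above, totaling the TGA mass loss below and above 400 °C; the curve here is synthetic, not digitized from Figure 5:

```python
# Synthetic TGA curve (not the measured data) used only to illustrate
# splitting the mass loss at the 400 °C soft/hard coke boundary.

import numpy as np

temperature = np.linspace(25, 800, 400)                   # °C
mass = 100 - 9 * (1 - np.exp(-(temperature / 380) ** 4))  # wt.%, synthetic

soft = mass[temperature <= 400][0] - mass[temperature <= 400][-1]
hard = mass[temperature > 400][0] - mass[temperature > 400][-1]

print(f"soft coke (<400 °C): {soft:.1f} wt.% lost")
print(f"hard coke (>400 °C): {hard:.1f} wt.% lost")
print(f"total mass loss: {soft + hard:.1f} wt.%")
```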
In turn, the fact that both the Ni-Cu and the Ni-Fe catalysts display lower amounts of carbonaceous deposits is in agreement with the known ability of both Cu and Fe to curb the hydrogenolysis activity of Ni via geometric effects (see Section 1). Table 2 shows the surface concentration (in at.%) of the elements detected via X-ray photoelectron spectroscopy (XPS) in the catalysts after (i) 76 h of TOS, followed by washing with dodecane and drying (spent); (ii) their subsequent calcination for 5 h at 450 °C under air (calcined); and (iii) their subsequent reduction for 3 h at 400 °C under H2 (re-reduced). All XPS spectra can be found in Appendix B (Figures A1-A8). Carbon in the samples can be divided into inorganic (carbide) and organic (coke) carbon. The inorganic carbon is mostly associated with the SiC used as a diluent in the upgrading experiments (see Section 3.3) and is indicative of the relative amount of sample components (catalyst or diluent) analyzed during XPS measurements. The amount of inorganic carbon can considerably impact the interpretation of the data in Table 2. This is also the case for the amount of organic carbon (calculated by subtracting the inorganic from the total carbon), which is associated with coke deposits and whose Ni-Pt > Ni-Cu > Ni-Fe trend in the spent catalysts is in agreement with the results of TGA (vide supra). The fact that both spent Ni-Fe and Ni-Pt catalysts display a very similar surface concentration of Si and inorganic carbon indicates that a similar fraction of SiC diluent and catalyst is being analyzed, which in turn confirms that organic carbon (coke) deposits are much more abundant on the Pt- than on the Fe-promoted formulation. Similarly, while both the Si and inorganic carbon concentrations of the spent Ni-Cu sample are lower, suggesting that a lower amount of C is contributed by SiC, the fact that the concentration of CTot (and, thus, of COrg) in this formulation is higher than on Ni-Fe indicates that coke deposits on the surface of Ni-Cu are intermediate to those on Ni-Pt and Ni-Fe, which is also consistent with the TGA data. Notably, the amount of Ni in general, and Ni0 in particular, is considerably higher on the surface of the spent Ni-Fe catalyst than on the other two formulations, although the observed trend (Ni-Fe >> Ni-Cu > Ni-Pt) likely stems, at least in part, from the relative amount of coke deposits on the surface of these catalysts. Considering the promoter metals, it is worth noting that a much higher amount of Fe in the Ni-Fe catalyst (relative to Cu in the Ni-Cu formulation) is detected at the surface. While this cannot be attributed to the amount of SiC artificially depressing the amount of Cu detected (since the Si and inorganic carbon concentrations are higher in the Ni-Fe catalyst) or to the slightly lower atomic weight (Ar) of Fe (the Ar difference is too small), the higher amount of coke deposits on the surface of the Ni-Cu catalyst could partially explain the relatively low Cu concentration, as in the case of Ni. Lastly, while the entirety of Cu is present as Cu+, which is indicative of Cu2O since copper does not form a carbide phase, a small amount (ca. 13%) of Fe is in the metallic state, the remainder being present in oxidic form (see Figures A7 and A8 in Appendix B). Upon calcination, the amount of CTot is significantly reduced, mainly due to the combustion of coke deposits.
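The organic-carbon bookkeeping just described is a simple subtraction; a minimal sketch with hypothetical surface concentrations (the actual at.% values live in Table 2, which is not reproduced in the text):

```python
# Organic (coke) carbon = total carbon - inorganic (carbide) carbon.
# The at.% values below are placeholders, not the Table 2 data.
surface_carbon_at_pct = {
    #              (C_total, C_inorganic)
    "Ni-Pt spent": (30.0, 8.0),
    "Ni-Cu spent": (22.0, 5.0),
    "Ni-Fe spent": (14.0, 8.0),
}

for sample, (c_tot, c_inorg) in surface_carbon_at_pct.items():
    c_org = c_tot - c_inorg  # coke-derived carbon
    print(f"{sample}: C_org = {c_org:.1f} at.%")
```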
Consistent with the more graphitic (and thus recalcitrant) nature of the coke on this formulation as indicated by the TGA results (see Figure 5), the amount of residual coke after calcination is highest for the Ni-Pt catalyst. Although the amount of Ni in the calcined catalysts is lowest for 20% Ni-5% Cu/Al2O3, the amount of Ni0 follows the trend Ni-Cu > Ni-Pt > Ni-Fe, which is indicative of the resistance of surface Ni in each catalyst to oxidation. Regarding the metallic promoters within the calcined catalysts, which increase in concentration due to the removal of coke, Cu is present as a mixture of Cu2O and CuO, and the vast majority of Fe is also present in oxidized form (only ~3% being present as Fe0). Changes observed upon the reduction of the calcined catalysts are also informative. The similar amounts of Si, Ni, and Ni0 on the re-reduced Ni-Cu and Ni-Pt formulations, and the changes in these values relative to those displayed by their calcined counterparts, indicate that (i) the regions of the samples analyzed are catalyst-rich and SiC-poor; and (ii) sintering takes place during reduction, based on the loss of surface Ni relative to the calcined catalysts. Changes in the Ni/Al ratio (from 0.18 to 0.12 and from 0.65 to 0.27 for calcined to re-reduced Ni-Cu and Ni-Pt, respectively) also suggest that sintering takes place during reduction. Moreover, the amount of CTot and CInorg on the Ni-Pt catalyst is striking and suggests the formation of a considerable amount of metallic carbide(s). Unfortunately, the presence of the latter could not be conclusively confirmed, since Pt could not be observed (due to the small amount of Pt and the overlap of the Pt 4f and Al 2p XPS regions) and a distinct nickel carbide signal is not resolved. It is also noteworthy that the surface concentration of Ni (and to a lesser degree that of Ni0) on the re-reduced Ni-Fe catalyst is significantly higher than that of its Ni-Cu and Ni-Pt counterparts, particularly taking into account the considerably larger amount of SiC being analyzed alongside Ni-Fe. Finally, it is interesting to note that the surface concentration of Cu is lower on the re-reduced Ni-Cu catalyst than on its spent and calcined counterparts (particularly taking into account the lower amount of SiC analyzed alongside the re-reduced material and the higher amount of coke on the spent formulation), which may indicate the alloying of Cu with Ni. Regarding the oxidation state of the promoter metals after reduction, while the entirety of Cu is present in the metallic form, only ~9% of iron is present as Fe0, the remainder being present as Fe2+ or Fe3+. Since it has been reported that Fe2O3 undergoes reduction to Fe3O4 in the ~300-400 °C range, while Fe3O4 is reduced between ~400 and 500 °C [34], it is unsurprising that after reduction at 400 °C most of the Fe detected by means of XPS is oxidic. In short, the XPS results indicate that the trend in the amount of organic (coke) deposits on the surface of the spent catalysts (Ni-Pt >> Ni-Cu > Ni-Fe) explains the relative amounts of Ni and Ni0 (as well as of the promoter metals) on the surface of the spent formulations (Ni-Fe >> Ni-Cu > Ni-Pt). Moreover, the trends in the amount of Ni0 on the surface of the calcined and re-reduced catalysts (Ni-Cu > Ni-Pt > Ni-Fe and Ni-Fe > Ni-Cu ≈ Ni-Pt, respectively) indicate that Ni displays distinct redox behavior within each formulation.
Indeed, Ni on the surface of 20% Ni-5% Fe/Al2O3 is easier to oxidize and reduce than Ni on the surface of 20% Ni-5% Cu/Al2O3 or 20% Ni-0.5% Pt/Al2O3. Finally, the XPS results evince that while the Ni-Cu and Ni-Pt catalysts experience metal particle sintering during re-reduction, Ni-Cu and Ni-Fe alloys form within the Cu- and Fe-promoted catalysts, and metallic carbides may form within the Pt-promoted formulation. The analysis of the fresh and spent catalysts via transmission electron microscopy-energy dispersive X-ray spectroscopy (TEM-EDS) also afforded significant insights. In the case of the Cu-promoted catalyst, TEM results indicate that the particle size distribution (which is narrow and centered around 4 nm particles in the fresh catalyst) is both broader and centered around larger particles in the spent formulation (see Figure 6a), signaling particle sintering. The TEM-EDS results in Figure 6b reveal that the metal particles in the fresh catalyst display a composition that is close to that of the bulk formulation (80% Ni-20% Cu considering only the metallic phase), although Ni-rich particles containing 85-95% Ni are also observed. Notably, the spent catalyst comprises particles slightly more enriched in Cu relative to those in the fresh formulation, all particles in the spent catalyst containing between 65 and 80% Ni. This indicates that particles not only grow in size but also become Cu-rich during the reaction, which is in line with previously reported results [11]. This conclusion is clearly illustrated by the TEM micrographs and the TEM-EDS elemental maps included in Figure A9, the elemental maps also showing that Ni and Cu are present in close association on both the fresh and spent catalysts. The Cu map of the spent Ni-Cu catalyst shown in Figure A9 also provides an example of the Cu-hollow spaces not observed in the fresh formulation, as Ni-Cu particles likely undergo Cu-hollowing through a mechanism based on the Kirkendall effect [35]. These observations are consistent with the widely reported bulk and surface enrichment of Ni-Cu nanoparticles with Cu [11,36,37]. In the case of the Fe-promoted catalyst, TEM results indicate that a similar particle size change takes place during the reaction. Indeed, the particle size distribution (which is narrow in the fresh catalyst, as the vast majority of particles range from 3 to 7 nm) is both broader and shifted to larger (8-30 nm) particles in the spent formulation (see Figure 7a). However, in contrast with the Cu-promoted catalyst, the composition of the metal particles does not change much during the reaction, the only change observed being the disappearance of Ni-rich (90-100% Ni) particles according to the TEM-EDS results in Figure 7b. These observations are illustrated by the TEM micrographs and the TEM-EDS elemental maps in Figure A10, which show the degree of association between Ni and Fe in both the fresh and the spent catalyst. The Fe maps included in Figure A10 also evince Fe-hollow spaces on both the fresh and spent formulations, although the latter displays more of these spaces. Analogous to the case of the Ni-Cu catalyst, Fe-hollowing may occur through a mechanism based on the Kirkendall effect, which has also been reported for Ni-Fe bimetallic formulations [38,39].
In the case of the Pt-promoted catalyst, TEM results indicate a particle size change similar to those experienced during the reaction by the other formulations. Specifically, whereas the fresh catalyst shows a fairly narrow particle size distribution, with the vast majority of particles falling within the 5-10 nm range, the spent formulation shows a much broader distribution, with particles as small as 3 nm and as large as 28 nm (see Figure 8a). Nevertheless, changes in the composition of the metal particles within the Pt-promoted catalyst are noteworthy (see Figure 8b). Indeed, the fresh catalyst shows a significant amount of Pt-rich particles relative to the bulk formulation (97.6 wt.% Ni-2.4 wt.% Pt, or 99.3 at.% Ni-0.7 at.% Pt), whereas the vast majority of metal particles in the spent catalyst show a composition very close to that of the bulk. The TEM micrographs and the TEM-EDS elemental maps in Figure A11 support these conclusions, showing both the increase in metal particle size and the closer association of Ni and Pt in the spent catalyst than in the fresh state.
Structural and Activity Changes Observed during Catalyst Aging and Regeneration

As mentioned in the preceding section, the similarity between the textural properties (surface area, pore volume, and pore diameter) of all fresh catalysts suggests that the effect of these properties on catalyst performance should be minimal, at least at the onset. However, any variations in these and other properties with TOS may influence catalyst performance. Indeed, based on the TGA results, the loss of surface area and porosity attributable to coking and fouling should follow the trend Ni-Fe < Ni-Cu < Ni-Pt, the latter catalyst showing a higher amount of more recalcitrant (graphitic) carbonaceous deposits. Notably, complete deoxygenation of the feed was maintained throughout the experiment for each of the catalysts, which suggests that losses in surface area and porosity due to coking, fouling, and sintering are not sufficient to noticeably impact the deoxygenation activity of these formulations in the time period investigated. Thus, differences in the cracking activity of the catalysts offer better insights vis-à-vis structure-activity relationships. Looking at both the composition of the liquid and gaseous products, the cracking activity of the Cu-promoted catalyst, although always relatively low, is highest between 8 and 30 h on stream and progressively drops between 30 and 72 h on stream, at which point it becomes stable. Somewhat similarly, the cracking activity of the Fe-promoted formulation is considerably higher in the first 8 h of the experiment, becomes moderate between 24 and 52 h on stream, and is both negligible and stable beyond t = 72 h.
In contrast, the cracking activity of the Pt-promoted formulation is higher at the end of the run than at its onset. Therefore, at least in the case of the Ni-Pt catalyst, it appears that neither the loss of surface area due to coking, fouling, and sintering, nor changes in the bulk or surface composition of the metal particles, reduce the cracking activity of this formulation in the time period investigated. The opposite is the case for the Cu- and Fe-promoted catalysts, which display a lower cracking activity towards the end of the experiment, this effect being more pronounced for the Ni-Fe formulation. TPR measurements performed on the spent catalysts after calcination (see Figure A12) provide valuable insights into the structural changes that occur during regeneration. The first thing worth noting is that all peaks (except for those assigned to NiAl2O4) are shifted to lower temperatures, which can be partially attributed to the formation of larger particles that are easier to reduce. However, the more substantial shifts (>100 °C) can only be fully explained by invoking a considerable increase in the association of Ni with the promoter metal. For the Cu-promoted catalyst, changes in the relative intensity of the peaks attributed to the reduction of copper oxide, NiO-CuO, and NiO (with respect to those in the fresh formulation) point to a reduction in the amount of unalloyed Cu and the formation of Ni-Cu bimetallic particles with a Cu-rich surface. Indeed, the peaks with maxima at 245 and 375 °C likely correspond to the reduction of NiO-CuO at the surface and of NiO at the core of these particles, respectively. For the Fe-promoted formulation, the narrowness of the main peak relative to that in the fresh formulation indicates a closer association between the metals forming a Ni-Fe alloy, which is consistent with the disappearance of Ni-rich particles observed via TEM-EDS (see Figure 7b). In addition, the intensity of this peak relative to that of the peaks shown by the other catalysts is in line with the higher amount of surface Ni in the Fe-promoted formulation measured via XPS (vide supra). Lastly, for the Pt-promoted catalyst, the sizable shift (>200 °C) that the main peak displays between the fresh and the regenerated formulation is indicative of a major increase in the association between Ni and Pt. The results of the XPS measurements performed on the calcined and re-reduced spent catalysts confirm these conclusions and offer additional insights. While most (if not all) of the coke is removed from the Cu- and Fe-promoted spent catalysts, the regenerated Ni-Pt formulation displays both a significant amount of residual coke and a possible metallic carbide phase. In addition, metal particle sintering takes place during the regeneration of the spent Ni-Cu and Ni-Pt catalysts, likely explaining the lower amount of surface Ni and Ni0 these formulations display relative to their Fe-promoted counterpart. Finally, the regenerated (i.e., calcined in air at 450 °C for 5 h and re-reduced under H2 at 400 °C for 3 h) spent catalysts were subjected to in situ diffuse reflectance infrared Fourier transform spectroscopy after CO adsorption (CO-DRIFTS) to gain additional information on their structure post-regeneration (see Figure 9). Since the adsorption of CO on metals is highly temperature-dependent, adsorption was carried out at 25 °C in order to focus on the most active sites for CO adsorption and limit the confounding effects that the presence of multiple metals can have on M-CO spectra [40].
Irrespective of its state (fresh, or spent and regenerated), the Cu-promoted catalyst displays a very intense band at 2119-2100 cm−1, which is attributed to CO adsorbed on Cu sites [41,42]. However, this band increases in intensity and shifts to higher wavenumbers on the regenerated catalyst, which signals an increase in the total quantity of Cu sites, as well as a change in their electronic properties. While the increase in intensity is in line with the enrichment of the surface with Cu, the shift in wavenumber may result from a greater extent of Ni-Cu alloying and from the rise of Cu-hollow spaces [43]. Tellingly, while the fresh Ni-Fe catalyst shows a Ni-CO band at 2179 cm−1, this band is absent from the corresponding spectrum post-regeneration. This suggests that the most coordinatively unsaturated Ni sites, which are the most active cracking sites [44], are irreversibly deactivated during reaction/regeneration. Similarly, while the fresh Ni-Pt catalyst shows a well-defined peak at ~2180 cm−1 and a broad feature at ~2120 cm−1 associated with CO on metallic Ni sites, as well as a large and well-defined peak at ~2077 cm−1 assigned to CO on Pt sites [13], none of these signals are observed post-regeneration. Since the XPS results demonstrate the presence of Ni0 on the regenerated Ni-Pt formulation, the dearth of signals can be explained by residual coke blocking the sites responsible for low-temperature CO adsorption. These observations are reinforced by a recent report in which the performance of a regenerated Ni-Cu catalyst in fatty acid deoxygenation was found to be distinct from, and superior to, that of the fresh formulation [11]. Thus, the need for additional work in which the regenerated catalysts are tested to study their performance (and the evolution thereof) in a second cycle post-regeneration is clearly indicated, particularly since these tests stand to shed light on the recyclability of these formulations and unveil additional structure-activity relationships.

g) as the support. Following impregnation, the materials were dried overnight at 60 °C under vacuum and then calcined at 500 °C for 3 h in static air. The catalysts and SiC diluent (Kramer Industries, Piscataway Township, NJ, USA) were sieved separately to the desired particle size (150-300 µm) and stored in a vacuum oven at 60 °C prior to their use.

Catalyst Characterization

A detailed description of the instrumentation and procedures employed for catalyst characterization (by means of N2 physisorption, XRD, TGA, TEM-EDS, TPR, and DRIFTS) can be found in previous contributions [13,15,45].
Briefly, XRD measurements were performed on a Philips X'Pert diffractometer using Cu Kα radiation (λ = 1.5406 Å) and a step size of 0.02°. TGA was performed on a TA Instruments Q500 thermogravimetric analyzer under flowing air (50 mL min−1) by ramping the temperature from room temperature to 1000 °C at a rate of 10 °C/min. TEM observations were conducted using a Thermo Scientific Talos F200X analytical electron microscope equipped with a SuperX EDS system consisting of four windowless silicon drift detectors (SDD) for quantitative chemical composition analysis and elemental distribution mapping. XPS analyses were performed using a PHI 5000 VersaProbe apparatus with a monochromatic Al Kα1 X-ray source (energy of 1486.6 eV, accelerating voltage of 15 kV, power of 50 W, and spot size diameter of 200 µm). Pass energies of 187.5 eV and 58.7 eV were used for survey spectra and high-resolution windows, respectively. The signal for Al2O3 (Al 2p at 74.4 eV) was employed for energy calibration purposes (measurements being performed with a neutralization system). Spectra were processed with the CasaXPS software package, ionization cross-sections from Landau being used to quantify the semi-empirical relative sensitivity factors. Prior to analysis, powders were deposited on a steatite sample holder made in house. This sample holder enables the transfer of samples between a pre-treatment chamber and the XPS analysis chamber without exposure to air. The pre-treatment chamber, which was also designed and manufactured in house and is equipped with a furnace that can heat samples up to 1050 °C, can be filled with pre-treatment gases (up to 1 bar) and placed under vacuum, which is done prior to transferring pretreated samples to the XPS analysis chamber.

Continuous Fixed-Bed Deoxygenation Experiments

Used cooking oil upgrading experiments were performed in continuous mode using previously described equipment and procedures [11]. Briefly, a fixed-bed stainless-steel tubular reactor (1/2 in. o.d., Parr, Moline, IL, USA) with a stainless-steel porous frit to hold the bed (0.5 g of catalyst and 0.5 g of SiC as a diluent, or 1 g of SiC in the blank run) in place was employed. Prior to each deoxygenation experiment, the catalyst to be tested was reduced in situ for 3 h at 400 °C under 40 bar of flowing H2 (60 mL/min). The same pressure and H2 flow were used during the deoxygenation experiments, which were performed at 375 °C. The feed was introduced to the reactor, as a solution of 75 wt.% UCO in dodecane (>99%, Alfa Aesar, Haverhill, MA, USA), at a rate of 0.75 mL/h (equivalent to a WHSV of 1 h−1) using a Harvard Apparatus (Holliston, MA, USA) syringe pump equipped with an 8 mL syringe. Liquid products were sampled from a liquid-gas separator (kept at 0 °C) placed downstream from the catalyst bed. Incondensable gases were directed to a dry test meter before being collected in Tedlar® gas sample bags. Gas sample bags were changed every time a liquid sample was taken to ensure that the gas samples analyzed and the liquid samples collected could be correlated. A blank (sans catalyst) experiment was conducted using 1 g of SiC. Representative experiments were performed in duplicate to ensure reproducibility. The highest average standard deviation values observed in the amounts of diesel-like and heavier hydrocarbons formed were ±6.15% and ±5.66%, respectively.
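As a quick consistency check on the WHSV quoted above, a minimal sketch; the feed-solution density is an assumption (not given in the text), and the calculation assumes the WHSV is defined on the UCO mass rather than the total solution mass:

```python
# Back-of-the-envelope WHSV check for the conditions above.
FEED_DENSITY_G_PER_ML = 0.86  # assumed density of 75 wt.% UCO in dodecane (not reported)

feed_rate_ml_per_h = 0.75
uco_weight_fraction = 0.75
catalyst_mass_g = 0.5

uco_mass_flow_g_per_h = feed_rate_ml_per_h * FEED_DENSITY_G_PER_ML * uco_weight_fraction
whsv_per_h = uco_mass_flow_g_per_h / catalyst_mass_g
print(f"WHSV ~ {whsv_per_h:.2f} h^-1")  # ~0.97 h^-1, consistent with the quoted 1 h^-1
```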
Analysis of Reaction Products

Liquid products were analyzed using a combined simulated-distillation GC and GC-MS approach. A detailed description of the development and application of this method is available elsewhere [46]. Briefly, the analyses were performed using an Agilent 7890B GC system equipped with an Agilent 5977A extractor MS detector and a flame ionization detector (FID). The multimode inlet was run with an initial temperature of 100 °C. Upon injection, this temperature was immediately increased at a rate of 8 °C/min to 380 °C, which was maintained for the remainder of the analysis. The oven temperature was increased upon injection from 40 °C to 325 °C at a rate of 4 °C/min, followed by a ramp of 10 °C/min to 400 °C, which was maintained for 12.5 min. An Agilent J&W VF-5ht column (30 m × 250 µm × 0.1 µm) rated to 450 °C was used. Gaseous samples were analyzed using an Agilent (Santa Clara, CA, USA) 3000 Micro-GC equipped with 5 Å molecular sieve, PoraPLOT U, alumina, and OV-1 columns, as well as with a universal thermal conductivity detector (TCD). The GC was calibrated for all of the gaseous products obtained, including CO and CO2, as well as straight-chain C1-C6 alkanes and alkenes.

Conclusions

In this contribution, an industrially relevant experimental approach was used to upgrade UCO to diesel-like hydrocarbons in order to compare the performance of Ni catalysts promoted with Cu, Fe, or Pt. The results indicate that all catalysts tested display and retain the ability to fully deoxygenate the feed to hydrocarbons throughout the entirety of the time period investigated. However, the cracking activity of Ni-Cu is relatively low and stable throughout, that of Ni-Fe drops with TOS, and that of Ni-Pt is higher, variable, and still high at the end of a 76 h run. Analysis of the fresh and spent catalysts helps explain these trends and their underlying structure-activity relationships. In the case of the Ni-Pt catalyst, neither coking, fouling, and metal particle sintering, nor changes in the bulk or surface composition of the metal particles, reduce the cracking activity of this formulation in the time period investigated. In contrast, the cracking activity of both the Ni-Cu and the Ni-Fe catalysts decreases with TOS, this decrease being more pronounced for the Fe-promoted formulation. Based on the TEM-EDS data, this reduction in cracking activity can be attributed to an increased degree of alloying between Ni and Cu or Fe, the formation of Ni-Cu and Ni-Fe alloys disrupting the adjacency of Ni atoms required for C-C hydrogenolysis. In short, the results suggest that Cu- and Fe-promoted catalysts are preferable to Ni-Pt formulations, which are rendered disadvantageous by the fact that their higher price and cracking activity would reduce the cost-effectiveness and carbon efficiency of a process designed to convert UCO to diesel-like hydrocarbons.

Acknowledgments: Sarah Cummins and the Redwood Cooperative School in Lexington, Kentucky, as well as Jennifer Wyatt and personnel from the Lexington-Fayette Urban County Government, are thanked for collecting and delivering the used cooking oil used in this study. Tonya Morgan is thanked for her help with the GC-MS analysis of liquid reaction products.

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Appendix A

Figure A1. Nickel 2p X-ray photoelectron spectra of the spent Ni-Cu (red spectrum), Ni-Fe (green spectrum), and Ni-Pt (purple spectrum) catalysts recovered from deoxygenation experiments.

Figure A2. Nickel 2p X-ray photoelectron spectra of the spent Ni-Cu (red spectrum), Ni-Fe (green spectrum), and Ni-Pt (purple spectrum) catalysts recovered from deoxygenation experiments after calcination in air at 450 °C for 5 h.

Figure A3. Nickel 2p X-ray photoelectron spectra of the spent Ni-Cu (red spectrum), Ni-Fe (green spectrum), and Ni-Pt (purple spectrum) catalysts recovered from deoxygenation experiments after calcination in air at 450 °C for 5 h and reduction under H2 at 400 °C for 3 h.

Figure A4. Carbon 1s X-ray photoelectron spectra of the spent Ni-Cu (red spectrum), Ni-Fe (green spectrum), and Ni-Pt (purple spectrum) catalysts recovered from the deoxygenation experiments.

Figure A5. Carbon 1s X-ray photoelectron spectra of the spent Ni-Cu (red spectrum), Ni-Fe (green spectrum), and Ni-Pt (purple spectrum) catalysts recovered from the deoxygenation experiments after calcination in air at 450 °C for 5 h.

Figure A6. Carbon 1s X-ray photoelectron spectra of the spent Ni-Cu (red spectrum), Ni-Fe (green spectrum), and Ni-Pt (purple spectrum) catalysts recovered from the deoxygenation experiments after calcination in air at 450 °C for 5 h and reduction under H2 at 400 °C for 3 h.

Figure A7. Copper 2p X-ray photoelectron spectra of the spent Ni-Cu catalyst after the deoxygenation experiment (red spectrum), after calcination in air at 450 °C for 5 h (green spectrum), and after reduction under H2 at 400 °C for 3 h (purple spectrum).

Figure A8.
Iron 2p X-ray photoelectron spectra of the spent Ni-Fe catalyst after the deoxygenation experiment (red spectrum), after calcination in air at 450 °C for 5 h (green spectrum), and after reduction under H2 at 400 °C for 3 h (blue spectrum).
Revisiting the Marrow Metabolic Changes after Chemotherapy in Lymphoma: A Step towards Personalized Care

Purpose. The aims were to correlate individual marrow metabolic changes after chemotherapy with bone marrow biopsy (BMBx) for their potential value in the personalized care of lymphoma. Methods. 26 patients (mean age, 58 ± 15 y; 13 female, 13 male) with follicular lymphoma or diffuse large B-cell lymphoma, referred for FDG-PET/CT imaging, who had BMBx from unilateral or bilateral iliac crest(s) before chemotherapy, were studied retrospectively. The maximal standardized uptake value (SUV) was measured over the BMBx site on both the initial staging and the first available restaging FDG-PET/CT scan. Results. 35 BMBx sites in 26 patients were evaluated. 12 of 35 sites were BMBx positive, with an interval decrease in SUV in 11 of 12 sites (92%). The remaining 23 of 35 sites were BMBx negative, with an interval increase in SUV in 21 of 23 sites (91%). The correlation between the SUV change over the BMBx site before and after chemotherapy and the BMBx result was significant (P < 0.0001). Conclusions. This preliminary result demonstrates a strong correlation between marrow metabolic changes (as determined by FDG PET) after chemotherapy and bone marrow involvement proven by biopsy. This may provide a retrospective means of personalized management of marrow involvement in deciding whether to deliver more extended therapy or closer followup of lymphoma patients.

Introduction

Molecular imaging using 2-deoxy-2-[F-18]fluoro-d-glucose (FDG) positron emission tomography (PET) scanning has recently emerged as a major imaging modality for the staging and followup of patients with non-Hodgkin's lymphoma (NHL) [1,2]. Diffuse large B-cell lymphoma is the most common subtype. Follicular lymphoma accounts for 22% of NHL in adults, with a high tendency to involve the bone marrow [3,4]. Follicular grade I lymphoma is the predominant histological subtype to involve the marrow [5]. Before initiation of treatment, distinguishing potentially curable (stage I/II) from advanced disease (stage III/IV) may guide the planning of management. The advanced stages III and IV correlate significantly with shorter overall or event-free survival, and treatment may have to be modified accordingly. In NHL, bone marrow involvement places the patient in the advanced-disease category (stage IV). Bone marrow biopsy (BMBx) is the established method for detection of bone marrow involvement in the initial staging and restaging of NHL. However, BMBx is a painful and invasive procedure, and sometimes only a small sample can be obtained, which may be inconclusive due to sampling errors, despite bilateral iliac crest blind biopsy under anesthesia. Furthermore, even if the volume of the biopsy is adequate, focal lesions may be missed. Thus, although it is very specific, BMBx from traditional biopsy sites (iliac crests) has low sensitivity in detecting marrow involvement of lymphoma. It is essential to have a supplementary diagnostic procedure, prospective or, if that is not possible, retrospective, possibly consisting of a multistep approach, to reliably assess bone marrow infiltration in patients with NHL and complement the BMBx. The ability of FDG-PET to evaluate bone marrow infiltration in patients with lymphoma has been investigated extensively.
Multiple prior studies have shown that FDG-PET has a high potential to detect bone marrow involvement in high-grade malignant lymphoma but low sensitivity for the detection of bone marrow infiltration in low-grade NHLs [6,7], despite the fact that bone marrow involvement occurs in 30%-50% of patients with NHL. The majority of these studies used visual interpretation of marrow FDG uptake during whole-body staging PET scans to assess bone marrow involvement. Since marrow involvement is more common in indolent histology [8,9], the marrow activity may not be avid enough for visual detection based on a single staging PET scan. In the visual approach, the marrow was assumed to be abnormal where the FDG uptake was equal to or greater than the uptake in the liver, which was usually greater than background. This approach for assessing marrow involvement depends on the extent of marrow infiltration by lymphoma and has made no use of the available information from pre- and postchemotherapy intramedullary metabolic activities due to changes in cell population. The standardized uptake value (SUV) is a semiquantitative measure of glucose metabolism based on the degree of FDG uptake, which is derived from the tumor activity divided by the dose per body mass in the attenuation-corrected PET images [10]. It may improve the definition of abnormal areas by reducing subjective assessment. It is common to see the pattern of diffusely increased FDG uptake in normal bone marrow after chemotherapy on F-18 FDG-PET scans due to the change in hematopoietic cell population. Decreased FDG uptake after chemotherapy is noted in areas with PET evidence of bone marrow involvement due to reduction in the tumor population. The aims were to correlate individual marrow metabolic changes after chemotherapy with bone marrow biopsy (BMBx) for their potential value in the personalized care of lymphoma.

Materials and Methods

A database of patients referred for FDG-PET/CT scans for initial staging and first restaging after chemotherapy, who had BMBx from unilateral or bilateral iliac crest(s) before chemotherapy at a tertiary hospital and cancer center over a period of two years, was retrospectively searched with Human Investigation Committee approval. Patients were excluded if they had malignancies other than lymphoma or if they had received prior radiation treatment or chemotherapy. Twenty-six patients were eligible for this study. The mean age was 58 ± 15 years old, with 13 females and 13 males. There were 16 follicular lymphomas (FL, grades I, II, and III) and 10 diffuse large B-cell lymphomas (DLBC). The maximal standardized uptake value (SUV) was measured over the BMBx site on both the initial staging and restaging FDG-PET/CT scans. The interval changes in SUV were classified as decrease or increase and correlated with the BMBx result of positive or negative for bone marrow involvement by lymphoma. PET-CT imaging was obtained by a dedicated 16-slice body PET-CT scanner (GE Discovery DST, GE Medical Systems, Milwaukee, WI, USA). All patients had fasted for 4 to 6 hours before the intravenous injection of an average of 15 mCi (555 MBq) of F-18 FDG. PET scans were performed around one and a half hours after injection, right after the mapping CT. This choice of FDG uptake time was a modification of the usual one-hour uptake time, intended to maximize the contrast between tumor, soft tissue, and any benign inflammation [11]. The PET images were obtained at each bed position for 3 minutes, with 6-8 beds to cover the entire body.
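As an aside, the SUV definition given in the Introduction lends itself to a one-line computation; a minimal sketch with purely illustrative numbers (not patient data from this study):

```python
def suv(tissue_activity_kbq_per_ml: float, injected_dose_mbq: float, body_weight_kg: float) -> float:
    """Standardized uptake value: tissue activity divided by injected dose per body mass.

    Assumes a tissue density of ~1 g/mL, so kilograms of body weight map to milliliters.
    """
    dose_per_mass_kbq_per_g = injected_dose_mbq * 1000.0 / (body_weight_kg * 1000.0)
    return tissue_activity_kbq_per_ml / dose_per_mass_kbq_per_g

# Illustrative values: 555 MBq (15 mCi) injected, 70 kg patient,
# marrow activity concentration of 3.7 kBq/mL.
print(round(suv(3.7, 555.0, 70.0), 2))  # -> 0.47
```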
The PET images were obtained using a two-dimensional high-sensitivity mode with an axial field of view of 15 cm in a 256 × 256 matrix. A 3-slice overlap was utilized between the bed positions. The PET images were reconstructed iteratively on a 128 × 128 matrix using an ordered-subsets expectation maximization algorithm with 30 subsets and two iterations, with a 7.0 mm postreconstruction filter. An initial scout scan was obtained first to define the imaging field for the CT component of image acquisition, which used the following imaging parameters: 140 kVp, 120-200 mA, 0.8 seconds per CT rotation, pitch 1.75:1, detector configuration of 16 × 1.25 mm, and 3 mm slice thickness, with oral contrast only. The serum glucose in mg/dL was recorded just before PET, and the maximum SUV, defined as tumor activity divided by dose injected per body mass, was measured by searching for the maximum value within a volume of interest over the BMBx sites (posterior iliac crests) by a nuclear medicine physician. Bone marrow involvement was defined by histopathologically proven bone marrow lymphoma infiltration from marrow biopsy. Statistical analysis was done using SPSS (SPSS Inc, Chicago, IL) for comparing the changes of the marrow before and after chemotherapy. A P value of <0.05 was considered significant in all tests.

Results

Thirty-five BMBx sites from 26 patients were evaluated. Twelve of 35 sites were BMBx positive, with an interval decrease in SUV in 11 of the 12 sites (92%) (Figure 1). For the 11 sites with true positive PET findings (decreased SUV at BMBx-positive sites), the magnitude of the SUV decrease ranged from 0.7 to 15.8 (−27 to −89%), for an average of 8.3 ± 5.9 (−64 ± 24%). The remaining 23 of 35 sites were BMBx negative, with an interval increase in SUV in 21 of the 23 sites (91%). The correlation between the SUV change over the BMBx site before and after chemotherapy and the BMBx result was significant (P < 0.0001). The interval between chemotherapy and the restaging PET scan was 9 to 75 days (mean 22 ± 16 days). The interval between BMBx and the initial staging PET scan was 1 to 47 days (mean 8 ± 9 days).

Figure 1 (caption, partial): ...shows diffuse uniform increase in marrow uptake. However, the SUV actually decreases (from 6.7 to 4.9) in the right iliac bone marrow, with pathologic confirmation of marrow involvement (b). On the contrary, the SUV over the pathologically negative left iliac bone marrow increases (from 2.7 to 4.2) due to the normal hematopoietic response.

Discussion

It is currently common practice to use the PET scan as a qualitative tool in the arena of lymphoma, supplemented by more clinically useful information extracted in the form of the SUV, to aid in therapeutic decision making and prognostication. The clinical significance of the current study is the suggestion that the change in marrow SUV measurement after chemotherapy might provide retrospective marrow staging, which may noninvasively give the same information regarding bone marrow involvement of lymphoma as obtained from traditional BMBx. Bone marrow involvement in patients with lymphoma signifies extensive disease with a less favorable prognosis. Although bone marrow biopsy is still the gold standard method for detection of bone marrow involvement in the initial staging or restaging of lymphomas, its potentially low sensitivity and invasive nature, as well as its other limitations, make it a nonideal diagnostic test for the detection of marrow infiltration. A more reliable and sensitive noninvasive method of detecting marrow involvement would be desirable.
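The significance quoted in the Results above was computed in SPSS, and the specific test is not stated; as an illustrative recomputation, the published 2×2 counts can be checked with a Fisher's exact test:

```python
# Illustrative recomputation of the association between SUV change and BMBx result.
# The paper reports P < 0.0001 (SPSS); Fisher's exact test on the published counts
# is one reasonable way to check this and is expected to agree.
from scipy.stats import fisher_exact

#          BMBx positive   BMBx negative
table = [
    [11, 2],   # SUV decreased after chemotherapy
    [1, 21],   # SUV increased after chemotherapy
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2e}")  # p well below 0.0001
```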
FDG-PET imaging has become a major imaging modality for the staging and followup of patients with NHL [1,2], and the potential ability of FDG-PET to evaluate both focal and diffuse bone marrow infiltration in patients with lymphoma makes it a natural choice for investigation and optimization. Though prior studies with purely visual interpretation of marrow FDG uptake to assess the marrow have revealed a high potential to detect bone marrow involvement in malignant lymphoma [12,13,14], the sensitivity is still unacceptably low in low-grade non-Hodgkin's lymphomas [6,7,15]. The present study was undertaken to investigate and optimize the efficacy of FDG-PET as a potentially improved and complementary method to aid in evaluating marrow involvement for the personalized care of lymphoma patients, by utilizing individual interval metabolic changes instead of the detection of focal abnormal marrow uptake based on a single staging PET scan alone. The strong correlation between the SUV change after chemotherapy and the BMBx result demonstrated in this current study has notable potential clinical significance. PET-CT is more sensitive than marrow biopsy and can be employed routinely to assess the entire marrow [16]. In patients with initially obvious focal FDG-avid bone marrow lesions, FDG-PET may offer guidance for biopsy or can be used as direct evidence of bone marrow involvement. The bone marrow involvement of these lesions can be confirmed by analyzing the SUV changes after chemotherapy. If the initial FDG PET shows no definite focal marrow lesions and the patient needs chemotherapy clinically, then retrospective analysis of the SUV changes after chemotherapy over the traditional BMBx sites may offer similar if not better information than that obtained from BMBx regarding bone marrow involvement by lymphoma, and may therefore potentially eliminate errors in management caused by a false negative pretreatment BMBx due to sampling issues. Whether a blind marrow biopsy may still be warranted in patients with indolent lymphoma for whom chemotherapy may not be offered if there is no evidence of bone marrow involvement by lymphoma, or in patients with marrow population changes related to factors other than chemotherapy (for example, G-CSF treatment, bone marrow dysplasia, or prior chemo- or radiation treatment), needs further clinical investigation. Nonetheless, the current study moves a step towards personalized care by employing retrospective evidence of marrow involvement in lymphoma patients. Since the current revised response criteria for lymphoma require clearance of infiltrative marrow lesions by repeated biopsy [17], the current study may offer an alternative insight for noninvasive marrow response criteria by requiring involved marrow sites to show a decrease in SUV from hot or normal FDG uptake to reduced or cold metabolic activity. With PET-CT, the functional PET images are coregistered with the anatomic images of the almost simultaneously acquired CT scan. This approach can result in a significant improvement in accurate anatomic localization and region-of-interest determination, and can therefore ensure that the SUV measurement is made over the same area before and after chemotherapy.

Figure 2 (caption, partial): The SUV increases (from 1.7 to 3.8) in the right iliac bone marrow (red region), with pathologic examination negative for marrow involvement. The change in the SUV over the bone marrow is entirely due to the normal hematopoietic response to chemotherapy.
The current study demonstrates that the change in metabolic behavior correlates well with the marrow cell population change and suggests possible optimization of the metabolic information by using the intramedullary SUV change to countercheck or supplement the invasive, limited sampling of histological examination in predicting marrow involvement, which in turn may lead to the most appropriate subsequent management for each individual patient. There are limitations to this current retrospective study, including the small sample size, mixed histopathological classification, and significant variation in the interval between chemotherapy and the restaging PET scan. A comparison between bone marrow biopsy and FDG-PET evaluations after treatment was not performed. This is partially because there is little justification to repeat the biopsy if there is already a good response to therapy. A well-designed prospective study overcoming the above limitations to further confirm the conclusion of this preliminary study is necessary and is currently being investigated.

Conclusion

This preliminary study demonstrates a strong correlation between marrow metabolic changes on FDG PET scans after chemotherapy and marrow cell population change. There is an inverse relationship between (positive) marrow involvement of lymphoma and the SUV change (decrease) after chemotherapy. The potential clinical significance of this observation is to provide a noninvasive retrospective means of counterchecking or replacing the invasive, limited sampling of histological examination in predicting marrow involvement, which in turn may help in the personalized care decision of whether to deliver more extended therapy or closer followup of lymphoma patients.
High-Density Energetic Metal–Organic Frameworks Based on 5,5′-Dinitro-2H,2′H-3,3′-bi-1,2,4-triazole

High-energy metal–organic frameworks (MOFs) based on nitrogen-rich ligands are an emerging class of explosives, and density is one of the positive factors that can influence the performance of energetic materials. Thus, it is important to design and synthesize high-density energetic MOFs. In the present work, hydrothermal reactions of Cu(II) with the rigid polynitro heterocyclic ligands 5,5′-dinitro-2H,2′H-3,3′-bi-1,2,4-triazole (DNBT) and 5,5′-dinitro-3,3′-bis-1,2,4-triazole-1-diol (DNBTO) gave two high-density MOFs: [Cu(DNBT)(ATRZ)3]n (1) and [Cu(DNBTO)(ATRZ)2(H2O)2]n (2), where ATRZ represents 4,4′-azo-1,2,4-triazole. The structures were characterized by infrared spectroscopy, elemental analysis, ultraviolet-visible (UV) absorption spectroscopy, and single-crystal X-ray diffraction. Their thermal stabilities were also determined by thermogravimetric/differential scanning calorimetry analysis (TG/DSC). The results revealed that complex 1 has a two-dimensional porous framework that possesses the most stable chair conformation (like cyclohexane), whereas complex 2 has a one-dimensional polymeric structure. Compared with previously reported MOFs based on copper ions, the complexes have higher densities (ρ = 1.93 g cm−3 for complex 1 and ρ = 1.96 g cm−3 for complex 2) and high thermal stability (decomposition temperatures of 323 °C for complex 1 and 333.3 °C for complex 2), especially because of the introduction of an N-O bond in complex 2. We anticipate that these two complexes would be potential high-energy density materials.

The densities and thermal stabilities of traditional energetic molecules can be easily improved by introducing functional groups, such as nitro groups (NO2), N-O bonds, and nitramino groups (NHNO2), but this type of functionalization is rarely seen in MOFs. To further improve the density, oxygen balance, nitrogen content, and detonation performance of energetic MOFs, polynitro heterocyclic compounds can be considered ideal energetic ligands. However, there have been few reports of high-energy MOFs with ligands based on polynitro heterocyclic compounds. Recently, Matzger reported an energetic MOF [MOF(Cu-DNBT)] based on the polynitro heterocyclic compound DNBT as a ligand [25], where the insensitivity to external stimuli and the thermal stability (its decomposition temperature is above 300 °C) are promoted by the formation of a structural framework. This proves that MOFs with polynitro heterocyclic compounds as ligands show potential as energetic materials. To further expand the structural framework (skeleton) and improve their energetic properties, we envision that nitro groups and N-O bonds could be introduced into MOFs. In this study, DNBT and its oxide, 5,5′-dinitro-3,3′-bis-1,2,4-triazole-1-diol (DNBTO), were used as ligands to assemble MOFs because of the following advantages: (1) these ligands possess high densities (e.g., DNBT 1.90 g cm−3) [11], high nitrogen contents (DNBT 49.5%, DNBTO 46.3%), and high heats of formation due to containing many high-energy N-N bonds (160 kJ mol−1) and N=N bonds (418 kJ mol−1) [26,27]; and (2) DNBT and DNBTO have different coordination modes, such as multidentate and building-block bridging, as shown in Figure 1, offering the possibility of constructing unpredictable and fascinating MOFs.
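The quoted nitrogen contents follow directly from the molecular formulas; a minimal sketch, assuming the formula C4H2N8O4 for DNBT (the formula is our assumption, but the result reproduces the quoted 49.5%):

```python
# Mass-percent nitrogen from a molecular formula (standard atomic weights).
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def nitrogen_content(formula: dict) -> float:
    molar_mass = sum(ATOMIC_WEIGHTS[el] * n for el, n in formula.items())
    return 100.0 * ATOMIC_WEIGHTS["N"] * formula["N"] / molar_mass

dnbt = {"C": 4, "H": 2, "N": 8, "O": 4}  # assumed formula for DNBT
print(f"DNBT: {nitrogen_content(dnbt):.1f}% N")  # -> 49.6% N, in line with the quoted 49.5%
```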
Cu(II) ions as the central atoms not only have good coordination ability with the N and O atoms of the ligands, but they are also environmentally friendly compared with heavy metal ions such as lead [21,28] and mercury [16]. From the above considerations, the two novel energetic MOFs [Cu(DNBT)(ATRZ)3]n (1) and [Cu(DNBTO)(ATRZ)2(H2O)2]n (2) were prepared by the hydrothermal method and were characterized in detail by infrared spectroscopy, elemental analysis, ultraviolet-visible (UV) absorption spectroscopy, and single-crystal X-ray diffraction. In addition, their thermal stabilities were determined by thermogravimetric/differential scanning calorimetry analysis (TG/DSC). The results revealed that complexes 1 and 2 possess high densities (ρ = 1.93 g cm−3 for complex 1 and ρ = 1.96 g cm−3 for complex 2) and high thermal stabilities (decomposition temperatures of 323.0 °C for complex 1 and 333.3 °C for complex 2), especially because of the introduction of an N-O bond in complex 2.

Synthesis of Energetic Complexes

The copper complexes were synthesized by a simple one-step hydrothermal reaction of copper(II) nitrate pentahydrate [Cu(NO3)2·5H2O] with the polynitro heterocyclic compounds (DNBT and DNBTO) and ATRZ in water. Complexes 1 and 2 are air stable, maintain their crystallinity for at least several weeks, and are insoluble in common organic solvents, such as dimethyl sulfoxide (DMSO), chloroform, methanol, ethanol, and acetone.
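For reference, the densities quoted for 1 and 2 are crystallographic densities, which follow from the single-crystal data via the standard relation (a reminder of the general formula, not a new result from this work):

```latex
% Crystallographic density from single-crystal X-ray data:
%   Z      = number of formula units per unit cell
%   M      = molar mass of one formula unit (g mol^{-1})
%   N_A    = Avogadro's number (6.022 x 10^{23} mol^{-1})
%   V_cell = unit-cell volume (cm^3)
\[
  \rho = \frac{Z \, M}{N_A \, V_{\mathrm{cell}}}
\]
```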
The IR spectrum of complex 1 showed a strong band associated with the NO2 group (1540 cm−1), while complex 2 also had a strong band corresponding to the N-O bond (1465 cm−1) in its IR spectrum (Figure S2, see the Supporting Information). These results showed that the two polynitro ligands were successfully incorporated into their MOFs. To better characterize the structures, single-crystal X-ray experiments were performed. The experimental details for structural determination of the compounds are summarized in Table S1, and selected bond lengths and angles are given in Tables S2 and S3. The hydrogen bonding parameters are listed in Table S4 (see the Supporting Information). The results of XRD analysis are shown in Figure S3. Further information about the crystal structure determination is provided in the Supporting Information.

X-ray Crystallography
Complex 1 crystallizes in the triclinic space group P-1 (see Table S1) with a 2D porous network, in which there is one crystallographically independent copper atom. The molecular structure is shown in Figure 1a. DNBT adopts a bidentate bridging mode to coordinate to the copper atom. The asymmetric unit is made up of one Cu(II) ion, one DNBT ligand, and three ATRZ ligands. The Cu(II) ion is penta-coordinated to two nitrogen atoms from DNBT ligands (N3 and N5) and three nitrogen atoms from ATRZ ligands (N9, N13 and N17) (bond lengths and angles in Table S2) in a distorted square pyramid. Figure 1b shows the 2D layer of complex 1, in which adjacent Cu(II) centers are bridged by ATRZ ligands in three different directions to form a nearly hexagonal grid. In addition, π-π interactions of the triazole rings result in molecular stacking planes, forming the 2D porous structure. The pore structure in the framework takes the most stable chair conformation (like cyclohexane). The presence of three unique molecular stacking plane orientations results in mixed molecular stacking (Figure 1c), which prevents interlayer sliding within the crystal lattice.

Complex 2 crystallizes in the monoclinic space group P21/n (see Table S1) with a 1D porous framework, in which there is one crystallographically independent copper atom. The molecular structure is shown in Figure 2a. The asymmetric unit is composed of a copper ion at the center of the equatorial plane, and it includes one Cu(II) ion, one DNBTO ligand, two ATRZ ligands, and two H2O molecules (Figure 2). DNBTO adopts a bidentate bridging mode to coordinate the copper atom. The Cu(II) ion is hexa-coordinated to three nitrogen atoms from DNBTO and ATRZ ligands (N5, N10, and N14) and three oxygen atoms (O1, O6, and O7) from DNBTO and water molecules [Cu1-N5A = 1.954(9) Å; see Table S3] in a distorted octahedron-like geometry. ATRZ bridges the skeleton. The intermolecular hydrogen bonds between hydroxy and nitro groups are shown by dashed lines in Figure 2c. The hydrogen bonds and π-π interactions result in molecular stacking planes with an interplanar distance of 6.02 Å. Abundant hydrogen bonds, including O6-H6···N10, C5-H5···N11, C8-H8···O7, and C8-H8···O1A, with distances ranging from 1.913 to 2.416 Å (see Table S4), between the expanded chains result in a stable 1D structure.

Ultraviolet-Visible (UV) Absorption
The solid-state ultraviolet absorption spectra of three complexes [(ATRZ-Cu), (BTRZ-Cu), and 1] at room temperature are depicted in Figure 3. By means of these spectra, the impact of nitro groups on the skeleton of MOFs based on nitrogen-rich heterocyclic ligands was investigated.
As can be seen from Figure 3, the three kinds of MOFs show strong absorption within the range 200-700 nm. In contrast to BTRZ-Cu, there is an additional strong peak in 1 at 310 nm, which is possibly attributable to the absorption of ATRZ. In addition, the absorption band of complex 1 at 200-400 nm becomes significantly broader, probably because of the strong π-π interactions between the adjacent triazole rings of the layers in 1, which could reduce the π-π* transition energy. Meanwhile, the electron-withdrawing effect of the nitro group brought about the hypochromic effect of complex 1.

Stability and Detonation Properties
The thermal decomposition temperatures of the complexes were determined by thermogravimetric/differential scanning calorimetry (TG/DSC) with a linear heating rate of 10 °C min−1 under a nitrogen atmosphere, and their TG/DSC curves are shown in Figure 4. According to the TG curve of complex 1, it undergoes a main weight loss (40.4%) in the temperature range 300-350 °C, which is attributed to the decomposition of the coordination framework. Meanwhile, the DSC curve further showed that there is only one intense exothermic peak, with a peak temperature of 323.0 °C, which corresponds to its decomposition temperature. In addition, complex 2 also undergoes significant weight loss (49.8%) in the temperature range 290-370 °C, which is attributed to the decomposition of the coordination framework. There is only one exothermic process, with a peak temperature of 333.3 °C, in its DSC curve (Figure 4). These complexes are among the very few energetic materials that show thermal stability above 300 °C [29-31].
Furthermore, complex 1 is more thermally stable than nearly all energetic salts and cocrystals of DNBT reported to date [11,32]. The decomposition temperature of complex 2 (333.3 °C) is also the highest among those of all reported high-energy MOFs with 1D chain structures [33]. The high thermal stabilities of these complexes are presumably caused by strong multiple intermolecular interactions such as hydrogen bonding and π-π stacking.

Besides their high thermal stabilities, the two complexes also possess high densities: 1.93 g cm−3 for complex 1 and 1.96 g cm−3 for complex 2, both higher than that of the parent monomer (DNBT, ρ = 1.90 g cm−3) [11]. With the introduction of N-O bonds, the density of the MOFs increases distinctly. It is worth noting that the two complexes also show higher crystal densities than those of most known copper-based MOFs, such as [Cu(ATZ)(ClO4)2]n [36]. It is possible that these complexes contain polynitro ligands, which results in their high densities. In addition, the complexes also have high nitrogen contents: 52.4% and 44.8%, respectively, for complexes 1 and 2. Thus, they can not only release a large amount of energy, but the solid waste containing harmful components is also reduced during detonation.

Sensitivity to external stimuli, such as electrostatic discharge, friction, and impact, is important for the safe handling and transportation of explosive materials. The impact sensitivity (IS) and friction sensitivity (FS) of complexes 1 and 2 were measured using a standard BAM drop hammer and a BAM friction tester, respectively. The results showed that the complexes exhibit relatively low sensitivities towards impact and friction (Table 1). In particular, complex 1 is insensitive to impact and friction (IS > 40 J and FS > 360 N), and these sensitivities are lower than those of DNBT (IS = 10 J and FS = 360 N) [11], CHP (IS = 5 J) [12], and CHHP (IS = 8 J) [37].
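The nitrogen contents quoted above (52.4% and 44.8% for complexes 1 and 2) are simple mass fractions. A minimal sketch of the arithmetic is given below; the formula dictionary is illustrative, with free DNBT (C4H2N8O4) used as a check against the 49.5% quoted in the introduction.

```python
# Minimal sketch: nitrogen mass fraction from a molecular formula. The
# example composition is free DNBT; the exact compositions of complexes
# 1 and 2 come from the elemental analyses reported in the paper.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999,
               "S": 32.06, "Cl": 35.45, "Cu": 63.546}

def nitrogen_content(formula: dict) -> float:
    total = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return 100.0 * ATOMIC_MASS["N"] * formula.get("N", 0) / total

# Free DNBT, C4H2N8O4 -> ~49.6%, i.e. the ~49.5% quoted in the introduction
print(f"{nitrogen_content({'C': 4, 'H': 2, 'N': 8, 'O': 4}):.1f}")
```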
According to our developed method [38,39], the detonation properties {e.g., detonation velocity (D) and detonation pressure (P)} of the energetic MOFs were calculated using the experimentally determined (back-calculated from ΔcU) enthalpy of formation (ΔfH°) and the crystal densities. The constant-volume combustion energies (ΔcU) of the complexes were measured with an oxygen bomb calorimeter. The enthalpy of combustion (ΔcH°) was calculated from ΔcU, with a correction for the change in gas volume during combustion (Scheme 1, Equation (1)). The standard enthalpies of formation of complexes 1 and 2 were back-calculated from the heats of combustion on the basis of the combustion equations (Scheme 1, Equations (2) and (3)), Hess' law (Scheme 1, Equations (4) and (5)), and the known standard heats of formation of copper oxide (−157.3 kJ mol−1), water (−285.8 kJ mol−1), and carbon dioxide (−393.51 kJ mol−1) [40]. The calculated ΔfH° values of complexes 1 and 2 are 1461 and 843.6 kJ mol−1, respectively. We used the EXPLO5 computer code (version 6.01) to calculate the detonation velocity (D) and detonation pressure (P). For complex 2, P = 27.62 GPa and D = 7.86 km s−1, which are better than the corresponding values for TNT [12], CHHP, ZHHP, and many known energetic MOFs. The detonation properties of some energetic MOFs and energetic complexes are listed in Table 1.
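The back-calculation route just described (Equation (1) plus Hess' law) can be sketched as follows. Since Scheme 1 itself is not reproduced in this text, the combustion stoichiometry is left as caller-supplied inputs rather than hard-coded; only the product heats of formation quoted above are fixed.

```python
# Hedged sketch of the enthalpy-of-formation back-calculation described
# above. The stoichiometric coefficients and the gas-mole change come from
# the combustion equations of Scheme 1 (not reproduced here), so they are
# arguments, not asserted values. N2 contributes nothing, since its
# standard enthalpy of formation is zero.
R = 8.314e-3   # kJ mol^-1 K^-1
T = 298.15     # K

DFH = {"CuO": -157.3, "H2O": -285.8, "CO2": -393.51}  # kJ mol^-1 [40]

def formation_enthalpy(dcU, dn_gas, n_CuO, n_H2O, n_CO2):
    """dcU: measured constant-volume combustion energy (kJ mol^-1).
    dn_gas: change in moles of gas across the combustion equation."""
    dcH = dcU + dn_gas * R * T                      # Equation (1)
    products = n_CuO * DFH["CuO"] + n_H2O * DFH["H2O"] + n_CO2 * DFH["CO2"]
    return products - dcH                           # Hess' law, Eqs. (4)-(5)
```

Plugging the measured ΔcU values into this scheme with the Scheme 1 coefficients should reproduce the reported 1461 and 843.6 kJ mol−1.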
Chemicals and Materials
Cu(NO3)2·5H2O, NaNO2 and oxalic acid were purchased from the Aladdin Corporation and used without further purification. Oxone was purchased from Shanghai Alfa Aesar Co. Ltd. (Shanghai, China), and aminoguanidine bicarbonate from Shanghai Macklin Biochemical Co. Ltd. (Shanghai, China). All chemicals and reagents were of analytical grade and were used as received. Deionized water was used throughout this work.

Preparation of Ligands
4,4′-Azo-1,2,4-triazole (ATRZ) was prepared according to our previous work [41]. DNBT was prepared according to the procedures described in the literature [42]. In a typical synthesis of DNBTO, DNBT (1.0 g, 4.4 mmol) was dissolved in a solution of water (25 mL) and potassium acetate (5.0 g, 0.051 mol) and heated to 40 °C. Oxone (16.6 g, 27 mmol) was added portionwise within 2 h, while the pH was carefully kept at 4-5 by dropwise addition of potassium acetate (38.0 g, 0.38 mol) in water (50 mL). The mixture was subsequently stirred at 40 °C for 24 h. The solution was acidified with sulfuric acid and extracted with ethyl acetate. The combined organic phases were dried over magnesium sulfate, and the solvent was evaporated in vacuum.

Synthesis of the Energetic Metal-Organic Framework
The copper complex [Cu(DNBT)(ATRZ)3]n (1) was synthesized by a hydrothermal method: copper dinitrate pentahydrate was reacted with ATRZ and an ammonium salt of DNBT [30] in water. ATRZ (0.05 g, 0.3 mmol) was suspended in 10 mL deionized water and stirred at room temperature until the solution was clear. The ammonium salt of DNBT (0.238 g, 0.9 mmol) and a few drops of nitric acid were added. A solution of copper dinitrate pentahydrate (0.22 g, 0.9 mmol) in 20 mL water was added at room temperature and held at this temperature for 48 h, after which dark-blue bulk crystals were acquired. The solid was collected by filtration, washed with deionized water, and dried in air for 30 min. Yield: 65% based on Cu. Elemental analysis (%) calculated for C10 …

Measurement of Solid-State Ultraviolet Absorption
Ultraviolet absorption was measured on a UV-2600 220V CH ultraviolet spectrophotometer from Beijing Shimadzu Co. Ltd.
(Beijing, China), with an attached diffuse reflection measurement device (integrating sphere). Instrument parameters: high-speed scanning rate; slit width of 1 …

Thermogravimetric/Differential Scanning Calorimetry Measurements
To determine the thermal stability of the described MOFs, a TG-DSC Q2000 differential scanning calorimeter was used. About 1.5 mg of sample was used, and the temperature was programmed to 600 °C (873 K) at a rate of 10 °C min−1 in a 60 mL min−1 N2 flow.

Measurement of Sensitivity
The impact sensitivity of the described MOFs was determined with a type 12 tooling according to the "up and down" method (Bruceton method). A 2.5 kg weight was dropped from a set height onto a 20 mg sample placed on 150-grit garnet sandpaper. Each subsequent test was made at the next lower height if an explosion occurred and at the next higher height if no explosion happened. Fifty drops were made from different heights, and an explosion or non-explosion was recorded to determine the results. RDX was used as a reference compound; the impact sensitivity of RDX is 7.4 J [43]. The friction sensitivity was tested on an FSKM-10 BAM friction apparatus. RDX was also used as a reference compound, and its friction sensitivity is 110 N [43].

Conclusions
Two high-density energetic MOFs based on the polynitro heterocyclic DNBT and DNBTO ligands were successfully synthesized. Their structures were characterized by FT-IR spectroscopy, elemental analysis, ultraviolet-visible (UV) absorption spectrophotometry, thermal analysis, and single-crystal X-ray diffraction. The results showed that complex 1 adopts a 2D porous framework and possesses the most stable chair conformations (like cyclohexane), whereas complex 2 adopts a 1D polymeric structure. Moreover, the complexes possess high thermal stabilities (decomposition temperatures of 323 °C for complex 1 and 333.3 °C for complex 2) and high densities (ρ = 1.93 g cm−3 for complex 1 and ρ = 1.96 g cm−3 for complex 2) due to their many nitro groups and N-O bonds. The complexes also exhibit relatively low sensitivities towards impact and friction. In particular, complex 1 is insensitive to impact and friction (IS > 40 J and FS > 360 N). Thus, we anticipate that the two complexes would be potential high-energy density materials.

Supplementary Materials: The following are available online. Figure S1: Coordination structure of complex 1 (left) and complex 2 (right); Figure S2: FTIR spectra of complex 1 (left) and complex 2 (right); Figure S3: Experimental PXRD patterns of complexes 1 and 2; Table S1: Crystal data and structure refinement details for 1 and 2; Table S2: Selected bond lengths and bond angles for 1; Table S3: Selected bond lengths and bond angles for 2; Table S4: Selected hydrogen-bond lengths for 2.
Paranoia and belief updating during a crisis

The 2019 coronavirus (COVID-19) pandemic has made the world seem unpredictable. During such crises we can experience concerns that others might be against us, culminating perhaps in paranoid conspiracy theories. Here, we investigate paranoia and belief updating in an online sample (N=1,010) in the United States of America (U.S.A.). We demonstrate that the pandemic increased individuals' self-rated paranoia and rendered their task-based belief updating more erratic. Local lockdown and reopening policies, as well as culture more broadly, markedly influenced participants' belief updating: an early and sustained lockdown rendered people's belief updating less capricious. Masks are clearly an effective public health measure against COVID-19. However, state-mandated mask wearing increased paranoia and induced more erratic behaviour. Remarkably, this was most evident in those states where adherence to mask wearing rules was poor but where rule following is typically more common. This paranoia may explain the lack of compliance with this simple and effective countermeasure. Computational analyses of participant behaviour suggested that people with higher paranoia expected the task to be more unstable, but at the same time predicted more rewards. In a follow-up study we found that people who were more paranoid endorsed conspiracies about mask-wearing and potential vaccines; again, mask attitude and conspiratorial beliefs were associated with erratic task behaviour and changed priors. Future public health responses to the pandemic might leverage these observations, mollifying paranoia and increasing adherence by tempering people's expectations of others' behaviour, and the environment more broadly, and reinforcing compliance.

Introduction
…
(zMETA=4.035, pMETA=5.45E-5). However, ω2 was lower in high paranoia, indicating that tonic task changes were less impactful on their choices (Fig. 1a; social task, F(1, 128)=5.091, p=0.026, ηp2=0.038; non-social task, F(1, 70)=8.681, p=0.004, ηp2=0.11).
Across social and non-social contexts, high paranoia …

The impact of an evolving pandemic on paranoia and belief updating
After the pandemic was declared we continued to acquire data on both tasks (3/19/2020-7/17/2020). … We asked participants in the social task to rate whether or not they believed that the avatars had deliberately sabotaged them. Win-switch rate (r=0.259, p=1.2E-5, n=280), μ2^0 (r=0.124, p=0.038), and … We recorded a significant increase in paranoia when Americans were emerging from lockdown (Figure 2A) …

Mandated mask wearing was associated with an estimated 48% increase in paranoia (gDID = 0.48, p = 0.018), relative to states in which mask wearing was recommended but not required (Figure 4a). This increase in paranoia was mirrored as significantly higher win-switch rates in participant task behaviour. … We examined whether any other features might illuminate this variation in paranoia by local mask policy [17]. There are state-level cultural differences, measured by the Cultural Tightness and Looseness (CTL) index [17], with regards to rule following and tolerance for deviance. Tighter states have more rules and tolerate less deviance, whereas looser states have few strongly enforced rules and greater tolerance for deviance [17]. We also tried to assess whether people were following the mask rules. We acquired independent survey data gathered in the U.S.A. from 250,000 respondents who, between July 2 and July 14, were asked: How often do you wear a mask in public when you expect to be within six feet of another person? [18] These data were used to compute an estimated frequency of mask wearing in each state during the reopening period (Figure 4c).

Through backward linear regression with removal, we fit a series of models attempting to predict individuals' self-rated paranoia (N=172) from the features of their environment, including whether they were subject to a mask mandate or not, the cultural tightness of their state, state-level mask-wearing, and coronavirus cases in their state. In the best fitting model (F(11,160)=1.91, p=0.04) there was a significant three-way interaction between mandate, state tightness and perceived mask wearing (t24= …).

Whilst mask-mandate and mask-recommend states were matched at baseline, it is possible that increases in cases and deaths at reopening explain the increase in paranoia, rather than the mask mandate. Our data militate against this explanation. There were no significant differences in cases (t=-1.79 …; lockdown, reopening, Figure 5). Furthermore, given that the effects we describe depend on geographical location, we confirm that the proportions of participants recruited from each state did not differ across our study periods (χ2=6.63, d.f.=6, p=0.34, Figure 6). Finally, in order to assuage concerns … (lockdown, r = 0.78, p = 5.8E-9; reopening, r = 0.81, p = 8.5E-10; Figure 6). Thus, we did not, by chance, recruit more participants from mask-mandating states or tighter states, for example. Furthermore, focusing on the data that went into the DiD, there were no demographic differences pre- versus post-reopening for mask-mandate versus mask-recommended states (age, p=0.45; gender, p=0.73; race, p=0.17; Figure 7).
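The gDID contrast reported above compares the pre-to-post change in paranoia between mask-mandate and mask-recommend states. A minimal sketch of the raw difference-in-differences (without the Hedges' g standardization or inference the authors applied) might look like this; the function and variable names are ours, not the authors'.

```python
# Hedged sketch of a raw difference-in-differences contrast: the change in
# mean paranoia across reopening for mask-mandate states, minus the same
# change for mask-recommend states. Effect-size scaling (Hedges' g) and
# significance testing are omitted.
import numpy as np

def did(mandate_pre, mandate_post, recommend_pre, recommend_post):
    return ((np.mean(mandate_post) - np.mean(mandate_pre))
            - (np.mean(recommend_post) - np.mean(recommend_pre)))
```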
Taken together with our task and self-report results, these control analyses increase our confidence that during reopening, people were most paranoid in the presence of rules and perceived rule breaking, particularly in states where people usually tend to follow the rules.

The lockdown rendered participants in less proactive states more susceptible to paranoia in terms of their expectations about volatility. However, we also found that people who were less paranoid during lockdown and reopening were more forgiving of collaborators, returning to those characters even after they had delivered losses in the social task.

The increase in paranoia that we observed appeared to coincide with reopening from lockdown and to be particularly pronounced in states that mandated that their residents wear masks when in public. We … Perhaps a more vigorous lockdown provided fewer opportunities to misinterpret social interactions, whereas reopening provided more opportunities to encounter others and thence for paranoia. Abiding by lockdown is a personal choice whose effectiveness depends on one's own choice (to stay home and avoid others). Choosing to wear a mask also offers personal protection. However, mask-wearing also protects others from the wearer; it is something one does for others.

… (see Table 1 for further information). We recruited 130 (20 high paranoia) participants who completed the social task. Similarly, of the 231 (see Table 2 for details), we recruited 119 (27 high paranoia) and 112 (23 high paranoia) participants who completed the non-social and social tasks, respectively. Lastly, of the 172, we recruited 93 (35 high paranoia) and 79 (35 high paranoia) participants who completed the non-social and social tasks, respectively (see Table 3 for details). In addition to CloudResearch's safeguard against bot submissions, we implemented the same study advertisement, submission review, approval and bonusing as described in our previous study [5]. We excluded a total of 163 submissions, 18 from pre- …

… partner. We instructed participants to select an avatar (or partner) to work with to gain as many points as possible towards their group project. As in the non-social task, they were instructed that the best partner could change. For both tasks, the contingencies began as 90% reward, 50% reward, and 10% reward, with the allocation across decks/partners switching after 9 out of 10 consecutive rewards. At the end of the second block, unbeknownst to the participants, the underlying contingencies transitioned to 80% reward, 40% reward, and 20% reward, making it more difficult to discern whether a loss of points was due to normal variation (probabilistic noise) or whether the best option had changed.

Questionnaires. Following task completion, questionnaires were administered via Qualtrics; we … For the replication study, we adopted a survey [43] that investigated beliefs on mask usage among individual US consumers and a survey [44] of COVID-19. The 9-item mask questionnaire was used in our study to …
(1) The coronavirus vaccine will contain microchips to control the people.
(2) Coronavirus was created to force everyone to get vaccinated.
(3) The vaccine will be used to carry out mass sterilization.
(4) The coronavirus is bait to scare the whole globe into accepting a vaccine that will introduce the 'real' deadly virus.
(5) The WHO already has a vaccine and are withholding it.
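The Methods passage above specifies the reversal-learning task closely enough to sketch a simulator: three options, block-1 reward probabilities of 90/50/10% shifting to 80/40/20% in block 2, and contingencies rotating once 9 of the last 10 best-option choices are rewarded. The rotation rule and bookkeeping below are our reading of that description, not the authors' code; the win-switch statistic from the Results is included for completeness.

```python
# Hedged simulation of the probabilistic reversal-learning task described
# above, plus the win-switch rate used in the Results. Our assumptions:
# rewards count toward the 9-of-10 criterion only on best-option choices,
# and contingencies "rotate" among the three options when the criterion hits.
import random
from collections import deque

def run_block(probs, n_trials, choose):
    """probs: list of reward probabilities, e.g. [0.9, 0.5, 0.1].
    choose() -> option index 0-2 (the simulated participant's policy)."""
    recent_best = deque(maxlen=10)
    history = []
    for _ in range(n_trials):
        c = choose()
        r = int(random.random() < probs[c])
        history.append((c, r))
        if c == probs.index(max(probs)):
            recent_best.append(r)
            if sum(recent_best) >= 9:        # 9 of last 10 rewarded: switch
                probs = probs[1:] + probs[:1]
                recent_best.clear()
    return history

def win_switch_rate(history):
    """Proportion of rewarded trials followed by a switch to another option."""
    wins = [(c, history[i + 1][0])
            for i, (c, r) in enumerate(history[:-1]) if r]
    return sum(c != nxt for c, nxt in wins) / len(wins) if wins else 0.0
```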
Additional features. Along with the task and questionnaire data, we examined state-level unemployment rates [45], confirmed COVID-19 cases [46], and mask usage [18] in the U.S.A.

Unemployment. The Carsey School of Public Policy reported unemployment rates for the months of February, April, May and June 2020. We utilized the rates in April and June as our markers for measuring the difference in unemployment between the pre-pandemic period and the pandemic period, respectively. …

Protests. We accessed the publicly available data from the Armed Conflict Location and Event Data project (ACLED, https://acleddata.com/special-projects/us-crisis-monitor/), which has been recording …

We also defined a proactivity metric (or score) to measure how adequately or inadequately a state reacted to COVID-19 [47]. This score was calculated based on two features: the number of days from baseline taken to introduce the stay-at-home order (i.e., baseline date − introduced date), and the number of days before the order was lifted (i.e., expiration date − introduced date), where the baseline date is defined as the date on which the first stay-at-home order was implemented. California was the first to enforce the order, on March 19th, 2020 (i.e., baseline date = 0). States where stay-at-home orders were not implemented had 'N/A' values, which were set to 0 in our calculation. Moreover, states that had an indefinite time frame for the orders were set to 100 in our calculation (i.e., expiration date = 100). To compute the proactivity score, we perform the following sum: … This metric, ranging from 0 (inadequate) to 100 (adequate), offers a reasonable approach for measuring proactive state interventions in response to the pandemic.

We estimated perceptual parameters individually for the first and second halves of the task (i.e., blocks 1 and 2). Each participant's choices (i.e., deck 1, 2, or 3) and outcomes (win or loss) were entered as separate column vectors with rows corresponding to trials. Wins were encoded as '1', losses as '0', and choices as '1', '2', or '3'. We selected the autoregressive 3-level HGF multi-arm bandit configuration for our perceptual model and paired it with the softmax-mu03 decision model. Table 4 …

To conduct meta-analyses of effect replication across experiments, we fit random effects models in the …
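The proactivity score's two ingredients survive in the text above, but the summation itself does not; the sketch below is therefore a plausible reconstruction under our own assumptions (equal weighting of promptness and duration, clamped to the stated 0-100 range), not the authors' exact formula.

```python
# Reconstruction (our assumption) of the state proactivity score: reward
# introducing a stay-at-home order soon after the 3/19/2020 baseline and
# keeping it in force for longer, clamped to the stated 0-100 range.
def proactivity(days_to_introduce, days_in_force):
    """days_to_introduce: introduced date minus baseline date (0 for
    California, the first state). days_in_force: expiration minus
    introduction (100 if the order was open-ended; 0 if no order,
    per the text)."""
    return max(0, min(100, days_in_force - days_to_introduce))
```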
LC-MSMS Method for Determination of Metolazone in Human Plasma

A rapid, sensitive and specific method for quantification of metolazone in human plasma, using metaxalone as internal standard, is described. Sample preparation involved a simple liquid-liquid extraction procedure. The extract was analyzed by high performance liquid chromatography coupled to electrospray tandem mass spectrometry (LC-MS-MS). Chromatography was performed isocratically on a 5 μm C18 analytical column (50 mm × 4.6 mm i.d.) with buffer-acetonitrile 20:80 (v/v) as mobile phase. The response to metolazone was a linear function of concentration over the range 1.00 to 2000.00 ng mL−1. The lower limit of quantification in plasma was 1.0 ng mL−1. The method was successfully applied in a bioequivalence study of a metolazone formulation after administration as a single oral dose.

Introduction
Metolazone has the molecular formula C16H16ClN3O3S, the chemical name 7-chloro-1,2,3,4-tetrahydro-2-methyl-3-(2-methylphenyl)-4-oxo-6-quinazolinesulfonamide, and a molecular weight of 365.83 [7]. Metolazone is a quinazoline diuretic with properties generally similar to the thiazide diuretics. The actions of metolazone result from interference with the renal tubular mechanism of electrolyte reabsorption. Metolazone acts primarily to inhibit sodium reabsorption, increasing sodium and potassium excretion. Metolazone does not inhibit carbonic anhydrase. A proximal action of metolazone has been shown in humans by increased excretion of phosphate and magnesium ions and by a markedly increased fractional excretion of sodium in patients with severely compromised glomerular filtration [6-16]. Previous work shows that metolazone has been determined in biological fluids such as human plasma, blood and urine by high pressure liquid chromatography with fluorescence detection and coupled mass detection [1-5]. The objective of this study was to develop a simple, inexpensive, sensitive, rapid, and accurate method for analysis of metolazone in human plasma with reliable reproducibility, suitable for pharmacokinetic studies.

Chemicals and Reagents
Working standard of metolazone was obtained from Centaur Chemical Pvt. Ltd, Mumbai-400055, India. Working standard of metaxalone was obtained from Lannett Company, Inc., 9000 State Rd., Philadelphia. Methanol and acetonitrile of HPLC grade were from J.T. Baker. Ammonium acetate and formic acid of HPLC grade were from BDH Laboratory Reagents, England. HPLC-grade water was from Merck.
Instrumentation and Chromatographic Conditions
The HPLC system (Shimadzu LC-20AD) consisted of a binary pump and an autosampler (SIL-HTc). Detection was performed with an Applied Biosystems Sciex (API 2000) mass spectrometer with atmospheric ion spray for ion production, controlled by Analyst 1.4 software. Chromatography was performed isocratically on a 50 mm × 4.6 mm i.d., 5 µm particle, Thermo Hypurity C18 analytical column. The mobile phase was buffer-acetonitrile 20:80 (v/v) at a flow rate of 0.4 mL min−1. The buffer was 2 mM ammonium acetate, with pH adjusted to 3.0 with formic acid. Chromatography was performed at ambient temperature. The ion spray potential was set at 5.5 kV and the source temperature was 400°C. The collision activation dissociation (CAD) gas was set at 4.0; nitrogen was used as collision gas. The instrument was set up in multiple reaction monitoring (MRM) mode; the transition m/z 366.1 → 259.0 was monitored for metolazone and the transition m/z 222.2 → 161.1 for metaxalone. Figures 1 and 2 show the MS-MS scans for metolazone and metaxalone.

Preparation of Stock Solutions, Calibration and Quality Control Samples
Stock solutions of metolazone (1000.00 µg mL−1) and metaxalone (1000.00 µg mL−1) were prepared in methanol. A series of working standards of metolazone containing 20, 50, 500, 1000, 10000, 20000, 30000 and 40000 ng mL−1 was prepared by diluting the stock solution with diluent, methanol-water 60:40 (v/v). Working internal standard solution (10.00 µg mL−1) was prepared in methanol and water in the ratio 60:40 (v/v). Low, medium and high concentration quality control solutions (60.00, 4000.00 and 35000.00 ng mL−1, respectively) were prepared in the same diluent. Calibration standards were prepared by spiking blank plasma with metolazone at concentrations of 1.00, 2.50, 25.00, 50.00, 500.00, 1000.00, 1500.00 and 2000.00 ng mL−1. Quality control samples were prepared by spiking blank plasma with 3.00, 200.00 and 1750.00 ng mL−1 metolazone. Stock solutions were stored at 4-8°C and used within 33 days of preparation.

Sample Preparation
In a 2.0 mL polypropylene centrifuge tube, 500 µL of plasma sample was spiked with 50.00 µL of internal standard solution (10 µg mL−1) and 25 µL of diluent (or metolazone working calibration standard solution or quality-control solution), and 1.5 mL methyl tert-butyl ether was added. Samples were vortexed for 10.0 minutes, followed by centrifugation at 15000 × g for 10 minutes. The supernatant layer was separated and evaporated to dryness under a stream of nitrogen at 50°C. The dry residue was dissolved in 250 µL of mobile phase and vortexed for 1.0 minute, and 10 µL of the sample was injected.

Validation Procedures
The method was validated in accordance with current acceptance criteria. The specificity of the analytical method was investigated by extraction and analysis of blank plasma samples from six different sources to assess potential interference from endogenous substances. The apparent response at the retention times of metolazone and metaxalone was compared with that at the lower limit of quantification (1.0 ng mL−1). Representative chromatograms illustrating the specificity of the method are shown in Figures 3 and 4.
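The spiking scheme above is internally consistent: each working solution is diluted 1:20 into plasma (25 µL into 500 µL), mapping the 20-40000 ng mL−1 working range onto the 1.00-2000.00 ng mL−1 calibration range. A quick check, under the convention (our assumption) that the nominal level is referred to the plasma volume alone:

```python
# Consistency check on the spiking scheme: 25 uL of working solution into
# 500 uL of blank plasma is a 1:20 dilution, so each working standard maps
# onto the stated calibration or QC level (nominal level referred to the
# plasma volume alone, ignoring the 50 uL of IS solution -- our assumption).
def nominal_plasma_conc(working_ng_per_ml, spike_ul=25.0, plasma_ul=500.0):
    return working_ng_per_ml * spike_ul / plasma_ul

for working, expected in [(20, 1.00), (50, 2.50), (40000, 2000.00),
                          (60, 3.00), (4000, 200.00), (35000, 1750.00)]:
    assert abs(nominal_plasma_conc(working) - expected) < 1e-9
```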
The acceptance criteria were that for metolazone the mean interference from the six individual sources should be ≤20% of the signal at the LLOQ, and that for metaxalone (internal standard) the mean interference from the six individual sources should be ≤5% of the signal at the working concentration.

The calibration equation was determined by least-squares linear regression (weighting 1/X2) over the range 1.0 to 2000.0 ng mL−1 in plasma. The precision and accuracy of the method were determined at the three QC sample levels, in six replicates, together with calibration samples from different validation batches. The fundamental properties studied during method validation were the stability of stock solution stored at 4-8°C for one month, freeze-thaw stability through three cycles, short-term stability, autosampler stability and long-term stability. For freeze-thaw stability, QC plasma samples were subjected to three cycles from −20°C to room temperature. Short-term bench-top stability was determined by placing samples on the bench top at ambient temperature for 24 h. Autosampler stability was assessed by placing processed QC samples in an autosampler at 10°C for 24 h, and long-term stability was evaluated by freezing QC samples at −20°C for a month. Recovery was assessed by comparing the peak areas of neat analyte standards with those of spiked standards at three concentrations before and after extraction [17].

Pharmacokinetic Application
The method was used to determine pharmacokinetic data for metolazone in human volunteers.

Results and Discussion
The assay was found to be linear for metolazone concentrations in the range 1.0 to 2000.00 ng mL−1 (r = 0.9998; Table 1). Precision and accuracy were satisfactory at the three QC concentrations. The intra-day precision and accuracy of the method at the QC levels (3.0, 200.0 and 1750.0 ng mL−1, n = 6) were 5.43, 5.63 and 4.27% and 106.10, 104.59 and 102.07%, respectively. The inter-day precision and accuracy of the method at the QC levels (n = 6) were 5.82, 5.27 and 5.13% and 106.58, 103.97 and 100.55%, respectively. The results obtained from measurement of linearity, precision and accuracy are listed in Tables 2 and 3. The extraction recovery of the method at the QC levels (n = 6) was 62.23, 67.40 and 78.58%, respectively. The absolute mean recoveries of metolazone and the internal standard (metaxalone) were 69.41 and 79.46%, respectively. Stock solution stored at 4-8°C was found to be stable for 33 days. When drug stability at the LQC and HQC concentrations was measured after three freeze-thaw cycles, the differences from freshly prepared samples (1.28 and 3.54%, respectively) were low. When bench-top stability at the LQC and HQC concentrations was measured after 24 h, the differences from freshly prepared QC samples were −4.33 and −1.07%, respectively. When autosampler stability at the LQC and HQC concentrations was measured after 24 h, the differences from freshly prepared samples were approximately 1.52 and 0.77%, respectively. When drug stability in the matrix at −20°C for 31 days was measured at the LQC and HQC concentrations, the differences from freshly prepared samples were 4.78 and −2.57%, respectively.
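The 1/X2 weighted least-squares regression mentioned above down-weights the high end of the curve so that the LLOQ region dominates the fit. A minimal sketch (our own implementation of the standard weighted normal equations, not the vendor software used):

```python
# Minimal weighted (1/x^2) least-squares calibration, as described above,
# plus back-calculation of concentration from an instrument response.
import numpy as np

def weighted_fit(conc, response):
    x, y = np.asarray(conc, float), np.asarray(response, float)
    w = 1.0 / x**2                      # 1/X^2 weighting
    Sw, Swx, Swy = w.sum(), (w * x).sum(), (w * y).sum()
    Swxx, Swxy = (w * x * x).sum(), (w * x * y).sum()
    slope = (Sw * Swxy - Swx * Swy) / (Sw * Swxx - Swx**2)
    intercept = (Swy - slope * Swx) / Sw
    return slope, intercept

def back_calculate(response, slope, intercept):
    return (response - intercept) / slope   # concentration from area ratio
```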
Pharmacokinetic Application
After oral administration to volunteers, the observed peak plasma concentration (Cmax) was 49.00 ng mL−1 for test and 49.66 ng mL−1 for reference. The times (Tmax) taken to achieve peak plasma concentration were 2.83 hours for test and 2.75 hours for reference. In addition, the calculated 90% confidence intervals (CI) for the mean Cmax, AUClast, and AUC0-∞ individual ratios were within the 80-125% interval stipulated by the US Food and Drug Administration.

LC-MS-MS analysis with a reversed-phase column and a low-aqueous, high-organic mobile phase was found to be ideal for the analysis. Increasing the organic content of the mobile phase resulted in improved sensitivity by enhancement of the ionization yield [19-21]. Use of methanol as the organic component of the mobile phase did not result in adequate sensitivity and selectivity, owing to bad peak shape and increased interference from the plasma. Acetonitrile as organic component, together with the pH of the buffer, resulted in better sensitivity, but variation of the amount of acetonitrile in the mobile phase affected the run time. The mobile phase was optimized to provide sufficient selectivity in a short separation time. Analyte and internal standard responded best to positive ionization using atmospheric turbo ion spray for ion production. In order to obtain a higher response, a Hypersil Hypurity (4.6 × 50 mm), 5 µm column was used. The pH of the mobile phase was adjusted to 3.0 with formic acid. The assay was found to be linear in the concentration range 1.000 to 2000.000 ng mL−1 for metolazone. Precision and accuracy were satisfactory at the three concentrations studied. The absolute mean recoveries of metolazone and metaxalone were 69.41 and 79.46%, respectively; recovery of analyte and internal standard was consistent, precise and reproducible. Stability of analyte and internal standard in methanol stock solution was verified on storage for 33 days at 4-8°C. The proposed method proved accurate and selective and met the standards for bioanalytical method validation accepted by the FDA [17].

Conclusion
A rapid and sensitive LC-MS-MS method is reported for the determination of metolazone in human plasma. The assay was successfully applied to determine concentrations of the drug in a bioequivalence study of metolazone. The method allows high sample throughput due to its short run time and relatively simple sample preparation procedure. The wide linearity range, 1.0 to 2000 ng mL−1, makes the method applicable to various doses of metolazone.
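The 80-125% acceptance window cited above applies to the 90% CI of the test/reference geometric mean ratio. A simplified sketch follows (a paired log-scale t-interval, ignoring the crossover-ANOVA terms a full bioequivalence analysis would use):

```python
# Simplified 90% CI for the test/reference geometric mean ratio; compare
# the returned interval (in percent) with the FDA's 80-125% window. A real
# 2x2 crossover analysis would use ANOVA residual variance instead.
import numpy as np
from scipy import stats

def be_interval(test, reference, alpha=0.10):
    d = np.log(np.asarray(test, float)) - np.log(np.asarray(reference, float))
    se = d.std(ddof=1) / np.sqrt(len(d))
    t = stats.t.ppf(1 - alpha / 2, len(d) - 1)
    return tuple(100 * np.exp(d.mean() + s * t * se) for s in (-1, 1))
```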
Figure 4. Representative chromatogram of plasma spiked with metolazone at the lower limit of quantification.
Figure 5. Mean plasma concentration-time profile of metolazone after a single oral dose of metolazone (USP 5 mg) tablet to healthy male volunteers.
Table 2. Concentrations of metolazone in calibration standards prepared in human plasma.
Table 3. Data of inter-day precision and accuracy.
Table 4. Concentrations of metolazone in stability samples prepared in human plasma (ng mL−1; n = 6).
NEURAL CONTROL OF SWALLOWING

BACKGROUND
Swallowing is a motor process that is very difficult to study neurophysiologically and is subject to several discordances. Maybe that is the reason for the scarcity of papers about it.

OBJECTIVE
To describe the neural control of chewing and the qualification of the oral bolus. A review of the cranial nerves involved with swallowing and their relationship with the brainstem, cerebellum, basal nuclei and cortex was made.

METHODS
From the reviewed literature, including personal research and new observations, a consistent and necessary revision of concepts, not rarely conflicting, was made.

RESULTS AND CONCLUSION
Five different possibilities for the swallowing oral phase are described: nutritional voluntary, primary cortical, semiautomatic, subsequent gulps, and spontaneous. In relation to the neural control of the swallowing pharyngeal phase, the stimulus that triggers the pharyngeal phase is not the pharyngeal contact produced by the bolus passage, but pharyngeal distension by pressure, with or without contents. In nutritional swallowing, food and pressure are transferred, but in the primary cortical oral phase, only pressure is transferred, and the pharyngeal response is similar. The pharyngeal phase incorporates, as its functional part, the oral phase dynamics already in course. The pharyngeal phase starts by action of the pharyngeal plexus, composed of the glossopharyngeal (IX), vagus (X) and accessory (XI) nerves, with involvement of the trigeminal (V), facial (VII), glossopharyngeal (IX) and hypoglossal (XII) nerves. The cervical plexus (C1, C2) and the hypoglossal nerve on each side form the ansa cervicalis, from which a pathway of cervical origin goes to the geniohyoid muscle, which acts in the elevation of the hyoid-laryngeal complex. We also appraise the neural control of the swallowing esophageal phase. Besides other hypotheses, we consider it possible that the longitudinal and circular muscular layers of the esophagus display, respectively, long-pitch and short-pitch spiral fibers. This morphology, associated with the concept of energy preservation, allows us to admit that contraction of the longitudinal layer, with its long-pitch spiral arrangement, would be able to widen the esophagus, diminishing resistance to flow, probably also by opening the gastroesophageal transition. In this way, the circular layer, with its short-pitch spiral fibers, would propel the food downwards by sequential contraction.

INTRODUCTION
To understand how the nervous system controls any biological process, we must know which afferent and efferent impulses are necessary, where they come from, what their destination is, and which functions integrate the process (1). Swallowing is a motor process whose neurophysiological study is very difficult, and it is the subject of several discordances (2). These observations and the literature review show that a great part of the accepted mechanisms for the neural control of swallowing cannot be considered trustworthy hypotheses. In this way, the neural control of swallowing remains a research field, open to new considerations.
The swallowing process is formed by the oral, pharyngeal and esophageal phases (2,3), with much controversy involving their mechanisms. Considerable progress has come from observations of neurological lesions, and many methods are available for confirmation of the hypotheses that, in the end, remain just hypotheses. Nevertheless, there is an expressive quantity of new morphological and functional conceptions that, even being only hypotheses, are at least more structured than the empirical ones used until now to explain the swallowing mechanisms.

It had been believed that the swallowing control center was located exclusively in the brainstem, and that the entire swallowing mechanism, with the automatic and semiautomatic movements of chewing and swallowing, was involuntary in genesis and regulation. From observations of patients with cortical dysphagia, the role of the cerebral cortex in the swallowing control mechanism has been recognized and extensively studied (4).

Based on the embryology of the nervous system, a rhombencephalic center was described, formed by association of the third primitive vesicle (hindbrain) with the second one (mesencephalon or midbrain), the origin of the brainstem and cerebellum. The rhombencephalic center would receive stimuli produced by the passage of the food bolus over receptors at the base of the tongue, on the palatoglossal and palatopharyngeal pillars, on the palate, and on the pharyngeal walls, especially the posterior one, starting an involuntary and coordinated process that would characterize the pharyngeal phase of swallowing. The assumption was that this phase would be controlled, in physiological circumstances, by a framework continuously modified by peripheral afferent stimuli that would especially influence muscular function, adjusting the strength and time of contraction to the size of the bolus swallowed. The entrance of the bolus into the oropharynx would produce soft palate elevation and reflex contraction of the upper pharyngeal constrictor. In addition, to protect the airways, the bolus entrance would initiate a peristaltic wave that would propagate to the other muscles, narrowing the pharynx, except at the level of the cricopharyngeal muscle, which would relax, allowing passage of the pharyngeal content to the esophagus (5).

A center involving sensory and motor nuclei integrated by a network of interneurons located in the brainstem complements the described coordination (6-8).

A new approach considers the functional activity of the oropharynx as composed of the oral and pharyngeal phases of swallowing (2,9,10). This functional activity would be produced by muscular contraction and coordinated by a control center in the brainstem, designated the Central Pattern Generator (CPG) for Swallowing (9,11-15). This pattern-generating center would consist of two hemicenters, one on each side of the brainstem, which, under physiological conditions, would synchronize and organize the bilateral contraction of the oral and pharyngeal muscles. Their nerve fibers would cross the midline of the brainstem, interconnecting the two halves of the involved generating centers with swallowing-linked neurons in the dorsal and ventral regions of the brainstem (16,17).
It has been admitted that in this pattern-generating center the solitary tract nucleus would receive converging information, both from peripheral impulses triggered by the swallowing stimulus and from the cerebral cortex (9,18). This convergence of stimuli on the solitary tract nucleus would be primarily important for the induction of voluntary swallowing (15). It has been considered that the first event observed in the "swallowing reflex" would occur in the oropharyngeal cavity (oral and pharyngeal cavities), where the bolus would produce a sensory afferent stimulus that would inform the brainstem and cortex (19-21).

In nutritive swallowing, the first cortical command would be sent to the solitary tract nucleus. Thus, eating and drinking sequentially could be voluntarily initiated or facilitated by the cerebral cortex through the neural network (CPG) of the brainstem (2,20,22,23). It was also considered that, in voluntary deglutition, regions of the cortex and subcortical areas related to swallowing would serve mainly to trigger and control the onset of the swallowing motor sequence, especially the oral phase (20).

In disagreement with the bilateral integration of the brainstem admitted in the pattern-generating center conception (9,11-15), it has already been described that both the dorsal (sensory) and ventral (motor) regions represented on both sides of the brainstem would be able to independently coordinate the pharyngeal and esophageal phases of swallowing on each side (24).

Although the oral and pharyngeal cavities are morphologically contiguous and function sequentially, the oral and pharyngeal swallowing phases are distinct from each other in structure, innervation and neural control. The oral phase is voluntary and the pharyngeal one is reflex. Designating the oral and pharyngeal phases as oropharyngeal or buccopharyngeal (2,9,10,12,25,26) is inadequate, although not rare. Anatomically, the oropharynx is the intermediate segment that connects the oral and pharyngeal cavities, receiving the contents transferred during swallowing, which in no way defines the functional role of the oral and pharyngeal phases of swallowing.

High dysphagia has often been defined as oropharyngeal dysphagia. High dysphagia may occur with impairment of both phases, but the possibility of an exclusively oral or pharyngeal injury cannot be ignored. The fact that injury of one neighboring phase interferes with the dynamics of the other emphasizes impairment of the sequence, not of both phases. The oropharyngeal designation for this kind of dysphagia diverts the clinical and therapeutic focus, which should be directed to the phase actually compromised, with doubtful therapeutic adequacy otherwise. The designation of oropharyngeal dysphagia has led to misclassifying the dysphagia that affects the oral and pharyngeal phases as transference dysphagia, and esophageal dysphagia as a conduction one. Transference is proper to the oral phase, and conduction to the pharyngeal and esophageal phases. Transference is a voluntary process and occurs in the voluntary oral phase, whereas conduction occurs in the pharyngeal and esophageal phases, both reflex (27).

It is a fact that we have learned a great deal from observing neurological dysphagia. In addition, today many methods are available for the study of swallowing and its disorders, which, while enabling us to better understand swallowing physiology, highlight the significant number of conflicting concepts still in force.
The aim of this work is to offer new conceptual alternatives, based on the literature and on personal research, to give a more solid basis to the hypotheses used to explain the swallowing mechanisms and, consequently, the neural control of swallowing.

CHEWING

Mastication, basically voluntary, integrates the activation of the chewing muscles, innervated by the trigeminal pair (V); the tongue muscles, innervated by the hypoglossal pair (XII); and, with less evident participation, the muscles of facial expression, especially the orbicularis oris and buccinators, which, like the other skin-inserted muscles, are innervated by the facial pair (VII). Trigeminal afferent fibers reach the dorsal region of the brainstem (the main sensory nucleus of V) and, still along an afferent pathway through the trigeminal lemniscus, reach the thalamus, from where axons go to the postcentral gyrus (somatosensory cortex) in the parietal region of the cerebral cortex (1). The postcentral gyrus transfers information to the precentral gyrus (somatomotor cortex) in the frontal region, generating an efferent motor response by the cortico-nuclear route (pyramidal, voluntary), which reaches the ventral region of the brainstem, where the trigeminal motor nucleus is located on each side. From this nucleus, the motor route of the trigeminal nerve activates the chewing muscles (1,28-31).

By activation through the cortico-nuclear pathway, the hypoglossal motor nucleus in the brainstem gives the tongue its dynamics in the chewing process. Afferent and efferent facial nerve pathways, in functional association, participate in the accommodation of the bolus and in the generation of oral cavity pressure by adjusting the tension of the cavity walls, a function especially dependent on the orbicularis oris and buccinators.

The afferent trigeminal fibers also reach the trigeminal mesencephalic nucleus, which connects the sensory route with the trigeminal motor root without cortical mediation. This direct sensory-motor relationship gives the chewing action, which is voluntary, a reflex component (1) that, through proprioceptive perception during bolus preparation, modulates chewing intensity in response to the continuously changing resistance of the bolus under preparation.

ORAL QUALIFICATION

The oral cavity is able to identify several characteristics of its contents. It presents at least four distinct types of perception: thermal, painful, mechanical and chemical (32,33).

Thermal reception can perceive heat and cold at various levels. When pleasing and appropriate to the type of food, these perceptions are incorporated into the pleasure of the diet; when extreme and damaging, they produce rejection.

Pain reception usually results from mechanical, thermal or chemical hyper-stimuli acting on sensitive afferent pathways, warning of and preventing injury. There is, however, a painful submodality produced by capsaicin, present in a large number of peppers, which probably uses the same pain pathway but whose perception is often experienced as dietary pleasure.
Mechanical reception allows perception of the contact of the bolus against the intraoral structures. The tongue presses the bolus, gathering information defined as tactile. This information allows the physical characteristics of the bolus to be perceived, detecting any impropriety in its contents. Mechanical reception is also responsible for characterizing the volume and viscosity of the oral bolus, in order to define how many motor units must be depolarized to generate the oral pressure necessary to transfer the contents from the oral cavity to the pharynx.

Chemical reception identifies the tastes by different mechanisms. Sweet appears to be identified by the coupling of a primary messenger (taste protein) with a secondary messenger (cAMP, cyclic adenosine monophosphate), whose increased concentration closes the potassium channels in the gustatory receptors, with membrane depolarization. It is considered that the intracellular metabolic pathways responsible for natural sweeteners are distinct from those activated by artificial sweeteners, whose secondary messenger would be IP3 (inositol triphosphate), acting on the calcium channels and provoking calcium entry into the cells, with depolarization. The identification of the bitter taste is given by coupling with the same primary messenger (taste protein), resulting in a calcium increase due to the action of the IP3 secondary messenger, releasing a neurotransmitter without membrane depolarization. The salty perception is generated by the direct passage of sodium through membrane channels, which depolarizes the membrane. The hydrogen of sour or acid stimuli penetrates the cell membrane, blocking the potassium channels, which sustains membrane depolarization (32,33).

Although sweet, salty, sour and bitter are the tastes considered basic, others such as metallic, astringent and, more recently, umami (monosodium glutamate) have been suggested as primary. Nevertheless, the first four are the ones that have persisted as basic over time. It is not very clear whether and how the association of the basic tastes (sweet, salty, sour and bitter) can appropriately produce the palate, i.e., the gustatory perception as a whole. The palate, which can be distinct for each of us, is an association of social level and learning, basic tastes, tactile and thermal perceptions, and certainly the impressions permitted by the senses of vision and smell (33,34).

The perception of tastes in the oral cavity has been attributed primarily to the tongue. The classic description points to sequential areas on each side of the anterior two thirds of the tongue as having selective capacity for the basic tastes: the anterior tip for sweet; the sides, in sequence, for salty and sour; and the posterior central area for bitter (31,34-39). This concept has already been contested: the tongue is able to perceive all the basic tastes in all its regions, with expressive predominance of the bitter one (40-42).

The tongue's filiform, fungiform, foliate and circumvallate papillae are anatomical elements involved with the chemical senses (taste). These papillae display encrusted gustatory buttons. In the filiform papillae, gustatory buttons are rare or absent; in the fungiform ones there are few; but in the foliate papillae, and especially in the circumvallate ones, there are many gustatory buttons (41,42).
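For quick reference, the transduction cascades described above can be condensed into a small data structure. The Python sketch below merely restates the article's description of each cascade; it is a mnemonic summary, not a signaling model, and the field names are my own.

```python
# Schematic summary of the taste-transduction cascades described in the text.
# Each entry restates the article's description; field names are illustrative.
TASTE_TRANSDUCTION = {
    "sweet (natural)":    {"messenger": "taste protein -> cAMP",
                           "channel_effect": "closes K+ channels",
                           "depolarizes": True},
    "sweet (artificial)": {"messenger": "taste protein -> IP3",
                           "channel_effect": "opens Ca2+ channels (Ca2+ entry)",
                           "depolarizes": True},
    "bitter":             {"messenger": "taste protein -> IP3",
                           "channel_effect": "intracellular Ca2+ increase",
                           "depolarizes": False},  # transmitter release without depolarization
    "salty":              {"messenger": None,
                           "channel_effect": "Na+ passes directly through membrane channels",
                           "depolarizes": True},
    "sour":               {"messenger": None,
                           "channel_effect": "H+ blocks K+ channels",
                           "depolarizes": True},
}

for taste, mech in TASTE_TRANSDUCTION.items():
    print(f"{taste}: {mech['channel_effect']} (depolarizes: {mech['depolarizes']})")
```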
Buttons considered gustatory can be identified, in addition to the tongue papillae, on the palate and in the vallecula. Buttons morphologically similar to those defined as gustatory have been found in pharyngeal regions where, at first, no taste is perceived. In the vallecula, even with the oral cavity anesthetized, the bitter taste transferred to the pharynx can be perceived through vagus nerve conduction (41).

As far as we know, no morphological type of receptor other than those admitted as gustatory has been described or observed in the oral cavity. The oral cavity, however, holds several other perceptions, for which specific receptors would supposedly be necessary. Nevertheless, there is no evidence indicating that any receptor is responsible for detecting only one type of stimulus (43).

It is possible that receptors deemed gustatory are also able to receive other oral stimuli. This hypothesis is reinforced by the presence of receptors morphologically similar to the gustatory ones in the pharynx (except in the vallecula) and larynx (34), where tastes are not perceived as palate. There are also descriptions of gustatory perception evoked by thermal stimulation of the tongue (44), such as a sweet perception produced by warming the anterior edge of the tongue from a cool state, and the evocation of sour or salty perceptions with intensified cooling (45).

CRANIAL NERVES

The cranial nerves associated with the swallowing process are the trigeminal (V), facial (VII), glossopharyngeal (IX), vagus (X), accessory (XI, usually not considered) and hypoglossal (XII). It should be emphasized that the structures involved in the swallowing process are pairs, anatomically and/or functionally, due to their dual-side innervation. Although anatomically unique, the tongue, palate, pharynx and larynx are functional pairs, each side having independent innervation (1,7,29,30).

From receptors on each side of the oral cavity, the trigeminal (V), facial (VII) and glossopharyngeal (IX) nerves conduct information to the brainstem. These mixed nerves carry sensitivity (afferent pathway) and motor command (efferent pathway). The afferent pathways of the anterior two thirds of the tongue are supplied by the lingual nerve, which associates the trigeminal (general sensibility) with the facial nerve (taste). In the posterior third of the tongue, both general sensibility and taste are conducted by the glossopharyngeal nerve (33,39,41-46).

In their afferent pathways toward the brainstem, the trigeminal, facial and glossopharyngeal nerves of both sides make ganglionic synapses analogous to those of the posterior roots of the spinal cord. The afferent pathway of the trigeminal nerve makes synapses in the trigeminal ganglion (of Gasser); the facial nerve, in the geniculate ganglion; and the glossopharyngeal, in the rostral (upper) ganglion (1,30,39).
The trigeminal nerve (V) has three branches: upper (ophthalmic), middle (maxillary) and lower (mandibular). The upper and middle branches are exclusively sensitive, and the lower one is mixed. The sensitive fibers of the three branches innervate the face in transverse bands of representation. Regarding the oral cavity, the middle branch (maxillary) carries the sensitivity of the teeth of the upper arcade, the upper lip, the cheeks, the hard palate (mouth mucosa) and the mucosa of the rhinopharynx. The sensitive portion of the lower branch (mandibular) is responsible for the sensitivity of the teeth of the lower arcade and of the lower mucosa of the mouth, as well as for the general sensitivity of the anterior two thirds of the tongue (1,29,30).

From the trigeminal ganglion to the brainstem, all the sensory pathways end in the posterior portion of the brainstem, over the trigeminal sensitive nucleus, which occupies the medulla oblongata (spinal tract nucleus of cranial nerve V), the pons (main sensory nucleus of cranial nerve V) and the midbrain (midbrain nucleus of cranial nerve V). Centrally, the sensitive fibers divide into short ascending branches that end in the main sensory nucleus, serving tactile sensibility, and into long descending branches that serve touch, temperature and pain, also providing collateral pathways to the spinal nucleus of cranial nerve V (29).

It is believed that proprioceptive fibers from the midbrain nucleus of the trigeminal nerve, in synapse with its motor nucleus located in the upper portion of the pons (47), would be able to integrate important chewing reflex arcs (1,29). Unless expressly overridden, these arcs allow reflex modulation of chewing intensity based on variations in bolus consistency, even during voluntary bolus preparation.

The motor root of the trigeminal nerve emerges from the ventral portion of the pons and runs through the mandibular root to innervate the chewing muscles, the mylohyoid, the anterior belly of the digastric and the tensor muscle of the palate (1,29,30).

The facial nerve (VII) is a mixed one, considering its motor root in association with the sensitive root formed by the intermediate nerve (of Wrisberg) (1). The taste of the anterior two thirds of the tongue on each side is its responsibility. From the tongue, this afferent, preganglionic route follows the lingual nerve (association of nerves V and VII) and then the chorda tympani nerve (a facial branch) to make synapses in the geniculate ganglion. Through the intermediate nerve, the postganglionic fibers (special visceral afferent, gustatory route) synapse in the solitary tract nucleus of the medulla oblongata, in association with the general visceral afferent fibers, which provide sensitive innervation to the mucosa of the nasal cavities and soft palate (1).

The parasympathetic efferent fibers of the facial nerve, originating in the superior salivary nucleus located on each side of the upper portion of the medulla oblongata, run through the intermediate nerve and then through the chorda tympani nerve to make synapses in the submandibular ganglion. From there, through postganglionic fibers, they stimulate salivary secretion by the submandibular and sublingual glands (1).

The motor portion of the facial nerve has its nucleus in the ventral portion of the pons. Its fibers stimulate the skin-inserted muscles of the face, neck and scalp, as well as the posterior belly of the digastric and the stylohyoid muscles (1,8,29,39).
The glossopharyngeal (IX) nerve leaves the skull together with the vagus (X) and accessory (XI) nerves. Its general visceral afferent and special visceral afferent fibers run in association. The general visceral afferent fibers are responsible for the general sensitivity of the oropharynx mucosa and of the posterior third of the tongue, and the special visceral afferent fibers for the taste of the posterior third of the tongue. These preganglionic fibers make synapses in the superior ganglion, and the postganglionic fibers end in the solitary tract nucleus (1,8,29).

The efferent pathways of the glossopharyngeal nerve come from two distinct nuclei of the medulla oblongata: the inferior salivary (parasympathetic) nucleus and the ambiguus motor (special visceral efferent) nucleus. The parasympathetic fibers stimulate salivary secretion after synapses in the otic ganglion, from which postganglionic fibers emerge to innervate the parotid gland (1,29,31).

The only motor role of the glossopharyngeal nerve is with the stylopharyngeus muscle. Nevertheless, it has also been considered motor to the superior pharyngeal constrictor muscle, an activity previously attributed to the vagus nerve, which is responsible for the motor innervation of all the pharyngeal constrictor muscles (1,29).

The vagus (X) nerve has relationships extending from the cervical region to the abdomen (transverse colon). Its sensory afferents connect with the solitary tract nucleus located in the medulla oblongata. The special visceral efference (motor pathway) comes from the ambiguus nucleus in the ventral region of the medulla oblongata, and the parasympathetic fibers (general visceral efference) from the dorsal motor nucleus of the vagus (1,29-31).

The special visceral afferent (taste) and general visceral afferent (sensibility) pathways of the vagus nerve, after synapses in a peripheral ganglion (the inferior or caudal one), have their postganglionic fibers end in the solitary tract nucleus, similarly to what is observed for the intermediate portion of the facial nerve and for the glossopharyngeal nerve. The general visceral afferent fibers conduct impulses related to the sensitivity of the pharynx, larynx, trachea and esophagus, and the special visceral afferent route leads taste stimuli from receptors in the vallecula and in a small posterior area of the tongue next to the vallecula (1).

The general visceral efferent (parasympathetic) fibers of the vagus nerve originate in the dorsal motor nucleus of the vagus and, on each side, gather into a single descending trunk, emitting branches in the cervical, thoracic and abdominal regions, where they end. These preganglionic fibers establish synapses in peripheral ganglia of the parasympathetic (vegetative or autonomic) nervous system, close to, or even inside, the visceral walls (1,29-31).

The special visceral efferent (motor) fibers of the vagus originate in the ambiguus nucleus and are responsible for the innervation of the striated muscles of the pharynx, larynx and esophagus (1).
The accessory (XI) nerve, not always considered among those related to swallowing control, presents special visceral efferent fibers coming from the ambiguus nucleus (motor to striated muscles of branchial origin) that join the special visceral efferent fibers of the vagus. Thus, in addition to the vagus (X) nerve, the accessory (XI) nerve would also be responsible for the motor innervation of the striated portions of the pharynx, larynx and esophagus. A possible second association between the vagus and accessory nerves would be the presence of parasympathetic fibers (general visceral efferent) in the accessory nerve, originating in the dorsal nucleus of the vagus, which would accompany the vagus nerve fibers (1,29,47).

The hypoglossal (XII) nerve, a motor one, has an individualized nucleus in the ventromedial portion of each side of the medulla oblongata. It is responsible for the extrinsic and intrinsic muscles of the tongue. In addition, fibers from the cervical plexus in association with the hypoglossal nerve form the ansa cervicalis, from which a branch of the cervical plexus, usually C1, innervates the geniohyoid muscle, one of the muscles responsible for the hyoid-laryngeal displacement (1,8,29).

The pharyngeal plexus (glossopharyngeal, vagus, and accessory through the vagus) is considered responsible for the reflex pharyngeal phase, in which afferent information from the pharynx reaches the brainstem, generating efferent stimuli to the pharyngeal structures involved in this phase of the swallowing process.

The pressure transferred from the oral cavity to the pharynx, by distention, would produce afferent stimuli that reach the brainstem, in particular the sensitive (solitary tract) nucleus. From the sensitive nucleus, through interneurons of the reticular formation, the ventral motor (ambiguus) nucleus of the brainstem generates efferent motor stimuli to the pharyngeal structures. Several structural movements initiated during the voluntary oral phase remain in progress until the end of the pharyngeal phase, such as hyoid-laryngeal elevation, swallowing apnea and posterior projection of the tongue toward the pharynx, started during oral ejection, not to mention the palate tension produced by the trigeminal nerve. In this way, the several elements of the oral phase incorporated by the reflex pharyngeal phase allow us to consider the pharyngeal phase as dependent on the cranial nerves V, VII, IX, X, XI and XII of both sides.

BRAINSTEM, CEREBELLUM, BASE NUCLEI AND CORTEX

The brainstem is formed by the medulla oblongata, the pons and the midbrain. It contains the nuclei of the cranial nerves related to swallowing. The sensory nuclei are located posteriorly on both sides, and the motor ones anteriorly. Interneurons and pathways of the reticular formation interconnect the sensory and motor nuclei in the brainstem. These are also connected with peripheral receptors, with the cerebellum, and, through the base nuclei, with the sensory and motor areas of the cerebral cortex, as well as with peripheral effectors such as muscles and salivary glands (1,8,28-30,39).
The brainstem receives and emits pathways carrying stimulus information to be integrated and distributed. From peripheral receptors, the brainstem sensitive nuclei receive peripheral sensitivity information through general afferent pathways (V, VII, IX), and taste through special afferent ones (VII, IX, X). During the oral phase, all the characteristics of the bolus are identified and analyzed by the cortex, which informs the brainstem of the pattern to be employed by the oral effectors. The brainstem, through the motor hypoglossal (XII) nerve, stimulates the intrinsic and extrinsic tongue muscles. The other swallowing muscles, as well as those involved in the pharyngeal phase, are stimulated by motor fibers of special visceral efferent nerves (V, VII, IX, X and XI). The brainstem also depolarizes general visceral efferent parasympathetic pathways to the salivary glands (nerves VII and IX) (8,29,30). The vagus (X), and perhaps the accessory (XI), send preganglionic parasympathetic fibers to the autonomic digestive system through fibers from the dorsal nucleus of the vagus (1,29,47).

In the brainstem, the pathways of the swallowing cranial nerves make functional connections with the cerebellum. These pathways enter and leave the cerebellum through the inferior, middle and superior cerebellar peduncles: the inferior one carries mainly afferent signals; the middle, only afferent signals; and the superior, mostly efferent signals. Specific longitudinal pathways interconnect brainstem and cerebellar nuclei with the base nuclei and cerebral cortex. In this way, the cerebellum and cerebral cortex can interfere with the mechanics effected by the cranial nerve pathways in the swallowing process (1,8,29).

In addition to balance and muscle tone, the cerebellum acts by determining the temporal sequence of the synergistic contraction of the different skeletal striated muscles, and can delay the motor signals by fractions of a second. It also acts by sequencing the motor activities from one movement to the next, and can control the relation between agonist and antagonist muscles. When necessary, the cerebellum can also adjust the motor activities produced by other parts of the brain (1,8,29).

Ascending and descending cerebellar pathways connect the cortex and the cerebellum. Originating in large parts of the premotor and motor cortex, the so-called cortico-ponto-cerebellar pathway goes to nuclei in the pons and thence to the contralateral hemisphere of the cerebellum. The signals that enter the cerebellum connect with its nuclei and leave as signals distributed to other parts of the brain. The cerebellar pathway, whose role is to help coordinate the motor activity sequences initiated by the cerebral cortex, originates in the cerebellar cortex and, after connection with one of its main nuclei (the dentate), goes to the thalamus and ends in the cerebral cortex (8).

Swallowing has its motor control bilaterally represented in the cerebral cortex (48-51). This bilateral representation means that peripheral stimuli reach both cerebral hemispheres, with admitted dominance of one of them. This dominance assumes that, under physiological conditions, the dominant hemisphere inhibits the function of the contralateral one. In dysphagia due to involvement of the dominant hemisphere, it has been observed that the contralateral hemisphere can increase its representation, with apparent functional recovery (52-54).
The oral phase, being voluntary, allows us to decide whether to swallow the oral content. The cortical area with oral control capacity has been identified in the lower portion of the precentral gyrus (frontal cortex) and postcentral gyrus (parietal cortex), where sensitivity (somatosensory cortex) and motor control (somatomotor cortex) are separated by the central sulcus (55,56) (FIGURE 1).

The intraoral qualification, linked to the sensory pathways of the cranial pairs V, VII and IX, with nuclei in the brainstem, has its general and special visceral afferent stimuli conducted through the base nuclei up to the cerebral cortex. From the cortex, direct or indirect efferent commands (involving the base nuclei) reach the motor nuclei of the brainstem, under cerebellar mediation, from where the motor pathways of these nerve pairs coordinate the dynamics of the peripheral effectors (1,8,29,30). Afferent pathways of nerves V, VII and IX go to the cerebral cortex. From the trigeminal (V) sensory nucleus, tactile sensitivity pathways pass to the thalamus and cortex through the secondary dorsal tracts. From the spinal nucleus of cranial pair V, tactile, pain and temperature pathways go to the thalamus and cortex via the secondary ventral tract. The facial (VII) and glossopharyngeal (IX) nerves connect with the cerebral cortex through sensitive fibers coming from the solitary tract nucleus via the medial lemniscus and thalamus. The efferent pathways from the cortex to the brainstem motor nuclei of these three pairs of cranial nerves, modulated by the cerebellum, are carried by bilateral (mainly crossed) connections of the cortico-nuclear (voluntary) tract. These voluntary pathways end in the brainstem in connection with the motor nuclei of nerves V and VII, as well as with motor neurons of pair IX in the ambiguus nucleus (28,29).

ORAL PHASE OF SWALLOWING

The oral phase can be classified into five subtypes: 1) nutritional voluntary oral phase; 2) primary cortical voluntary oral phase; 3) semiautomatic oral phase; 4) subsequent gulps oral phase; and 5) spontaneous oral phase. These five oral phase possibilities occur in association with the reflex pharyngeal and esophageal phases.

Nutritional voluntary oral phase

Nutritive swallowing following chewing, with the bolus prepared and qualified, usually places it over the tongue (organization) and transfers it (ejection) to the pharynx (57). The voluntary oral phase of swallowing leads information to the cortex through the afferent pathways of nerves V, VII and IX (mixed pairs), which allow the cortex to activate the motor portions of these mixed nerves in association with the hypoglossal (XII, a motor pair). Originating in peripheral receptors, afferent pathways reach the brainstem. From the sensory nuclei of cranial pair V, through the secondary ventral and dorsal tracts, they reach the thalamus and cortex with tactile (including volume and viscosity), thermal and possibly nociceptive sensations. General (sensitivity) and special (taste) afferent pathways carried by the cranial nerves VII and IX reach the solitary tract nucleus in the dorsal region of the medulla oblongata. From there, afferent pathways connect with the base nuclei, including the thalamus, and then with the cerebral cortex on the postcentral gyrus of both hemispheres, which transfers the received afferent signals to the precentral gyrus, from where efferent pathways go to the brainstem motor nuclei (V, VII, IX, XII).
Based on hemisphere dominance, one can conclude that both the afferent general (sensitive) and special (taste) pathways and the efferent special (motor) and general (parasympathetic) pathways interconnecting the two sides of cortex and brainstem arrive and leave as direct and crossed paths. This organization gives each cerebral hemisphere the totality of the information collected in the oral cavity, enabling effective commands from each hemisphere to reach both sides of the brainstem, integrating the cranial nerves that act in the oral phase (58).

After activation of the sensory cortex on both sides from the base nuclei, the peripheral information passes to the motor cortex, where the necessary intensity is modulated and retransmitted to the base nuclei and brainstem. In the latter, the efferent pathways of the trigeminal, facial and hypoglossal nerves produce an oral dynamic that ends by ejecting the oral contents into the pharynx.

Although one of the hemispheres is dominant, both are fully informed, allowing either to exercise full function (48-51). There is evidence that dysphagia generated by injury to the dominant hemisphere allows an increase in the representation of the non-dominant (non-injured) hemisphere, associated with apparent functional recovery (52-54). There are pathways crossing from one side to the other through the corpus callosum, integrating the hemispheres. Thus, in healthy individuals, the dominant cortex can exert an inhibitory action on the contralateral one through a connection that passes through the corpus callosum. It is also possible to consider the existence of excitatory pathways from the dominant motor cortex to the base nuclei of the contralateral hemisphere. This organization would explain not only the already evidenced functional recovery after a lesion of the dominant hemisphere (52-54), but also the integrated bilateral stimulus that is observed despite the inhibition of the sensory and motor cortex of the non-dominant hemisphere. It is also possible to assume that these excitatory pathways exist in both directions.

Between the brainstem and the cortex there are also interconnected pathways arriving at, and leaving from, the cerebellum, considered able to modulate muscular contraction intensity and sequence. In this way, cerebellar pathways connect with the efferent voluntary (cortico-nuclear) pathways that make synapses with the motor nuclei of the cranial nerves V, VII, IX and XII. From these nuclei, the efferent stimuli follow to the oral effectors, providing them with signaling of adequate contraction intensity and sequence, coordinated by the cortex and modulated by the cerebellum.
The bolus volume and viscosity interfere with the muscular contraction intensity, defined by the cortex according to the oral qualification, to generate the necessary oral ejection. Nevertheless, the contraction activation sequence of the effectors is common to all sequences involving the oral phase, suggesting that the neural organization has a predefined sequence. Taste and temperature do not influence the oral muscular contraction intensity defined by the cortex. This observation means that, within the limits of acceptability, chemical reception, thermal reception and certainly pain reception do not interfere with the oral activity, which is governed by mechanical reception, in particular volume and viscosity, which determine the number of motor units to be depolarized for an effective oral phase. The generation of the necessary and adequate muscular contraction intensity is responsible for the information to be passed on and maintained during the reflex phase of swallowing. The pressure intensity transferred by the oral phase is the stimulus to be answered by the neural control of the reflex pharyngeal phase. The esophageal phase, also reflex, should be influenced, at least in part, by the oral phase (57,58).

One can describe the basic dynamics of the swallowing oral phase as follows. The dental arcades touch one another by chewing muscle contraction (pair V). This position of the dental arcades allows the skin-inserted muscles, especially the buccinators and orbicularis oris (pair VII), to generate intraoral pressure resistance, preventing pressure escape out of the oral cavity during bolus transfer to the pharynx. The pressurized and resistant oral cavity enables ejection of the bolus by the tongue (pair XII), which transfers pressure and bolus to the pharynx. Still as part of the oral phase actions, the tensor veli palatini muscle (pair V) provides resistance to the soft palate, which is projected superiorly and posteriorly by the levator veli palatini muscle against the first fascicle of the superior pharyngeal constrictor muscle (the pterygopharyngeal fascicle) at the beginning of the pharyngeal phase. The suprahyoid muscles elevate the hyoid and larynx, opening the pharyngeal-esophageal transition by undoing the tweezer action between the vertebral body and the larynx. This elevation of the hyoid and larynx, which undoes the tweezer action produced by the apposition of the larynx against the spine, is coordinated mainly by the cranial nerves V and VII and also by C1 through the ansa cervicalis. The hyoid elevation starts at the end of the oral phase and stays active until the end of the pharyngeal phase. Contraction of the longitudinal stylopharyngeus muscle (IX) reduces the distal pharyngeal resistance. Finally, at the end of the oral phase, by possible involvement of the respiratory center in the floor of the fourth ventricle in the brainstem, swallowing apnea (preventive apnea) takes place. In sequence, but by a mechanism independent of the apnea, vocal fold adduction occurs at the beginning of the pharyngeal phase. All the oral events remain active during the entire pharyngeal phase by assimilation into the coordination of the reflex pharyngeal phase (42,59-63) (FIGURE 2).

Primary cortical voluntary oral phase

This type of oral phase reproduces all the dynamic events observed in the nutritive oral phase of swallowing, without any intraoral content to be qualified. It happens as if the cerebral cortex imagined a bolus with features so well known that the efferent cortical motor area reproduces an oral ejection with the same characteristics, using the same efferent pathways it would use if that imagined bolus could be exposed to the oral receptors. Thus, this type of neural control does not have, as an integral part, the afferent signaling coming from the oral receptors to the sensitive cortex. In this way, the sequence from the motor cortex to the oral effectors is exactly the same (58).
Semiautomatic oral phase

This type of neural control is a temporary substitute for the one that occurs during the nutritional swallowing process. It replaces the voluntary control of the nutritional oral phase when, in a repetitive way, the latter has its parameters qualified and accepted as usual and within appropriate limits. In such cases, if attention is divided with another interest that demands cortical activity, swallowing control can be replaced by a semiautomatic control, processed at the subcortical level (base nuclei). Considering the organization proposed for the integration between base nuclei and cortex, we can hold that the base nuclei take control of the oral phase, maintaining its integrative activity but repressing, at their level, the information brought from the periphery. Nevertheless, the base nuclei retain the ability to reactivate cortical control at any time, in particular if changes are detected (58). I believe that the dominant hemisphere controls this semiautomatic process from its base nuclei, also through the corpus callosum, in the same way as the inhibitory control.

Subsequent gulps oral phase

Swallowing in subsequent gulps implies liquid intake which, in healthy individuals, demands the depolarization of fewer motor units, because the necessary ejection force does not require much effort. The control of this type of oral phase is, at least for the first gulp, similar to the control of nutritional swallowing. Although the material to be ingested is liquid, a proper qualification is necessary, since it may have characteristics that are unexpected or distinct from its appearance. Taste, temperature and viscosity are assessed during the first gulp and, if accepted, control passes promptly to semiautomatic coordination, similar to what occurs with the nutritional diet. Here, the semiautomatic dynamics can start without requesting any further cortical attention, and without losing the basic perception of the gulps' characteristics. As in nutritional swallowing, the resumption of voluntary cortical control is immediate if desired or if any irregularity is perceived.
Spontaneous oral phase

The spontaneous oral phase is the swallowing that occurs to clear the oral cavity of the saliva produced and released in discrete but continuous volumes. This type of oral phase occurs repeatedly over the 24 hours of the day, with the individual awake or asleep, in the absence of conscious control. These swallows generate a mechanical sequence similar to that of the other swallowing types originating in the oral cavity; however, it is distinct in its trigger mechanisms. I believe it is possible to assume that this type of swallowing belongs to the airway-protective mechanism that prevents aspiration and compromise of the respiratory system. It has been demonstrated that the saliva adsorbed to the mucous membrane is capable of lubricating the laryngeal vestibule and vocal folds without producing discomfort. Also, the resulting volume of accumulated saliva is compressed between the vestibular folds and the epiglottic tubercle during swallowing with adducted vocal folds, resulting in the return of residual saliva to the pharynx (64). It is possible to believe that spontaneous swallowing is a product of this physiological permeation of the airways.

This spontaneous swallowing, occurring repeatedly whether the individual is awake or asleep and in the absence of conscious control, seems to be the same semiautomatic swallowing observed in the nutritive swallowing sequence, though with a distinct trigger mechanism, probably related to airway protection.

Besides other functions, saliva is important in the preparation of the chewed bolus and in the lubrication of the mucous membranes for suitable transport. Saliva is produced with continuous volume and physical-chemical characteristics by the salivary glands, under the mediation of parasympathetic fibers conducted by the cranial nerves VII (facial) and IX (glossopharyngeal).
Spontaneous swallowing helps distribute saliva over the oral, pharyngeal and even vestibular mucosa, humidifying these membranes and probably helping to keep fluid the mucus over the laryngeal ventricles. Inhalation and exhalation dry the mucosa through the continuous airflow, and spontaneous swallowing maintains the moisture level of these mucous membranes. Spontaneous swallowing is also important for the control of the small volumes of liquid adsorbed to the walls of the laryngeal vestibule, removing any excess over this mucosa. During swallowing, with the vestibular folds adducted, the tubercle of the epiglottis presses against these folds, making the vestibular lumen virtual and expelling to the pharynx any excess existing there (58,64).

NEURAL CONTROL OF THE SWALLOWING PHARYNGEAL PHASE

The reflex pharyngeal phase takes place without voluntary control or direct cortical command. This phase starts from the pharyngeal pressure stimulus transferred by the oral phase. In nutritional swallowing, after bolus qualification, especially in relation to volume and viscosity (mechanoreceptors), the oral ejection transfers the qualified information (bolus and pressure) to the pharynx. From there, the perceived stimulus goes to the brainstem (solitary tract nucleus). In the brainstem, particularly in the ambiguus nucleus, a motor reflex response determines sequential muscle contractions in a delay line, based on the values qualified and transferred by the oral phase (58,65,66). The delay line is the sequential contractile response of the muscles involved in the pharyngeal phase to a single pressure stimulus, which departs from the pharynx to the posterior sensory portion of the brainstem and returns via a ventral motor pathway, producing the sequential dynamics of the pharyngeal contractile activity. Although there is no direct motor cortex influence on the pharyngeal phase, the transferred content can still be perceived, for example with respect to its temperature. This kind of perception means that there is afferent sensitivity, possibly to provide the oral transfer with tolerance limits.

The stimulus that triggers the pharyngeal phase is not the contact produced by the passage of food through the pharynx (67,68), but the pressure that distends it, with or without contents (58,69). In nutritional swallowing, food and pressure are transferred; in cortical swallowing, only pressure is, and the pharyngeal response is similar to that of nutritive swallowing, indicating that the pressure distending the pharyngeal walls is the element that stimulates the pharyngeal motor activity (58).

The pharyngeal distention pressure is identified and transferred to the brainstem through sensitive afferent fibers of the pharyngeal plexus (cranial nerves IX, X, XI). The glossopharyngeal (IX) nerves in the oropharynx and the vagus and accessory (X and XI) in the laryngopharynx carry to the dorsal region of the brainstem (the sensitive solitary tract nucleus) the stimulus corresponding to the pressure value transferred from the oral cavity to the pharynx. The dorsal (sensitive) and ventral (motor) regions are integrated by interneurons of the brainstem's reticular system. A single stimulus reaches the solitary tract nucleus, and the motor reflex response is composed of the sequential action of several muscles at different times, configuring sequential muscular contraction in a delay line.
It is reasonable to admit cerebellar modulation of the pharyngeal reflex responses determined by the brainstem, explaining the sequential muscular contraction of the pharyngeal phase (delay line). Among its main functions, the cerebellum coordinates the temporal sequence of the synergic contraction of the different skeletal striated muscles, with the possibility of delaying the motor signals by fractions of a second, creating delay in the muscle contraction sequence (1,8,29).

In a didactic way, and without excluding the possibility of delay-line control by inhibitory neurotransmitters, we have considered that the sensory-motor connection in the brainstem would be carried out by distinct numbers of synapses between the interneurons connecting the sensitive and motor nuclei, generating different transfer times from the solitary tract nucleus to the ambiguus one. Thus, a stimulus perceived by the pharyngeal receptors and transmitted to the solitary tract nucleus as a single event would be retransmitted to the ambiguus nucleus through different and increasing numbers of interneurons, configuring the delay line observed in the pharyngeal phase of swallowing.

Besides the sequence and intensity of muscular contraction determined by the brainstem from the pressure reception, the pharyngeal phase incorporates, or assimilates, as a functional part of itself, the oral phase events already in course. The incorporated oral phase elements and the pharyngeal phase end together. Therefore, during the pharyngeal phase, the brainstem integrates the sequence of the oral phase with the pharyngeal one. The pharyngeal phase starts by action of the pharyngeal plexus, composed of the glossopharyngeal (IX), vagus (X) and accessory (XI) nerves, with secondary involvement of the trigeminal (V), facial (VII), glossopharyngeal (IX) and hypoglossal (XII) nerves, and also of some elements of the cervical plexus (C1, C2). The cervical plexus and the hypoglossal nerve on each side form the ansa cervicalis, from which a pathway goes to the geniohyoid muscle, one of the muscles that act in the elevation of the hyoid-laryngeal complex (58,65,70,71).

The accessory (XI) nerve, not always considered among those associated with swallowing, is admitted to have special visceral efferent (motor) fibers originating in the ambiguus nucleus that follow in association with the vagus nerve, which also displays this type of fiber (1,47). Thus, the accessory (XI) nerve is also responsible for the motor innervation of the musculature of the palate, pharynx, larynx and esophagus, in association with the vagus nerve.

The pharyngeal phase promotes adjustment, over the tongue on each side, of the palatoglossal muscle, innervated by the motor portion of the pharyngeal plexus (X, XI), to prevent pressure from returning to the oral cavity. The tension (V) and elevation (X, XI) of the palate against the first fascicle (pterygopharyngeal) of the superior constrictor muscle of the pharynx, innervated by the cranial nerves X and XI, blocks any pressure escape from the oropharynx to the rhinopharynx.
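The interneuron-count hypothesis above lends itself to a toy numeric illustration. In the Python sketch below, a single afferent stimulus is fanned out through chains with increasing numbers of interneurons; assuming a fixed, hypothetical synaptic delay per interneuron, the muscles fire in a staggered sequence even though the input was unique. The muscle names, chain lengths and delay value are illustrative assumptions, not measurements from the article.

```python
# Toy illustration of the delay-line hypothesis: one afferent stimulus,
# retransmitted through chains with increasing numbers of interneurons,
# yields staggered efferent activations. All values are hypothetical.
SYNAPTIC_DELAY_MS = 0.8  # assumed per-interneuron transfer time

# Hypothetical chain lengths from the solitary tract nucleus to the
# ambiguus nucleus, one per muscle acting in the pharyngeal phase.
chains = {
    "superior constrictor": 1,
    "middle constrictor": 3,
    "inferior constrictor (thyropharyngeal)": 5,
    "inferior constrictor (cricopharyngeal)": 7,
}

def activation_times(stimulus_ms=0.0):
    """Return each muscle's activation time for a single stimulus."""
    return {m: stimulus_ms + n * SYNAPTIC_DELAY_MS for m, n in chains.items()}

for muscle, t in sorted(activation_times().items(), key=lambda kv: kv[1]):
    print(f"{t:4.1f} ms  {muscle}")
```

The printout is a monotonically increasing activation schedule from a single input, which is exactly what the delay-line description requires.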
The superior, middle and inferior constrictor muscles of the pharynx are each constituted of distinct parts, with individualized insertions. Each of these parts is inserted on one side at fixed anterolateral points and, on the other, at the posterior median line of the pharynx (pharyngeal raphe). As a consequence of the individualization of their motor units, they can contract in sequential mode. The superior constrictor muscle has four parts (pterygopharyngeal, buccopharyngeal, mylopharyngeal and glossopharyngeal); the middle, two parts (chondropharyngeal and ceratopharyngeal); and the inferior, two parts (thyropharyngeal and cricopharyngeal). The cricopharyngeal part presents two fascicles, an upper, oblique one and a lower, transverse one, whose fibers seem to cross each other in the midline. Between the two fascicles of the cricopharyngeal muscle there is an anatomically less resistant area due to the absence of muscle (72).

The four parts of the superior constrictor occupy the entire extension of the oropharynx. Thus, it is necessary that only the first portion of its superior (pterygopharyngeal) part performs apposition against the palate, isolating the oropharynx and preventing pressure escape to the rhinopharynx, while the oral pressure can still pass to the pharynx without resistance. The sequential contraction of the parts of the superior, middle and inferior constrictors does not generate pharyngeal peristalsis, since there is no circular muscle in the pharyngeal wall. With the closing of the contiguous pharyngeal cavities, except for the pharyngeal-esophageal transition, which opens as a result of the elevation of the hyoid and larynx, the constrictor muscle contraction generates a pressure sequence in the cranial-caudal direction. This pressure sequence displaces the transient bolus from the pharynx to the permissive, less resistant esophagus through the open pharyngeal-esophageal transition (42,70).

By definition, peristalsis is a sequential expression produced by a circular muscle layer. Therefore, this cranial-caudal pressure sequence directed toward a point of lower distal resistance, in the absence of a circular muscle layer, should not be considered peristalsis, or peristalsis-like, as it is often defined.

The suprahyoid muscles are innervated by the cranial nerves V and VII and by the cervical plexus (C1), connected via the ansa cervicalis with the hypoglossal nerve. The mylohyoid branch of the mandibular nerve (mixed root of the trigeminal, V) innervates the mylohyoid and the anterior belly of the digastric muscle; the posterior belly of the digastric and the stylohyoid muscles are innervated by the facial nerve (VII). The geniohyoid and thyrohyoid muscles are innervated by the ansa cervicalis (usually C1) through the hypoglossal (XII) nerve. The cervical plexus (usually C2), through the ansa cervicalis, innervates the other infrahyoid muscles. The suprahyoid muscle group is responsible for the forward and upward movement of the hyoid and larynx, with modulation by the infrahyoid group. This action moves the larynx away from the vertebral body and opens the pharyngeal-esophageal transition. Moreover, while moving the larynx away, the suprahyoid group is able to sustain this open condition as required by bolus volume and viscosity. The opening of the pharyngeal-esophageal transition is also enhanced by the contraction of the longitudinal pharyngeal muscles: the stylopharyngeal ones, innervated by the glossopharyngeal (IX) nerve, and the palatopharyngeal muscle, innervated by motor fibers of the cranial nerves X and XI (42,65,71,72).
Still in the oral phase, as a last act, a preventive apnea (swallowing apnea) ensues, being assimilated by the pharyngeal phase and remaining until its end. Associated with the airway resistance produced by the apnea, there is independent vocal fold adduction (X, XI), followed by closure of the vestibular folds as the bolus passes through the already open pharyngeal-esophageal transition. The adduction of the vestibular folds is due to compression of the pre-epiglottic fatty cushion produced by the elevation of the hyoid and larynx, which compresses this cushion within the pre-epiglottic fibrous space. This space has, as its point of least resistance, the lateral aspects of the tapered end of the epiglottis, which corresponds to the projection of the vestibular folds on both sides. Thus, the compression produced by this fatty cushion on the sides of the epiglottis causes the medial shift of the vestibular folds, which end up in apposition against the epiglottic tubercle. In its turn, the epiglottis, everted by the tongue, moves posteriorly, adjusting its tubercle against the now adducted vestibular folds (59-62). At the same time, the parts of the constrictor muscles, including the cricopharyngeal one, carry out the sequential cranial-caudal contraction (nerves X and XI), driving the bolus from the pharynx into the esophagus (62,63) (FIGURE 3).

The pharyngeal and esophageal phases, both reflex, present an anatomical and functional relation. The first 10 cm of the esophagus are formed by skeletal striated muscle, like the oral and pharyngeal muscles. At the distal extremity of this striated segment, over 2 or 3 cm, a muscular distinction is identified macroscopically in fresh anatomical specimens, microscopically defined as a mixture of skeletal striated muscle fibers (long and multinucleated) and smooth muscle fibers (short and mononucleated), where the first ganglion of the myenteric plexus appears (73) (FIGURE 4).

The cricopharyngeal muscle has been known as a skeletal striated muscle type that demands expressive consumption of ATP (adenosine triphosphate), because it depends on ATP both to contract and to relax. In order to demonstrate that, contrary to common belief, the cricopharyngeal muscle is not tonically contracted at rest, relaxing only when the pharyngeal-esophageal transition opens, we performed manometry of the pharyngeal-esophageal transition. This manometry was carried out with a balloon built from a latex glove finger to measure the positive pressure resistance of the pharyngeal-esophageal transition in 12 fresh corpses, within the first 6 to 12 hours postmortem. This research was permitted by an agreement between the Anatomy Department of the Biomedical Sciences Institute of the Federal University of Rio de Janeiro (Universidade Federal do Rio de Janeiro, UFRJ) and the Legal Medical Institute of Rio de Janeiro, Brazil.

The balloon traction showed that positive pressure values remain present at the pharyngeal-esophageal transition in all the studied fresh corpses. A second pressure verification, with insertion of a metallic prosthesis between the vertebral body and the larynx, showed absence of resistance in this region, where the prosthesis eliminates the tweezer mechanism of the larynx against the vertebral body. Based on the positive values observed in the first measurement and absent in the second, after the prosthesis insertion, we concluded that the resistance at the pharyngeal-esophageal transition depends on the tweezer action of the larynx against the vertebral body (FIGURE 6).
The high-pressure zone designated as the upper esophageal sphincter is located at the distal pharynx, where a tweezer action closes the pharynx between the larynx (cricoid cartilage) and the cervical lordosis at the level of the 5th to 6th cervical vertebrae. Usually this high pressure is attributed to the maintained contraction of the cricopharyngeal muscle, part of the inferior pharyngeal constrictor. This conception is a severe misunderstanding of the anatomical and functional characteristics of the region. The inferior constrictor of the pharynx is a skeletal striated muscle consisting of two fascicles (thyropharyngeal and cricopharyngeal). The cricopharyngeal fascicle presents two parts in its fiber organization, an upper, oblique one and a lower, transverse one. The upper part inserts on each side of the cricoid cartilage, from where its fibers run upward and from lateral to medial, inserting on the posterior pharyngeal raphe. The lower, transverse part inserts on each side of the cricoid cartilage with a transverse direction, intercrossing in the midline, where no raphe can be seen. The width of the pharyngeal lumen at the level of the transverse cricopharyngeal part is about 17 mm, and there is no muscular ring in this region, which can be described as a muscular half-curvature. The divergence between the oblique and transverse parts of the cricopharyngeal muscle creates an intermediate zone without muscular fibers that constitutes an anatomically less resistant point, described as the Killian zone, where the posterior pharyngeal diverticulum known as Zenker's diverticulum can occur. This anatomically less resistant area is, coincidentally, the point of highest pressure values, certainly due to the tweezer action produced by the vertebral body and the larynx (65,71) (FIGURE 5).

In two cricopharyngeal muscles we also carried out electrical stimulation, including analysis of tolerance to calcium channel blockers (verapamil), and polyacrylamide gel electrophoresis with sodium dodecyl sulfate, paired with other striated muscles. We obtained these two cricopharyngeal muscles from specimens immediately resected in total laryngectomies, with surgical indication and consent. Under electrical stimulation, these muscles showed the same characteristics as other striated muscles, including their tolerance to calcium channel blockers. The electrophoresis paired with other striated muscles revealed the same protein patterns and molecular weights. These two experiments allow the conclusion that the cricopharyngeal muscle has the morphology and function of a striated muscle (FIGURES 7 and 8).

The open pharyngeal-esophageal transition intercommunicates the pharynx and esophagus, allowing video-fluoroscopic examination to show the flow of contrast medium filling both cavities almost simultaneously. One can observe that the pharyngeal and esophageal cavities relate to the contrast medium within the time of the pharyngeal phase. Thus, the esophageal phase begins practically at the same time as the pharyngeal phase, demonstrating a clear functional relationship between these reflex phases, as consistent as, or more consistent than, that observed between the oral and pharyngeal phases. This fact demonstrates that the pharyngeal and esophageal phases are responsible for the conduction of the contents transferred by the oral phase (61,74) (FIGURE 9).
NEURAL CONTROL OF THE SWALLOWING ESOPHAGEAL PHASE

The sequential contraction of the pharyngeal muscles leads the bolus transferred by pharyngeal pressure. It results from the special visceral efferent innervation conducted by the vagus nerve, originating in the ambiguus nucleus, which is also responsible for the striated muscle of the upper portion of the esophagus. Inside the esophagus, the bolus is conducted by sequential contractions in the distal direction, defined as primary peristalsis.

The mechanical relation between the bolus and the smooth muscle of the esophageal wall is able to stimulate this kind of muscle, unlike the striated one. The smooth muscle of the esophageal wall is capable of interfering with the tonus and motility of the smooth portion of the esophagus (1). It is also believed that the distal extremity of the esophagus presents a resting tonic contraction involving the distal circular musculature. Hormones would regulate this resting tonic contraction in association with intrinsic and extrinsic nerves, generating pressure values around 20 mmHg. The prevailing hypothesis considers that the gastroesophageal transition opens due to muscle relaxation occurring in association with the primary peristalsis, induced by vagus fibers that would inhibit the tonic contraction of the circular musculature, with possible mediation of the neurotransmitters VIP (vasoactive intestinal polypeptide) and NO (nitric oxide) (77).

Despite the prevailing concepts, no muscular ring with the classical characteristics observed in smooth muscle sphincters has been identified in the distal portion of the esophagus. The gastroesophageal transition, without muscular thickening, nevertheless presents a positive resting pressure, which fades away during the primary peristaltic wave that leads the bolus to the stomach. Owing to the lack of knowledge about the morphology responsible for the high pressure of this transition, defined as the cardia, it has been deemed a physiological sphincter. This situation has given rise to speculations that add up to about 27 possible mechanisms, isolated or in association, including those involving the regional muscular organization (74).
The esophagus presents an internal muscle layer, defined as circular, and an external one, defined as longitudinal. The external layer, when contracting, reduces the resistance of the esophageal tube, and the internal one propels the bolus by sequential contraction. It is possible that the esophageal muscle layers are arranged so that the external layer displays long-pitch spiral fibers, and the internal one, short-pitch spiral fibers. This morphology, associated with the concept of energy preservation, allows us to admit that contraction of the external layer would be able to widen the esophagus, decreasing the resistance to flow, probably also contributing to the opening of the gastroesophageal transition. In its turn, the internal layer would propel the food downwards by sequential contraction. Thus, during the resting esophageal stage, there would be no energy expenditure (58,73,74). The opening of the gastroesophageal transition would be an active response to the esophageal peristalsis that activates the myenteric plexus during the entry of the bolus into the esophagus. This hypothesis is corroborated by the fact that the esophagus, when subjected to pure pressure distension, responds differently than in the presence of a concrete bolus.

Smooth muscle can be depolarized in a syncytial way, in which the depolarization of the muscle cells is freely transferred from one to another, with contraction processed along the entire extension of the muscle layer. Thus, we can consider, as a hypothetical mechanism, that the contents transferred from the pharynx to the esophagus are conducted, while in the striated portion, similarly to the way this takes place in the pharynx, by depolarization of motor units. Nevertheless, when the bolus passes through the striated/smooth transition, it is capable of stimulating the myenteric plexus from this transition onward, generating syncytial contraction. This syncytial depolarization is able to cause contraction of the longitudinal layer, reducing the resistance of the esophagus as a whole, increasing its compliance and culminating in, or at least participating in, the opening of the gastroesophageal transition that occurs in concomitance with the onset of primary peristalsis. It is also possible that the circular musculature depolarizes and contracts during the bolus passage through the striated/smooth transition, in association with the primary peristalsis. This contraction pressurizes the esophageal lumen downwards, leading the bolus in transit through the esophagus (75,76).

The pharynx and the first portion of the esophagus are both formed by striated muscle innervated by the special visceral efferent (motor) pathway of the vagus nerve. This cranial nerve also has a general visceral efferent (parasympathetic) pathway, which is preganglionic to the myenteric plexus. In this way, another hypothesis would be that the motor coordination of the esophageal smooth muscle is done by myenteric postganglionic stimulation, in sequence with the special visceral efferent pathway (motor to striated muscle) and in association with the general visceral efferent (parasympathetic, motor to smooth muscle) pathway of the vagus nerve.
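The contrast hypothesized above, between motor-unit conduction in the striated segment and syncytial spread in the smooth segment, can also be sketched numerically. In the toy Python model below, the striated segment is driven cell-by-cell by descending neural commands with a fixed delay, while in the smooth segment activation propagates from neighbor to neighbor, as in a syncytium. Segment lengths, delays and cell counts are invented for the illustration and are not physiological values.

```python
# Toy contrast between the two conduction modes hypothesized in the text:
# a striated segment driven by sequential neural commands (motor units),
# and a smooth segment activated by neighbor-to-neighbor (syncytial) spread.
# All numbers are illustrative, not physiological measurements.
N_STRIATED, N_SMOOTH = 5, 10       # hypothetical cell counts per segment
NEURAL_DELAY_MS = 20.0             # assumed per-unit delay of descending commands
SYNCYTIAL_DELAY_MS = 5.0           # assumed per-cell delay of cell-to-cell transfer

def activation_schedule():
    times = []
    t = 0.0
    # Striated portion: each motor unit waits for its own neural command.
    for i in range(N_STRIATED):
        times.append((f"striated unit {i + 1}", t))
        t += NEURAL_DELAY_MS
    # Smooth portion: once the wave crosses the striated/smooth transition,
    # depolarization spreads directly from cell to cell.
    for i in range(N_SMOOTH):
        times.append((f"smooth cell {i + 1}", t))
        t += SYNCYTIAL_DELAY_MS
    return times

for site, t in activation_schedule():
    print(f"{t:6.1f} ms  {site}")
```

The schedule shows a single continuous wave whose propagation rule changes at the striated/smooth transition, which is the qualitative point of the hypothesis.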
The contents transferred from the pharynx to the esophagus, notably those with solid fragments, do not always reach the stomach. Sometimes they stop at the level of the esophageal smooth muscle, from where they are able to locally stimulate the submucosal plexus, which transfers an activation command to the myenteric plexus, producing muscle contraction from the retention point onwards. This downward contractile wave is defined as secondary peristalsis, which ends up conducting the residual esophageal contents to the stomach (73).

It has been considered that the general visceral efferent pathway (parasympathetic fibers), originating in the posterior motor nucleus of the vagus as preganglionic fibers, connects to intraparietal ganglia in the esophageal wall, from where postganglionic fibers connect with visceral effectors that release neurohormones.

FIGURE 1. Lateral view of an anatomical specimen (brain), highlighting the sensory (postcentral) and the motor (precentral) gyrus, separated by the central sulcus. The main anatomical elements are described over the figure. 1, 2 and 3: areas of somatosensory cortex; 5 and 7: sensitive association areas; 4: motor cortex; and 6: premotor area.

FIGURE 2. Frontal view of a schematic diagram over an anatomical specimen representing the neural control of the nutritional oral phase. Black, dotted lines represent the oral afferent pathways that pass through the (1) sensorial ganglion and connect with sensitive nuclei of the solitary tract and nerve V nuclei in the brainstem (2). From there, they connect with the base nuclei (3) through direct and crossed pathways. From the base nuclei (3), in nutritious swallowing the signals stimulate the postcentral (sensorial) and precentral (motor) gyri (4), which start the efferent (motor) pathway. (Note 1: Sensory pathways do not exist in the primary cortical voluntary oral phase). Red, solid lines represent efferent motor pathways from the cortex to the base nuclei (3) and brainstem nuclei (2), where nerves V, VII, IX and XII conduct the stimuli (modulated by the cerebellum) to the oral effectors. (Note 2: In semiautomatic swallowing and while normality is maintained, motor responses are produced without cortical intervention). From the dominant hemisphere, there is an inhibiting pathway (black, dashed line) going to the opposite hemisphere, and an excitatory pathway (red, solid line) from the dominant base nuclei to the non-dominant side.

FIGURE 3. Neural control representation of the pharyngeal phase over anatomical specimens, where 1 - oral cavity, 2 - pharynx, 3 - esophagus, 4 - swallowed bolus, 5 - brainstem, X - pharyngeal receptors, 6 - solitary tract nucleus, 7 - ambiguus nucleus. Over 5, lower dotted arrows from six to six - afferent integration, and upper dotted arrows from six to seven - efferent integration. From 6 (sensitive nucleus) to 7 (motor nucleus), multi-dotted arrows are a didactic representation of the growing number of interneurons of the delay line. From 7 (ambiguus nucleus) to a, b, and c on both sides, dashed arrows represent the efferent stimulus to the muscle delay line. There is pressure transference from 1 to 2 (pharyngeal distention), represented by the widening of 4. Hollow arrowheads show displacement of the bolus (4) from mouth to esophagus.
FIGURE 4. A - fresh esophagus segment where there is a mixture of 1 - smooth and 2 - striated muscle. B - histological specimen obtained from (A), with (1) the first ganglion of the myenteric plexus and a mixture of long, multinucleated striated muscle fibers (2) and short, mononucleated smooth ones (3).

FIGURE 5. Posterior view of an anatomical specimen involving the pharynx, larynx, esophagus and trachea, where 1 - cricopharyngeal muscle, oblique fascicle, 2 - cricopharyngeal muscle, transverse fascicle, inserted on the cricoid cartilage of the larynx, 3 - Killian zone, the anatomically less resistant zone on the posterior pharyngeal wall where the pharyngeal diverticulum described by Zenker occurs. This less resistant zone is due to the divergence of the oblique and transverse fascicles of the cricopharyngeal muscle. 4 - trachea, 5 - esophagus.

FIGURE 6. A - Manometry on a fresh corpse. B - Scheme highlighting (a) - pharynx between the tweezers formed by the vertebral body and the larynx, which compress the pharynx at rest, (b) - elastic and distensible balloon, and (c) - sphygmomanometer. (d) - rectangle containing three possibilities of pressurization of the system, where X represents flow closure, 1 and 2 represent air flow to be balanced with the distended balloon, and 3, the three-way tube that allows the balance of pressures; (e) - syringe, 4 - direction of balloon traction. C - After verification of basal pressure (positive in all 12 cases), cervical dissection for passage of a metallic prosthesis separating the larynx from the spine. D - Prosthesis installed for re-verification (absence of positive pressure in all 12 cases).

FIGURE 7. Polygraphic record of isometric tension of the cricopharyngeal muscle. On top, the polygraph used. The first bar shows a constant increase of the contraction force as the stimulus intensity (volts) increases. The second bar shows a gradual increase in the contraction frequency of the cricopharyngeal muscle with the stimulus pace (Hz) increment, until the installation of tetany. The third bar shows the use of verapamil (a calcium channel blocker) in increasing concentrations: both in the absence and with increasing doses of the blocker, the muscle behavior is that expected for skeletal striated muscle. The three bars therefore register a skeletal striated muscle behavior.

FIGURE 8. Protein electrophoresis. On the right side, the protein fraction distribution near the same plane for the four tested muscle samples, of which two are cricopharyngeal samples and two are muscles previously known to be striated (extensor hallucis longus and soleus). On the left, superposition of the protein weights of the four tested muscle samples, confirming that the cricopharyngeal muscle has a protein fraction distribution similar to that of the muscles previously known to be striated.

FIGURE 9. Video-fluoroscopic examination of swallowed contrast medium.
1 - Oropharynx, 2 - Epiglottis, 3 - Piriform recesses, 4 - Pharyngeal-esophageal transition, and 5 - Esophagus. The pharyngeal phase begins at frame 20 and ends at frame 50, with a total time of 0.99 sec (each frame lasts 0.033 sec). After 0.2 to 0.23 sec (frames 26 to 27), the esophageal phase is already starting in superposition with the pharyngeal one. The epiglottis remains in the vertical position and will only close the pharyngeal-esophageal communication (horizontal position) at the end of the pharyngeal phase, at frame 45. At this time, the pharynx, with residual volume, starts its return to the resting position, with closure of the pharyngeal-esophageal transition by the return of the larynx in opposition to the vertebral body and with the epiglottis in the vertical position.
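The timing arithmetic in the Figure 9 caption reduces to a frame-index-to-seconds conversion. A minimal sketch, assuming the stated 0.033 sec per frame (a ~30 frames/s recording); the function name is ours, for illustration only:

```python
# Minimal sketch of the frame-to-time arithmetic in the Figure 9 caption,
# assuming 0.033 sec per frame (a ~30 frames/s recording).
FRAME_DURATION_S = 0.033

def frames_to_seconds(start_frame: int, end_frame: int) -> float:
    """Elapsed time between two frame indices, in seconds."""
    return (end_frame - start_frame) * FRAME_DURATION_S

print(round(frames_to_seconds(20, 50), 2))  # 0.99: the whole pharyngeal phase
print(round(frames_to_seconds(20, 26), 2))  # 0.2: esophageal phase onset
print(round(frames_to_seconds(20, 27), 2))  # 0.23
```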
The Epidermal Transcriptome Analysis of a Novel c.639_642dup LORICRIN Variant-Delineation of the Loricrin Keratoderma Pathology Loricrin keratoderma (LK) is a rare autosomal dominant genodermatosis caused by LORICRIN gene mutations. The pathogenesis of the disease is not yet fully understood. So far, only 10 pathogenic variants in LORICRIN have been described, all of them but one being deletions or insertions. The significance of rare nonsense variants remains unclear. Furthermore, no data regarding RNA expression in affected patients are available. The aim of this study is to describe the two variants in the LORICRIN gene found in two distinct families: the novel pathogenic variant c.639_642dup and a rare c.10C > T (p.Gln4Ter) of unknown significance. We also present the results of the transcriptome analysis of the lesional loricrin keratoderma epidermis of a patient with c.639_642dup. We show that in the LK lesion, the genes associated with epidermis development and keratinocyte differentiation are upregulated, while genes engaged in cell adhesion, differentiation, developmental processes, ion homeostasis and transport, signaling and cell communication are downregulated. In the context of the clinical significance evaluation of p.Gln4Ter, we provide data indicating that LORICRIN haploinsufficiency has no skin consequences. Our results give further insight into the pathogenesis of LK, which may have therapeutic implications in the future and important significance in the context of genetic counseling.

Background Loricrin keratoderma (LK, MIM 604117, Vohwinkel syndrome with ichthyosis, VS) is a rare autosomal dominant genodermatosis caused by pathogenic variants in the LORICRIN gene. LORICRIN encodes one of the key proteins conferring the insolubility, mechanical resistance and impermeability of the epidermal barrier [1]. Hydrophobic and insoluble loricrin is expressed in the orthokeratinizing epithelia, except for internal ones. In the skin, its synthesis occurs in the upper layer of the epidermis, the stratum granulosum (SG). Loricrin is involved in cytoskeleton stabilization, forming crosslinks within and between proteins, and in the formation of the cornified cell envelope (CE), being the most abundant (commonly > 70%) protein there [2,3]. Consequently, the clinical symptoms of LK patients are related to the skin surface and comprise the following: ichthyosis; palmoplantar keratoderma (PPK), often with a honeycomb pattern, pseudoainhum and/or amputation; knuckle pads; and collodion membrane at birth [4]. Of note, the clinical symptoms are heterogeneous and may differ even among relatives [5]. The data concerning LK are limited. Only 10 pathogenic variants in 21 affected families (overall 106 patients) have been described in the literature so far [4]. Moreover, all of them, apart from one substitution, are deletions/insertions. The only pathogenic missense variant known so far was identified as causative in late-onset loricrin keratoderma [6]. The clinical significance of the other variant types remains questionable. The aim of the study is to describe two variants in the LORICRIN gene that were found during diagnostic procedures for cornification disorders. The first one, c.639_642dup (p.Thr215GlyfsTer122), is a novel pathogenic variant detected in a family with autosomal dominant hyperkeratosis, for which we also present the results of a transcriptomic analysis. This is the first transcriptomic analysis of a loricrin keratoderma lesion.
Another variant, the rare c.10C > T (p.Gln4Ter), was detected in the other family as a secondary finding of unknown significance. Considering the highly limited data on the clinical significance of LORICRIN premature stop codon (PTC) variants, we provide data showing that p.Gln4Ter, leading to a premature stop codon, has no skin consequences. This has important significance in the context of genetic counseling.

Experimental Design All patients gave informed consent to participate in the study.

Patients Family 1: The family (two daughters and their father, Figure 1A) was referred to genetic counselling because of hyperkeratosis of the palms and soles and a clinical diagnosis of ichthyosis. The clinical symptoms were manifested as a diffuse, generalized ichthyosiform dermatosis. In the girls, the symptoms were noted at birth; palmoplantar keratoderma then occurred at the age of 2-3 months. The course of the disease varied, with occasional exacerbations. Improvement was noted after the use of emollients. Occasionally, a transgradient extension of hyperkeratosis onto the wrists and onto the bends of the elbows and knees was present; pseudoainhum was not observed and, according to the patient, was also absent in the other affected family members. The honeycomb pattern of PPK was not seen during clinical evaluation and, according to the mother, had not been observed before. The keratoderma was neither painful nor inflammatory.

Family 2: The proband was a girl born from an uneventful pregnancy at 38 weeks of gestation (birth weight 2880 g, Apgar score: 7). She had a clinical recognition of autosomal recessive congenital ichthyosis (ARCI), due to a homozygous pathogenic variant c.1562A>G (p.Tyr521Cys) in ALOX12B. The symptoms of ARCI were typical (collodion baby; later-in-life dryness of the face skin and, less intensively, of the whole body; finger and toe contractures; erythema; stiff and cracking skin of the hands and feet; slight psoriasis lesions on the knees and elbows), no nail and hair disturbances were present and the teeth appeared normal, though a slight yellow discoloration of the permanent teeth was observed (Figure 2). The LORICRIN variant p.Gln4Ter was detected as a secondary finding during a molecular test. A segregation analysis showed that the variant was inherited from the patient's father. A dermatological evaluation of the father did not reveal any skin symptoms at the age of 41; only dystrophic nails were present, and massive caries (currently with upper-teeth dentures) from the age of 20.
Genetic Analysis We identified a novel variant in the LORICRIN gene, c.639_642dup (p.Thr215GlyfsTer122), in family 1 (Figure 1A). The pathogenicity status was scored as likely pathogenic (LP) according to the American College of Medical Genetics (ACMG) classification. Importantly, similarly to the other LORICRIN pathogenic variants reported so far, c.639_642dup caused delayed translation termination and introduced an arginine- and leucine-rich sequence. Thus, the diagnosis of loricrin keratoderma was established.

Transcriptome Analysis of the Proband vs. Control In total, 15,210 genes with a distinct Ensembl identification (ID) and more than five counts in each sample were detected. Considering that the data analysis was largely limited, comprising a single patient vs. a single control, we applied strict thresholds for differentially expressed genes (DEGs): an absolute value of the logarithm of fold change (|logFC|) > 3 and a logarithm of counts per million reads (logCPM) > 1, resulting in 1722 genes. Among them, 276 genes were upregulated (logFC between 3.05 and 12.7) and 1445 downregulated (logFC between −3.0 and −14.45). However, only in 10 and 53 genes, respectively, was the difference statistically significant (p-value < 0.005) (Table 1). With respect to ontology, genes encoding proteins involved in epidermis development and keratinocyte differentiation were mainly upregulated. In turn, those engaged in cell adhesion, developmental processes and anatomical structure morphogenesis, cellular ion homeostasis and transport, cell differentiation, and the regulation of signaling and cell communication were downregulated (Table 2).

Discussion In this study, we described the novel LORICRIN gene pathogenic variant c.639_642dup, together with the first transcriptome analysis of lesional loricrin keratoderma epidermis, and a rare p.Gln4Ter variant in the same gene, as evidence that haploinsufficiency of loricrin does not cause the skin symptoms of LK. The in silico prediction showed that the consequence of c.639_642dup at the protein level is the generation of a sequence rich in basic amino acids, mainly arginine. It has already been proven that all the other known frameshifts in the C-terminal part of loricrin also lead to the formation of arginine-rich regions generating nuclear localization signals (NLS) [7]. Indeed, such mutated loricrin derivative proteins were found to be deposited in the nucleus and to distort epidermal differentiation [5,8]. This was also observed in a mouse model of LK, where mutated loricrin was almost exclusively present in the nucleus. This, in fact, was further proven to be an LK-causative factor. It was also shown that the LK phenotype of transgenic mice was more severe in the absence of wild-type loricrin [8]. Next-generation sequencing technologies (NGS) have enabled robust progress in the genetics of the disorders of cornification. While DNA sequencing has already revealed a plethora of disease-causing variants, showing great heterogeneity in the molecular basis of these diseases, RNA sequencing data from these patients are rather limited.
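The thresholding described above (|logFC| > 3, logCPM > 1, with statistical significance at p < 0.005) amounts to a simple filter over the DEG table. A minimal pandas sketch; the table layout and the example rows are illustrative assumptions, not the study's actual data:

```python
import pandas as pd

# Minimal sketch of the DEG thresholding described above. The column names
# and example values are illustrative assumptions, not the authors' table.
deg = pd.DataFrame({
    "gene":   ["HRNR", "LCE3D", "PLIN1", "GENE_X"],
    "logFC":  [12.7,   4.2,    -6.1,     0.8],
    "logCPM": [5.3,    3.1,     2.4,     0.2],
    "pvalue": [0.001,  0.004,   0.002,   0.400],
})

filtered = deg[(deg["logFC"].abs() > 3) & (deg["logCPM"] > 1)]
significant = filtered[filtered["pvalue"] < 0.005]

upregulated = significant[significant["logFC"] > 0]
downregulated = significant[significant["logFC"] < 0]
print(len(upregulated), len(downregulated))   # 2 1
```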
Nevertheless, a few studies have already shown that transcriptome analyses may be crucial for obtaining deeper insight into the pathophysiology of the cornification disorders. However, to our knowledge, no data on gene expression in loricrin keratoderma patients are available, probably due to the rarity of this disorder. Herein, we present the results of the transcriptome analysis performed using mRNA isolated from the lesional epidermis of the patient with the heterozygous novel variant c.639_642dup. In total, 1722 genes were differentially expressed, of which 276 genes were upregulated and 1445 downregulated. However, only 10 and 53 genes, respectively, reached statistical significance. HRNR, encoding hornerin, was the most upregulated gene. This gene is located on chromosome 1q21 within the human epidermal differentiation complex (EDC). Hornerin belongs to the S100 fused-type proteins (SFTPs) and is involved in cornified epithelium formation [9]. Furthermore, hornerin has antimicrobial activity as a source of cationic intrinsically disordered antimicrobial peptides (CIDAMPs) [10,11]. It has previously been shown that HRNR mRNA expression increases transiently in cultured human epidermal keratinocytes during Ca2+-dependent differentiation [12]. Of note, Rice et al. and Kim et al. have shown that in healthy people, HRNR is preferentially expressed in palmoplantar skin compared with other regions [13,14]. So far, the HRNR gene has mainly been analyzed in the context of other skin diseases, psoriasis and atopic dermatitis (AD), where barrier defects occur as well, but due to distinct immunogenetic factors. HRNR transcripts were detected in regenerating human skin after wounding and in the peripheral regions of psoriatic lesions [15]. Moreover, hornerin immunoreactivity in the lesions, but not in the healthy skin, of psoriasis and atopic dermatitis patients was diminished in another study [12]. Furthermore, Henry et al. showed that the expression was lower also in the healthy skin of AD patients. The authors demonstrated that hornerin is a component of the cornified envelope (CE) and suggested that it plays a role in the alterations of the CE and in the abnormality of the AD epidermal barrier [16]. Just recently, Makino et al. examined HRNR expression by immunostaining in skin lesions from patients affected by hyperkeratosis-associated diseases (ichthyosis vulgaris, epidermolytic ichthyosis (EI), Darier's disease, lichen planus, pustulosis palmaris et plantaris, actinic keratosis and seborrheic keratosis). Increased expression was detected in lichen planus and pustulosis palmaris et plantaris, followed by an irregular signal pattern in epidermolytic ichthyosis and actinic keratosis. In the remaining diseases (ichthyosis vulgaris, Darier's disease and seborrheic keratosis), the expression was decreased. Thus, in light of our results and those mentioned above, further studies are needed to evaluate the involvement of hornerin in epidermal barrier restoration [17]. Among the other top 10 upregulated genes, we detected a few more associated with barrier formation: LCE3D (late cornified envelope protein 3d), KRT9 (keratin 9) and CDSN (corneodesmosin). These genes were also found to be upregulated in other types of ichthyoses [18]. Specifically, LCE3D was also found to be upregulated in other diseases with keratoderma: pachyonychia congenita and Curth-Macklin ichthyosis [19,20].
Because our analysis consisted of only one patient and one control, the statistical analysis was very limited. Therefore, we also focused on the genes that had a logFC over 3.0 or below −3.0 and a logCPM > 1, irrespective of the p-value. In this group, several other genes had induced expression as well, including the IL-17/TNF-α-associated molecules IL36G and S100A9. Of note, previous studies have also shown that the Th17 pathway is induced in various forms of ichthyosis, which indicates that, in terms of the immune response, ichthyoses resemble psoriasis. Hence, novel therapies targeting IL-17 may be considered in the future [18,21]. Overall, among the 276 upregulated genes, those associated with epithelium development, keratinocyte differentiation and keratinization were the most represented. It has been shown that, apart from some commonly expressed genes, different ichthyoses vary in terms of gene expression. This is reflected even by the numbers of DEGs (patient's lesions vs. control) in different disorders. Malik et al. showed that in a Netherton syndrome patient the number was relatively low, with 63 upregulated and 33 downregulated DEGs, compared with epidermolytic ichthyosis (EI), where the numbers of DEGs were 223 and 150, respectively. Furthermore, Kim et al. identified lipid metabolism and barrier junction genes as downregulated in four common ichthyosis types, which was less pronounced in EI [21]. Furthermore, Malik et al. proved that the expression of lipid metabolism genes was diminished in lamellar ichthyosis (LI) patients, but not, or to a lesser extent, in EI [18]. This phenomenon may result from the distinct molecular basis of these disorders: LI is mainly caused by mutations in genes involved in lipid metabolism, while in EI, pathogenic variants in the structural keratins 1 and 10 are causative. Finally, when we compared the genes downregulated in our patient with those published by Ortega-Recalde et al. in Curth-Macklin ichthyosis, two were concordantly downregulated (PHYHIP, PAMR1), while DCD, FABP4, PLIN1, SCGB1D2, SCGB2A2, ADIPOQ, G0S2, KRT19 and MUCL1, downregulated in our case, were upregulated in Curth-Macklin ichthyosis. Among the 53 most downregulated genes, we found a few involved in lipid metabolism, e.g., PLAAT3, FABP4, PLIN1/4 and PNPLA6. However, a gene ontology analysis performed in a wider context showed that the majority of the 1445 genes detected by us are involved in cell adhesion, developmental processes and anatomical structure morphogenesis, cellular ion homeostasis and transport, cell differentiation, and the regulation of signaling and cell communication. This finding is in line with the molecular pathomechanism of loricrin keratoderma, which, as already mentioned, comprises the nuclear deposition of mutated loricrin and the dysregulation of keratinocyte differentiation. Of note, once we compared the 53 most downregulated genes of our patient with the DEG profiles of atopic dermatitis and psoriasis presented by Malik et al., 14 also had a diminished expression in AD and 15 in psoriasis, whereas only a few (1 to 5) overlapped with other ichthyoses [18]. Since these results are, to our knowledge, the first transcriptome analysis of an LK lesion and were obtained from one patient only, replicative studies are needed. Nevertheless, our results provide novel insight into the pathogenesis of the disease and may have therapeutic implications in the future. Another issue raised by us concerns the clinical significance of nonsense variants in LORICRIN.
It has been shown that transgenic mice with one copy of the loricrin gene are phenotypically normal [22]. However, as far as we can tell, no phenotypic descriptions of humans with one functional copy of the LORICRIN gene are available thus far. Hence, genetic counseling in such cases may be ambiguous. In the ClinVar database, one premature stop codon (PTC) variant [NM_000427.3:c.624C > G (p.Tyr208Ter), ID: 1324671] is recorded and is assigned as likely pathogenic. On the contrary, in the SNP database (SNPdb), 13 nonsense variants are recorded, with frequencies ranging from 0 to 0.00004, according to the GnomAD or Kaviar databases. None of the variants was detected in homozygosity, and each was classified as a VUS according to the ACMG classification [23]. The variant c.10C > T (p.Gln4Ter) detected in family 2 is also recorded in SNPdb and was found in 2 out of 231,412 GnomAD alleles of European, non-Finnish ancestry. The proband of family 2 was diagnosed with autosomal recessive congenital ichthyosis (ARCI) with biallelic ALOX12B mutations; therefore, it was impossible to initially correlate the clinical symptoms with the LORICRIN genotype. Since we had shown that the c.10C > T (p.Gln4Ter) variant was of paternal origin, the father was clinically evaluated. There was no history of skin involvement, but dystrophic nails and massive caries from the age of 20 were reported. Nail involvement in LK is uncommon and was also not described in knockout mouse models [1,22,24], although loricrin is expressed in the proximal nail fold [25]. Nevertheless, considering that the father's family history of dystrophic nails was negative, as well as the fact that there were no nail symptoms in the ARCI-affected proband, the nail dystrophy of the father seemed to occur independently. There were also no cases of massive caries among the father's relatives. Interestingly, previous studies have shown that in murine and human aggressive periodontitis, LORICRIN mRNA expression is diminished [26,27]. Therefore, though no skin changes were noted, an open question remains as to whether the presence of the heterozygous PTC variant confers susceptibility to caries. In conclusion, our results broaden the knowledge about LORICRIN gene variants and their phenotypic significance and give insight into the molecular pathology of loricrin keratoderma lesions.

Skin Biopsy The transcriptome analysis was performed using RNA isolated from lesional epidermis. A 3 mm skin biopsy from the lesion located on the upper tibia was taken from the LK patient, and one from the same location of a healthy, age-matched male. The biopsies were immediately frozen and kept at −80 °C. The epidermis was mechanically detached from the underlying skin layers in a cryotome prior to RNA isolation.

RNA Sequencing The samples were mechanically homogenized, and RNA was isolated using an RNeasy Micro Kit (Qiagen, Hilden, Germany). The quality and integrity of the total RNA were assessed with an Agilent 2100 Bioanalyzer using an RNA 6000 Pico Kit (Agilent Technologies, Ltd., Santa Clara, CA, USA). PolyA-enriched RNA libraries were prepared using the QuantSeq 3' mRNA-Seq Library Prep Kit according to the manufacturer's protocol (Lexogen GmbH, Vienna, Austria). Briefly, libraries were prepared from 5 ng of total RNA. The first step in the procedure was first-strand cDNA synthesis using reverse transcription with oligo(dT) primers. Then, all remaining RNA was removed, a step essential for an efficient second-strand synthesis.
The second-strand synthesis was performed to generate double-stranded cDNA (dsDNA). It was initiated by a random primer containing an Illumina-compatible linker sequence. The obtained cDNA was purified using magnetic beads to remove all reaction components. The cDNA libraries were amplified by PCR using primers provided by the manufacturer. The library evaluation was completed with an Agilent 2100 Bioanalyzer using the Agilent DNA High Sensitivity chip (Agilent Technologies, Ltd., Santa Clara, CA, USA). The mean library size was 220 bp. Libraries were quantified using a Quantus fluorometer and the QuantiFluor double-stranded DNA System (Promega, Madison, WI, USA). Libraries were run in a rapid-run flow cell and were single-end sequenced (75 bp) on a HiSeq 1500 (Illumina, San Diego, CA, USA).

Statistical Analysis The quality of the sequencing data was first checked with the FastQC program [28]. Then, data were mapped to the reference human genome GRCh38 with the STAR aligner [29]. The calculation of read counts was performed with HTSeq [30]. All genes with very low expression (below 5 counts) across the examined samples were discarded. Because there were no replicates, the edgeR method [31], recommended for such an experimental design, was used for the differential expression analysis. We used the value 0.75 as an approximation of the dispersion parameter, based on our previous experience with similar data. The gene ontology analysis was performed using the systemPipeR package [32]. Genes with an absolute value of the log fold change higher than 3 and an abundance, measured by log counts per million, higher than 1 were selected as important. All statistical analyses were carried out using R software v. 4.2.3 [33].

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical reasons.
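The preprocessing in the Statistical Analysis section (discarding genes with fewer than 5 counts, then working with logCPM and logFC) can be mirrored outside edgeR, which is an R/Bioconductor package. A Python sketch under that assumption; it reproduces only the filtering and the abundance/fold-change quantities on which the DEG thresholds operate, not edgeR's negative-binomial test with the fixed dispersion of 0.75:

```python
import numpy as np
import pandas as pd

# Sketch of the preprocessing described above, assuming one raw count vector
# per sample. Gene names and counts are illustrative placeholders.
counts = pd.DataFrame(
    {"patient": [1200, 3, 450, 9], "control": [15, 2, 440, 9000]},
    index=["HRNR", "LOWEXPR", "STABLE", "PLIN1"],
)

kept = counts[(counts >= 5).all(axis=1)]            # drop genes with <5 counts

lib_sizes = kept.sum(axis=0)
cpm = kept / lib_sizes * 1e6                        # counts per million per sample
log_cpm = np.log2(cpm.mean(axis=1) + 0.5)           # abundance; +0.5 avoids log(0)
log_fc = np.log2((cpm["patient"] + 0.5) / (cpm["control"] + 0.5))

deg = pd.DataFrame({"logCPM": log_cpm, "logFC": log_fc})
print(deg[(deg["logFC"].abs() > 3) & (deg["logCPM"] > 1)])
```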
Adding Streptomycin to an Intensified Regimen for Tuberculous Meningitis Improves Survival in HIV-Infected Patients In low- and middle-income countries, the mortality of HIV-associated tuberculous meningitis (TM) continues to be unacceptably high. In this observational study of 228 HIV-infected patients with TM, we compared the mortality during the first nine months of patients treated with standard antituberculosis therapy (sATT), intensified ATT (iATT), and iATT with streptomycin (iATT + STM). The iATT included levofloxacin, ethionamide, pyrazinamide, and double dosing of rifampicin and isoniazid, and was given only during the hospital admission (median 7 days, interquartile range 6–9). No mortality differences were seen between patients receiving the sATT and the iATT. However, patients receiving the iATT + STM had significantly lower mortality than those in the sATT group (hazard ratio [HR] 0.47, 95% confidence interval [CI] 0.24 to 0.93). After adjusting for other covariates, the mortality hazard of the iATT + STM versus the sATT remained statistically significant (adjusted HR 0.2, 95% CI 0.09 to 0.46). Other factors associated with mortality were previous ATT and low albumin concentrations. The mortality risk increased exponentially only with CD4+ lymphocyte concentrations below 100 cells/μL. In conclusion, the use of iATT resulted in a clinically important reduction in mortality compared with the standard of care only if associated with STM. The results of this study deserve further research.

Introduction Over half of the patients with HIV-associated tuberculous meningitis die soon after diagnosis, and many of the survivors suffer from chronic neurological sequelae [1]. In spite of the poor prognosis, there has not been any major breakthrough in the chemotherapy of tuberculous meningitis in the last decades [2]. However, in recent years, there has been a growing interest in finding new intensified regimens that could result in improved survival [3][4][5]. In a phase two randomized trial investigating the effect of moxifloxacin and intravenous rifampicin during the first two weeks of treatment of tuberculous meningitis, higher exposure to rifampicin was associated with a survival benefit compared with a standard antituberculosis therapy (sATT) [3,6]. Streptomycin (STM) was the first drug to reduce mortality in the treatment of tuberculous meningitis, in the nineteen forties [7]. Interestingly, the addition of isoniazid and para-aminosalicylic acid to STM in the nineteen fifties achieved survival rates similar to the ones achieved with the currently recommended ATT [8]. In more recent studies, STM resistance is associated with slower CSF clearance of mycobacteria and might be associated with a poorer prognosis in patients with isoniazid resistance [9,10]. However, the availability of less toxic drugs with better CSF penetration has limited the use of STM in the treatment of tuberculous meningitis [11]. In a previous study from our cohort, we observed a mortality reduction in HIV-associated tuberculous meningitis after the implementation of an intensified ATT (iATT) [5]. However, some of the patients included in the iATT group in the previous study also received STM, following the national guidelines for the treatment of tuberculosis for patients with a previous history of ATT (category 2 ATT) [12].
In this study, we aimed to assess the effect of STM on the effectiveness of the iATT by comparing three treatment groups: sATT, iATT, and iATT with STM (iATT + STM). The study was conducted in the setting described previously [13]. The HIV epidemic there is largely driven by heterosexual transmission and is characterized by low CD4 cell counts at presentation, poor socioeconomic conditions, and high levels of illiteracy [14][15][16].

Setting and Patients. For this study, we included all HIV-infected patients diagnosed with tuberculous meningitis from 1 January 2011 to 1 October 2014 from the VFHCS database. The selection of patients from the database was executed on 14 March 2015. Patients who did not meet the proposed criteria for definite, probable, or possible tuberculous meningitis were excluded from the analysis [17].

Treatment. The management of tuberculous meningitis in our cohort has been described in detail elsewhere [5]. All patients with tuberculous meningitis were admitted to the hospital. Before 29 January 2012, patients were treated with a sATT (isoniazid 300 mg, rifampicin 450 mg, ethambutol 800 mg, and pyrazinamide 1500 mg) and, after 29 January 2012, patients received an iATT (isoniazid 600 mg, rifampicin 900 mg, pyrazinamide 1500 mg, levofloxacin 750 mg, and ethionamide 750 mg) while admitted (sATT was given after discharge). Following Indian guidelines for tuberculosis, in patients who had received ATT for at least one month in the past, 750 mg of intramuscular streptomycin was added during the first two months of ATT [12]. Intravenous dexamethasone was given and was rapidly tapered during admission. Patients not on ART at the time of ATT initiation were counselled to start ART 14 days after hospital discharge.

Statistical Analysis. We used time-to-event methods to study the mortality during the first nine months after the diagnosis of tuberculous meningitis. Time was measured from ATT initiation to death. Patients who did not die during the study period were censored at nine months or at their last visit date, whichever occurred first. Univariate and multivariate analyses were performed using Cox proportional hazards models. The proportional hazards assumption was assessed using log-log survival curves and tests based on Schoenfeld residuals [18]. The log-linearity assumption was checked for all continuous variables. Continuous variables that did not have a linear relationship with the log-hazard were transformed using restricted cubic splines with four knots [19]. As the coefficients of restricted cubic splines are difficult to interpret, the relationship of continuous covariates with the event of interest was presented graphically [20]. The statistical analysis was performed using Stata Statistical Software (Stata Corporation; Release 12.1. College Station, Texas, USA). The VFHCS was performed according to the principles of the Declaration of Helsinki and was approved by the Ethics Committee of the Rural Development Trust Hospital.

Results. Kaplan-Meier survival estimates by treatment group are shown in Figure 1, and the univariate and multivariate analyses of factors associated with mortality are presented in Table 2. In the univariate analysis, patients in the sATT and iATT groups had a similar mortality risk (iATT versus sATT hazard ratio [HR] 0.90, 95% confidence interval [CI] 0.62 to 1.32). Patients in the iATT + STM group had a lower mortality risk than patients in the sATT group (HR 0.47, 95% CI 0.24 to 0.93), and the mortality difference at nine months was 23.3% (95% CI 2.1 to 44.5).
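The time-to-event setup described above (Cox proportional hazards models with censoring at nine months and Schoenfeld-residual-based checks of the proportional hazards assumption) was done in Stata. A minimal sketch of an equivalent setup in Python's lifelines package, with fabricated placeholder rows rather than study data:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Minimal sketch of the Cox model structure; all rows are fabricated
# placeholders to show the setup, not study data.
df = pd.DataFrame({
    "months":      [9.0, 1.2, 3.5, 9.0, 2.0, 9.0, 9.0, 6.5, 9.0, 4.0],  # censored at 9
    "died":        [0,   1,   1,   0,   1,   0,   0,   1,   0,   1],
    "iatt":        [0,   0,   0,   1,   1,   1,   0,   0,   0,   0],    # iATT vs sATT
    "iatt_stm":    [0,   0,   0,   0,   0,   0,   1,   1,   1,   0],    # iATT + STM vs sATT
    "prior_att":   [0,   1,   0,   0,   1,   0,   1,   1,   0,   1],
    "albumin_gdl": [3.6, 2.1, 3.3, 3.9, 2.4, 3.2, 2.6, 2.2, 3.8, 2.5],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")  # all other columns as covariates
cph.print_summary()                  # hazard ratios with 95% CIs
cph.check_assumptions(df)            # Schoenfeld-residual-based PH diagnostics
```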
In the multivariate analysis, being previously treated for tuberculosis (adjusted HR [aHR] 3.23, 95% CI 1.68 to 6.24), low albumin concentrations (aHR 0.74 per increase of 1 g/dL, 95% CI 0.58 to 0.95), and low CD4+ lymphocyte concentrations (Figure 2) were independently associated with increased mortality. The risk of death increased exponentially only with CD4+ lymphocyte concentrations below 100 cells/μL (P value = 0.013). Compared with the sATT, the use of the iATT + STM was associated with a reduced risk of mortality (aHR 0.2, 95% CI 0.09 to 0.46), but the use of the iATT without STM was not (aHR 1.14, 95% CI 0.74 to 1.76). In a sensitivity analysis, we removed previous tuberculosis treatment from the multivariate model, and the use of the iATT + STM remained significantly associated with reduced mortality (aHR 0.5, 95% CI 0.25 to 0.99).

Discussion These results complement the findings from a previous study of our cohort, where we observed a mortality reduction in HIV-associated tuberculous meningitis after the implementation of the iATT [5]. After adjusting for STM use, the iATT was only effective in reducing mortality when combined with STM. The results of previous studies suggest that rifampicin is the most important drug among those included in the iATT [3,6]. Unlike in a previous clinical trial using intravenous rifampicin for 14 days [3], we did not observe a survival benefit when the iATT was given without STM. In a recent clinical trial comparing the bactericidal effect of the standard dose of rifampicin (10 mg/kg) with higher doses, an improved bactericidal activity was achieved only with rifampicin dosing equal to or greater than 30 mg/kg [21]. It is possible that the oral dose of rifampicin used in our study (near 20 mg/kg) was not high enough to result in a higher bactericidal activity. STM was given only to patients previously treated for tuberculosis, which is a known risk factor for mortality and poor outcomes [16,22,23]. Given the poor CSF penetration of STM [24,25], the beneficial effect on survival of STM and the iATT is intriguing. In combination with a standard dose of rifampicin (10 mg/kg) in patients with pulmonary tuberculosis, streptomycin has a strong bactericidal activity during the first six days of ATT [26]. Our results suggest that STM and higher doses of rifampicin could have a synergistic effect against tuberculosis, but new studies are needed to confirm this hypothesis. The use of restricted cubic splines allowed for a flexible representation of the relationship between CD4+ lymphocyte concentrations and mortality. The mortality hazard increased exponentially with lower CD4+ lymphocyte concentrations, but only in patients with CD4+ lymphocytes <100 cells/μL. This nonlinear association between CD4+ lymphocytes and the log-hazard should be taken into account in future clinical trials or studies investigating prognostic factors of HIV-associated tuberculous meningitis [27]. The study has some limitations. Observational studies can be biased due to unknown confounders not included in the multivariate analysis. Unlike in randomized clinical trials, treatment groups were not uniformly balanced, and patients in the iATT + STM group were more likely to be on ART at the time of ATT initiation. In addition, we did not have information about the drug sensitivity of mycobacteria among treatment groups or the severity of the clinical condition of the patients.
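Restricted cubic splines with four knots, as used above for the CD4-mortality relationship, can be written out explicitly: they add k−2 nonlinear basis columns to the linear term and are constrained to be linear beyond the outer knots. A sketch using Harrell's parameterization; the knot positions are an illustrative assumption, not the study's actual choice:

```python
import numpy as np

# Sketch of a restricted cubic spline basis with four knots (Harrell's
# parameterization). Knot placement is an illustrative assumption.

def rcs_basis(x, knots):
    """Return columns [x, C_1(x), ..., C_{k-2}(x)]; linear beyond the outer knots."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    t_k, t_k1 = t[-1], t[-2]
    scale = (t[-1] - t[0]) ** 2          # normalization for numerical stability
    cols = [x]
    for t_j in t[:-2]:
        c = (np.clip(x - t_j, 0, None) ** 3
             - np.clip(x - t_k1, 0, None) ** 3 * (t_k - t_j) / (t_k - t_k1)
             + np.clip(x - t_k, 0, None) ** 3 * (t_k1 - t_j) / (t_k - t_k1))
        cols.append(c / scale)
    return np.column_stack(cols)

cd4 = np.array([10, 50, 100, 180, 300, 450])
X = rcs_basis(cd4, knots=[25, 100, 250, 500])  # 4 knots -> 1 linear + 2 spline columns
print(X.shape)                                 # (6, 3)
```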
On the other hand, we did not exclude patients because of the severity of tuberculous meningitis, so the study reflects the real-life situation of HIV-associated tuberculous meningitis in a resource-limited setting.

Conclusions In this cohort study in a resource-poor setting, the use of the iATT resulted in reduced mortality only when combined with STM in HIV-infected patients with tuberculous meningitis. Because the study is observational in nature, we should be cautious about our findings. However, the mortality reduction was clinically important, and the results of this study deserve further research.
Prevalence of extraintestinal manifestations in Korean inflammatory bowel disease patients Background The prevalence of inflammatory bowel disease (IBD) in South Korea is increasing. Although extraintestinal manifestations (EIMs) are an important factor in the clinical outcomes of IBD patients, EIMs have not yet been investigated in Korea. Thus, we conducted a cross-sectional study to assess the prevalence of EIMs in Korean IBD patients. Methods The 2014 claims data from the National Health Insurance System (NHIS) of Korea were used. IBD patients were identified by codes for Crohn disease (CD) and ulcerative colitis (UC) in the NHIS registration system for rare or intractable diseases. International Classification of Diseases, Tenth Edition codes were used to identify EIM cases. To estimate the prevalence of EIMs in the general population of Korea, we used national sample data. Standardized prevalence ratios (SPRs) were calculated to compare the prevalence rates of EIMs among IBD patients to those among the general population of Korea. Results A total of 13,925 CD patients and 29,356 UC patients were identified. CD and UC patients differed in terms of demographics and medication utilization. Among the 17 EIMs investigated, pyoderma gangrenosum, osteomalacia, Sweet syndrome, and scleritis were observed in very few patients. The SPRs were greater than 1 for all EIMs. Aphthous stomatitis, rheumatoid arthritis, and osteoporosis were highly prevalent in both CD and UC patients, but the SPRs of these EIMs were not high. Conclusion The study confirmed that EIMs are more prevalent among IBD patients than among the general population of Korea. The prevalence of EIMs in IBD patients suggests the need for greater attention and effort in clinical practice.

Methods The national sample data used to estimate the prevalence of EIMs in the general population were obtained from the Big Data Hub (http://opendata.hira.or.kr) operated by the Korean Health Insurance Review and Assessment Service. The prevalence of EIMs in the general population of Korea was measured using the same International Classification of Diseases, Tenth Edition (ICD-10) codes as those used in IBD patients [17].

Inclusion criteria of patients and the comparison group In this study, CD and UC were defined according to the corresponding ICD-10 codes (CD with K50, and UC with K51) and registration codes from the rare/intractable diseases (RID) patient-support program (CD: V130, UC: V131).
The RID registration program was established in 2009 in order to lessen the burden on patients suffering from these diseases in Korea. For these IBD patients to be eligible for benefits, their doctor needs to submit a separate form to prove that the condition has been confirmed. Although the validity of these codes has not been evaluated in a separate publication, it was confirmed through a review of the medical records in a study by Kim et al. [10]. As the denominator for investigating the prevalence of EIMs, IBD patients in the 2014 NHID data between January 1, 2014 and December 31, 2014 were defined according to the following criteria: 1) a diagnosis with the corresponding ICD-10 codes (CD: K50, UC: K51) in at least 1 claim, and 2) the presence of a registration code from the RID patient-support program (CD: V130, UC: V131) in at least 1 claim. Subjects who were under 19 years of age or older than 75 years of age were excluded. To calculate the prevalence of EIMs in the general population of Korea, the same criteria were applied to the 2014 National Patient Sample data. Age, sex, and drug utilization patterns were compared between CD and UC patients. The variables considered in the analysis of drug utilization patterns were anti-inflammatory drugs (corticosteroids, mesalazine, and balsalazide), immunosuppressants (methotrexate, azathioprine, sulfasalazine, and 6-mercaptopurine) and tumor necrosis factor (TNF)-alpha inhibitors (adalimumab, infliximab, golimumab, and etanercept). To define the age and sex of patients, we used the National Health Insurance subscriber information. In the NHID, this information was obtained from the 2014 subscriber information database. The NPS did not have such a database, so the age of patients was determined based on their age recorded in the first claim in 2014. Anti-inflammatory drugs, immunosuppressants, and TNF-alpha inhibitors were identified using the NHID formulary, and exposure to a drug was defined as the presence of at least one prescription code between January 1, 2014 and December 31, 2014. The EIMs in the analysis were categorized into 4 categories, as listed here (a selection sketch follows the statistical analysis description below): ophthalmologic EIMs (scleritis, episcleritis, and iridocyclitis), hepatopancreaticobiliary EIMs (cholelithiasis, sclerosing cholangitis, and acute pancreatitis), dermatologic EIMs (aphthous stomatitis, psoriasis, erythema nodosum, pyoderma gangrenosum, and Sweet syndrome), and musculoskeletal EIMs (rheumatoid arthritis, psoriatic arthritis, ankylosing spondylitis, sacroiliitis, osteoporosis, and osteomalacia). The corresponding ICD-10 codes are presented in S1 Table.

Statistical analysis The baseline characteristics were summarized using the mean and standard deviation for continuous variables and frequency and proportion for categorical variables. The Student t-test was used for the comparison of continuous variables, and the chi-square test or the Fisher exact test was used for categorical variables, as appropriate. The prevalence of an EIM was calculated as the number of patients with that EIM divided by the number of IBD patients. The prevalence of each EIM is presented for CD and UC patients, respectively, with 95% confidence intervals (CIs). We used indirect standardization to compare the EIM prevalence between IBD patients and the general population of Korea [18,19]. The distribution of EIMs in the general population of Korea by age and sex is presented in S2 Table.
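The case definition above combines two claim-level criteria with an age restriction. A minimal pandas sketch of that logic; the claims-table layout and the toy rows are assumptions for illustration, not the actual NHIS schema:

```python
import pandas as pd

# Minimal sketch of the case definition: a patient counts as CD (or UC) if at
# least one claim carries the ICD-10 code AND at least one claim carries the
# RID registration code, with ages restricted to 19-75. Toy rows only.
claims = pd.DataFrame({
    "patient_id": [1,      1,     2,      2,    3,     4],
    "icd10":      ["K50", "K50", "K51", "K51", "K50", "K51"],
    "rid_code":   ["V130", None, "V131", None,  None, "V131"],
    "age":        [34,     34,    52,     52,   28,    80],
})

def ibd_patients(df: pd.DataFrame, icd10: str, rid: str) -> set:
    has_icd = set(df.loc[df["icd10"].str.startswith(icd10), "patient_id"])
    has_rid = set(df.loc[df["rid_code"] == rid, "patient_id"])
    in_age = set(df.loc[df["age"].between(19, 75), "patient_id"])
    return has_icd & has_rid & in_age

print(ibd_patients(claims, "K50", "V130"))  # {1}; patient 3 lacks the RID code
print(ibd_patients(claims, "K51", "V131"))  # {2}; patient 4 excluded by age
```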
To estimate the age/sex-standardized prevalence ratio, the expected number of events for each EIM in CD and UC patients was calculated by multiplying the age/sex-specific EIM prevalence in the general population by the number of patients in each age/sex stratum of CD and UC patients, and then summing the total number of expected events. The age/sex-standardized prevalence ratios with 95% CIs were calculated for each EIM as the number of observed events divided by the number of expected events. All statistical tests were 2-sided, and P values < 0.0038 (0.05/13, where 13 equals the number of tests) were considered to indicate statistical significance, based on the Bonferroni correction to cope with multiple comparisons. SAS version 9.4 (SAS Institute, Inc., Cary, NC, USA) was used for the statistical analysis.

Ethical considerations This study protocol was exempted from review by the Institutional Review Board of Seoul National University Hospital and the Seoul National University College of Medicine (IRB number: E-1510-071-711).

Results A total of 13,925 CD patients and 29,356 UC patients were included in the analysis. The CD patients were younger and more likely to be male than the UC patients. The use of anti-inflammatory drugs was less frequent in the CD patients than in the UC patients, but the use of immunosuppressants was more common, except for sulfasalazine. The use of TNF-alpha inhibitors was more common in the CD patients than in the UC patients, and the most commonly used such drugs were adalimumab and infliximab (Table 1). Among the 17 EIMs investigated, pyoderma gangrenosum, osteomalacia, Sweet syndrome, and scleritis were observed in very few patients. We did not find any female patients with Sweet syndrome. No female CD patients showed scleritis. Ankylosing spondylitis, aphthous stomatitis, acute pancreatitis, cholelithiasis, rheumatoid arthritis, and osteoporosis were found in more than 10 patients per 1000 CD cases, while the prevalence rates of psoriasis, aphthous stomatitis, acute pancreatitis, cholelithiasis, rheumatoid arthritis, osteoporosis, and iridocyclitis in UC patients were more than 10 patients per 1000 cases (Table 2, Table 3). The age/sex-standardized prevalence rate ratios, which were calculated using the sample data of the general population of Korea, were greater than 1 for all EIMs (Table 4). However, the 95% CIs of osteomalacia, Sweet syndrome, episcleritis, and scleritis in CD patients, which had a small number of observed events, included unity. Osteomalacia and Sweet syndrome in UC patients also showed standardized prevalence ratios without statistical significance. Aphthous stomatitis, rheumatoid arthritis, and osteoporosis were highly prevalent in both CD and UC patients, but the standardized prevalence rate ratios of these EIMs were not high.

Discussion This is the first study to evaluate the prevalence rates of 17 ophthalmologic, hepatopancreaticobiliary, dermatologic, and musculoskeletal EIMs with a cross-sectional design using Korean national health insurance claims data from 2014. Statistically significantly higher standardized prevalence rate ratios were observed, except for some EIMs with extremely low frequencies. Among the statistically significant EIMs, the highest standardized morbidity rate ratio was estimated for psoriatic arthritis, followed by erythema nodosum and sclerosing cholangitis in CD, and psoriatic arthritis, sclerosing cholangitis, and ankylosing spondylitis in UC.
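The indirect standardization described above reduces to a stratum-weighted sum: expected events are the general-population prevalence in each age/sex stratum multiplied by the number of IBD patients in that stratum, summed over strata, and the SPR is observed divided by expected. A sketch with placeholder numbers:

```python
import pandas as pd

# Sketch of the indirect standardization described above. Stratum prevalences
# and counts are placeholder numbers, not study data.
strata = pd.DataFrame({
    "stratum":        ["M 19-39", "M 40-59", "F 19-39", "F 40-59"],
    "pop_prevalence": [0.002, 0.004, 0.003, 0.006],   # general population
    "n_ibd":          [4000, 2500, 3000, 2000],       # IBD patients per stratum
})
observed = 95                                          # EIM cases among IBD patients

expected = (strata["pop_prevalence"] * strata["n_ibd"]).sum()   # 39.0
spr = observed / expected                                       # SPR = observed / expected
print(round(expected, 1), round(spr, 2))                        # 39.0 2.44

# Bonferroni-corrected significance threshold used in the paper:
print(round(0.05 / 13, 4))                                      # 0.0038
```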
The standardized prevalence rate ratios were generally higher in CD than in UC. Age, sex, and medication utilization patterns were significantly different between CD and UC patients, and differences between CD and UC were also found in the standardized morbidity rate ratios. The medication utilization pattern is likely to be influenced by physician subspecialty. Because musculoskeletal EIMs are the most frequent, it may be expected that there will be differences in the medicines prescribed for patients with IBD depending on whether they are seen by a rheumatologist or a non-rheumatologist. The treatment of severe musculoskeletal EIMs may often involve close collaboration between rheumatologists and gastroenterologists. For example, when choosing an immunomodulatory agent, rheumatologists tend to prefer methotrexate and gastroenterologists tend to prefer azathioprine. Regarding TNF inhibitors, etanercept has no effect on IBD, so infliximab and adalimumab tend to be preferred. Rheumatologists also tend to prescribe more sulfasalazine, while gastroenterologists prefer mesalazine. According to Koutroubakis et al., arthralgia, peripheral arthritis, aphthous stomatitis, and erythema nodosum were the most frequent EIMs among 1860 Greek patients with IBD. EIMs were more prevalent in females and in patients with CD [20]. In a retrospective study using the national health insurance database of Taiwan between 1998 and 2011, including 3153 IBD patients, peripheral arthritis was the most common EIM, followed by ankylosing spondylitis and osteoporosis. The prevalence of EIMs was higher in CD patients and in females [21]. In a population-based study that used the Manitoba Health administrative databases (1984–1996), EIMs were defined based on the presence of at least five relevant claims in IBD patients. The 10-year prevalence rates for iritis/uveitis, primary sclerosing cholangitis (PSC), ankylosing spondylitis, pyoderma gangrenosum, and erythema nodosum using this definition ranged from 0.6% to 2.8%. Compared with the matched general population cohort, the prevalence of EIMs was relatively high in IBD patients [5]. As the prevalence of EIMs varies across studies owing to differences in the study periods and the definitions of EIMs, it is difficult to compare study findings directly. However, the tendency for EIMs to be more prevalent in females, in CD patients, and in IBD patients compared with the general population, which was reported in previous studies, was also confirmed in our study.

Ophthalmologic EIMs Following the joints and the skin, the eye is the site of many immune-related EIMs [22]. According to previous studies conducted in India, Switzerland, and Turkey, ophthalmologic EIMs occur in up to 13% of IBD patients [23][24][25], and are more frequent in CD patients than in UC patients [26], although our results showed that they tended to be more common in UC patients on a frequency basis. However, this difference was not prominent when the standardized prevalence rate ratio was compared. Rather, the main inconsistency we found with previous studies is that iridocyclitis was more frequently observed than episcleritis. Previous studies have found episcleritis to be the most common ocular EIM, whereas uveitis was relatively rare [7].
In our study, iridocyclitis was more common, but a previous study conducted in Switzerland reported a higher prevalence rate for uveitis than was found in this study [24]. Thus, episcleritis may be less likely to be a comorbidity in Korean IBD patients.

Hepatopancreaticobiliary EIMs Up to 50% of IBD patients may have hepatobiliary manifestations [7]. PSC has a well-known association with IBD. Seventy-five percent of PSC patients have UC, and 5%-10% of PSC patients have CD [27,28]. Moreover, 5% of UC patients and 2% of CD patients develop PSC [29]. Ye et al. reported that the prevalence of PSC was 1.1% among 1849 Korean ulcerative colitis patients, using medical records (from July 1977 to September 2009) of a tertiary hospital [30]. In our study, the prevalence of sclerosing cholangitis was one-tenth of the reported prevalence, but a tendency for a higher prevalence in UC patients than in CD patients was observed. Cholelithiasis is a common EIM in IBD patients, and is frequently seen in cases of CD, especially those involving the ileum, where interruption of the enterohepatic circulation plays a role; it has been reported in up to 13%-34% of such cases, and it is known to occur more frequently in IBD patients than in the general population of Sweden [31]. Our data indicated that this condition occurred more frequently in CD patients than in UC patients, and this tendency was more pronounced in comparison with the general population. Acute pancreatitis is a common side effect of 6-mercaptopurine or azathioprine treatment. However, it is also associated with gallstones and is more common in cases of CD. A retrospective multicenter study in Spain reported that acute pancreatitis occurred in 1.6% of IBD patients [32]. A Dutch study reported that acute pancreatitis was four times more common in CD patients and two times more common in UC patients than in the general population [33], which is very similar to our data.

Dermatologic EIMs It has been reported that 2%-34% of IBD patients experience dermatologic EIMs [34]. In a study performed in Turkey, dermatologic EIMs were present in 9.3% of patients; erythema nodosum was present in 7.4% of patients, and pyoderma gangrenosum in 2.3% [35]. Erythema nodosum has been reported in 15% of CD patients and in 10% of UC patients [36], and it has been reported to be more common in women [36]. Our study also confirmed the tendency for this condition to be more common in CD patients and its higher prevalence in women. Pyoderma gangrenosum is a much rarer, more severe, and debilitating EIM that is more common in UC patients than in CD patients [37][38][39]. Its prevalence has been reported to be 1%-10% in UC patients and 0.5%-20% in CD patients [40]. Psoriasis is more common in CD patients, and is known to be more common in CD than in the general population of the United States [41]. Although the prevalence rates identified in our study were lower, the standardized prevalence rate ratio was higher in UC patients than in CD patients. Oral lesions, including aphthous stomatitis, are known to be common in IBD patients, and are more common in CD patients [9,22,42]. In our data, the prevalence rate was 3%-4% in both CD and UC patients; this rate was significantly higher than that observed in the general population of Korea, but the magnitude of the difference was not large. Acute febrile neutrophilic dermatosis (Sweet syndrome) is a rare dermatologic EIM [43][44][45]. The literature on its prevalence is scarce, and only case reports are available.
Our data differ from previous reports: Sweet syndrome has been reported to be more common in women, but there were no female patients with this condition in our data. However, there were not enough cases in our data to assess the prevalence rate in a meaningful way.

Musculoskeletal EIMs

Traditionally, the main musculoskeletal EIM is seronegative arthralgia/arthritis [46]. Approximately 5%-10% of UC patients and 10%-20% of CD patients experience these symptoms [47]. Axial arthropathies are less common than peripheral arthralgia/arthritis, and have been reported in 3% to 5% of IBD patients [22,26,48]. Asymptomatic sacroiliitis can be observed in radiological examinations in up to 52% of CD patients [49]. Although ankylosing spondylitis and isolated sacroiliitis are distinct diseases, sacroiliitis can be reported as ankylosing spondylitis in the health insurance claims data, so it is possible that the prevalence of sacroiliitis was underestimated and that of ankylosing spondylitis was overestimated. When ankylosing spondylitis and sacroiliitis were grouped as axial arthropathy, and rheumatoid arthritis and psoriatic arthritis were grouped as peripheral arthralgia/arthritis, the trends reported in the existing literature were reproduced in our study. Patients with IBD have a high risk of osteoporosis, with contributing causes including corticosteroid therapy, decreased physical activity, inflammation-related bone resorption, and dietary malabsorption of minerals [50]. As a result, patients with IBD have a greater risk of fractures than the general population of Korea, and this risk is elevated in both men and women. The prevalence of osteoporosis has been found to be 7%-35% in CD patients [51-53] and 18% in UC patients [52]. In our data, the prevalence rate was higher in women, and the standardized prevalence rate was higher in CD patients. This study has the advantage of high representativeness because it used health insurance claims data covering all Koreans. Because of some limitations, however, caution is needed for the correct interpretation of the results. First, the validity of the diagnostic codes for EIMs should be carefully considered. There is a possibility that the prevalence rate was underestimated due to the low claim rate in routine clinical practice for erythema nodosum, pyoderma gangrenosum, and osteomalacia, while overestimation of EIMs could have occurred due to the inclusiveness of the ICD-10 codes for rheumatoid arthritis and iridocyclitis. For the comparison with the general population, however, there is little concern about underestimation or overestimation, because the prevalence was measured in the same way in both databases. Even though we calculated standardized prevalence ratios by adjusting for age and sex, other possible confounding variables were not considered. In particular, obesity has recently been reported to be associated with EIMs [54]; however, it was not possible to evaluate whether patients were obese using the claims database, because this information was not included. Moreover, no further examination was conducted of the relationships between drug utilization patterns and EIM prevalence in this study. Given the limitations of the cross-sectional design for evaluating drug effects, we suggest that the associations between EIMs and drug utilization should be interpreted carefully, together with evidence from study designs higher on the evidence hierarchy.
A cohort study with a new-user design would be needed to provide relevant evidence, since the rarity of both IBD and some EIMs will limit the availability of appropriate patients for randomized controlled trials. This study presents data on the prevalence of each EIM in IBD patients. Considering the possible impact of EIMs on patients' quality of life, the high prevalence presented in this study suggests that more attention and appropriate treatment are needed. The study also confirmed that EIMs are more prevalent in Korean IBD patients than in the general population of Korea, which suggests that more attention should be paid to EIMs in IBD patients in everyday clinical practice, and that our results can be used for patient education.
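The age- and sex-standardized prevalence rate ratios used throughout the discussion above can be illustrated with a short computation. The following is a minimal sketch of indirect standardization, not the authors' code, and every number in it is hypothetical:

```python
# Indirectly standardized prevalence ratio: observed EIM cases in IBD
# patients divided by the cases expected if the general-population,
# age/sex-specific prevalence rates applied to the IBD cohort.
# All numbers below are hypothetical.

strata = [
    # (label, ibd_population, ibd_cases, general_population_prevalence)
    ("male, <40",    1200, 30, 0.010),
    ("male, >=40",    800, 40, 0.025),
    ("female, <40",  1500, 60, 0.015),
    ("female, >=40",  900, 45, 0.030),
]

observed = sum(cases for _, _, cases, _ in strata)
expected = sum(n * rate for _, n, _, rate in strata)

spr = observed / expected  # standardized prevalence ratio
print(f"observed={observed}, expected={expected:.1f}, SPR={spr:.2f}")
```

A ratio above 1 indicates that the condition is more prevalent in the IBD cohort than expected from the general population with the same age and sex structure.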
Characterization of M8 Wheat Mutant Adaptability to Low Land

The wheat requirement in Indonesia is still fulfilled by imports, which increase annually. To reduce this import dependence, Indonesia needs to raise domestic wheat production suited to Indonesian agroclimatic conditions through mutation breeding of wheat. This study aimed to characterize several wheat mutant genotypes that are adaptive to lowland conditions. The experiment was arranged in a completely randomized block design with 16 observed wheat genotypes (G): (G1) N 200 2.4.B.6, (G2) N 200 2.3.3, (G3) N 200 2.5.2, (G4) N 350 3.6.2, (G5) N 350 3.7.1, (G6) N 300 3.6.1, (G7) N 350 3.1.3, (G8) N 250 3.7.1, (G9) M 200 1.7.1, (G10) S 300 7.9.1, (G11) S 300 2.1, (G12) D 200, and several comparative varieties: (G13) Guri-3, (G14) Selayar, (G15) Nias, and (G16) Dewata. The results showed that the lowland-adapted M8 wheat mutants with high productivity were N 200 2.4.B.6 (2.75 t.ha-1), N 200 2.3.3 (2.69 t.ha-1), and N 350 3.6.2 (2.35 t.ha-1). The characters with the highest heritability were number of tillers, number of productive tillers, seed weight per panicle, and production. Meanwhile, the characters correlated with production were plant height, number of tillers, number of productive tillers, harvesting age, seed-filling period, number of spikelets per panicle, percentage of empty florets, number of seeds per panicle, and seed weight per panicle.

A. Introduction

B. Methodology

Data analysis was performed by analysis of variance with the Least Significant Difference (LSD) test at a 0.05 confidence level. To find out the relationships between the characters, regression and correlation studies were carried out, while the genetic diversity of the mutant genotypes was assessed by heritability analysis.

C. Results and Discussion

The LSD test results in Table 1 indicate that N 200 2.4.B.6 (g1) had the greatest plant height (57.00 cm), significantly different from the comparative varieties Guri-3 (a) and Selayar (b). N 200 2.4.B.6 (g1) also had the most tillers (5.60 tillers), significantly different from the comparative varieties Selayar (b), Nias (c), and Dewata (d), while for the number of productive tillers, N 200 2.4.B.6 (g1) had the most productive tillers (4.53 tillers), significantly different from the comparative varieties Guri-3 (a) and Dewata (d). The number of productive tillers was affected by an environmental factor, air temperature: higher air temperature tended to inhibit productive tiller growth. This is in line with Andriani & Isnaini (2011), who stated that the number of tillers depends on the variety and environmental conditions. Each tiller has the potential to produce one panicle, so the number of tillers contributes importantly to the harvested product. This is similar to Rachmadani, Damanhuri, & Soetopo (2017), who found that the number of tillers can directly affect the product per plant and can serve as a selection criterion for obtaining highly potential wheat genotypes. The more productive tillers there are, the more seed the plants produce.
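The significance statements in Tables 1-4 rest on the least significant difference computed from the ANOVA. A minimal sketch of how such a threshold is derived and applied follows; the error mean square, degrees of freedom, replication count, and the Guri-3 mean are hypothetical, not values from the paper:

```python
# Fisher's LSD at alpha = 0.05: two genotype means differ significantly
# if |mean_i - mean_j| exceeds LSD = t(alpha/2, df_error) * sqrt(2*MSE/r),
# where MSE is the ANOVA error mean square and r is the number of
# replications (blocks) per genotype mean.
import math
from scipy import stats

mse = 4.2        # hypothetical error mean square from the ANOVA
r = 3            # hypothetical replications per genotype
df_error = 30    # hypothetical error degrees of freedom

t_crit = stats.t.ppf(1 - 0.05 / 2, df_error)
lsd = t_crit * math.sqrt(2 * mse / r)

mean_g1, mean_guri3 = 57.00, 51.20   # plant height (cm); Guri-3 value assumed
print(f"LSD = {lsd:.2f} cm; significant: {abs(mean_g1 - mean_guri3) > lsd}")
```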
Based on the LSD test results in Table 2, N 350 3.1.3 (g7) flowered earliest (43.33 DAP), significantly different from the comparative varieties Guri-3 (a) and Selayar (b), while for the seed-filling rate, N 200 2.4.B.6 (g1) had the fastest seed filling (27.47 days), significantly different from the comparative varieties Guri-3 (a), Selayar (b), and Dewata (d). The flowering day of wheat can also be affected by the environment, mainly temperature: high temperatures can accelerate the flowering process. Wheat planted in the lowlands (<400 m asl) flowers faster than wheat in the highlands. This is because the lowland environment is more supportive of wheat growth in terms of temperature, humidity, and sunlight. Wahyu, Samosir, & Budiarti (2013) studied wheat and reported that the flowering day of wheat in the lowlands was about 43-70 DAP. An earlier harvesting day was thought to be due to temperature stress in the lowlands. Besides environmental conditions, elevation can also affect the flowering day and harvesting period. Wahyu et al. (2013) stated that extremely high air temperatures could affect the harvesting period of several wheat varieties in low-elevation areas. The LSD test results in Table 3 show that N 250 3.7.1 (g8) had the longest panicles (8.47 cm), significantly different from the comparative varieties Guri-3 (a), Selayar (b), and Dewata (d). The data on the number of spikelets per panicle show that N 200 2.4.B.6 (g1) had the highest value (13.20 spikelets), significantly different from the comparative varieties Guri-3 (a) and Nias (c). Furthermore, the data on the number of seeds per panicle indicate that N 200 2.4.B.6 (g1) had the highest value (28.13 seeds), significantly different from the comparative varieties. Panicle length is a yield component directly connected to the number of spikelets. According to Kirby (2002), the longer the panicle, the more spikelets are formed, giving the potential for a greater number of seeds. Panicle length also showed an interaction after the temperature stress treatment (Syuryawati, Rahmi, & Zubachtirodim, 2007). The LSD results in Table 4 show that N 200 2.4.B.6 (g1) had the greatest number of seeds per panicle (28.88 seeds), significantly different from the comparative varieties Guri-3 (a), Selayar (b), and Dewata (d). The seed weight per panicle data indicate that N 200 2.4.B.6 (g1) had the highest weight (0.79 g), as also found for the 1000-seed weight (28.84 g), which was significantly different from all comparative varieties, while the production data show that N 200 2.4.B.6 (g1) was the most productive (2.75 t.ha-1), significantly different from all comparative varieties. Commonly, each spikelet of a wheat panicle has three florets, each of which is filled with one wheat seed; therefore, the greater the number of spikelets, the greater the number of florets formed. This is similar to Wahyu et al. (2013), who stated that a greater number of empty florets per panicle means a lower number of seeds produced per panicle. The increased number and percentage of empty florets in the lowlands were caused by drought or lack of rain during the seed-filling period, together with increased temperature, which caused pollen development failure and inhibited seed production.
The number of empty florets directly reduces the seed weight per panicle and the wheat production per clump. A low percentage of empty florets is quite tolerable; Nur (2013) mentioned that selection of genotypes with low empty-floret levels should be performed to obtain highly temperature-tolerant genotypes for lowland cultivation. Table 5 shows that the highest heritability was found for the number of productive tillers (91.28%). Heritability depends on the genotypic and environmental variances: it is the proportion of the phenotypic variance attributable to genetic factors. Heritability is one of the genetic parameters considered for character selection (Wirnas et al., 2006; Suharsono & Jusuf, 2009; Sungkono et al., 2009; Syukur et al., 2010; Yunianti et al., 2010; Barmawi et al., 2013). Based on the highest heritability levels, the characters that can be considered as selection characters for choosing the best family are the number of tillers, number of productive tillers, seed weight per panicle, and production. Correlation denotes the strength of the relationship between the observed parameters. The correlation coefficient analysis in Table 6 indicates the correlation of the production character with the other characters: plant height, number of tillers, number of productive tillers, chlorophyll index, harvesting period, seed-filling period, number of spikelets per panicle, percentage of empty florets, production per clump, number of seeds per panicle, and seed weight per panicle are significantly correlated with production, with correlation values of 0.39, 0.75, 0.78, 0.56, -0.33, -0.37, 0.33, -0.53, 0.99, 0.58, and 0.62, respectively. Correlation analysis gives an overview of the degree of relationship between one character and the others, but the correlation value cannot explain the causal relationships among characters. Therefore, path analysis is important for elaborating the correlation coefficient; its results can describe how significant the direct and indirect effects of a character on the main character are (Rohaeni & Permadi, 2012). The use of correlation analysis and path analysis in determining the selection character has also been reported in many studies, including Milligan et al.

D. Conclusion

The results showed that, among the M8 wheat mutant genotypes in the lowland, the highest production was obtained by N 200 2.4.B.6 (2.75 t.ha-1), N 200 2.3.3 (2.69 t.ha-1), and N 350 3.6.2 (2.35 t.ha-1). The characters with high heritability in the M8 wheat mutants were the number of tillers, number of productive tillers, seed weight per panicle, and production. Furthermore, the characters highly correlated with production were plant height, number of tillers, number of productive tillers, harvesting period, seed-filling period, number of spikelets per panicle, percentage of empty florets, number of seeds per panicle, and seed weight per panicle.
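The broad-sense heritability values reported in Table 5 can be illustrated with a short computation from ANOVA mean squares. This is a minimal sketch under one common formulation (definitions of the phenotypic variance vary), and the mean squares and replication count below are hypothetical, not the paper's data:

```python
# Broad-sense heritability from a randomized block ANOVA:
# genotypic variance  Vg = (MS_genotype - MS_error) / r
# phenotypic variance Vp = Vg + MS_error  (one common formulation)
# heritability        H2 = Vg / Vp
# The mean squares and replication count below are hypothetical.

ms_genotype = 18.6   # hypothetical genotype mean square
ms_error = 1.4       # hypothetical error mean square
r = 3                # hypothetical number of replications

vg = (ms_genotype - ms_error) / r
vp = vg + ms_error
h2 = vg / vp
print(f"H^2 = {100 * h2:.2f}%")  # in the same percentage form as Table 5
```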
Proximate Composition, and l-Carnitine and Betaine Contents in Meat from Korean Indigenous Chicken

This study investigated the proximate composition and L-carnitine and betaine contents of meats from 5 lines of Korean indigenous chicken (KIC) with a view to developing highly nutritious meat breeds with health benefits from bioactive compounds such as L-carnitine and betaine in meat. In addition, the relevance of gender (male and female) and meat type (breast and thigh meat) was examined. A total of 595 F1 progeny (black [B], grey-brown [G], red-brown [R], white [W], and yellow-brown [Y]) from 70 full-sib families were used. The moisture, protein, fat, and ash contents of the meats were significantly affected by line, gender, and meat type (p<0.05). The males in line G and the females in line B showed the highest protein and the lowest fat contents. L-carnitine and betaine contents showed effects of meat type, line, and gender (p<0.05). The highest L-carnitine content was found in breast and thigh meats from line Y in both genders. The breast meat from line G and the thigh meat from line R had the highest betaine content in males. The female breast and thigh meats showed the highest betaine content in line R. These data could be valuable for establishing selection strategies for developing highly nutritious chicken meat breeds in Korea.

INTRODUCTION

Meat is a highly nutritious food because it can provide not only all the essential amino acids but also micronutrients such as minerals and vitamins (Biesalski, 2005). Among meat sources, chicken meat contains higher protein as well as lower fat and cholesterol contents than red meat, and consequently is considered superior for human health (Choe et al., 2010). In addition, it is cheaper than pork and beef and has fewer religious restrictions. For these reasons, the consumption of chicken meat has increased and is predicted to increase by as much as 34% by 2018 (OECD-FAO, 2009). In Korea, chicken meat has mainly been produced from broiler breeds, which have the benefits of fast growth, low production cost, and high meat yield, by selection within an intensive fattening system. Recently, increasing consumer interest in eating healthier meat has resulted in increasing interest in indigenous chicken breeds, because the meat of indigenous chicken breeds has higher protein and lower fat contents as well as unique flavors compared with broiler breeds (Choe et al., 2010; Jung et al., 2011; Jayasena et al., 2013). Therefore, indigenous chicken breeds are regarded as good sources for the production of meat with high nutritional value. Chicken and other meats contain various endogenous compounds known to be beneficial to humans (Schmid, 2009; Lee et al., 2015). In addition to the basic nutrients, these compounds have received much attention in terms of their bioactivities and their contribution to the nutritional value of meat. Among the bioactive compounds in meat, L-carnitine (γ-trimethylamino-β-hydroxybutyric acid) is a small, water-soluble, nitrogen-containing compound that is biosynthesized from lysine and methionine (Hoppel, 2003; Schmid, 2009). Its key role in the body is fat metabolism. The inner membrane of mitochondria is impermeable to fatty acyl-coenzyme A (CoA). However, as an ester with L-carnitine, fatty acyl-CoA is transported through the inner mitochondrial membrane and subsequently undergoes β-oxidation (Arslan et al., 2003; Schmid, 2009).
In addition, L-carnitine has other roles, such as buffering the ratio of acyl-CoA to CoA, branched-chain amino acid metabolism, removal of excess acyl groups, and peroxisomal fatty acid oxidation (Hoppel, 2003; Steiber et al., 2004). Betaine (N,N,N-trimethylglycine) is a zwitterionic compound and a methyl derivative of glycine. Betaine is used as an organic osmolyte and as a source of methyl groups (Craig, 2004). Consequently, it has been suggested to be an important nutrient in humans (Craig, 2004). Although L-carnitine and betaine can be synthesized in humans, the intake of these compounds from food could be advantageous for maintaining or improving health. A previous study found that supplementation of L-carnitine reduced body weight, serum triglycerides, and total cholesterol in mice (Jang et al., 2014). Flanagan et al. (2010) suggested that L-carnitine supplementation could prevent cardiovascular disease and control obesity. Betaine has been suggested as a nutrient that can protect cells and proteins from environmental stress and prevent chronic diseases in humans (Craig, 2004; Patrick, 2002). In addition, Cholewa et al. (2013) reported that betaine supplementation improved the performance and body composition of strength-trained males. Therefore, it seems likely that increasing the concentrations of L-carnitine and betaine would improve the nutritional value of meat. Recently, a governmental organization in Korea has been trying to develop a new chicken breed that can produce high-quality meat based on Korean indigenous chickens (KICs) to satisfy consumers' demands. For this project, five lines of KIC (i.e., black [B], grey-brown [G], red-brown [R], white [W], and yellow-brown [Y]) were proposed as candidates for selection. The present study was conducted to investigate the proximate composition and the L-carnitine and betaine contents of meats from these five lines of KIC in order to obtain potentially useful information for establishing selection strategies to develop a new chicken breed that can produce highly nutritious meat with health benefits from bioactive compounds such as L-carnitine and betaine.

Animals

A two-generation resource pedigree using the 5 lines of KIC was established and managed in this study. Within each line, three sires were mated with 14 to 15 dams to produce F1 chicks. In total, 595 F1 progeny from 70 full-sib families were used. Chickens were raised at the National Institute of Animal Science (NIAS) of Korea and fed ad libitum a commercial formula feed containing 18.2% protein and 2,859 kcal/kg metabolizable energy. The chicken care facilities and procedures met or exceeded the standards established by the Committee for Accreditation of Laboratory Animal Care at NIAS in Korea. This study was also conducted in accordance with "The Guide for the Care and Use of Laboratory Animals" published by the institutional Animal Care and Use Committee of NIAS, Korea. Chickens were weighed individually and slaughtered after 4 h of feed withdrawal at 20 weeks of age using conventional neck cuts and bleeding for 2 min; the feathers were then removed and the chickens were eviscerated. The carcasses were vacuum-packed after chilling in ice-cold water and stored in a refrigerator at 4°C for 24 h. The vacuum-packed carcasses were then frozen at -20°C until analysis.

Sample preparation

Before analysis, the frozen carcasses were thawed in a refrigerator at 4°C for 24 h.
Breast and thigh muscles were dissected from each thawed carcass. Then, the right and left breast and thigh muscles from each chicken were minced separately using a food mixer (CH180A, Kenwood Ltd, Hampshire, UK) for 30 s. Minced meat samples were used for analysis.

Determination of proximate composition

The proximate composition of the meats was determined by a slightly modified method of the Association of Official Agricultural Chemists (1995). Moisture content was determined by drying 3 g of sample in aluminum dishes for 15 h at 104°C. Crude protein content was measured by the Kjeldahl method (VAPO45, Gerhardt Ltd., Idar-Oberstein, Germany); the amount of nitrogen obtained was multiplied by 6.25 to calculate the crude protein content. Crude fat content was measured with a Soxhlet extraction system (TT 12/A, Gerhardt Ltd., Germany). Crude ash content was measured by burning 2 g of sample overnight in a furnace at 600°C.

Determination of L-carnitine and betaine contents

L-carnitine and betaine contents in the meat samples were determined by the method of Li et al. (2007) with some modifications. Each meat sample (3 g) was homogenized at 13,500 rpm for 30 s (T25b, IKA-Works Sdn Bhd, Selangor, Malaysia) with 10 mL of acetonitrile-methanol solution (9:1 v/v) and centrifuged at 2,090×g for 5 min at 4°C (Union 32R, Hanil Co., Ltd., Incheon, Korea). The supernatant was filtered into a 20-mL volumetric flask through a funnel plugged with glass wool. The remaining residue was again mixed with 10 mL of acetonitrile-methanol solution and centrifuged under the same conditions. The resulting supernatant was collected in the same volumetric flask, which was then filled with acetonitrile-methanol solution. Subsequently, 2 mL of this sample was mixed with 810 mg of Na2HPO4 and 90 mg of Ag2O (9:1 w/w) in a 15-mL tube by vigorous shaking and vortexing. The sample tubes were then dried by shaking without their caps in a shaking machine for 30 min and centrifuged again (Union 32R) at 2,090×g for 5 min at 4°C. A 0.5-mL aliquot of each supernatant was then mixed with 0.5 mL of derivatizing reagent (1.39 g of bromoacetophenone and 0.066 g of 18-crown-6 in 100 mL of acetonitrile) in a 15-mL tube, vortexed, and heated (80°C) for 60 min in a water bath. After cooling under running water, this mixture was filtered through a 0.2-μm membrane filter and injected into an Atlantis HILIC high-pressure liquid chromatography silica column (4.6 mm×150 mm, 3 μm, Waters) equipped with a Waters 1525 Pump and a Waters 717 Plus Autosampler (Millipore Co-Operative, Milford, MA, USA). A Waters 2487 Diode-Array Detector (Millipore Co-Operative, USA) was used at 254 nm to measure L-carnitine and betaine contents. Mobile phase A was 25 mM ammonium acetate adjusted to pH 3.0 with formic acid, and mobile phase B was acetonitrile. The mobile phase was supplied at 1.4 mL/min for 20 min with isocratic elution (90% A and 10% B). L-carnitine and betaine contents were calculated using standard curves of each compound. L-carnitine hydrochloride (≥98.0%) and betaine (≥99.0%) standards were obtained from Sigma-Aldrich (St. Louis, MO, USA).

Statistical analysis

In this study, the meats of all 595 chickens were analyzed, and all data (pooled data) were analyzed by multifactorial analysis of variance using the general linear model to investigate the effects of meat type (breast and thigh), gender (male and female), and line (five lines of KIC). After grouping the data by each meat type and gender, the data were analyzed by the general linear model to confirm the line effect within each meat type and gender, and the gender effect within each meat type and line. Tukey's multiple range test was used to compare significant differences between least square mean values (p<0.05). Least square mean values and the standard error of the least square means are reported. Additionally, Pearson's correlation coefficients were calculated (p<0.05). SAS software (version 9.3, SAS Institute Inc., Cary, NC, USA) was used for all statistical analyses.
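An equivalent analysis can be run with open-source tools. The following is a minimal sketch, not the authors' SAS code; the file name and column names (line, gender, meat_type, protein) are hypothetical:

```python
# Multifactorial ANOVA analogous to the SAS GLM described above,
# followed by Tukey's HSD for pairwise line comparisons.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("kic_meat.csv")  # hypothetical long-format data file

# Main effects of line, gender, and meat type on protein content
model = smf.ols("protein ~ C(line) + C(gender) + C(meat_type)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p values per factor

# Pairwise line comparisons within one meat type and gender
subset = df[(df["meat_type"] == "breast") & (df["gender"] == "male")]
print(pairwise_tukeyhsd(subset["protein"], subset["line"], alpha=0.05))
```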
Proximate composition

The proximate composition of breast meat from lines B, G, R, W, and Y of KIC is presented in Table 1. The moisture content of breast meat was not significantly different among the 5 lines in either males or females. The protein content of breast meat was significantly higher in line G than in line Y for males, and higher in line B than in lines R, W, and Y for females (p<0.05). Fat content was not significantly different in male breast meat among the 5 lines. In females, however, the fat content in line B was significantly lower than that in line G (p<0.05). The males of lines W and Y and the females of lines R and Y had significantly higher ash content than the same genders in the other lines (p<0.05). In the comparison of proximate composition between genders, the male breast meat in lines G and W had significantly higher moisture and ash contents than the female breast meat (p<0.05). On the other hand, the female breast meat in lines B and G had significantly higher protein and fat contents than the male breast meat (p<0.05). In thigh meat, lines W and Y had significantly higher moisture content in males than did lines B, G, and R (p<0.05); however, there was no significant difference in moisture content in females (Table 2). The protein content of thigh meat was significantly higher in males in lines B, G, and R than in males in lines W and Y (p<0.05); however, there was no significant difference in females. The thigh meats from males in lines B and G and females in line B showed significantly lower fat content compared with lines W and Y (p<0.05). Ash content was significantly higher in the thigh meat from males in lines W and Y and females in line Y compared with males in lines B, G, and R and females in lines B and G (p<0.05). In comparing the proximate composition of thigh meat between genders, the males in lines W and Y had significantly higher moisture content and lower protein content than the females (p<0.05). However, there were no significant differences in moisture and fat contents between genders in lines B, G, and R. A significant difference in fat content between males and females was found only in line G, in which females had higher values than males (p<0.05). The ash content of thigh meat from males was significantly higher than that of females only in line W (p<0.05). From the pooled data (presented as p and F values) of proximate composition in the present study, moisture, protein, fat, and ash contents were significantly affected by line (p<0.05), although the differences were small (Table 2). A previous study reported that the protein, fat, and ash composition of breast meat, as well as the moisture, protein, fat, and ash composition of thigh meat, differed between native breeds in Thailand and imported breeds (Wattanachant et al., 2004).
In the present study, gender and meat type were also significant factors influencing proximate composition (p<0.05). Thomas et al. (1984) reported that gender was one of the factors affecting the proximate composition of chicken meat; however, López et al. (2011) did not observe any such gender effect. Therefore, the effect of gender on the proximate composition of chicken meat remains an open question. Regarding the effect of meat type on proximate composition, Suchý et al. (2002) found that breast meat from Ross 308 and Cobb broilers had higher moisture, protein, and ash contents and lower fat content than the thigh meat. These results are at least partially consistent with those of the present study, in which the protein and ash contents were higher in breast meat and the fat content was higher in thigh meat. However, the moisture content of breast meat was lower than that of thigh meat in the present study (data not shown).

L-Carnitine content

The L-carnitine content of breast and thigh meat from KICs is presented in Table 3. The male breast meat from lines B and Y had significantly higher L-carnitine content than that from lines G, R, and W (p<0.05). Lines W and Y showed significantly higher L-carnitine content in female breast meat than did line R (p<0.05). In thigh meat from males, L-carnitine content was significantly higher in line Y than in lines R and W (p<0.05). Line Y also exhibited the highest L-carnitine content in female thigh meat compared with the other lines (p<0.05). Based on these results, the pooled data revealed a highly significant effect of line on the L-carnitine content of breast and thigh meat from KICs (p<0.0001). These data are consistent with a recent publication by Jayasena et al. (2015), which reported the existence of a genetic effect on L-carnitine content in chicken meats. The male breast meat in lines R and Y showed significantly higher L-carnitine content than the female breast meat, while there was no significant difference in L-carnitine content between genders in thigh meat. From the pooled data, it was found that gender had an effect on the L-carnitine content of meat from KICs (p<0.01); however, the difference in L-carnitine content between males and females (10.43 vs 9.99 mg/100 g meat) was small (data not shown). Previous studies have reported that gender is an influential factor for L-carnitine content in animal tissues. The skeletal muscles of male rats had higher L-carnitine content than those of female rats (Borum, 1978). Abuzaid (2010) found a higher L-carnitine content in beef from male Angus than in that from female Angus. Regarding meat type, thigh meat exhibited higher L-carnitine content than breast meat regardless of KIC line and gender. In addition, the effect of meat type on the L-carnitine content of meat from KICs was the largest among the effects of line, gender, and meat type. This result is consistent with a previous study in which the L-carnitine content was higher in thigh meat than in breast meat from a KIC (Jayasena et al., 2014). This result may be due to the differences in metabolic requirements for energy production between breast and thigh muscles in chickens. Breast and thigh muscles in chickens are composed of different muscle fiber types: I (slow-twitch oxidative red fiber), IIA (fast-twitch oxidative-glycolytic white fiber), and IIB (fast-twitch glycolytic white fiber).
Thigh muscle, which is a red muscle, predominantly consists of type I muscle fibers with relatively large quantities of mitochondria and myoglobin, while breast muscle, which is a white muscle, has a high ratio of type IIB muscle fibers and comparatively smaller quantities of mitochondria and myoglobin. Therefore, energy production in thigh muscle relies on aerobic metabolism in mitochondria, which requires L-carnitine as a carrier of fatty acids; this may result in the accumulation of L-carnitine in thigh muscle (Arslan et al., 2003; Ehrenborg and Krook, 2009). Shimada et al. (2004) also reported that the concentration of L-carnitine in red muscle was higher than in white muscle and suggested that oxygen metabolism and myofiber types were related to the L-carnitine concentration in muscle. In addition, Rigault et al. (2008) reported that fat content in beef showed a positive correlation with L-carnitine content, while there was no correlation between L-carnitine and moisture or protein content. Jayasena et al. (2014) suggested that the differences in L-carnitine content between breast and leg meat in chickens might be due to the differences in fat content. In the present study, the moisture and fat contents showed a positive correlation with L-carnitine content, while protein and L-carnitine contents were negatively correlated (Table 4). However, moisture and fat content did not show any correlation with L-carnitine content within breast or thigh meats from either gender. Protein content exhibited a negative correlation with L-carnitine content in the male breast meat; however, this correlation was inconsistent in the female breast meat and in the male and female thigh meat. Therefore, it is likely that the correlations of moisture, protein, and fat content with L-carnitine content in the pooled data were caused by the large differences in moisture, protein, fat, and L-carnitine content between breast and thigh meat. Based on this analysis, it seems that the fat content, as well as the moisture and protein contents, of chicken meat is not an influential factor for its L-carnitine content.

Betaine content

The betaine content of meat from KICs is shown in Table 5. The male breast meat from lines B and G contained significantly higher amounts of betaine than that from line W (p<0.05). The betaine content of female breast meat was highest in line R compared with lines W and Y (p<0.05). In male thigh meat, lines B and R showed significantly higher betaine content than line W (p<0.05). The female thigh meat from line R had a significantly higher betaine content than that from the other lines (p<0.05). From the pooled data, it was found that there was an apparent line effect on the betaine content of meat from KICs (p<0.0001). A similar result was found in a previous study that showed significantly different betaine levels in meat from different chicken breeds. A gender effect on the betaine content of meat from KICs was found in the pooled data (p = 0.0467). However, individual comparisons of betaine content between genders showed a significant difference only in thigh meat from line R, while the other lines in thigh meat and all lines in breast meat did not show significant differences. Based on the pooled data from KIC meat, it was found that betaine content was highly influenced by meat type (p<0.0001). The mean betaine content in breast and thigh meats from KICs was 8.37 and 22.11 mg/100 g meat, respectively (data not shown).
Previously, Jayasena et al. (2014) reported that chicken leg meat contained over a twofold higher amount of betaine than chicken breast meat. Patterson et al. (2008) found that chicken drumstick and thigh meat had high betaine and choline contents compared with breast meat. Betaine is synthesized by a two-step oxidation of choline, in which mitochondrial choline oxidase first catalyzes the production of betaine aldehyde, which is further oxidized by mitochondrial betaine aldehyde dehydrogenase to betaine (Dragolovich, 1994). Therefore, it is plausible that the higher choline content and greater number of mitochondria in thigh muscle compared with breast muscle of chickens result in a higher accumulation of betaine. In the present study, the correlation coefficients of betaine content with moisture, protein, and fat contents in meat from KICs were analyzed (Table 4). In the pooled data, betaine content was positively correlated with moisture and fat contents, while protein and betaine contents were negatively correlated. However, those correlations may be caused by the large differences in betaine, moisture, protein, and fat contents between breast and thigh meat, because there was no consistent correlation within individual meat types and genders. Therefore, we conclude that there is no correlation between betaine content and the moisture, protein, and fat contents of meat from KICs. Mahmoudnia and Madani (2012) reported that betaine acts as a methyl donor for the synthesis of L-carnitine. Indeed, we did find a positive correlation between betaine and L-carnitine contents in meat from KICs in the pooled data (Table 4). However, the correlations between betaine and L-carnitine contents were positive in breast meat and negative in thigh meat from males, while there were no correlations in females. Thus, we conclude that no correlation exists between the betaine and L-carnitine contents of meat from KICs. In the present study, the moisture, protein, fat, and ash contents of meats from the 5 lines of KIC were significantly different, but the differences were small. The L-carnitine and betaine contents, which can be considered positive nutritional factors with health benefits, differed significantly between the meats from the 5 KIC lines. To our knowledge, there have been no studies on the heritability of L-carnitine and betaine content in chicken meat. However, heritability of both compounds would not be surprising, because many endogenous compounds show heritability in animals (Mateescu et al., 2012). Therefore, we conclude that these data can be valuable in establishing selection strategies for developing a new chicken breed that can produce highly nutritious meat. However, an investigation of the heritability of L-carnitine and betaine content in chicken meat is warranted. In addition, a comparative analysis of the phenotype and genotype of each line is needed to clearly understand the characteristics of these five lines of KIC.

CONFLICT OF INTEREST

We certify that there is no conflict of interest with any financial organization regarding the material discussed in the manuscript.
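The quantification "using standard curves of each compound" described in the methods can be sketched as a linear fit of detector peak area against standard concentration, followed by inversion for the sample. All numbers below are hypothetical, not the study's calibration data:

```python
# Quantifying L-carnitine from HPLC peak areas via a standard curve:
# fit area = slope * concentration + intercept on the standards, then
# invert the fit for an unknown sample. Numbers are hypothetical.
import numpy as np

std_conc = np.array([5.0, 10.0, 20.0, 40.0])            # mg/100 g equivalents
std_area = np.array([1210.0, 2395.0, 4820.0, 9640.0])   # detector peak areas

slope, intercept = np.polyfit(std_conc, std_area, 1)    # linear calibration

sample_area = 3050.0
sample_conc = (sample_area - intercept) / slope
print(f"L-carnitine = {sample_conc:.2f} mg/100 g meat (approx.)")
```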
Using Kinect v2 to Control a Laser Visual Cue System to Improve the Mobility during Freezing of Gait in Parkinson's Disease

Different auditory and visual cues have been proven to be very effective in improving the mobility of people with Parkinson's (PwP). Nonetheless, many of the available methods require user intervention to activate the cues. Moreover, once activated, these systems provide cues continuously, regardless of the patient's needs. This research proposes a new indoor method for casting dynamic/automatic visual cues for PwP based on their head direction and location in a room. The proposed system controls the behavior of a set of pan/tilt servo motors and laser pointers based on the real-time skeletal information acquired from a Kinect v2 sensor. This produces an automatically adjusting set of laser lines that is always in front of the patient as a guideline for where the next footstep should be placed. A user interface was also created that enables users to control and adjust the settings according to their preferences. The aim of this research was to provide PwP with an unobtrusive, automatic indoor system for improving their mobility during a freezing of gait (FOG) incident. The results showed the feasibility of employing such a system, which does not rely on the subject's input, nor does it introduce any additional complexities to operate.

Introduction

Freezing of gait (FOG) is one of the most disabling symptoms of Parkinson's disease (PD), affecting sufferers' gait performance and locomotion. FOG is an episodic phenomenon that introduces irregularities in the initiation or continuation of a patient's locomotion and usually occurs in the later stages of PD, where patients' muscles cannot function normally and appear to be still when they are trying to walk [1-4]. This makes FOG one of the most intolerable symptoms: it affects PD sufferers not only physically but also psychologically, as it makes them almost completely dependent on others for their basic and daily tasks. Consequently, the patient's quality of life decreases, and the healthcare and treatment expenditures increase, as does the cost of the injuries caused [1]. It has been estimated that about 50% of PwP experience FOG incidents [5]. Moreover, it has been proven that visual and auditory cues can have a positive impact on a subject's gait performance during a FOG incident [6-8]. Visual cues such as laser lines can act as a sensory guidance trick providing an external trigger, which, in turn, can initiate movement [7]. There has been much research towards implementing apparatus and systems that can provide visual and auditory cues for PwP. In work done by Zhao et al. [9], a wearable system based on modified shoes was developed in order to cast a laser-based visual cue in front of PwP. The system consisted of a 3D-printed add-on that included a red laser line projector and pressure sensors that detect the stance phase of a gait cycle and turn the laser pointer on. The unit provided the option to adjust the distance between the laser light strip and the subject's foot for optimal effectiveness, depending on the user's preferences. The research provided a simple, yet effective approach towards providing visual cues for PwP with locomotion issues. Nonetheless, like any other approach, this too has some limitations, such as the constant need to carry the shoe add-on, the batteries needed for the device, charging the batteries, and remembering to switch them on.
In another attempt [10], researchers evaluated the effect of visual cues using two different methods: a subject-mounted light device (SMLD) and taped step-length markers. It was concluded that laser projections based on an SMLD have promising effects on PwP's locomotion and gait performance. The method required patients to wear an SMLD, which some patients might find inconvenient or even impractical in some situations. Moreover, SMLD systems have stability issues and steadiness difficulties due to the subject's torso movements during a gait cycle. As expected, the visual cues must be constantly enabled during a gait cycle, regardless of whether they are needed or not. In [11], although the SMLD method was employed, researchers added a 10-second on-demand option to the "constantly on" visual cue casting. This system was more sophisticated, consisting of a backpack containing a remotely controllable laptop, which made the subjects' mobility even more troublesome. In other attempts [12,13], a different approach was implemented by using virtual cues projected onto a pair of goggles visible only to the patient. In [14], the effects of real and virtual visual cueing were compared, and it was concluded that real transverse lines cast on the floor are more impactful than their virtual counterparts. Nonetheless, using virtual cueing spectacles (VCS) eliminates the shortcomings of other techniques, such as limitations in mobility, steadiness, and symmetry. VCS also have the advantage of being usable in external environments when the patient is out and about. Moreover, several research studies have been conducted using virtual reality (VR) to assess the possibility of VR integration in Parkinson's-related studies [15-20]. Nonetheless, as VR technology blocks patients' view and makes them unable to see their surroundings, its usage is limited to exercise-based rehabilitation games, FOG-provoking scenarios, or the assessment of patients' locomotion, rather than real-time mobility improvement using cues. Although effective to some extent, these attempts tend to restrict the user, either by forcing them to carry backpacks or wear vests containing electronics, or by making them rely on conventional approaches such as laser pointers attached to a cane [21] or laser add-ons for shoes. The hypothesis of this study, on the other hand, is to propose a different technique: casting parallel laser lines as a dynamic and automatic visual cueing system for PwP, based on a Kinect v2 and a set of servo motors, suitable for indoor environments. As Kinect has been proven to be a reliable data feed source for controlling servo motors [22,23], the Kinect camera was chosen as the real-time depth data feed for this study. This paper also examines the suitability of the Kinect v2 sensor for such purposes in terms of accuracy and response time. This research uses the subject's 3D Cartesian location and head direction as input for the servo motors to cast visual cues accordingly. This eliminates the need for user intervention or a trigger and, at the same time, the need to carry or wear any special equipment. Although this approach is limited to environments equipped with the proposed apparatus, it does not require any attachments to, or reliance on, PwP themselves, which can be beneficial in many scenarios.
The system comprises a Microsoft Kinect v2, a set of pan/tilt servo motors alongside a microcontroller based on the Arduino Uno, and two line-laser pointers. A two-line projection was chosen so that the second transverse laser line could be used to indicate a set area in which the next step has to land. The system was tested under different conditions, including a scene partially occluded by furniture to simulate a living room.

Methods

During the initial testing phase, 11 healthy subjects were invited, consisting of both males and females aged 24-31 years (mean 27, SD 2.34), with a mean height of 174.45 cm (68.68 in), SD 8.31 cm (3.27 in), ranging from 163 to 187 cm (64.17 to 73.62 in). They were asked to walk along predefined paths, 12 paths per subject, walking towards the camera and triggering a simulated FOG incident by imitating the symptom, while the Kinect camera was positioned at a fixed location. The subjects' skeletal data were captured and analyzed by the Kinect camera in real time. The software was written in C# using the Kinect for Windows SDK version 2.0.1410.19000. The room used for conducting the experiments contained different pieces of living-room furniture to mimic a practical use case of the device. This not only yields more realistic results but also tests the system in real-life scenarios where the subject is partially visible to the camera and not all the skeletal joints are being tracked. To test and compare the Kinect v2's accuracy in determining both vertical and horizontal angles according to the subject's foot distance to the Kinect camera and body orientation, eight Vicon T10 cameras (considered the gold standard) were also used to capture the subjects' movements and compare them with the movements determined by the Kinect. The Vicon cameras and the Kinect v2 captured each session simultaneously, while the frame rate of the recorded data from the Vicon cameras was down-sampled to match the Kinect v2 at approximately 30 frames per second. At a later stage, following ethical approval, 15 PwP were recruited (with the collaboration of Parkinson's UK) to test the system and provide feedback. This research was published separately in [22]. A more in-depth analysis and information with regard to this focus group can be found in [24].

Kinect RGB-D Sensor

Microsoft Kinect v2 is a time-of-flight (TOF) camera that functions by emitting infrared (IR) light onto objects; upon reflection of the light back to the IR receiver, it constructs a 3D map of the environment in which the Z-axis is calculated via the delay of the received IR light [25]. Kinect v2 introduced many features and improvements compared to its predecessor, such as 1080p and 424p resolution at approximately 30 frames per second for its RGB and depth/IR streams, respectively, as well as a wider field of view [26]. The ability to track 25 joints of six subjects simultaneously enables researchers to employ Kinect v2 as an unobtrusive human motion tracking device in different disciplines, including rehabilitation and biomedical engineering.

Angle Determination

The Kinect v2 was used to determine the subjects' location in a 3D environment and localize the subject's foot joints to calculate the correct horizontal and vertical angles for the servo motors. To determine the subject's location, Kinect skeletal data were used for 3D joint coordinate acquisition. The surface floor can be determined by using the vector equation of planes.
This is necessary to automate the process of calculating the Kinect's height above the floor, which is one of the parameters in determining the vertical servo angle:

Ax + By + Cz + D = 0, (1)

where A, B, and C are the components of a normal vector that is perpendicular to any vector in the given plane, and D is the height of the Kinect from the levelled floor; x, y, and z are the coordinates of the plane that locates the floor of the viewable area. A, B, C, and D are provided by the Kinect SDK once a flat floor is detected by the camera. For vertical angle determination, the subject's 3D foot coordinates were determined, and depending on which foot was closer to the Kinect in the Z-axis, the system selected that foot for further calculations. Once the distance of the selected foot to the camera was calculated, the vertical angle for the servo motor was determined using the Pythagorean theorem, as depicted in Figure 1. The subject's skeletal joints' distance to the Kinect on the Z-axis is defined in a right-handed coordinate system, where the Kinect v2 is assumed to be at the origin, with positive Z-axis values increasing in the direction of the Kinect's point of view. In Figure 1, a is the Kinect camera's height above the floor, which is the same as variable D from equation (1), and c is the hypotenuse of the right triangle, which is the subject's selected-foot distance to the Kinect camera in the Z-axis; θ is the calculated vertical angle for the servo motor. Note that we considered the position offsets in the X and Y axes between the Kinect v2 camera and the laser pointers/servo motors in order to achieve the most accurate visual cue projection. Our experiments showed that the Kinect v2 determines a joint's Z-axis distance to the camera by considering its Y-axis value; i.e., the higher a joint's Y-axis value relative to the camera's optical center, the greater its Z-axis distance to the camera. This indicates that, unlike the Kinect's depth space, the Kinect skeletal coordinate system does not calculate the Z-axis distance (Figure 1, variable c) in a plane perpendicular to the floor, and as a result the height of the points, in this case joints, is also taken into consideration. In the case of a joint being obstructed by an object, for example a piece of furniture, the obstructed joint's 3D Cartesian location was compensated and predicted using the "inferred" state enumeration, a built-in feature of the Kinect SDK. By implementing the "inferred" joint state, the joint's data were calculated and its location estimated based on the other tracked joints and its previously known location. Figure 2 shows the Kinect v2's accuracy in determining a subject's joint (left foot) distance to the camera in the Z-axis compared to a gold-standard motion capture device (Vicon T10). It was concluded that the Kinect v2's skeletal data acquisition accuracy was very close (98.09%) to the industry-standard counterpart. The random noise artifacts in the signal were not statistically significant and did not affect the vertical angle determination. The subject's body direction, which determines the required angle for the horizontal servo motor, can be obtained through the rotational changes of two of the subject's joints, the left and right shoulders. The subject's left and right shoulder joint coordinates were determined using the skeletal data and then fed to an algorithm to determine the body orientation as follows:

servo angle = 90 ± sin⁻¹(d / |shoulderA − shoulderB|), (2)

where d is the Z-axis distance difference to the camera between the subject's left and right shoulders (Figure 3) and |shoulderA − shoulderB| is the distance between the two shoulder joints. Once d was calculated, the angle for the horizontal servo motor can be determined by calculating the inverse sine, which yields θ. Depending on whether the subject is rotating to the left or to the right, the result is subtracted from or added to 90, respectively, as the horizontal servo motor should rotate in reverse in order to cast the laser lines in front of the subject accordingly.
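A minimal sketch of these two angle computations is given below. The original implementation was in C#; this Python transcription is illustrative only, and the clamping, sign convention, and example coordinates are assumptions rather than the authors' code:

```python
# Vertical and horizontal servo angles from Kinect-style skeletal
# coordinates (x, y, z) in meters, following the geometry described
# above. Illustrative only; the original system was written in C#.
import math

def vertical_angle(camera_height, foot_z):
    """Tilt angle from the right triangle of Figure 1: a = camera
    height (D from equation (1)), c = selected foot's Z distance."""
    return math.degrees(math.asin(min(1.0, camera_height / foot_z)))

def horizontal_angle(shoulder_a, shoulder_b):
    """Pan angle from equation (2): 90 degrees when facing the camera,
    offset by the inverse sine of the shoulders' Z-axis difference d
    over the distance between the shoulder joints."""
    d = abs(shoulder_a[2] - shoulder_b[2])
    separation = math.dist(shoulder_a, shoulder_b)
    theta = math.degrees(math.asin(min(1.0, d / separation)))
    # Assumed sign convention: rotate opposite to the subject's turn
    # so the lines stay in front of the body.
    return 90 + theta if shoulder_a[2] < shoulder_b[2] else 90 - theta

print(vertical_angle(2.0, 3.5))   # e.g. camera 2 m up, foot 3.5 m away
print(horizontal_angle((-0.2, 0.4, 3.1), (0.2, 0.4, 3.3)))
```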
FOG Detection

In previous studies, the authors implemented the process of FOG detection in [27] using gait cycle and walking pattern detection techniques [26,28]. Once the developed system detects a FOG incident, it turns the laser pointers on and starts determining the appropriate angles for both the vertical and horizontal servo motors. After a user-defined waiting threshold has passed, or the characteristics of the FOG incident have disappeared, the system returns to its monitoring phase by turning off the laser projection and the servo motor movements. Figure 4 shows the GUI of the developed system application. The left image shows a Parkinson's disease patient imitator during his FOG incident. The right window shows that the subject is being monitored and that his gait information is being displayed to healthcare providers and doctors. As can be seen in the "FOG Status" section displayed in the bottom rectangle, the system has detected a FOG incident and activated the laser projection system to be used as a visual cue stimulus. The circled area shows the projection of the laser lines in front of the subject, according to the distance from his feet to the camera and his body direction. The developed system also allows further customization, including adjustment of the visual cue distance in front of the patient.

Serial Connection

A serial connection was needed to communicate with the servo motors controlled by the Arduino Uno microcontroller. The signal transmitted by the developed application needed to be distinguished at the receiving point (the Arduino microcontroller) so that each servo motor could act according to its intended angle and signal. We developed a multi-packet serial data transmission technique similar to [29]. The data were labeled at the transmitter side so that the microcontroller could distinguish and categorize each received packet and send the appropriate signal to each servo motor (a sketch of such labeled packets is given after the prototype description below). The system loops through this cycle of horizontal angle determination every 150 ms. This time delay was chosen because the horizontal servo motor does not need to be updated in real time, given that a subject is unlikely to change direction over very short intervals. This ensures less jittery and smoother movement of the horizontal laser projection. The vertical servo motor movement was less prone to jitter, as the subject's feet are always visible to the camera as long as they are not obstructed by an object.

Design of the Prototype System

A two-servo system was developed using an Arduino Uno microcontroller and two class-3B, 10 mW, 532 nm green line-laser projectors, as shown in Figure 5(a); green laser lines have been proven to be the most visible among the laser colors used as visual cues [30]. An LCD display was also added to the design, showing all the information regarding the vertical and horizontal angles to the user. Figure 5(a) shows the laser line projection system attached to the tilt/pan servo motors, Figure 5(b) shows the top view of the prototype system, including the wiring and voltage regulators, and Figure 5(c) shows the developed prototype system used in the experiment from different angles, including the Kinect v2 sensor, pan/tilt servo motors, laser pointers, and the microcontroller.
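The labeled multi-packet scheme can be sketched as follows on the PC side. This is an illustrative Python sketch only (the original sender was the C# application); the "V"/"H" labels, framing, baud rate, and port name are assumptions, not the authors' exact protocol:

```python
# One labeled packet per servo so the Arduino can route each angle to
# the matching motor. Labels, framing, baud rate, and port name are
# assumed for illustration.
import time
import serial  # pyserial

port = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def send_angle(label: str, angle: float) -> None:
    """Send one packet: a one-character label ('V' vertical,
    'H' horizontal) followed by the angle and a newline."""
    clamped = max(0, min(180, int(round(angle))))
    port.write(f"{label}{clamped}\n".encode("ascii"))

while True:
    send_angle("V", 34.8)    # vertical servo
    send_angle("H", 116.6)   # horizontal servo
    time.sleep(0.150)        # 150 ms between horizontal updates, as above
```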
Design of the Prototype System. A two-servo system was developed using an Arduino Uno microcontroller and two class-3B 10 mW 532 nm wavelength green line laser projectors, as shown in Figure 5(a); green laser lines have been shown to be the most visible among the laser colors used as visual cues [30]. An LCD display was also added to the design, showing all the information regarding the vertical and horizontal angles to the user. Figure 5(a) shows the laser line projection system attached to the tilt/pan servo motors. Figure 5(b) shows the top view of the prototype system, including the wiring and voltage regulators. Figure 5(c) shows the developed prototype system used in the experiment at different angles, including the Kinect v2 sensor, pan/tilt servo motors, laser pointers, and the microcontroller.

Results

Figure 6 demonstrates the calculated vertical angle based on the subjects' foot/joint distance to the Kinect camera in the Z-axis. The right foot has been omitted from the graph for simplicity. As Figure 6 demonstrates, the system provided highly accurate responses relating the subject's foot distance to the camera in the Z-axis to the vertical servo motor angle. Subjects were also asked to rotate their bodies in front of the Kinect camera to test the horizontal angle determination algorithm and, as a result, the horizontal servo motor functionality. Figure 7 shows the result of the calculated horizontal angle using equation (2) for the left and right directions. Figure 7 shows how the system reacts to the subject's body orientation. Each subject was asked to face the camera in a stand-still position while rotating their torso to the left and to the right in turns. As mentioned before, the horizontal angle determination proved to be more susceptible to noise than the vertical angle calculation. This is due to the fact that as the angle increases beyond 65 degrees, the shoulder farthest away becomes obstructed by the nearer shoulder, and as a result, the Kinect has to compensate by approximating the position of that joint. Nonetheless, this did not have any impact on the performance of the system. Overall, the entire setup, including the Kinect v2 sensor, tilt/pan servo motors, laser projectors, microcontroller, and LCD, but excluding the controlling PC, costs about £137.00, making it much more affordable than other, less capable alternatives available on the market.

Discussion

A series of pan/tilt servo motors has been used alongside laser line projectors to create a visual cueing system which can be used to improve the mobility of PwP. The use of the system eliminates the need to carry devices, helping patients to improve their mobility by providing visual cues. The implemented system has the ability to detect FOG using only the Kinect camera, i.e., fully unobtrusively, and to provide dynamic and automatic visual cue projection based on the subject's location without the patient's intervention, as opposed to the other methods mentioned. It was observed that this system can provide an accurate estimation of the subject's location and direction in a room and cast visual cues in front of the subject accordingly. The Kinect's effective coverage distance was observed to be between 1.5 and 4 meters (59 and 157.48 inches) from the camera, which is within the range of the area of most living rooms, making it an ideal device for indoor rehabilitation and monitoring purposes. To evaluate the Kinect v2's accuracy in calculating the vertical and horizontal angles, a series of eight Vicon T10 cameras was also used as a gold standard. Overall, the system proved to be a viable solution for an automatic and unobtrusive visual cue apparatus. Nonetheless, there are some limitations to this approach, including its indoor-only nature and the fact that it requires the whole setup, including the Kinect, servos, and laser projectors, to be installed in the most communal areas of a house, such as the living room and the kitchen. Additionally, during the experimentation, the Kinect's simultaneous subject detection was limited to only one person.
Nevertheless, the Kinect v2 is capable of detecting six simultaneous subjects in a scene. However, in order to work properly, the laser projection system should only aim at one person at a time. The developed system has the ability either to lock on to the first person who enters the coverage area or to distinguish the real patient based on locomotion patterns and ignore other people. Despite this, the affordability and ease of installation of the system would still make it a desirable solution should more than one setup need to be placed in a house. Moreover, the use of a single Kinect limits the system's visibility and visual cue projection as well.

Conclusion

The results of this research showed the possibility of implementing an automatic and unobtrusive FOG monitoring and mobility improvement system that is reliable and accurate at the same time. The system's main advantages, such as real-time patient monitoring, improved locomotion and patient mobility, and unobtrusive and dynamic visual cue projection, make it, overall, a desirable solution that can be further enhanced in future implementations. As a next step, one could improve the system's coverage with a series of these implemented systems installed in PwP's houses to cover most of the communal areas, or the areas where a patient experiences FOG the most (e.g., narrow corridors). One could also investigate the possibility of attaching such systems to a circular rail on a ceiling that can rotate and move according to the patient's location; this removes the need for an extra setup in each room, as the system can then cover additional areas. Moreover, by coupling the system with other available solutions such as laser-mounted canes or shoes, patients could use the implemented system when they are at home, while using the other methods outdoors. This requires integration at different levels, such as a smartphone application and visual cues, in order for these systems to work as intended. Finally, the system's form factor can be made somewhat smaller by removing the Kinect's original casing and embedding all the equipment in a customized 3D-printed enclosure, which would make it more suitable for commercial production.

Data Availability

The gait analysis data used to support the findings of this study are restricted by the Brunel University London Ethics Committee in order to protect patient privacy. Data are available from CEDPS-Research@brunel.ac.uk for researchers who meet the criteria for access to confidential data.
2019-03-28T13:02:31.450Z
2019-02-20T00:00:00.000
{ "year": 2019, "sha1": "92349fd74420168451ea60b37ac2a7cf973780d4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2019/3845462", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "92349fd74420168451ea60b37ac2a7cf973780d4", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
429104
pes2o/s2orc
v3-fos-license
Role of interfaces on the stability and electrical properties of Ge2Sb2Te5 crystalline structures

GeSbTe-based materials exhibit multiple crystalline phases, from disordered rocksalt, to rocksalt with ordered vacancy layers, to the stable trigonal phase. In this paper we investigate the role of the interfaces in the structural and electrical properties of Ge2Sb2Te5. We find that the site of nucleation of the metastable rocksalt phase is crucial in determining the evolution towards vacancy ordering and the stable phase. By properly choosing the substrate and the capping layers, nucleation-site engineering can be achieved, thus promoting or preventing the vacancy ordering in the rocksalt structure or the conversion into the trigonal phase. The vacancy ordering occurs at lower annealing temperatures (170 °C) for films deposited in the amorphous phase on silicon (111) than on a SiO2 substrate (200 °C) or in the presence of a capping layer (330 °C). The mechanisms governing the nucleation have been explained in terms of interfacial energies. Resistance variations of about one order of magnitude have been measured upon transition from the disordered to the ordered rocksalt structure and then to the trigonal phase. The possibility to control the formation of crystalline phases characterized by a marked resistivity contrast is of fundamental relevance for the development of multilevel phase change data storage.

In many areas of science and technology, such as the production of semiconductors, but also of pharmaceuticals, as well as the formation of biominerals, it is essential to control crystallization processes. These usually occur via nucleation and growth and, in most practical circumstances, crystallization starts with heterogeneous nucleation at a foreign surface [1]. Despite its widespread occurrence, mechanistic understanding of the role of a surface in heterogeneous nucleation is limited. However, to control crystallization, the contribution of different surface properties to the effectiveness of a surface in inducing nucleation must be elucidated. Indeed, the presence of interfaces can modify the nucleation process through various means, such as favourable interactions with the crystallizing material and lattice match between the substrate and the compound to be crystallized. Here we study the effect of the interfaces on the crystallization of the metastable rocksalt phase in amorphous GeSbTe (GST) thin films, to explore the correlation between the produced microstructure and the subsequent path followed for the conversion from the rocksalt phase to the stable phase, with trigonal structure. Thanks to the ability to rapidly switch between two phases with high electrical and optical contrast, GST alloys belonging to the GeTe-Sb2Te3 pseudo-binary line are optimal candidates for non-volatile phase change memories [2-4] as well as for electronic displays [5] and ovonic threshold switches [2,6]. The phases used as logic states are usually the amorphous phase and the metastable phase with rocksalt structure. It is well established in the literature that in the rocksalt structure Te occupies the anion sites, while the cation sites are randomly occupied by Ge, Sb and vacancies [7]. Recently it has been shown that vacancy ordering can be induced in the metastable phase [8-10], giving rise to an ordered rocksalt phase in which the electronic transport is modified, changing from the insulating transport typical of the rocksalt phase with random vacancy distribution to a metallic behaviour [8].
Currently, multi-level storage [11,12] has been realized by controlling the fraction of the crystalline [11] or amorphous [13] regions within a cell. The vacancy ordering could be used for multi-level bit storage by employing amorphous, disordered rocksalt, and ordered rocksalt or trigonal phases, to make three different logic states exhibiting contrast in the resistance value. However, in order to develop devices according to this new approach, a detailed understanding of the growth conditions and/or interfaces that can affect the vacancy ordering, as well as of the relationship between the vacancy ordering and the electrical properties, is required. In this paper we show that the nucleation of the metastable phase plays a relevant role, since both the process of vacancy ordering and the electrical conductivity are extremely sensitive to the interfaces and to the microstructure of the metastable rocksalt phase. We also show that three clearly distinct resistivity levels are associated with the crystalline structures with different degrees of order.

Results and Discussion

Surface effects on the nucleation of the rocksalt phase. The phase change films were deposited in the amorphous phase on different substrates: Si(111) or SiO2 deposited on a Si(100) wafer. The films were then converted into the metastable rocksalt structure by ex-situ thermal annealing. The formation of the metastable structure, as well as the vacancy ordering, can be followed by XRD and Raman spectroscopy. Figure 1(a) and (b) show the Raman spectra and the XRD patterns of amorphous GST deposited on Si(111) and annealed at different temperatures. The crystalline phase formed at the lowest temperature (110 °C) is the disordered rocksalt phase, characterized by a large peak in the Raman spectrum at about 155 cm−1. Upon annealing at 170 °C the Raman spectrum appears largely modified, with the peak at 155 cm−1 completely absent and the appearance of a sharp peak at 178 cm−1. This peak, typical of the trigonal phase, is associated with the A1g Raman mode [14,15], and it is also observed in the sample annealed at 270 °C. XRD spectra acquired in Bragg-Brentano configuration, shown in Fig. 1(b), reveal that the sample annealed at the highest temperature is indeed in the trigonal phase and is highly textured, with the {0001} planes parallel to the surface. The sample annealed at 170 °C instead exhibits the diffraction pattern typical of the cubic phase, but with a small and broad peak at 47°, which has been ascribed to the presence of ordered vacancy layers [8]. Figure 2(a) shows a TEM micrograph of the film on Si(111) after annealing at 170 °C. It is polycrystalline, but highly textured with (111) planes parallel to the surface. Since the A1g Raman peak is related to the oscillation of Te atoms close to the van der Waals gaps [15,16], the presence of such a peak in a sample with cubic structure and ordered vacancy layers might seem unexpected. However, we find here that this oscillation is representative not simply of the ordering of the vacancy layers, but also of the complete modification of the Te bonds. Indeed, in the sample annealed at 170 °C the stacking of the planes is that of the rocksalt phase (ABCABC), as shown by the red spots in the TEM micrograph of Fig. 2(b). The width of the vacancy layers is double the distance between two adjacent planes in the rocksalt structure (indicating that it is a vacancy layer, not a van der Waals gap).
However, as shown in the intensity profile, the distance between the cationic planes adjacent to the Te plane at the vacancy layer is reduced to about 0.14 nm. Such a value is less than the distance (0.17 nm) between two planes in the disordered rocksalt structure and suggests a situation more similar to the trigonal phase, therefore indicating that a modification of the Te bonds has occurred. The microstructure of the GST film deposited on SiO2 appears very different. In this case annealing at 110 °C for 1 h is not sufficient to complete the crystallization of the rocksalt phase. Figure 3 shows the Raman spectra acquired for GST films on SiO2, also covered by a thin ZnS:SiO2 cap layer. Films annealed up to 200 °C exhibit the typical spectrum of the disordered rocksalt phase, with a peak at about 155 cm−1. An intermediate situation is represented by the sample annealed at 250 °C, in which the peak at 155 cm−1, typical of the disordered rocksalt phase, disappears. Only above 250 °C is the A1g peak clearly detectable. At higher temperatures, up to 350 °C, the contribution at about 155 cm−1 continues to decrease and, according to previous studies on the effect of ordering on the trigonal structure, this could correspond to the progressive increase of Sb occupation at the cationic planes close to the Te atoms at the van der Waals gaps [17,18]. Dark-field STEM analysis of the sample annealed at 250 °C, shown in Fig. 4, reveals that it has a rocksalt structure with ordered vacancy layers: the stacking is that of the rocksalt phase, as shown by the red spots, and the distance between the Te layers close to the vacancy layer (blue dashed lines) and the first cation plane is lowered only in some cases (red dashed lines). According to the STEM intensity, and in very good agreement with the Raman spectrum, the bonding of the Te atoms close to the vacancy layers is not completely modified, indicating that the vacancy layers are still not completely empty. It is known in the literature [19,20] that in GST films deposited on a SiO2 substrate the nucleation of the rocksalt phase is heterogeneous, as indicated by the low activation energy barrier for the nucleation of a critical nucleus, ΔG* (0.3 eV). It has also been shown by in situ TEM analyses that the nucleation starts at the top surface of the amorphous film [21,22] and is followed by a second heterogeneous nucleation regime, with the formation of grains also at the GST/SiO2 interface. In the case of GST deposited on Si(111), the presence of a strong texturization with the (111) plane parallel to the surface clearly indicates that the nucleation of the GST rocksalt structure has occurred preferentially at the interface with the Si substrate. In order to avoid the competition between different nucleation seeds, here we have intentionally left the GST film uncovered and performed the annealing for crystallization in vacuum. In this way we have obtained nucleation site control and engineering [23]. In classical nucleation theory, heterogeneous nucleation on a substrate reduces the free energy barrier with respect to the homogeneous case by a geometric factor f(θ) that depends on the contact angle θ between the nucleus and the substrate, ΔG*het = f(θ)·ΔG*hom. Therefore, the smaller θ is, the greater the affinity between the nucleus and the substrate in the crystallizing medium and the lower the free energy barrier for heterogeneous nucleation [24]. Figure 5(b) shows the function f(θ), determining the reduction of the nucleation barrier due to heterogeneous nucleation, as a function of the contact angle. In Fig. 5(c) and (d) the heterogeneous nucleation sites and their evolution as a function of temperature are shown for the Si(111) and SiO2 substrates, respectively.
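As a worked illustration of the barrier reduction plotted in Fig. 5(b), the spherical-cap form of f(θ) from classical nucleation theory can be evaluated directly; this is a sketch of the standard textbook expression, not code from the study:

```python
import math

def cnt_barrier_factor(theta_deg: float) -> float:
    """Geometric factor f(theta) for a spherical-cap nucleus on a flat
    substrate: f = (2 + cos t)(1 - cos t)^2 / 4, so that
    DeltaG*_het = f(theta) * DeltaG*_hom."""
    c = math.cos(math.radians(theta_deg))
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

for theta in (30, 90, 180):
    print(theta, round(cnt_barrier_factor(theta), 3))
# f(90) = 0.5: the barrier is halved; f(180) = 1: the substrate does not
# help at all; small theta (good wetting) makes nucleation nearly barrierless.
```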
According to our observations, some rough conclusions can be drawn. However, this approach is limited by the assumptions of classical nucleation theory: it assumes that the nucleus is cap-shaped rather than multifaceted, since the microscopic interfacial free energies are assumed to be the same. In reality, the surface energy of a crystal has a pronounced dependence on the orientation of the surface. For a grain with rocksalt structure, based on the broken-bond model, the surface with the lowest surface energy is the (100). The (111) surface is instead characterized by the highest number of broken bonds and therefore has a higher surface energy.

Surface effects on vacancy ordering and conversion to the trigonal phase. Engineering of the nucleation sites does not only affect the microstructure and the surface orientation of the rocksalt phase; it also has a strong effect on the ordering of the vacancy layers. This ordering usually occurs at temperatures in the range between the (disordered) rocksalt crystallization temperature and the temperature at which the transition to the trigonal phase occurs. Figure 6 shows the microstructure evolution of the film deposited on SiO2 upon annealing at different temperatures, from 200 °C to 350 °C. At 200 °C the material is in the disordered rocksalt phase and exhibits randomly oriented fine grains (Fig. 6(a) and (b)). Upon annealing at 315 °C, as shown in Fig. 6(c), the rocksalt phase with either ordered or disordered vacancies, as well as the trigonal phase, can be detected. The vacancy ordering process in the rocksalt phase occurs on the {111} planes [25,26]. In a single grain these planes form a tetrahedron. The intersection of two such {111} planes at an angle of 70.5° is shown in the TEM cross-section micrograph of Fig. 7. The situation may be similar to the formation of stacking fault tetrahedra in metals with face-centered cubic structure under cold-work plastic deformation, quenching experiments from temperatures close to the melting point, or irradiation [27,28]. However, only one of the {111} planes with ordered vacancies would be useful for the conversion of the material into the trigonal structure. The other three planes should be annealed out. Figure 6(c) shows that grains with different orientations start to align their vacancy layers, and through this process much larger grains with trigonal structure are obtained. This process is completed upon annealing at 350 °C, after which the film is completely converted into the trigonal phase. Although polycrystalline, the film exhibits a marked texturing, with the {0001} planes parallel to the surface. Figure 6(d) shows that in the trigonal phase the grain size is larger, and Fig. 6(e) is representative of the typical grain orientation. The theory for the equilibrium shape of crystals determined by the difference in surface energy, based on the broken-bond model, is again illuminating in explaining the growth of trigonal-structured Ge2Sb2Te5. As shown in ref. 29, the (0001), (1-103), and (1-106) planes have low surface energy, with (0001) < (1-106) < (1-103). Since the surface energy depends on the number of atoms in each plane, this scheme is typical of the rhombohedral stacking. Indeed, also for trigonal GST a preferential orientation of the (0001) plane at the surface has often been reported, even for samples deposited on a SiO2 substrate.
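The 70.5° angle between intersecting {111} planes quoted above follows from the cubic lattice geometry alone; a short numerical check (illustrative names, cubic symmetry assumed):

```python
import numpy as np

def interplanar_angle(hkl1, hkl2):
    """Angle (degrees) between two lattice planes of a cubic crystal,
    obtained from the dot product of their plane normals."""
    n1, n2 = np.asarray(hkl1, float), np.asarray(hkl2, float)
    cos_ang = n1 @ n2 / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

print(round(interplanar_angle((1, 1, 1), (1, -1, 1)), 1))  # 70.5
```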
Going back to the sample deposited on Si(111), we observe that it is already aligned with a (111) plane parallel to the surface, even in the rocksalt phase. This situation facilitates the vacancy ordering, which occurs at a much lower temperature (170 °C) compared to the randomly oriented sample (250 °C). Considering the equivalence between the {111} planes of the rocksalt structure and the {0001} planes of the trigonal one, it is clear that the presence of {111} rocksalt planes parallel to the surface may reduce the atomic movements required to reach the configuration corresponding to the minimum surface energy, i.e., with the trigonal (0001) plane at the surface, thus facilitating the conversion to the trigonal phase. Texturization starting from Si(111) is therefore also advantageous for the vacancy ordering and the transition to the trigonal structure. In the case of a silicon oxide substrate, instead, several {111} planes with ordered vacancies, not necessarily parallel to the surface, may form. In order to reach the stable configuration with trigonal (0001) planes at the surface, more atomic planes need to be modified.

Surface effects on the electrical properties. The electrical properties of GST are strongly dependent on the ordering of the crystalline structure, and this fact is crucial for developing phase change memories based on the ordering of the structure, such as, for example, interfacial phase change memories, or for multi-level storage. The trigonal phase is known to have a metallic behaviour, while the disordered rocksalt structure is an insulator [30]. It has been shown that a transition from an insulating to a metallic behaviour (MIT) may be obtained by thermal annealing either in the trigonal structure (as shown for GeSb2Te4) [30] or in the rocksalt structure [8]. In particular, it has been shown that the MIT in the rocksalt structure is due to the ordering of the vacancy layers [8]; conversely, by introducing disorder in the trigonal phase through ion irradiation, a reverse metal-to-insulator transition can be observed, driven by disordering of the vacancy layers [16]. Here we find that the resistivity, as well as its behaviour as a function of temperature, which is modulated by the vacancy ordering and by the subsequent conversion to the trigonal structure, is strongly dependent on the interfaces. Figure 8 shows the resistivity as a function of annealing temperature as measured on samples with different interfaces: on SiO2 without a capping layer, on SiO2 with a ZnS:SiO2 capping layer, and on Si(111) without capping. The sample on Si(111) exhibits the lowest resistivity, probably due to the larger grain size. Ordering of the vacancy layers occurs at lower temperature and induces the MIT at 170 °C. It is important to note that, although the resistance values and temperature conditions may be quite different for different interfaces, in all of the samples the MIT is observed to occur at a resistivity around 5 mOhm·cm or below, i.e., at the maximum resistivity for a metal according to the Anderson model, as reported in ref. 30. In particular, it is shown in ref. 30 that the MIT occurs when the mobility edge crosses the Fermi level. According to this description, in films with rocksalt structure or with a mixed rocksalt-trigonal phase, the activation energy governing the temperature dependence of the resistivity represents the distance between the mobility edge and the Fermi level (the mobility gap). The energy values reported in Fig. 8
indicate the mobility gaps measured in the range 40-100 °C. In the case of randomly oriented rocksalt grains, obtained on a SiO2 substrate without a capping layer, the resistance rapidly decreases as a function of the annealing temperature, and the vacancy ordering occurs at about 200 °C, as indicated by the disappearance of the peak at 155 cm−1 in the Raman spectrum (not shown). However, in this sample the vacancy ordering appears to be a process competing with the formation of the trigonal phase, which forms at about 250 °C, not very different from the case of the Si(111) substrate. The mobility gap is 75 meV in the disordered rocksalt phase, formed at 140 °C, and it monotonically decreases as the degree of order increases. The MIT occurs at about 200 °C, for a mobility gap lower than 30 meV, indicating that the mobility edge crosses the Fermi level within kT at room temperature (≈26 meV). In the sample with the ZnS:SiO2 capping layer the nucleation of the (disordered) rocksalt phase occurs at the highest temperature, as do the vacancy ordering and the conversion to the trigonal structure. The MIT is observed at about 330 °C, and the conversion into the trigonal phase then occurs at about 350 °C. These results indicate that the capping layer inhibits the nucleation at the top surface. The nucleation of the rocksalt phase may therefore occur at the interface with the SiO2, which is characterized by a higher surface energy. For the samples on Si(111) the mobility gap cannot be evaluated in the temperature range 40-100 °C, since it is affected by the Si conductivity, as well as by the onset of crystallization, occurring at temperatures around 100 °C. Low-temperature measurements [8] show a weak temperature dependence of the resistivity even in the case of the disordered rocksalt phase. Nevertheless, in all the investigated samples, depending on the degree of ordering of the structure, three distinct resistance values can be distinguished, with a difference of about one order of magnitude, by changing from the disordered to the ordered rocksalt and then to the trigonal phase.

Conclusions

The role of the interfacial energy in the structure and electrical properties of Ge2Sb2Te5 has been investigated. We have explored the correlation between the sites of nucleation of the rocksalt phase and the subsequent path followed for the conversion into the stable phase, with trigonal structure. This work evidences how the nucleation of the metastable phase, and consequently its microstructure and grain orientation, is a crucial factor in determining the ordering of the vacancy layers and the evolution towards the stable trigonal phase. A decrease of about one order of magnitude in the resistivity has been observed upon vacancy layer ordering, and a further decrease has been measured upon transition to the trigonal phase. Therefore our data show that it is possible (i) to obtain nucleation-site engineering by properly choosing the substrate and the cap layer; (ii) to clearly distinguish three different resistivity levels, depending on the degree of order of the crystalline structure.

Methods

Sample preparation. Ge2Sb2Te5 films were deposited in the amorphous phase either on Si(111) or on a SiO2 substrate. Samples on Si(111) were deposited by molecular beam epitaxy (MBE), in a system equipped with separate Ge, Sb, and Te effusion cells, and then annealed ex-situ at several temperatures by means of rapid thermal annealing. Ge2Sb2Te5 films on SiO2 were deposited by sputtering on silicon oxide at room temperature, using a single stoichiometric target. Some samples were capped with a 10 nm thick ZnS:SiO2 layer.
Annealing treatments were then performed at several temperatures in the range 150-350 °C for 30 min in a vacuum furnace. The film thickness was 50 nm for the sputtered films and 25 nm for the MBE films. The specimens for TEM analysis were prepared by standard cross-sectional mechanical polishing followed by Ar+ ion milling at ≈ −100 °C, using a Gatan PIPS II system. The ion energy ranged from 2.0 keV down to 0.1 keV in order to avoid sample amorphization and damage during thinning.

HAADF STEM. A JEOL ARM200F cold-FEG condenser Cs-corrected STEM/TEM operating at 200 kV was used to obtain high-resolution (HR) micrographs of the samples. The HAADF STEM images were obtained with a convergence semi-angle of 33 mrad and a nominal point resolution of 0.68 Å. We operated at a very high dark-field detector inner semi-angle (83 mrad), at which the scattering cross-section is well approximated by the Rutherford formula [31], predicting an intensity roughly proportional to Z². In some cases, as indicated, we also used 42 mrad. HAADF STEM, compared to conventional TEM, is almost free of delocalization phenomena at this high detection semi-angle, also because of the largely incoherent electron scattering. Referring to the trigonal cell, the film was observed mainly along the <11-20> direction in order to directly analyze the stacking sequence of the atomic planes along the c-axis. The HAADF micrographs were obtained using a dwell time per pixel of 40 μs and an electron beam current of about 50 pA. This low value should avoid any relevant artifact.

Raman spectroscopy. Raman spectra were collected with a Horiba Jobin Yvon HR800 system equipped with a 633 nm HeNe laser. In order to avoid heating of the sample, the power of the laser was kept below 1 mW, with a laser spot diameter of about 4 μm. The spectral window is 320 cm−1 and the resolution 0.2 cm−1. Each spectrum was acquired using 3 accumulations, each with a collection time of 40 s.

X-ray diffraction. Samples were characterized by means of ex-situ X-ray diffraction (XRD), using a PANalytical X'Pert PRO MRD diffractometer with a Ge(220) hybrid monochromator, employing Cu Kα1 radiation (λ = 1.540598 Å). Specular ω−2θ scans were performed in double-axis mode in order to access the growth direction of the films, in the range 10°-110°, with a step of 0.02° and an integration time of 2.5 s.

Electrical measurements. The electrical properties were studied by measuring the sheet resistance with a four-point probe; its temperature dependence was evaluated using a Temptronics thermal chuck, in the range from 20 °C to 100 °C. For the resistance measurements an HP4156B parameter analyzer was employed.
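As a rough illustration of how a mobility gap is extracted from such four-point-probe data, one can assume activated transport, ρ ∝ exp(Ea/kT), and fit ln ρ against 1/kT over the 20-100 °C range; the resistivity values below are invented for the example and are not measured data:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def mobility_gap_ev(temps_c, rho):
    """Fit ln(rho) = ln(rho0) + Ea/(kT) and return Ea (eV)."""
    t_k = np.asarray(temps_c, float) + 273.15
    x = 1.0 / (K_B * t_k)                # 1/kT in eV^-1
    y = np.log(np.asarray(rho, float))
    slope, _ = np.polyfit(x, y, 1)       # slope is the activation energy
    return slope

temps = [20, 40, 60, 80, 100]            # degC
rho = [12.0, 10.1, 8.7, 7.6, 6.7]        # mOhm*cm, illustrative only
print(round(mobility_gap_ev(temps, rho) * 1e3), "meV")  # ~ 69 meV
```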
2018-04-03T03:00:59.104Z
2017-06-01T00:00:00.000
{ "year": 2017, "sha1": "7729881743efaa203a29a89c1318f45fdcabf9f4", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-017-02710-3.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7729881743efaa203a29a89c1318f45fdcabf9f4", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
242060764
pes2o/s2orc
v3-fos-license
On the need of bias adjustment for more plausible climate change projections of extreme heat

The assessment of climate change impacts in regions with complex orography and land-sea interfaces poses a challenge related to shortcomings of global climate models. Furthermore, climate indices based on absolute thresholds are especially sensitive to systematic model biases. Here we assess the effect of bias adjustment (BA) on the projected changes in temperature extremes, focusing on the number of annual days with maximum temperature above 35°C. To this aim, we use three BA methods of increasing complexity (from simple scaling to empirical quantile mapping) and present a global analysis of raw and BA CMIP5 projections under different global warming levels. The main conclusions are (1) BA amplifies the magnitude of the climate change signal (in some regions by a factor of 2 or more), achieving a more plausible representation of future heat threshold-based indices; (2) simple BA methods provide similar results to more complex ones, thus supporting the use of simple and parsimonious BA methods in these studies.

Heat extremes are very likely to be more frequent and intense in the future (Seneviratne et al., 2021), mainly as a direct consequence of the increase in mean temperature (Fischer & Schär, 2010; Schär et al., 2004). Global climate models (GCMs) are the fundamental tools producing future climate projections for impact and adaptation studies. However, uncertainties remain for key large-scale processes (see e.g., Fernandez-Granja et al., 2021) and sub-grid scale processes, which are often misrepresented due to the coarse resolution of GCMs (e.g., Maraun, 2016). This is particularly true for regions with complex orography, intricate coastlines and/or small islands (e.g., Karl et al., 1999; Peterson et al., 2001; Sanjay et al., 2017), leading to large uncertainties and biases for extreme events. Absolute threshold-based temperature indices are largely sensitive to systematic model biases, and therefore they cannot be reliably calculated from raw GCM outputs. Bias adjustment (BA) methods are often used to correct specific statistical properties and reduce these biases (see e.g., Li et al., 2020; Maraun, 2016; Matthews et al., 2017; Teutschbein & Seibert, 2012). A proper application of BA (Ehret et al., 2012; Maraun et al., 2017) provides an improved and more robust signal (Dosio, 2016) through the reduction of the multimodel ensemble spread (see e.g., Zhao et al., 2015), by placing all models on an equal footing, at the expense of additional uncertainty related to the BA method (Casanueva et al., 2020a). Previous studies have shown the high sensitivity of threshold-based index projections to the BA method, although these are mostly limited to regional models and/or regional spatial scales (Ahmed et al., 2013; Dong Dosio, 2016; Schmith et al., 2021). Here, we provide a global analysis of the effects of BA on an extreme temperature index (the annual number of days with maximum temperature above 35°C, TX35), using three fit-for-purpose BA methods applied to the GCM simulations of the Coupled Model Intercomparison Project Phase 5 (CMIP5) from the historical and RCP8.5 experiments. The effect of BA on the projected TX35 changes is then evaluated for different global warming levels (GWLs), analysing regional differences with a focus on the updated IPCC-WGI reference regions.
2 | DATA AND METHODS

2.1 | Model data (CMIP5)

Daily maximum temperature from 28 GCMs from CMIP5 (Taylor et al., 2012, curated version used for the IPCC-AR5) was used in this work, considering both the historical and RCP8.5 experiments (see Table 1). All the simulations were downloaded from the IPCC Data Distribution Centre (https://www.ipcc-data.org/sim/gcm_monthly/AR5/index.html; last accessed 31 December 2019). For comparability, all simulations were interpolated to a common 2° grid. The common grids and land/sea masks used are available in the ATLAS GitHub repository.¹ The period 1986-2005 was considered as the historical baseline, while the +1.5°C, +2°C, and +3°C GWLs (with respect to the pre-industrial 1850-1900 mean value; see, for example, Nikulin et al., 2018) were used for future projections. The corresponding time periods for each GCM are computed using 20-year moving windows. Table 1 shows the central year (n) of the 20-year window where the warming is first reached. The GWL period is thus taken as [n − 9, n + 10]. The use of a 20-year moving window is selected to be consistent with the 20-year time slices typically used for near-term (2021-2040), mid-term (2041-2060), and long-term (2081-2100) future projections. The reference GWLs (and additional supplementary materials and reproducibility scripts) are available in the ATLAS GitHub repository.²
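A minimal sketch of the GWL bookkeeping described above, assuming a model's annual global-mean temperature series is available as aligned arrays; function and variable names are illustrative:

```python
import numpy as np

def gwl_period(years, tas, level, pi_years=(1850, 1900)):
    """First 20-year period [n - 9, n + 10] whose mean warming relative
    to the 1850-1900 baseline reaches `level` (degC); None if never."""
    years = np.asarray(years)
    tas = np.asarray(tas, float)
    baseline = tas[(years >= pi_years[0]) & (years <= pi_years[1])].mean()
    for i in range(9, len(years) - 10):      # i indexes the central year n
        if tas[i - 9 : i + 11].mean() - baseline >= level:
            n = int(years[i])
            return (n - 9, n + 10)
    return None
```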
2.2 | Observational data

Daily maximum temperature from W5E5 (Cucchi et al., 2020; Lange, 2019; Weedon et al., 2014) was used as the observational reference to calibrate the GCM output. This dataset was developed as part of Phase 3b of the Inter-Sectoral Impact Model Intercomparison Project (https://www.isimip.org/), being the observational reference for the calibration of the GCMs considered in the third phase of this initiative, which is focused, among others, on the detection and attribution of observed impacts following the definition established by the IPCC-WGII (Cramer et al., 2014). W5E5 is a global daily dataset with 0.5° horizontal resolution covering the period 1979-2016. It is a merged dataset which combines WFDE5 data (Cucchi et al., 2020; Weedon et al., 2014) over land with ERA5 (Hersbach et al., 2020) over the ocean. To avoid spurious effects due to the scale gap between model and observations, W5E5 was regridded to the same common 2° resolution grid used for the GCMs before training the BA methods. In this way, the downscaling effect is avoided and BA is used as a mere adjustment (see Casanueva et al., 2020a, for a discussion on this).

2.3 | Bias adjustment

In this study, we use three BA methods of increasing complexity. The simplest parametric methods adjust only the mean (referred to as MA) and the mean and variance (MVA), respectively (similar to RaiRat-M6 and RaiRat-M7 in the Cost Action VALUE intercomparison experiment, Gutiérrez et al., 2019; see Appendix A1). These methods are applied on a monthly basis, that is, the parameters are adjusted separately for each month. Empirical quantile mapping (EQM) is a popular BA method and consists in calibrating a transfer function over the control period to map the quantiles from the empirical cumulative distribution function of the model output onto the corresponding observed distribution. Here we use the implementation from Déqué (2007), which fits 99 empirical percentiles and uses constant extrapolation for out-of-sample values (i.e., values below and above the calibration range). The EQM implementation is similar to that in the Cost Action VALUE (Gutiérrez et al., 2019), with a slight modification in the moving window size in order to alleviate the computational demand of the method for the global domain (here EQM is applied on a monthly basis, consistently with MA and MVA). An overall evaluation of these methods over Europe can be found in the literature. The intercomparison of the three BA methods presented here allows us to assess the suitability of simple (parsimonious) versus complex BA methods for this particular problem. Casanueva et al. (2013) showed that adjusting the mean (MA) reduces to a large extent the biases in high- and low-temperature percentiles, and these are close to zero after the second-order correction (MVA). Here we further analyse the practical implications for heat indices depending on absolute temperature thresholds (the annual number of days with maximum temperature above 35°C, TX35). Moreover, this comparison allows us to assess the effect of inflation on the results. Whereas MVA and EQM produce an inflation of the variance, thus modifying the climate change signal, the simplest method (MA) does not affect the variance and preserves trends (a trend-preserving method). The BA methods were trained over the historical 1986-2005 period and subsequently applied to the 20-year GWL periods (Table 1), considering every land gridbox for each GCM separately. Finally, values of TX35 were calculated from both the raw and BA daily maximum temperature time series. These methods are implemented in the R package downscaleR through the function biasCorrection, using the optional arguments method = "eqm" or method = "scaling" or method = "mva", and window = c(30,30). Further details and worked examples of the EQM application for absolute-threshold, temperature-based climate indices are given in Iturbide et al. (2019) and companion materials.
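The study itself uses the downscaleR implementation cited above; purely for concreteness, minimal per-gridbox versions of the three adjustments (applied to one calendar month's daily values) and of the TX35 index might look as follows. EQM interpolates an additive correction between 99 empirical percentiles, so the correction stays constant outside the calibration range:

```python
import numpy as np

def ma(obs, hist, fut):
    """Mean adjustment: remove the historical mean bias."""
    obs, hist, fut = map(np.asarray, (obs, hist, fut))
    return fut - (hist.mean() - obs.mean())

def mva(obs, hist, fut):
    """Mean-and-variance adjustment: rescale anomalies, then re-centre."""
    obs, hist, fut = map(np.asarray, (obs, hist, fut))
    return obs.mean() + (fut - hist.mean()) * (obs.std() / hist.std())

def eqm(obs, hist, fut, n_quantiles=99):
    """Empirical quantile mapping with constant extrapolation."""
    obs, hist, fut = map(np.asarray, (obs, hist, fut))
    p = np.linspace(1, 99, n_quantiles)
    q_hist = np.percentile(hist, p)
    corr = np.percentile(obs, p) - q_hist      # additive correction
    return fut + np.interp(fut, q_hist, corr)  # flat outside [q1, q99]

def tx35(tasmax_daily, n_years):
    """Mean annual number of days with tasmax above 35 degC."""
    return float((np.asarray(tasmax_daily) > 35.0).sum()) / n_years
```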
[Figure 1: The first row shows the mean annual number of days with maximum temperature above 35°C (TX35) for the W5E5 observational reference (first column) and the raw CMIP5 ensemble for the historical 1986-2005 period (second) and the +2°C global warming level (third). Rows 2-4 show the results corresponding to the different bias adjustment methods (simple mean bias adjustment, MA; mean-variance bias adjustment, MVA; and empirical quantile mapping, EQM, respectively), representing the bias (first column) and the differences between the raw and bias-adjusted values (adjustment factor) in the training period (middle column) and a representative test period (right). The overlaid map shows the updated IPCC-WGI reference regions (see Figure 4a).]

3 | RESULTS

3.1 | Model biases in the historical period

When compared with the observed data in the calibration period, EQM exhibits very low biases of the ensemble mean TX35 (see Figure 1, first column), with less than 3 days/year bias over most of the global surface. This is not surprising, since this method adjusts the percentiles of the distribution. Moreover, the results are very similar for the MVA method, indicating that adjusting just the mean and variance indirectly produces a good adjustment of the upper quantiles (e.g., those related to TX35). The simplest method, MA, results in higher differences, but these are still smaller than the adjusted biases (as shown in column 2). Therefore, parametric methods seem to be convenient for the assessment of the TX35 index, which is related to high percentiles of the maximum temperature. The ensemble mean maps of the historical (training) period exhibit large differences between raw and bias-adjusted TX35 over sizeable global land areas (see Figure 1, middle column), highlighting the large effect of the BA step. These differences, expressed as the absolute difference BA − raw (days), are remarkable, affecting pre-eminently the intertropical range, for which the bulk of the land area is concentrated in Eastern Africa and the Sahara Desert, with positive differences over 50 days (IPCC-WGI regions SAH, WAF, CAF, NEAF; see Figure 4a), South America (a mixed pattern, with positive differences over NWS and SAM and mostly negative differences in NSA, the Amazon Basin, and NES), Central America (SCA), the Arabian Peninsula (ARP, with positive/negative differences in the northern/southern parts), and Southern Asia (mostly negative in SAS). Some important differences are also found in some adjacent areas of the extratropics in North/South America (NCA/SES regions), Northern/Southern Africa (MED/SWAF), the Middle East (WCA), and Australia. In South America, positive differences are located over the major mountainous areas of the continent, namely the Andes (NWS, SAM, SES) and the Brazilian Highlands (SAM), while the negative differences are strengthened in the low-lying areas of the Amazon (NSA) and Paraná (SES) river basins. Overall, we find a distinctive spatial pattern of the difference between raw and BA maps, resembling the spatial pattern of CMIP5 biases in extreme values of mean temperature described in previous studies (see e.g., Zhao et al., 2015). This pattern is consistent among the different BA methods (Figure 1), particularly for EQM and MVA, which exhibit an almost identical pattern worldwide. The simpler MA method yields some differences of low magnitude in a few regions, for example Eastern South America (NES), Central Africa (CAF), the Indian Peninsula (SAS), and Southeast Asia (SEA), but in general these differences are weak and restricted to small areas. This result supports the use of simple and parsimonious BA methods over the more complex EQM involving multiple parameters.

[Figure 2: Changes in the mean number of annual days with maximum temperature above 35°C (TX35) for three future 20-year periods corresponding to the +1.5°C, +2°C, and +3°C global warming levels (GWL) (in columns) relative to the historical 1986-2005 period. The first row shows the results corresponding to the raw model data and the second to fourth rows show the results for the MA, MVA and EQM bias adjustment methods, respectively. Hatching represents multimodel uncertainty (hatched areas correspond to weak model agreement, that is, less than 80% of the models agreeing on the sign).]

[Figure 3: Historical and RCP8.5 time series of the individual models (thin lines) and the multimodel mean (solid lines) of regional TX35 for the Southeast Asia (SEA) region for the (a) raw and (b-d) MA, MVA and EQM bias-adjusted model data. The red shaded area indicates the multimodel range.]

3.2 | Future projections of extreme heat

The differences between raw and BA projections are increasingly reinforced, consistently for the three BA methods, from the historical experiment to the future projections, as shown in Figure 1 (right column), which corresponds to the +2°C GWL. In the Amazon basin (NSA, SAM) the negative difference of the historical period is inverted, showing a strong positive increment at the +2°C GWL.
These differences tie in with the increment of the projected changes from lower to higher levels of warming shown in Figure 2, which unveils a progressive southward displacement toward the South American, South African and SEA regions, more accentuated with increasing GWLs (Figure 2). In general, there is strong multimodel agreement over the bulk of the land areas, notably improved at the land-sea transitions after BA application over most of the world's coastal regions, such as west Africa (western SAH, CAF, WSAF) and the Indian Ocean coasts (NEAF, SEAF, ARP, SAS, SEA, NAU, SAU), as indicated by the limited areas with hatched pattern in Figure 2. The low multimodel agreement depicted in the higher latitudes of the northern hemisphere (less than 80% of the models bearing the same delta-change sign) is due to the very low delta changes in the number of days above the 35°C threshold, ranging from small negative to small positive values around zero. Differently, multimodel agreement is met (all zero values) in those areas where this temperature threshold is never reached by any model (e.g., Antarctica and Greenland). To gain a better insight into the effect of BA, Figure 3 shows the multimodel time series of the spatially averaged TX35 over SEA. Here, the climate change signal is largely reinforced after BA application. The trend of the ensemble mean is clearly more pronounced for the BA series, with a quasi-linear increment until reaching +125 days/year by the end of the 21st century, that is, more than double the raw projection (50 days/year). Furthermore, the ensemble spread of the historical period and the near-future projections is drastically reduced with EQM and MVA (not as much with MA), yielding a more robust ensemble projection than the raw version. This particular result is consistent with the overall improvement in the multimodel agreement found after BA application in this region, as indicated by the reduction of hatched areas in Figure 2. Figure 4 summarizes the main results by showing regional averages of the projected change signal at the +2°C GWL before and after BA (Figure 4b,c), and the magnitude of the adjustment for the +2°C GWL (BA − raw, Figure 4d).

[Figure 4: (a) Land subset of the updated IPCC-WGI reference regions (see Iturbide et al., 2020, for details). (b, c) Climate change signals of TX35 for the 2°C warming level w.r.t. the historical values, for raw and BA (MVA) data. (d) Differences between bias-adjusted and raw data for the 2°C warming level.]

This synthesis of the information clearly shows that the magnitude of the correction is, in absolute value, similar to or even larger than the raw TX35 climate change signal (for the policy-relevant +2°C GWL) in many IPCC-WGI reference regions, mostly within the tropical range. This result highlights the paramount importance of the BA step in order to obtain credible TX35 change projections. Overall, the BA projections in this case yield much more intense future heat in the equatorial range in Africa and South America, regions where the raw projections may dangerously underestimate future impacts. Moreover, these results suggest that the change in the climate change signal is mostly due to the nonlinear transformation between high temperatures and the number of threshold excesses (a desired effect), with variance inflation playing a small role in this case.
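For completeness, a sketch of the 80% sign-agreement rule behind the hatching in Figure 2; gridboxes where no model ever exceeds the threshold are treated as agreeing, as noted above:

```python
import numpy as np

def weak_agreement_mask(deltas, threshold=0.8):
    """deltas: (n_models, ny, nx) array of TX35 changes. True where
    fewer than `threshold` of the models share the majority sign
    (these gridboxes are hatched)."""
    deltas = np.asarray(deltas, float)
    n = deltas.shape[0]
    pos = (deltas > 0).sum(axis=0)
    neg = (deltas < 0).sum(axis=0)
    agreement = np.maximum(pos, neg) / n
    all_zero = (deltas == 0).all(axis=0)   # threshold never reached anywhere
    return (agreement < threshold) & ~all_zero
```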
4 | DISCUSSION AND CONCLUSIONS

Even though fundamental model errors cannot be improved by BA, and process-informed BA is in general a preferable approach (Maraun et al., 2017), BA can still be justified for highly biased climate indices such as those defined using absolute thresholds, for which the raw signal is unreliable (Dosio, 2016), as we show using TX35. In this context, some BA methods allow, by construction, the modification of the raw climate change signal at the cost of introducing some additional uncertainty inherent to the statistical adjustment of the raw model outputs. The BA application to the TX35 CMIP5 products presented here constitutes a suitable example of this, allowing for more credible future extreme heat projections. The added value of the correction is noteworthy, since the observational dataset (W5E5) has a higher native resolution, allowing for a better representation of orographical features, even after the degradation of its resolution prior to BA. Similarly, over islands and predominantly insular regions, the trend is reinforced after BA application (see e.g., Figure 3, SEA region). This effect can therefore be considered an actual "correction" of the raw model outputs, based on the more reliable representation of observed conditions in the reference dataset (here W5E5) used for the adjustment. While EQM applies a specific adjustment factor to each of the 99 percentiles of the distribution, MA acts on just one parameter (the mean) and MVA on two (mean and variance). Our results show that the three methods yield similar results (virtually identical in the case of EQM and MVA), thus supporting the use of simpler and more parsimonious BA methods (MVA in this case) for threshold-based indices. This work also paves the way for further analyses with alternative BA techniques for handling climate indices based on absolute thresholds. The increasing temperature trends throughout the 21st-century projections (particularly accentuated for RCP8.5) pose a non-stationarity problem that is tackled in the case of EQM using a constant correction for values beyond the outermost percentiles. However, alternative techniques may prove to be better suited to this particular extrapolation problem, and also to preserving the warming signal trends (Casanueva et al., 2020a). This study demonstrates the need of BA for achieving a more plausible representation of future climate impacts, since observational references can help to improve the poor GCM representation of these features due to their coarse original resolution. These results unveil a stronger and more rapid increase in the frequency of heat extremes in the future than one may expect using the raw model outputs alone. The projected changes affect large world land areas, some of them highly populated and vulnerable, stressing the compelling need for adaptation and mitigation strategies to face unprecedented heat extremes in the next decades.
2021-11-04T15:15:31.335Z
2021-11-01T00:00:00.000
{ "year": 2022, "sha1": "014f6b7ccb1ee2178d5c9830c2348f9d425fee4e", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/asl.1072", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "9e4ba407396f6d87a10633ab6c7a9e9f83b1cd33", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
53331674
pes2o/s2orc
v3-fos-license
Acute Left Ventricular Outflow Tract Obstruction in Non-Mitral Cardiovascular Surgery: A Case Series Analysis. Anaesth Critic Care Med J 2018, 3(2): 000137

Objective: We aimed to analyze the clinical signs of left ventricular outflow tract obstruction and its management in the perioperative period of major non-mitral cardiovascular procedures. Design: Case series analysis. Methods: Thirteen patients (10 males, 3 females) aged 64 (56; 74) years with acutely emerging left ventricular outflow tract obstruction during/after a non-mitral cardiovascular procedure between May 2006 and May 2018 were included. The procedures were as follows: coronary artery bypass grafting, n=11; aortic valve replacement, n=1; abdominal aortic membrane resection (aortic dissection DeBakey type I, acute leg ischemia), n=1. Results: Left ventricular outflow tract obstruction with systolic anterior motion of the anterior leaflet of the mitral valve was detected in 0.9% of the total number of perioperative echocardiography examinations. Three variants of its clinical course were described: (1) recovery of intracardiac and systemic hemodynamics with specific therapy (most cases); (2) full resistance to therapy with sustained persistence of systolic anterior motion; (3) termination of systolic anterior motion as a result of the therapy, but paradoxical persistence of low cardiac output syndrome. Conclusion: Practitioners' vigilance and echocardiographic monitoring are needed for the early detection of acute left ventricular outflow tract obstruction. Its development can be a marker of extremely severe concentric left ventricular hypertrophy as a cause of low cardiac output syndrome.

Introduction

Left ventricular outflow tract obstruction (LVOTO) due to systolic anterior motion (SAM) of the anterior mitral leaflet (AML), which typically occurs in patients with hypertrophic cardiomyopathy [1,2], is well known as a complication of mitral valve repair (MVR) [1,3-7]. During non-mitral cardiovascular surgery, such as coronary artery bypass grafting (CABG) or aortic valve replacement (AVR), LVOTO has been described almost exclusively in isolated small studies [8,9]. Some case reports have also described acute LVOTO during liver transplantation and other major surgical procedures for which transesophageal echocardiography (TEE) is used routinely [10-14]. The predictors of this disturbance are left ventricular hypertrophy (LVH), a hyperdynamic left ventricular ejection fraction (LVEF) [15], hypovolemia, arterial hypotension, and inotropic therapy [1,16]. These factors can promote the Venturi effect, with the development of a suction force in the LVOT leading to SAM. The size of the AML and a reduced mitro-aortic angle are important contributing factors in MVR cases [16-18]. Conventional hemodynamic monitoring, including a pulmonary artery catheter (PAC), is not fully capable of diagnosing LVOTO. In fact, the effectiveness of diagnosis and the rate of detection of SAM depend on the use of echocardiography for routine monitoring. The detection of SAM has even been proposed as a criterion of a sufficient educational and qualification level of the anesthesiology team [19]. We aimed to describe and analyze the clinical and echocardiographic signs of LVOTO in the perioperative period of major cardiovascular procedures (Figure 1).

Methods

All consecutive patients with acute LVOTO during or after major cardiovascular procedures, except MV surgery, from September 2008 to May 2018 were included.

Echocardiography.
In all cases, the diagnosis of LVOTO/SAM was established by multiplane TEE using 2D, color Doppler, pulsed-wave (PW), and continuous-wave (CW) Doppler modes. The criteria for LVOTO were a typical abnormal AML systolic motion, a peak gradient in the LVOT >40 mmHg, mitral regurgitation (MR) grade II-IV, and asymmetric LVH with interventricular septum (IVS) bulging [1]. A typical echocardiogram is shown in Figure 1.

Standard hemodynamic monitoring. All patients were monitored with invasive arterial pressure (AP) and a PAC (before and/or just after the LVOTO diagnosis) to control pulmonary arterial pressure (PAP), pulmonary arterial wedge pressure (PAWP), and cardiac index (CI). Standard LVOTO therapy generally involves cancellation of inotropic support, beta-blockers, fluid loading, and induction of arterial hypertension with phenylephrine. Data collection and statistical analysis were performed using Microsoft Excel 2010 for Windows 8. Statistical data are reported as median (interquartile range, IQR). The Mann-Whitney U-test was used for data comparison. P-values <0.05 were considered significant.

Results

Asymmetric LVH was diagnosed preoperatively in only five (38.5%) patients. Before surgery, LVEF was >50% in all patients (Table 1). Significant increases in LVEF and IVST were recorded at the time of the perioperative LVOTO diagnosis. Acute MR II-IV and a peak gradient (PG) in the LVOT >40 mmHg were present in all patients. CABG (three grafts) with cardiopulmonary bypass (CPB) was performed in a 64-year-old man with a history of severe arterial hypertension and LVH (IVST and LVIWT, 17 mm). A low CI presented after weaning from CPB, and LVOTO was diagnosed with TEE 20 min after protamine administration. The typical echocardiographic image of SAM, with MR IV and a peak gradient in the LVOT of 44 mmHg, was present. However, standard therapy (volume loading, metoprolol, and phenylephrine) did not yield a consistent effect, and AV pacing with a short (70 ms) delay was started, resulting in full recovery of the intracardiac circulation. Surprisingly, CI remained extremely low: 1.7 before and 1.9 L*min−1*m−2 after SAM termination. A restrictive type of hemodynamics was observed: PAWP >20 mmHg, a restrictive pattern of mitral inflow [20] with a giant E-wave and a small A-wave on PW Doppler, as well as IVST and LVIWT >20 mm and systolic LV obliteration. Severe low cardiac output syndrome (LCOS) without SAM led to the patient's death 8 days postoperatively. The next case showed a similar situation with the opposite outcome. A 79-year-old woman underwent on-pump CABG surgery (three grafts). LVOTO was diagnosed by TEE, and she was successfully treated with metoprolol and volume loading 1.5 h after her arrival in the ICU. However, significant LCOS remained against the background of LVH (LVIWT and IVST >20 mm, LV systolic obliteration). The right atrial pressure (RAP), PAWP, heart rate (HR), CI, and AP were 21 mmHg, 16 mmHg, 64 min−1, 1.8 L*min−1*m−2, and 125/64 mmHg, respectively. Despite the obvious risk of inotropes, dobutamine (up to 8 µg*kg−1*min−1) was applied to restore right ventricular function. Thus, we observed a CI increase to 2.6 L*min−1*m−2 with an HR of 78 min−1, a RAP of 12 mmHg, and a PAWP of 18 mmHg. The restrictive pattern of the mitral inflow was detected before and after the start of the dobutamine infusion. No SAM recurrence was observed on TEE. The patient was extubated 12 h postoperatively, the dobutamine infusion was terminated, and the patient was discharged from the ICU two days postoperatively. All types of responses to the therapeutic measures are summarized in Table 2.

Discussion

Previously, cases of SAM due to acute myocardial infarction (AMI), during urgent CABG, and after AVR were presented in a few case reports [8,9,21]. The most significant recent studies with a systematic analysis of acute SAM in cardiac surgery have addressed MVR. Crescenzi et al. [4] and Landoni et al. [5] presented the most detailed analysis of SAM after MVR and offered the following management steps: step 1, expanding the intravascular volume and discontinuing any inotropic drug; and step 2, increasing the afterload through manual compression of the ascending aorta while administering an intravenous bolus of esmolol. The authors described three types of response to the therapy: "easy-to-revert" (step 1 was effective), "difficult-to-revert" (step 2 was effective), and "persistent" (a repeated surgical procedure was required). Accordingly, we analyzed some specific mechanisms of LVOTO, with IVS thickening (a bulging subaortic septum) [1,22] manifesting after aortic cross-clamping as the most important. Significant IVST increases can hypothetically be associated with reperfusion myocardial edema in the basal IVS, but the specific mechanism of this reperfusion injury remains unclear.
All types of responses to therapeutic measures are summarized in Table 2.

Discussion

Previously, cases of SAM due to acute myocardial infarction (AMI), during urgent CABG, and after AVR were presented in a few case reports [8,9,21]. The most significant recent studies with systematic analysis of acute SAM in cardiac surgery have addressed MVR. Crescenzi, et al. [4] and Landoni, et al. [5] presented the most detailed analysis of SAM after MVR and offered the following management steps: step 1, expanding intravascular volume and discontinuing any inotropic drug; step 2, increasing the afterload through manual compression of the ascending aorta while administering an intravenous bolus of esmolol. The authors described three types of response to therapy: "easy-to-revert" (step 1 was effective), "difficult-to-revert" (step 2 was effective), and "persistent" (a repeated surgical procedure was required). Accordingly, we analyzed some specific mechanisms of LVOTO, with IVS thickening (a bulging subaortic septum) [1,22] manifesting after aortic cross-clamping as the most important. Significant IVST increases can hypothetically be associated with reperfusion myocardial edema in the basal IVS, but the specific mechanism of this reperfusion injury remains unclear.

Our single-center study could not enroll a large group of patients with this uncommon hemodynamic disturbance. Despite this limitation, three variants of response to therapy can be described: (1) intracardiac and systemic hemodynamic recovery (most cases), corresponding to "easy-to-revert" or "difficult-to-revert" in Crescenzi, et al. and Landoni, et al. [4,5]; (2) full resistance to therapy with sustained SAM and LCOS persistence (so-called "persistent SAM" [4,5]); (3) termination of SAM but paradoxical LCOS persistence due to extremely severe concentric LVH (Table 2). Thus, LVOTO is not only an obvious direct cause of severe hemodynamic disorders; it can also be a marker of severe concentric LVH with extremely severe diastolic dysfunction. The persistence of SAM is the most significant, but not always the only, pathogenic mechanism of circulatory insufficiency. Indeed, the abovementioned patient had the lowest peak gradient in the LVOT. The role of restrictive LV remodeling with reduced diastolic compliance is no less important.

Selecting a treatment strategy for such cases can be difficult. Traditionally, inotropic therapy has been considered "a crime" in patients with LVH and SAM. However, we have paradoxically experienced the effective application of inotropes in such a case. In our opinion, the useful effects of inotropic agents are an improvement of right ventricular pump function, leading to additional volume loading of the restrictive LV, and an HR increase in cases with a rigid, small stroke volume. Previously, van der Maaten, et al. [23] expressed highly original views on the opportunity for pharmacological improvement of left atrial pump function. Its augmentation with inotropic agents immediately after aortic stenosis correction was reported, along with evidence that enoximone does not degrade the compliance of the hypertrophied myocardium. The necessary condition for providing such extraordinary therapy in patients with severe LVH is careful echocardiographic control.

In our experience, LVOTO detection was almost always unexpected. This complication was predicted in only 1 of 13 cases studied. This study is an additional argument in favor of routine TEE monitoring in cardiac and major vascular surgeries [24].
We also do not oppose the use of the PAC, although it is debatable [25]. The frequency of use of the Swan-Ganz catheter is not decreasing, and mortality in patients undergoing PAC application during cardiac procedures has tended to be lower [26]. However, only echocardiography can identify the real mechanism of circulatory failure. We agree with the approach presented earlier [4,5], demonstrating that a standardized treatment algorithm is essential for rapid detection of cases of persistent SAM. Surgical treatment of this complication after non-mitral valve cardiac procedures has not been commonly implemented. In 2015, Lee et al. reported the successful use of alcohol and albumin-glutaraldehyde (BioGlue) for septal ablation to percutaneously treat LVOTO immediately after aortic and mitral valve replacement [6]. Isolated case reports of MitraClip application for SAM removal have been published [27-29].

In conclusion, in the practice of cardiovascular anesthesia, LVOTO that is not associated with MVR is an uncommon but dangerous complication. Practitioner vigilance and echocardiographic monitoring are needed for early SAM detection and effective management. The development of LVOTO during or after non-mitral valve cardiovascular procedures can be a marker of extremely severe concentric LVH with diastolic restriction as a cause of the low-flow status.
Ventilator-associated pneumonia in a Polish intensive care unit dedicated to COVID-19 patients

Background: Healthcare-associated infections (HAI) are most frequently associated with patients in the intensive care unit (ICU). Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), led to ICU hospitalization for some patients.

Methods: The study was conducted in 2020 and 2021 at a hospital in southern Poland. The Healthcare-Associated Infections Surveillance Network (HAI-Net) of the European Centre for Disease Prevention and Control (ECDC) was used for HAI diagnosis. The aim of this case-control study was to retrospectively assess the epidemiology of HAIs in ICU patients, distinguishing between COVID-19 and non-COVID-19 cases.

Results: The study included 416 ICU patients: 125 (30%) with COVID-19 and 291 (70%) without COVID-19, p < 0.05. The mortality rate was 80 (64%) for COVID-19 patients and 45 (16%) for non-COVID-19 patients, p < 0.001. Ventilator-associated pneumonia (VAP) occurred in 40 cases, with an incidence density rate of 6.3/1000 patient-days (pds): 14.1/1000 pds for COVID-19 patients vs. 3.6/1000 pds for non-COVID-19 patients; the odds ratio (OR) was 2.297, p < 0.01. Acinetobacter baumannii was the microorganism most often isolated in VAP, with 25 cases (incidence rate 8.5%): 16 (18.2%) in COVID-19 patients vs. 9 (4.4%) in non-COVID-19 patients; OR 4.814 (1.084-4.806), p < 0.001.

Conclusions: Patients treated in the ICU for COVID-19 faced twice the risk of VAP compared to non-COVID-19 patients. The predominant microorganism in VAP cases was Acinetobacter baumannii.

Background

Coronaviruses, members of the Coronaviridae family, have been recognized in contemporary medicine since the 1960s [1]. Coronaviruses were previously responsible for approximately 20% of upper respiratory tract infections in both children and adults. However, a significant shift occurred at the end of 2019, when cases of acute, unexplained lung inflammation emerged in China. This novel threat was identified as a new type of coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [2], and the disease it causes was named coronavirus disease 2019 (COVID-19) by the World Health Organization (WHO).

For some patients, SARS-CoV-2 infection manifested as acute respiratory distress syndrome (ARDS), necessitating treatment in intensive care units (ICUs). Patients in ICUs are exposed to invasive procedures, including mechanical ventilation (MV), which may result in nosocomial pneumonia (NP), specifically ventilator-associated pneumonia (VAP).

A study conducted by Guan et al. [3] in China at the early stage of the pandemic (January 2020) revealed that 5% of COVID-19 patients required ICU admission, with 2.3% undergoing mechanical ventilation. It is estimated that approximately 20% of patients experience a severe or very severe course of the disease, primarily characterized by gas exchange disorders, notably hypoxemia [4].

The aim of this case-control study is to retrospectively analyze the epidemiology of VAP in patients treated in 2020-2021, categorizing them into COVID-19 and non-COVID-19 groups.

Methods

This analysis is based on the results of two-year surveillance conducted in the ICU of St. Luke Regional Hospital in Tarnów in 2020 and 2021.
Patients diagnosed with COVID-19 were accommodated in the ICU in a dedicated nine-bed room with specialized medical staff and sanitary and hygienic facilities. Non-COVID-19 patients in the ICU were treated in a five-bed room with their own specialized personnel and facilities. These two groups of patients and their respective medical personnel did not interact. Approximately 50% of the nurses in the ward treating COVID-19 patients were transferred from other, non-ICU hospital wards.

Active, continuous, and targeted surveillance of healthcare-associated infections (HAI) was conducted. Data on patients and hospital infections were collected as part of an active and targeted surveillance process following the standardized protocol established by the European Centre for Disease Prevention and Control (ECDC), version 4.3 [11]. The definition of a hospital-acquired infection, as per the 2018 implementing decision of the European Commission, was applied [12]. Patients with an ICU stay of fewer than 48 h were excluded from the analysis.

Statistical analysis

A retrospective statistical analysis was performed using IBM SPSS (Statistical Package for the Social Sciences, STATISTICS 24, Armonk, NY, USA) and Microsoft Excel (Microsoft Office 2016, Redmond, WA, USA). Statistical calculations included frequencies (n), percentages (%), medians (Me), standard deviations (SD), and significance levels (p), where p < 0.05 indicated statistical significance. The analysis involved calculating odds ratios (OR) and 95% confidence intervals (95% CI) for both groups, classified by the presence or absence of HAI. Fisher's exact probability test was used due to sample size considerations.

Incidence rates were calculated for VAP, indicating the number of new cases per 100 ICU admissions, as well as incidence density rates, reflecting the number of new VAP cases per 1000 patient-days with mechanical ventilation. Additionally, utilization rates (UR) for mechanical ventilation (MV) were calculated as the ratio of ventilator-days to patient-days. A minimum sample size of 399 hospitalized patients was required for this study.

The data used for analysis were anonymized. The study was based on routinely collected hospitalization data, obviating the need for additional consent. The study was conducted with the approval of the Bioethics Commission of the Jagiellonian University in Krakow (no. KBET 1075.6120.12.2023) and adhered to the principles of the Declaration of Helsinki [13].

Results

From January 1, 2020, to December 31, 2021, a total of 416 patients who met the study criteria were admitted to the ICU. Of these, 125 patients (30%) were treated for COVID-19 and 291 (70%) for other conditions. Among the etiological factors, non-fermenters were particularly noteworthy, with Acinetobacter baumannii being the dominant pathogen, accounting for 47 (36.4%) of the cases (see Table 3).
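As an editor's illustration (not from the paper) of the incidence-density and odds-ratio calculations named in the statistical analysis above, here is a minimal Python sketch. Only the 40 VAP cases and the 6.3/1000 patient-day rate are taken from the reported results; the 2x2 cell counts are hypothetical placeholders:

```python
# Sketch of the rate and odds-ratio calculations described in the Methods.
from scipy.stats import fisher_exact

def incidence_density(cases, patient_days, per=1000):
    """New cases per `per` patient-days."""
    return cases * per / patient_days

# 40 VAP cases at 6.3/1000 pds implies roughly 6349 ventilator patient-days.
print(incidence_density(40, 6349))  # -> ~6.3 per 1000 ventilator-days

# Hypothetical 2x2 table: rows = COVID-19 / non-COVID-19, cols = VAP yes / no.
table = [[18, 107],
         [22, 269]]
odds_ratio, p = fisher_exact(table)  # Fisher's exact test, as in the paper
print(f"OR = {odds_ratio:.3f}, p = {p:.3f}")
```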
Patients diagnosed with COVID-19 who required intensive care experienced a shorter duration of invasive mechanical ventilation compared to patients treated for other medical conditions. The utilization rates (UR) were notably lower in COVID-19 patients (0.36) than in non-COVID-19 patients (0.94). The incidence rate of ventilator-associated pneumonia (VAP) was significantly higher in COVID-19 patients, at 14.1 per 1000 ventilator-days, in contrast to 3.6 per 1000 ventilator-days in non-COVID-19 patients (as detailed in Table 4). The basis for the microbiological diagnosis of VAP in patients with COVID-19 was material from the lower airways in all 18 cases (100%) (see Tables 3 and 4).

Discussion

None of the patients in our study population admitted to the ICU with SARS-CoV-2 infection had been vaccinated against COVID-19. Vaccination against COVID-19 significantly lowers the risk of severe disease, of infection, and of requiring ICU treatment; consequently, the studied COVID-19 patients were more exposed to a severe course of SARS-CoV-2 [14]. Poland's COVID-19 vaccination coverage is relatively low compared with other countries: according to the European Centre for Disease Prevention and Control (ECDC), only 61% of the Polish population has received at least one dose. In the city of Tarnów, where this study took place, vaccination coverage was 49%, while outside the city it was no more than 41% [15]. These low vaccination rates are likely a consequence of the increasing influence of anti-vaccination movements, which have undermined public trust in vaccinations and led to a rise in refusals of mandatory immunizations [16,17]. This situation has significantly burdened the Polish healthcare system and has become a significant public health issue.

The HAI incidence rate per 100 hospitalized patients in our study was 31%, and it was similar in the COVID-19 group (33%) and the non-COVID-19 group (30%). Another Polish study conducted in two ICUs among COVID-19 patients reported a considerably higher HAI incidence rate of 56% [18]. Similarly, studies from other European countries have also shown high HAI incidence rates during the COVID-19 pandemic among patients hospitalized in the ICU [19,20]. For instance, Grasselli et al. [19], in a multicenter study across 8 Italian hospitals, reported an ICU incidence rate of 46% among COVID-19 patients. In a single-center study in Spain, patients treated in the ICU due to COVID-19 had a 41% HAI incidence rate. Several factors may explain the increased rate of healthcare-related infections in the population of ICU patients with COVID-19, including structural factors such as the introduction of new ICU beds, organizational factors such as the inclusion of new teams of physicians and nurses without prior intensive care experience, and functional factors such as changes in patient care standards [21]. All of these structural, organizational, and functional changes were present in the dedicated COVID-19 ICU ward that we investigated.

One of the most common clinical forms of infection in Polish ICUs is nosocomial pneumonia (NP) [22,23]. In studies conducted before the COVID-19 pandemic in southern Poland, the frequency of nosocomial pneumonia ranged from 4 to 10% [8,9,24,25]. However, a considerably higher incidence rate (17%) of hospital-acquired pneumonia was reported for the period before the COVID-19 pandemic (2017-2018) by Dubiel et al.
[23] in a study that involved 11 Polish ICU wards located in the northern region of Poland. According to the ECDC report [26] on studies conducted before the COVID-19 pandemic in European countries from 2008 to 2012, the average incidence rate of NP in ICUs was 6%. Other ECDC reports [10,22] on studies conducted in European ICUs also indicated a 6% incidence rate of NP.

During the COVID-19 pandemic, significantly higher incidence rates of nosocomial pneumonia were reported among patients hospitalized in the ICU due to COVID-19. In our study, the incidence rate of NP in this group of patients was 14%, almost twofold higher than in the group of non-COVID-19 patients (8%). Kozłowski et al. [18], in their study involving two ICU wards in northern Poland, reported a frequency of nosocomial pneumonia in patients treated for COVID-19 of 30%. Conversely, in a single-center ICU study conducted by Bardi et al. [20] in Spain, the incidence rate of nosocomial pneumonia in COVID-19 patients was 23%.

In our study, we calculated the VAP incidence rate per 100 patients treated in the ICU, which was 8% in both the non-COVID-19 and the COVID-19 group. The VAP incidence rate obtained in our study aligns with results from other studies. Chinese studies from Wuhan reported a VAP incidence rate per 100 patients treated in the ICU due to COVID-19 of 31% [27]. Italian researchers observed an even higher VAP incidence rate of 50% [18]. A systematic review and meta-analysis conducted by Ippolito et al. [28] estimated the overall VAP frequency in patients treated in the ICU due to COVID-19 to be 45%.

Figure 2. Healthcare-associated infection death rates in COVID-19 vs. non-COVID-19 patients.

The elevated incidence of hospital-acquired ICU infections among patients with COVID-19 may be attributed to their increased susceptibility to lung tissue infection by bacteria present in the ICU environment, owing to the initial damage caused by SARS-CoV-2 [29]. Patients admitted to the ICU often had acute pneumonia due to SARS-CoV-2, accompanied by respiratory distress syndrome, comorbidities, and advanced age [20]. In many cases, a majority of patients (96%) [20], and even the entire cohort in some investigations [18], required invasive mechanical ventilation, a significant risk factor for VAP. It has been demonstrated that intubation and mechanical ventilation can increase the risk of pneumonia by 6 to 21 times [30].

In our study, 292 (70%) of the patients required mechanical ventilation of the lungs. Interestingly, we observed a higher incidence of ventilator-associated pneumonia (VAP) in the group of COVID-19 patients. Critically ill COVID-19 patients frequently require prolonged invasive mechanical ventilation (MV), involving prone positioning, heavy sedation, and muscle blockers for several weeks. Furthermore, there is substantial evidence of prolonged immunosuppression, including deep lymphopenia [32]. This accounts for a high risk of secondary hospital-acquired infections, primarily ventilator-associated pneumonia (VAP) [33].

Diagnosing ventilator-associated infections remains a challenge, primarily due to the significant heterogeneity of clinical presentations. There is currently no consensus on appropriate diagnostic strategies for VAP. Regardless of the definition, a precise diagnosis of VAP necessitates clinical signs of infection, microbiological evidence, and chest X-ray findings. However, the interpretation of the latter can be complicated by pre-existing parenchymal injuries [34].
In our study, bronchoscopy was performed in only 5% of COVID-19 patients and 20% of non-COVID-19 patients. The basis for the microbiological diagnosis of VAP in COVID-19 patients was material obtained from the lower airways in all 18 cases, using the diagnostic approach known as a non-protected sample with quantitative culture (PN2). A study conducted before the COVID-19 pandemic, involving seven Polish ICU wards, observed that the duration of treatment was shorter for VAP patients who were correctly diagnosed using PN1 [34]. There was also a notable shift over time in the microbiological diagnostic methods employed for VAP patients. Notably, A. baumannii was predominantly observed in VAP cases diagnosed using substandard methods (non-PN1) [35].

The clinical presentation of COVID-19 pneumonia tends to be relatively uniform, commonly featuring high fever, hyperleukocytosis, severe hypoxemia, extensive bilateral radiologic infiltrates, and a biological inflammatory syndrome. Given the similarity in presentation between COVID-19 pneumonia and VAP, the traditional diagnostic criteria for VAP are not applicable to the critically ill COVID-19 population [33]. Performing fiberoptic bronchoalveolar lavage in severely hypoxemic COVID-19 patients is often impractical due to the inherent risk of exacerbating hypoxemia. As a result, many ICUs resort to less invasive endotracheal aspirate (ETA) sampling with quantitative or semiquantitative cultures, even though these methods may be less reliable for determining the necessity of antibiotic treatment. It is exceedingly challenging to distinguish between COVID-19-associated ARDS with asymptomatic bacterial colonization and a true VAP based solely on traditional threshold values, such as 10^5 CFU/ml for ETA samples [33]. These microbiological diagnostic challenges contribute to distinct differences in VAP classification and diagnosis in patients with COVID-19.

The precise identification of COVID-19 patients in need of new antibiotics for clinically relevant bacterial superinfections is a challenging task, which often results in the overuse of broad-spectrum antibiotics, even in the absence of supporting data in the literature [36]. Consequently, the majority of ventilated COVID-19 patients with ARDS receive prophylactic antibiotics as a preventive measure against undocumented VAP. This strategy carries a substantial risk of selecting multi-drug-resistant bacteria or even fungi, particularly in patients expected to remain on invasive MV for a long period [33].

The predominant causative agent of infections in our study was Acinetobacter baumannii, accounting for 36% of cases. In the group of patients with COVID-19, this microorganism was responsible for 63% of infections, whereas in the non-COVID-19 group it accounted for 24%. Previous Polish studies have consistently reported frequent isolation of Acinetobacter baumannii in ICUs [9,25,37]. In a study by Kozłowski et al.
[17], Klebsiella pneumoniae and Acinetobacter baumannii were identified as the most common pathogens responsible for VAP. Another study conducted in seven Polish ICUs from 2013 to 2015 found that Acinetobacter baumannii was primarily associated with VAP cases diagnosed using suboptimal methods (non-PN1) [35]. A concerning observation in our study is the increasing trend in the incidence rate of Acinetobacter baumannii: in 2020 it accounted for 10% of cases, rising to 13% in 2021 (OR = 3.342, 95% CI 1.799-6.208, p < 0.001). It is noteworthy that the incidence rate of Acinetobacter baumannii in patients admitted to the investigated ICU between 2012 and 2019 was 4%. An important characteristic of Acinetobacter baumannii is its ability to survive in dry conditions for extended periods, making the hospital environment a significant reservoir for this microorganism. It has been suggested that Acinetobacter is more likely to cause infections in facilities with older infrastructure [23].

In our study, the mortality rate among COVID-19 patients was 64%, more than four times higher than among non-COVID-19 patients (16%). Furthermore, significant disparities in mortality were noted among patients with HAI: in the COVID-19 group, a nearly twofold higher mortality rate of 21% was observed compared to 12% in the non-COVID-19 group. This pattern aligns with the findings of Kozłowski et al. [18], who reported a 72% mortality rate in COVID-19 patients with HAI versus 65% in those without HAI. Notably, a multicenter Italian study reported a 30% mortality rate among COVID-19 patients [19]. Bardi et al. [20] reported a 36% mortality rate in a university clinic in Madrid and highlighted a significant association between HAI and patient mortality: specifically, the death rate was 54% in the group of patients with HAI compared to 24% in the group without HAI. Hospital-acquired infections are a common complication in patients with COVID-19 treated in the ICU, which may contribute to the elevated mortality observed in this patient population [20].

In our study, it was also observed that among patients with NP, the mortality rate in the COVID-19 group was almost twice as high as in the non-COVID-19 group, at 10% versus 4%, respectively. This pattern is consistent with the findings of Maes et al. [31], where the mortality rate in COVID-19 patients with VAP was nearly twice as high as in non-COVID-19 patients, at 38% versus 21%. According to a meta-analysis of 20 studies, the average mortality rate due to VAP in COVID-19 patients was 43% [28]. Critically ill COVID-19 patients hospitalized in the ICU, grappling with an acute viral infection, often requiring mechanical ventilation and other invasive treatments, and exposed to multidrug-resistant strains that colonize the ICU, frequently face a challenging battle for survival.

Limitations of the study

Our study has several limitations. The most significant of these are its single-center nature, the relatively small sample size, and the short duration of the study. Another notable limitation is the absence of data on comorbidities.
Conclusions

In patients treated in the ICU for COVID-19, the incidence of NP and VAP and the risk of Acinetobacter baumannii infection were much higher than in patients treated in the ICU for other reasons. Although high, the risk of infection in our study was similar to the results reported by other authors. However, the proportion of Acinetobacter baumannii correlated with a sub-optimal sample type for microbiological diagnostics. This observation points to an important challenge for infection control: improving microbiological diagnostic methods and strengthening cooperation between the infection control team and the microbiological laboratory.

Figure 1. Healthcare-associated infection incidence rates in COVID-19 vs. non-COVID-19 patients.

Table 1. Demographic characteristics of ICU patients, their number, hospitalization patient-days, incidence rate per 100 hospitalizations, and death rate in 2020-2021. SD, standard deviation; ICU, intensive care unit; W, woman; M, man; UR, utilization rate; HAI, healthcare-associated infections.

Table 2. Clinical forms, number of HAI, HAI incidence rate per 100 hospitalizations, and HAI death rate in the ICU in 2020-2021. HAI, healthcare-associated infections; PN, pneumonia (lung infection); BSI, bloodstream infection; UTI, urinary tract infection; GI, gastrointestinal system infection; SYS, systemic infection; SST, skin and soft tissue infection; SSI, surgical site infection; LRI, lower respiratory tract infection. Clostridioides difficile infection (GI-CDI): 8 cases, incidence rate per 10,000 patient-days 8.3.

Table 3. Microorganisms responsible for HAI in the ICU in 2020-2021. The incidence rate of A. baumannii in patients in the investigated ICU in 2012-2019 was 4.2%. PN categories based on microbiological diagnosis [11]: PN1, positive quantitative culture from a minimally contaminated lower respiratory tract specimen such as bronchoalveolar lavage, brush, or distal protected aspirate; PN2, non-protected sample (endotracheal aspirate, ETA) with quantitative culture.

Table 4. Analysis of healthcare-associated infections related to the use of mechanical ventilation in the ICU in 2020-2021. PN, pneumonia; VAP, ventilator-associated pneumonia; MV, mechanical ventilation; UR, utilization rate.
Derivative spectrophotometry for the simultaneous estimation of propranolol hydrochloride and hydrochlorothiazide in a synthetic mixture

A simple, sensitive, and economical spectrophotometric method was developed for the simultaneous estimation of propranolol hydrochloride (PRO) and hydrochlorothiazide (HCTZ). The first derivative (D1) of the UV spectrum was used in the determination of both drugs in their synthetic mixtures. Peak-to-baseline and peak-area measurements at suitable wavelengths were used in the study. The linearity of both drugs extended over the concentration range 5-40 μg/ml. The analytical results for the estimation of PRO were Rec% 97.179-102.424% and RSD% 0.001-4.996%, while for the estimation of HCTZ they were Rec% 95.406-103.681% and RSD% 0.001-3.676%. The method was accurate, showed good repeatability, and was successfully applied to the estimation of both drugs in their synthetic mixtures.

INTRODUCTION

The scientific name of propranolol is 1-(isopropylamino)-3-(1-naphthyloxy)-2-propanol hydrochloride. Propranolol hydrochloride, as shown in Figure 1-a, has the molecular formula C16H22ClNO2 and a molecular mass of 295.80 g/mol. It is commonly used for hypertension, myocardial infarction, anxiety, tremor, and portal hypertension, as shown in Figure 1 (Mohammed et al., 2018). It is also useful in controlling tachycardia (Tripathi, 2008) and is considered a non-selective beta-receptor blocker: it competitively inhibits the response of beta-1 and beta-2 receptors, which slows the heart rate (Esteve-Romero et al., 2016). The binary mixture is used to treat heart-related diseases and hypertension (Savaj et al., 2015).

There are many methods for the estimation of both drugs, individually or simultaneously. These include derivative spectrophotometric methods for the estimation of PRO and paracetamol (Ruiz et al., 1998) or PRO and hydralazine (Peña et al., 1991). HCTZ has been estimated by an oxidative coupling method with o-phenylenediamine (Hasan et al., 2019). HCTZ and valsartan have been estimated by a first-derivative method (Patel et al., 2012). Dual-wavelength and area-under-curve methods were used for the simultaneous estimation of HCTZ and olmesartan medoxomil in their mixture (Ilango and Kumar, 2012). Both drugs have been estimated by chromatographic methods individually or simultaneously (Kim et al., 2001; Hegazy et al., 2011; Umamaheshwari et al., 2015). The simultaneous determination of propranolol hydrochloride and isosorbide mononitrate was achieved using a central composite rotatable design (Khan et al., 2019). Other methods have also been used, such as LC-MS/MS (Johannsen et al., 2019; Li and Hongbin, 2018).

The present study aims to develop a new spectrophotometric method for the simultaneous estimation of PRO and HCTZ by a first-derivative method in pure form and in their synthetic mixture.

EXPERIMENTAL

Apparatus

A Shimadzu UV-Vis 1650 spectrophotometer with a 1 cm quartz cell was used for the spectrophotometric measurements, and an ultrasonic water bath (Labtech) was used to dissolve the pure materials and samples.

Preparation of Standard Solutions

Stock solutions containing 100 µg/ml of either PRO or HCTZ were prepared by dissolving 0.1 g of each pure material in a portion of distilled water. For PRO, the volume was completed to the mark with the same solvent in a 100 ml volumetric flask, while for HCTZ the ultrasonic water bath was used for 12 min to dissolve the pure material completely before completing the volume to 100 ml with the same solvent. Further dilutions were made with distilled water as described under the construction of the calibration curves.
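The serial dilutions described above follow the usual C1V1 = C2V2 relation. A small illustrative helper, with hypothetical values not taken from the paper:

```python
# Dilution arithmetic (C1*V1 = C2*V2) for preparing working solutions.
def aliquot_volume_ml(stock_ug_per_ml, target_ug_per_ml, final_volume_ml):
    """Volume of stock needed so that stock * V1 = target * V_final."""
    return target_ug_per_ml * final_volume_ml / stock_ug_per_ml

# e.g., 20 ug/ml PRO in a 10 ml flask from a 100 ug/ml stock:
print(aliquot_volume_ml(100, 20, 10))  # -> 2.0 ml of stock, dilute to the mark
```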
Analysis of Pharmaceutical Preparations

Twenty tablets of the pharmaceutical preparation (Indicardin, 10 mg) were weighed and ground in a ceramic mortar. They were mixed well, and a weight equivalent to one tablet (0.450 g) was taken from the mixture and dissolved in a portion of distilled water in a 100 ml volumetric flask. The solution was filtered through filter paper (Whatman No. 40) to obtain a clear solution, and the volume was then completed to the mark with the same solvent to prepare 100 µg/ml of PRO.

Ten tablets of the pharmaceutical preparation (HCTAIWA, 25 mg) were weighed and ground in a ceramic mortar. They were mixed well, and a weight equivalent to one tablet (0.16048 g) was taken from the mixture and dissolved in a portion of distilled water in a 100 ml volumetric flask, using the ultrasonic water bath for 15 minutes to dissolve the material completely. The solution was filtered through filter paper (Whatman No. 40) to obtain a clear solution, and the volume was then completed to the mark with the same solvent to prepare 250 µg/ml of HCTZ.

Absorption Spectrum

A set of concentrations in the range 5-40 µg/ml was prepared for both PRO and HCTZ. The wavelengths were scanned from 190 to 400 nm to obtain the zero-order spectrum. The maximum absorption of HCTZ is at 272 nm, while that of PRO is at 290 nm.

The Simultaneous Determination of Propranolol and Hydrochlorothiazide

Procedure for the determination of propranolol hydrochloride. Equal 1 ml amounts of 100 µg/ml HCTZ were transferred from the standard solution to a series of 10 ml volumetric flasks, increasing quantities of PRO (50-400 µg) were added to these flasks, and the volume was completed to the mark with distilled water. The zero-order spectra of the mixtures were scanned between 190 and 400 nm and the first derivatives of these spectra were obtained. Quantitative analysis of PRO was based on the peak-to-baseline amplitude at 266 nm and the peak area over 251-273 nm under the optimum conditions, namely medium scan speed, a sampling interval of 0.1 nm, a slit width of 2 nm, ∆λ of 20 nm, and a scaling factor of 8.

Procedure for the determination of hydrochlorothiazide. Equal 1 ml amounts of 50 µg/ml PRO were transferred from the standard solution to a series of 10 ml volumetric flasks, increasing quantities of HCTZ (50-400 µg) were added to these flasks, and the volume was completed to the mark with distilled water. The zero-order spectra of the mixtures were scanned between 190 and 400 nm and the first derivatives of these spectra were obtained. The first-derivative measurements were based on the peak-to-baseline amplitudes at 258 and 282 nm and the peak areas over 249-277 nm and 271-307.5 nm for the quantitative analysis of HCTZ, under the same optimum conditions used for PRO.

The Selection of the Optimum Conditions

Several solvents were tested for dissolving both components, including distilled water, ethanol, methanol, and their mixtures with or without HCl and NaOH. The results showed that distilled water was the best solvent for both components: safe and inexpensive. Regarding the other conditions, different values of ∆λ in the range 20-160 nm were tested to choose an appropriate value, and the best value was 20 nm; it was noticed that as the ∆λ value increases, the spectrum becomes distorted.
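The peak-to-baseline and peak-area measurements described above can be mimicked numerically. A minimal sketch, assuming a Savitzky-Golay filter as a stand-in for the instrument's derivative mode and a dummy spectrum in place of real data:

```python
# First-derivative (D1) measurements: peak-to-baseline amplitude at a single
# wavelength and peak area between two wavelengths. All data are placeholders.
import numpy as np
from scipy.signal import savgol_filter

wl = np.arange(190.0, 400.0, 0.1)              # nm, 0.1 nm sampling as in the text
absorbance = np.exp(-((wl - 272) / 15) ** 2)   # dummy zero-order spectrum

# deriv=1 gives the first derivative; delta is the wavelength step.
d1 = savgol_filter(absorbance, window_length=21, polyorder=3, deriv=1, delta=0.1)

def peak_to_baseline(wavelength_nm):
    """D1 amplitude at the given wavelength, measured from the zero line."""
    return d1[np.argmin(np.abs(wl - wavelength_nm))]

def peak_area(lo_nm, hi_nm):
    """Area under the D1 curve between the two wavelength limits."""
    mask = (wl >= lo_nm) & (wl <= hi_nm)
    return np.trapz(d1[mask], wl[mask])

print(peak_to_baseline(266.0))   # PRO: peak to baseline at 266 nm
print(peak_area(251.0, 273.0))   # PRO: peak area over 251-273 nm
```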
The scaling factor was varied from 1 to 10. The method was most sensitive at a scaling factor of 8, as shown by the value of the slope, and the R2 value was high, so the value 8 was chosen, as shown in Table 1. A concentration of 10 µg/ml HCTZ was chosen as the constant concentration for determining PRO, and a concentration of 5 µg/ml PRO was chosen as the constant concentration for determining HCTZ.

Absorption Spectrum

A set of concentrations in the range 5-40 µg/ml was prepared for PRO and HCTZ. The wavelengths were scanned from 190 to 400 nm and the zero-order spectra were obtained. The maximum absorption of HCTZ is at 272 nm, while that of PRO is at 290 nm. The absorption spectra of the two drugs overlap severely, as shown in Figure 2, so the simultaneous estimation of these components by direct spectrophotometry was impossible. The first-derivative method was therefore used for quantitative estimation without separation. Figures 3 and 4 show the first-derivative spectra of mixtures of PRO at 5-40 µg/ml in the presence of 10 µg/ml HCTZ and of HCTZ at 5-40 µg/ml in the presence of 5 µg/ml PRO, respectively.

Construction of Calibration Curves

Calibration curves were constructed according to the optimum conditions of the suggested procedure. The linearity of all of the calibration curves extended over 5-40 µg/ml for both PRO and HCTZ. The slopes of the calibration curves ranged from 0.0033 to 0.3355, and the LOD and LOQ values ranged over 0.0371-1.0900 µg/ml and 0.0553-3.6335 µg/ml, respectively, calculated according to ICH guidelines (21). The R2 values were 0.9979-0.9994 for both drugs. The results of the simultaneous estimation of PRO and HCTZ over the concentration range 5-40 µg/ml for each drug in the presence of the other are shown in Table 2.

Accuracy and Precision

The accuracy and precision of the proposed method were assessed by calculating the recovery percentage (Rec%) and relative standard deviation (RSD%) for the concentrations of the calibration curves, with seven repetitions of each measurement (n=7).

Application of the Suggested Method

The proposed method was used for the quantitative estimation of the pharmaceutical form HCTAIWA 25 mg at concentrations of 30 and 35 µg/ml in the presence of 5 µg/ml PRO, and of the pharmaceutical form Indicardin 10 mg at concentrations of 20 and 25 µg/ml in the presence of 10 µg/ml HCTZ; each measurement was performed seven times (n=7) for both drugs. The derivative modes used were peak area and peak to baseline. The results showed the successful application of the method: the Rec% values were 95.2370-104.2420% and 98.3333-104.0354%, and the RSD% values were 0.1885-1.5230% and 0.327-4.7334%, for PRO in the presence of HCTZ and for HCTZ in the presence of PRO, respectively.

Comparison of Methods

A comparison between the analytical characteristics of the suggested method and another reported method was made. Table 3 shows the results of this comparison (Ny et al., 2015).
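The LOD and LOQ values above are stated to follow ICH guidance, which commonly uses LOD = 3.3·σ/S and LOQ = 10·σ/S, where S is the calibration slope and σ the standard deviation of the regression residuals. A sketch with invented calibration data, not the paper's:

```python
# ICH-style LOD/LOQ from a linear calibration curve (illustrative data only).
import numpy as np

conc = np.array([5, 10, 15, 20, 25, 30, 35, 40], dtype=float)          # ug/ml
resp = np.array([0.17, 0.34, 0.50, 0.68, 0.83, 1.01, 1.18, 1.33])      # D1 signal

slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
sigma = residuals.std(ddof=2)   # ddof=2: two fitted parameters (slope, intercept)

lod = 3.3 * sigma / slope       # limit of detection
loq = 10 * sigma / slope        # limit of quantitation
print(f"slope={slope:.4f}, LOD={lod:.3f} ug/ml, LOQ={loq:.3f} ug/ml")
```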
CONCLUSIONS

The first-derivative method used for the simultaneous determination of PRO and HCTZ is a useful method owing to its analytical properties: it is simple, accurate, sensitive, and economical, speeds up the work, has a low cost, does not require expensive solvents, and determines the drugs in the pharmaceutical formulation without loss of accuracy.
VE-cadherin junction dynamics in initial lymphatic vessels promotes lymph node metastasis

Exposure of initial dermal lymphatics to VEGFA or a tumor results in zippering of VE-cadherin at junctions, concomitant with Src-dependent VE-cadherin fragmentation.

Reviewer #1 (Comments to the Authors (Required)):

1. Sáinz-Jaspeado et al. have submitted the manuscript "Fragmentation of capillary lymphatic junction facilitates tumor metastasis to lymph nodes". The manuscript describes the role of VEGFA-pVEGFR2 signalling in lymphatic vessels during cancer progression and lymph node metastasis, and focuses specifically on VEGFR2 Tyr949 phosphorylation. They discovered fragmentation of LEC junctions in lymphatic capillaries adjacent to melanoma tumors, which was not seen in the VEGFR2-Y949F phosphorylation-deficient knock-in (KI) mouse model. The study is a continuation of their previous work on VEGFR2-Y949 in blood vessels, and here they deepen the understanding of VEGFR2 phosphorylation in the regulation of adherens junction stability specifically in the lymphatic vessels.

2. Figure 1 compares dermal lymphatic capillaries in E14.5 mouse embryos and in adult wt and KI mice, VEGFR levels at P10, and lymphatic drainage in adult wt and KI mice. No major differences are observed, except for the increased lymphatic vessel density in the KI mice compared to wt. Thus, the dermal lymphatics appear to develop largely normally in the KI mice.

Figure 2 focuses on lymph node (LN) metastasis in the wt vs KI mice using two models. It is shown that while tumor growth is similar in both wt and KI mice, metastases are fewer in KI mice compared to wt. In Fig. 2H, the authors should include an image of a LN also from the KI mice. In addition, could the results be explained by decreased tumor lymphangiogenesis in wt vs KI mice? Lower tumor interstitial pressure in KI mice, due to reduced leakage, might result in decreased lymphangiogenesis in the periphery. This could be confirmed using immunohistochemical staining and subsequent quantification of the lymphatic markers in tumors and surrounding tissue.

Figure 3 shows that melanoma-induced VE-cadherin loss in peritumoral lymphatic capillaries is decreased in KI mice. The results would be strengthened by analysis of VE-cadherin protein/mRNA levels in tissues from wt vs KI mice +/- cancer.

Figure 4 shows that VEGF impairs VE-cadherin-positive junctions in the lymphatic capillaries from wild type but not KI mice. Since VEGF-induced loss of VE-cadherin is presented as a mechanism in the tumor models, VEGF levels should be analyzed in wt vs KI mice +/- cancer.
Figure 5 studies the contribution of SFKs to VEGF-induced VE-cadherin loss in isolated LECs from wt and KI mice. By utilizing a conditional SFK knock-out mouse, they showed that VEGF-induced VE-cadherin impairment was mediated by SFK.

Reviewer #2 (Comments to the Authors (Required)):

Summary: The goal of this study was to test the hypothesis that tumor metastasis can be facilitated by fragmentation of junctions in lymphatic vessels, akin to the phenomenon reported for the blood vasculature. The authors tested the hypothesis in melanoma and breast cancer models in a model with functionally defective VEGFR-2 that lacks the ability to phosphorylate junctional proteins (Y949F) and compared the data to wild-type (WT). They found a statistically significant higher presence of tumor cells in the lymph node (LN) in WT compared with Y949F, which correlated with reduced lymph clearance representing lymph flow. The latter is presumably due to better junctional stability in Y949F animals, which prevents stasis and facilitates moving fluid forward (this is my understanding; the underlying mechanism is not exactly spelled out). Characterization of lymphatic endothelial cell junctions in vivo and in vitro showed that either tumors or their derived VEGF-A or VEGF-C increase fragmentation of junctions to short segments, which may increase intravasation of tumor cells. This event was also shown to be c-Src dependent, as it is in blood vessels.

Major critiques:

1) While the study describes a potentially important mechanistic aspect of lymphatic metastasis, some data are not very convincing and some are not well presented. Specifically, the differences between WT and VEGFR-2 mutant mice seem to be minuscule and hard to interpret in terms of the exact metastatic burden. This is because the qPCR method is based on ratio and normalization, but in this case, it is unclear how Ct values were normalized among different animals, what was taken as a fold-change of 1.0, and how differences in the number, protein contents, and sizes of different lymph nodes were accounted for. This method does not allow Ct values (assuming they are properly normalized per housekeeping genes) to be translated into tumor cell equivalents, and with such a small difference shown in Fig. 2D, it is hard to be convinced that tumors in WT mice metastasize more efficiently than in Y949F animals. Also, although the difference of several mg in LN weight appears to be statistically significant, it is difficult to take it as representative of all tumors due to an unusual site of tumor implantation, a very early tumor stage, and only minor changes despite using a highly sensitive qPCR technique. Indeed, the breast model did not show such changes.

2) The E0771 model produced more convincing results, but the problem here (as with all other models) is that analyses were performed at only one time point, so the difference of ~100 cells per LN (based on Fig. 2G) might not be sustained at later points. In other words, even if the differences in invasion of lymphatics have some impact at early tumor stages, this might not have a significant effect on the overall metastatic burden and spread to LNs beyond the sentinel nodes and other normal organs. A kinetics analysis of one or both models would be more convincing than an isolated time point.
3) The main claim that reduced clearance directly corresponds to increased vascular invasion should be confirmed by additional means. It is logical to suggest that impairment of vascular integrity through junctional fragmentation might result in both phenomena, but at the same time, other known factors can produce opposing effects. For instance, reduced flow can decrease the number of LN-bound migrating tumor cells or reduce the number of exiting immune-stimulatory cells, thus promoting tumor cell killing. The presented data, obtained at one early time point and showing rather minor differences, although supportive of the claim, are not sufficient to draw a bold conclusion (page 4, the second paragraph from the bottom, claim about "suppression" of metastasis).

4) The data presentation for "Fragmentation category" is very confusing. The data should be presented as bars for each category rather than linking all four categories with a line. There is no reason to connect the percentage of "long" or "very long" fragments to "short" or "very short". Once again, an additional method confirming junctional fragmentation in tumor-associated lymphatic vessels mediated by tumor-secreted factors would significantly strengthen the study conclusions.

5) The Discussion section contains very lengthy discussions of the morphology of lymphatic vessels in different normal organs but no reference to either tumors or metastasis. How important is the described fragmentation of junctions in lymphatic vessels to the metastatic process? How significant is it for human cancers, given other factors that can influence intravasation, transmigration of tumor cells through the lymphatic barrier at non-junctional sites, and other parameters? Does the efficiency of early invasion determine the overall outcome? Does fragmentation occur in non-metastatic tumors? Given the focus of the study, it seems that the Discussion should address at least some of these questions while somewhat shortening the less relevant literature analysis of lymphatic morphology outside of the tumor context.

Minor critiques:

1) VEGF-A, VEGF-C, VEGFR-3 and VEGFR-2 should be spelled out with a dash to denote proteins.

2) TRP1 should be identified as tyrosinase-related protein-1 and its proper gene name TYRP1.
3) Fig. 4A and Fig. 4G look identical, and it takes some time to understand that one depicts the 15 min assessment whereas the other shows results for 30 min. Please indicate the time difference in the graphs in addition to the description in the figure legends.

4) The classification of analyzed fragments and the description of each category should be described in a more quantifiable manner (e.g., what is the numerical difference between each of the 1-4 categories).

5) The list of antibodies in Supplementary

1st Authors' Response to Reviewers, November 13, 2023

Re: Rebuttal letter, Life Science Alliance manuscript #LSA-2023-02168-T

Dear Editor,

We appreciate the reviewers' constructive and insightful criticisms. An important addition to the study, inspired by these comments, is shown in the new Figures 4 and 5, where VE-cadherin fragment lengths have been measured specifically at junctions. The data clearly demonstrate "zippering" both of tumor-proximal lymphatics and of dermal initial lymphatics after VEGFA injection. Zippering occurred in both genotypes studied, wildtype (WT) and Vegfr2 Y949F/Y949F mice. In contrast, the automated quantification of VE-cadherin shapes (based on aspect ratio/circularity) shown in the original submission was done throughout the cell. The VE-cadherin shape change occurred only in the WT and not in the Vegfr2 Y949F/Y949F mutant when analyzing tumor-proximal lymphatics and initial lymphatics in the VEGFA-injected mouse ear. We speculate that this shape change, i.e. VE-cadherin internalization/fragmentation, provides a potential mechanism for the difference in tumor cell extravasation and metastatic spread between the WT and the Vegfr2 Y949F/Y949F mice, as outlined in the new Discussion. The text has been adjusted throughout. Please see below for our point-by-point responses.

Reviewer #1

1. Sáinz-Jaspeado et al. have submitted the manuscript "Fragmentation of capillary lymphatic junction facilitates tumor metastasis to lymph nodes". The manuscript describes the role of VEGFA-pVEGFR2 signalling in lymphatic vessels during cancer progression and lymph node metastasis, and focuses specifically on VEGFR2 Tyr949 phosphorylation. They discovered fragmentation of LEC junctions in lymphatic capillaries adjacent to melanoma tumors, which was not seen in the VEGFR2-Y949F phosphorylation-deficient knock-in (KI) mouse model. The study is a continuation of their previous work on VEGFR2-Y949 in blood vessels, and here they deepen the understanding of VEGFR2 phosphorylation in the regulation of adherens junction stability specifically in the lymphatic vessels.

Response: We would like to thank the reviewer for the comments and questions, which have led to the generation of new data and improvement of this study.

2. Figure 1 compares dermal lymphatic capillaries in E14.5 mouse embryos and in adult wt and KI mice, VEGFR levels at P10, and lymphatic drainage in adult wt and KI mice. No major differences are observed, except for the increased lymphatic vessel density in the KI mice compared to wt. Thus, the dermal lymphatics appear to develop largely normally in the KI mice.

Figure 2 focuses on lymph node (LN) metastasis in the wt vs KI mice using two models. It is shown that while tumor growth is similar in both wt and KI mice, metastases are fewer in KI mice compared to wt.

In Fig. 2H, the authors should include an image of a LN also from the KI mice.
Response: The lymph node picture from a WT mouse in Fig. 2H is to validate lymphatic metastasis in this tumor model. We could not provide a LN image from KI mice because, unfortunately, all lymph nodes collected in this study were used for quantitative analysis of tumor cell counts by FACS (Fig. 2G). Isolating lymph nodes from EO771-challenged female mice again would have taken more time than allowed, due to current problems in the facility with poor breeding and small/lost litters and the resulting inability to expand the Vegfr2 Y949F/Y949F colony.

In addition, could the results be explained by decreased tumor lymphangiogenesis in wt vs KI mice? Lower tumor interstitial pressure in KI mice, due to reduced leakage, might result in decreased lymphangiogenesis in the periphery. This could be confirmed using immunohistochemical staining and subsequent quantification of the lymphatic markers in tumors and surrounding tissue.

Response: We performed LYVE1 immunostaining on the ears of B16F10 tumor-engrafted mice and quantified the lymphatic vessel density at the tumor periphery. No significant difference was found between WT and Vegfr2 Y949F/Y949F mice. The data are now shown in Supplementary Figure 1A,B.

Figure 3 shows that melanoma-induced VE-cadherin loss in peritumoral lymphatic capillaries is decreased in KI mice. The results would be strengthened by analysis of VE-cadherin protein/mRNA levels in tissues from wt vs KI mice +/- cancer.

Response: We have now analyzed VE-cadherin patterns in the tumor-proximal and tumor-distal (healthy) regions, see Fig. 4A-D, and obtained higher-resolution and more representative images. The results show zippering of lymphatic VE-cadherin junctions in the tumor-proximal lymphatics. The fluorescence intensity of VE-cadherin is not different between WT and KI mice. In addition, we analyzed Cdh5 transcript levels in B16F10 melanoma tumors from WT and Vegfr2 Y949F/Y949F mice. A trend of decreased Cdh5 expression is seen in KI mice; however, the difference is not statistically significant (Supplementary Fig. 1D).

Figure 4 shows that VEGF impairs VE-cadherin-positive junctions in the lymphatic capillaries from wild type but not KI mice. Since VEGF-induced loss of VE-cadherin is presented as a mechanism in the tumor models, VEGF levels should be analyzed in wt vs KI mice +/- cancer.

Response: We performed qPCR to determine the expression of Vegfa in B16F10 tumors from WT and Vegfr2 Y949F/Y949F mice, and no significant difference was detected (Supplementary Fig. 1C).
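The qPCR comparisons in these responses (and the Tyrp1 analysis in the response to reviewer 2 below) rely on relative quantification by the ddCt method, with Rpl19 as the housekeeping gene as stated in the rebuttal. A minimal sketch with invented Ct values:

```python
# ddCt relative quantification: fold change = 2^(-ddCt).
def fold_change_ddct(ct_target, ct_housekeeping, ct_target_ref, ct_housekeeping_ref):
    d_ct_sample = ct_target - ct_housekeeping          # normalize to housekeeping gene
    d_ct_reference = ct_target_ref - ct_housekeeping_ref
    dd_ct = d_ct_sample - d_ct_reference               # normalize to reference sample
    return 2 ** (-dd_ct)

# e.g., Tyrp1 in a tumor-draining lymph node vs. an unaffected control node
# (placeholder Ct values, not the study's measurements):
print(fold_change_ddct(ct_target=24.0, ct_housekeeping=18.0,
                       ct_target_ref=30.0, ct_housekeeping_ref=18.5))
```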
Figure 5 studies the contribution of SFKs to VEGF-induced VE-cadherin loss in isolated LECs from wt and KI mice. By utilizing a conditional SFK knock-out mouse, they showed that VEGF-induced VE-cadherin impairment was mediated by SFK.

Reviewer #2 (Comments to the Authors (Required)):

Summary: The goal of this study was to test the hypothesis that tumor metastasis can be facilitated by fragmentation of junctions in lymphatic vessels, akin to the phenomenon reported for the blood vasculature. The authors tested the hypothesis in melanoma and breast cancer models in a model with functionally defective VEGFR-2 that lacks the ability to phosphorylate junctional proteins (Y949F) and compared the data to wild-type (WT). They found a statistically significant higher presence of tumor cells in the lymph node (LN) in WT compared with Y949F, which correlated with reduced lymph clearance representing lymph flow. The latter is presumably due to better junctional stability in Y949F animals, which prevents stasis and facilitates moving fluid forward (this is my understanding; the underlying mechanism is not exactly spelled out). Characterization of lymphatic endothelial cell junctions in vivo and in vitro showed that either tumors or their derived VEGF-A or VEGF-C increase fragmentation of junctions to short segments, which may increase intravasation of tumor cells. This event was also shown to be c-Src dependent, as it is in blood vessels.

Response: We would like to thank the reviewer for the comments and questions, which have led to the generation of new data and improvement of this study. We have now tried our best to increase the clarity of the presentation.

Major critiques:

1) While the study describes a potentially important mechanistic aspect of lymphatic metastasis, some data are not very convincing and some are not well presented. Specifically, the differences between WT and VEGFR-2 mutant mice seem to be minuscule and hard to interpret in terms of the exact metastatic burden. This is because the qPCR method is based on ratio and normalization, but in this case, it is unclear how Ct values were normalized among different animals, what was taken as a fold-change of 1.0, and how differences in the number, protein contents, and sizes of different lymph nodes were accounted for. This method does not allow Ct values (assuming they are properly normalized per housekeeping genes) to be translated into tumor cell equivalents, and with such a small difference shown in Fig. 2D, it is hard to be convinced that tumors in WT mice metastasize more efficiently than in Y949F animals. Also, although the difference of several mg in LN weight appears to be statistically significant, it is difficult to take it as representative of all tumors due to an unusual site of tumor implantation, a very early tumor stage, and only minor changes despite using a highly sensitive qPCR technique. Indeed, the breast model did not show such changes.

Response: We apologize for the poor presentation of the qPCR data in Fig. 2D in the original manuscript. We reanalyzed the qPCR data and replaced it with a new graph. The data were analyzed by the ddCt method using the mouse housekeeping gene Rpl19 as control. mRNA levels of Tyrp1 in the cervical lymph nodes from B16F10-challenged WT and Vegfr2 Y949F/Y949F mice were normalized to the Tyrp1 level in unaffected inguinal lymph nodes collected from WT mice.

The ear dermis was chosen as the site of injection because it restricts local growth of tumor volume and promotes spread to sentinel lymph nodes. As we are sure the reviewer agrees, there are not many transplantable, C57Bl/6-compatible metastatic models to choose from that are also amenable to imaging. We could have tried to place the B16F10 tumors on the flank, surgically remove the tumors at 1 cm3 (ethical restrictions do not allow bigger tumors) and wait for lung metastasis to become established. We have attempted this strategy but were not sufficiently proficient at resecting the primary tumors in a consistent manner, giving rise to quite uneven regrowth. We know this is also a considerable problem in other laboratories. We agree that the extent of lymphatic metastasis is quite variable between individuals, as shown in the new Fig. 2D and the images of lymph nodes presented below (Rebuttal Fig. 1). However, we respectfully disagree with attributing the difference between WT and Vegfr2 Y949F/Y949F mice in metastatic dissemination to lymph nodes to random statistical variation in the two tumor models and the different methods shown in Fig. 2. Taken together, we are confident that metastasis to lymph nodes is reduced in the Vegfr2 Y949F/Y949F mutant mice, especially regarding the incidence of large lymphatic metastases.

Rebuttal Figure 1. Images of cervical lymph nodes collected at day 12 after implantation of B16F10 melanoma cells in the ear dermis of WT and Vegfr2 Y949F/Y949F mice. Note the melanoma metastases visible in some of the lymph nodes.

2) The E0771 model produced more convincing results, but the problem here (as with all other models) is that analyses were performed at only one time point, so the difference of ~100 cells per LN (based on Fig. 2G) might not be sustained at later points. In other words, even if the differences in invasion of lymphatics have some impact at early tumor stages, this might not have a significant effect on the overall metastatic burden and spread to LNs beyond the sentinel nodes and other normal organs. A kinetics analysis of one or both models would be more convincing than an isolated time point.
The ear dermis was chosen as the site of injection because it restricts local growth of tumor volume and promotes spread to sentinel lymph nodes.As we are sure the reviewer agrees, there are not many transplantable, C57Bl/6 compatible metastatic models to choose from that also are amenable to imaging.We could have tried to place the B16F10 tumors on the flank, surgically remove the tumors when at 1cm 3 (ethical restrictions do not allow bigger tumors) and wait for lung metastasis to become established.We have attempted this strategy but were not sufficiently proficient at resecting the primary tumors in a consistent manner, giving rise to quite uneven regrowth.We know this is also a considerable problem in other laboratories.We agree that the extent of lymphatic metastasis is quite variable between individuals as shown in the new Fig.2D and the images of lymph nodes presented below (Rebuttal Fig. 1).However we respectfully disagree with attributing the difference between WT and Vegfr2 Y949F/Y949F mice in metastatic dissemination to lymph nodes, to random statistically variation in the two tumor models, the different methods shown in Fig. 2. Taken together we are confident that the metastasis in lymph nodes is reduced in the Vegfr2 Y949F/Y949F mutant mice, especially for the incidence of large lymphatic metastasis. Rebuttal Figure 1.Images of cervical lymph nodes collected at day 12 after implantation of B16F10 melanoma cells in the ear dermis of WT and Vegfr2 Y949F/Y949F mice.Note the metastasis of the melanoma cells that are visible in some of the lymph nodes. 2) E0771 model produced more convincing results but the problem here (as with all other models) that analyses were performed only in one time point, so the difference in ~100 cells per LN (based on Fig. 2G) might be not sustainable at later points.In other words, even if the differences in invasion of lymphatics have some impact at early tumor stages, this might not have a significant effect on overall metastatic burden and spread to LNs beyond the sentinel nodes and other normal organs.The kinetics analysis of one or both models would be more convincing than an isolated time point. Response: The ~100 cells difference per LN represent a 50% decrease in metastatic burden in Vegfr2 Y949F/Y949F mice compared to WT at this stage of tumor growth.In a separate study to describe and validate the EO771 model (to be submitted; M H. Ulvmar, senior author), we did a kinetic analysis of lymphatic metastasis at day 12 and day 20 after tumor cell implantation.The result showed that Tomato+ tumor cells could be detected by FACS analysis in WT inguinal lymph nodes as early as day 12 (about 40 cell counts per LN) and the cell count significantly increased at day 20 to a mean of about 200 cell counts per LN (Rebuttal Fig. 2).Therefore the difference of 100 cells in this model does represent a solid reduction in metastastatic spread. 
To have this tumor model grow longer is not feasible; it severely increases the risk of wounding at the primary tumor site, and the tumors grow too large. We are permitted to have tumors grow to 0.5 cm³/site for this model, and of course, with time, the tumors start to grow exponentially. Therefore, for ethical reasons, we could not extend much beyond the 20 days of tumor growth shown in this study. Thus, although we in principle agree with the reviewer that it is important to perform kinetic analyses, lymph node spread would not be possible to study at later time points for ethical reasons, and at earlier time points, the number of tumor cells/LN is quite small. Day 20 appears to be an optimal time point for analysis.

3) The main claim that reduced clearance directly corresponds to increased vascular invasion should be confirmed by additional means. It is logical to suggest that impairment of vascular integrity through junctional fragmentation might result in both phenomena, but at the same time, other known factors can produce opposing effects. For instance, reduced flow can decrease the number of LN-bound migrating tumor cells or reduce the number of exiting immune-stimulatory cells, thus promoting tumor cell kill. The presented data, obtained at one early time point and showing rather minor differences, although supportive of the claim, are not sufficient to draw a bold conclusion (page 4, the second paragraph from the bottom, claim about "suppression" of metastasis).

Response: We apologize; it was not our intention to overstate the results or conclude that changes in clearance directly correspond to changes in metastatic spread. We agree that clearance of interstitial fluid and intravasation of tumor cells into lymphatics can occur through different mechanisms. The clearance data and tumor cell intravasation data are now put together in a new Figure 3 to show that the Y949F mutation led to improved lymphatic drainage and reduced transmigration of tumor cells into lymphatic vessels, indicating an enhanced peritumoral lymphatic barrier compared to the WT. We have toned down the claim about suppression of metastasis throughout.

4) Data presentation for "Fragmentation category" is very confusing. The data should be presented as bars for each category rather than linking all four categories with a line. There is no reason to connect the percentage of "long" or "very long" fragments to "short" or "very short". Once again, an additional method confirming junctional fragmentation in tumor-associated lymphatic vessels mediated by tumor-secreted factors would significantly strengthen the study conclusions.

Response: We thank the reviewer for this important request. We have now reanalyzed the data using an additional method. The VE-cadherin analysis in the original manuscript was meant to classify, in an automated manner, the shapes of the VE-cadherin fragments based on aspect ratio and circularity. It could not analyze the actual length of the fragments. Therefore, we realize that the description of "long" and "short" fragments with this classification was not appropriate; instead, we now refer to shape categories. A more detailed description of the classification has been added (page 5, second paragraph from the bottom). All the graphs regarding the shape analysis have been changed to bar graphs.
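As a rough illustration of the automated shape classification described in this response (not the authors' actual image-analysis pipeline), the sketch below applies the aspect-ratio quartile and circularity thresholds quoted later in the rebuttal; the per-fragment measurements are hypothetical.

```python
# Minimal sketch of a VE-cadherin shape classification by aspect-ratio
# quartiles and circularity bands. Fragment measurements are hypothetical;
# fragments not matching any category remain labeled 0.
import numpy as np

def classify_fragments(aspect_ratio, circularity):
    """Assign each VE-cadherin fragment to shape categories 1-4:
    category 1 = elongated (AR > Q3, circularity 0-0.25);
    category 4 = small/round (AR < Q1, circularity 0.75-1)."""
    ar = np.asarray(aspect_ratio, dtype=float)
    circ = np.asarray(circularity, dtype=float)
    q1, q2, q3 = np.percentile(ar, [25, 50, 75])  # quartiles of the AR distribution
    cats = np.zeros(len(ar), dtype=int)
    cats[(ar > q3) & (circ <= 0.25)] = 1
    cats[(ar > q2) & (ar <= q3) & (circ > 0.25) & (circ <= 0.5)] = 2
    cats[(ar > q1) & (ar <= q2) & (circ > 0.5) & (circ <= 0.75)] = 3
    cats[(ar <= q1) & (circ > 0.75)] = 4
    return cats

# Hypothetical per-fragment measurements (e.g. from a particle analysis)
ar = [6.2, 3.1, 1.8, 1.1, 1.05, 4.0, 2.2, 1.3]
circ = [0.1, 0.35, 0.6, 0.9, 0.95, 0.2, 0.4, 0.8]
print(classify_fragments(ar, circ))
```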
Moreover, in the new junctional analysis, we focused on the number and lengths of VE-cadherin fragments at the endothelial cell junctions of initial lymphatic vessels. This analysis led to the important insight that zippering of lymphatic junctions is established in tumor-proximal lymphatics (Fig. 4A-D) and upon VEGF injection in the ear dermis (Fig. 5A-D). The phenomenon of lymphatic junction zippering by VEGFA signaling has been well studied recently (Zarkada, Chen et al., 2023; Zhang, Zarkada et al., 2018), and we now report on zippering also in the peritumoral lymphatics, which as far as we are aware has not been previously observed.

Junctional zippering was accompanied by an increase in pan-cellular circular/low-aspect-ratio VE-cadherin shapes in the wild-type mouse lymphendothelium, indicating increased VE-cadherin dynamics, but not in mice expressing the Y949F mutant VEGFR2. In contrast, we show that the Y949 site in VEGFR2 is dispensable for VEGFA-induced zippering of lymphatic junctions, in agreement with data in Zarkada et al., 2023. However, importantly, signaling downstream of Y949 in VEGFR2 is required for VEGFA/VEGFC-induced internalization and turnover of VE-cadherin.

5) The Discussion section contains very lengthy discussions on the morphology of lymphatic vessels in different normal organs but no reference to either tumors or metastasis. How important is the described fragmentation of junctions in lymphatic vessels to the metastatic process? How significant is it for human cancers given other factors that can influence intravasation, transmigration of tumor cells through the lymphatic barrier at non-junctional sites, and other parameters? Does the efficiency of early invasion determine the overall outcome? Does fragmentation occur in non-metastatic tumors? Given the focus of the study, it seems that the Discussion should address at least some of these questions while shortening the less relevant literature analyses of lymphatic morphology outside of the tumor context.

Response: We have amended the Discussion. We would like to point out that the properties of tumor lymphatics were also described in the Introduction.

Minor critiques:

Response: We have adopted the conventional style of showing gene names in italics (Cdh5, Vegfa, Vegfr2, Tyrp1 etc. for mouse genes) and proteins in their non-italic capitalized versions. We are not aware that the hyphen is a required designation for proteins, but if this is a journal style requirement, we will adjust.

2) TRP1 should be identified as tyrosinase-related protein-1 and its proper gene name, TYRP1.

Response: TRP1 has been changed to Tyrp1 (rather than TYRP1) to show that we are referring to a mouse gene.

3) Fig. 4A and Fig. 4G look identical, and it takes some time to understand that one depicts the 15-min assessment whereas the other shows results for 30 min. Please indicate the time difference in the graphs in addition to the description in the figure legends.

Response: The time point for all the VEGF injection experiments has been added to the graphs.

4) The classification of analyzed fragments and the description of each category should be described in a more quantifiable manner (e.g., what is the numerical difference between each of the 1-4 categories).

Response: A detailed description has been added to the text, as below, on page 5, second paragraph from the bottom.
"Four categories of VE-cadherin shapes were defined: category 1; fragments with aspect ratio above the upper quartile (Q3) and circularity between 0-0.25, category 2; aspect ratio Q2-Q3 and circularity 0.25-0.5,category 3; aspect ratio Q1-Q2 and circularity 0.5-0.75,category 4; aspect ratio below Q1 and circularity 0.75-1 (Figure 4E).The small round shapes in categories 3-4 may result from dynamic turnover of VE-cadherin, i.e. internalization (Bentley, Franco et al., 2014)." 5) The list of antibodies in Supplementary Response: tdTomato in the image of lymph node (Fig. 2H) was detected by antibody staining, although in FACS no antibody was used.Tdtomato antibody information has been added to Supplementary table 1. CD169 antibody information has also been added to Supplementary table 1. Anti-NRP2 was used in Figure 1A.HRP-conjugated secondary antibodies were used in the western blots in Figure 1G. 6) Please confirm that E0771 tumor cells were injected without Matrigel.The standard method of tumor implantation does require Matrigel. Response: For the B16F10 mdel, we used Matrigel but for the EO771 cell implantation the protocol does not require Matrigel. 7) Results state that B16 cells were injected intradermally but the Methods describe it as a subcutaneous model.Please reconcile. Response: We apologize.The description in Methods has been corrected; intradermal injection was applied.Thank you for submitting your revised manuscript entitled "VE-cadherin junction dynamics in initial lymphatic vessels promotes lymph node metastasis".We would be happy to publish your paper in Life Science Alliance pending final revisions necessary to meet our formatting guidelines. Along with points mentioned below, please tend to the following: -please add the Twitter handle of your host institute/organization as well as your own or/and one of the authors in our system -please update your callouts for the Supplementary Figures in the manuscript LSA now encourages authors to provide a 30-60 second video where the study is briefly explained.We will use these videos on social media to promote the published paper and the presenting author (for examples, see https://twitter.com/LSAjournal/timelines/1437405065917124608).Corresponding or first-authors are welcome to submit the video.Please submit only one video per manuscript.The video can be emailed to contact@life-science-alliance.orgTo upload the final version of your manuscript, please log in to your account: https://lsa.msubmit.net/cgi-bin/main.plexYou will be guided to complete the submission of your revised manuscript and to fill in all necessary information.Please get in touch in case you do not know or remember your login name. To avoid unnecessary delays in the acceptance and publication of your paper, please read the following information carefully. A. FINAL FILES: These items are required for acceptance. --An editable version of the final text (.DOC or .DOCX) is needed for copyediting (no PDFs). 
--High-resolution figure, supplementary figure and video files uploaded as individual files: see our detailed guidelines for preparing your production-ready images, https://www.life-science-alliance.org/authors

--Summary blurb (enter in submission system): a short text summarizing the study in a single sentence (max. 200 characters including spaces). This text is used in conjunction with the titles of papers, hence it should be informative and complementary to the title. It should describe the context and significance of the findings for a general readership; it should be written in the present tense and refer to the work in the third person. Author names should not be mentioned.

B. MANUSCRIPT ORGANIZATION AND FORMATTING:

Full guidelines are available on our Instructions for Authors page, https://www.life-science-alliance.org/authors

We encourage our authors to provide original source data, particularly uncropped/unprocessed electrophoretic blots and spreadsheets for the main figures of the manuscript. If you would like to add source data, we would welcome one PDF/Excel file per figure for this information. These files will be linked online as supplementary "Source Data" files.

**Submission of a paper that does not conform to Life Science Alliance guidelines will delay the acceptance of your manuscript.**

**It is Life Science Alliance policy that if requested, original data images must be made available to the editors. Failure to provide original images upon request will result in unavoidable delays in publication. Please ensure that you have access to all original data images prior to final submission.**

**The license to publish form must be signed before your manuscript can be sent to production. A link to the electronic license to publish form will be available to the corresponding author only. Please take a moment to check your funder requirements.**

**Reviews, decision letters, and point-by-point responses associated with peer review at Life Science Alliance will be published online, alongside the manuscript. If you do want to opt out of having the reviewer reports and your point-by-point responses displayed, please let us know immediately.**

Thank you for your attention to these final processing requirements. Please revise and format the manuscript and upload materials within 7 days.

***IMPORTANT: If you will be unreachable at any time, please provide us with the email address of an alternate author. Failure to respond to routine queries may lead to unavoidable delays in publication.***

Scheduling details will be available from our production department. You will receive proofs shortly before the publication date. Only essential corrections can be made at the proof stage, so if there are any minor final changes you wish to make to the manuscript, please let the journal office know now.

DISTRIBUTION OF MATERIALS:

Authors are required to distribute freely any materials used in experiments published in Life Science Alliance. Authors are encouraged to deposit materials used in their studies to the appropriate repositories for distribution to researchers.
You can contact the journal office with any questions: contact@life-science-alliance.org

Again, congratulations on a very nice paper. I hope you found the review process to be constructive and are pleased with how the manuscript was handled editorially. We look forward to future exciting submissions from your lab.

Sincerely,
Eric Sawey, PhD
Executive Editor
Life Science Alliance
http://www.lsajournal.org

In summary, the figures now show the following:
o Figures 1 and 2 remain essentially as before.
o Figure 3 shows the clearance of an interstitial tracer and tumor cell intravasation into lymphatics in the two genotypes.
o Figures 4 and 5 combine the new measurements of VE-cadherin dynamics at the junctions with the previously shown shape change throughout the cells in tumor-proximal vs. tumor-distal lymphatics (Figure 4) and after VEGFA injection (Figure 5).
o Figure 6 shows that the Src pathway is required for VEGFA-induced VE-cadherin shape change.
o Supplemental Figure 1, which is new, shows lymphatic vessel density and expression levels of Vegfa and Cdh5.
o Supplemental Figure 2 shows VE-cadherin phosphorylation in lymphatics.
o Supplementary Figure 3 shows VE-cadherin shape change in response to VEGFC.
o The previous Supplementary Figure 2, which showed inflammation when a very high dose of VEGFA was administered, has been removed as it appeared redundant in the revised version.

Rebuttal Figure 2. FACS analysis of Tomato+ EO771-CCR7 tumor cells in inguinal lymph nodes at day 12 (A) and day 20 (B) after implantation of the tumor cells in the mammary fat pad of female mice.

If you are planning a press release on your work, please inform us immediately to allow informing our production team and scheduling a release date.

Reviewer #2 (Comments to the Authors (Required)):
The authors addressed the critiques raised in the original version. This reviewer has no additional comments.

Thank you for submitting your Research Article entitled "VE-cadherin junction dynamics in initial lymphatic vessels promotes lymph node metastasis". It is a pleasure to let you know that your manuscript is now accepted for publication in Life Science Alliance. Congratulations on this interesting work.
2023-12-28T06:16:50.044Z
2023-12-26T00:00:00.000
{ "year": 2023, "sha1": "677c1fe80a25aac47f1c3551f24749a1c2ab446f", "oa_license": "CCBY", "oa_url": "https://www.life-science-alliance.org/content/lsa/7/3/e202302168.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e4fac2c811e5aff4dadb6bc2d35eebaf63214587", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
220612088
pes2o/s2orc
v3-fos-license
Rare cancers of unknown etiology: lessons learned from a European multi-center case–control study

Rare cancers together constitute one fourth of cancers. As some rare cancers are caused by occupational exposures, a systematic search for further associations might contribute to future prevention. We undertook a European, multi-center case–control study of occupational risks for cancers of the small intestine, bone sarcoma, uveal melanoma, mycosis fungoides, thymus, and male biliary tract and breast. Incident cases aged 35–69 years and sex- and age-matched population/colon cancer controls were interviewed, including a complete list of jobs. Associations between occupational exposure and cancer were assessed with unconditional logistic regression controlled for sex, age, country, and known confounders, and reported as odds ratios (OR) with 95% confidence intervals (CI). Interviewed were 1053 cases, 2062 population controls, and 1084 colon cancer controls. Male biliary tract cancer was associated with exposure to oils with polychlorinated biphenyls, OR 2.8 (95% CI 1.3–5.9); male breast cancer with exposure to trichloroethylene, OR 1.9 (95% CI 1.1–3.3); bone sarcoma with a job as a carpenter/joiner, OR 4.3 (95% CI 1.7–10.5); and uveal melanoma with jobs as a welder/sheet metal worker, OR 1.95 (95% CI 1.08–3.52), and cook, OR 2.4 (95% CI 1.4–4.3). A confirmatory study of printers enhanced suspicion of 1,2-dichloropropane as a risk for biliary tract cancer. The results contributed to evidence for classification of welding and 1,2-dichloropropane as human carcinogens. However, despite efforts across nine countries, for some cancer sites only about 100 cases were interviewed. The Rare Cancer Study illustrated both the strengths and limitations of explorative studies for identification of etiological leads.

Introduction

Rare cancers together constitute a considerable part of the cancer burden. In Europe, a rare cancer was defined as a cancer with an incidence of less than six per 100,000. This led to the identification of 198 disease entities, together constituting 24% of cancers diagnosed in the European Union [1]. Occupational exposures are known risk factors for several rare cancers; examples are asbestos and pleural mesothelioma, and benzene and acute myeloid leukemia [2]. The potentially occupational origin of rare cancers has in most cases been suggested by alert clinicians. Exposure to wood dust as a risk factor for nasal adenocarcinomas was based on a cluster in furniture-makers in Buckinghamshire, United Kingdom [3]. A causal link between vinyl chloride and liver angiosarcoma was suggested by two company physicians observing four cases among workers in the polymerization section of a plant in Kentucky, United States [4]. Recently, a cluster of cholangiocarcinomas was observed in a small offset color-proof printing facility in Osaka, Japan, where workers had been exposed to 1,2-dichloropropane and dichloromethane [5]. On this background, one might hypothesize that a systematic search for associations between occupational exposures and rare cancers would reveal new etiological leads. To obtain sufficient numbers, patients for such a study should be recruited from a large population. We undertook a European multi-center case-control study on risk factors for seven rare cancers. Incident cases and controls were recruited from nine European countries, with personal interviews of 4000 participants. Here we report on selected key findings and, in light of our experiences, discuss strengths and limitations of this study approach.
Material and methods

The Rare Cancer Study aimed to serve both as a confirmatory study of specified hypotheses and as an explorative study. The design was described previously [6]. In short, incident cases aged 35-69 years diagnosed 1995-1997 with cancers of the small intestine, bone sarcoma, uveal melanoma, mycosis fungoides, thymus, and male biliary tract and male breast were recruited. The lower age limit was set to allow for some time for accumulation of occupational exposures prior to the age of diagnosis, and the upper age limit was set to avoid comorbidities that would incapacitate participation. The seven cancer sites were chosen based on a literature review [7], which indicated that occupational risk factors could be involved in the etiology of these diseases. Case ascertainment was population-based in Denmark and Latvia, in ten areas in France, five in Germany, three in Italy, and four in Sweden, and hospital-based in three places in Spain, two in Portugal, and at one eye hospital in the United Kingdom (Fig. 1). Cases were reviewed by an expert pathologist. Sex- and age-matched population controls, four times the number of the most frequent cancer, were selected from Denmark, France, Germany, Italy, and Sweden, and colon cancer controls from Denmark, and from Latvia, Spain, and Portugal, where population controls could not be selected. We aimed to interview all identified cases and selected controls. To ensure a high response rate, cases were interviewed as soon as possible after the diagnosis. If a case had died or was unable to participate, we aimed to interview a next-of-kin. Controls were interviewed in batches throughout the data collection period. The pathology review was undertaken in parallel with the interviews, and in the analysis we included only interviewed cases for whom the diagnosis was considered to be definite or possible. The core questionnaire covered demographic variables, eye color, medical and x-ray history, use of drugs, tobacco and alcohol, and occupational exposures such as organic solvents, pesticides, and electromagnetic fields. A complete history was obtained of all jobs lasting at least six months, including data on working hours, materials handled, and chemical exposures. In addition, using the method by Siemiatycki [8], we used 27 job-specific supplementary questionnaires providing a comprehensive picture of the exposures in a given job. Questionnaires were translated into the languages of the respective countries. Jobs were coded using the International Classification of Occupations from 1968 [9] and the European Classification of Industries from 1993 [10]. Associations between occupational exposure and cancer were assessed with unconditional logistic regression controlled for sex, age, country, and known confounders, and reported as odds ratios (OR) with 95% confidence intervals (CI). Selected known associations with medical conditions were studied to check data validity before associations with occupational exposures were explored. The study was undertaken in accordance with Ethical Committee requirements in each country.

Results

In total, 1457 patients were recruited, with the diagnosis assessed as definite/possible for 1252, and 1053 were interviewed, almost 90% in person (Table 1).

Use of colon cancer controls

In Denmark, both 320 population and 254 colon cancer controls were recruited.
The groups were similar in education, medical history and smoking, but colon cancer controls had a higher alcohol intake, less frequent work as a farmer, and less exposure to pesticides than population controls [11]. These differences affected some findings. The association between sunlight exposure and uveal melanoma was OR 1.91 (95% CI 1.22-2.98) with colon cancer controls, but only OR 1.24 (95% CI 0.88-1.74) with population controls, reflecting that farmers had outdoor work and a low risk of colon cancer [12]. With both population and colon cancer controls, exposure to pesticides showed an excess risk of bone sarcoma, OR 2.33 (95% CI 1.31-4.13), which decreased with population controls only, OR 1.63 (95% CI 0.77-3.45) [13].

Biliary tract carcinoma in men

Biliary tract carcinoma was studied including 153 cases and 1421 population controls. A history of gallstones is a known risk factor, and this was confirmed; OR 4.68 (95% CI 2.80-7.84) [14]. Questionnaire data on chemicals were used to construct a cumulative exposure index taking probability, intensity and duration in each job into account. In the analysis, the exposed participants were categorized into tertiles (low, medium, high) of the joint distribution of cases and controls [15]. As an example, 6%, 12%, 19% and 63% of biliary tract carcinoma cases were categorized as low, medium, high and unexposed, respectively, to endocrine-disrupting compounds. Exposure to endocrine-disrupting compounds as a risk factor for male biliary tract carcinoma was studied because the preponderance of this disease in women is assumed to be related to female sex hormones. The data showed an OR of 1.4 (95% CI 0.9-2.0) based on all data, and of 1.7 (95% CI 1.1-2.8) based on job-specific questionnaires (Table 2), with no dose-response relationship [15]. For the subgroup of endocrine-disrupting compounds including oils with polychlorinated biphenyls (PCB), the OR was 2.8 (95% CI 1.3-5.9) for all data and 3.2 (95% CI 1.4-7.4) for job-specific questionnaires, with no dose-response relationship. A possible causal association for PCB was supported by an OR of 2.3 (95% CI 1.2-4.5) for men employed in electrical work, as 70% of electrical workers were classified as exposed to endocrine-disrupting compounds, including PCB [15]. No association was found with exposure to pesticides in general, and power was insufficient to distinguish between types [16].

Male breast cancer

Exposure to organic solvents was assessed using a cumulative exposure score constructed from the job history combined with a French Job Exposure Matrix (JEM). Exposure probability, frequency, intensity and duration in each job were taken into account. In the analysis, exposed workers were dichotomized into low and high according to the median score among the exposed controls [18]. For trichloroethylene, this resulted in 16% of cases being categorized as low exposed, 28% as high exposed, and 56% as unexposed. The risk of male breast cancer was increased for trichloroethylene; OR 1.4 (95% CI 0.7-2.5) for a low score, and OR 1.9 (95% CI 1.1-3.3) for a high score [18]. Results by occupation and by JEM supported each other, as motor vehicle mechanics and painters were exposed to trichloroethylene.
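As a hedged illustration of the cumulative exposure indices described above (not the study's actual scoring algorithm), the Python sketch below sums probability × intensity × duration over a job history and then splits exposed subjects into tertiles of the pooled case-control distribution; all numeric weights and job histories are assumptions for illustration.

```python
# Minimal sketch of a cumulative exposure index with tertile categorization.
# Scoring weights and job histories are illustrative, not the study's data.
import numpy as np

def cumulative_score(jobs):
    """jobs: list of (probability 0-1, intensity weight, duration in years)."""
    return sum(p * i * years for p, i, years in jobs)

def tertile_category(scores):
    """Label exposed subjects low/medium/high by tertiles of the pooled
    (cases + controls) distribution; a score of 0 means unexposed."""
    scores = np.asarray(scores, dtype=float)
    exposed = scores[scores > 0]
    t1, t2 = np.percentile(exposed, [100 / 3, 200 / 3])
    labels = []
    for s in scores:
        if s == 0:
            labels.append("unexposed")
        elif s <= t1:
            labels.append("low")
        elif s <= t2:
            labels.append("medium")
        else:
            labels.append("high")
    return labels

# Hypothetical job histories for three subjects
subjects = [
    [(0.9, 2.0, 10), (0.5, 1.0, 5)],  # two exposed jobs
    [(0.3, 1.0, 3)],                  # one low-probability job
    [],                               # never exposed
]
scores = [cumulative_score(j) for j in subjects]
print(scores, tertile_category(scores))
```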
Table 1. European Rare Cancer Study: number of identified and interviewed cases and controls. [Table body not reproduced.] NR: not relevant. a Includes 7 cases of unknown site; b may indicate some incomplete identification of eligible controls; c numbers from papers where the two subgroups were reported separately; d due to missing values on some variables, the actual number of cases and controls included in a given analysis could be lower than the numbers interviewed; the numbers might also be slightly higher, reflecting for instance extra recruitment of uveal melanoma cases in Germany.

Uveal melanoma

Persons with light skin or blue/gray eyes have an increased risk of uveal melanoma; this was corroborated in the French part of the study with OR 2.3 (95% CI 1.1-4.7) and OR 3.0 (95% CI 1.4-6.3), respectively [22]. Associations previously reported in the literature between occupation and uveal melanoma were confirmed for cooks, OR 2.40 (95% CI 1.35-4.28); welders and sheet metal workers, OR 1.95 (95% CI 1.08-3.52); and service workers not otherwise specified, OR 1.43 (95% CI 1.02-2.00) [12]. In addition, an excess risk was found for launderers, dry-cleaners and pressers, OR 3.14 (95% CI 1.44-6.86). The International Agency for Research on Cancer (IARC) classified welding as a Group 1 carcinogen based on the excess risk of ocular melanoma reported in, amongst others, the Rare Cancer Study [23]. The Rare Cancer data were included in a meta-analysis identifying both welding, OR 2.05 (95% CI 1.20-3.51), and occupational cooking, OR 1.81 (95% CI 1.31-2.46), as risk factors, while the increase was marginal only for occupational sunlight exposure, OR 1.37 (95% CI 0.96-1.96) [24]. So, while ultraviolet exposure from sunlight is the most important risk factor for skin melanoma, this is not the case for uveal melanoma. As stated by Logan et al. [25], this is consistent with the properties of the adult crystalline lens and cornea to filter out wavelengths below 400 nm. However, short-wave light at 400-500 nm, blue light, can reach the posterior uveal tract. Logan et al. noted that arc welding produces short-wave light. It can be added that blue light is also emitted by the gas burners often used for professional cooking.

Thymoma

Due to their rarity and heterogeneous histology, hardly anything is known about risks for thymoma. The Rare Cancer Study included 103 histologically confirmed cases, showing a dose-response relationship for tobacco smoking, OR 2.1 (95% CI 1.1-3.9) for > 41 pack-years, and no overall association with alcohol intake, but OR 2.4 (95% CI 1.1-5.4) for > 25 g/day of spirits [26].

Rare Cancer data in meta-analysis

At the time of the Rare Cancer Study, a 7% increase in breast cancer risk in women per increment of 10 g alcohol/day had been demonstrated [27], but data for men were mixed. Alcohol consumption is high in the countries included in the Rare Cancer Study, and the risk of male breast cancer increased with alcohol consumption, being more than fivefold for 9+ drinks/day compared with < 1.5 drinks/day, OR 5.62 (95% CI 1.54-20.52) [28], Table 3. The Rare Cancer data were included in a meta-analysis of 14 studies where no association was found between alcohol intake and male breast cancer [29], Table 3. The pooled estimate for an intake of 9+ drinks/day compared with nondrinkers was OR 1.08 (95% CI 0.74-1.58).
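Pooled estimates such as the one just cited are commonly obtained by fixed-effect inverse-variance weighting of log-ORs. The Python sketch below is a minimal, hedged illustration of that calculation; the first entry uses the Rare Cancer Study's OR quoted above, while the other two study results are hypothetical stand-ins, not the meta-analysis data.

```python
# Minimal sketch of fixed-effect inverse-variance pooling of odds ratios on
# the log scale. Only the first (OR, CI) entry is from the text above; the
# other studies are hypothetical.
import math

def pooled_or(studies):
    """studies: list of (OR, lower 95% CI, upper 95% CI) per study."""
    num = den = 0.0
    for or_, lo, hi in studies:
        log_or = math.log(or_)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the CI width
        w = 1.0 / se**2                                  # inverse-variance weight
        num += w * log_or
        den += w
    log_pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    ci = (math.exp(log_pooled - 1.96 * se_pooled),
          math.exp(log_pooled + 1.96 * se_pooled))
    return math.exp(log_pooled), ci

print(pooled_or([(5.62, 1.54, 20.52), (0.95, 0.70, 1.30), (1.10, 0.80, 1.50)]))
```

Note how a single small study with a wide CI, like the first entry, carries little weight, which is one way a strong individual finding can vanish in a pooled estimate.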
The difference between the patterns in the Rare Cancer Study and in the meta-analysis was surprising, but most studies in the meta-analysis reported alcohol consumption during the last year, and misclassification may occur if former drinkers are then classified as non-drinkers. The consistency of the dose-response pattern in the Rare Cancer data makes it difficult to discard the result as a random finding.

Testing of new findings

The cluster of cholangiocarcinoma cases, both intra- and extrahepatic, from a printing company in Japan [5], was followed up in the Nordic Occupational Cancer Study (NOCCA) [30]. Male workers in printing and related industries had a standardized incidence ratio (SIR) of 2.34 (95% CI 1.45-3.57) for intrahepatic cholangiocarcinoma, and female workers a SIR of 1.95 (95% CI 0.84-3.85), while no association was found for extrahepatic cholangiocarcinoma, ampulla of Vater, and gall bladder. In the Rare Cancer dataset, including gallbladder and extrahepatic cholangiocarcinoma, printing workers had an OR of 2.42 (95% CI 0.81-7.24), being OR 5.78 (95% CI 1.43-23.29) for typesetters [31]. From the majority of the Japanese cases, both detailed clinical findings [32] and pre-disease levels of liver enzymes measured in blood samples collected at annual health examinations [33] supported a causal association between exposure to 1,2-dichloropropane and cholangiocarcinoma. In 2014, IARC classified 1,2-dichloropropane as carcinogenic to humans (Group 1) based on sufficient evidence in humans that exposure to 1,2-dichloropropane causes cholangiocarcinoma. Dichloromethane was classified as probably carcinogenic to humans (Group 2A) [34].

Discussion

The Rare Cancer Study showed that well established medical risk factors for the studied diseases could be reproduced, supporting a high validity of the collected data. The study also illustrated how use of cancer controls could lead to spurious findings when the control disease itself was associated with the studied risk factor, as for colon cancer and outdoor work. Results from the Rare Cancer Study provided evidence for classification of welding and 1,2-dichloropropane as carcinogenic to humans. The experiences from the Rare Cancer Study did, however, also illustrate some of the limitations of this study approach.

Analysis by occupation

Despite major efforts in several countries, with identification of 1457 patients, only 1053 of these patients both had the diagnosis confirmed at the pathology review and completed the interview. This meant that only about 100 cases per cancer site could be included in the analysis. Broad occupational categories were therefore used in the analysis, e.g. "wholesale, food and beverages", and with the expected heterogeneous working tasks, a possible association between a given exposure and a disease would be diluted, with only very strong associations remaining visible. This may explain why the observed ORs rarely exceeded 2-3, with the lower confidence limit close to one. To overcome the heterogeneity, the analysis could proceed from broad to specific groups. An example was an OR of 2.93 (95% CI 1.55-5.53) for bone sarcomas in "bricklayer, carpenter, other construction worker", where the excess derived from "carpenter, joiner, parquetry worker", OR 4.25 (95% CI 1.71-10.5). The possibility of a causal association was strengthened by an increased risk in manufacture of wood, wood and cork products, and the straw and plaiting industry, OR 3.58 (95% CI 1.70-7.56).
A next logical step would have been to collect exposure and clinical data for the carpenters with bone sarcomas, but the group included only six patients, and an attempt to collect detailed data from several countries could easily fail for confidentiality and/or practical reasons.

Analysis by exposure

Analysis by exposure is a way to get around problems with analysis by occupation. In the study of male biliary tract cancer, an index of exposure to specific chemicals was constructed. The questionnaire job task data were, however, not detailed enough to assess the probability, intensity, and duration of exposure, and approximations were needed. It is on this basis not possible to know whether the modest OR of 1.7 (95% CI 1.1-2.8) for exposure to endocrine-disrupting compounds reflected a true value or a value deflated by lack of sensitivity [15]. A French JEM was used in the male breast cancer study to aggregate persons from occupations exposed to organic solvents, indicating an association with trichloroethylene; high exposure OR 1.9 (95% CI 1.1-3.3) [18]. Again, lack of detail in the questionnaires might limit correct allocation, as study subjects had worked in eight countries over a period of three to four decades. NOCCA data on male breast cancer were combined with a JEM for the Nordic countries, showing for trichloroethylene an OR of 1.55 (95% CI 0.64-3.76) [35]. These two explorative studies indicate that the association between exposure to trichloroethylene and male breast cancer deserves further scrutiny. The NOCCA data showed a reduced risk of male breast cancer for men with physical workloads, OR 0.78 (95% CI 0.67-0.91) [35]. This was not confirmed in the Rare Cancer Study, with an OR of 0.9 (95% CI 0.6-1.4) for agriculture and OR 2.4 (95% CI 1.0-5.6) for forestry/logging [17], both physically demanding industries.

Reflections

It was an underlying assumption of the Rare Cancer Study that rare cancers do not occur at random but result from rare exposures and/or rare susceptibility to exposures, and that these risks could be identified in a systematic search for associations in a large dataset. The study demonstrated, however, that the approach had built-in limitations. For each rare cancer site, systematic tabulation across occupations revealed some increased ORs, but mostly in the order of 2-3. These ORs may represent etiological associations buried in noise, or they may simply reflect random variation in the tabulation of many associations. There is no way to solve this question within the dataset itself. The lack of sensitivity of the Rare Cancer Study for detection of signals is a characteristic shared with other explorative studies. An example is the NOCCA study, where Nordic census data were linked individually with cancer data for 15 million persons [36], and where the largest divergences between occupations were for cancers associated with tobacco and alcohol. It is characteristic of reports where alert clinicians provided hints on occupational risks for rare cancers that they had very detailed data on the patients, both on exposures and histology, and sometimes even on pre-diagnostic biomarkers. In comparison, data in explorative studies are very limited. The most constructive use of explorative studies in their present form is therefore for identification of consistent findings across studies, as for uveal melanoma in welders and cooks, and for targeted studies of already suspected associations, as for cholangiocarcinoma in printers.
It is therefore important to document and store the data, and to make them easily available for researchers. As stated in the preamble to the IARC Monographs on the identification of carcinogenic hazards to humans [37], evidence for a causal association in human studies is strengthened by consistent findings. Outcomes from explorative studies may play an important role here, and as illustrated above, data from the Rare Cancer Study have proved valuable in this context. Explorative studies may also form part of surveillance systems for occupational safety and health [38]. The usefulness of explorative studies in identifying new possible forms of work risks to better protect workers could be further enhanced if risks revealed in the statistical analysis could be followed up by confirmatory studies of the relevant sub-groups with individual data on diagnoses, exposures, and possible confounders. However, the possibility for such targeted enrichment of explorative studies is limited by data protection rules. The present disease pattern reflects the history of our life, and as working conditions have changed over time, one could question the relevance of results from explorative studies for the future protection of workers. Some aspects of working environments are, however, relatively stable over time, and there is no doubt that the identification of carcinogenic compounds and work processes has in itself been a driver for changes in working conditions [39].

In conclusion, the Rare Cancer Study proved it possible to collect valid data with interviews conducted in several languages. However, despite efforts across Europe, only about 100 cases per cancer site could be identified, confirmed, and interviewed within the study period. The sensitivity of explorative studies in the search for etiological leads is limited by the use of broad occupational groups and lack of access to individual, detailed exposure and clinical data. The Rare Cancer dataset is a valuable source for comparisons of findings across explorative studies and for targeted confirmatory studies.

Open Access

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2020-07-18T15:40:17.397Z
2020-07-17T00:00:00.000
{ "year": 2020, "sha1": "38eea80af177d26a40c2c21d3070ed4d485e449b", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10654-020-00663-y.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "38eea80af177d26a40c2c21d3070ed4d485e449b", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
16628211
pes2o/s2orc
v3-fos-license
Magnetic-field-induced enhancement of the vortex pinning in the overdoped regime of La$_{2-x}$Sr$_x$CuO$_4$: Relation to the microscopic phase separation

In order to investigate the inhomogeneity of the superconducting (SC) state in the overdoped high-$T_{\rm c}$ cuprates, we have measured the magnetic susceptibility, $\chi$, of La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) single crystals in the overdoped regime in magnetic fields parallel to the c-axis up to 7 T on warming after zero-field cooling. It has been found for $x$ = 0.198 and 0.219 that the temperature dependence of $\chi$ in 1 T shows a plateau, that is, $\chi$ is almost independent of temperature in a moderate temperature range in the SC state. Moreover, a so-called second peak in the magnetization curve has markedly appeared in these crystals. These results indicate an anomalous enhancement of the vortex pinning and strongly suggest the occurrence of a $microscopic$ phase separation into SC and normal-state regions in the overdoped high-$T_{\rm c}$ cuprates.

Introduction

Recently, the electronic inhomogeneity in the overdoped high-$T_{\rm c}$ cuprates has attracted interest in relation to the mechanism of the high-$T_{\rm c}$ superconductivity. Early studies of the specific heat of La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) and Tl$_2$Ba$_2$CuO$_{6+\delta}$ (TBCO) have revealed that the electronic specific-heat coefficient in the superconducting (SC) state extrapolated to zero temperature increases with an increase of the hole concentration, $p$, in the overdoped regime [1,2]. These results indicate that the number of quasiparticles increases with increasing $p$ even in the SC ground state, suggesting the occurrence of a phase separation into SC and normal-state regions in the overdoped high-$T_{\rm c}$ cuprates. The phase separation in the overdoped regime has also been suggested by transverse-field muon-spin-relaxation measurements of TBCO [3,4] and Y$_{0.8}$Ca$_{0.2}$Ba$_2$Cu$_{3-z}$Zn$_z$O$_{7-\delta}$ [5], revealing that the muon-spin depolarization rate, proportional to the SC carrier density, decreases with increasing $p$, and by nuclear-magnetic-resonance measurements of LSCO revealing that the residual spin Knight shift in the SC ground state increases with increasing $p$ [6]. Very recently, we have investigated the possible phase separation in the overdoped regime through the estimation of the SC volume fraction of LSCO from measurements of the magnetic susceptibility, $\chi$, on field cooling [7-10]. As a result, it has been found that the absolute value of $\chi$ at 2 K on field cooling decreases with an increase of $x$. Therefore, it has been concluded that the SC volume fraction decreases with increasing $x$, supporting the occurrence of the phase separation into SC and normal-state regions in the overdoped regime of LSCO. The next issue is whether the phase separation is as microscopic as suggested from the scanning-tunneling-microscopy measurements [11].

Results

Figure 1 shows the temperature dependence of $\chi$ in magnetic fields of 0.001 T ≤ $H$ ≤ 7 T on warming after zero-field cooling in LSCO with $x$ = 0.198. The SC transition in a field of 0.001 T, below the lower critical field, $H_{c1}$, is sharp, suggesting the good quality of the crystal. With increasing field, the SC transition becomes broad, but a clear two-step transition is observed in 1 T. In high magnetic fields above 1 T, the two-step transition tends to be smeared out with increasing field and changes to a single broad one in 7 T. In the case of $\chi$ vs.
$T$ on field cooling, on the other hand, the SC transition tends to become broad monotonically with increasing field, and no two-step transition is observed.

Here, $H$ is the onset field above which the vortex-pinning effect becomes marked, and $T$ = 10 K in 1 T in $\chi$ vs. $T$ is the onset temperature above which $\chi$ is almost independent of temperature. In Fig. 3(b), it is found that the second peak is also observed for 0.178 ≤ $x$ ≤ 0.238, where the two-step SC transition is observed to a greater or lesser degree in $\chi$ vs. $T$, as shown in Fig. 2. These results indicate that the two-step SC transition in $\chi$ vs. $T$ is well correlated with the second peak in $M$ vs. $H$.

The $H$-$T$ phase diagram and the temperature dependence of $\chi$ on warming after zero-field cooling in a microscopically phase-separated state are illustrated in Fig. 4. In a microscopically phase-separated state, weak SC regions appear around the boundary between intrinsic SC and normal-state regions due to the proximity effect. Supposing that microscopic weak SC regions are ubiquitously distributed in a crystal, the superconductivity in the weak SC regions tends to be destroyed earlier than that in the intrinsic SC regions with increasing temperature or field, so that the weak SC regions change to normal-state regions that can be regarded as pinning centers for vortices. In the case of Fig. 4(i), the applied magnetic field, $H_2$, is between $H_{c1}$ in the intrinsic SC regions and the upper critical field in the weak SC regions, $H^{\rm w}_{c2}$, at the lowest temperature, $T_1$. In this case, the number of vortices penetrating into the crystal increases with increasing temperature, resulting in a decrease of the shielding effect of the superconducting currents. When the temperature reaches $T_2$ in Fig. 4, a number of microscopic normal-state regions appear in the crystal due to the destruction of the superconductivity in the weak SC regions, as shown in Fig. 4(ii).

Summary

In summary, it has been found from $\chi$ vs. $T$ measurements in magnetic fields parallel to the c-axis up to 7 T on warming after zero-field cooling in the overdoped regime of LSCO single crystals that $\chi$ is independent of temperature in a moderate temperature range in the SC state in 1 T for $x$ = 0.198 and 0.219, while the almost temperature-independent $\chi$ disappears for $x$ ≥ 0.238. Moreover, a second peak has markedly appeared in $M$ vs. $H$ measurements in the overdoped regime of LSCO. These results indicate an anomalous enhancement of the vortex pinning and are understood assuming the occurrence of a microscopic phase separation into SC and normal-state regions in the overdoped regime. That is, microscopic weak SC regions appear around the boundary between intrinsic SC regions and normal-state regions due to the proximity effect, and the superconductivity of the weak SC regions is destroyed earlier than that of the intrinsic SC regions with increasing temperature or field, so that the weak SC regions operate as strong pinning centers for vortices, resulting in strong vortex pinning in a moderate range of temperature or field in a microscopically phase-separated SC state. Accordingly, these results strongly suggest that a microscopic phase separation into SC and normal-state regions takes place in the overdoped high-$T_{\rm c}$ cuprates.
2007-11-20T07:30:16.000Z
2007-11-12T00:00:00.000
{ "year": 2007, "sha1": "62657d05d9407d2f5572507bbd2476f9983ee815", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0711.3078", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "62657d05d9407d2f5572507bbd2476f9983ee815", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
236284613
pes2o/s2orc
v3-fos-license
Association of four gene polymorphisms in Chinese Guangxi population with diabetic retinopathy in type 2 diabetic patients

Abstract

Background: Diabetic retinopathy (DR) is one of the most common chronic microvascular complications of diabetes. Many studies have suggested that genetic factors are important in the context of DR. This study evaluated the associations of GWAS (genome-wide association study)-identified DR-associated SNPs in a Chinese population in Guangxi Province with type 2 diabetes mellitus (T2DM).

Methods: A total of 386 hospitalized T2DM patients without proliferative diabetic retinopathy (PDR) and 316 hospitalized T2DM patients with PDR were included in this case–control study. Four tag SNPs, including rs1800896 in the IL-10 gene, rs2010963 in the VEGFA gene, rs2070600 in the RAGE gene and rs2910164 in the miR-146a gene, were examined using KASP (kompetitive allele specific PCR) genotyping assays.

Results: There were no significant differences in the genotype or allele frequencies of the miR-146a polymorphism (rs2910164) between subjects with PDR and those without DR. The TC genotype of rs1800896 was determined to be associated with an increased risk of PDR (the odds ratio (OR) was 2.366, with a 95% confidence interval (CI) ranging from 1.144 to 4.894). The CG genotype of rs2010963 was associated with a decreased risk of PDR (the OR was 0.588, with a 95% CI ranging from 0.366 to 0.946). Regarding rs2070600, 2 genotypes (TT and CT) were associated with a decreased risk of PDR (the OR of the TT genotype was 0.180, with a 95% CI ranging from 0.037 to 0.872, and the OR of the CT genotype was 0.448, with a 95% CI ranging from 0.266 to 0.753).

Conclusions: The rs1800896 polymorphism in the IL-10 gene, rs2010963 in the VEGFA gene and rs2070600 in the RAGE gene are associated with the risk of PDR in the Han Chinese population of Guangxi Province. Our findings provide suggestive evidence that these polymorphisms may be involved in the pathogenesis of PDR and should be investigated further.

Introduction

Diabetes is an endocrine disease that severely impacts human health, and its disability and fatality rates are second only to those of cardiovascular and cerebrovascular diseases and cancer [1]. It is estimated that the percentage of people with diabetes worldwide will reach 4.4% by 2030 [2]. It has been recognized that diabetes is a main source of morbidity and mortality given its related acute and chronic side effects. Diabetic retinopathy (DR) is one of the most common chronic microvascular side effects of diabetes [3]. With the incidence of diabetes increasing worldwide, the incidence of DR is expected to increase to alarming levels [4]. Furthermore, DR is the main cause of blindness in diabetic patients [3]. Diabetes duration, poor glycaemic control and hypertension are known as the primary risk factors related to the progression of DR [4]. However, clinical observation has revealed that some patients with poorly controlled or long-lasting diabetes do not develop retinopathy, whereas others, even those with relatively good glycaemic control, eventually develop advanced retinopathy [5].
These clinical observations suggest that other factors are involved in the development of DR. Many studies suggest that genetic factors are important in the context of DR, that DR exhibits a complicated inheritance mode, and that genetic association studies are useful for the identification of the genetic elements impacting the pathogenesis of DR [6]. According to this information, genetic elements exert an effect on the development of DR [7]. It has been estimated that the heritability of DR is approximately 25% [8]. Through an increased understanding of the genetic foundation of DR, the hidden pathophysiological mechanisms governing its development can be identified. These genetic data may also be useful for the risk profiling of DR among patients suffering from diabetes, thereby promoting its early treatment and management. Robust associations of DR-susceptibility variants may make them ideal genetic markers to enhance the prediction of DR beyond traditional clinical predictors and thus achieve more precise risk stratification. Genome-wide association studies (GWASs) represent an effective strategy for the detection of new genetic loci related to DR, and some GWASs have been performed in different ethnic groups to identify novel genetic variants related to DR susceptibility in diabetes mellitus cohorts [9-12]. Single nucleotide polymorphisms (SNPs) are deoxyribonucleic acid (DNA) sequence variants that commonly differ in populations [13]. These changes in DNA sequences may affect gene expression if they occur in putative regulatory regions [13]. In several previous reports, some GWAS-identified SNPs were found to be highly associated with DR [6,14-16].

Methods

Research population

The research protocol was examined by the Research Ethics Committee of the Affiliated Hospital of Guilin Medical University and adhered to the principles of the Declaration of Helsinki. All participants gave written informed consent before their enrolment. A total of 386 T2DM patients without diabetic retinopathy (DR) and 316 T2DM patients with proliferative diabetic retinopathy (PDR) were included in this case-control study. All patients' data were collected from the endocrinology and ophthalmology departments, and the patients were included in the study at enrolment. All patients lived in Guangxi Province and were Han Chinese. The diagnosis of type 2 diabetes was made on the basis of the American Diabetes Association criteria [29]. All patients received ophthalmic examinations, such as best-corrected visual acuity, intraocular pressure, slit lamp, and dilated fundus examinations, in the Department of Ophthalmology, Affiliated Hospital of Guilin Medical University. PDR was defined as having eyes with definite neovascularization and/or vitreous/preretinal haemorrhages. Patient and medical data, including age, sex, age at diabetes diagnosis, presence of arterial hypertension, application of medication or insulin, and other comorbidities, were collected using a questionnaire. Individuals with peripheral vascular diseases, coronary artery diseases, acute infection, a history of any thrombotic event, or any other ocular disorders were excluded.

Genotyping

Whole-blood specimens from all patients were gathered in EDTA tubes and stored at −20 °C for less than 2 months. A TIANamp Genomic DNA Kit (TianGen, Beijing, China) was used to extract genomic DNA from whole blood before analysis. Genotyping for SNP screening analyses was conducted using the KASP (kompetitive allele specific PCR) assay.
Equal amounts of genomic DNA (0.8 μl/patient) from DR and DNR (diabetic, no retinopathy) patients were mixed with the KASP Master Mix and KASP Assay Mix. Next, the SNP-containing DNA fragments were amplified by PCR. The PCR program was as follows: initial denaturation at 94 °C for 15 min; 10 cycles of denaturation at 94 °C for 20 s and annealing at 65 °C (decreasing by 0.8 °C every cycle) for 1 min; followed by 27 cycles of denaturation at 94 °C for 20 s and annealing/extension at 59 °C for 1 min. Primers for the KASP SNP assays were designed using Primer Premier 5.0, and allele frequencies were analysed using IntelliQube software (LGC Genomics, UK).

Statistical analyses

Continuous data are shown as the mean ± SD. Categorical variables are reported as numbers (percentages) or percentages. The normality of the distribution of quantitative variables was verified by the Kolmogorov-Smirnov test. After the normality test, the comparison of continuous variables among groups of diabetic subjects was made by ANOVA for normally distributed variables. The χ2 test was used to compare categorical variables. Gene counting was applied to determine allele frequencies, and the χ2 test was used to verify departures from Hardy-Weinberg equilibrium. The comparison of allele and genotype frequencies among groups of subjects was made using the χ2 test. SPSS (version 20.0; SPSS, Inc., Chicago, IL, USA) was employed to conduct the statistical analyses. Variables with P values less than 0.2 were considered eligible for inclusion in the multivariate analysis. Multivariate analysis, adjusting for the use of insulin treatment, systolic blood pressure and glomerular filtration rate (GFR), was performed with binary logistic regression analysis. According to the outcomes, odds ratios and 95% confidence intervals (CIs) are shown. We considered P values less than 0.05 to be statistically significant.

Results

Table 1 summarizes the features of the subjects. The cases and controls were 58.5 and 58.95 years of age on average, respectively, and the groups included 44.04 and 44.87% females, respectively, indicating a good match between the two groups with regard to age and sex (both P > 0.05). No significant variation was noted in the duration of DM, body mass index, HbA1c, HDL cholesterol, total cholesterol or LDL cholesterol between the groups. There were significant differences between the groups in systolic blood pressure, insulin treatment and glomerular filtration rate (GFR).

Candidate gene and single nucleotide polymorphism selection

We selected 4 SNPs (rs1800896, rs2010963, rs2070600, rs2910164), located in the IL-10, VEGFA, RAGE and miR-146a genes, respectively. These genes were related to DR in at least one population or are logical candidate genes according to the present understanding of the pathogenesis of DR; the selected SNPs lie in the promoter regions, 5′ UTR regions, or coding regions of the candidate genes. In addition, the total call rates of these 4 SNPs (rs1800896, rs2010963, rs2070600, and rs2910164) were 99.43, 98.86, 97.72, and 84.90%, respectively.

Polymorphisms of the 4 SNPs in type 2 diabetic subjects based on the presence of PDR

The distributions of the genotype and allele frequencies of the 4 SNPs in type 2 diabetic subjects based on the presence of PDR are shown in Table 2. All 4 SNPs were distributed in accordance with Hardy-Weinberg equilibrium.
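As a hedged illustration of the Hardy-Weinberg check mentioned above (not the authors' SPSS workflow), the Python sketch below compares observed genotype counts with the counts expected from the estimated allele frequencies via a chi-square statistic; the genotype counts in the example are hypothetical, not the study's Table 2 data.

```python
# Minimal sketch of a Hardy-Weinberg equilibrium chi-square test for a
# biallelic SNP. Genotype counts are hypothetical.
from scipy.stats import chi2

def hwe_chi2(n_AA, n_Aa, n_aa):
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)        # frequency of allele A by gene counting
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_AA, n_Aa, n_aa]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = chi2.sf(stat, df=1)          # 1 degree of freedom for a biallelic locus
    return stat, p_value

print(hwe_chi2(n_AA=210, n_Aa=140, n_aa=30))
```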
No significant differences in the genotype or allele frequencies of the miR-146a polymorphism (rs2910164) were noted between subjects with PDR and those without DR, indicating that this SNP might not be related to the presence of PDR. However, the data show that the other 3 SNPs (rs1800896, rs2010963 and rs2070600) were significantly associated with the presence of PDR. Multivariable logistic regression analysis showed that these 3 SNPs remained associated with the PDR phenotype after adjustment for insulin therapy, systolic blood pressure and GFR. The frequency of the TC genotype of rs1800896 was higher among subjects with PDR than among those without DR (16.03% vs. 8.29%, P = 0.002), whereas the frequency of the TT genotype was lower in subjects with PDR (82.69% vs. 91.71%, P = 0.001). After multivariable analysis, the TC genotype was related to an increased risk of PDR, with an odds ratio (OR) of 2.366 and a 95% confidence interval (CI) ranging from 1.144 to 4.894 (adjusted P = 0.020). The frequency of the CG genotype of rs2010963 was lower in subjects with PDR than in those without DR (44.87% vs. 56.02%, P = 0.003). After multivariable analysis, the CG genotype was related to a decreased risk of PDR (OR 0.588, 95% CI 0.366-0.946, adjusted P = 0.028). The frequency of the CC genotype of rs2070600 was higher among subjects with PDR than among those without DR (66.67% vs. 51.81%, P = 0.001), whereas the frequency of the CT genotype was lower in subjects with PDR (30.67% vs. 41.97%, P = 0.002). Multivariable analysis revealed that the other 2 genotypes (TT and CT) were related to a reduced risk of PDR: the OR of the TT genotype was 0.180 (95% CI 0.037-0.872, adjusted P = 0.033), and the OR of the CT genotype was 0.448 (95% CI 0.266-0.753, adjusted P = 0.002).
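For reference, a crude (unadjusted) odds ratio and its Woolf 95% CI can be computed directly from a 2×2 genotype-by-outcome table; the adjusted ORs reported above come from logistic regression, so the counts in this sketch are purely illustrative.

```python
# Illustrative sketch: crude odds ratio and Woolf 95% CI from a 2x2 table.
# The counts are hypothetical, not the study's data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: carriers/non-carriers among cases; c/d: the same among controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Woolf method
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical example: TC carriers vs. TT among PDR cases and non-DR controls
print(odds_ratio_ci(a=50, b=258, c=32, d=354))  # (OR, lower CI, upper CI)
```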
Discussion

Many studies have suggested that genetic factors are important in the context of DR. In this study, we analyzed the association of 4 SNPs selected from 4 DR-associated factors in an independent cohort of patients with type 2 diabetes mellitus (T2DM) in Guangxi Province. Our data showed significant associations for the IL-10, VEGF and RAGE genes. It has been proposed that DR is associated with persistent low-grade inflammation [30]. Interleukin-10 (IL-10) prevents the generation of proinflammatory cytokines and stimulates the proliferation, differentiation and survival of several types of immune cells [31]. Most cells of the adaptive and innate immune systems, such as dendritic cells, leukocytes, and macrophages, express IL-10 [31]. DR progression may be promoted by IL-10, an anti-inflammatory cytokine with a strongly deactivating nature [14]. The IL-10 rs1800896 polymorphism (the IL-10 -1082G/A polymorphism), located in the promoter region, has been associated with IL-10 production [32], and the IL-10 gene polymorphism has been reported to be related to the risk of DR in various populations [14,33,34]. This study showed that the TC genotype was associated with the risk of PDR. The varying genetic backgrounds of the study samples, sample sizes, exposure to environmental factors, and clinical phenotypes of PDR may explain the conflicting outcomes reported by the abovementioned studies. According to the multivariable analyses, the IL-10 rs1800896 polymorphism is associated with a significant risk of PDR.

However, these outcomes should be interpreted cautiously because of the limited sample sizes in the stratified analyses and the limited statistical power. Nevertheless, our findings indicate a possible effect involving the rs1800896 polymorphism and several T2DM risk factors. Vascular endothelial growth factor (VEGF) drives angiogenesis, breaks down the blood-retinal barrier, stimulates the growth of endothelial cells, induces neovascularization, and enhances vascular permeability in the ischaemic retina [35,36]. Increased VEGF expression has been observed in DR patients [37]. Yang et al. [38] concluded that rs2010963 was a risk contributor to PDR in the overall population, whereas no significant association was detected between rs2010963 and PDR risk in Caucasians. Our analysis demonstrated that carriers of the CG genotype had a lower risk of PDR than those with the GG genotype. Consistent with our study, Awata and coworkers [39] found no association between the CC genotype of the rs2010963 polymorphism and PDR. Carriers of the two homozygous genotypes exhibited altered susceptibility to DR, suggesting that rs2010963 might be an important genetic marker for DR among patients with T2DM in Guangxi Province. Many factors could account for the differences among findings, such as sample size, study design, and sunlight exposure [40]. Polymorphisms in the receptor for advanced glycation end products (RAGE) gene may impact DR, given the pathophysiological links between retinopathy and advanced glycation end products (AGEs) [6]. The RAGE gene is located on the short arm of chromosome 6 (6p21.3) [17]. AGEs result from the nonenzymatic glycation of proteins and lipids [41]. AGEs are present at elevated levels in individuals with diabetes and can result in increased oxidative stress and receptor-mediated activation and secretion of different cytokines [41]. The RAGE polymorphism assessed in this study occurs at a predicted N-linked glycosylation motif in the AGE binding site, affecting AGE-RAGE interactions [6]. This study analysed the RAGE SNP rs2070600, and the results showed that TT genotype or T allele carriers had a reduced risk of PDR. Similarly strong relationships between rs2070600 and diabetic retinopathy were also observed in Asian Indians and Asian Chinese people with type 2 diabetes and in an Indian study [6,17]. Moreover, analysis of the allelic frequency of rs2070600 in different ethnic groups has shown different results. The T allele frequency in this study was 18.00% in the PDR group, which is similar to the findings of an earlier report in a Chinese population (23.1%) [42] and another report in a Japanese population (17.3%) [43]. In previous reports, the T allele occurred at a frequency of 5% in Caucasians [44] and 2% in Indians [45]. Allelic variants of the RAGE gene may alter protein function and gene expression, which may influence disease progression. The high proportion of variant alleles in the Chinese population may confer enhanced susceptibility to diabetic complications in this population. In Kaidonis' study [16], rs2910164 was found to potentially enhance susceptibility to retinal injury via a pathway involved in both angiogenesis and breakdown of the blood-retinal barrier. That SNP was significantly related to DN in patients with type 1 diabetes mellitus (T1DM) after multivariate analysis [16].
In our study, we collected samples from T2DM patients to analyse the risk of DR, but T1DM and T2DM are distinct diseases with different aetiologies. DR progression is influenced by environmental factors that may act against the background of a given type of diabetes mellitus. Furthermore, DR commonly develops early in susceptible patients with T1DM [46]. Statistical analysis revealed that rs2910164 was not significantly related to DR. Further studies with larger cohorts are warranted to more accurately assess these phenotypes in relation to microRNA-146a (miR-146a) SNPs. Potential limitations of the current study should be taken into account. First, the sample size was not large, which may have left our study underpowered. Second, we cannot exclude confounding effects of unmeasured variables that may affect the stability of blood glucose levels, such as dietary and other lifestyle factors. Third, no detailed information regarding DR severity or treatment response was obtained, which limited our conclusions. Fourth, although we attempted to avoid population substructure in our research, it is possible that the positive and negative outcomes obtained in this study were affected by subtle population stratification, and the results should therefore be considered suggestive rather than definitive. Moreover, our study lacked a direct assessment of the association between the SNPs and the related serum levels. The mechanisms underlying the roles of these SNPs in DR merit further study.

Conclusion

According to the outcomes of this research, the rs1800896 polymorphism in the IL-10 gene, rs2010963 in the VEGFA gene and rs2070600 in the RAGE gene are related to the risk of PDR in the Han Chinese population of Guangxi Province. Our findings provide suggestive evidence that these polymorphisms may be involved in the pathogenesis of PDR and should be examined further. Moreover, our study suggests that the rs2910164 polymorphism in the miR-146a gene may not be related to DR in the Guangxi Province population. Nevertheless, these findings should be examined in additional well-designed multicentre studies with larger sample sizes that include gene-environment interaction assessments.
Mechanism of Innate Immune Response Induced by Albizia julibrissin Saponin Active Fraction Using C2C12 Myoblasts

Albizia julibrissin saponin active fraction (AJSAF) is a prospective adjuvant with dual Th1/Th2 and Tc1/Tc2 potentiating activity. Its adjuvant activity has previously been shown to be strictly dependent on its spatial co-localization with antigens, highlighting the role of local innate immunity in its mechanisms. However, its potential targets and pathways remain unclear. Here, the intracellular molecular mechanisms of its innate immune response were explored in mouse C2C12 myoblasts by integrative analysis of the in vivo and in vitro transcriptomes in combination with experimental validation. AJSAF elicited a temporary cytotoxicity and inflammation in C2C12 cells. Gene set enrichment analysis demonstrated that AJSAF regulated similar cell death- and inflammatory response-related genes in vitro and in vivo by activating second messenger-MAPK-CREB pathways. AJSAF markedly enhanced Ca2+, cAMP, and reactive oxygen species levels and accelerated MAPK and CREB phosphorylation in C2C12 cells. Furthermore, a Ca2+ chelator, a CREB inhibitor, and MAPK inhibitors dramatically blocked the up-regulation of IL-6, CXCL1, and COX2 in AJSAF-treated C2C12 cells. Collectively, these results demonstrated that AJSAF induced innate immunity via Ca2+-MAPK-CREB pathways. This study contributes to understanding the molecular mechanisms of saponin adjuvants.

Introduction

Vaccine development faces serious challenges due to the emergence of new infectious diseases and the spread of cancer [1]. Adjuvants should be capable of eliciting appropriate T-cell-mediated and/or humoral immunity targeting specific pathogens [2]. However, few adjuvants have so far been licensed for human use. In addition to safety issues and low potency, limited understanding of the mechanisms and drug targets is also a key factor behind the lack of effective adjuvants [3].

Saponin adjuvants achieve adaptive immunity by reconstructing the local immune microenvironment. Although considered an undesired side effect, the early inflammatory response is crucial for adjuvants to elicit adaptive immune responses [4]. Most vaccines are administered intramuscularly [5]. However, muscle tissues contain only a few resident immune cells, whose abundance and type are altered by most adjuvants [6]. Adjuvant-induced acute inflammatory responses in muscle tissues promote the recruitment of immune cells to improve antigen uptake and expedite the migration of antigen-loaded immune cells to the lymph nodes, leading to efficient activation of naive T cells and the establishment of adaptive immunity [7][8][9]. Mouse C2C12 myoblasts are commonly used to study the development and regeneration of human skeletal muscle cells [10]. In our previous work, it was found that C2C12 myoblasts could be applied as an in vitro model for understanding the mechanisms of saponin adjuvants [11].
Albizia julibrissin saponin active fraction (AJSAF) is a prospective adjuvant with dual Th1/Th2 and Tc1/Tc2 potentiating activity on OVA and several commercial livestock and poultry vaccines [12]. It has previously been reported that AJSAF induced the protein expression of cytokines and chemokines at the site of injection and that its adjuvant activity was strictly dependent on its spatial co-localization with antigens [4]. Although the potential targets and pathways of AJSAF-induced local innate immunity in mice have also been studied, those results need to be verified [13]. Here, its intracellular molecular mechanisms of innate immune response were explored using mouse C2C12 myoblasts through integrative analysis of the in vitro and in vivo transcriptomes in combination with experimental validations.

Preparation of AJSAF

AJSAF was prepared as previously described [12] and was identified to contain 29 saponins, including 10 new compounds, by high-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry based on an accurate mass database [14]. The endotoxin level was 0.253 ± 0.004 EU/mL in the AJSAF solution (2 mg/mL), excluding endotoxin contamination.

Cell Culture and Stimulation

The mouse C2C12 myoblast cell line (ATCC CRL-1772) was purchased from the cell bank of the Shanghai Branch of the Chinese Academy of Sciences, Shanghai, China, and cultured in DMEM complete medium containing 10% fetal bovine serum, 100 U/mL penicillin, and 100 µg/mL streptomycin at 37 °C in a 5% CO2 atmosphere. After 24 h of adhesion culture, the cells were treated with AJSAF at designated concentrations (100 µg/mL to 300 µg/mL), and the pelleted cells and culture supernatants were collected at the indicated times.

Real-Time Quantitative Polymerase Chain Reaction (RT-qPCR)

Total RNA isolated with TRIzol reagent was subjected to reverse transcription. PCR was performed using the specific primers (Table S1) and FastStart Universal SYBR Green Master (Rox) according to the MIQE guidelines [15]. The expression level relative to Gapdh was calculated using the 2^-ΔΔCt method [11].
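As a minimal sketch of the 2^-ΔΔCt calculation used above, with Gapdh as the reference gene (the Ct values here are hypothetical, not measured data):

```python
# Minimal sketch of the 2^-ddCt relative-expression calculation (Livak method).
def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    d_ct_treated = ct_target - ct_gapdh            # dCt, AJSAF-treated cells
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl  # dCt, untreated control
    dd_ct = d_ct_treated - d_ct_control            # ddCt
    return 2.0 ** (-dd_ct)                         # fold change vs. control

# Hypothetical Ct values for Il6 at 4 h:
print(relative_expression(22.1, 17.8, 26.9, 17.9))  # ~26-fold up-regulation
```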
Gene Set Enrichment Analysis (GSEA)

GSEA was performed for the whole set of genes detected in AJSAF-treated C2C12 cells and mouse muscle tissues (Supplementary Methods) [13] using the mouse GSKB [18]. Leading-edge gene sets (LEGSs) were identified based on |normalized enrichment score| > 1, p < 0.05, and false discovery rate < 0.25. The genes in the LEGSs with core enrichment marked "Yes" were considered core genes. The core genes were plotted in heat maps at https://www.omicstudio.cn/tool (accessed on 19 February 2023), and their GO functions and KEGG pathways were analyzed using Metascape (http://metascape.org/, accessed on 19 February 2023). The network components were predicted using molecular complex detection technology (MCODE) [19,20].

Relevance Analysis of the In Vivo and In Vitro Transcriptomes

The heatmap and PPI network of the top 20 clusters with GO- and KEGG-enriched terms of both the in vivo and in vitro core genes were built using Metascape (http://metascape.org/, accessed on 19 February 2023) [16,17]. The Transcriptional Regulatory Relationships Unraveled by Sentence-based Text mining (TRRUST) database (https://www.grnpedia.org/trrust/, accessed on 19 February 2023) was employed to predict the transcription factors (TFs) [21]. Overlap Circos diagrams were plotted to screen the common in vivo and in vitro core genes. The common core genes were subjected to eight algorithms (EPC, MCC, MNC, Betweenness, Closeness, Degree, Radiality, and Stress) in the Cytoscape cytoHubba plug-in (https://cytoscape.org/, accessed on 19 February 2023). The genes at the intersection of the top 10 genes from each approach were identified as hub genes using UpSet (https://www.omicstudio.cn/tool, accessed on 19 February 2023); a sketch of this intersection step is given after the Results below. GeneMANIA (http://www.genemania.org/, accessed on 19 February 2023) was applied to construct a co-expression network of the hub genes [22].

cAMP, Free Ca2+ and ROS Detection

The levels of intracellular cAMP, free Ca2+, and ROS were determined using a cAMP assay kit, Fluo-3 AM, and an ROS assay kit, respectively [11,23].

Western Blotting

C2C12 cells were lysed with RIPA lysis buffer containing protease inhibitor and phosphatase inhibitor, and protein contents were then determined using a BCA assay. The denatured proteins were separated by SDS-PAGE and transferred to a PVDF membrane. After incubation with primary antibodies overnight at 4 °C, the membrane was blotted with HRP-conjugated IgG for 1 h. The signals were visualized with ECL on the iBright CL1500 Imaging System (Thermo Fisher Scientific, Waltham, MA, USA) [23].

Statistical Analysis

Data are expressed as mean ± SD and were statistically analyzed with ANOVA and Student's t-tests using GraphPad Prism 9.0 software (GraphPad Software, San Diego, CA, USA). p < 0.05 was considered statistically significant.

AJSAF Elicited a Temporary Cytotoxicity and Inflammation in C2C12 Cells

AJSAF exhibited remarkable toxicity against C2C12 cells at concentrations above 150 µg/mL for 4 h, with an IC50 value of 210 µg/mL. No cytotoxicity, however, was found in AJSAF-treated C2C12 cells at 24 h except at 300 µg/mL (Figure 1A).
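As a hedged illustration of how an IC50 such as the 210 µg/mL value above can be estimated from viability data by fitting a four-parameter logistic curve (the concentrations and viability readings below are hypothetical):

```python
# Hedged sketch: estimating an IC50 from a dose-response curve fit.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    # Four-parameter logistic: high viability at low dose, low at high dose.
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

conc = np.array([50, 100, 150, 200, 250, 300], dtype=float)   # ug/mL
viability = np.array([0.98, 0.95, 0.80, 0.55, 0.30, 0.12])    # fraction alive at 4 h

params, _ = curve_fit(four_pl, conc, viability, p0=[0.0, 1.0, 200.0, 4.0])
print(f"Estimated IC50 ~ {params[2]:.0f} ug/mL")
```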
AJSAF was previously reported to facilitate gene expression of Il-6, Cxcl1, and Cox2 (Ptgs2) in mouse quadriceps muscles [13]. The mRNA expression of these inflammatory factors in C2C12 cells was also significantly up-regulated by AJSAF; it peaked at 4-6 h and then rapidly declined (p < 0.001, Figure 1B). Meanwhile, AJSAF markedly and concentration-dependently induced the production of IL-6, CXCL1, and COX2 in C2C12 cells (p < 0.001, Figures 1C-E and S1). These results indicated that AJSAF elicited a temporary cytotoxicity and inflammation in C2C12 cells.

Functions and Pathways of AJSAF-Induced DEGs in C2C12 Cells

To characterize the transcriptional profile, C2C12 cells treated with AJSAF were subjected to SurePrint G3 microarray analysis (Figure 2). AJSAF produced 738 up-regulated and 700 down-regulated DEGs in C2C12 cells (Figure 3A). The AJSAF-induced mRNA expression levels of four putative up-regulated (Rgs, Thbd, Hmox1, and Il33) and four putative down-regulated (Rnd3, Tgfb3, Wnt4, and Fas) genes measured by RT-qPCR coincided with the microarray data (Figure 3B and Table S2).
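A minimal sketch, with hypothetical column names and cutoffs (the study does not state its exact thresholds), of the fold-change/p-value filtering that typically yields up- and down-regulated DEG lists such as the 738/700 split reported above:

```python
# Illustrative DEG filtering on a per-gene microarray result table.
# Column names, file name, and cutoffs are assumptions for this sketch.
import pandas as pd

def split_degs(df, fc_col="log2fc", p_col="adj_p", fc_cut=1.0, p_cut=0.05):
    sig = df[df[p_col] < p_cut]            # keep statistically significant genes
    up = sig[sig[fc_col] >= fc_cut]        # up-regulated by AJSAF
    down = sig[sig[fc_col] <= -fc_cut]     # down-regulated by AJSAF
    return up, down

# df = pd.read_csv("ajsaf_vs_control_microarray.csv")  # hypothetical file
# up, down = split_degs(df)
# print(len(up), "up-regulated;", len(down), "down-regulated")
```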
Functions and Hub Genes of AJSAF-Induced Common Core Genes In Vitro and In Vivo

Both the top 20 enriched GO and KEGG terms of the in vitro and in vivo core genes concerned cell activation, cell death, granulocyte chemotaxis, inflammatory response, the MAPK cascade pathway, and second-messenger-mediated signaling pathways (Figure 6A). These top 20 clusters were correlated and constituted a network centered around granulocyte chemotaxis, inflammatory response, cell activation, cell death, and positive regulation of locomotion (Figures 6B and S2). Meanwhile, the top 20 TFs of the in vitro and in vivo core genes induced by AJSAF were predicted using the TRRUST database. Among the top 20 TFs, Crebbp, Nfe2l2, Ppara, Stat1, Stat3, Stat5a, Ep300, Sp1, Ets1, Fos, Egr1, Rela, Cebpb, Trp53, Ikbkb, Jun, and Nfkb1 were shared, with Usf2, Sp3, and Elk1 specific to C2C12 cells (Figure 6C).

Furthermore, Il6, Csf2, Cxcl1, Il1b, Ptgs2 (Cox2), and Stat3 were identified as the hub genes of the common core genes using an UpSet plot (Figure 6F and Table S5). The six hub genes formed a PPI network with co-expression of 89.51%, prediction of 8.88%, and other interactions of 1.61% (Figure 6G). The functions of the six hub genes mainly included regulation of the ERK1 and ERK2 cascade, cytokine-mediated signaling, cell chemotaxis, acute inflammatory response, the ROS metabolic process, chemokine production, and cell activation (Figure 6G). These results suggested that AJSAF regulated the cell death- and inflammatory response-related genes in vitro and in vivo through the second-messenger-MAPK pathway.

AJSAF Induced Inflammation in C2C12 Cells through the Ca2+-MAPK-CREB Pathway

The microarray analysis revealed that AJSAF potentially activated second-messenger-mediated signaling (Figure 6A,B) and regulated the ROS metabolic process (Figure 6G). Therefore, the second messenger components and ROS generation in C2C12 cells were examined. AJSAF significantly and time-dependently increased the cAMP contents in C2C12 cells, which rose at 0.5 h and peaked at 1 h (Figure 7A). AJSAF also time- and concentration-dependently induced a significant Ca2+ influx and ROS production in C2C12 cells (Figure 7B,C). The transcriptome correlation analysis revealed that AJSAF regulated the MAPK cascade and protein phosphorylation in vitro and in vivo (Figure 6A,B). Meanwhile, the six hub genes were found to be correlated with the ERK1 and ERK2 cascade. TRRUST predicted that TFs such as CREB-binding protein (Crebbp) and Nfkb1 regulated the AJSAF-induced core genes. Therefore, MAPK, NF-κB, and CREB phosphorylation in C2C12 cells was detected using Western blotting. AJSAF markedly promoted JNK, ERK1/2, p38 MAPK, and CREB phosphorylation in C2C12 cells from 15 min to 2 h. However, AJSAF did not affect NF-κB p65 phosphorylation in C2C12 cells (Figures 7D,E and S3). These results suggested that AJSAF activated second messenger-MAPK-CREB pathways.
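The hub-gene step referenced in the Methods (intersecting the top-10 lists returned by the eight cytoHubba algorithms) reduces to a simple set intersection. The sketch below uses truncated, partly hypothetical lists; the study identified Il6, Csf2, Cxcl1, Il1b, Ptgs2 and Stat3 in this way.

```python
# Illustrative sketch of hub-gene selection by intersecting the top-10 gene
# lists of the eight cytoHubba algorithms. Lists here are truncated/hypothetical.
from functools import reduce

top10_by_algorithm = {
    "MCC":    {"Il6", "Csf2", "Cxcl1", "Il1b", "Ptgs2", "Stat3", "Ccl2", "Fos", "Jun", "Egr1"},
    "Degree": {"Il6", "Csf2", "Cxcl1", "Il1b", "Ptgs2", "Stat3", "Ccl2", "Rela", "Jun", "Myc"},
    # ... six more algorithms (MNC, EPC, Betweenness, Closeness, Radiality, Stress)
}

hub_genes = reduce(set.intersection, top10_by_algorithm.values())
print(sorted(hub_genes))  # genes ranked in the top 10 by every algorithm
```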
Discussion

Although saponin adjuvants have been widely investigated for use in vaccines, their mechanisms of action are poorly understood [24]. A high concentration of adjuvant is generated at the local injection site after intramuscular vaccination, and the dominant cell population in contact with adjuvants is muscle cells. AJSAF was found to up-regulate both neutrophil-active (CCL3, CCL7, CXCL1, and CXCL5) and neutrophil-derived genes (CCL2, CCL3, and CCL4) in mouse quadriceps muscles [13]. In fact, many different chemoattractants with similar functions are usually present at sites of inflammation [25]. In this study, AJSAF elicited a temporary cytotoxicity and inflammation in C2C12 cells (Figure 1). Microarray analysis showed that AJSAF induced 1438 DEGs in C2C12 cells (Figure 3A). These DEGs were involved in cell proliferation, differentiation, migration, and death, as well as the response to wounding and external stimuli (Figure 3D). The core genes induced by AJSAF in C2C12 cells involved cell chemotaxis, inflammatory response, cell death, cell migration, the apoptotic process, cytokine activity, chemokine activity, and growth factor activity and were correlated with TNF signaling, PI3K-Akt signaling, MAPK signaling, and cAMP signaling (Figure 4C). Similarly, the AJSAF-induced core genes in mouse quadriceps muscles were related to GO functions including inflammatory response, cell chemotaxis, cell migration, cell death, the apoptotic process, chemokine activity, cytokine activity, and growth factor activity, as well as KEGG pathways such as TNF signaling, JAK-STAT signaling, MAPK signaling, and cAMP signaling (Figure 5C). Moreover, both sets of top 20 enriched terms of the AJSAF-induced core genes in vitro and in vivo were co-regulated and associated with cell activation, cell death, granulocyte chemotaxis, inflammatory response, the MAPK pathway, and second-messenger-mediated signaling (Figure 6A,B). These results suggested that AJSAF induced similar functions and pathways in vitro and in vivo.
The transcriptomic analysis indicated that AJSAF potentially activated the second messenger-MAPK-CREB pathway in vitro and in vivo. Indeed, a very early response was observed: the intracellular Ca2+, cAMP, and ROS levels peaked in AJSAF-stimulated C2C12 cells within 2 h (Figure 7A-C). These three second messengers can promote the phosphorylation of MAPK and activate the inflammasome [26][27][28]. In this study, AJSAF significantly and rapidly induced MAPK and CREB phosphorylation in C2C12 cells, especially of ERK1/2 and CREB, which nearly peaked at 0.25 h after stimulation. CREB is an important TF for mediating immune-related genes containing a cAMP-responsive element, including Il2, Il6, Il10, Tnfα, and Cox2 [29]. Resident macrophages in healthy skeletal muscle regulate tissue homeostasis, and the CREB-C/EBPβ cascade induces the expression of M2 genes and promotes muscle injury repair [30]. A previous study showed that AJSAF activated RAW264.7 macrophages to secrete IL-1β, TNF-α, CCL2, CCL22, and CXCL2 [23]. In addition, CREB phosphorylation directly inhibits NF-κB activation [31], which might explain why AJSAF did not affect NF-κB p65 phosphorylation in C2C12 cells. Furthermore, the inhibition assays revealed that the Ca2+, ERK1/2, CREB, JNK, and p38 MAPK inhibitors all reversed the up-regulation of IL-6, CXCL1, and COX2 in AJSAF-treated C2C12 cells (Figure 7F-H). These findings confirmed that the Ca2+-MAPK-CREB pathway was involved in AJSAF-induced inflammation in C2C12 cells. However, how AJSAF affects the intracellular Ca2+, cAMP, and ROS levels in C2C12 cells, and the role of these second messengers in mediating the adjuvant activity of AJSAF, are issues that warrant further evaluation.

AJSAF induced the lysis of C2C12 cells at 250 µg/mL from 2 h to 4 h. However, clearance of cell debris and maintenance of homeostasis were observed in AJSAF-treated C2C12 cells at the same concentration at 24 h. Dying and/or dead cells release danger-associated molecular patterns (DAMPs), which are sensed by immune and non-immune cells. DAMPs have been reported to activate the ERK1/2-CREB pathway to induce an inflammatory response and adaptive immunity [32][33][34][35]. Host DNA released from alum-treated cells influences its adjuvanticity [36]. AJSAF was observed to induce DAMPs with adjuvant activities, including S100A8, S100A9, and IL-33, in mouse quadriceps muscles [13]. Therefore, which DAMPs released by muscle cells are essential to the adjuvant activity of AJSAF also remains to be elucidated.
In conclusion, this study demonstrated that AJSAF elicited a temporary cytotoxicity and inflammation in C2C12 cells through the Ca2+-MAPK-CREB pathway (Figure 8). AJSAF might exert adjuvant activity by eliciting inflammatory cytokines, chemokines, and DAMPs at the injection site. This study is beneficial for understanding the molecular mechanism of action of saponin adjuvants.

Figure 4. Function of the AJSAF-induced core genes in C2C12 cells. (A) Enrichment plot of the leading-edge gene sets by gene set enrichment analysis. (B) Heatmap of the core genes. (C) GO function and KEGG pathway of the core genes. (D,E) Three densely connected networks (D) and their functional annotation (E) of the core genes by the Cytoscape MCODE plug-in. BP: biological process.

Figure 5. Function of the AJSAF-induced core genes in mouse quadriceps muscles. (A) Enrichment plot of the leading-edge gene sets by gene set enrichment analysis. (B) Heatmap of the core genes. (C) GO function and KEGG pathway of the core genes. (D,E) Four densely connected networks (D) and their functional annotation (E) of the core genes by the Cytoscape MCODE plug-in. BP: biological process, MF: molecular function.

Figure 6. Functions and hub genes of AJSAF-induced common core genes in vitro and in vivo. (A,B) Heatmap (A) and network (B) of the top 20 clusters with GO- and KEGG-enriched terms. (C,D) Top 20 transcription factors (C) and overlap Circos diagram (D) of the in vitro and in vivo core genes induced by AJSAF. (E) GO function and KEGG pathway of the common core genes. (F,G) UpSet plot (F) and protein-protein interaction network (G) of the 6 hub genes.
Repeated trips by patients seeking medical treatment overseas from the United Arab Emirates: Results from the Dubai Health Authority during 2009-2016

Background: The Dubai Health Authority (DHA) spends millions of dollars each year to cover United Arab Emirates (UAE) nationals seeking healthcare overseas. Many patients undertake multiple visits following their initial trip. This paper analyzes repeated trips following an initial medical trip overseas so that the DHA can more effectively address patients' needs and reduce the risks associated with multiple trips overseas. Methods: Administrative data were obtained from the DHA for UAE nationals who sought treatment overseas during 2009-2016. We examined the match of the medical specialty between the initial and subsequent trips. Medical specialty was the key independent variable, and other covariates included gender, age group, travel season, treatment destination and trip order. A mixed-effects logistic regression model with a subject-specific random intercept was used to assess the relationship of the outcome with the key variables of interest. Results: The analysis included 2,344 UAE nationals who had at least one trip following the initial medical visit. Oncology was the most common medical specialty sought by patients who travelled for repeated visits (18%). Patients in the age group 13-18 years had the highest odds among age groups of subsequent visits matching the medical specialty of the initial trip (OR 1.93, 95% CI: 1.351-2.757). Patients travelling for oncology, orthopedic surgery, neurosurgery, ophthalmology, obstetrics/gynecology and otolaryngology had higher odds of subsequent trips matching the specialty of the initial trip. The odds of subsequent trips matching the medical specialty of the initial trip increased with each additional trip (OR 1.73, 95% CI: 1.533-1.94). Conclusions: This is the first longitudinal study to examine the repeated medical trips of UAE nationals supported by the DHA. The results demonstrated that age group, medical specialty sought at the time of the initial trip, and number of trips were significant factors for understanding the match of the medical specialty between the initial and subsequent trips. The study results may help the DHA establish an overseas treatment registry to collect information about patients seeking medical treatment overseas. In addition, the study will support establishing follow-up care programs to improve patient outcomes and the clinical care of these patients.

Introduction

The demand for global healthcare services is experiencing tremendous growth (1)(2)(3)(4)(5)(6)(7). Each year, the Dubai Health Authority (DHA) - the government health entity that oversees healthcare services in the Emirate of Dubai - spends an average of 77 million US dollars to cover the costs for an average of 1,500 UAE national patients seeking healthcare overseas (8). The health sector in Dubai comprises government facilities [1], private facilities, and a free zone [2] (9). Although the government of Dubai provides free healthcare services to UAE nationals at public health facilities, as mandated by law, a number of patients travel to seek healthcare services outside the UAE. Since there are many governmental entities in the UAE and in Dubai other than the DHA that sponsor UAE nationals for treatment overseas, the number of these patients is not accurately enumerated and cannot be easily traced.
Patients travelling overseas for healthcare seek an array of treatment options ranging from preventive procedures to complex surgeries (10). In addition, patients travel to different treatment destinations ranging from low-middle income countries to high income countries (11)(12)(13)(14). The treatment destinations sought for healthcare services are determined by patients and their families, often in consultation with physicians. Under government law in the Emirate of Dubai, any Emirati citizen, irrespective of socioeconomic status, is eligible to seek healthcare services overseas. Seeking treatment overseas with government coverage is conditional on the unavailability of treatment in the government/public sector or on the belief that a better option exists overseas. A patient seeking healthcare overseas must provide a valid medical report from a physician in one of the DHA healthcare facilities stating the unavailability of optimal treatment in the government sector. The patient must sign and agree to government rules and regulations for the treatment plan at the treatment destination. This agreement states that patients should stay under the supervision of the DHA at all times to be granted final approval and financial coverage during the overseas treatment period.

Obtaining healthcare services overseas may be associated with risks and complications for patients compared with obtaining healthcare domestically (15). Receiving routine follow-up treatment may also be challenging for many of these patients, especially when multiple follow-up care visits are required overseas. Certain medical specialties require multiple follow-up visits or a long course of therapy for some conditions (16). Given the high cost of medical services overseas, the potential risks associated with multiple trips abroad and the availability of free healthcare services in the UAE, it is important to analyze the repeated medical trips following the initial trip (17)(18)(19)(20). Our key interest is to identify the medical specialties with more repeated trips. This analysis was performed by matching the specialty between the initial and subsequent trips. The study results will provide valuable information for the DHA to establish follow-up care policies and programs for patients seeking healthcare overseas. In addition, these results will aid in improving the policies for patients with multiple trips, ensuring that these repeated trips address patients' needs. Moreover, these findings may help to reduce the risk of exacerbating patients' health problems when repeated trips do not address the reasons for the initial trip overseas (21,22).

[1] The government healthcare sector in Dubai consists of two entities: the Dubai Health Authority (DHA) and the Ministry of Health (MOH). The DHA is the health authority responsible for healthcare in the Emirate of Dubai only. The MOH is the federal health authority responsible for healthcare in all the Emirates. [2] Free-trade zones (FTZs) are special economic zones established with the objective of offering tax concessions and customs duty benefits to expatriate investors.

Data Source, Study Design, Variables and Measures

Administrative data on UAE nationals who sought medical treatment overseas during the period 2009-2016 under the sponsorship of the DHA were obtained from the DHA. The data contained the following information: birth date, gender, departure date, medical specialty sought overseas, and treatment destination.
Birth date was converted to a categorical variable with 7 age groups based on patterns of association between the medical specialty sought and age. Medical specialty was defined as the specialty for which patients sought medical treatment at the treatment destination and was used as a categorical variable consisting of 42 categories. The American Board of Medical Specialties' categories were used to improve standardization and increase precision. Patients who had more than one medical specialty reported in the DHA records for a given trip during 2009-2016 were removed from the analysis (3.2%). Three new variables were created from the departure date: number of trips, trip order, and travel season. Number of trips is a discrete variable defined as the total number of trips taken by a patient to treatment destinations during the study period. Trip order is another discrete variable reflecting the sequence number of each trip during the study period for each patient. Travel season is a categorical variable with 4 seasons, defined as the season during which a patient travelled for treatment overseas. Treatment destination, a categorical variable with 22 countries, was defined as the country a patient traveled to for medical diagnosis/treatment overseas. The outcome of interest is binary and defined as whether the medical specialty a patient sought treatment for during subsequent trips matched the medical specialty of the initial trip overseas.

Ethical Issues

The study protocol was submitted to the Johns Hopkins School of Public Health Institutional Review Board, where it was determined not to involve human subjects research (IRB No: 00007896).

Statistical Analysis

Descriptive statistics were used to examine the independent variables. We applied mixed-effects logistic regression models with subject-specific random intercepts and robust standard errors to examine the matching of medical specialty between the initial and subsequent trips for a given set of covariates, including the types of medical specialties (23). Our mixed-effects logistic regression models were adjusted for clustering due to multiple observations and hierarchically fitted for potential confounders. Results were reported as odds ratios (ORs) to identify factors associated with matching the medical specialty at the initial and subsequent trips, with 95% confidence intervals (CIs) and p-values <0.05 indicating statistical significance (24). The covariates in the models included gender, age group, medical specialty, travel season, treatment destination and trip order. The variance inflation factor (VIF) was computed to ensure the absence of significant collinearity among the independent variables. The mean VIF was less than 2, which indicated there was no significant collinearity. Our analysis model is illustrated below:

logit(Pr(Y_ij = 1)) = β0 + u_i + β1·gender_i + β2·age group_i + β3·medical specialty_i + β4·travel season_ij + β5·destination_ij + β6·trip order_ij, with u_i ~ N(0, σ_u²),

where Y_ij indicates whether the specialty of subsequent trip j of patient i matches that of the initial trip and u_i is the subject-specific random intercept. The statistical analyses were conducted using Stata 13 (Stata Corporation, College Station, TX).
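The published analysis was run in Stata 13. As a rough, non-authoritative Python analogue, the sketch below derives the date-based variables and fits a random-intercept logistic model via a variational Bayes approximation; the file name and all column names are hypothetical placeholders.

```python
# Rough Python analogue of the analysis pipeline described above (the study
# itself used Stata 13). File and column names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("dha_overseas_trips.csv",
                 parse_dates=["birth_date", "departure_date"])

# Derived variables described in the Methods
df = df.sort_values(["patient_id", "departure_date"])
df["trip_order"] = df.groupby("patient_id").cumcount() + 1
season_map = {12: "winter", 1: "winter", 2: "winter",
              3: "spring", 4: "spring", 5: "spring",
              6: "summer", 7: "summer", 8: "summer",
              9: "autumn", 10: "autumn", 11: "autumn"}
df["season"] = df["departure_date"].dt.month.map(season_map)
age_years = (df["departure_date"] - df["birth_date"]).dt.days / 365.25
df["age_group"] = pd.cut(age_years, bins=[0, 4, 12, 18, 39, 54, 69, 120])  # 7 groups

# Outcome: does a follow-up trip's specialty match the initial trip's specialty?
initial_specialty = df.groupby("patient_id")["specialty"].transform("first")
df["match"] = (df["specialty"] == initial_specialty).astype(int)
followups = df[df["trip_order"] > 1]

# Logistic regression with a patient-specific random intercept
model = BinomialBayesMixedGLM.from_formula(
    "match ~ C(gender) + C(age_group) + C(specialty) + C(season)"
    " + C(destination) + trip_order",
    vc_formulas={"patient": "0 + C(patient_id)"},
    data=followups)
result = model.fit_vb()
print(np.exp(result.fe_mean))  # fixed-effect odds ratios
```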
Descriptive Statistics

There were 2,344 patients who had at least one trip following their initial trip when seeking healthcare services overseas from the United Arab Emirates through the Dubai Health Authority during 2009-2016. The frequencies of the study variables are shown in Table 1. Patients aged 19-39 were the largest age group (28.2%). Among the top 15 medical specialties that patients sought treatment for, oncology and orthopedic surgery had the highest numbers of repeated trips (18.0% and 12.7%, respectively). The most common destinations to which patients travelled overseas were Germany (45.6%), followed by the UK (18.2%). The mean, median and maximum numbers of trips are shown in Table 2. Although the maximum number of trips (19) was observed in another age group, patients aged 5-12 years had the highest mean number of trips (mean 2.61, SD 1.84). Among the top 15 medical specialties, patients who travelled for nephrology had the largest maximum number of trips (19) and the highest mean number of trips (mean 3.25, SD 3.43). Among the top 7 treatment destinations, patients who travelled to the USA had the largest maximum number of trips (19) and the highest mean number of trips (mean 2.85, SD 1.98). The majority of patients had one trip following their initial trip (n=1,289), as shown in Figure 1.

The odds of matching the medical specialty between the initial and subsequent trips

Unadjusted and adjusted odds ratios (ORs) from the mixed-effects logistic regression models are shown in Table 3. A patient in the age group of 13-18 years had the highest odds of the medical specialty matching between the initial and subsequent trips relative to a patient in the youngest age group of 0-4 years, given the same underlying propensity of matching the medical specialty (OR 1.93, 95% CI: 1.351-2.757). Patients in other age groups, including 19-39 years, 40-54 years, and 55-69 years, also had higher odds of the medical specialty matching between the initial and subsequent trips relative to the reference group. The odds of repeated visits for the same medical specialty decreased with age for age groups older than 18 years after adjusting for covariates. After adjusting for covariates, patients who initially travelled overseas for oncology, orthopedic surgery, neurosurgery, ophthalmology, obstetrics/gynecology and otolaryngology had higher odds of seeking these specialties on subsequent trips relative to patients who initially travelled for medical specialties outside the 15 most frequently sought. On the other hand, patients who initially sought medical treatment overseas for a medical specialty not specified in the DHA records had lower odds of seeking the same medical specialty during subsequent trips relative to patients who initially travelled for specialties outside the top 15, after adjusting for covariates (OR 0.14, 95% CI: 0.069-0.262). The odds of the medical specialty matching that of the initial trip increased with every additional follow-up trip (OR 1.73, 95% CI: 1.533-1.94).

Discussion

The majority of patients in our study made one trip following their initial trip. The study results indicate that for some medical specialties the repeated trips were significantly more likely to match the medical specialty of the initial trip. Patients who initially sought medical treatment overseas for oncology, orthopedic surgery, neurosurgery, ophthalmology, obstetrics/gynecology and otolaryngology were more likely to travel again for the same medical specialty. On the other hand, a patient seeking treatment overseas for a medical condition not specified in the DHA records was less likely to have subsequent trips matching the initial trip. Age group and the number of additional trips were factors influencing travelling again to seek advice and/or treatment for the same medical specialty. Patients who travelled overseas for healthcare sought a range of treatment options.
Some treatment options might necessitate follow-up care more than others as part of the treatment regimen. Although our results illustrated a generally positive association of age with repeated visits for the same medical specialty, the odds of the medical specialty matching between initial and subsequent trips decreased for the older age groups. Because detailed medical visit information was lacking, it was not possible to distinguish the purpose of subsequent visits. As a result of aging, older individuals are more prone to multiple chronic diseases, may experience health decline and are at risk of complications (25)(26)(27). While these factors were not captured by our study, they may explain why the odds of seeking the same medical specialty in subsequent trips decreased in older age groups. More information is needed to better understand the relationship between age and patterns of healthcare seeking abroad.

Ophthalmology was one of the top 15 medical specialties for which patients sought medical treatment overseas. Our previous study showed that patients travelling overseas specifically for this medical specialty had a higher than expected number of trips during the period 2009-2016 compared with other medical specialties (28). Although there are insufficient clinical studies on ophthalmology in the UAE, some research has linked ophthalmology visits to the association between diabetes mellitus and retinopathy (29)(30)(31)(32). Currently, the prevalence of diabetes in the UAE is among the highest in the world (33). Complications of diabetes mellitus could be one cause of seeking healthcare overseas for ophthalmology, although more information than is available in the administrative dataset would be needed to establish such a relationship.

Our study demonstrated that patients who initially sought medical treatment overseas for oncology or orthopedic surgery were more likely to have subsequent trips for the same medical specialty. In general, there is a lack of clinical and pathological studies in the UAE related to cancers, or to orthopedic or spine surgeries (34)(35)(36)(37). However, some studies have been conducted on rheumatoid arthritis, showing a gap between the onset of the disease and timely referral to appropriate treatment options (38)(39)(40). Orthopedic surgery was the second most common medical specialty in our study for which patients sought treatment overseas, with relatively high odds of subsequent trips being made for the same specialty relative to other medical specialties for which treatment was sought. However, the lack of information in the administrative data regarding the reasons for travel, such as diseases classified according to international standards, makes it difficult to identify the different medical conditions that led to seeking orthopedic surgery, including whether it was due to arthritis, injuries or other conditions. Similarly, our administrative data lacked sufficient detail regarding the underlying conditions of patients seeking treatment in neurosurgery, obstetrics/gynecology and otolaryngology. As a result, it was difficult to determine what led to seeking treatment overseas for these medical specialties. However, our previous research on the motivational factors for choosing treatment destinations indicated that stroke (brain hemorrhage) is one of the factors for seeking medical treatment overseas (41,42).
Stroke is a multifactorial disease in which a combination of risk factors can influence the probability over time of a person experiencing this condition. Hypertension, diabetes mellitus, cardiovascular disease, lifestyle, smoking and previous transient ischemic attack are risk factors associated with stroke or brain hemorrhage that may lead to neurosurgery (43). Stroke could be one cause of seeking medical treatment overseas for neurosurgery; brain tumors, neurodegenerative diseases or other medical conditions could also be involved (44,45).

Patients who travelled overseas for medical treatment sponsored by the Dubai Health Authority travelled to a number of different destinations. Currently, there is no prospective registry at the DHA that captures comprehensive information related to the overseas treatment of patients. A registry focusing on the overseas treatment of patients should be designed to capture patients' sociodemographic profiles, disease-specific characteristics, breakdowns of expenditures, treatment destination details, and patient-reported outcomes. This is necessary for future research and for establishing cost-effective policies (46). Our results demonstrated that 6 of the 15 most frequent medical specialties for which treatment is initially sought overseas are more likely to have subsequent trips for the same medical specialty compared with medical specialties for which overseas treatment is sought less frequently. Additional research should begin by focusing on these top medical specialties. Among the top 7 treatment destinations, 79% of the patients travelled to high income destinations such as Germany, the UK, the USA, Singapore, and Spain [1]. To ensure that patients' needs are met during the repeated trips and that their medical conditions are managed effectively, measuring the value of care received overseas is necessary. A report including the detailed expenditures of each trip should be mandatory. Aggregating and analyzing such information will help guide the government in establishing future comparative and cost-effectiveness studies. These studies will help to assess the extent to which resources allocated to overseas treatment are being utilized optimally and are obtaining the best value for the healthcare expenditures (47-50).

We acknowledge some limitations of our study. The data collected from the DHA did not include international classification of disease codes or information about the severity and type of diseases for which overseas treatment was sought. However, the American Board of Medical Specialties classification of medical specialties was used to achieve some standardization in data management. In addition, patients who had more than one medical specialty reported in the DHA records during the study period for a given trip were excluded from the analysis on the assumption that they could potentially introduce bias into the analysis. The exclusion decision was made because of the inability to access patients' records for further information to determine the primary medical specialty for which the patient sought healthcare overseas. Using previously collected administrative data limited our ability to access additional variables that could explain the patterns of care seeking detected. Since the study was limited to patients sponsored through the Dubai Health Authority only, we cannot generalize the results to patients sponsored by other health authorities in the UAE, such as the Ministry of Health and the Department of Health in Abu Dhabi.
However, the availability of these data is a strength, since it can begin to provide guidance for improving the policies and strategies related to sponsoring overseas treatment. Establishing a registry that contains all the essential variables would prepare the government to conduct future research measuring patient outcomes after treatment overseas. For example, measuring pre- and post-operative health status following treatment overseas through patient-reported outcome assessment could guide clinical care following overseas treatment (51)(52)(53). Accordingly, follow-up care programs could be tailored to address the needs of patients after the overseas experience based on medical specialty guidelines regarding appropriate follow-up time and required therapy (54). In the future, different types of telemedicine could be explored as a substitute for follow-up overseas, using patient assessments, monitoring and outcomes reporting (55). The results of this study suggest some areas in which the government could provide treatment options in the UAE, whether in the government or the private sector, or through establishing public-private partnerships. This step could help channel patients toward better utilization of the private sector in the UAE as an alternative to overseas treatment.

[1] High income destination: a country with high gross national income (GNI) as per the World Bank country classification.

Conclusion

This is the first longitudinal study related to overseas treatment in the UAE, and it therefore contributes to the limited empirical research in the field of travel medicine. The results demonstrated that age group, medical specialty sought at the time of the initial trip, and number of trips were significant factors for understanding whether repeated trips were made to seek treatment for the same medical specialty. Creating an overseas treatment registry is an important next step to capture comprehensive information related to patients travelling for healthcare services overseas. Measuring patient outcomes through patient-reported outcome tools is important to guide the clinical care of patients following their overseas experience. Follow-up care programs are essential to help assure high-quality patient outcomes and the cost-effective use of resources. In the future, telemedicine could be explored as one strategy for providing a substitute for the risks associated with treatment overseas and could allow patients to utilize the services provided in the public and private sectors within the UAE (56-60).

Ethical approval

The study protocol was approved by the Johns Hopkins School of Public Health Institutional Review Board, which determined the study to be not human subjects research (IRB No: 00007896), given the use of previously collected administrative data without patient identifiers.

Consent for publication

Not applicable.

Availability of data and materials

The data that support the findings of this study are from the Dubai Health Authority. However, restrictions apply to the availability of these data, which were used under a special agreement for the current study; thus, these data are not publicly available. Data are available from the corresponding author upon reasonable request and with the permission of the Dubai Health Authority.

Competing interests

The authors declare that they have no competing interests.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
However, we would like to express our gratitude to the Ministry of Higher
The participation of Early Maladaptive Schemas (EMSs) in the perception of pain in patients with migraine: A psychological profile ABSTRACT Young's early maladaptive schemas questionnaire (YSQ-S3) is used to understand psychological aspects. Objective: EMSs were evaluated in patients with migraine. Methods: Sixty-five subjects were evaluated using the YSQ-S3 under standard conditions in a room with air conditioning at 22 ± 2°C. The subjects were stratified by morbidity (migraine), gender (male/female) and age (18-29 / 30-39 / 40-55). Controls (without migraine), n = 27, and patients (with migraine), n = 38; men (n = 19) and women (n = 46); participants aged 18-29 years, n = 34, aged 30-39 years, n = 14, and aged 40-55 years, n = 17. Data were analyzed using the Chi-square test, with p-values <0.05 considered significant. Results were expressed as percentages in contingency tables. Results: There was a significant association between migraine and female gender (84.21%; p-value <0.05, Table 1), and between the hypervigilance and inhibition domain, specifically unrelenting standards (56.52%; p-value <0.014, Table 2), and female gender in patients with migraine. Moreover, there was a significant association between hypervigilance and inhibition, specifically unrelenting standards (73.68%; p-value <0.0001) and self-punishment (84.21%; p-value <0.0001), in patients with migraine of both genders (Table 3). Conclusion: The individuals with migraine had a psychological profile of being overly demanding with themselves and others and self-punishing, and this was more frequent in women. One of the main goals of the physiotherapist is to alleviate or eradicate acute and chronic pain in patients. 1 However, even with the proper use of several techniques, the different forms of expression of pain among individuals 2 still appear to pose a great challenge to interpreting the efficacy of the treatment process, even hindering the perception of its termination. 3 From the physiological point of view, knowledge about the difference between the peripheral and central circuits of pain seems only to provide support for understanding the differences between the perceptions of acute and chronic pain, respectively. 4 Substances such as bradykinin, prostaglandins, leukotrienes, substance P, serotonin and acetylcholine are released with peripheral pain. These substances act on different populations of neurons, reducing the activation threshold of nociceptors. 4,5 In central sensitization, the responses of dorsal horn neurons are increased after repeated stimulation of C-fibers, which release glutamate and stimulate N-methyl-D-aspartate (NMDA) receptors in these neurons until the signal reaches the thalamus. 5,6 Understanding of the pathophysiologic mechanism of pain does not yet seem to explain the diverse responses of different patients to similar intensities of pain that follow the same circuit. One individual with acute pain shows little evidence of that pain, 7 while another subject demonstrates the sensation of pain through much suffering 8 expressed in resounding cries. 9 Thus, the major problem for the physiotherapist still seems to involve the different behaviors related to pain intensity. 2 There are descriptions in the literature 10,11 concerned with the cognitive model of fear arising from the need to avoid pain. 12 However, the impression is that this perspective does not yet answer the question of how to help patients minimize their suffering.
In this context, it seems relevant to understand the origin of the different expressions of sensations, which often leads to the physiotherapist feeling a sense of failure after several tiresome maneuvers during the recovery process. 9 Thus, for this study we chose an interdisciplinary approach to understand the psychological aspects of pain, not for treatment as such, but to see the patient as a whole and be able to refer them for treatment of the psyche, if their needs no longer relate to the neurophysiological circuits of pain perception. In this interdisciplinary literature search, Young (2003) 13 appears to give the most support to this approach. He establishes that people internalize thoughts in childhood, which become part of their personality structure, and that, in adulthood, these thoughts can promote social adaptation or maladjustment. 14 For thought to exist, it is necessary to have fully functioning neural networks, 15 with each neural network accommodating a type of thinking that is modulated by small chains of proteins called peptides. 16,17 Thus, the way of thinking and acting in the world mobilizes neurochemical signaling, because, for each type of thinking, the hypothalamus releases neuropeptides that enable short-term or long-term mobile synapses. 18 The activation of these neural networks in schematic formats involves the ways of thinking and acting as human beings; Young (2003) 19 called this complex set 'early schemas' and, according to him, they can be adaptive or maladaptive. After many years of studies, Young constructed a scale to identify the early maladaptive schemas (EMSs) in subjects 19 that can be modified in psychotherapy sessions. 20 Thus, the purpose of this study was to investigate the participation of EMSs in the perception of pain, especially migraine, because this is not merely a headache but an active and incapacitating disease as classified by the Brazilian Society of Headache, and also because it is a risk factor for brain lesions. 21 Subjects Sixty-five patients, 38 with migraine (Migraine Group) and 27 without migraine (Control Group), from a population of 207 patients of the Brain and Neurofeedback Technology clinic (Cérebro e Tecnologia Neurofeedback Recife - CTNR) were evaluated. These patients were undergoing brain training at the CTNR and presented at the clinic with medical diagnoses indicative for this auxiliary neurofeedback treatment. The patients studied were university students, air force personnel and businesspersons. Only the university students were single, while all the other subjects were married. All subjects underwent evaluation using the EMS questionnaire (YSQ-S3) under standard conditions in an air-conditioned room at a temperature of 22 ± 2°C. Inclusion criteria were subjects of both sexes aged 18-60 years, who had chronic migraine without aura and used non-opioid analgesics, antiemetic agents or anti-inflammatory drugs only during crises. Chronic migraine without aura was defined according to the second and revised edition of the International Headache Society classification of 2004, redefined to include chronic headache occurring on eight (previously 15) or more days per month for more than three months in the absence of overuse of medications. Subjects under 18 or over 60 years of age were excluded according to the criterion of the YSQ-S3 validation process in Portuguese. 22 Groups The Chi-square distribution and simple random sampling method (90% confidence level with 10% probability of error) were used.
The subjects of this cross-sectional study were stratified in two ways (gender and age group).

Assessments
This work was approved by the local Research Ethics Committee (CAAE Number #1.383.600 on January 5, 2016). Before data collection, all the subjects signed informed consent forms and all the women confirmed that they were not in the pre-menstrual period or in the prodromal period of the migraine on the day of the assessment.

Specifications of the Young Schema Questionnaire - Short Form 3 (YSQ-S3)
The EMS questionnaire used in this study was developed by Jeffrey Young in 2003. 19 Individuals cannot develop a sense of confidence, of establishing themselves in the world by themselves, generally possessing overprotective families that, in an attempt to protect the child, end up not reinforcing their autonomy.

3rd Domain: Impaired limits • Entitlement/grandiosity • Insufficient self-discipline
Linked to failures to apply realistic limits, the ability to follow rules and norms, respect the rights of others and fulfill personal goals. Selfishness is the main characteristic of these individuals, and the family is generally permissive.

4th Domain: Orientation to the other • Subjugation • Self-sacrifice • Recognition-seeking
In order to gain approval and avoid retaliation, patients in this domain place an overemphasis on meeting the other's wants and needs at the expense of their own. The family of origin usually establishes a conditional love relationship, that is, the child only receives attention and approval if it suppresses its free expression and behaves in the desired way.

5th Domain: Hypervigilance and inhibition • Negativism/pessimism • Emotional inhibition • Unrelenting standards • Punitiveness
Because of a rigid, repressive education in which there was no possibility to express their emotions in a free way, individuals with schemas linked to this domain are generally sad and introverted, with overly rigid internalized rules, exaggerated self-control and pessimism, and hypervigilance for possible negative events.

Data analysis
Data analysis was carried out using the SIGMA STAT computer program for Windows - Version 2.0 (Jandel Corporation). The results were analyzed using the Chi-square test, with a p-value <0.05 considered statistically significant. Results are expressed as percentages and represented in contingency tables.

Association between migraine, gender and age group
There was a statistically significant association between migraine and female gender (84.21%; p-value <0.05 - Table 1).

Association of hypervigilance and inhibition and the EMSs negativism/pessimism, emotional inhibition, unrelenting standards and self-punishment with patient gender
There was a significant association between hypervigilance and inhibition and unrelenting standards (56.52%; p-value <0.014) and female gender in patients with migraine (Table 2).

Association of hypervigilance and inhibition and the EMSs negativism/pessimism, emotional inhibition, unrelenting standards and self-punishment with patient gender and migraine
There was a significant association between hypervigilance and inhibition and unrelenting standards (73.68%; p-value <0.0001) and self-punishment (84.21%; p-value <0.0001) in female patients with migraine (Table 3).
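To make the contingency-table analysis concrete, the following minimal Python sketch reproduces the style of test reported above. The cell counts are hypothetical: they are chosen only so that the marginals match the reported group sizes (38 migraine / 27 control, 46 women / 19 men) and the reported 84.21% female share among migraine patients; they are not the authors' raw data.

from scipy.stats import chi2_contingency
import numpy as np

# Rows: migraine yes/no; columns: female/male. Hypothetical counts that are
# merely consistent with the marginals reported in the paper.
table = np.array([
    [32, 6],    # migraine:    32 women, 6 men (32/38 = 84.21% female)
    [14, 13],   # no migraine: 14 women, 13 men
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}")
print("row percentages:")
print((table / table.sum(axis=1, keepdims=True) * 100).round(2))

A p-value below 0.05 from such a table corresponds to the kind of association reported in Tables 1-3.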
DISCUSSION
This study found a significant association between migraine, female gender, hypervigilance and inhibition, unrelenting standards and self-punishment. Although the designations of unrelenting standards and self-punishment, which are EMSs linked to the hypervigilance and inhibition domain, were used in this study, it is important to clarify that these findings were not specifically of the EMSs, because they did not meet the requirements established by Young et al. 19 on the response screen, as the expected means for this classification should be above an average of 5.0. During the statistical analysis, it was found that the tendency toward a significant result in the 65 subjects evaluated in this study would not indicate a personality disorder related to the EMSs, as there were few results with average scores above 5.0. Thus, an average score above 3.8 was chosen for each item on the YSQ-S3 response screen that could characterize at least a psychological profile. 26 The choice of this mean score was based on reports by other authors 25,27 who used the YSQ-S3 in their clinics during workups preceding psychotherapy and suggested that a mean score of 3.0 would be a very basic starting point for understanding each patient's way of thinking and the origin of their upbringing using the domains. 25 The unrelenting standards and self-punishment found in this study, for example, demonstrate that these individuals experienced a rigid, repressive education in which there was no possibility of expressing their emotions freely. Thus, individuals who have scores between 3.0 and 5.0 may not have EMSs, but appear to have a strong tendency to be overly demanding of themselves, thus suggesting a psychological profile or way of being, yet without solid evidence of a personality disorder. 28 Nevertheless, some hold that 'way of being' is related to personality disorders. 26 Henri Ey regards some people (explosive, theatrical, systematic, meticulous, obsessive, obscene, very emotional and with other difficult traits) as having a pathological ego, characterizing not only a way of being in the world, but above all, a way of existing in the world. 29 Karl Jaspers affirms that personalities that make themselves and those around them suffer are not normal. According to Jaspers, abnormal personalities represent non-normal variations of human nature, which can be perfectly understood as personality disorders. 30 However, 'way of being' was not used in this sense in the present study, because the subjects appeared to affect only themselves with the discomfort of migraine.

Author contribution
Ketlin Helenise dos Santos Ribas: participated in the writing of the entire text, in data collection and statistical analysis. Silano Souto Mendes Barros: contributed to writing of the Introduction and was involved in the data collection. Valéria Ribeiro Ribas: assisted in data collection and sample calculations. Maria da Glória Nogueira Filizola: assisted in the data collection and devising of the methodology. Valdenilson Ribeiro Ribas: helped in the textual organization of the Results and the Discussion, contributed to the statistical analysis, and was the corresponding author. Renata de Melo Guerra Ribas: assisted in data collection and statistical analysis. Paulo César da Silva: assisted in the data collection and in the searching and organization of references. Hugo André de Lima Martins: assisted in data collection and oversaw the overall study as Advisor.
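As a closing illustration of the scoring logic discussed above, the sketch below classifies hypothetical per-domain mean item scores using the thresholds adopted in this study (above 5.0 suggesting an EMS, above 3.8 marking a psychological profile); the item ratings are invented for demonstration only.

def classify(mean_score, ems_cutoff=5.0, profile_cutoff=3.8):
    # Thresholds follow the study: >5.0 suggests an EMS; >3.8 marks a
    # psychological profile tendency; anything lower is unremarkable.
    if mean_score > ems_cutoff:
        return "possible EMS"
    if mean_score > profile_cutoff:
        return "psychological profile tendency"
    return "unremarkable"

# Hypothetical item ratings (YSQ-S3 items are rated on a 1-6 scale).
domains = {
    "unrelenting standards": [5, 4, 4, 3, 5],
    "self-punishment":       [4, 4, 5, 4, 4],
    "emotional inhibition":  [2, 3, 2, 3, 2],
}

for name, items in domains.items():
    mean = sum(items) / len(items)
    print(f"{name}: mean = {mean:.2f} -> {classify(mean)}")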
Integrative Transcriptomic and Small RNA Sequencing Reveals Immune-Related miRNA–mRNA Regulation Network for Soybean Meal-Induced Enteritis in Hybrid Grouper, Epinephelus fuscoguttatus♀ × Epinephelus lanceolatus♂ A 10-week feeding experiment was conducted to reveal the immune mechanism for soybean meal-induced enteritis (SBMIE) in hybrid grouper, Epinephelus fuscoguttatus♀ × Epinephelus lanceolatus♂. Four isonitrogenous and isolipidic diets were formulated by replacing 0, 10, 30, and 50% of fish meal protein with soybean meal (namely FM, SBM10, SBM30, and SBM50, respectively). The weight gain rate of the SBM50 group was significantly lower than those of the other groups. Plica height, muscular layer thickness, and goblet cell numbers in the distal intestine of the SBM50 group were much lower than those in the FM group. The intestinal transcriptomic data, including the transcriptome and miRNAome, showed that a total of 6,390 differentially expressed genes (DEGs) and 92 DEmiRNAs were identified between the SBM50 and FM groups. DEmiRNAs (10 known and 1 novel miRNA) and their DE target genes were involved in the immune-related phagosome, natural killer cell-mediated cytotoxicity, Fc gamma R-mediated phagocytosis, and intestinal immune network for IgA production pathways. Our study is the first to offer transcriptomic and small RNA profiling for SBMIE in hybrid grouper. Our findings offer important insights for the understanding of the RNA profile and further elucidation of the underlying molecular immune mechanism for SBMIE in carnivorous fish. INTRODUCTION In recent years, the global contribution of fish meal to aquafeeds has sharply declined (1). As a result, soybean meal, which is considered one of the most promising candidates for fish meal replacement, can partially or fully replace fish meal but introduces many anti-nutritional factors. Consequently, fish enteritis induced by plant proteins has become one of the main challenges for sustainable aquaculture (2), and it occurs in a dose-dependent manner (1,3). Soybean meal-induced enteritis (SBMIE) has been found in many commercial fish species such as Atlantic salmon (Salmo salar) (4), grass carp (Ctenopharyngodon idella) (5), and turbot (Scophthalmus maximus L.) (6,7), and mainly occurs in carnivorous fish (4,6,7). The symptoms of SBMIE are most apparent in the posterior/distal intestine of turbot (6,7) and grass carp (5), the most important mucosal immune organ (8). Fish SBMIE has been found to be accompanied by a decrease in the height of villi and microvilli, downregulation of the tight junction proteins claudin-4, occludin, and ZO-1 at the mRNA level, and upregulation of pro-inflammatory cytokine genes, including TNF-α, IL-1β, IL-8, and IL-16 (9-12), influencing the innate immune response. Zebrafish SBMIE is T cell-dependent and has a T helper (Th) 17 cytokine profile (13). What are the underlying immune mechanisms in fish SBMIE? The new omics technologies, including genomics, proteomics, and transcriptomics, have great potential for investigating and explaining the complex relationship between fish nutrition and immunity, both in intestinal health and disease (14). At the genomic level, most components associated with T lymphocyte function have been identified in fish, suggesting that gut-associated lymphoid tissue has similar functionalities between fish and mammalian T lymphocytes (15).
At the transcriptomic level, immune-related pathways of fish SBMIE have been gradually reported, showing that cytokine–cytokine receptor interaction, NOD-like receptor interaction, the intestinal immune network for IgA production, and the NF-kB, Jak-STAT, T-cell receptor, and TNF signaling pathways played key roles in the response to SBM stress (3,16). Fish meal replacement in fish diets by alternative protein sources can change the fish intestinal proteome, including innate immune proteins (17). Especially interesting is the fact that miRNAs are involved in regulating intestinal function, including epithelial cell growth (18), mucosal barrier function (19), and the development of gastroenteric diseases (20-22). An important aspect is that miRNAs can also regulate mRNA expression in fish at the post-transcriptional level. Recently, studies of the miRNAome in turbot intestinal function have reported that miRNAs contributed to the intestinal immune responses preventing host infection, in which the potential target genes of differentially expressed miRNAs were involved in multiple functional categories, including the RIG-I signaling pathway, immune defense/evasion, the toll-like receptor signaling pathway, and inflammatory responses (23). Also, it was found via small RNA sequencing that fish diet could affect the expression of intestinal miRNAs, their target genes, and immune-related pathways, including cell adhesion molecules, ECM-receptor interaction, the apoptosis signaling pathway, cytokine-cytokine receptor interaction, and the VEGF signaling pathway (3,24). However, there has been a lack of investigation of the underlying immune response combining both transcriptome and small RNA sequencing, and this requires further elucidation, especially in carnivorous fish. Hybrid grouper (Epinephelus fuscoguttatus♀ × Epinephelus lanceolatus♂) is a carnivorous fish and a main farmed species in China owing to its outstanding taste and its better growth rate and survival compared with the broodfish. Information on how nutrition influences the intestinal health of hybrid grouper has gradually accumulated in recent years (25-30). Our previous research used metabolomics technology to identify 17 potential markers of SBMIE in hybrid grouper (31). Building on that work, this study aims to reveal the immune-related miRNA-mRNA regulation network for SBMIE in hybrid grouper by integrative transcriptome and small RNA profiling from the perspective of molecular immunology, and to provide further insight toward solving the problem of fish intestinal health. Experimental Diets The use of hybrid grouper juveniles was approved by the Animal Research and Ethics Committees of Guangdong Ocean University, China. Four isonitrogenous and isolipidic diets were formulated to contain 0, 7.41, 22.24, and 37.07% soybean meal (SBM) by replacing 0% (FM, control), 10% (SBM10), 30% (SBM30), and 50% (SBM50) of fish meal (FM) protein, respectively. The formulation of the basic experimental feeds is presented in Table 1. All ingredients were systematically mixed with lipid sources such as fish oil, soybean oil, and soybean lecithin, and then purified water was added to produce a homogenous mixture. The dough was pelleted through a double-helix extrusion machine (F-75, South China University of Technology, China). Feeds (2.5 mm diameter) were air-dried and then stored at −20 °C until feeding. Feeding Trial Hybrid grouper juveniles were obtained from a native species farm (Zhanjiang, China).
All fish were adapted to the feeding system for 2 weeks by feeding with a commercial diet. Uniformly sized fish (mean initial weight ± SE = 17.01 ± 0.04 g) were randomly divided into four groups in triplicate, with 30 individuals in each fiberglass tank (300 L). The fish were fed slowly twice a day, at 08:00 and 17:00, for 10 weeks. During the experiment, the water temperature fluctuated from 28 to 30 °C, the dissolved oxygen concentration was kept at >7 mg/L, and ammonia and nitrate were kept at <0.03 mg/L. Sample Collection At the termination of the 10-week feeding experiment, the fish in each tank were fasted for 24 h before sample collection and were then counted and weighed to determine growth indexes, including weight gain rate, feed conversion ratio, and survival rate. After weighing, distal intestines of two fish per tank were collected and instantly transferred to 4% paraformaldehyde solution for histological examination. At the same time, distal intestines of another three fish per tank were collected as a single sample, instantly frozen in liquid nitrogen, and then stored at −80 °C for RNA extraction. Based on growth performance (see section Growth Performance) and histological examination, small RNA and transcriptome analyses were performed on distal intestine samples of the FM and SBM50 groups to ensure the maximum difference between samples, thereby increasing the probability of detecting differential expression. Intestinal Morphology The fixed distal intestine samples from the FM and SBM50 groups were dehydrated in a series of graded ethanol and embedded in paraffin. Distal intestine sections (7 µm thick) from each sample were cut and then stained with hematoxylin/eosin. The sections were observed under an inverted microscope (Nikon, Japan), and 10 plicas and muscle layer thicknesses (MLT) were randomly selected per slice. Plica height (PH), plica width (PW), MLT, and the number of goblet cells (GC) per slice were measured using image acquisition software (NIS Elements, version 4.60, Nikon, Japan). Transcriptome Sequencing and de novo Assembly One microgram of total RNA from the FM and SBM50 treatment groups was used for transcriptome library preparation. Total RNA was purified by beads containing oligo (dT). First-strand cDNA was then generated in a First-Strand Reaction System by PCR, and the second-strand cDNA was generated as well. The cDNA fragments with adapters were amplified by PCR, and the products were purified using AMPure XP Beads. The library was validated on the Agilent Technologies 2100 Bioanalyzer for quality control. Transcriptome sequencing was carried out on a BGISEQ-500 platform (BGI-Shenzhen, China). Trinity (32) was used to achieve de novo assembly with clean reads, and Tgicl (33) was then used to cluster transcripts into Unigenes. The expected number of fragments per kilobase of transcript sequence per million base pairs sequenced (FPKM) was used to calculate mRNA gene expression. Differentially expressed genes (DEGs) between the two groups (SBM50 and FM) were identified using the cutoffs |log2FC| > 1 and P < 0.001 with the DESeq R package (34,35). After assembly, All-Unigenes were searched and annotated against the publicly available protein databases, including Nr (NCBI non-redundant protein sequences), Nt (NCBI non-redundant nucleotide sequences), KOG (EuKaryotic Orthologous Groups), Swiss-Prot, and GO (Gene Ontology). The pathway assignments were performed by sequence searches against the KEGG (Kyoto Encyclopedia of Genes and Genomes) database. KEGG terms with corrected P-values (Q-values) ≤ 0.05 were considered significant.
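As a concrete illustration of the DEG cutoff just described (|log2FC| > 1 and P < 0.001), the following minimal Python sketch filters a DESeq-style results table. The gene identifiers are borrowed from elsewhere in the paper purely as examples; the fold changes and p-values are hypothetical placeholders, not the study's actual output.

import pandas as pd

# Hypothetical DESeq-style output for the SBM50 vs. FM comparison.
deseq_out = pd.DataFrame({
    "gene_id": ["Unigene24888_All", "Unigene31830_All", "CL8081.Contig7_All"],
    "log2FoldChange": [2.4, 1.8, -0.3],
    "pvalue": [2e-6, 4e-4, 2e-2],
})

# Apply the cutoffs used in the study: |log2FC| > 1 and P < 0.001.
degs = deseq_out[(deseq_out["log2FoldChange"].abs() > 1)
                 & (deseq_out["pvalue"] < 0.001)].copy()
degs["direction"] = degs["log2FoldChange"].map(
    lambda fc: "up in SBM50" if fc > 0 else "down in SBM50")
print(degs)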
Transcriptome (de novo assembly) sequencing data were deposited into the NCBI SRA database with the accession number SUB7020170. Construction and Sequencing of Small RNA Libraries Six small RNA libraries were constructed from the FM and SBM50 treatment groups. Total RNA extraction was performed from the distal intestine using Trizol Reagent (Invitrogen, USA). Subsequently, 1 µg of total RNA per sample was used for small RNA sequencing. The quality of the RNA samples was evaluated using the Agilent 2100 Bioanalyzer. Small RNA fractions were ligated to 3′ and 5′ adapters. Quantitative reverse transcription PCR (RT-PCR) was carried out on the adaptor-ligated small RNAs. PCR products were purified by the QIAquick Gel Extraction Kit (Qiagen, Germany) and used for sequencing on the BGISEQ-500 platform (BGI-Shenzhen, China). Clean reads were obtained by cleaning low-quality tags, removing adapter sequences, and filtering out adaptor-ligated contaminants and sequences shorter than 18 nucleotides (nt). The final reads were mapped to the Hypoplectrus puella (GCA_900610375.1) reference genome by Bowtie2 (36). Clean reads were compared against small RNAs (rRNA, scRNA, snoRNA, snRNA, tRNA, and mRNA) using the Rfam database to annotate small RNA sequences. Finally, miRBase 20.0 was used to identify known miRNAs. Hairpin structures were used to predict novel miRNAs using miRDeep2 software (37). miRNA expression levels were compared between the SBM50 and FM groups to identify differentially expressed miRNAs (DEmiRNAs). Firstly, data were normalized to obtain transcripts per million (TPM) values using the following formula: normalized expression = actual miRNA count / (total reads) × 1,000,000 (38). Fold-change values were then calculated as log2(SBM50/FM) expression. The corrected P-value corresponds to the differential gene expression test using the Bonferroni method (39). Differential miRNA expression between the two groups was analyzed with DESeq software based on the following thresholds: P-value ≤ 0.01 and |log2 ratio| ≥ 1. RNAhybrid (40) and miRanda (41) software were used to predict the potential target genes of miRNA candidates, as described previously (42,43). The DAVID gene annotation tool was used for the KEGG pathway annotation of the predicted miRNA targets. Small RNA sequencing data were deposited into the NCBI SRA database with the accession number SUB7175134.
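The TPM normalization and fold-change rule given above can be made concrete with a short sketch. The raw counts are invented (the directions for the first two names are merely set to match miRNAs discussed later in the paper, and "miR-x" is a placeholder), and the pseudocount of 0.5 is an assumption added here to avoid division by zero, not something the authors state.

import numpy as np

def tpm(counts):
    # normalized expression = actual miRNA count / total reads * 1,000,000
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum() * 1_000_000

fm_counts    = np.array([300, 160, 880])    # hypothetical raw counts, FM library
sbm50_counts = np.array([1500, 40, 900])    # hypothetical raw counts, SBM50 library

fm_tpm, sbm_tpm = tpm(fm_counts), tpm(sbm50_counts)
log2fc = np.log2((sbm_tpm + 0.5) / (fm_tpm + 0.5))   # log2(SBM50/FM)

for name, fc in zip(["miR-124", "miR-24", "miR-x"], log2fc):
    flag = "passes |log2 ratio| >= 1" if abs(fc) >= 1 else "below threshold"
    print(f"{name}: log2(SBM50/FM) = {fc:+.2f} ({flag})")

In the actual pipeline, the Bonferroni-corrected P-value (P ≤ 0.01) would be tested alongside this fold-change threshold.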
Network Analysis of DEmiRNA and DEG Interaction Pearson's correlation coefficients between DEmiRNAs and their target genes were calculated using the correlation function in RStudio. To obtain the positive and negative correlations between the two groups, the potential target genes of miRNAs were overlapped with the identified upregulated or downregulated DEGs, respectively. All of the relationship pairs between DEmiRNAs and their DE target genes were used to construct the interaction network using Cytoscape v.3.7.2 software. Validation by Real-Time Quantitative PCR (RT-qPCR) RT-qPCR validation was carried out on the same samples used for transcriptome sequencing (n = 3). Primers were designed from the candidate gene sequences using Premier 5.0 software and the online Primer-BLAST program. The primers used in this study are provided in Table S1. One microgram of the total RNA used for RNA sequencing was reverse transcribed into cDNA. Real-time PCR assays were conducted on a CFX96 real-time PCR Detection System (Bio-Rad, Hercules, CA) with 5 µL SYBR Green Master Mix (Takara, China). β-actin was selected as the reference gene according to a previous study (26). The small RNA of the same samples used for sequencing was extracted using an RNAiso for Small RNA Kit (Takara, China) according to the manufacturer's protocol. Subsequently, first-strand cDNA was synthesized for mature miRNA expression analysis using a Mir-X miRNA First-Strand Synthesis Kit (Code No. 638315, Takara, China). The qPCR was carried out using a miRNA SYBR Green RT-qPCR Kit (Takara, China) with the provided miRNA reference gene (U6). Relative quantitative levels were calculated based on the 2^-ΔΔCt method (44). Statistical Analysis The normal distribution and the homogeneity of variance of the growth indexes were tested, followed by one-way analysis of variance and Tukey's test. Morphological analysis between the two groups was assessed by two-tailed unpaired Student's t-test (GraphPad). Differences were considered statistically significant at P < 0.05. All statistical analyses were carried out using SPSS 24.0 software. The barplot was generated with GraphPad Prism 8.0.1 software. Growth Performance The survival rate (SR) was not affected by the dietary treatment levels (P > 0.05, Table 2). The weight gain rate (WGR) of the SBM50 group was significantly lower than that of the other groups (P < 0.05). The feed conversion ratio (FCR) of the SBM50 group was significantly higher than that of the other groups (P < 0.05). Histological Examinations and Intestinal Morphometry More swelling of the lamina propria (LP) was observed in the SBM50 group compared with the FM group, and the intestinal villi of the SBM50 group showed signs of shedding (Figures 1A,B). The plica height (PH), muscular layer thickness (MLT), and number of goblet cells (GC) of the SBM50 group were much lower than those of the FM group (Figure 1C, P < 0.01). There was no significant difference in plica width (PW) between the FM and SBM50 groups (P > 0.05). Analysis of mRNA Sequencing A total of six qualified libraries from the FM and SBM50 groups, with three biological replicates per treatment, were sequenced. An overview of the sequencing and assembly data is presented in Table 3. Approximately 33.49 and 33.6 Gb of clean reads were obtained in the FM and SBM50 groups, respectively. More than 87.1% of the reads had Q-scores at the Q30 level, and more than 75.8% of the clean reads were aligned. The length distribution of the Unigenes in all six libraries is shown in Figure S1. Analysis of Small RNA Sequencing Six small RNA libraries were constructed, with three biological replicates per treatment. A total of 85,789,507 and 87,445,155 raw reads were found in the FM and SBM50 groups, respectively (Table 4). Also, 77,115,163 and 81,360,125 clean reads were found in the FM and SBM50 groups, respectively. The total mapped tags to the reference genome in the FM and SBM50 groups were 71,413,180 (92.60%) and 76,824,617 (94.43%), respectively. Most small RNAs were 21-23 nt in length in all six libraries, with 22 nt being the most frequent length (Figure S2), and more than 61.7% were miRNA in the catalog of small RNAs in all six libraries (Figure S3). A total of 682 mature miRNAs (Table S3) and 29 novel miRNAs (Table S4) were identified in these six small RNA libraries. Integration Analysis of the DEmiRNAs and DEGs A total of 244 miRNA-mRNA interactions were identified between the FM and SBM50 groups, with the involvement of 92 DEmiRNAs and 211 DEGs (Figure 6, Table S7). A positively correlated expression pattern was seen for 180 mRNA-miRNA pairs.
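A minimal sketch of the correlation step that produces such pairs is shown below; the expression vectors across the six libraries (three FM, three SBM50) are hypothetical, with miR-196 and CL4654.Contig2_All used only because they appear as a regulator-target pair in the text.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical expression values across the six samples (FM1-3, SBM50 1-3).
mirna  = np.array([10.2, 11.5,  9.8, 24.0, 26.3, 22.1])   # e.g., miR-196
target = np.array([ 8.1,  7.9,  8.6, 19.5, 21.0, 18.2])   # e.g., CL4654.Contig2_All

r, p = pearsonr(mirna, target)
kind = "positively" if r > 0 else "negatively"
print(f"r = {r:.3f} (p = {p:.3g}): {kind} correlated miRNA-mRNA pair")

Pairs whose predicted targets overlap the DEG lists would then be kept and drawn as edges in the Cytoscape network.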
Most miRNAs had multiple possible target genes, while different miRNAs could regulate the same target. For instance, miR-196 was the regulator of CL4654.Contig2_All, CL7786.Contig3_All, and Unigene16944_All, whereas miR-20a-5p and miR-459-5p_1 could both regulate the expression of CL8081.Contig7_All. KEGG Enrichment Analysis of Differential Target Genes Differential target genes were predicted and annotated to 241 pathways. The top 5 of the top 20 KEGG pathways were phagosome, tuberculosis, osteoclast differentiation, natural killer cell-mediated cytotoxicity, and mineral absorption (ko04978) (Figure 7, Table S8). To further explain the possible immune response in SBMIE, the results were combined with the KEGG enrichment of DEGs, which showed 15 miRNA-mRNA pairs involved in the phagosome pathway and 8 miRNA-mRNA pairs involved in natural killer cell-mediated cytotoxicity (Table 5). Real-Time Quantitative PCR Validation In total, 10 miRNAs and 10 mRNAs were selected to test their expression, and the results suggested that the DEmiRNAs (except for miR-194-5p) and the DEGs showed expression patterns similar to the high-throughput sequencing data (Figure 8). DISCUSSION The nutritional trial of this study revealed that 22.24% SBM, i.e., 30% substitution of FM protein, did not significantly influence the growth of hybrid grouper compared to the FM diet. However, the effect of SBM on growth was substitution-related, and growth performance became progressively compromised with increasing SBM substitution level, which is consistent with previous studies (45,46). In this study, growth performance was significantly reduced when the dietary SBM content reached 370 g/kg, i.e., 50% substitution of the FM protein. Hybrid grouper fed the diet with SBM replacing 50% of the FM protein showed swelling of the lamina propria and reduction of plica height, muscular layer thickness, and number of goblet cells in the distal intestine, indicating that enteritis and intestinal injury had appeared. The severity of the histopathological changes observed under SBM application depends on the level of soybean inclusion. SBMIE is now commonly used as a model for the study of intestinal inflammation in fish (6,7,47). The histopathological changes of SBMIE have been widely researched and are characterized by a swelling of the subepithelial mucosa and lamina propria, a reduced mucosal fold height, a profound infiltration of various inflammatory cells, and loss of normal enterocyte supranuclear absorptive vacuolization (13,31,48). Liu et al. (10) found that turbot fed a diet with SBM replacing 40% of the fish meal protein showed obvious enteropathy, including a reduction of the absorptive surface and obvious infiltration of mixed leukocytes in the lamina propria. To reveal the underlying immune response of fish SBMIE, 6,390 DEGs were identified in the distal intestine of hybrid grouper in this study. Also, the enhanced gene expression was found to involve immune- and inflammation-related pathways, including phagosome, natural killer cell-mediated cytotoxicity, the intestinal immune network for IgA production, Fc gamma R-mediated phagocytosis, and the NF-kappa B signaling pathway. The above results suggested that these pathways play a vital role in fish SBMIE. Similar immune-related pathways have been reported for SBMIE in other carnivorous fish, such as salmon (16,49) and turbot (6,7).
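The pathway enrichment behind these lists is typically assessed with an over-representation test. The sketch below uses a hypergeometric model, which is an assumption here (the authors do not state the exact test), and the background size and pathway size are likewise assumed; the DEG counts echo figures reported in this paper (6,390 DEGs overall; 92 DEGs in the natural killer cell-mediated cytotoxicity pathway).

from scipy.stats import hypergeom

N_background = 20000   # annotated genes in the background (assumed)
K_pathway    = 150     # genes annotated to the pathway (assumed)
n_degs       = 6390    # DEGs reported for SBM50 vs. FM
k_overlap    = 92      # DEGs in the pathway (as reported for NK cytotoxicity)

# P(X >= k): chance of drawing at least k pathway genes among the DEGs.
p_enrich = hypergeom.sf(k_overlap - 1, N_background, K_pathway, n_degs)
print(f"enrichment p = {p_enrich:.3g}")

With these numbers, the expected overlap is about 48 genes, so an observed 92 yields a very small p-value, consistent with the pathway being flagged as enriched.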
It is worth noting that the downregulated genes fell mainly into KEGG pathways involved in lipid metabolism, such as biosynthesis of unsaturated fatty acids, linoleic acid metabolism, cholesterol metabolism, fat digestion and absorption, and arachidonic acid metabolism. This may indicate that impaired lipid metabolism could be a consequence of "tissue malfunction" (47,49). The results enable a better understanding of why LC-PUFA biosynthesis, cholesterol biosynthesis, lipid digestion, and the PPAR signaling pathway of the distal intestine were influenced when Atlantic salmon ingested feed in which the fish meal was partially replaced by soybean meal (1,16). Despite the major contribution of mRNAs, miRNAs also play a key role in the immune processes of fish SBMIE. In this study, the predicted target genes of DEmiRNAs were annotated to 340 signaling pathways. Immune-related pathways, including ECM-receptor interaction, the NF-kappa B signaling pathway, and the IL-17 signaling pathway, were enriched. The downregulation of miR-192-3p and miR-212-5p expression was involved in the regulation of the ECM-receptor interaction pathway. The downregulation of genes involved in the ECM-receptor interaction pathway in response to SBM stress was also reported in grass carp (5). To introduce the miRNAs in more detail, enteritis-related miRNAs have been found in both humans and other mammals. The present miRNA results include inflammatory bowel disease (IBD)-related miRNAs, such as miR-124, miR-24, miR-221, and miR-132. The results of this study showed that upregulation of miR-124 expression and downregulation of miR-24, miR-221, and miR-132 expression were observed in the SBM50 group (Table S5). Similar expression patterns were found in colon tissues of children with active ulcerative colitis (UC), where decreased levels of miR-124 appeared to enhance the expression and activity of STAT3, which could induce inflammation and pathogenesis (50). There were elevated levels of miR-24, miR-221, and miR-132 in colonic biopsies from UC, which suggested that they are important regulators of the intestinal barrier that may be essential in the pathogenesis of IBD (51). The above results indicated that miR-124, miR-24, miR-221, and miR-132 may play important roles in SBMIE of hybrid grouper juveniles, which may suggest their therapeutic potential. To further explain the possible immune response in SBMIE through integrative transcriptomic and small RNA sequencing, the KEGG enrichment pathways of the differential target genes were analyzed. Combined with the KEGG enrichment pathways of the DEGs, it was concluded that immune-related signaling pathways such as phagosome, natural killer cell-mediated cytotoxicity, Fc gamma R-mediated phagocytosis, and the intestinal immune network for IgA production were enriched. The fact that the most enhanced gene expression was in the phagosome pathway suggested the involvement of macrophages as the main intestinal phagocytes during enteritis (52). A total of 14 miRNA-mRNA pairs also suggested that the phagosome pathway could play a key role in intestinal inflammation. In addition, this study identified 92 DEGs related to the natural killer cell-mediated cytotoxicity pathway. The role of this pathway in the immune response to pathogens has also been reported in different fish species such as large yellow croaker (Larimichthys crocea) (53) and half-smooth tongue sole (Cynoglossus semilaevis) (54). A previous study reported that mammalian natural killer (NK) cells mediate cytotoxic activity via two distinct pathways (55).
NK cells can release cytotoxic granules, including perforin and granzymes, onto the surface of diseased cells. Granzymes can then stimulate caspase activation, mitochondrial dysfunction, or apoptosis (55). In this study, the upregulation of granzyme (gene id: Unigene24888_All) and perforin (gene id: Unigene31830_All, Blast Nr annotated in Table 5) gene expression indicated that granule-mediated cytotoxicity may be triggered by the targeted release of lytic granules toward a locally attached target cell. Wu et al. (5) reported that the intestinal immune network for IgA production pathway was upregulated in the early stages of grass carp in response to high-SBM-content stress. Similar results were also found in the present study: most genes in the pathways of Fc gamma receptor-mediated phagocytosis and the intestinal immune network for IgA production were significantly upregulated. The role of these two pathways in the immune response to pathogens has also been reported in fish such as half-smooth tongue sole (53), large yellow croaker (54), and darkbarbel catfish (Pelteobagrus vachellii) (56). Previous studies reported that the target genes of DEmiRNAs were involved in the pathways of FcγR-mediated phagocytosis and the intestinal immune network for IgA production (57,58). In this study, related miRNA-mRNA pairs were also enriched in the two pathways mentioned above (Table 5). Thus, our results suggested that the phagosome, natural killer cell-mediated cytotoxicity, Fc gamma R-mediated phagocytosis, and intestinal immune network for IgA production pathways may play a vital role in SBMIE of carnivorous fish. In conclusion, our study is the first to offer transcriptomic and small RNA profiles for SBMIE in hybrid grouper. Overall, 6,390 mRNAs and 92 miRNAs were differentially expressed under dietary SBM stress. Our findings support the notion that DEmiRNAs and their target mRNAs play an important role in immune regulation. Also, investigation of the KEGG enrichment pathways by integrative transcriptomic and small RNA profiling revealed that the immune mechanism for SBMIE in hybrid grouper may be associated with the phagosome, natural killer cell-mediated cytotoxicity, Fc gamma R-mediated phagocytosis, and intestinal immune network for IgA production pathways. Our findings offer important insights for the understanding of the RNA profiles and further elucidation of the underlying molecular immune mechanism for SBMIE in carnivorous fish.
Exit of Plasmodium Sporozoites from Oocysts Is an Active Process That Involves the Circumsporozoite Protein Plasmodium sporozoites develop within oocysts residing in the mosquito midgut. Mature sporozoites exit the oocysts, enter the hemolymph, and invade the salivary glands. The circumsporozoite (CS) protein is the major surface protein of salivary gland and oocyst sporozoites. It is also found on the oocyst plasma membrane and on the inner surface of the oocyst capsule. CS protein contains a conserved motif of positively charged amino acids, region II-plus, which has been implicated in the initial stages of sporozoite invasion of hepatocytes. We investigated the function of region II-plus by generating mutant parasites in which the region had been substituted with alanines. Mutant parasites produced normal numbers of sporozoites in the oocysts, but the sporozoites were unable to exit the oocysts. In vitro as well, there was a profound delay in the release of mutant sporozoites from oocysts upon trypsin treatment. We conclude that the exit of sporozoites from oocysts is an active process that involves region II-plus of CS protein. In addition, the mutant sporozoites were not infective to young rats. These findings provide a new target for developing reagents that interfere with the transmission of malaria. Introduction The Plasmodium life cycle in Anopheles mosquitoes begins with the ingestion of a blood meal containing Plasmodium gametocytes. After fertilization of the resulting gametes, the zygotes transform into motile ookinetes that traverse the midgut epithelium, reach the basal lamina, and then transform into oocysts. The young oocyst, surrounded by a capsule and by the basal lamina, undergoes multiple mitotic nuclear divisions and progressively enlarges without cytokinesis. At the same time, the cytoplasm is subdivided by membrane clefts forming structures named "sporoblasts." Later, uninucleate sporozoites bud from the sporoblast membrane. The mature oocyst is about 50 µm in diameter and contains thousands of sporozoites. Sporozoites leave the oocysts asynchronously, enter the hemolymph, and then invade the salivary glands, where they remain until they are injected, with the saliva, into the skin of the mammalian host [1,2]. In order to reach the flowing hemolymph, sporozoites must traverse two physical barriers: the oocyst capsule and the mosquito basal lamina. Because oocyst sporozoites display limited movement [3], their egress from oocysts is generally thought to be a passive process. Early ultrastructural observations revealed the presence of small openings in the capsule of mature oocysts and in the basal lamina. Occasionally, sporozoites are found "penetrating" these openings and entering the hemolymph [4]. The oocyst capsule contains laminin of mosquito origin and displays trans-glutaminase activity, probably of parasite origin [5,6]. In addition, the inner surface of the capsule is covered with the Plasmodium circumsporozoite (CS) protein [7-9]. The development of sporozoites in oocysts is CS-dependent. When the CS gene is deleted, the oocysts are devoid of mature parasites [10]. To investigate the mechanisms leading to this developmental arrest, we generated Plasmodium berghei parasites bearing different mutations in the CS coding region. In one of the P. berghei CS mutants, we substituted the positively charged amino acids of the conserved region II-plus with alanines.
Region II-plus is located at the 5′ end of the thrombospondin type 1 repeat (TSR) domain of CS protein. Several in vitro observations strongly suggest that region II-plus participates in the initial steps of sporozoite attachment to, and invasion of, the host's hepatocytes via interaction with heparan sulfate proteoglycans (HSPGs) on the host cells [11,12]. Here we show for the first time that the mutation in region II-plus of CS protein prevents the exit of sporozoites from oocysts and the progression of the Plasmodium life cycle. In addition, the mutant sporozoites are unable to infect rats. Construction of CS-RIImut and CS-WT A P. berghei clone with mutated region II-plus of CS protein (R290A, K291A, R292A, and K293A) was obtained by homologous recombination. In order to avoid any potential defects in the locus associated with the recombination event, a control clone, CS-WT, which produces wild-type CS protein, was generated by the same method (pRCS-WT and pRCS-RIImut, Figure 1A and 1B). A PstI site was introduced in pRCS-RIImut in order to detect the presence of the mutations by PCR and Southern blot analysis [13]. The schematic structure of CS and the sequence of region II-plus of wild-type and mutant CS are shown in Figure 1C. Genomic DNA from WT, CS-WT, and CS-RIImut was digested with XbaI and PstI and subjected to Southern blot hybridization. WT displays a 7.9-kb band, whereas CS-WT displays a 5.5-kb band and CS-RIImut a 2.2-kb band, indicating that CS-RIImut and CS-WT have a correct recombination locus (Figure 1D). This was confirmed by PCR amplification specific for recombinants (Figure 1E and 1F), subsequent PstI digestion (Figure 1E), and sequencing of the PCR products. The sequences of the coding regions of wild-type and mutant CS are as expected. We cannot exclude the possibility that the substitution of the four positively charged amino acids of region II-plus led to secondary changes in the structure of CS protein. CS-RIImut Sporozoites Do Not Exit from Oocysts Groups of Anopheles stephensi mosquitoes were infected with CS-WT, with a clone of the wild-type P. berghei NK65 strain (WT), and with two independent clones of CS-RIImut. CS-WT and WT are identical; therefore, CS-WT was used as the wild-type control in all experiments. The numbers of oocysts and oocyst sporozoites at 14 d after the blood meal (post-infection [PI]) were very similar in CS-WT and in the two mutant clones (Table 1). However, profound differences were observed at later time points. At days 16 and 18 PI, mutant-infected mosquitoes contained many more oocyst sporozoites compared to wild-type-infected ones (Figure 2A). In two other independent feeding experiments, similar results were obtained at day 16 PI: 50,000 and 96,000 oocyst sporozoites/mosquito for CS-RIImut, versus 35,000 and 75,000 oocyst sporozoites/mosquito for CS-WT, respectively. By contrast, the hemolymph of mosquitoes infected with CS-WT parasites contained many more sporozoites compared to mosquitoes infected with mutant parasites. CS-WT sporozoites entered the hemolymph beginning at day 12 PI, and peak numbers were reached around day 18 PI. In contrast, even at 28 days PI, only minimal numbers of CS-RIImut sporozoites were found in the hemolymph (Figure 2B and 2C). Thus, the observed increase in the numbers of CS-RIImut oocyst sporozoites between days 14 and 18 is most likely a consequence of their inability to be released into the hemolymph.
CS-RIImut Sporozoites Display Normal Morphology and Motility The morphology of CS-RIImut sporozoites was analyzed by immunofluorescence assays (Figure 3A), transmission electron microscopy, and immuno-electron microscopy using monoclonal antibodies (3D11) to the repeats of the CS protein (Figure 3B-3F). CS-RIImut did not display any abnormalities in sporozoite morphology during development (Figure 3B). The detailed structure of the mutant sporozoite surface is shown in Figure 3C and 3D. The structures of the trimembrane pellicle (plasma membrane and inner membrane complex) and the subpellicular microtubules are indistinguishable from those of wild type. Patterns of CS protein labeling in CS-WT and CS-RIImut sporozoites were indistinguishable. In the mutants, very similarly to wild type, CS protein was detected on the surface of budding or fully developed sporozoites (Figure 3A and 3E) and on the inner surface of the capsule (Figure 3E and 3F) [2,14]. To compare the amounts of CS protein in mutant and CS-WT parasites, extracts of CS-WT and CS-RIImut oocyst sporozoites were analyzed by Western blot (Figure 3G). The intensity of both the precursor (54 kDa) and mature (44 kDa) forms of CS [15] was very similar in the WT and the mutant. The two bands appear slightly smaller in the mutant as a result of the replacement of four basic residues (R290, K291, R292, and K293) with alanines. As a control, we analyzed levels of TRAP (thrombospondin-related anonymous protein), another sporozoite surface protein [16], and found that it was not affected in the CS mutant (Figure 3G). We conclude that the recombination event and mutations did not grossly affect CS protein expression or stability. It could be argued that sporozoite exit from oocysts requires sporozoite motility and that motility is impaired in the mutants. In fact, previous studies have shown that sporozoite motility is neither required for, nor does it ensure, the exit of sporozoites from oocysts [14,17]. In contrast to the circular gliding observed in salivary gland sporozoites, the movements of oocyst sporozoites are mostly limited to stretching and back-and-forth gliding [3]. We observed that approximately 3% of CS-RIImut oocyst sporozoites displayed discontinuous gliding motility, and approximately 10%-15% displayed stretching and bending. These numbers are very similar to those of CS-WT. CS-RIImut Oocysts Are More Resistant to Proteolytic Activity Although CS-WT and CS-RIImut are morphologically indistinguishable, develop equally well in mosquitoes, move similarly, and contain equal levels of CS protein, sporozoite egress from the mutant oocysts is profoundly defective.

Synopsis
Malaria affects hundreds of millions of people, and kills at least 1 million children per year. The infective stages of the malaria parasites, named "sporozoites," are found in the salivary glands of Anopheles mosquitoes and are injected along with the saliva during blood feeding. From the skin, sporozoites enter the blood circulation and invade liver cells, where the parasites multiply. When they exit the liver, these parasites infect blood cells and can cause severe symptoms. If ingested by mosquitoes, the blood-stage parasites continue their life cycle in the insect stomach. Thousands of sporozoites are formed within a cyst-like structure (oocyst). The sporozoites come out of the oocyst and infect the salivary gland, where they remain until injected back into humans. Malaria parasites are increasingly resistant to drugs, mosquitoes are difficult to eliminate, and effective vaccines are not yet available.
New tools to combat malaria are urgently needed. One exciting approach, although its application is in the distant future, is to release in endemic areas genetically modified mosquitoes that are resistant to parasite growth. This paper provides a new target for generating these "transmission-blocking" mosquitoes and shows that the exit of sporozoites from the oocysts is an active process that requires the enzymatic digestion of components of the oocyst wall. If these enzymes are inhibited in transgenic mosquitoes, sporozoites will never reach the salivary gland. Little is known of the process of sporozoite exit from oocysts, but some information can be obtained from the erythrocytic stages of the parasite. During development in red blood cells, malaria parasites reside inside a parasitophorous vacuole. Merozoite egress requires the rupture of the parasitophorous vacuole and of the membrane of the red blood cell. Release of merozoites from infected erythrocytes requires proteases and is inhibited by inhibitors of proteolytic enzymes [18-20]. Plasmodium falciparum falcipain-2 (a cysteine protease) cleaves erythrocyte membrane skeletal proteins at late stages of parasite development [21], facilitating merozoite egress. To examine the possible role of a proteolytic event in the release of sporozoites from oocysts, we treated isolated midguts from CS-WT- or CS-RIImut-infected mosquitoes (14 days PI) with trypsin and measured the number of released sporozoites (Figure 4). The lower temperature (25 °C) was chosen to mimic the natural conditions of oocyst development in the mosquito midgut. In the absence of trypsin, very few sporozoites were released from either CS-WT- or CS-RIImut-infected midguts, even after 3 h of incubation. Treatment with trypsin for 40 min led to a significant increase in the number of sporozoites released from CS-WT, but not from CS-RIImut. Release of sporozoites from CS-RIImut oocysts was achieved only after extended treatment with trypsin. In Figure 4 we show the release at 14 d PI, but identical results were observed with 18-d oocysts (data not shown). This effect is trypsin-specific, because the release of sporozoites was abolished when soybean trypsin inhibitor was included in the incubation (data not shown). These results indicated that sporozoite egress from oocysts is a protease-dependent process and that CS-RIImut oocysts are more resistant to trypsin treatment. CS protein is detected on the inner surface of the oocyst capsule [7-9]. Therefore, on their way out of the oocysts, sporozoites must first traverse the CS protein layer beneath the capsule. The positively charged residues, arginines and lysines, of region II-plus are preferred substrates for certain cysteine proteases and for serine proteases such as trypsin. Therefore, it is possible that proteolysis of the CS protein layer underneath the capsule is required for sporozoite egress and is abolished in the mutant in which region II-plus has been substituted. Thus, a possible explanation for the defect of the mutant sporozoites is that the substitutions made in region II-plus render CS protein more resistant to the putative protease. Trypsin treatment of the oocysts most likely cleaved the CS protein in many places, not only in region II-plus, since lysines and arginines are very abundant in the CS domains outside the repeats.
Nevertheless, the subtle mutations introduced in CS region II-plus resulted in a clear difference in the kinetics of sporozoite release after treatment with the enzyme. The exit of sporozoites from oocysts is most likely a stepwise process aimed at the sequential disruption of the capsule and the mosquito-derived basal lamina. In P. falciparum, PfCCp2 and PfCCp3, two secreted multidomain putative adhesive proteins, play an essential role in sporozoite release (in the absence of either protein, sporozoites were not released from the oocysts), but their localization and mechanism of action are unknown [22]. Our observations suggest that proteolysis of the CS protein that lies beneath the capsule is likely to be an early event in sporozoite egress. Our hypothesis is supported by a recent finding that a papain-like cysteine protease, egress cysteine protease 1 (ECP1), is required for sporozoite egress from oocysts [23]. Members of the papain family of cysteine proteases, similar to trypsin, consistently attack peptide bonds formed by lysine and arginine. We presume that ECP1 is only active when oocysts are "mature" and the sporozoites are ready to enter the hemolymph. At that particular time, only the CS protein that is beneath the capsule, but not CS on the sporozoite surface, is cleaved by ECP1, facilitating the egress of sporozoites from the oocysts. A possible explanation for this selectivity is that the enzyme that cleaves the capsule CS protein is also part of the capsule. We cannot exclude the possibility that the egress of sporozoites from oocysts is preceded by a proteolytic cascade and that ECP1 is only one of the participants. CS-RIImut Oocyst Sporozoites Are Not Infective to Mammalian Hosts As mentioned earlier, in vitro experiments strongly suggest that region II-plus of CS protein plays an important role in the initial stages of sporozoite invasion of hepatocytes. Initial studies demonstrated that CS protein binds specifically to HSPGs in sections of human liver and that this binding is region II-plus-dependent [24,25]. Synthetic peptides representing region II-plus specifically inhibit CS protein binding and sporozoite adhesion to HepG2 cells, the reference cell line that allows sporozoites to develop into mature exo-erythrocytic forms [26]. This inhibition is dependent on the downstream positively charged residues of region II-plus [26]. These and other findings (reviewed in [11,12]) make it likely that the lysines and arginines (highly conserved in Plasmodium species) of region II-plus form ionic bonds with the negatively charged sulfate groups of the HSPG glycosaminoglycan chains (GAGs). Our region II-plus mutant provided an opportunity to confirm the in vitro studies using a genetic approach. Wild-type and mutant sporozoites obtained by mechanical disruption of the midguts were incubated briefly with HepG2 cells to compare their binding to host cells. The sporozoites were obtained at a time when they are fully developed but not yet egressing from oocysts. There was a significant difference (~40%) in the binding of wild-type and mutant sporozoites (Figure 5). We emphasize that this assay was performed under static conditions. The shear force generated by circulating blood in vivo should lead to a more dramatic decrease in the adhesion of mutant sporozoites to cells, as shown previously in vitro when the attachment assay was performed under rotating conditions [12].
Indeed, this is what we observed when we compared the infectivity of oocyst sporozoites from CS-WT and CS-RIImut in rats. The CS-WT and CS-RIImut sporozoites were injected intravenously into rats, and the prepatent period of infection (the time until the proportion of infected erythrocytes reaches 0.01%) was measured. In two independent experiments, there was no infection in rats injected with 1-9 million CS-RIImut oocyst sporozoites (Table 2). In contrast, blood-stage parasites were detected in all rats injected with as few as 100,000 CS-WT oocyst sporozoites or 2,000 CS-WT salivary gland sporozoites [27]. Tewari et al. [28] also investigated the function of CS region II-plus. In their study, the mutation was introduced in a P. berghei line in which the endogenous CS (PbCS) had been replaced by P. falciparum CS (PfCS), a replacement that itself leads to a substantial decrease in parasite infectivity [29,30]. Tewari et al. deleted the entire region II-plus of the CS protein, including two of the four cysteines of the CS thrombospondin domain, and noted that the mutants did not enter the salivary glands. However, they did not measure the number of parasites in the mosquito hemocoel. It is therefore possible that the mutant of Tewari et al. has the same phenotype as ours. We conclude that the same CS protein motif participates in two different stages of the sporozoite lifecycle. Region II-plus is first required for sporozoite egress from oocysts; later, it is required for the invasion of mammalian hepatocytes. It is tempting to speculate that the receptors for region II-plus are identical in the mammalian host liver and in the oocyst capsule/basal lamina in the mosquito midgut, i.e., that they are HSPGs. The presence of HSPGs has been documented in Drosophila [31], and HSPG core proteins are represented in the Anopheles genome (http://www.ensembl.org) but have not yet been characterized biochemically. Perhaps capsule/basal lamina HSPGs interact with the positively charged stretch of amino acids of CS region II-plus, and the cleavage of the peptide bonds in this stretch by the newly identified cysteine protease (ECP1), or by other proteases participating in the same proteolytic cascade, is an early and necessary step for sporozoite egress from oocysts.

Materials and Methods

Parasite. The parasite is a wild-type pyrimethamine-sensitive, gametocyte-producing clone of the P. berghei NK65 strain.

DNA construct and mutagenesis. pQWCS-WT contains a pUC19 backbone and 2.8 kb of the CS cassette [32] cloned into the XbaI and XhoI sites. Mutations were introduced into the CS coding region using the QuikChange Site-Directed Mutagenesis Kit (Stratagene, La Jolla, California, United States). Primer1 (sense, 5′-GGTATAAGAGTTGCTGCAGCAGCAGGTTCAAATAAGAAAGC-3′) and its reverse complement, primer2, were used to mutate R290, K291, R292 and K293 to alanines. The resulting construct is named "pQWCS-RIImut."
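As an aside, the relationship between the two mutagenesis primers can be made explicit: primer2 is simply the reverse complement of primer1, as stated above. The short Python sketch below derives it; the sequence of primer2 is not given in the text, so the printed output is computed from that stated relationship rather than quoted from the protocol.

```python
# Minimal sketch: deriving the antisense QuikChange primer (primer2) as the
# reverse complement of primer1, per the relationship stated in the methods.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence (5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(seq.upper()))

primer1 = "GGTATAAGAGTTGCTGCAGCAGCAGGTTCAAATAAGAAAGC"  # sense, 5'->3'
primer2 = reverse_complement(primer1)                   # antisense, 5'->3'
print(primer2)  # GCTTTCTTATTTGAACCTGCTGCTGCAGCAACTCTTATACC
```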
Targeting construct. pMD205GFP was used as the backbone to generate the targeting constructs pRCS-WT and pRCS-RIImut. pMD205GFP contains the mutated copy of the P. berghei DHFR-TS gene that confers resistance to pyrimethamine [33], an Aequorea victoria green fluorescent protein open reading frame (ORF) [34], and 2.2 kb and 0.55 kb of the 5′ and 3′ UTRs of P. berghei DHFR-TS. pQWCS-WT was digested with KpnI and XhoI to release the 2.1-kb fragment containing the CS cassette (0.6 kb of 5′ UTR, the 1-kb CS protein ORF and 0.5 kb of 3′ UTR). This fragment was then cloned into pMD205GFP treated with the same restriction enzymes to generate the intermediate construct pCS-WT. Targeting construct pRCS-WT was constructed by cloning a 0.6-kb BamHI-NotI fragment of the CS 3′ UTR (500-1,100 base pairs downstream of the stop codon) into pCS-WT treated with the same restriction enzymes. Targeting construct pRCS-RIImut was constructed in the same way as pRCS-WT.

Parasite transfection and genotype analysis. Schizonts were collected for transfection, and targeting constructs were introduced by electroporation as previously described [33]. Southern blotting was performed with the entire CS ORF and 0.6-kb 5′ UTR as a probe. The probe was labeled with DIG-ddUTP by random priming, and chemiluminescence was detected using CSPD (Roche, Basel, Switzerland). Specific amplification of the 5′ recombinant locus was performed with a forward primer, CS1 (sense: 5′-CTTTTTCACCCTCAAGTTGGG-3′, which hybridizes to the CS 5′ UTR missing in pRCS-WT/pRCS-RIImut), and a reverse primer, PB103 (sense, 5′-TAATTATATGTTATTTTATTTCCAC-3′, which hybridizes to the 5′ UTR of DHFR-TS). Specific amplification of the 3′ recombinant locus was performed with a forward primer, PB106 (sense, 5′-TGTGCATGCACATGCATGTA-3′, which hybridizes to the 3′ UTR of DHFR-TS), and a reverse primer, CS4 (sense, 5′-CGAAATAAGTTACTATTCGTGCCC-3′, which hybridizes to the CS 3′ UTR missing in pRCS-WT/pRCS-RIImut).

Mosquito infection and analysis of parasite development. A. stephensi mosquitoes were fed on infected young Sprague-Dawley rats and dissected at various days PI. Midgut and salivary gland sporozoite populations were prepared from the various mosquito compartments and analyzed as previously described [27]. Hemolymph was perfused from the hemocoel of each mosquito with RPMI medium, via air displacement from a micro-inoculation capillary inserted through the neck membrane into the hemocoel. A small opening was made in the distal abdominal wall by gently removing the last two segments. The first three drops of perfusate (hemolymph and medium) from each mosquito were collected, and perfusate from at least 20 mosquitoes was collected in total. The number of sporozoites was determined using a haemocytometer.

Indirect immunofluorescence assays. Oocyst sporozoites were collected at day 18 PI, centrifuged onto glass slides, and fixed with 4% paraformaldehyde for 20 min at room temperature. Sporozoites were then pre-incubated in PBS-3% BSA for 1 h at 37 °C, followed by incubation with various anti-CS antibodies for 1 h at 37 °C. Bound anti-CS was detected with FITC-conjugated anti-mouse IgG.

Western blotting analysis of sporozoite lysates. Protein samples were analyzed by SDS-PAGE and electrophoretically transferred to polyvinylidene difluoride membranes. CS-WT and CS-RIImut oocyst sporozoites were collected on days 14 and 18 PI, resuspended in SDS sample buffer, and incubated for 5 min at 70 °C prior to loading. The migrating bands were revealed with antibodies to P. berghei TRAP and CS protein, followed by horseradish peroxidase-coupled donkey anti-rabbit and sheep anti-mouse IgG, respectively, and visualized with enhanced chemiluminescence (ECL; Amersham Bioscience, Little Chalfont, United Kingdom).

Analysis of sporozoite infectivity. To analyze sporozoite motility, sporozoites were incubated in 3% BSA-RPMI 1640 medium for 3 h prior to microscopic examination [3]. To determine the infectivity of sporozoites in vivo, young Sprague-Dawley rats were injected intravenously with sporozoite suspensions in RPMI 1640. The parasitemia of inoculated rodents was checked daily by a 10-min examination of a Giemsa-stained blood smear.
Sporozoite attachment assay. A total of 100,000 midgut sporozoites were added to confluent HepG2 cells and centrifuged down onto the cells. After a 5-min incubation at 37 °C, cells were washed twice with PBS and fixed with 4% formaldehyde. Adherent sporozoites were stained with a combination of anti-CS 3D11 and FITC-conjugated goat anti-mouse antibodies. For each well, 25 microscopic fields were counted in duplicate at 400× magnification.

In vitro assay of oocyst sporozoite release. Intact midguts were dissected from either CS-WT- or CS-RIImut-infected mosquitoes. For each experiment, 10 midguts were incubated at 25 °C in 200 µl of RPMI medium with or without trypsin (50 µg/ml; Sigma, St. Louis, Missouri, United States). At different time points, 10 µl was taken out after gently shaking the tubes, and sporozoites were counted. At the end of the experiment, the midguts were ground in order to determine the mean number of remaining oocyst sporozoites per mosquito.

Transmission electron microscopy. P. berghei (CS-WT and CS-RIImut) oocysts within mosquito midguts were fixed with 2.5% glutaraldehyde in 0.05 M phosphate buffer (pH 7.4) with 4% sucrose for 2 h and then post-fixed in 1% osmium tetroxide for 1 h. After a 30-min en bloc stain with 1% aqueous uranyl acetate, the cells were dehydrated in ascending concentrations of ethanol and embedded in Epon 812. Ultrathin sections were stained with 2% uranyl acetate in 50% methanol and with lead citrate, and then examined in a Zeiss CEM902 electron microscope.

Immunoelectron microscopy. P. berghei oocysts within mosquito midguts were fixed with 3% paraformaldehyde and 0.25% glutaraldehyde in 0.1 M phosphate buffer (pH 7.4). Fixed samples were washed, dehydrated, and embedded in LR White resin (Polysciences, Warrington, Pennsylvania, United States) as described previously [7]. Thin sections were blocked in PBS containing 0.01% (v/v) Tween-20 and 5% (w/v) nonfat dry milk (PBTM). Grids were then incubated for 2 h at room temperature with the primary mouse anti-CS monoclonal antibody 3D11, diluted 1:500 in PBTM. Normal mouse serum or PBTM was used as a negative control. After washing, grids were incubated for 1 h with 15-nm gold-conjugated goat anti-mouse IgG (Amersham Life Sciences), diluted 1:20 in PBS containing 1% (w/v) BSA and 0.01% (v/v) Tween-20, rinsed with Tween-20, and fixed with glutaraldehyde to stabilize the gold particles. Samples were stained with uranyl acetate and lead citrate, and then examined in a Zeiss CEM902 electron microscope.
A review on the anesthetic management of obese patients undergoing surgery

The prevalence of obesity has increased over the past few decades, and anesthesia-related complications are observed more frequently in obese patients than in non-obese patients. Because of the comorbidities that accompany obesity, obese patients now more often require surgical interventions. It is therefore important that anesthesiologists be aware of this development and be equipped to manage these patients effectively and appropriately. This review highlights the effective management of obese patients undergoing surgery, focusing on the preoperative, perioperative and postoperative care of these patients.

Background

According to the World Health Organization (WHO), the prevalence of obesity has significantly increased since 1975; in 2016, approximately 13% of the world's population was classified as obese [1]. Over the past few decades, the prevalence of obesity has also been steadily increasing in the United States [2,3]. The Centers for Disease Control and Prevention (CDC) report that about 35.7% of adults in the United States are now obese [4]. Obesity is associated with comorbidities such as hypertension, type 2 diabetes mellitus and coronary artery disease. Furthermore, patients who are overweight or obese may also experience dyslipidemia, obstructive sleep apnea (OSA), liver and gallbladder diseases, osteoarthritis, cancers, and reproductive and psychological disorders. Obesity is also a major risk factor for the development of asthma, which is more prevalent in obese and overweight persons than in non-obese individuals [5]. Because of this myriad of concomitant diseases and complications, the management of obese patients, especially those undergoing surgical procedures, is becoming increasingly challenging. These conditions may at some point require surgical intervention, and anesthesiologists are therefore frequently faced with the challenge of effectively managing obese patients along with their pre-existing comorbidities [6]. According to the literature, obesity and its related comorbidities significantly increase the risk of preoperative, intraoperative and postoperative surgical complications [7]. Preoperatively, most of the complications observed are associated with the respiratory system, as obese patients are more prone to decreased lung volumes, lung collapse, abnormalities in lung and chest wall compliance, and varying degrees of hypoxemia [8]. Intraoperative complications include increased block failures [8], peripheral nerve injuries, thrombotic complications and difficulties with airway management and fluid administration [9]. Postoperatively, obese patients also exhibit an increased risk of myocardial infarction, wound and urinary tract infections, deep venous thrombosis (DVT) and nerve injuries [7]. There may also be challenges in finding the appropriate drug doses for induction and maintenance in these patients [10]. As a result, it is imperative that the anesthetic team acquire adequate and relevant knowledge for the effective management of obese patients undergoing different types of surgery.
It is also extremely important that patients be appropriately assessed preoperatively to identify anesthesia-related risk factors, so that the team can adequately prepare for the proper management of any complication that may arise throughout the course of surgery. This paper therefore discusses the clinical management of obese patients undergoing surgery, providing anesthesiologists with the information needed to properly prepare and manage these patients before, during and after surgery.

Main Text

Definition of Obesity

Obesity is defined by body mass index (BMI), which is calculated by dividing body weight in kilograms by the square of height in meters. A BMI between 25.0 and 29.9 kg/m2 defines overweight, while obesity is defined by a BMI of 30 kg/m2 or greater (see Table 1). For individuals between the ages of 2 and 18 years, the percentile scale is used to define obesity rather than BMI [11,12]. Body fat distribution can be described in different ways: increased fat deposition in the lower regions of the body is described as peripheral obesity, whereas higher abdominal or visceral fat deposition is considered central obesity [13]. A waist circumference of more than 88 cm in women and 102 cm in men, or a waist-to-height ratio of more than 0.55, defines central obesity [14,15]. Central obesity is most commonly associated with pathological conditions [16,17]. Fat tissue distributed in the central region of the body is more likely to produce inflammatory mediators, which may place obese patients at greater risk of obesity-associated metabolic diseases [18]. Patients exhibiting central obesity also show an increased risk of perioperative complications [19].
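The cutoffs above can be expressed as a short calculation. The following sketch (helper names are invented for illustration; the thresholds come directly from the definitions above) classifies an adult by BMI and flags central obesity:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Classify an adult BMI using the cutoffs given above (kg/m2)."""
    if value >= 30.0:
        return "obese"
    if value >= 25.0:
        return "overweight"
    return "not overweight"

def central_obesity(waist_cm: float, height_cm: float, female: bool) -> bool:
    """Central obesity: waist > 88 cm (women) / 102 cm (men), or waist-to-height > 0.55."""
    waist_cutoff = 88.0 if female else 102.0
    return waist_cm > waist_cutoff or waist_cm / height_cm > 0.55

print(bmi_category(bmi(105, 1.75)))             # 105 kg, 1.75 m -> ~34.3 kg/m2 -> "obese"
print(central_obesity(110, 175, female=False))  # True
```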
Pathophysiology of Obesity

Obesity is a multifactorial disease caused by the interplay of environmental, genetic and hormonal factors. Excessive caloric intake and decreased caloric expenditure both contribute to its development. Energy balance within the body is partly controlled by interactions between the hypothalamus and peripheral tissues and organs [21]. Genes such as the beta-3-adrenergic receptor gene, the peroxisome proliferator-activated receptor gamma 2 gene, chromosome 10p, and the melanocortin-4 receptor gene have all been identified as genetic contributors to the pathogenesis of obesity. Adipocytes produce hormones called adipokines, primarily tumour necrosis factor-alpha (TNF-α), interleukin-6 (IL-6), leptin and adiponectin. TNF-α promotes insulin resistance and inflammation of blood vessels. IL-6 also promotes inflammation, impairs host immunity and induces tissue injury [22]. Leptin decreases appetite; its deficiency is rarely observed in humans, but obese individuals are often described as leptin-resistant. Adiponectin promotes insulin sensitivity, reduces inflammation and inhibits atherogenic activity, and adipose tissue of obese individuals exhibits decreased expression of adiponectin messenger RNA [23]. Central obesity promotes inflammation, which subsequently leads to insulin resistance and endothelial dysfunction, with increased levels of IL-6, TNF-α and C-reactive protein and decreased levels of adiponectin and interleukin-10 [22]. Obesity also appears to be associated with lower levels of vitamins A, D and E, and deficiencies in the B vitamins have likewise been linked to obesity [24]. Minerals such as zinc, iron, calcium and selenium, when deficient, can also contribute to weight gain and subsequent obesity. Evidence suggests that persons who are morbidly obese may exhibit lower levels of vitamins C and E [25], and further evidence demonstrates generally lower levels of beta-carotene and vitamin C in adults who are overweight or obese [26]. These vitamins and minerals may work to prevent obesity in different ways: by inhibiting adipogenesis, inducing apoptosis of adipocytes, regulating the production of hormones such as leptin, decreasing oxidative stress and inflammation, inhibiting lipogenesis and promoting lipolysis [27]. Micronutrient deficiency should therefore also be given special consideration when investigating potential causes of obesity.

Anatomical airway changes in obese patients

Normal respiration may be affected in obese patients by the excessive adipose tissue deposited around the chest wall, ribs, diaphragm and abdomen [28]. For normal respiration to occur, the diaphragm contracts, displacing abdominal contents inferiorly and anteriorly, while the external intercostal muscles contract, pulling the ribs superiorly and anteriorly [29]. In obese individuals, these normal actions are mechanically impeded by the excessive adipose tissue in the thoracic and abdominal regions, and lung compliance is decreased. Measurements of maximal inspiratory pressure (MIP) and maximal expiratory pressure (MEP) can be used to evaluate the strength of the respiratory muscles, and both are reduced in obese individuals [30]. In addition, when an obese individual lies flat on the back, weight from the abdomen shifts superiorly into the thoracic cavity. This compresses and occludes small airways at the lung bases, causing laboured ventilation and impairing the normal function of the major respiratory muscles [31,32]. Various changes in lung volumes are observed in obese patients: the expiratory reserve volume (ERV), functional residual capacity (FRC) and overall total lung capacity (TLC) are all reduced. These changes occur due to imbalances in pressures within the lungs, resulting in abnormal lung inflation and deflation [33]. Although most obese individuals have a normal arterial partial pressure of oxygen (PaO2), morbidly obese individuals present with mildly widened alveolar-arterial oxygen gradients [P(A-a)O2]. This occurs because of ventilation-perfusion imbalances in the lungs of morbidly obese individuals, secondary to partial lung collapse: their lungs exhibit increased ventilation and perfusion in the upper regions and decreased ventilation and perfusion in the lower regions [28,34].

Perioperative Care of Obese Patients Undergoing Surgery

Obese patients, especially those presenting with comorbidities, may exhibit increased risks of complications during surgical procedures [14]. The Obesity Surgery Mortality Risk Stratification score (OS-MRS) has been established for the assessment of patients undergoing gastric bypass surgery [35]. This score is essential, as it helps to identify risk factors that may increase mortality in obese patients undergoing bariatric surgery.
Although developed for gastric bypass surgery, this assessment tool may also prove useful in assessing obese patients undergoing other types of surgery. Patients with an OS-MRS score of 4-5 should be closely monitored during surgical procedures [14]. As obese patients are prepared for surgery, their BMI should be calculated and the resulting information relayed to the operating team, so that the necessary preparations can be made to accommodate the patient safely and comfortably during surgery. Patients should also be carefully assessed to identify any pre-existing comorbidities and to determine potential complications that may arise from the surgery [36]. Proper guidance should be provided through counseling, highlighting necessary modifications such as smoking cessation before surgery and early mobilization after surgery [14], as this helps to limit the occurrence of complications. Before surgery, proper assessment of the major body systems is also important.

Respiratory Assessment

Determination of arterial blood gases in obese patients undergoing surgery is essential, as patients presenting with an arterial PCO2 (partial pressure of carbon dioxide) greater than 6 kPa have an increased risk of complications, because some degree of respiratory failure is usually present [37]. While completing the general respiratory assessment, it is also important to ask about sleep-disordered breathing, which can be done using the STOP-BANG questionnaire. A score of 5 or more on this screening tool implies the presence of sleep-disordered breathing [38,39] and therefore warrants referral to a specialist prior to surgery. For patients with a score of less than 5, referral to a specialist may still be necessary if the patient has a history of dyspnea on exertion, experiences headaches (especially in the morning), or presents with ECG changes indicative of right atrial hypertrophy [36]. Patients who present with OSA and an inability to tolerate continuous positive airway pressure (CPAP) also have an increased risk of perioperative respiratory and cardiovascular complications [40]. It is important to note that the chances of difficult or failed intubation are much greater in obese patients. Measuring the patient's neck circumference can be helpful, as a neck circumference over 60 cm increases the likelihood of difficult intubation [41]. In addition to difficult or failed intubation, difficult bag-mask ventilation is also observed in obese patients [36]. As part of the preoperative airway assessment, the anaesthesiologist should enquire about the following from the patient's past medical history: (1) a history of OSA, (2) a history of gastro-oesophageal reflux disease and (3) a history of difficult anaesthesia or airway management. During the preoperative airway assessment, it should also be noted that patients presenting with a short distance between the chin and the tip of the thyroid cartilage, flattened anterior-posterior craniofacial features, a narrowed oropharynx or relative macroglossia are at increased risk of airway obstruction under general anaesthesia.
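Returning to the STOP-BANG screen mentioned above: the review cites only the score cutoff, so the eight items in the sketch below follow the commonly published version of the questionnaire and should be treated as illustrative rather than as content of this review.

```python
# Hedged sketch of STOP-BANG scoring. Only the >= 5 cutoff is quoted in the
# review; the eight items below are the conventionally published ones.
from dataclasses import dataclass

@dataclass
class StopBang:
    snoring: bool          # S: loud snoring
    tiredness: bool        # T: daytime tiredness/fatigue
    observed_apnea: bool   # O: observed pauses in breathing during sleep
    pressure: bool         # P: treated or untreated hypertension
    bmi_over_35: bool      # B: BMI > 35 kg/m2
    age_over_50: bool      # A: age > 50 years
    neck_over_40cm: bool   # N: neck circumference > 40 cm
    male: bool             # G: male gender

    def score(self) -> int:
        # Each positive item contributes one point.
        return sum(vars(self).values())

patient = StopBang(True, True, False, True, True, True, False, True)
if patient.score() >= 5:  # cutoff used in the review for suspected sleep-disordered breathing
    print("Refer to a specialist before surgery")
```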
In general, when carrying out a preoperative respiratory assessment in obese patients, the following should be noted: (1) the circumference of the patient's neck, (2) the distance between the mentum and the upper boundary of the thyroid cartilage, (3) the extent of mouth opening and jaw protrusion, (4) neck mobility, (5) the presence of excessive adipose tissue in the cervical region of the neck and (6) the general features of the patient's head and face. Assessment should also be carried out for the presence of OSA [42].

Cardiovascular Assessment

During the cardiovascular assessment, it is important to pay close attention to any features of metabolic syndrome, as these may be a major indication of cardiovascular complications [43]. The use of an ECG is also critical, as it allows identification of undiagnosed pre-existing cardiac abnormalities [44]. This is particularly important because obese and overweight patients have an increased risk of developing arrhythmias, especially atrial fibrillation and ventricular tachycardia, which can be detected by the ECG. Cardiac arrhythmias in obese or overweight patients are usually precipitated by factors such as hypoxia and pre-existing heart disease, and mechanical factors such as obstructive sleep apnea may also contribute [45]. Recent evidence suggests an association between obesity and the development of atrial fibrillation [46], and overweight and obese patients may show a 50% increased risk of developing this arrhythmia [47]. Contributors such as remodeling of the atrium, increased blood volume, elevated left atrial pressure and neurohormonal factors, among others, may significantly influence this occurrence [46]. The hemodynamic changes observed in the obese cause structural and physiological changes within the heart that can induce atrial fibrillation. Excess deposition of adipose tissue increases total blood volume, which subsequently increases cardiac output (mainly through an increase in stroke volume) [48]. As cardiac output steadily increases, hypertrophy (eccentric or concentric) of the left ventricle eventually occurs [49], increasing left ventricular filling pressures and causing diastolic dysfunction. Systolic dysfunction may also ensue following enlargement of the left ventricle [50]. In addition, left atrial hypertrophy occurs, causing pressures and volumes within the left atrium to increase [51] and pulmonary hypertension to develop. Obesity is also associated with OSA, the effects of which can alter autonomic tone through hypoxia, acidosis and disturbances of the sleep cycle. Alterations in autonomic tone can increase pulmonary arterial pressures, which subsequently cause right ventricular hypertrophy and eventual ventricular failure [52]. These changes in the left and right heart, together with the hemodynamic changes, contribute significantly to the development and maintenance of atrial fibrillation in the obese. It is therefore important that obese patients be assessed for the presence of atrial fibrillation and other common arrhythmias, such as ventricular and supraventricular tachycardia and premature atrial and ventricular contractions.
In addition, these patients should be closely monitored for the postsurgical development of arrhythmias, especially if they have pre-existing heart disease. As part of the cardiovascular assessment, cardiopulmonary exercise testing can be applied, as it helps to predict postoperative prognosis, including complications that may arise and the average length of hospital stay that may be required [53,54]. It is sometimes difficult to measure the blood pressure of obese patients with standard equipment; direct arterial monitoring can therefore be employed for accurate blood pressure measurement [55]. Knowledge of the following factors may assist in assessing the risk of cardiovascular morbidity: (1) the type of surgery, and whether it is considered high risk, (2) the presence of coronary artery disease, (3) a history of congestive heart failure, (4) the presence of cerebrovascular disease, (5) a history of preoperative insulin use and (6) a plasma creatinine level greater than 2 mg/dl prior to surgery [56].

Pre-oxygenation

Compared with non-obese patients, morbidly obese patients may desaturate more quickly during apnoea. As a result, steps should be taken to prevent or reduce the fall in oxygen saturation after pre-oxygenation: (a) during pre-oxygenation, a head-up position of about 25 degrees should be maintained [57]; (b) while inserting the laryngoscope, oxygen should be passively administered through the nasopharynx, via a 10 Fr catheter, at a rate of about 5 L/min [58]; and (c) during pre-oxygenation, the application of 10 cmH2O of positive end-expiratory pressure (PEEP) should be considered [59]. To reduce the occurrence of atelectasis induced by pre-oxygenation, an inspiratory pressure of about 55 cmH2O should be maintained for 10 s directly following the application of 10 cmH2O of PEEP [60,61]. In morbidly obese patients, once the airway is secured, the inspired oxygen fraction should be reduced and maintained at about 0.4 [62,63].

Pre-anaesthetic medication

Pre-anaesthetic medications may be considered in obese patients undergoing surgery to alleviate surgical complications, which can take the form of infections, gastrointestinal disturbances, postsurgical pain, hypercoagulation and anxiety. Antimicrobials such as cefazolin can be administered as prophylaxis against postsurgical infections [64-66]; obese individuals with a body weight ≥ 120 kg require a prophylactic dose of 3 g of cefazolin to curb the risk of surgical-site infections [66]. Nausea and vomiting are commonly observed gastrointestinal disturbances, and the preoperative use of dexamethasone combined with ondansetron and haloperidol can be considered to prevent them [36,67]. Pregabalin, gabapentin and melatonin [68] can be used as prophylaxis against postoperative pain [69-71]. Thromboembolic stockings, low-dose subcutaneous unfractionated heparin or low-molecular-weight heparin (LMWH) can be used to prevent the postsurgical development of thromboembolism [36,72,73]. Oral benzodiazepines should also be considered for relieving surgery-related anxiety [74].
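The weight-based cefazolin rule above can be written as a small helper. Note that the review states only the 3 g dose for patients weighing 120 kg or more; the 2 g dose for lighter patients used below is a commonly cited convention, assumed here for completeness.

```python
def cefazolin_prophylaxis_dose_g(weight_kg: float) -> float:
    """Surgical prophylaxis dose of cefazolin, in grams.

    3 g for body weight >= 120 kg, per the review; the 2 g dose for
    lighter patients is a common convention assumed for illustration.
    """
    return 3.0 if weight_kg >= 120 else 2.0

print(cefazolin_prophylaxis_dose_g(130))  # 3.0
```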
Assessment for required postoperative care

Factors in addition to obesity cumulatively determine the extent and nature of the treatment plan required postoperatively in an obese patient. These factors include: (1) comorbidities present prior to the surgery, (2) an OS-MRS score of 4-5, which indicates increased risk, (3) the type of surgical procedure performed, (4) untreated OSA combined with a need for parenterally administered postoperative opioids, and (5) the competence of the postoperative management team [36]. The type of surgery and the site at which it is performed are both major determinants of the degree of postsurgical care required. Patients requiring long-acting opioids must be closely monitored for any complications that may arise [53].

Intra-operative Care

Positioning

In obese patients, excess fat in the cervical region of the neck creates a fat pad that causes excessive flexion. It is therefore important to elevate the patient's upper body, head and neck above chest level until the external auditory meatus lies in the same horizontal plane as the sternal notch [75,76]. This is called the ramp-up position, and it significantly improves intubation outcomes in these patients [76,77], permitting better laryngoscopic visualization and easier ventilation. The ramp-up position may be achieved with folded blankets, pre-manufactured elevation pillows or inflatable pillows [61,78]. Furthermore, operating tables may be equipped with features that facilitate positioning of the obese patient with the trunk elevated [75].

Intraoperative Fluid Management

During open surgery, patients can lose fluid through evaporation. Obese patients undergoing surgery have an increased risk of postoperative renal failure, as they commonly present preoperatively with a contracted intravascular volume. This volume contraction may be due to prolonged fasting before surgery or to increased urine output secondary to the use of antihypertensive and hypoglycemic drugs. A pre-existing history of renal disease, a BMI greater than 50 kg/m2 and prolonged surgical procedures are all predisposing risk factors [79]. Appropriate fluid management in obese patients is therefore important to prevent renal injury. One proposed method of fluid management during surgery of morbidly obese patients is goal-directed therapy (GDT), which is guided by the patient's responsiveness to administered fluids [80,81]. Fluid responsiveness refers to the ability of the heart to respond to an increase in volume with an increase in stroke volume. In patients in sinus rhythm, fluid responsiveness can be assessed by analysis of arterial waveforms, a method that provides information on pulse pressure variation (PPV) and stroke volume variation (SVV) [82,83]. Plethysmographic waveform variation (PWV), derived from the pulse oximetry waveform, has also been suggested as a useful non-invasive method of determining fluid responsiveness, although it has been demonstrated to be more useful at more extreme levels of hypovolemia [84].
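To make the waveform-derived indices concrete, the sketch below computes PPV from beat-to-beat pulse pressures over one respiratory cycle, using the standard definition (maximum minus minimum pulse pressure, normalized by their mean). The definition and the example values are illustrative; the review names PPV but does not define or quantify it.

```python
# Hedged sketch: pulse pressure variation (PPV) from beat-to-beat pulse
# pressures (systolic minus diastolic, in mmHg) over one respiratory cycle.
def pulse_pressure_variation(pulse_pressures: list[float]) -> float:
    pp_max = max(pulse_pressures)
    pp_min = min(pulse_pressures)
    # Standard definition: swing normalized by the mean of the extremes.
    return 100.0 * (pp_max - pp_min) / ((pp_max + pp_min) / 2.0)

# Illustrative beat-to-beat pulse pressures across one mechanical breath.
cycle = [48.0, 52.0, 55.0, 50.0, 44.0, 42.0, 45.0]
print(f"PPV = {pulse_pressure_variation(cycle):.1f}%")
# Larger swings suggest greater fluid responsiveness.
```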
The ccNexfin is another non-invasive method that can be used to determine fluid responsiveness, through analysis of cardiac output (CO), PPV and SVV. In obese patients presenting with serious cardiovascular comorbidities, a minimally invasive method, the FloTrac, can also be applied to assess fluid responsiveness. With this method, vascular tone and CO are calculated from analysis of arterial line waveforms; it also provides information on SVV and, when attached to a central venous line, on CO and central venous oxygen saturation (ScvO2). During surgery, morbidly obese patients who are deemed high risk can also be monitored with pulse-contour analysis-based techniques such as PiCCO. In addition to providing information on PPV, SVV and CO, this technology analyzes (1) the global end-diastolic index, (2) intrathoracic blood volume and (3) extravascular lung water. Despite its usefulness, owing to its expense it is mainly used in critically ill patients requiring major surgery [85].

Awake tracheal intubation

Awake tracheal intubation is one option in instances where tracheal intubation is expected to be difficult [86]. Since obesity is already associated with potentially difficult intubation, this method can also be utilized in these patients. For awake tracheal intubation, the upper airway should be appropriately anaesthetized using nerve blocks or aerosolized anaesthetics. Flexible fiberoptic bronchoscopy (FOB) and video laryngoscopy are two methods used when performing awake intubation. With the patient in the ramp-up position, FOB can be used for nasal or oral intubation. Excess pharyngeal adipose tissue may make proper visualization difficult with FOB, and placement of the bronchoscope in these situations may further compromise spontaneous breathing. With FOB, a laryngeal mask airway may be used to keep the airway patent and to facilitate breathing after the patient is induced. In emergency situations, however, video laryngoscopy is recommended over FOB [87]. Curved-blade video laryngoscopes can be used successfully in obese patients with neck trauma, and in obese patients who are unable to adequately extend their necks or who have narrowed oral openings [88]. Video laryngoscopy may prove difficult in obese patients with excessive breast tissue [87].

Induction and maintenance

Anaesthetic drugs used to induce non-obese patients can also be used for induction in obese patients. Nevertheless, it is important to be aware that excess fat in obese patients affects the pharmacokinetics of anaesthetic drugs, depending on their liposolubility and tissue distribution. Obese patients metabolize lipophilic agents more rapidly than non-obese patients [89].

Thiopental sodium

Thiopental sodium is commonly used for the induction of general anaesthesia. It is highly lipophilic; therefore, an increased volume of distribution is usually observed when it is used in obese patients. Following its administration, blood levels of thiopental sodium decrease rapidly. Thiopental undergoes hepatic elimination, and its clearance is twice as fast in obese patients as in non-obese patients [90,91].
Propofol

Propofol is highly lipophilic; it therefore has a high volume of distribution and is rapidly cleared from the blood following administration. Because of these features, propofol is the preferred induction drug in morbidly obese patients [37,92]. In obese patients, continuous infusion of propofol demonstrates increases in volume of distribution and clearance in proportion to total body weight (TBW). A study by Servin et al. of the recovery rates and pharmacokinetics of propofol infusion in morbidly obese patients found no major difference in the initial volume of distribution of propofol between morbidly obese and non-obese subjects, but did show a linear increase in the volume of distribution at steady state and in clearance with increasing TBW [93].

Etomidate

The use of etomidate is recommended in individuals with hemodynamic instability, because this drug does not markedly suppress the cardiovascular system. However, its use may be of some concern, as it has been associated with adrenal insufficiency, potentially resulting in organ failure [93,94]. When it is used for induction, dosage adjustments should be made relative to non-fat (lean) body weight, similar to the approach used for propofol and thiopental sodium given their pharmacokinetic and pharmacodynamic features [37,92].

Opioids

Obese patients undergoing surgery may experience respiratory depression in addition to airway obstruction [95]. The use of opioids in the presence of obesity increases the occurrence of obstructive and central sleep apnoea, and obese patients may also experience hypoxia and upper airway obstruction [96-98]. It is therefore important to note that the therapeutic window is narrowed when opioids are used in obese patients.

Fentanyl

Fentanyl is one of the most commonly used opioids for anaesthetic induction and is about 100 times more potent than morphine. Its action in the blood is short-lived; however, after continuous infusion, peripheral compartment saturation is achieved [99-101]. The drug is highly lipophilic and therefore has a high volume of distribution. In obese patients, plasma levels following a single dose of fentanyl are significantly reduced, as these patients have a larger volume of distribution [102]. Fentanyl is also cleared at a faster rate in obese patients. There is a non-linear association between fentanyl clearance and TBW, but clearance increases linearly with "pharmacokinetic mass", which correlates significantly with lean body weight [103].

Alfentanil

Compared with fentanyl, alfentanil is less lipophilic and therefore has a lower volume of distribution; it is also less potent. In obese patients, the larger cardiac output significantly decreases plasma levels of alfentanil during the early distribution phases. It is therefore theorized that obese patients experience larger volumes of distribution, longer half-lives and prolonged elimination of alfentanil compared with non-obese patients [100,104].

Sufentanil

Sufentanil is more potent than fentanyl and is described as the most lipophilic opioid. Obesity increases the volume of distribution and prolongs the elimination of sufentanil, but the clearance of this drug in obese patients is comparable to that in non-obese patients [105].
Remifentanil

Remifentanil is a rapidly acting anaesthetic agent; it is extensively metabolized by tissue and plasma esterases, resulting in a short duration of action in the blood. The agent is normally administered as a continuous infusion when used as a sedative, and a combination of remifentanil with inhalation agents or intravenous hypnotic agents can be used for general anaesthesia [104]. One study assessing the effects of body weight on the pharmacokinetics of remifentanil concluded that there is no significant difference in pharmacokinetics between obese and non-obese patients, and that ideal body weight (IBW) or lean body mass should be used to determine the required dose, as the pharmacokinetic parameters of this agent are more closely related to these measurements than to TBW [106]. Another study, by Bidgol et al., which compared tight-control infusions of sufentanil and remifentanil in morbidly obese patients undergoing laparoscopic gastroplasty, concluded that tight-control infusion of sufentanil was associated with better quality of recovery in morbidly obese patients than tight-control infusion of remifentanil [107].

Inhalation agents

The excess fat tissue in obese patients, combined with the high lipophilicity of inhalation agents, results in increased uptake of these agents into fat and their subsequent release. Evidence shows that obese patients take longer to recover from anaesthesia because of the prolonged release of inhaled anaesthetic agents from fat tissue [3,108]. The degree of liposolubility varies among the inhalation agents; different agents may therefore have different effects on recovery rates when used in obese patients [109,110].

Isoflurane and Sevoflurane

Of the three agents sevoflurane, desflurane and isoflurane, isoflurane is the most lipophilic; as a result, it is the least favoured for use in morbidly obese patients [111]. In obese patients, blood flow to fat tissue is reduced, and the time to reach equilibrium in the blood is usually longer with isoflurane [112,113]. Sevoflurane is not as lipophilic or as soluble as isoflurane; therefore, in morbidly obese patients, its effects in the blood are usually shorter and it is eliminated more rapidly [114]. Despite the lack of evidence on the exact effects of sevoflurane in patients with renal impairment, this agent should be used with caution in patients with renal insufficiency. One of the metabolic by-products of sevoflurane, inorganic fluoride, is toxic to the kidneys at blood concentrations above 50 µmol/L, and sevoflurane can also be broken down into compound A, which may cause renal toxicity [115]. Although this effect has been demonstrated in animal studies, more evidence is required to determine the effects of compound A on the human kidney [116].

Desflurane

BMI does not have a significant effect on the absorption of desflurane in the body [111]. Desflurane is considered the best choice of inhalation agent in morbidly obese patients, as it is the least lipophilic and least soluble of these agents.
Obese and non-obese patients exhibit faster recovery with desflurane than with isoflurane [111,117]; however, evidence comparing recovery rates with desflurane and sevoflurane is conflicting [118-120].

Neuromuscular Blockers

Neuromuscular blockers are polar, hydrophilic molecules and are therefore not extensively distributed into fat tissue [121].

Succinylcholine

Succinylcholine is a depolarizing neuromuscular blocker. It is broken down and deactivated by pseudocholinesterase. Pseudocholinesterase levels are increased in obese patients; therefore, when succinylcholine is used during anaesthetic induction in these patients, its onset and offset are very rapid, and a higher dose may be needed to produce the required effect. Because of its extremely fast onset and short duration of action, succinylcholine is preferred in obese patients, as these features facilitate prompt tracheal intubation and the rapid restoration of spontaneous ventilation [122,123].

Vecuronium

Vecuronium is a non-depolarizing aminosteroid neuromuscular relaxant. Obese patients show an increase in extracellular fluid volume; however, this does not affect the volume of distribution of vecuronium [124]. Vecuronium is removed from the body by the hepatic and biliary systems, and impaired clearance may prolong its effects. In addition, when TBW is used for dose estimation, the required dose may be overestimated, resulting in drug overdose. In obese patients, the required dose of vecuronium should therefore be calculated based on IBW instead of TBW [124,125]. Schwartz et al. carried out a study to assess how obesity affects the disposition and action of vecuronium. Fourteen participants were recruited, seven obese and seven controls, and both groups received 0.1 mg/kg of vecuronium. The study concluded that vecuronium dosage in obese patients should be calculated based on IBW, as recovery was delayed due to relative overdose when dosing was based on TBW [126].

Rocuronium

Rocuronium is an aminosteroid neuromuscular blocker containing a quaternary ammonium group. It is not readily distributed to peripheral tissues, and its pharmacokinetics are not greatly affected by the high extracellular fluid volumes observed in obese patients [127]. To prevent prolongation of the drug's effects, the administered dose should be calculated based on IBW [122,128]. Puhringer et al. studied the pharmacokinetics and pharmacodynamics of rocuronium in six obese and six normal-weight (control) patients given 0.6 mg/kg of the drug. The time to onset of action was shorter in the obese patients than in the controls, whereas the duration of action and time to recovery were comparable in the two groups.
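Since both vecuronium and rocuronium are dosed on IBW, a small dosing sketch may help. The review recommends IBW dosing but does not specify an IBW formula; the Devine formula used below is a widely used convention and should be treated as an assumption.

```python
# Hedged sketch: IBW-based dosing for aminosteroid neuromuscular blockers.
# The Devine IBW formula is assumed here; the review does not name one.
def ideal_body_weight_kg(height_cm: float, male: bool) -> float:
    """Devine formula: 50 kg (men) / 45.5 kg (women) + 2.3 kg per inch over 5 ft."""
    base = 50.0 if male else 45.5
    inches_over_5ft = max(0.0, height_cm / 2.54 - 60.0)
    return base + 2.3 * inches_over_5ft

def nmb_dose_mg(dose_mg_per_kg: float, height_cm: float, male: bool) -> float:
    """Dose scaled to IBW, per the recommendation above (e.g. rocuronium 0.6 mg/kg)."""
    return dose_mg_per_kg * ideal_body_weight_kg(height_cm, male)

# 180 cm man: IBW = 75 kg, so a 0.6 mg/kg rocuronium dose is 45 mg.
print(round(nmb_dose_mg(0.6, 180, male=True), 1))  # 45.0
```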
Reversal of Neuromuscular Blocking Agents

Reversal of neuromuscular blockade is particularly important in obese patients, as obesity is associated with an increased risk of respiratory complications following surgery [129,130]. Compared with non-obese patients, obese patients commonly experience a decrease in diaphragmatic tone and a reduction in end-expiratory lung volume during induction of sleep [131]. Pharmacological reversal of neuromuscular blockade may help to reduce the occurrence of major complications [132].

Neostigmine

Neostigmine is an acetylcholinesterase inhibitor. Reversal of neuromuscular blockade with neostigmine has been found to be delayed in obese patients. The drug can be administered at doses of 0.04-0.08 mg/kg; however, the administered dose should never exceed 5 mg [133].

Sugammadex

Sugammadex is a very potent agent for the reversal of neuromuscular blockade. It is derived from cyclodextrin, has varying degrees of affinity for the different neuromuscular blockers, and provides rapid and complete recovery from neuromuscular blockade. For sufficient and complete reversal of intermediate or deep blocks, it is recommended that the dose of sugammadex be calculated based on TBW, or on IBW plus 40% [81,132].

Postoperative care

Postsurgically, obese patients have a higher risk than non-obese patients of respiratory complications such as acute respiratory failure and pneumonia, and lung collapse occurs more often in obese patients following extubation [134,135]. Non-obese patients may experience postsurgical atelectasis, but the condition resolves rapidly after surgery; in obese patients, atelectasis takes longer to resolve and may cause postsurgical breathing difficulties [134]. With awareness of these risks, the postoperative care team can take steps to alleviate these potential complications. Postoperatively, obese patients should be closely monitored in the post-anaesthesia care unit (PACU), and the following steps should be considered: the patient should be nursed with the head in an upright position [68,72], and standard oxygen therapy as well as CPAP or non-invasive positive pressure ventilation (NIPPV) should be considered following extubation [36,74,136]. High-flow oxygen delivered via a nasal cannula may be used [137], and CPAP should also be considered in patients who require opioids [138]. These considerations are important, as they help to (1) prevent airway obstruction, (2) ensure proper ventilation, (3) prevent lung collapse, (4) support better gas exchange within the lungs, (5) restore and preserve normal respiratory function, (6) improve the patient's breathing and (7) reduce the risk of postsurgical respiratory failure [139]. Following surgery, the patient should be given oxygen therapy until preoperative arterial oxygen saturation levels are achieved and the patient is fully mobilized. There is an increased likelihood that the obese patient will require mechanical ventilation after surgery. For mechanical ventilation, it is suggested that peak inspiratory pressure be maintained below 35 cmH2O and that a tidal volume of 5-7 ml/kg, calculated based on ideal body weight, be administered [140]. Postsurgically, continuous infusions are not recommended for pain management in obese patients requiring opioids; instead, depending on the procedure performed, opioid analgesics such as fentanyl or morphine can be used for pain control [135].
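The ventilation settings quoted above (tidal volume of 5-7 ml/kg of ideal body weight) can be turned into a worked calculation. As before, the Devine IBW formula is a common convention assumed for illustration, since the review does not specify one.

```python
# Hedged sketch of the lung-protective tidal volume target quoted above:
# 5-7 ml/kg of ideal body weight (IBW), with IBW from the assumed Devine formula.
def ideal_body_weight_kg(height_cm: float, male: bool) -> float:
    base = 50.0 if male else 45.5
    return base + 2.3 * max(0.0, height_cm / 2.54 - 60.0)

def tidal_volume_range_ml(height_cm: float, male: bool) -> tuple[float, float]:
    """Return the (low, high) tidal volume range, 5-7 ml/kg IBW."""
    ibw = ideal_body_weight_kg(height_cm, male)
    return 5.0 * ibw, 7.0 * ibw

low, high = tidal_volume_range_ml(165, male=False)
print(f"Target tidal volume: {low:.0f}-{high:.0f} ml")  # 165 cm woman: ~285-398 ml
```

Note that the target depends on height and sex rather than actual weight, which is the practical point of IBW-based dosing in the obese.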
It is also important to note that myopathies such as rhabdomyolysis can occur in the obese following surgery; close monitoring for the development of deep tissue pain is therefore important. If signs of rhabdomyolysis occur postsurgically, steps should be taken to treat the condition immediately and to prevent acute kidney injury (AKI) [141]. In addition, evidence suggests that postoperative cognitive dysfunction (POCD) may be a complication observed more commonly in obese patients. Although only a minimal association has been established between obesity and this postsurgical complication, it is important to be cognisant of this potential development [142]. Before discharge to care on the surgical ward, obese patients should be monitored for a minimum of 1 h to ensure that normal respiratory parameters have returned and are maintained [135,143].

Conclusions

The presence of obesity increases the risk of surgical and postsurgical complications; however, with proper collaborative effort among medical disciplines, the occurrence of these complications can be reduced significantly. Preoperatively, assessments of the cardiovascular and respiratory systems should be carried out. During surgery, proper positioning of the obese patient is very important, in addition to appropriate airway maintenance and fluid management. The choice of anesthetic agent, along with the route of administration, is extremely important as well, since, based on their properties, these agents can confer different complications both intra- and postoperatively. More research is needed on the use of these anesthetic agents in emergency settings. Postoperatively, the necessary steps should be taken to ensure that the patient recovers fully with limited complications.
Galaxy Pairs in the Sloan Digital Sky Survey I: Star Formation, AGN Fraction, and the Luminosity/Mass-Metallicity Relation (Abridged)

We present a sample of 1716 galaxies with companions within Delta v < 500 km/s, r_p < 80 kpc and stellar mass ratio 0.1 < M_1/M_2 < 10 from the Sloan Digital Sky Survey (SDSS) Data Release 4 (DR4). In agreement with previous studies, we find an enhancement in the star formation rate (SFR) of galaxy pairs at projected separations < 30-40 kpc. In addition, we find that this enhancement is highest (and extends to the greatest separations) for galaxies of approximately equal mass, the so-called 'major' pairs. However, SFR enhancement can still be detected for a sample of galaxy pairs whose masses are within a factor of 10 of each other. In agreement with the one previous study of the luminosity-metallicity (LZ) relation in paired galaxies, we find an offset to lower metallicities (by ~0.1 dex) for a given luminosity for galaxies in pairs compared to the control sample. We also present the first mass-metallicity (MZ) relation comparison between paired galaxies and the field, and again find an offset to lower metallicities (by ~0.05 dex) for a given mass. The smaller offset in the MZ relation indicates that both higher luminosities and lower metallicities may contribute to the shift of pairs relative to the control in the LZ relation. We show that the offset in the LZ relation depends on galaxy half light radius, r_h. Galaxies with r_h < 3 h_70^-1 kpc and with a close companion show a 0.05-0.1 dex downwards offset in metallicity compared to control galaxies of the same size. Larger galaxies do not show this offset and have LZ and MZ relations consistent with the control sample. We investigate the physical impetus behind this empirical dependence on r_h and consider the galaxy's dynamical time and bulge fraction as possible causes. We conclude that the former is unlikely to be a fundamental driver of the offset in the LZ relation for paired galaxies, but that bulge fraction may play a role. Finally, we study the AGN fraction in both the pair and control samples and find that, whilst selecting galaxies in different cuts of color and asymmetry yields different AGN fractions, the fractions for pairs and the control sample are consistent for a given set of selection criteria. This indicates that if AGN are ignited as a result of interactions, this activity begins later than the close pairs stage (i.e. once the merger is complete).
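As a concrete restatement of the selection cuts quoted in the abstract, the short sketch below applies them to a hypothetical candidate companion; the function and argument names are invented for illustration and are not from the paper.

```python
# Hedged sketch of the pair-selection cuts from the abstract:
# Delta v < 500 km/s, projected separation r_p < 80 kpc, and stellar
# mass ratio 0.1 < M_1/M_2 < 10.
def is_close_pair(delta_v_kms: float, r_p_kpc: float,
                  mass_1: float, mass_2: float) -> bool:
    mass_ratio = mass_1 / mass_2
    return (abs(delta_v_kms) < 500.0
            and r_p_kpc < 80.0
            and 0.1 < mass_ratio < 10.0)

# Example: a companion at 120 km/s, 45 kpc, with a 3:1 stellar mass ratio.
print(is_close_pair(120.0, 45.0, 3.0e10, 1.0e10))  # True
```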
Byrd & Valtonen 1990; Moore et al. 1999; Diaferio et al. 2001). From this point of view, one may expect to see the most extreme effects of density-induced properties from environments that are rich on scales of a few hundred kpc. However, it is now emerging that density on smaller scales can be the major impetus behind galaxy evolution (e.g. Lewis et al. 2002; Gomez et al. 2003; Blanton & Berlind 2007). Galaxies in compact groups, for example, exhibit a clear tendency towards lower metallicities and older stellar populations compared with isolated galaxies (e.g. Proctor et al. 2004; Mendes de Oliveira et al. 2005; de la Rosa et al. 2007). On these smaller scales, galaxy mergers provide the most obvious mechanism for change. Simulations predict that prior to halted star formation, there should be a phase of increased activity (e.g. Di Matteo, Springel & Hernquist 2005) which precedes the final merger, particularly in gas-rich systems. Observations of early-stage galaxy interactions will therefore complement those of rich environments to provide a more complete picture of the evolutionary process. In this sense, close pairs or morphologically disturbed galaxies may be the pre-cursors to the 'red-and-dead' galaxies seen in dense environments.

The seminal study of the effect of interactions on galaxy colors is the work of Larson & Tinsley (1978). They found that disturbed galaxies in the Arp catalogue had a wider spread of colors, including more blue galaxies, than the field galaxies in the Hubble atlas. In the last 30 years, this distinction in color has been confirmed numerous times in larger samples. In general, galaxies with close companions, including those showing clear signs of morphological asymmetry, tend to have bluer (integrated) optical colors (e.g. Carlberg et al. 1994; Patton et al. 1997, 2005). These results are indicative of enhanced star formation, a scenario supported by high equivalent widths of Hα emission when spectra are available (e.g. Kennicutt et al. 1987; Barton, Geller & Kenyon 2000; Lambas et al. 2003; Alonso et al. 2004; Nikolic, Cullen & Alexander 2004). In turn, the star formation heats galactic dust which emits thermally in the IR, leading to an IR-excess in galaxy pairs (Kennicutt et al. 1987; Xu & Sulentic 1991; Geller et al. 2006). This large body of observational data paints a clear picture of enhanced star formation activity associated with galaxy proximity on scales of a few tens of kpc.

Clues to the finer details of enhanced star formation can be gleaned from galaxy simulations. In the models of Mihos & Hernquist (1994, 1996), the interaction-induced star formation occurs specifically in the central regions (inner 1-2 kpc) of the galaxy, as a result of gas inflows. Observational evidence to support this theoretical prediction includes 1) centrally peaked distributions of Hα and continuum emission in interacting galaxies (Bushouse 1987; Smith et al. 2007), 2) enhanced Hα flux or suppressed metallicities determined from nuclear spectroscopy of interacting galaxies (e.g. Barton et al. 2000; Kewley, Geller & Barton 2006) and 3) enhanced radio continuum emission in the central parts of pairs of spiral galaxies, but not in their disks (Hummel 1981). However, there are also claims for enhanced disk star formation (e.g. Kennicutt et al. 1987). At the same time, models of interacting galaxies predict the nature of induced star formation to depend sensitively on the mass distribution in the galaxies.
For example, for interactions that are observed early on in the merging process, Mihos & Hernquist (1996) found that galaxies with shallower potentials (i.e. less bulge dominated) more efficiently funnel gas to the center through the formation of a bar. Conversely, bulge dominated galaxies are minimally affected by close interactions until the merger event is well advanced (e.g. Cox et al. 2007).

The induced star formation activity associated with interactions and mergers is expected to have an impact on the metallicity of galaxy pairs. There is a well established correlation between luminosity and metallicity, which is a manifestation of a more fundamental stellar mass-metallicity relation (e.g. Tremonti et al. 2004; Salzer et al. 2005; Lee et al. 2006), and which is likely to be 'disturbed' for interacting galaxies. It is not clear a priori how these scaling relations between luminosity, mass and metallicity might be affected by interactions. The galaxy luminosity may significantly increase due to the additional star formation experienced as a result of the merger. The overall metallicity of an interacting galaxy may first appear to decrease as metal-poor gas flows into its inner regions. However, we eventually expect the metallicity to increase as the star formation proceeds and eventually returns its nucleosynthetic products to the interstellar medium. The end point metallicity will depend on a number of factors such as the mass and metallicity of the inflowing gas, the efficiency of the starburst and the metal yield. The first major observational study of these effects was presented by Kewley et al. (2006a), who found a shift towards lower metallicities by ∼0.2 dex in galaxy pairs for a given luminosity, compared to a control sample. Since their spectra included only the central 10% of the galaxies' light, Kewley et al. (2006a) interpreted this result as the signature of metal-poor gas that had been funnelled into the center of the galaxies.

As the merging process advances, an expected consequence of the gas funnelling might be the ignition of an AGN. Effectively all galaxies are thought to harbor black holes at their centers, the masses of which correlate with the mass of the galaxy's bulge component as measured through stellar velocity dispersion (e.g. Marconi et al. 2004; Shankar et al. 2004; see Ferrarese & Ford 2005 for a review). Infall of gas onto the black hole via a galaxy interaction is a natural way to trigger nuclear activity. Indeed, it has previously been noted that low redshift Seyfert galaxies often occur in groups (e.g. Stauffer 1982) and that a high fraction of galaxies close to AGN appear to be interacting (see the review by Barnes & Hernquist 1992). However, although Seyfert galaxies may show evidence for recent nuclear star formation (e.g. Storchi-Bergmann et al. 2001), there is so far no evidence that AGN activity is enhanced in denser environments relative to the field, including in close pairs (e.g. Schmitt 2001; Sorrentino, Radovich & Rifatto 2003; Alonso et al. 2007). Instead, AGN activity is best signalled by morphological disturbances (e.g. Barnes & Hernquist 1992; Alonso et al. 2007).

Investigating the myriad effects of galaxy interactions clearly requires measurements of a suite of properties, including stellar mass, star formation rates (SFRs), AGN contribution, metallicities, color and morphology as characterized by measures such as bulge-to-total ratios and asymmetry.
Whilst many of these properties have been previously studied (see above references), no work to date has been able to combine all of these parameters for a single, large sample. In this regard the Sloan Digital Sky Survey (SDSS) is an excellent resource, with both photometric and high quality spectroscopic data available for over half a million galaxies in the Data Release 4 (DR4). In this paper series (see also Patton et al., in preparation, henceforth Paper II, and other forthcoming papers) we have combined SDSS photometry with the results of spectral synthesis modelling, which yield estimates for properties such as stellar mass, metallicity and star formation rate, and with bulge and disk image decompositions in five filters (Simard, in preparation), which yield morphological parameters. Therefore, this sample provides the first coherent dataset for which such a wide suite of galaxy parameters can be investigated, and the relationships between these properties studied in a systematic way. Moreover, the statistical power of the SDSS allows us to be highly selective in the way we form our sample. Therefore, although our final pairs sample is not the largest to date (c.f. Alonso et al. 2006; Paper II), our selection criteria are amongst the most stringent. This is particularly important when using spectroscopic data to determine quantities such as metallicity, where the combination of several emission lines can become very sensitive to poor S/N (e.g. Kewley & Ellison 2008).

In Paper II we investigate the photometric properties of SDSS galaxies in close pairs. In this paper, we combine the basic survey properties of a sample of galaxy pairs with spectroscopic properties determined by e.g. Kauffmann et al. (2003b), Brinchmann et al. (2004), Tremonti et al. (2004) and Kewley & Ellison (2008). This allows us to investigate the sensitivity of metallicity, AGN incidence, mass and star formation rate to a galaxy's proximity to a companion.

The layout of this paper is as follows. In §2 we describe the compilation of our galaxy pairs and control samples. In §3 we use the wide pairs sample defined in §2 to study the effect of pair proximity and relative stellar masses on star formation rate. Based on these results, we define a sample of close pairs which are most likely to exhibit interaction-induced effects. In §4 we investigate the luminosity- and mass-metallicity relations and in §5 the AGN fraction in galaxies with close companions. Each of the three science sections (§3-5) can be read largely independently, although we recommend that all readers understand the sample selection laid out in §2. We summarize the full results of this paper in §6.

Sample Selection

Our galaxy pairs sample is selected from the DR4 of the SDSS and includes requirements based on both photometric and spectroscopic selection. The imaging portion of the DR4 covers 6670 deg^2 in five bands and the spectroscopic catalog is magnitude limited to extinction corrected Petrosian r < 17.77. To construct our galaxy samples, we use the DR4 catalog of 567,486 galaxies compiled by the Munich group. Pipeline processing, which fits galaxy templates and spectral synthesis models to the spectra, yields physical properties such as stellar masses and star formation rates as well as measurements of line fluxes (e.g. Kauffmann et al. 2003b; Brinchmann et al. 2004; Tremonti et al. 2004).
Although metallicities are available for the majority of these galaxies, Ellison & Kewley (2005) and Kewley & Ellison (2008) have shown that different empirical calibrations can yield metallicities that vary by up to a factor of 3. The Tremonti et al. (2004) metallicities are amongst the highest of these calibrations. We used the published line fluxes to calculate the metallicities according to the 'recommended' method of Kewley & Dopita (2002), which solves iteratively for metallicity and ionization parameter. We made this selection for two reasons. First, the calibration of Kewley & Dopita (2002) yields one of the tightest mass-metallicity relations (Kewley & Ellison 2008). Second, the metallicity conversions between various strong line diagnostics presented by Kewley & Ellison (2008) show that conversions to/from the Kewley & Dopita (2002) calibration exhibit one of the smallest scatters. Other properties used in this paper (e.g. SFR and stellar mass) are taken directly from the catalogs made generously available by the Munich team.

Our sample selection differs importantly from that of Paper II, which focuses on the photometric properties of galaxies in pairs. Although spectroscopic redshifts and stellar masses were required for pair selection in Paper II, no other spectral requirement was included in the selection criteria. However, since we will be focussed on properties that are derived from spectra, such as SFR and metallicity, our selection criteria are more stringent, and our sample correspondingly smaller. Moreover, since our metallicity determinations require moderately high S/N in the emission lines (see below), the galaxies in this sample are necessarily star-forming or AGN dominated. There are no quiescent, inactive ('red and dead') galaxies in our sample. From the catalog of over half a million SDSS DR4 galaxies we select galaxies that fulfill the following criteria:

1. Galaxies must have extinction corrected Petrosian magnitudes in the range 14.5 < r ≤ 17.77. The faint limit matches the criterion of Sloan's Main Galaxy Sample and ensures a high completeness and unbiased selection for mass estimates (see below). The bright limit avoids deblending problems that confuse the identification of close pairs (Strauss et al. 2002). We also required that the objects were classified as galaxies from the SDSS imaging (SpecPhoto.Type=3) and were classified spectrally as either a galaxy or QSO (SpecPhoto.SpecClass=2,3).

2. Galaxies must be unique spectroscopic objects. We reject duplicates in the initial sample of 567,486 galaxies by including only the single galaxy that has been classified as 'science worthy' (flag scienceprimary=1 in the SDSS 'SpecObjAll' table).

4. The emission lines required for spectral classification and for the metallicity diagnostics must be securely detected, at the 5σ level (see §4). This criterion ensures a high effective S/N, which in turn facilitates accurate classification of the galaxies as either star-forming or AGN-dominated (e.g. Kewley et al. 2001; Kauffmann et al. 2003a) and accurate metallicity determination from empirical strong line diagnostics (e.g. Kobulnicky, Kennicutt & Pizagno 1999; Kewley & Ellison 2008). This criterion automatically selects star-forming galaxies and will exclude passively evolving or 'red and dead' galaxies, as well as galaxies with very high extinction and metal-poor galaxies with faint emission lines.

5. Stellar mass estimates must be available (e.g. Kauffmann et al. 2003b; Tremonti et al. 2004). These are available in the Munich catalogs, are derived from spectral template fitting and have typical uncertainties of ∼0.1 dex.
Drory, Bender & Hopp (2004) have shown that the spectrally determined stellar masses compare well with those derived from optical and IR colors, and that they are good surrogates for the dynamical mass when log(M_⋆/M_⊙) > 10 (see also Brinchmann & Ellis 2000). At lower stellar masses, M_⋆ is larger than the dynamical mass by < 0.4 dex (Drory et al. 2004).

6. Metallicities as calculated by the Kewley & Dopita (2002) diagnostic must be available, although we do not require that both galaxies in a pair have known metallicities.

7. Galaxies must be classified as star-forming and not AGN dominated, according to the line diagnostic criteria given in Kewley et al. (2001). We impose this criterion since metallicities derived from strong line calibrations assume a stellar ionizing background and are not applicable if there is a (local) AGN component. Recently, Kewley et al. (2006b) have proposed a new AGN removal scheme that is more stringent than the original Kewley et al. (2001) criteria. However, Kewley & Ellison (2008) have shown that, for metallicities derived from the Kewley & Dopita (2002) strong line calibration, the mass-metallicity relation is identical for the Kewley et al. (2001) and Kewley et al. (2006b) AGN filtering schemes. We remove the criterion of AGN exclusion for our study of AGN fractions in §5.

From this master sample, we then select galaxies with companions that we shall refer to as 'galaxy pairs', although ∼5% consist of galaxies in triples and a minority of higher multiples. For inclusion in the sample of galaxy pairs, we further require that

8. Galaxies have one or more companions with projected physical separations of r_p < 80 h_70^-1 kpc. Although previous observational and theoretical studies have found 30 h_70^-1 kpc (∼20 h_100^-1 kpc) to be the approximate scale on which pairs start to exhibit distinct properties compared with the field (e.g. Barton et al. 2000; Patton et al. 2000; Lambas et al. 2003; Alonso et al. 2004; Nikolic, Cullen & Alexander 2004; Perez et al. 2006a), we consider wider pairs in order to investigate trends with separation. All pairs with separations r_p < 15 h_70^-1 kpc were inspected visually, since erroneous pair identifications do occur at small separations. The majority of spurious pairs were at r_p < 5 h_70^-1 kpc and occur e.g. when an HII region in a single galaxy is identified as a separate galaxy. For separations r_p > 10 h_70^-1 kpc, the fraction of spurious pair identifications is less than 1%.

9. The rest-frame velocity difference of a galaxy pair must be Δv < 500 km s^-1. This velocity offset was selected in order to provide a balance between contamination and statistics. Although a much smaller velocity separation reduces contamination, it also reduces the overall sample size, which may ultimately become a limiting factor in pair statistics. The trade-off between these effects has been addressed in Patton et al. (2000).

10. Relative stellar masses must be within a factor of 10. Although we expect to see more interaction-induced effects in pairs of almost equal mass (e.g. Woods, Geller & Barton 2006; Cox et al. 2007; Woods & Geller 2007), we include a wide range of mass ratios in order to investigate the relative impact of major and minor interactions.

If a galaxy fulfills the first seven of the above criteria, but not the latter three, it is a candidate for our control sample. A galaxy fulfilling all ten criteria may potentially be included in our sample of wide pairs.
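For concreteness, criteria 8-10 amount to a simple per-pair filter. The following is a minimal sketch (Python), not the authors' pipeline: the function and variable names are hypothetical, and the rest-frame velocity convention Δv = cΔz/(1+z) is an assumption, since the paper does not spell out its formula.

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def is_wide_pair(z1, z2, rp_kpc, m1, m2,
                 dv_max=500.0, rp_max=80.0, max_ratio=10.0):
    """Criteria 8-10: r_p < 80 h_70^-1 kpc, rest-frame |dv| < 500 km/s,
    and stellar mass ratio within a factor of 10."""
    dv = C_KMS * abs(z1 - z2) / (1.0 + min(z1, z2))  # assumed convention
    ratio = max(m1, m2) / min(m1, m2)
    return (rp_kpc < rp_max) and (dv < dv_max) and (ratio < max_ratio)

# Example: a pair at z ~ 0.07, 25 kpc apart, with a 3:1 stellar mass ratio
print(is_wide_pair(0.070, 0.071, 25.0, 3e10, 1e10))  # -> True
```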
Before constructing the final control and wide pairs samples, we make two further restrictions in order to make the two samples directly comparable. Both of these restrictions are driven by the requirement that the redshift and stellar mass distributions of the pairs and control samples should be statistically indistinguishable. This is an important requirement since the distributions of stellar mass and redshift can impact the observed ranges in properties such as luminosity and star formation rate.

The redshifts of the galaxies selected simply from the above criteria are shown in Figure 1, where the histogram of pairs' redshifts has been roughly normalized to the number of galaxies in the control sample for display purposes. Clearly, the redshift distribution of the pairs is skewed towards lower values than the control, which could potentially bias our results. This can largely be understood by examining the lower panel of Figure 1, which shows the projected separation of pairs as a function of redshift and demonstrates a clear excess of pairs at low redshift and wide separation. This is mostly due to the spectroscopic follow-up strategy of the SDSS survey. There is a 55 arcsecond fiber collision limit due to the size of the fiber housing, which prevents pairs with angular separations less than this from being observed spectroscopically on the same plate. However, contiguous plates have considerable overlap and some sky regions are observed more than once, so that many close pairs exist in the final spectroscopic catalog. The net effect on our preliminary pairs sample is that the spectroscopic completeness drops sharply below 55″, leading to the relative over-abundance of pairs with wide physical separations at low redshifts in Figure 1.

Fortunately, it is straightforward to model and correct for this effect. Patton & Atfield (2008) find that the ratio of spectroscopic to photometric pairs decreases from ∼80% at angular separations θ > 55″ to ∼26% (on average) at smaller separations. We therefore make a first attempt to correct the disparity in redshift distributions by randomly excluding 54/80 = 67.5% of galaxies in pairs with θ > 55″. We use this cull to compile our final wide pairs sample, which contains 1915 paired galaxies before AGN removal and 1716 galaxies with one or more companions after AGN removal.
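This cull amounts to a single random mask. A minimal sketch (Python; `theta_arcsec` is a hypothetical array of pair angular separations):

```python
import numpy as np

rng = np.random.default_rng(12345)  # fixed seed for reproducibility

def fiber_collision_cull(theta_arcsec, p_drop=54.0 / 80.0):
    """Randomly exclude 67.5% (= 54/80) of paired galaxies whose angular
    separation exceeds the 55 arcsec fiber-collision limit, bringing the
    wide-separation completeness down to the ~26% measured below 55 arcsec."""
    theta = np.asarray(theta_arcsec, dtype=float)
    drop = (theta > 55.0) & (rng.random(theta.size) < p_drop)
    return ~drop  # boolean mask of paired galaxies to keep
```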
When the stellar mass ratio of the pairs is not highly discrepant (0.3 < M_1/M_2 ≲ 3), the cull described above yields redshift distributions for the pairs and control samples that are statistically indistinguishable. However, for more contrasting mass ratios, the redshift distributions remain statistically different. This is a common, well-known feature of pairs' samples (e.g. Patton et al. 2000) and is due to the magnitude limited nature of the parent galaxy sample and the associated limit in dynamic range. Pairs with very disparate stellar mass ratios are biased towards low redshifts, because the magnitude limit of the survey hinders their detection (i.e. detection of a much lower mass, fainter companion) at higher redshifts. Since we want to be able to study pairs with stellar mass ratios up to 10, the control sample requires further culling. At this point, a simple prune in redshift is insufficient, due to the strong correlation between mass and redshift. At z ≲ 0.05, galaxies with stellar masses ranging from approximately 10^8.5 to 10^11 M_⊙ are detected. At higher redshifts, the lower mass galaxies are no longer detected, since they are generally too faint.

We therefore have to prune the control sample simultaneously in stellar mass and redshift. This is achieved by matching one control galaxy to each paired galaxy in mass-redshift space and repeating (without replacement) as many times as possible, while requiring that the KS probabilities of the control-pair mass and redshift comparisons remain at at least the 30% level. The matching process is done before any removal of AGN-dominated galaxies so that the analysis of §5 (on AGN fractions) can be carried out.
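The matching loop can be sketched as follows. This is a minimal Python sketch, not the authors' actual procedure; in particular, the nearest-neighbour metric in standard-deviation-scaled (z, log M_⋆) space is an assumption, as the paper does not specify how "matched in mass-redshift space" is implemented.

```python
import numpy as np
from scipy.stats import ks_2samp

def match_controls(pairs_z, pairs_m, ctrl_z, ctrl_m, min_ks_p=0.30):
    """Repeatedly match one control galaxy to each paired galaxy (nearest
    neighbour in scaled mass-redshift space, without replacement), stopping
    when a further pass would push either KS probability below 30%."""
    pz, pm = np.asarray(pairs_z), np.asarray(pairs_m)
    cz, cm = np.asarray(ctrl_z), np.asarray(ctrl_m)
    sz, sm = pz.std(), pm.std()          # scale z and log M* comparably
    available = np.ones(cz.size, dtype=bool)
    matched = []
    while True:
        pass_idx = []
        for z, m in zip(pz, pm):
            d2 = ((cz - z) / sz) ** 2 + ((cm - m) / sm) ** 2
            d2[~available] = np.inf
            j = int(np.argmin(d2))
            if not np.isfinite(d2[j]):   # control pool exhausted
                return np.array(matched, dtype=int)
            pass_idx.append(j)
            available[j] = False         # without replacement
        trial = matched + pass_idx
        if (ks_2samp(pz, cz[trial]).pvalue < min_ks_p or
                ks_2samp(pm, cm[trial]).pvalue < min_ks_p):
            return np.array(matched, dtype=int)  # keep only complete passes
        matched = trial
```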
For each of the 1915 (pre-AGN removal) paired galaxies, there are 23 control galaxies, i.e. the control sample contains 44,045 galaxies before AGN-dominated galaxies are removed. The KS probabilities that these samples of pairs and control galaxies are indistinguishable in redshift and stellar mass are 32% and 34% respectively (i.e. no formal statistical difference). Once the AGN-dominated galaxies have been removed, as is required for the majority of our analysis, the samples are reduced to 1716 paired galaxies and 40,095 control galaxies, a reduction in each case by approximately 10%. Figure 2 shows the redshift and stellar mass distributions for these fiducial samples. For both the paired and control samples, the mean stellar mass is log M_⋆ = 10.1 and the mean redshift is z = 0.073. We can now be confident that our control sample is well-matched to our pairs sample and should contain no observational bias that will affect our assessment of proximity induced effects.

The strict selection criteria that we impose mean that our sample of galaxies is not complete in either magnitude or volume. As noted above, the S/N criterion in particular will lead to a sample that excludes (at least some) galaxies that are highly reddened, very metal-poor and not actively star-forming. However, the same selection biases will apply equally to the control and the pairs samples, allowing us to make differential comparisons between the two. As described in the above discussion, our sample of pairs is also not complete. This should not introduce any bias into our pairs sample, since spectroscopic incompleteness does not depend significantly on the intrinsic properties of the galaxies. However, due to spectroscopic incompleteness, many true pairs will have a redshift measured for only one member galaxy, which may then fall into the control sample. Fortunately, any resulting contamination of the control sample is negligible, since only ∼2% of galaxies are found in close pairs (see Patton & Atfield 2008). Given that we are interested in the effects of mergers/interactions, it is also important to acknowledge the fact that some of the pairs in our sample will not be close enough for such encounters to occur. This contamination is on the order of 50% for the closer pairs (r_p < 30 h_70^-1 kpc) in our sample (Patton & Atfield 2008), and rises as pair separation increases (Alonso et al. 2004; Perez et al. 2006a). While we do not attempt to correct for this explicitly, we infer that (a) any differences seen between close pairs and the control sample are likely to be underestimated and (b) the wider pairs are likely to suffer from increasing contamination due to non-interacting systems.

One other parameter that may affect measured spectral properties is the fiber covering fraction (CF). Although aperture effects are likely to affect galaxy metallicities (Kewley, Jansen & Geller 2005; Ellison & Kewley 2005), we do not make any a priori cuts in CF. This is mainly because, after the above culls, the CFs are consistent between the pairs and control samples (see Figure 2). Moreover, we will be explicitly investigating the impact of CF, which is calculated by comparing the galaxy's photometric g-band Petrosian magnitude with the fiber magnitude in the same filter, as a free parameter in sections 3 and 4. However, we note that the quantities of stellar mass and SFR are corrected for aperture effects (Brinchmann et al. 2004) and therefore represent total quantities.

It is worth noting that one of the novel properties of our sample is the stellar mass selection criterion, whereas most previous samples have made no requirement on the relative fluxes/masses of their visually identified pairs. Moreover, when cuts have been made in order to investigate the effect of relative mass, flux is usually used as a surrogate for mass (e.g. Woods et al. 2006; Woods & Geller 2007). In Figure 3 we show the impact of this assumption by plotting relative fluxes versus relative stellar masses. The fact that the distribution of flux-to-mass ratios is flatter than 1:1 means that, for a given flux ratio cut, the completeness rate for the same mass ratio is quite high, but the contamination is significant. For example, a flux selection which requires a ratio within 2:1 selects 86% of galaxy pairs with masses whose ratios are within 2:1. However, 46% of the galaxies selected by this flux cut will have actual mass ratios outside the 2:1 range, leading to a high contamination rate. Selection by relative flux could therefore potentially dilute properties that depend sensitively on relative stellar mass.

The reason that the correlation between relative fluxes and masses is flatter than unity in Figure 3 can be understood in terms of specific star formation rates (SSFR). Recently, Zheng et al. (2007) have shown convincingly that SSFR, i.e. SFR per unit mass, is higher for lower mass galaxies. In turn, this broadly translates to a higher flux per stellar mass (F/M) for lower mass galaxies. Therefore, when the M_1/M_2 ratio is less than unity, i.e. the low mass galaxy is in the numerator, this translates to a generally higher flux ratio F_1/F_2 than the corresponding mass ratio, flattening the correlation.

With the stringent criteria outlined above, we have not only constructed one of the largest, but also one of the most rigorously selected samples of galaxy pairs to date. Moreover, with the combination of a wide range of derived spectral properties, photometric measurements and morphological decomposition, we have an extensive arsenal with which to tackle the effects of galaxy proximity.

Star Formation Rate In Galaxy Pairs

In this section, we investigate the effects of projected separation, relative stellar masses and fiber covering fraction on the SFR of paired galaxies. We use the results to select a pairs sample for the investigation of proximity effects on the LZ and MZ relations in the following section.

In the top panel of Figure 4 we show the star formation rate as a function of galaxy separation for the wide pairs in our sample. The figure demonstrates that galaxies in pairs with separations ≤ 30 h_70^-1 kpc have a median SFR that is higher than the control galaxies, by up to 40%, at 1-2σ significance. This result is consistent with previous studies of SFRs in close pairs of galaxies (e.g. Barton et al. 2000; Lambas et al. 2003; Nikolic et al. 2004; Geller et al. 2006).
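Comparisons of this kind reduce to binning the pairs in projected separation and normalizing each bin's median SFR to the control median. A minimal sketch (Python; the 13 h_70^-1 kpc bin width follows the binning quoted for Figure 5, and the array names are hypothetical):

```python
import numpy as np

def binned_sfr_enhancement(rp, sfr_pairs, sfr_control,
                           bin_width=13.0, rp_max=80.0):
    """Median SFR of paired galaxies in bins of projected separation,
    normalized to the control-sample median (cf. Figures 4 and 5)."""
    rp = np.asarray(rp)
    sfr_pairs = np.asarray(sfr_pairs)
    ctrl_median = np.median(sfr_control)
    edges = np.arange(0.0, rp_max + bin_width, bin_width)
    centers, enhancement = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (rp >= lo) & (rp < hi)
        if in_bin.any():
            centers.append(0.5 * (lo + hi))
            enhancement.append(np.median(sfr_pairs[in_bin]) / ctrl_median)
    return np.array(centers), np.array(enhancement)
```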
However, Barton et al. (2008) have suggested that the level of excess star formation in close pairs may have been under-estimated in these previous works due to the typically higher density environments inhabited by pairs relative to control galaxies. This conclusion may also apply to this work, although we discuss this further in the next subsection.

Star Formation and Relative Galactic Stellar Mass

We expect (e.g. Lambas et al. 2003; Woods et al. 2006; Bekki, Shioya & Whiting 2006; Cox et al. 2007; Woods & Geller 2007) that pairs with almost equal masses ('major' mergers/interactions) will exhibit more pronounced interaction-induced effects than unequal ('minor') mass encounters. Although dynamical mass may be the fundamental parameter which governs the outcome of galaxy interactions, stellar mass is both more readily determined from observations and a reasonable surrogate for dynamical mass above 10^10 M_⊙ (e.g. Brinchmann & Ellis 2000; Drory et al. 2004). Moreover, stellar mass is a quantity that is directly traced through many simulations, e.g. the minor pairs models of Cox et al. (2007). Only a handful of simulations have studied the effect of star formation in minor mergers, either in general (Mihos & Hernquist 1994; Cox et al. 2007) or for specific cases (e.g. Mastropietro et al. 2005 for the Milky Way-LMC). Based on this limited modelling, it has been found that induced central star formation in the larger galaxy of an unequal mass merger can eventually occur, albeit at a lower level than expected in a major merger, and usually when the interaction is well advanced, i.e. after several gigayears. In this section, we investigate whether unequal mass pairs can be affected by galaxy proximity and compare our results with pairs whose galaxies have comparable stellar masses.

The only previous observational studies to assess the effects in minor mergers in close galaxy pairs were those of Woods et al. (2006) and Woods & Geller (2007). The latter paper, which benefits from significantly better statistics than the former, finds that the specific SFR of the less massive (as inferred from a fainter magnitude) galaxy in a minor pair is enhanced compared to the field, whereas that of the more massive galaxy is not. However, these two previous studies relied upon relative magnitudes, and as we pointed out in §2, this can lead to a high rate of contamination. In this work, we use the measured stellar masses, corrected for aperture bias and determined by spectral modelling, and compare our results to the flux-selected minor pairs of Woods & Geller (2007).

We begin by assessing the impact of our mass ratio criterion of 0.1 < M_1/M_2 < 10 by considering sub-samples of galaxy pairs with different stellar mass ratios. For each mass cut, the matching of the control sample in stellar mass and redshift is repeated as described in §2 for our fiducial (wide) pairs sample. This ensures that the distribution of stellar masses is comparable between each pairs' sub-sample and its control sample. In Figure 4 we show the SFR as a function of separation for three different mass ranges (stellar mass ratios within 1:10, 1:3 and 1:2). From this Figure we draw two conclusions. First, the enhancement in SFR persists out to at least 30 h_70^-1 kpc for all three mass ranges considered, with the closer stellar mass ratio pairs showing an increase out to 40 h_70^-1 kpc. Second, and perhaps more interesting, is that the amount of SFR enhancement increases (and becomes more significant) for pairs whose stellar masses are most similar to one another.
The enhancement, which we found in the previous subsection to be 40% for pairs with stellar mass ratios within 1:10, increases to 60% and 70% for ratios within 3:1 and 2:1 respectively, with ∼2σ significance in each case. This confirms quantitatively the suggestion that major interactions, i.e. those between almost equal mass galaxies, will induce the most significant effects in one another. These results also demonstrate that the SFR can be affected even in samples with relatively discrepant masses, at least up to a ratio of 1:10 (as also concluded by Woods & Geller 2007 for their minor pairs).

At large separations (r_p ≳ 50 h_70^-1 kpc) we see an upturn in the SFR of the pairs. This is a complex effect that is driven by a combination of contamination from projected pairs that are not truly interacting and the way in which our control sample is constructed. Since the control sample has been culled in redshift and mass in order to match the distribution in the pairs sample, it is not representative of the true field population. Since pairs tend to be found in higher density environments (e.g. Barton et al. 2008), the mass-matched control sample has a higher mean stellar mass than the field (i.e. the pre-cull control sample). In turn, this means that the control sample galaxies are themselves biased towards denser environments and are therefore likely to have, on average, lower SFRs than the field. At wide separations, an increasing number of pairs are not truly interacting, leading to an increased contamination of the sample. The SFRs at these wide separations are therefore averages of the values of true interacting pairs (which at wide separations probably have SFRs tending towards the control mean) and contaminating field galaxies (which tend to have higher SFRs than the control mean). This leads to an apparent upturn in the SFRs at wide separations. The prominence of this upturn will depend on the actual mass matching of each mass ratio sub-sample. Since the 0.1 < M_1/M_2 < 10 mass-matched control sample is most similar to the field sample (i.e. the mass distribution of the pre-cull control sample), the upturn is much smaller (in fact, absent) than in the 0.5 < M_1/M_2 < 2 mass-matched control sample, which is most discrepant from the field mass distribution. This explains why the majority of previous surveys have not seen this upturn: they do not impose relative flux or mass cuts, hence their SFR versus separation correlations most closely approximate the top panel of Figure 4. For example, Lambas et al. (2003) see an upturn at r_p > 60 kpc for their L_1 ∼ L_2 sample, but not in their L_1 ≫ L_2 sample. Nonetheless, an upturn such as that seen in the middle and bottom panels of Figure 4 has been reported by Perez et al. (2006a) in their analysis of mock galaxy pair catalogs from cosmological simulations, by Nikolic et al. (2004) in their study of SDSS pairs and, as mentioned above, by Lambas et al. (2003). We conclude that in the absence of projection effects, the SFR of pairs with r_p > 50 h_70^-1 kpc would tend to the control value.

Next, we classify major pairs as those with mass ratios within 2:1 and minor pairs as those with more discrepant masses. We further distinguish between the more massive galaxy in a minor pair (M_gal/M_companion > 2) and the less massive galaxy in a minor pair (M_gal/M_companion < 0.5).
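In code, this classification is a short per-galaxy function (a sketch with hypothetical variable names):

```python
def classify_pair_member(m_gal, m_companion):
    """Major/minor classification at the 2:1 boundary defined above."""
    ratio = m_gal / m_companion
    if 0.5 <= ratio <= 2.0:
        return "major"
    return "minor: more massive" if ratio > 2.0 else "minor: less massive"

# Example: a 5e10 M_sun galaxy with a 1e10 M_sun companion
print(classify_pair_member(5e10, 1e10))  # -> "minor: more massive"
```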
Since a number of fundamental galaxy properties such as SFR depend on stellar mass (e.g. Brinchmann et al. 2004), simply using the matched control sample for comparison with minor/major pairs (whose mass distributions will be very different from one another) would not give a true indication of relative effects. We have therefore further adapted our control samples to be equivalent in mass distribution by selecting a control galaxy matched in stellar mass to each paired galaxy.

In Figure 5 we show the total SFR as a function of galaxy pair separation for three stellar mass scenarios: major pairs and the more/less massive galaxies in minor pairs. In the top panels we show individual SFRs and, in the middle panels, the median values in bins of 13 h_70^-1 kpc. The shaded region in the middle panels shows the median SFR (with vertical height corresponding to σ/√N) in the matched control sample. The overlap of the scatter in the data points (vertical error bars on the binned values) with the gray bar gives an indication of consistency with field values. In the lower panels we show the SFR enhancement relative to the control sample by normalizing each bin to the control median. The stellar mass matching of the control fields is particularly important here. It can be seen that the median values for the three middle panels are highest for the highest mass sub-samples.

Figure 5 demonstrates the, by now, familiar enhancement of SFR at small separations for galaxies with approximately equal masses (see also Figure 4). Although the result is not highly significant (∼1-2σ), we also find tentative evidence for higher SFR for the less massive galaxy in a pair at both close separations and at ∼60-70 h_70^-1 kpc (see the previous section for discussion of the turn-around in enhanced SFR as a function of separation). A similar conclusion has been drawn by Woods & Geller (2007). Although some of the binned SDSS data points for the more massive galaxy in a pair are also above the field mean, the size of the error bars makes this result less significant (barely 1σ) and difficult to draw conclusions from. If confirmed, these results would be consistent with the less massive pair member in an unequal mass interaction being susceptible to enhanced star formation, although less so than galaxies in equal mass interactions. In turn, this result has interesting implications for cosmic metal enrichment: whereas low mass galaxies can usually remain gas-rich because of low star formation efficiency, strong bursts of star formation during an interaction may increase metal production, and those metals may be more easily dispersed into the surrounding intergalactic medium. However, the results from this section are inconclusive and the analysis of Woods & Geller (2007) remains the strongest evidence for enhanced star formation in the less massive galaxies of minor pairs. In a complementary study of star-forming galaxies in the SDSS, Li et al. (2008a) have also recently found evidence that SFRs are more enhanced in lower mass galaxies with companions. Possible reasons that we have not found similarly significant results include 1) the different definition of major and minor pairs and 2) the smaller sample size of our work, mostly due to the criteria imposed in §2 (although Woods & Geller 2007 use the somewhat larger DR5, compared to our DR4 sample). The major pairs sample of Woods & Geller (2007) is 60% larger than ours, whilst their minor pairs sample contains almost twice the number of galaxies.
The median luminosity ratio for the Woods & Geller minor sample is ∼11 (compared with a median mass ratio of 3.85 in our sample), and ∼4 for the major sample (compared with our median major mass ratio of 1.38). Therefore, if luminosity ratio were taken as a substitute for mass ratio, more than half of the Woods & Geller (2007) major pairs sample would fall into our definition of a minor pair. In future work, it will be interesting to examine how selection based on relative stellar masses and luminosities (e.g. Figure 3) and the definition of major and minor pairs may affect results.

Our data confirm the conclusion of previous work (e.g. Barton et al. 2000; Lambas et al. 2003; Alonso et al. 2004; Nikolic, Cullen & Alexander 2004; Li et al. 2008a) that galaxies in pairs closer than ∼30 h_70^-1 kpc exhibit SFRs that are higher than in the 'field'. For the rest of this paper, we therefore define a sample of 'close pairs' where r_p < 30 h_70^-1 kpc. Although we have shown that approximately equal stellar mass pairs show higher proximity-induced SFRs, we elect to use the 0.1 < M_1/M_2 < 10 sample in order to maximise the statistical significance of our work. This selection also facilitates comparisons with previous works, which generally do not have relative stellar mass or flux limits in their pairs selection. The r_p < 30 h_70^-1 kpc, Δv < 500 km s^-1 and 0.1 < M_1/M_2 < 10 criteria now define our fiducial pairs sample unless otherwise stated. The merging timescale for these galaxies is ∼250-500 Myr (e.g. Patton et al. 2000; Masjedi et al. 2006).

Metallicities of Galaxy Pairs

In the previous section, we used SFR as a function of separation to define a 'close pairs' sample, with r_p < 30 h_70^-1 kpc, as those pairs most likely to exhibit interaction induced effects. We now use this close pairs sample to investigate the impact of proximity on galaxy metallicity. The metallicities of the SDSS galaxies can be determined using strong emission line diagnostics that are calibrated either empirically against 'direct' electron temperature determinations, or against theoretical photoionization models. A wide range of such metallicity diagnostics is currently on the market; some of the most popular include various empirical calibrations of R_23, originally formulated by Pagel et al. (1979), line indices such as those of Pettini & Pagel (2004), and calibrations which solve iteratively for the ionization parameter using photoionization models (e.g. Kewley & Dopita 2002; Kobulnicky & Kewley 2004). It is well known that at high metallicities these strong line diagnostics show a positive offset relative to the metallicities determined from electron temperature methods (e.g. Bresolin, Garnett & Kennicutt 2004; Bresolin 2007). Moreover, Kewley & Ellison (2008) have shown that strong, systematic differences exist between strong line diagnostics and have stressed the importance of using a single calibration where possible. In this paper, we use the Kewley & Dopita (2002) 'recommended' method, which can both overcome the usual double-value degeneracy of the R_23 method and solve for the ionization parameter. As noted in §2, the (necessary) selection of galaxies with strong emission lines means that our sample contains a dearth of metal-poor galaxies. However, not only does the consistent selection of control and paired galaxies ensure an internally fair comparison, but repeating our analysis with less stringent emission line detection constraints (3σ rather than 5σ) yields identical results for all of the tests performed in this section.
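To illustrate the structure of such an iterative scheme, a schematic fixed-point loop is sketched below. The two calibration functions are hypothetical placeholders, not the published Kewley & Dopita (2002) polynomial fits, which should be taken from that paper; only the iteration pattern (metallicity from R_23 at fixed ionization parameter q, q from O_32 at fixed metallicity, repeated to convergence) is the point here.

```python
import numpy as np

def q_from_o32(o32, log_oh):       # placeholder calibration, not KD02
    return 1.0e7 * (1.0 + o32) / (1.0 + 10 ** (log_oh - 8.9))

def oh_from_r23(r23, q):           # placeholder calibration (upper branch)
    return 9.1 - 0.3 * np.log10(r23) - 0.05 * np.log10(q / 1.0e7)

def solve_metallicity(r23, o32, tol=1e-4, max_iter=50):
    """Fixed-point iteration between metallicity and ionization parameter."""
    log_oh = 8.9                   # initial guess for 12 + log(O/H)
    q = q_from_o32(o32, log_oh)
    for _ in range(max_iter):
        q = q_from_o32(o32, log_oh)
        new = oh_from_r23(r23, q)
        if abs(new - log_oh) < tol:
            return new, q
        log_oh = new
    return log_oh, q

print(solve_metallicity(r23=5.0, o32=1.0))  # converges in a few iterations
```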
The Luminosity Metallicity Relation

The relationship between luminosity and metallicity is well established over 8 magnitudes in M_B (e.g. Salzer et al. 2005; Lee et al. 2006) and out to redshifts z ∼ 1 (e.g. Kobulnicky & Kewley 2004; Maier et al. 2005). The reason for the luminosity-metallicity (LZ) relation, and the tighter mass-metallicity (MZ) relation, is still unclear. Although yields from luminous, high mass galaxies indicate that the relation is driven by the depth of the potential well and mass loss during star formation (e.g. Tremonti et al. 2004), lower mass galaxies show a large scatter in effective yield, with some showing values as high as the most massive galaxies (Lee et al. 2006). Simulations of chemical evolution offer a variety of alternatives, including variable initial mass functions (Koppen, Weidner & Kroupa 2007), star formation efficiency (Brooks et al. 2007) and the interplay between metal-poor gas inflow and mass-loaded winds (Finlator & Davé 2007). Ellison et al. (2008) have recently shown that the normalization of the MZ relation depends on specific SFR and r_h, and conclude that differences in star formation efficiencies can explain these dependencies. However, the basic form of the relation remains intact over the full range in these properties and apparently does not depend sensitively on large-scale environment (Mouhcine, Baldry & Bamford 2007).

Regardless of the origin of the LZ relation, the enhanced star formation discussed in the previous section should ultimately impact on the correlation of luminosity and metallicity in close galaxy pairs. The direction of this impact depends on timescales. If a galaxy's metallicity is measured after an interaction-driven starburst is complete, then we may expect an enhanced metallicity in the HII regions where star formation has occurred. Conversely, if we measure the metallicity of the region experiencing the starburst whilst it is ongoing, the inflow of more metal-poor gas from the outer regions of the galaxy may decrease the HII region metallicity. Shifts in luminosity may also be applicable due to enhanced star formation. This question has recently been tackled by Kewley et al. (2006a), who used 86 galaxies in pairs selected from the CfA2 redshift survey and compared them with a control sample from the Nearby Field Galaxy Survey (NFGS). For both samples, nuclear spectra containing ∼10% of the galaxy's light were used for the metallicity determinations. Kewley et al. (2006a) found that galaxies with separations < 30 h_70^-1 kpc have metallicities that are offset downwards by 0.2 dex at a given luminosity. Further observational evidence that merger-induced starbursts lead to lower metallicities comes from studies of ultra-luminous infra-red galaxies (ULIRGs; Rupke, Veilleux & Baker 2007) and compact ultra-violet luminous galaxies (UVLGs; Hoopes et al. 2008). These populations, believed to have been recently involved in merger events, are more metal-poor by up to a factor of two compared to SDSS galaxies of the same mass. The simulations of Perez et al. (2006b) also support the concept of metal-poor gas inflow in pairs. They find that the gas phase metallicity of galaxies in simulated pairs is typically 0.2 dex higher when the integrated metallicity over 2 optical radii is compared to that over half an optical radius.
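In the comparisons that follow, offsets between pairs and control are quantified via binned medians. A minimal sketch of such a measurement (Python; hypothetical arrays of M_B and 12 + log(O/H), and the exact binning used in the paper is an assumption):

```python
import numpy as np

def binned_metallicity_offset(mb_pairs, oh_pairs, mb_ctrl, oh_ctrl, edges):
    """Median 12 + log(O/H) offset (pairs minus control) in bins of M_B,
    the quantity displayed in the binned comparisons of Figure 7."""
    offsets = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        p = (mb_pairs >= lo) & (mb_pairs < hi)
        c = (mb_ctrl >= lo) & (mb_ctrl < hi)
        offsets.append(np.median(oh_pairs[p]) - np.median(oh_ctrl[c])
                       if p.any() and c.any() else np.nan)
    return np.array(offsets)

# Example usage: bins of 0.5 mag between M_B = -22 and -17
edges = np.arange(-22.0, -16.5, 0.5)
```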
In Figure 6 we show the LZ relation for our SDSS samples of pairs and control galaxies. In the top left panel we show all galaxies in our close pairs sample, i.e. with transverse projected separations r_p < 30 h_70^-1 kpc. Kewley et al. (2006a) have argued that offsets from the field LZ relation will be most clear when the spectra are of a nuclear nature, i.e. only cover the central few kpc of the galaxy where the starburst is occurring. We therefore plot the LZ relation for three different CF cuts. In order to make any offsets between the field and pairs more clear, in Figure 7 we show binned versions of all the SDSS pairs, as well as of the various CF cuts. We note that due to the exclusion of very metal-poor galaxies from our sample, it is possible that any downward shift in metallicity in the pairs sample is underestimated. For comparison, we also show the Kewley et al. (2006a) CfA pairs sample and their NFGS control sample, both as individual galaxies and binned. The visual impression that the CfA pairs of Kewley et al. (2006a) have lower metallicities for their luminosity than the NFGS control galaxies is confirmed quantitatively with a 2D KS test, which shows that the LZ distributions of the two samples differ at the 98% confidence level.

If we consider the SDSS sample as a whole (top left panels of Figures 6 and 7), we see a mild tendency towards lower metallicities for pairs compared with the control sample. However, the offset is small, < 0.05 dex, compared to the offset seen by Kewley et al. (2006a), which is typically 0.1-0.2 dex. The main difference between the SDSS sample and the NFGS/CfA sample studied by Kewley et al. (2006a) is that the latter had nuclear spectra with CF ∼ 10%. The majority of the SDSS galaxies have much higher covering fractions (Figure 2). If the effect observed by Kewley et al. (2006a) is truly nuclear, then the typically higher covering fractions of the SDSS fibers may hide the impact of gas dilution in the galaxies' centers. It would therefore be more appropriate to consider only the SDSS galaxies (in both the pairs and control samples) with CF < 10%. The top right panels in Figures 6 and 7 show the individual galaxies and binned metallicities for the CF < 10% criterion. Although our sample of CF < 10% pairs is smaller than the CfA's (23 galaxies, compared with 37 in the CfA), the scatter in metallicity is also smaller for a given M_B, leading to smaller error bars (which represent the standard error on the mean). The SDSS CF < 10% control sample is also much larger than the NFGS: 2060 galaxies, compared with 43 at comparable separations. Figure 7 therefore shows the interesting result that, at least for intermediate luminosity galaxies, SDSS pairs with CF < 10% have marginally higher metallicities for their luminosity than the control sample. Recall that this offset is in the opposite sense to the CfA pairs studied by Kewley et al. (2006a). A 2D KS test gives a 3% probability that the SDSS control and pairs samples have the same LZ distributions. As stated above, the KS probability is 2% for the Kewley et al. samples, so both datasets give statistically significant results, but in contrary directions. It is worth noting that the covering fractions for the CfA and SDSS samples are calculated slightly differently: Kewley et al. consider the fraction of light in the slit relative to the B26 isophote, whereas we consider the fiber magnitude relative to the Petrosian magnitude in the g-band. However, this cannot explain the trend in our result, i.e., that we see a larger offset in the pairs' LZ relation relative to the control for higher CFs, which is contrary to the expectation from nuclear metallicity dilution.
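A note on the 2D KS tests quoted above: scipy provides no two-dimensional KS test, so tests of this kind are usually built on the Fasano & Franceschini (1987) generalization of the KS statistic. The sketch below computes a simple variant of that statistic (maximized over quadrants centred on every pooled data point) and obtains the p-value by permutation; it is a slow, illustrative implementation, not the routine the authors used.

```python
import numpy as np

def _ff_statistic(a, b):
    """2D KS-like statistic: max quadrant-fraction difference between the
    two (N, 2) samples, over quadrants centred on every data point."""
    d = 0.0
    for x, y in np.vstack([a, b]):
        for sx in (np.less, np.greater_equal):
            for sy in (np.less, np.greater_equal):
                fa = np.mean(sx(a[:, 0], x) & sy(a[:, 1], y))
                fb = np.mean(sx(b[:, 0], x) & sy(b[:, 1], y))
                d = max(d, abs(fa - fb))
    return d

def ks2d_permutation(a, b, n_perm=1000, seed=1):
    """p-value by permutation: shuffle the pooled points between the two
    samples and count how often the statistic is matched or exceeded."""
    rng = np.random.default_rng(seed)
    d_obs = _ff_statistic(a, b)
    pooled, na = np.vstack([a, b]), len(a)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if _ff_statistic(pooled[idx[:na]], pooled[idx[na:]]) >= d_obs:
            count += 1
    return d_obs, (count + 1) / (n_perm + 1)
```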
Comparison with the work of Kewley et al. (2006a)

The results in the previous subsection indicate an apparent discrepancy in the relative metallicities of galaxy pairs in the SDSS versus the CfA samples for nuclear (CF < 10%) spectra. On the one hand, Kewley et al. (2006a) find low metallicities at a given luminosity in close pairs, whereas we find tentative evidence for high metallicities compared with a control sample when the CF < 10%. Conversely, we do find lower metallicities in pairs when the CF > 20% (Figure 7), a regime in which Kewley et al. (2006a) have little data. In this subsection we investigate the cause of this apparent discrepancy.

First, we consider whether the small number of low CF galaxies in the SDSS (23, versus 37 in the CfA) could lead to disagreement relative to the nuclear LZ relation of Kewley et al. (2006a). We quantify the effect of small number statistics by bootstrapping 10,000 samples of 23 galaxy pairs from the CfA sample and calculating the 2D KS probability compared with the NFGS control sample. This test simulates the effects of the smaller number of pairs in the SDSS compared with the CfA, i.e. it tests whether the CfA/NFGS comparison would have detected an LZ offset if it had only had as many pair galaxies as the small CF bin of the SDSS. We find that for samples of 23 pairs a significant KS probability of < 0.05 is achieved in 86% of the bootstrap renditions, and a probability of < 0.02 for 63% of trials. Therefore, although we cannot completely rule out the possibility that small numbers are the cause of the apparent discrepancy between the SDSS and CfA nuclear LZ relations for pairs, it seems unlikely.

We next consider whether there are any obvious differences between the selection of Kewley et al. (2006a) and our samples. Both works rely on pair identification from transverse (projected) separation and relative velocity. We have selected our close pairs sample at r_p < 30 h_70^-1 kpc to match the closest separation bin of Kewley et al. (2006a). Our velocity cut is somewhat more stringent than that of Kewley et al., 500 km/s rather than 1000 km/s. However, repeating the LZ and MZ analyses with a 1000 km/s cut for the SDSS pairs does not change our results (increasing the velocity range only increases our pairs sample by 7%). The CfA pairs sample has a lower redshift range than the SDSS, the former having a lower redshift cut-off of z = 0.0077 and a median redshift of z = 0.018, which is close to the low z cut-off in the SDSS. However, we consider it unlikely that evolutionary effects can be significant over the redshift ranges covered by the two surveys. Ellison & Kewley (2005) and Kewley & Ellison (2008) have also stressed the importance of using the same metallicity diagnostics in comparisons, since there can be a factor of three offset between different calibrations. Kewley et al. (2006a) and this work both use the Kewley & Dopita (2002) 'recommended' metallicity calibration, so there should be no offset due to diagnostic differences.
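The bootstrap test described above is straightforward to reproduce schematically (Python; `cfa_pairs` and `nfgs_ctrl` are hypothetical (N, 2) arrays of (M_B, 12 + log(O/H)), and `ks2d_pvalue` stands in for any 2D KS routine, e.g. the permutation version sketched earlier):

```python
import numpy as np

def bootstrap_detection_rate(cfa_pairs, nfgs_ctrl, ks2d_pvalue,
                             n_sub=23, n_boot=10000, alpha=0.05, seed=2):
    """Draw n_boot subsamples of n_sub CfA pairs (with replacement) and
    record how often the 2D KS test against the NFGS control remains
    significant at level alpha."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_boot):
        pick = rng.choice(len(cfa_pairs), size=n_sub, replace=True)
        if ks2d_pvalue(cfa_pairs[pick], nfgs_ctrl) < alpha:
            hits += 1
    return hits / n_boot  # e.g. 0.86 would mirror the 86% quoted above
```

Note that with a permutation-based inner test this is computationally heavy; in practice one would use an analytic approximation to the 2D KS probability for the inner loop.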
At this point, it is instructive to compare the two control samples of this work and Kewley et al. (2006a). Although the selection of the CfA sample is done in the B-band, as opposed to the r-band selection of the SDSS pairs, Figure 8 shows that a similar range in M_B is probed by both samples (although the latter extends to slightly more extreme values at both ends of the M_B distribution, thanks to the larger sample). Figure 8 also shows that, despite our caveat in §2 that we may be missing low metallicity galaxies, the SDSS sample is not deficient in sub-solar abundance galaxies compared with the CfA. Nonetheless, from Figure 8 it is clear that the NFGS control galaxies are inconsistent with the SDSS control; a 2D KS test rules out the null hypothesis with 99.8% confidence. Therefore, despite apparently similar selection in terms of redshift, projected separation, Δv, metallicity diagnostic and CF, the LZ distributions of the NFGS and SDSS control samples are significantly different.

A possible clue as to the origin of the difference between the CfA/NFGS and SDSS samples is revealed by the trend in LZ offset with CF seen in Figure 7. Although the SDSS pairs show mildly enhanced metallicities for CF < 10%, for 10 < CF < 20% there is no offset compared with the control, but at 20 < CF < 50% the pairs are systematically more metal-poor. Since CF is obviously a strong function of galaxy half light radius, the trend in LZ offset with CF might actually be a trend in galaxy size. If confirmed, this would imply that galaxies with smaller r_h tend to have low metallicities for their luminosity/mass, whereas larger galaxies may be offset in the opposite direction. In Figure 9 we compare the r_h distributions of the CfA pairs with those of the SDSS pairs for two CF cuts: CF < 10% and 20 < CF < 50%. The histogram clearly shows that the CfA pairs have an r_h distribution that is skewed towards smaller sizes than the SDSS CF < 10% pairs. Therefore, although these two samples have similar covering fractions, the size distributions of the galaxies are very different. On the other hand, the SDSS 20 < CF < 50% and CfA pairs have very similar r_h distributions. In turn, the LZ relations of these two samples (CfA pairs and SDSS pairs with 20 < CF < 50%) show concordantly low metallicities for a given luminosity. We can see this explicitly in Figure 10, where we plot the LZ relation for different half light radii; galaxies with r_h < 3 h_70^-1 kpc are metal-deficient for their luminosity, but this effect is absent for larger galaxies. The small enhancement in metallicity that was present for small CFs in Figure 7 is absent for the large r_h sub-sample in Figure 10. This may be due to the fact that nuclear spectra are required to see the effect, i.e. the galaxies need to be large and the spectra must have small covering fractions. Our sample is not large enough to test this hypothesis, but it would clearly be interesting to obtain more nuclear spectra of galaxies with r_h > 6 h_70^-1 kpc in the future.

Finally, the results shown in Figures 7 and 10 also demonstrate that the impact of low metallicity gas infall is seen not only in the CF ∼ 10% nuclear spectra of the CfA pairs, but also in the larger covering fractions of the SDSS pairs. This indicates that the offset in the LZ relation may be driven by changes that occur on scales larger than 'nuclear'. There are (at least) two reasons why this might be the case. First, we may be observing the galaxies early enough in their interaction that the gas flows are still on-going, i.e. the metal poor gas is still on its way to the center. This would imply that the offset in the LZ plane on scales of several kpc is highly transient. Alternatively, galaxy interactions, which are thought to enhance bar formation (e.g. Gerin, Combes & Athanassoula 1990) and contribute to central gas flows (e.g. Friedli & Benz 1993), may result in galaxies with flatter abundance gradients (e.g. Martin & Roy 1994).
Combined with the transport of metal-poor gas to the center, this could result in a longer lasting suppression of the LZ relation in some galaxy pairs. We return to the reason for the offset in the LZ relation in section 4.4.

The Mass Metallicity Relation

We repeat the analysis of the previous section, but now replace luminosity with stellar mass. In Figure 11 we show the MZ relation for our control and close pairs samples for different cuts in covering fraction. Comparison with Figure 6 highlights the result of Tremonti et al. (2004) that the MZ relation is much tighter than the LZ relation, with a 1σ spread < 0.2 dex for a given stellar mass. In Figure 12 we show the binned MZ relation for the close pairs and control samples for all the SDSS galaxies as well as for the three CF cuts. Although there is a slight tendency towards marginally lower metallicities for a given mass in the full pairs sample, as seen in the binned LZ relation, the shift is again < 0.05 dex, and not significant given the error bars. However, the CF < 10% sample again shows a significant enhancement in metallicity at intermediate masses. The KS probability that the MZ distributions of the CF < 10% pair and control samples are drawn from the same population is 2%, i.e. as significant as the LZ result for the CfA sample (Kewley et al. 2006a) and slightly more significant than the LZ result for the SDSS pairs presented above. We see a similar trend in the offset in metallicities as a function of covering fraction in the MZ relation as in the LZ relation: an increase in metallicity for small CFs and lower metallicities for pairs with high CF spectra. However, although the offset towards lower metallicities in the 20 < CF < 50% bin is systematic in MZ, it is slightly less statistically significant than in the LZ. Whereas the offset in the LZ relation for 20 < CF < 50% is 0.05-0.1 dex, with the largest offsets at the lowest luminosities, the offset in MZ is consistently around 0.05 dex. This indicates that the brightest galaxies (M_B < −20) may be exhibiting a pure metallicity shift. This is perhaps not surprising, since a starburst of fixed luminosity will have a fractionally small impact on the luminosity of an intrinsically bright galaxy. Moreover, the flat slope of the LZ relation at bright magnitudes means that any luminosity shift will need to be large in order to be detected. However, if the metallicity shift is about 0.05 dex downwards for all luminosities/masses (as indicated in Figure 12), there may be an additional luminosity component to the LZ relation shift that contributes up to ∼0.4 mag.

Kewley et al. (2006a) argued that the offset observed in the LZ relation determined for their CfA pairs sample was driven by a difference in metallicity rather than luminosity. Their argument was based on the fact that their absolute magnitudes were derived from the r-band, where new star formation will contribute little continuum flux. Barton et al. (2001) also concluded that triggered star formation will not significantly increase the luminosity of a paired galaxy, based on comparisons of the Tully-Fisher relation. However, the marginally larger offset that we find in the LZ relation, particularly at M_B > −20, compared to the MZ relation for a given CF, raises the question as to whether some of the shift may be due to an increased luminosity in pairs, as well as a lower metallicity.
Although there is little continuum flux expected from a starburst in the r-band, the Hα line is present in this bandpass and may contribute significantly. To test whether the shift in the LZ relation may be due to increased luminosity in close galaxy pairs, we calculate the absolute magnitude in four SDSS filters (u, g, r and i) and use these magnitudes in the LZ relation. If an increase in luminosity from a central starburst is shifting the LZ relation of pairs towards brighter absolute magnitudes, we expect to see this effect more strongly in the blue filters. In Figure 13 we show the LZ relation derived for 20 < CF < 50% for SDSS control and pairs samples for absolute magnitudes in four filters. We first note that the LZ relation is much flatter for bluer filters, an effect particularly noticeable in the u-band. This is probably due to the high sensitivity of the u-band magnitude to instantaneous star formation, which smears out the underlying relation of metallicity with mass. The correlation of i-band magnitude with metallicity very closely resembles the MZ relation since redder filters more faithfully represent the underlying stellar mass. Figure 13 shows that the linear (fainter absolute magnitude) part of the LZ relation is shifted horizontally marginally more in the u-band filter (∼ 0.75 mags) than in the other filters (∼ 0.6 mags). Combined with the smaller shift in the MZ relation, this indicates that part of the overall shift of pairs relative to control galaxies in the LZ relation may be due to the brightening of pairs experiencing a starburst. This idea is further supported by the brighter median M B in the close pairs sample: −20.15 compared with −19.94 for the control galaxies. Recall that the two samples are well matched in mass (see Figure 2), so that this difference in absolute magnitude is likely associated with the additional star formation in pairs found in §3. Shioya, Bekki & Couch (2004) model the change in absolute magnitude in starbursting mergers and predict a total brightening of ∼ 1 magnitude in M B . However, fading happens rapidly and a brightening of a few tenths of a magnitude is commensurate with a time of only a few hundred million years after the burst. On the Shift in the LZ/MZ Relations The results in the previous sections, and shown in Figures 7 and 12, hint that the magnitude and direction of the offset in the LZ/MZ relations are a function of covering fraction. We have also shown that the dependence on CF is a manifestation of a strong empirical dependence on the intrinsic galaxy half light radius. Ellison et al. (2008) have shown that a segregation in the MZ relation exists even within the control sample of galaxies. However, in the control sample of non-paired galaxies used by Ellison et al. (2008) there is a shift towards lower metallicities for larger radii. In the pairs sample, it is the galaxies with the smallest half light radii that show lower metallicities for a given mass. The mechanism for the metallicity shift in pairs is therefore likely to be driven by a different physical cause. Based on qualitatively similar downward shifts in metallicity for a given stellar mass in ULIRGs and compact UVLGs (Rupke et al. 2007; Hoopes et al. 2008), but the absence of a significant dependence on large scale environment (Mouhcine et al. 2007), it is likely that this effect is due to merger activity.
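As an aside to the four-filter test above, the per-filter absolute magnitudes are straightforward to compute once a cosmology is fixed. The following minimal sketch assumes a flat ΛCDM cosmology with H 0 = 70 km s −1 Mpc −1 , omits k-corrections and Galactic extinction for brevity, and uses hypothetical input values; it illustrates the idea and is not the pipeline used in this work:

# Sketch: per-filter absolute magnitudes, M = m - distance modulus.
# Assumes a flat LambdaCDM cosmology (H0 = 70, Om0 = 0.3); k-corrections
# and extinction are omitted here, unlike in a full analysis.
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def absolute_magnitudes(apparent_mags, z):
    """apparent_mags: dict of filter -> apparent magnitude; z: redshift."""
    mu = cosmo.distmod(z).value  # 5*log10(d_L / 10 pc), in magnitudes
    return {band: m - mu for band, m in apparent_mags.items()}

# Hypothetical galaxy at z = 0.05 with ugri apparent magnitudes.
mags = absolute_magnitudes({"u": 18.9, "g": 17.4, "r": 16.8, "i": 16.5}, z=0.05)
print({band: round(M, 2) for band, M in mags.items()})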
In this subsection we explore two possible 'fundamental' parameters that may be the underlying cause of the r h dependence of the MZ relation for paired galaxies. Finlator & Davé (2007) have recently proposed a general model (i.e. not specific to galaxy pairs) for the existence and form of the MZ relation. These authors suggest that the MZ (and by association, the LZ) relation can be understood via the interplay of gas accretion from the intergalactic medium, star formation and subsequent mass loss through winds. In this model, there is an equilibrium metallicity for a galaxy of a given mass from which the galaxy may be displaced by the inflow of metal-poor material. In response to the deposition of fresh fuel, which in turn increases the gas surface density, the galaxy will experience an increase in its SFR. A key parameter in this model is the ratio of the galaxy's dynamical time (t dyn ) and the dilution time (t d ). The dilution time is defined as the time taken for the galaxy to recover from the injection of metal poor gas, and return to its equilibrium metallicity. If t d < t dyn then the galaxy 'recovers' its equilibrium metallicity promptly, leading to very little scatter in the MZ relation. Conversely, if t d > t dyn , then the galaxy struggles to recover promptly from inflows. In Figure 14 we test the effect of t dyn on the normalization of the LZ relation by splitting the pairs and control galaxies by dynamical time, which we calculate from r h and stellar mass. For short dynamical times, we find a tendency for pairs to have low metallicities for their luminosity. This could be understood, in the context of the model described above, if galaxies with short t dyn are those that most efficiently funnel metal-poor gas. However, it is then difficult to explain why galaxies with longer dynamical times should have metallicities higher than the control sample. Enhanced metallicities might be associated with induced star formation that has already deposited its metals back into the ISM, but this is unexpected for long t dyn , which should have less prompt induced star formation than galaxies with short t dyn . We therefore conclude that dynamical time is unlikely to be the fundamental parameter driving the sensitivity of the LZ in pairs to r h . This is perhaps not surprising given that the gas accretion in the 'field' galaxies simulated by Finlator & Davé (2007) occurs via a very different mode than the infall of gas to the nucleus of a paired galaxy. That is, the former is dependent on the free-fall time of gas from the intergalactic medium, whereas the latter requires funnelling of gas to the center that is already settled in the outer part of the galactic disk. Bulge Fraction The segregation of the galaxy pair LZ/MZ relations empirically depends on r h . Galaxies with smaller sizes for a given mass will have a higher mass density, whereas galaxies with larger r h and the same mass will have shallower mass potentials. We therefore next consider whether it is the spatial mass distribution in galaxies that drives the offset in the LZ and MZ relations of paired galaxies. Simulations of galaxy interactions have previously shown that one of the factors that regulates gas inflow and nuclear starbursts is the relative prominence of the galaxy's bulge (e.g. Mihos & Hernquist 1994; Cox et al. 2007). Bulges appear to provide stability against gas inflow, so that galaxies with low bulge fractions more efficiently funnel gas to their centers.
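For reference, the dynamical-time split used in Figure 14 above can be approximated from r h and stellar mass alone. The sketch below assumes t dyn ≈ sqrt(r h 3 /(G M * )), whose order-unity normalization is an assumption and may differ from the definition actually used in this work:

# Sketch: dynamical time from half light radius and stellar mass.
# Assumes t_dyn ~ sqrt(r_h^3 / (G * M_star)); the proportionality
# constant is an assumption, not taken from the paper.
import numpy as np
from astropy import units as u
from astropy.constants import G

def dynamical_time(r_h_kpc, mstar_msun):
    r = r_h_kpc * u.kpc
    m = mstar_msun * u.Msun
    return np.sqrt(r**3 / (G * m)).to(u.Myr)

# Example: r_h = 3 kpc, M* = 1e10 Msun gives a few tens of Myr.
print(dynamical_time(3.0, 1e10))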
We therefore investigate whether bulge fraction may be driving the different offsets in the LZ/MZ relations for different galaxy half light radii. In Figure 15 we show the histogram of i-band bulge-to-total (B/T) ratios for galaxies with close companions. We chose the i-band for this comparison since the B/T fractions measured in blue filters may primarily measure any increase in nuclear star formation (e.g. Paper II). The i-band is selected to be a good indicator of the underlying mass distribution between the bulge and the disk. In Figure 15 we have further divided the close pairs sample into those galaxies which have small half light radii, r h < 3 h −1 70 kpc, and those with larger sizes. Figure 15 shows that small r h galaxies in close pairs tend to have higher bulge fractions than large galaxies. The KS probability that the two distributions are the same is 0.007. Figure 15 therefore shows a potential link between r h and B/T. Cox et al. (2007) have suggested that galaxies in unequal mass mergers with B/T > 0.3 will have burst efficiencies 3 times lower than a bulgeless galaxy in an otherwise identical interaction. Paired galaxies with r h < 3 h −1 70 kpc appear to have a marked dearth of bulge fractions below this value, indicating that small galaxies may be less efficient at funnelling gas to their centers for star formation. A possible explanation for the offsets in the LZ relation seen in Figure 7 may therefore be the connection between galaxy size and typical bulge fraction. Indeed, dividing the galaxy samples by B/T does show an LZ offset for large, but not small, bulge fractions (see Figure 14). This can be explained if smaller galaxies (r h < 3 h −1 70 kpc), which tend to have B/T > 0.3 (see Figure 15), have their metal-poor gas reservoirs disrupted in an interaction, leading to an overall injection of metal-poor gas into the central ∼ 5-10 h −1 70 kpc. However, this gas is not efficiently funnelled into the very center of the galaxy, leading to less efficient star formation and overall lower gas metallicity extending over a projected area of several kpc. Although this gas may eventually experience a starburst, Cox et al. (2007) have shown that this event is delayed relative to the initial (first passage) starburst by ∼ 1 Gyr. Larger galaxies, which are more likely to have B/T < 0.3, more efficiently funnel gas to their centers, leading to a prompt nuclear starburst and rapid metal-enrichment and recovery to metallicity levels commensurate with the control sample (Figure 14). To further test this hypothesis, in Figure 16 we plot the bulge g − r colors for 3 cuts in B/T, where the cuts are applied to both the control and pairs samples. We find that the galaxies with the smallest bulge fractions (B/T < 0.3) show no difference in g − r color between the pairs and the control, compared with offsets of 0.31 and 0.18 for 0.3 < B/T < 0.6 and B/T > 0.6 respectively. Indeed, the distribution of g − r colors for the lowest bulge fraction galaxies is actually consistent (KS probability = 0.19) between the control sample and the pairs. The key to interpreting this result is relative timescales: that of color changes following a starburst versus interaction timescales. The scenario described above, in which r h depends on B/T, the latter parameter being a determining factor in the efficiency of nuclear star formation, could explain the g − r distributions of Figure 16 if the timescale for post-starburst color changes is shorter than, or comparable to, the dynamical time of the pair.
Bruzual & Charlot (1993) show that for a 10 7 year burst of star formation, the optical colors evolve most rapidly over the first 10 8 years after the starburst. After this point, both the models and actual star cluster data show a relative plateau in color, changing by less than 0.1 mag in B − V up to 1 gigayear. Moreover, the fading of a starburst is typically a few magnitudes from 10 7 − 10 8 years after the burst, after which it will usually be barely visible on top of the continuous, ambient star-forming galaxy population (Sawicki, private communication), although the exact contrast will of course depend on the relative strength of the starburst. The typical dynamical time of close pairs is of the order of a few hundred Myrs to half a gigayear (Mihos & Hernquist 1996; Barton et al. 2000; Patton et al. 2000). We therefore speculate that one explanation of our observations is that many of the close pairs in our sample have already experienced gas disruption from an initial pass ∼ 10 8 years ago. In the larger galaxies (which have a tendency towards smaller bulge fractions) this has resulted in a prompt nuclear starburst and metal-enrichment leading to high metallicities for a given luminosity compared with the field, but a stellar population that has already lost its massive O and B stars. In the smaller galaxies, the re-distribution of metal-poor gas has led to a lower metallicity for a given luminosity compared with the field. In these bulge dominated galaxies, star formation still occurs, but is delayed relative to the first passage (Cox et al. 2007), so we still see the evidence of on-going activity in their colors. AGN Fraction There is strong observational and theoretical evidence linking the interactions of galaxies and the onset of nuclear activity. Storchi-Bergmann et al. (2001) found a correlation between central star formation activity and AGN in interacting galaxies, providing a causal link between the two processes. This observation was confirmed by Kauffmann et al. (2003a) who found that the star formation in AGN dominated galaxies is distributed over the central few kpc of active galaxies. Kauffmann et al. (2003a) also found that a larger fraction of AGN galaxies (as opposed to non-active massive galaxies) have experienced significant bursts of star formation in the past few gigayears. Alonso et al. (2007) draw a similar conclusion, based on lower values of the break index D n (4000), which indicate more recent star formation in visibly merging galaxies with AGN activity. In previous sections, we have presented evidence for central starburst activity in close galaxy pairs; do we see any evidence for enhanced AGN activity in our pairs that has followed the starburst? To investigate this question, we remove the criterion that galaxies must be classified as HII (star-forming) galaxies and also include those that have been classified as dominated by an AGN ionizing spectrum. The classification of galaxies as star-forming or AGN dominated can be achieved with a variety of line strength diagnostics; in this work we use the diagnostic of Kewley et al. (2001). This leads to an approximate 10% increase in the size of our pairs and control samples. However, we impose the criterion that the bulge-to-total ratio be in the range 0 < B/T < 1, i.e. that the galaxy is fitted with two components; this excludes pure disks and pure bulges and significantly reduces the number of galaxies considered.
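For concreteness, the Kewley et al. (2001) diagnostic referred to above is the theoretical 'maximum starburst' curve on the [NII]/Hα versus [OIII]/Hβ diagram. The sketch below shows how such a cut can be applied; the input line ratios are hypothetical, and the exact implementation used on the SDSS catalogs may differ:

# Sketch: AGN vs. star-forming classification with the Kewley et al. (2001)
# maximum-starburst line on the BPT diagram. Galaxies above the curve are
# classified as AGN-dominated; the input line fluxes are hypothetical.
import numpy as np

def is_agn(nii_halpha, oiii_hbeta):
    """nii_halpha, oiii_hbeta: flux ratios [NII]6584/Halpha, [OIII]5007/Hbeta."""
    x = np.log10(nii_halpha)
    y = np.log10(oiii_hbeta)
    # Kewley et al. (2001): log([OIII]/Hb) = 0.61 / (log([NII]/Ha) - 0.47) + 1.19
    # The curve only applies for x < 0.47; everything to the right is AGN-like.
    if x >= 0.47:
        return True
    return y > 0.61 / (x - 0.47) + 1.19

print(is_agn(0.3, 1.0))   # below the curve -> False (star-forming)
print(is_agn(1.5, 5.0))   # well to the upper right -> True (AGN)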
Furthermore, although our main pairs sample is still defined as containing galaxies with companions whose separations lie in the range r < 30 h −1 70 kpc, we also consider a wide pairs sample of galaxies whose companions have separations 30 < r < 80 h −1 70 kpc. The wide pairs sample acts as a consistency check, since any differences due to proximity should be weaker in the wide pairs sample than in the sample of close pairs. A summary of the numbers of galaxies in the various samples considered in this section is given in Table 1. We now examine the fraction of galaxies in the pairs versus control sample which are classified as AGN as a function of color, B/T and smoothness; our results are given in Table 1. The smoothness parameter, S, is derived from the GIM2D bulge+disk fits as described in detail by Simard et al. (2002). In brief, S measures both the smoothness of the disk+bulge and its asymmetry, with higher values of S indicating a higher degree of asymmetry across the galaxy within 2 half light radii. Smoothness is therefore a good indicator of morphology, with later type galaxies exhibiting generally higher values of S (McIntosh, Rix & Caldwell 2004). Here, we use S g , the smoothness as measured in the g-band. In Figure 17 we show the fraction of 'all' galaxies (i.e. corresponding to the first line in Table 1) that are AGN as a function of redshift. The control galaxies show a steady increase in AGN fraction with redshift. However, this is likely to be dominated by systematic selection rather than physical effects. Since the stellar mass distribution is strongly skewed to higher values at higher redshifts (see discussion in §2) and higher mass galaxies have higher AGN fractions (lines 2 and 3 in Table 1) it is not surprising that the control galaxies exhibit increasing AGN fraction at higher redshifts. Restricting our sample to only galaxies with stellar masses log(M * /M ⊙ ) > 10.5 reverses the trend and gives lower AGN fractions at higher redshift. This is likely to be due to aperture bias (e.g. Kauffmann et al. 2003a). Despite these systematic effects, we can still compare differentially the AGN fraction in the control and pairs samples as a function of redshift. Apart from the lowest redshift bin in Figure 17, the AGN fractions are consistent between the control and the pairs samples. However, the small number of pairs (particularly at high redshift) in each redshift bin means that the uncertainties on AGN fraction are quite high. The results in Table 1 show that different selection criteria yield different AGN fractions. In general, more massive, redder, elliptical (low S g , high B/T) galaxies have a higher AGN fraction than less massive, bluer, spiral galaxies. There are a few selection criteria for which the close pairs have a higher AGN fraction than the control, e.g. (g − r) bulge ≥ 0.8. However, in no case do we see a higher AGN fraction for close pairs than for both the wide pairs and the control samples. The wide pairs act as a consistency check because a) they do not show proximity induced effects such as enhanced SFR or offset in LZ and b) we know that the wide pairs sample is likely to be quite highly contaminated (e.g. Perez et al. 2006a). The fractions given in Table 1 therefore do not provide any convincing evidence that interactions lead to an increased AGN fraction in close pairs. A similar conclusion was reached by Barton et al. (2000) for their CfA redshift pairs sample. A larger and more recent study by Alonso et al.
(2007) draws the same conclusion -the distributions of properties such as color, concentration (analogous to our B/T ratio) or morphology (measured here by S g ) are indistinguishable for close pairs and control galaxy samples. These results are consistent with the finding of Li et al. (2006) that only one AGN in 100 has an extra neighbour within 70 kpc compared with a control sample of non-AGN matched in mass and redshift. Extending this work beyond galaxy pairs, Miller et al. (2003), and references therein, found that AGN fraction is also independent of environment in groups and clusters. Most recently, Li et al. (2008b) have used a sample of 90,000 AGN from the SDSS DR4 to demonstrate that although active galaxies with close neighbours show similar enhancements in star formation as non-AGN galaxies, the presence of close neighbours does not promote nuclear activity. Conversely, Woods & Geller (2007) find a higher AGN fraction in both minor and major pairs compared with field galaxies in a sample of 1200 galaxies with companions in the SDSS DR5. If we had not considered the close and wide pairs separately, and had considered the latter as a 'secondary control', we would have drawn an identical conclusion for some of the subsets considered in Table 1. However, our expectation that the wide pairs should approximate the control sample leads us to reject the significance of the increased AGN fraction in the three cases where it is seen in Table 1. The dynamical and burst timescales of close pairs are typically a few hundred Myrs (Mihos & Hernquist 1996; Barton et al. 2000), an order of magnitude shorter than the time-since-burst of the Kauffmann et al. (2003a) AGN sample. Taken together, this paints a picture of delayed AGN activity that begins much later than the initial central starburst. This is also the scenario provided by merger models which show that starbursts in the central regions of galaxies can be seen early in the interaction process. However, accretion rates only increase later when the merging is much more advanced, i.e. after at least a Gyr, and the galaxy has formed a massive elliptical (e.g. Bekki & Noguchi 1994; Di Matteo, Springel & Hernquist 2005; Bekki et al. 2006). The simulation results are borne out observationally by the work of Alonso et al. (2007) who, having found no distinction in galaxy properties for their pairs/control sample, visually classified the subset of pairs that were clearly interacting or merging. This visual classification led to a clear distinction in the properties of galaxies that were actively merging, rather than those that were simply close in ∆v and separation. Summary and Conclusions We have presented a sample of 1716 galaxies with close (r p < 80 h −1 70 kpc, ∆v < 500 km s −1 and 0.1 < M 1 /M 2 < 10) companions selected from the SDSS DR4, whose properties we have compared with a control sample of 40095 galaxies. The combination of photometric and spectroscopic data for these galaxies yields a consistent, large sample of properties including metallicity, SFR, mass, B/T ratios, colors and AGN contribution. Our main conclusions are • Star Formation Rate and Proximity: Galaxy pairs have higher SFRs by up to 70% for separations < 40 h −1 70 kpc compared with a control sample of galaxies with an equal stellar mass distribution. This result is in agreement with inferences from numerous other studies (e.g. Barton et al. 2000; Lambas et al. 2003; Nikolic et al.
2004) which have measured the enhancement of Hα equivalent width as a function of separation. • Star Formation Rate and Relative Galactic Stellar Mass: The enhancement in SFR is largest for galaxies in pairs with mass ratios 0.5 < M 1 /M 2 < 2 and steadily decreases for paired galaxies with more discrepant stellar masses. We find tentative evidence for enhanced SFR in the less massive galaxy of a minor (mass ratio greater than 2:1) pair, but the result is not statistically significant. The (luminosity-selected) pairs study of Woods & Geller (2007) provides the strongest evidence for a more enhanced SFR in the lower mass galaxy of a pair. • Luminosity- and Mass-Metallicity Relation: We find an offset in the LZ and MZ relations for galaxies in pairs with r p < 30 h −1 70 kpc relative to our control sample. For galaxies with small half light radii (r h < 3 h −1 70 kpc), which tend to be observed with large covering fractions in the SDSS, we find a 0.05-0.1 dex offset in the LZ relation towards lower metallicity in the pairs compared with the control. This is consistent with the previous result of Kewley et al. (2006a). A shift is also present in the MZ relation for large CF/small r h galaxies, at the 0.05 dex level. Based on the LZ relation derived for absolute magnitudes in different SDSS filters we conclude that the shift is partly in metallicity (∼ 0.05 dex) and partly in luminosity (up to 0.4 mags at M B > −20). We find tentative evidence that larger r h galaxies (which tend to be observed with small covering fractions) may have enhanced metallicity for a given mass/luminosity in pairs relative to the field. We investigate what fundamental parameters may drive the empirical dependence of the LZ/MZ offsets on r h . We conclude that a dependence on bulge fraction provides a picture consistent with the observations. In this scenario, the smaller galaxies (with half light radii typically r h < 3 h −1 70 kpc) tend to have larger bulges, which delays the interaction-induced star formation. • AGN Fraction: For given cuts in color, bulge fraction and smoothness, pairs of galaxies have AGN fractions consistent with the field, in agreement with the conclusions of Barton et al. (2000) and Alonso et al. (2007). However, redder galaxies and those with more symmetric morphologies have higher AGN fractions (∼ 20-30%) than blue or asymmetric galaxies (∼ 5-10%). Overall, our results support the picture that close interactions (within a few tens of kpc) between galaxies cause gas to flow into the central regions, triggering new star formation. The outer parts of the galaxy, and the disk, are largely unaffected by additional star formation (e.g. Paper II). The process of gas infall and star formation is most efficient for approximately equal mass galaxies, and in interactions of galaxies with low bulge fractions. The observed shift in the LZ/MZ relations depends on the relative timescales of the interactions, gas flows and induced star formation. Based on the consistency of AGN fraction in the pairs and control samples, we conclude that all of these processes occur on timescales shorter than the activation of the central black hole. We are indebted to both the SDSS team for their provision of quality, public datasets, and also to the Munich group for making public the results of their spectral template fitting. This work could not have been done without the hard work and community spirit of the many people who provided these catalogs.
We are particularly grateful to Jarle Brinchmann who provided advice and guidance on using the Munich catalogs. We are also grateful to Lisa Kewley for providing the metallicities used in this paper and to Betsy Barton for providing data on the CfA pairs sample and for several useful discussions. We also benefitted from discussions with TJ Cox, Marcin Sawicki, Evan Skillman and Christy Tremonti. SLE, LS and DRP acknowledge the receipt of NSERC Discovery Grants which funded this research. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History,

[Table 1, final rows: AGN fractions 0.08 ± 0.01, 0.07 ± 0.01, 0.09 ± 0.02; S g < 0.1, (g − r) bulge ≥ 0.8: 9614, 220, 69 galaxies, fractions 0.19 ± 0.01, 0.29 ± 0.04, 0.33 ± 0.07; S g < 0.1, (g − r) disk ≥ 0.5, (g − r) bulge ≥ 0.8: 4584, 105, 29 galaxies, fractions 0.33 ± 0.01, 0.47 ± 0.07, 0.52 ± 0.14.]

Fig. - SFR for galaxies with a companion, as a function of pair separation for three different mass ratio samples. The SFRs have all been normalized to the median control value for that mass range. This figure shows an increase in SFR relative to the field for projected separations r p < 30 h −1 70 kpc for all mass ratios. As the disparity in masses decreases (from top panel to bottom), this enhancement increases in magnitude, significance and out to larger separations. The apparent increase in SFR at r p > 50 h −1 70 kpc is due to contamination effects; see text for details.

Fig. - SFR for galaxies with a companion, as a function of pair separation for three different mass samples. The first column of 3 panels represents major pairs (mass ratio < 2:1), the middle 3 are the less massive galaxies in minor pairs and the right-most column are the more massive galaxies in minor pairs. Upper panels: individual galaxies. Middle panels: SFRs binned by separation. The gray region shows the field median for galaxies with a matched mass distribution (see text for details). Lower panels: SFRs normalized to the median control value. The left-hand panels for major mergers confirm the results of Figure 4. We find no convincing evidence for enhanced SFR in minor mass pairs.

Fig. - Comparison with Kewley et al. (2006). In all panels, filled points refer to control samples and open points to pairs. For the SDSS sample, we show both the full control/pairs sample and CF cuts as in Figure 6 (see panel labels). The Kewley et al. (2006) data also correspond to CF ∼ 10% and are shown both unbinned (bottom left panel) and binned (bottom right panel).

Fig. - Control samples from Kewley et al. (2006) and this paper (top panel) and the pairs from the same works (bottom panel). Symbols are as before: filled points are control, open points are pairs; stars are for Kewley et al. (2006), circles/dots for the SDSS. Only the CF<10% galaxies from the SDSS have been plotted in order to be comparable with Kewley et al.

Fig. - Luminosity-metallicity (LZ) relations for our control sample (filled circles) compared with galaxies with close companions (open circles) for 20 < CF < 50%. The four panels show the LZ relation as determined for absolute magnitudes in four different SDSS filters: u, g, r and i. The persistence of an offset in the LZ relation even in the reddest SDSS filters indicates that the shift is predominantly in metallicity, not higher luminosities due to increased star formation.

Fig. - Histogram of i-band B/T ratios for galaxies in pairs. The solid line shows galaxies with r h < 3 h −1 70 kpc and the dashed line is for larger galaxies. The solid histogram has been scaled up by a factor of 4 for display purposes.
2008-03-03T01:05:03.000Z
2008-03-03T00:00:00.000
{ "year": 2008, "sha1": "3b18daa233dce363de39ff5a975c6602f95398ac", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0803.0161", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3b18daa233dce363de39ff5a975c6602f95398ac", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
104410807
pes2o/s2orc
v3-fos-license
Defects and their range in pure bismuth irradiated with swift Xe ions studied by positron annihilation techniques Investigations of defects and their spatial distribution in Bi irradiated with 167 MeV Xe26+ ions of different doses have been performed using conventional positron lifetime spectroscopy and a variable energy positron beam. In the implanted layer, in which ions are traveling, interacting with atoms and stopping, only clusters which consist of more than eight vacancies were found. This was assigned from ab initio theoretical calculations of the positron lifetime in vacancy clusters in Bi. The thickness of this layer corresponds to the range of implanted ions calculated from the SRIM code. However, beyond this layer, an extended layer with such defects has also been found. Its thickness is comparable to the thickness of the implanted layer and it depends on the dose. Defects induced by implantation are also present near the entrance surface, and their concentration depends on the dose of implanted ions as well. Three methods for reconstructing the actual mean positron lifetime, and thus the induced depth defect distribution, have been proposed; two of them are used in the current research. Introduction and motivation Many aspects of swift ion-solid interactions have been intensively studied, including the morphology of atomic defects generated during this interaction [1]. This is important because they play a key role in the constitution of the final physical properties of the region exposed to this interaction. It is generally accepted that defects are generated mainly at the end of the ion track as a result of inelastic nuclear collision cascades, when the ion energy is of the order of keV. This takes place in the nuclear stopping power regime. However, they can also occur in the electronic regime, i.e., during ionization and excitation of electrons, when the ion energy is much higher. According to the thermal spike model, strong coupling of excited electrons with phonons results in rapid heating and cooling of the material in the vicinity of the track. This leads to a transient and highly disordered zone [2]. Damage of the target takes place also at the entrance surface, where target atoms are sputtered [3]. Thus, implantation of swift ions into a target results in the generation of damage ranging from point defects to phase transitions. It is accepted that the thickness of the damaged layer is from nanometers to a dozen or so micrometers, which overlaps with the ion projectile range (R d ). The magnitude of the damage induced depends not only on the irradiation conditions, i.e., the incident energy of the ions, their charge and the fluence, but also on the physical and chemical properties of the irradiated target material. The response to irradiation must be different depending on the material, even under the same conditions. Here the question arises about the depth distribution of the generated defects. Theoretical simulations based, for instance, on the SRIM/TRIM code indicate that along the track the vacancy concentration is almost constant, but near the end of the track it gradually increases and then drops to zero at or beyond the ion projectile range [4]. The theoretical distribution has a characteristic peak at the end of the track, in the region of the Bragg peak of the curve for electronic stopping power. Nevertheless, experiments have shown a much more complex shape of the distribution, which additionally extends much beyond the projectile range [5].
This was detected by experimental techniques such as TEM or microhardness, and also by positron annihilation techniques. Sharkeev et al. even proposed to call this the long-range effect (LRE) [6]. However, there is no consensus about the existence of this effect. Nevertheless, we argue that the positron annihilation technique, which is extremely sensitive to defects, can be a proper tool to confirm or reject this effect. Positron annihilation (PA) spectroscopy has been used to study implantation-induced defects mainly in semiconductors [7], but also in other materials (see for instance Ref. [8]). Positrons are uniquely sensitive to open volume defects like vacancies and their clusters, voids or dislocations. Additionally, by means of the variable energy positron beam technique it is also possible to obtain the distribution or profile of these defects below the surface [9]. However, this technique allows testing to a depth of only one or two micrometers. Recently, using the conventional positron source, i.e., 22 Na, we applied an experimental technique which allows us to perform tests at depths of a dozen or so micrometers [10]. This is suitable for profiling defects induced by irradiation with MeV ions, as was shown in our recent studies. We have carried out several such studies so far, in which metals like Cu [11], Ag [12], Fe [13] and Ti [14] were irradiated with swift Xe 26+ ions of energy about 167 MeV. In the case of Ag and Fe, defects were observed far beyond the projectile range, but in the case of Cu and Ti, the depth of the damage layer correlated exactly with the range of ions. We did not confirm a total range of damage extending about three orders of magnitude beyond the projectile range, as reported by some authors [6,15,16]; however, we have not ruled out the possibility of such a large depth of damage. It can be assumed that the samples' preparation and the irradiation procedure can generate damage at a larger distance than expected, i.e., beyond R d . Finite element analysis calculations have pointed out that high stress, significantly above the yield strength, which can occur during implantation, may extend plastic deformation beyond the projectile range [13]. In this paper, we intend to report similar studies of the defect distribution not in a metal but in a semimetal, i.e., polycrystalline Bi irradiated with swift Xe 26+ ions. Many authors have studied Bi bulk samples and thin films irradiated with different ions. Dufour et al. irradiated Bi samples with swift Xe and Ta ions at several temperatures between 20 and 300 K and monitored the electrical resistivity as a function of fluence [3]. They have shown that the damage efficiency and ion track radii increase as the irradiation temperature increases. This supports the thermal spike model, i.e., the transfer of the initial kinetic energy to target atoms mainly via electronic excitation. Other studies involving irradiated Bi should also be mentioned. Bi foils placed in a lead target irradiated by 660 MeV protons were applied for the production of Po isotopes. The maximum production of these isotopes was observed at the end of the proton range, where their energy is less than 100 MeV [17]. Several authors studied the sputtering yields from Bi thin films bombarded by swift Cu 4+ and Cu 7+ ions [18] and Ar + ions [19]. They found a satisfactory agreement with Monte Carlo simulations using the SRIM 2008 code.
In this article, we present positron annihilation research on polycrystalline, well-annealed samples of Bi exposed to irradiation by Xe 26+ ions with 167 MeV of different fluences. Conventional positron lifetime measurement will be used to determine the depth profile of defects induced by this ion irradiation. The two proposed methods allow us to reconstruct the actual defect depth profile. The region close to the surface of the irradiated samples will be characterized by the slow positron beam technique. To recognize the type of defect, ab initio calculations of the positron lifetime in vacancy clusters are presented. In general, the purpose of our research is to confirm previous arrangements for the LRE, but in the semimetal Bi. Sample preparation Samples of pure Bi (99.997% purity, purchased from Goodfellow) had the shape of a plate with a height of 2 mm and a diameter of 10 mm and, prior to irradiation, they were annealed in an N 2 gas flow at 200 °C for 1 h and then slowly cooled to room temperature. To clean the surface, all samples were etched in a 25% solution of nitric acid in distilled water and their thickness was reduced by 50 µm. The measurements of the positron lifetime spectrum for such virgin samples revealed only one component, equal to 241 ± 1 ps, which corresponds well with the experimental bulk values reported, i.e., 241 ps [20], 249 ps [21] and 240 ps [22]. This proves that only residual defects remain in the samples, which are practically invisible to positrons. The implantation process of the samples was performed at the IC-100 cyclotron at the Flerov Laboratory of Nuclear Reactions at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia. Xe 26+ heavy ions with an energy of 167 MeV and three doses, namely 10 13 , 5 × 10 13 and 10 14 ions/cm 2 , were applied. The average ion flux was 5 × 10 9 cm −2 s −1 . The temperature of the samples in their volume during irradiation did not exceed 80 °C. Positron measurements The present work used a conventional fast-fast coincidence digital positron lifetime spectrometer APV8702 (made by TechnoAP Co. Ltd.). As the detectors, two photomultipliers H3378-50 (made by Hamamatsu) coupled with BaF 2 scintillators were applied. The time resolution of the spectrometer (FWHM) was about 180 ps. The isotope 22 Na was used as the positron source, encapsulated into a titanium foil 7 µm thick, with an activity of about 27 µCi. Two identically prepared samples were placed on both sides of the positron source. To a good approximation, the depth profile of the implanted positrons, i.e., the distribution of depths at which positrons stop, thermalize and annihilate, is expressed by an exponential decay function characterized by only one parameter, the linear absorption coefficient. For Bi, its value is about 435 cm −1 , so about 63% of positrons emitted from this source are stopped in a layer with a thickness of 23 µm [23]. At closer inspection, closer to the surface, at a depth of less than 10 µm, the profile changes shape and becomes steeper (see e.g. Ref. [24]). The linear absorption coefficient value in this region increases by about a factor of two. This convinces us that, using conventional positron techniques with 22 Na positrons, one can study the near-surface regions at a depth of about 10 µm. Indeed, such studies are possible, as was shown in our previous papers devoted to samples exposed to implantation with swift Xe 26+ ions [13].
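As a quick check of the numbers quoted above, the fraction of positrons stopped within a depth z for a purely exponential implantation profile is F(z) = 1 − exp(−µz). The short sketch below uses the µ value for Bi given in the text and neglects the steeper near-surface behaviour:

# Sketch: fraction of positrons stopped within depth z for an exponential
# implantation profile, F(z) = 1 - exp(-mu * z). Uses the linear absorption
# coefficient for Bi quoted in the text (mu ~ 435 1/cm); the steeper
# near-surface behaviour mentioned above is neglected.
import numpy as np

MU_BI = 435.0  # linear absorption coefficient for 22Na positrons in Bi, 1/cm

def stopped_fraction(depth_um, mu_per_cm=MU_BI):
    z_cm = depth_um * 1e-4
    return 1.0 - np.exp(-mu_per_cm * z_cm)

print(f"{stopped_fraction(23.0):.2f}")  # ~0.63, matching the 63% in 23 um above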
Slow positron beam equipment, called the variable energy positron beam (VEP), suitable for studies of the damage induced by irradiation at a depth of less than 1 µm, was used at JINR in Dubna. The energy of the injected positrons was between 0.1 and 35 keV. The beam intensity was about 10 5 e + /s and its diameter was 5 mm. An HPGe detector with an energy resolution of 1.20 keV at 511 keV was used to monitor an annihilation line shape parameter, the S-parameter, in relation to the positron incident energy. This parameter is defined as the ratio of the area below the central part of the annihilation peak to the total area of this line. The energy interval taken for the calculation is always constant within the whole measurement session. Positron lifetime depth profiles To detect the depth profile of defects resulting from implantation, the sample was etched to remove a layer about 2 µm thick and then the positron lifetime spectrum was measured. A 25% solution of nitric acid in distilled water was chosen as the etching solution. The thickness of the sample was measured before and after etching using a digital micrometer screw with an accuracy of ± 0.5 µm. The procedure was subsequently repeated until the bulk value of the positron lifetime was obtained; this means that the damaged layer had been removed and the virgin region had been reached. In practice, this procedure allows us to scan the sample to any depth, although it should be mentioned that the sample is destroyed after such a measurement. Initially, near the entrance surface, in each spectrum two lifetime components τ 1 and τ 2 were resolved. Their values and intensities, I 1 and I 2 (in percent), will be analyzed. However, it is convenient to define the mean positron lifetime value as follows: τ̄(z) = (τ 1 I 1 + τ 2 I 2 )/100%. (1) Deeper, only a single lifetime component was resolved from the measured spectra, and this value will be treated as the mean positron lifetime too. The mean positron lifetime is sensitive to the presence of open volume defects. When these defects occur in a test sample, its value increases significantly above the bulk value. It is a solid parameter, which is an indicator of the presence of open volume defects [7]. The projectile range, R d , of Xe 26+ ions implanted into Bi was obtained using the SRIM code [4]; its value was about 13.2 µm with a longitudinal straggle of about 1 µm. In Fig. 1a, the shaded rectangle represents the implanted layer (IL), i.e., the region where ions travel and interact with target atoms. In this figure, we also depicted the theoretical depth distribution of vacancies generated by recoils obtained with this code. 10 6 ions were used in the SRIM code simulations, but the obtained results were rescaled to the corresponding dose used in the implantation. It is clearly visible that the number of vacancies increases from the entrance surface and is maximal at the end of the ion tracks. The maximum height is proportional to the dose, as expected. This is clear, because only ions are the source of damage: if more ions are implanted, then more damage, and more defects, are present. In Fig. 1b, the measured depth profile of the mean positron lifetime is shown for all three doses. In the region of the IL, the mean positron lifetime decreases gradually. However, the bulk value of the mean positron lifetime, marked by the hatched rectangle, is obtained above R d . For the highest dose, this was at a depth of about 28 µm, i.e., about two times more than R d .
This clearly indicates that the damage layer induced by the ions extends well beyond the IL. For the lower doses, this effect is also well visible; however, the depth at which the bulk value is obtained is slightly lower, i.e., 18 µm, in both cases. The obtained profiles, Fig. 1b, are the result of the convolution of the positron implantation profile with the actual profile of the mean positron lifetime, which in turn is associated with the profiles of the open volume defect concentration caused by implantation. This is clearly represented by Eq. (4) in Appendix 1, where a detailed discussion is given too. The approximated reconstruction of the actual profile can be carried out using Eq. (7), and the results are depicted in Fig. 2. This profile presents the actual value of the mean positron lifetime in a layer with a thickness of approximately 5 µm. The fraction of positrons w in Eq. (7) which annihilate in the layer adjacent to the positron source is about 23%; this value was calculated using the LYS-1 code [25,26], where the fraction of positrons emitted from 22 Na in a stack of layers with different thicknesses is evaluated. The general features of the profiles of Fig. 1b are also reproduced in Fig. 2, i.e., a cut-off at a certain depth that exceeds the projectile range and a small reduction of the mean positron lifetime with increasing depth. However, the obtained positron lifetime values are about 30-80 ps larger than those of Fig. 1b. This is understandable, as discussed in Appendix 1 and illustrated in the typical examples of profiles in Fig. 5, Appendix 1. It should be noted that such a reconstruction is subject to great uncertainty, mostly caused by the uncertainty in the thickness measurements, ca. 0.5 µm, because it influences the determination of the w value, which could differ by around ± 10%. Additionally, the reconstructed value depends on the inverse of the w value, which is much smaller than unity, Eq. (7). It seems that another, more accurate approach may be proposed, in which the experimental relationships of Fig. 1b are described directly by means of Eq. (4). However, in this case, one should know what kind of dependency should be expected for the actual mean positron lifetime. Looking at the examples of dependencies in Fig. 5 in Appendix 1, it seems that the dependency in Fig. 5a (solid line) best suits the experimental dependencies. Let us assume then that the actual dependency of the mean positron lifetime is described by a step-like function as follows: τ̄ act (z) = a θ(b − z) + c, (2) where θ(z) is the Heaviside step function and a, b and c are the adjustable parameters. From Eq. (4), the following relation can be obtained for the mean positron lifetime convoluted with the positron implantation profile represented by Eq. (5): τ̄(z) = c + a[1 − exp(−µ(b − z))] θ(b − z). (3) The solid lines in Fig. 1b represent the best fits of this relation to the experimental points for each dose. The width of the step, the b parameter, is equal to 15.1 ± 0.7, 16.5 ± 0.5 and 27.6 ± 0.6 µm for doses 10 13 , 5 × 10 13 and 10 14 ions/cm 2 , respectively. The parameter a, representing the step height, is equal to 69.6 ± 3, 71.8 ± 2 and 57.9 ± 1.6 ps for doses 10 13 , 5 × 10 13 and 10 14 ions/cm 2 , respectively. The value of the c parameter was fixed and equal to 240 ps. The linear absorption coefficient µ is also an adjustable parameter; its value obtained from the fits is about 565 ± 23 cm −1 , 730 ± 91 cm −1 and 806 ± 80 cm −1 for doses 10 14 , 5 × 10 13 and 10 13 ions/cm 2 , respectively. However, in the dependencies presented in Fig.
1b, its value was fixed at the average value, i.e., equal to 700 cm −1 , and this does not change χ 2 in the fitting procedure, which was above 0.95. The fact that the above values are higher than the 435 cm −1 referred to earlier confirms the difference in the absorption coefficient observed near the surface, see Refs. [23,24]. Thus, this approach indicates that the actual mean positron lifetime, and hence the defects, can be distributed uniformly at depths beyond R d . Generally, this does not contradict the results obtained from the reconstruction; in Fig. 2, the dashed lines represent the step dependency, Eq. (2), obtained from the fits. Certainly, Eq. (2) can only be considered as an approximation of the actual dependence; another function may also be used, however, without a clear physical interpretation of the adjustable parameters. Nevertheless, the most important conclusion obtained in this approach is that with increasing dose the total thickness of the IL and the extended layer of damage (ELD) increases, i.e., the b parameter, while the step height, the a parameter, does not change. A slight effect was also observed in Ag irradiated with Xe 26+ ions, see Fig. 3 in Refs. [12,13], although it was not the same as in the Bi case in this work. It should be mentioned that this effect was also observed by Lu et al. [5] in nickel, NiCo and NiFe after irradiation with 3 MeV Au ions. The total range of defects induced by the irradiation was equal to ca. 450 nm for a dose of 2 × 10 13 ions/cm 2 and 1100 nm for 5 × 10 15 ions/cm 2 in a Ni single crystal. Therefore, not only the energy of the implanted ions, as shown in Ref. [13], but also the dose has a large impact on the distribution of defects beyond the IL. The presence of defects in the IL is obvious, but their presence outside it is a surprising result. In the literature, the authors suggest several explanations for this phenomenon. According to Lu et al., the ELD can result from the diffusion of point defects from the IL into deeper regions [5]. However, our previous research, and also that of other authors, did not confirm this. Annealing an implanted iron sample resulted in the disappearance of the ELD instead of its expansion [13], but, as presented by other authors, implantation at low temperature also induces an ELD [27]. Sharkeev and Kozlova pointed out that mechanical stress caused by ion implantation may induce the generation of defects beyond the IL [16]. Two reasons justify this explanation. A model using the finite element analysis method qualitatively explains the expansion of defects outside the IL observed by positrons [13]. The implantation process causes tensile stress in the IL, but deeper layers, where ions no longer reach, are consistently squeezed. If the yield strength is exceeded in both layers, then plastic deformation will cause defects in both the IL and the ELD. It is important to note that these may overlap with defects generated in the nuclear collision cascades. This may explain why the shape of the defect distribution as seen by positrons in Figs. 1b and 2 does not correspond to the distribution obtained by the SRIM code, Fig. 1a. In the latter case, the concentration of defects increases with increasing depth, which should be reflected in an increase in the mean positron lifetime, as can be seen in the examples in Fig. 5b or d.
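For illustration, the fitting procedure described above can be reproduced numerically. The sketch below generates synthetic data and assumes that the convolution of the step profile with an exponential implantation profile reduces to the closed form in Eq. (3); it is a schematic reconstruction, not the authors' analysis code:

# Sketch: fit a step-like actual lifetime profile, convolved with an
# exponential positron implantation profile, to measured mean lifetimes.
# Synthetic data; the closed form tau(z) = c + a*(1 - exp(-mu*(b - z)))
# for z < b (and c otherwise) follows from the exponential weighting.
import numpy as np
from scipy.optimize import curve_fit

def model(z_um, a, b_um, mu_per_cm, c=240.0):
    z_cm = z_um * 1e-4
    b_cm = b_um * 1e-4
    step = np.where(z_cm < b_cm, 1.0 - np.exp(-mu_per_cm * (b_cm - z_cm)), 0.0)
    return c + a * step

# Synthetic "measured" profile: a = 58 ps, b = 27.6 um, mu = 700 1/cm.
z = np.linspace(0, 40, 21)
tau_meas = model(z, 58.0, 27.6, 700.0) + np.random.normal(0, 2.0, z.size)

# With a 3-element p0, curve_fit adjusts only a, b and mu; c stays at 240 ps.
popt, pcov = curve_fit(model, z, tau_meas, p0=[60.0, 25.0, 600.0])
print("a = %.1f ps, b = %.1f um, mu = %.0f 1/cm" % tuple(popt))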
It is also puzzling that, close to the surface, the actual mean positron lifetime, and thus the concentration of defects, is greater than deeper in the sample, Fig. 2 (solid lines). Near the surface, the energy of the penetrating ions is high and the stopping power by nuclear collisions is low; thus, one should expect a low number of defects and hence a lower value of the mean positron lifetime. The presence of tensile stress across the IL can explain this. It can be generated by an increase in temperature during implantation and the resulting lattice expansion. This can happen when the sample, for example, is not sufficiently cooled. The presence of slowed-down ions can lead to swelling of the IL, and this can be another source of tensile stresses. In the finite element analysis, we assumed that the tensile stress has a constant value; in fact, a depth distribution is not excluded [13]. Thus, the results of Fig. 1b and Fig. 2 may indicate that another defect generation mechanism works throughout the whole IL volume. To identify the type of defects generated by implantation in Bi, in Fig. 3 we present the values of the individual positron lifetime components and their intensities obtained for all the irradiated samples. It should be noted that the value of τ 1 , closed circles in Fig. 3, is smaller than the bulk value, i.e., 241 ps, up to a depth of approximately 30 µm. At smaller depths, this value decreases, down to about 180 ps at the entrance surface. The fact that the τ 1 value is smaller than the bulk value is well explained by the two-state trapping model (see for instance Ref. [7]). According to this model, positrons can annihilate both in a free state and in a bound state, i.e., trapped in a defect. In other words, there is a bulk region in a sample in which positrons move freely, and defects that trap them. The τ 1 value is the reciprocal of the annihilation rate, the latter being the sum of the annihilation rate in the free state, i.e., in bulk, and the trapping rate into the defect. Because the trapping rate is proportional to the defect concentration, the τ 1 value decreases as the concentration increases. We can conclude that the increase of the τ 1 value shown in Fig. 3 indicates a reduction in defect concentration with increasing depth. This reduction is smaller in the IL than in the ELD, as is well visible for the sample irradiated with the highest dose, Fig. 3 (right). The intensity of this lifetime component, open squares in Fig. 3, also increases with increasing depth, which likewise indicates this trend. The two-state trapping model assumes that only one type of defect is present in the sample; the value of the second lifetime τ 2 , closed circles in Fig. 3, is associated with this defect. In the IL, this value is about 440 ps for the sample with the highest dose; however, for the samples with lower doses its value is slightly smaller, about 420 ps. Such a large value suggests that clusters which consist of more than eight vacancies are present in the IL (see Appendix 2, Fig. 6). A high intensity of the second lifetime component, about 45%, is observed at the entrance surface for all samples, open squares in Fig. 3. Its value decreases rapidly in the IL. In the case of the sample irradiated with the highest dose, this component is also observed in the ELD. Thus, irradiation with fast Xe 26+ ions induces the formation of large vacancy clusters, whose concentration decreases with increasing depth from the entrance surface.
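Within the two-state trapping model invoked above, the trapping rate into the defect, which is proportional to the defect concentration, can be estimated directly from the resolved components. The sketch below uses the standard relation κ = (I 2 /I 1 )(1/τ b − 1/τ 2 ) with the Bi bulk lifetime from the text; the example values are illustrative, not fitted results from this work:

# Sketch: positron trapping rate from the two-state trapping model,
# kappa = (I2/I1) * (1/tau_b - 1/tau_2). kappa is proportional to the
# defect concentration; tau_b is the bulk lifetime of Bi (241 ps).
TAU_BULK_PS = 241.0

def trapping_rate(tau2_ps, i2_percent):
    i2 = i2_percent / 100.0
    i1 = 1.0 - i2
    return (i2 / i1) * (1.0 / TAU_BULK_PS - 1.0 / tau2_ps) * 1e3  # in 1/ns

def mean_lifetime(tau1_ps, tau2_ps, i2_percent):
    i2 = i2_percent / 100.0
    return tau1_ps * (1.0 - i2) + tau2_ps * i2  # Eq. (1) in the text

# Example with values typical of the IL: tau2 ~ 440 ps, I2 ~ 45%.
print("kappa ~ %.2f 1/ns" % trapping_rate(440.0, 45.0))
print("mean lifetime ~ %.0f ps" % mean_lifetime(200.0, 440.0, 45.0))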
The intensity of the second lifetime component can be compared with the results of the positron lifetime measurement on the worn surface of a Bi sample exposed to dry sliding in a friction process [28]. In this case, the intensity was much lower, around 20%, but the positron lifetime value was almost the same, i.e., about 420 ps. Also in this case, the value of the first lifetime component was lower than the bulk value, i.e., about 230 ps. Certainly, the processes of implantation and friction are completely different, but Bi's structural response leads to similar defects. As is known, in the case of friction, the surface and the region below it are exposed to stress. This may suggest that static and dynamic stresses are also present during implantation, but much larger ones, as reported by several authors [16,29,30]. It is difficult to estimate their value from positron results, because only for very small values can one expect a linear relationship between the tensile stress and the mean positron lifetime. At high stress, this relationship becomes saturated. In our last investigation, we suggested that the stress in the IL can be of the order of 1 GPa; however, in Bi this value can be different. VEP results The results obtained using conventional positron lifetime spectroscopy, despite a low spatial resolution of about 1-2 µm, indicate that swift ions strongly damage the near-surface region. To confirm this, the slow positron beam technique was used to study the irradiated samples. The results are shown in Fig. 4. Note that the mean depth of implanted positrons is marked on the upper axis; this depth was calculated using the relationship z (nm) = 8.61 × E 1.372 , where E is the incident positron energy in keV, shown on the bottom axis [31]. The total range of this measurement is from 10 nm to 1.3 µm. The open diamonds represent the dependency of the S-parameter versus the positron incident energy obtained for the reference, well-annealed, non-irradiated sample. It is well known that the S-parameter reflects the annihilation of positrons with low momentum electrons. The latter are present in open volume defects, such as vacancies and their clusters. Therefore, this parameter, similar to the mean positron lifetime, is sensitive to these defects in the measured samples. In Fig. 4, the value of the S-parameter for the virgin sample is lower than for the samples subjected to irradiation. There is also a tendency for the S-parameter to increase in value as the dose increases. This is not clearly visible in Fig. 1b, where the values of the mean positron lifetime at the entrance surface are almost equal for all these samples. We can conclude that, despite the high energy of the implanted Xe 26+ ions and, consequently, their deep penetration, the region close to the surface is strongly damaged. Moreover, the level of this damage depends on the dose. The values of the measured S-parameter presented in Fig. 4 do not change significantly with increasing positron energy; the dependencies are almost flat. This results from the fact that the value of the S-parameter at the surface does not differ significantly from the value in the interior. This makes it impossible to obtain quantitative information, e.g., values of the positron diffusion length. Thus, we remain with a qualitative analysis of the results of Fig. 4. Final remarks Taking into account the results in this article, we can summarize our current research on the LRE in the following way.
The ELD accompanies the IL in Bi; its occurrence is the result of ion implantation. However, an ELD was not observed for Cu and Ti irradiated with Xe 26+ ions [11,14]. This may indicate that the properties of the target material may also affect ELD formation. The dose seems to be an important factor responsible for the presence of defects in the ELD and their depth distribution. Processes occurring in the IL are the source of defects in the ELD. Dynamic or static stress exceeding the yield strength very likely occurs during implantation. This stress must cause additional IL defects that overlap with the defects resulting from ionization and nuclear collisions with the nuclei of atoms of the target material. This can explain why, in all investigated cases, the depth distribution of defects does not correspond to the vacancy depth distribution obtained from the SRIM simulation. According to the reconstruction, the defect concentration seen by positrons is higher near the entrance surface and then gradually decreases, or at least remains constant, throughout the depth. No maximum is observed at the depth at which the Bragg peak should be located. This was observed in all metals tested [10][11][12][13][14] and also this time for the semimetal Bi. The question arises whether the stress is induced by processes at the atomic scale or by global processes, such as an increase in temperature when the ion current hits the target. Thermal lattice expansion can be responsible for tensile stress. Nevertheless, an increase in temperature enhances the migration of defects, their coalescence, rearrangement or annealing, and finally reduces stress. It can be taken into account that stress can also accompany ionization, which causes local melting of the material according to the thermal spike model. This locally expands the crystalline lattice and plastically deforms the region also outside the spike. It can be a dynamic stress, which acts only for a very short time of a few ns or less, but this is enough to leave a lot of defects. Ion implantation is used in doping technology for the production of planar devices, mainly for electronics [32]. The fact that this implantation leads to radiation damage, i.e., defect generation, can counteract the desired effect of doping. In addition, according to this study, defects may also be generated outside the IL; this may affect the substrate or other structures below. In particular, it can affect the electrical resistivity, which is sensitive to the presence of defects at the atomic scale and can be modified due to this effect. The use of a high dose may enhance this effect. This can be important when the implantation takes place in a multilayer system, because implanting ions into the top layer can generate defects and modify properties in the other layers below, if they exist. Conclusions Defects induced by irradiation with swift Xe 26+ ions are present not only in the implantation layer, but also beyond this layer, at depths exceeding the projectile range. For the highest dose this depth is about 28 µm, whereas the projectile range obtained from the SRIM/TRIM simulations is only about 13.2 µm. The total thickness of the damage layer increases with increasing dose. In the damaged layers, the generated defects are clusters consisting of eight or more vacancies, according to ab initio calculations of the positron lifetime in vacancy clusters performed for Bi. A region with defects induced by implantation is also clearly visible near the entrance surface.
This suggests that not only inelastic nuclear collision cascades, but also ionization and excitation of electrons, induce defect generation. We suggested that tensile stress, resulting for instance from thermal expansion and/or swelling, may be responsible for the long range of the defect distribution. Because the range of the induced defects is small compared with the positron implantation range, two approaches for reconstructing the real depth profile of the mean positron lifetime were proposed. In the first approach, the reconstruction was done using only the measured values. Despite its low accuracy, it can be helpful in detecting the actual defect distribution. In the second approach, a proposed depth profile was convolved with the positron implantation profile and fitted to the experimental data. A step-like function, as the proposed profile, is able to describe the experimental results in a satisfactory manner.
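To illustrate the second approach, the sketch below convolves a step-like lifetime depth profile with a positron implantation profile and returns the mean lifetime that would be measured at a given positron energy. The Makhovian profile shape with shape parameter m = 2, the neglect of positron diffusion, and all numerical values are our illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

# Makhovian implantation profile P(z) with shape parameter m = 2 (assumed);
# z0 relates to the mean depth via z_mean = z0 * Gamma(1 + 1/m).
def makhov_profile(z, mean_depth_nm, m=2.0):
    z0 = mean_depth_nm / 0.8862  # Gamma(1.5) = sqrt(pi)/2 ~ 0.8862
    return (m * z ** (m - 1) / z0 ** m) * np.exp(-(z / z0) ** m)

# Proposed step-like depth profile of the mean positron lifetime.
def step_profile(z, tau_damaged_ps, tau_bulk_ps, step_depth_nm):
    return np.where(z < step_depth_nm, tau_damaged_ps, tau_bulk_ps)

# Predicted mean lifetime for a given mean implantation depth,
# ignoring positron diffusion for simplicity.
def predicted_mean_lifetime(mean_depth_nm, tau_damaged_ps, tau_bulk_ps,
                            step_depth_nm):
    z = np.linspace(1e-3, 10.0 * mean_depth_nm, 4000)
    dz = z[1] - z[0]
    p = makhov_profile(z, mean_depth_nm)
    tau = step_profile(z, tau_damaged_ps, tau_bulk_ps, step_depth_nm)
    return np.sum(p * tau) * dz / (np.sum(p) * dz)

# Purely illustrative numbers: an elevated-lifetime damaged layer extending
# to 2 um, probed at a 1 um mean implantation depth.
print(predicted_mean_lifetime(1000.0, 280.0, 240.0, 2000.0))
```

Fitting then amounts to adjusting the step height and step depth until the predicted curve matches the measured lifetime-versus-energy data.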
Examining the impact of a social skills training program on preschoolers' social behaviors: a cluster-randomized controlled trial in child care centers

Background Preschoolers regularly display disruptive behaviors in child care settings because they have not yet developed the social skills necessary to interact prosocially with others. Disruptive behaviors interfere with daily routines and can lead to conflict with peers and educators. We investigated the impact of a social skills training program led by childcare educators on children's social behaviors and tested whether the impact varied according to the child's sex and family socio-economic status. Methods Nineteen public Child Care Centers (CCC, n = 361 children) located in low socio-economic neighborhoods of Montreal, Canada, were randomized into one of two conditions: 1) intervention (n = 10 CCC; 185 children) or 2) wait list control (n = 9 CCC; 176 children). Educators rated children's behaviors (i.e., disruptive and prosocial behaviors) before and after the intervention. Hierarchical linear mixed models were used to account for the nested structure of the data. Results At pre-intervention, no differences in disruptive and prosocial behaviors were observed between the experimental conditions. At post-intervention, we found a significant sex by intervention interaction (β intervention × sex = −1.19, p = 0.04) indicating that girls in the intervention condition exhibited lower levels of disruptive behaviors compared to girls in the control condition (f2 effect size = −0.15). There was no effect of the intervention for boys. Conclusions Girls may benefit more than boys from social skills training offered in the child care context. Studies with larger sample sizes and greater intervention intensity are needed to confirm the results. Trial registration Current clinical trial number is ISRCTN84339956 (retrospectively registered in March 2017). No amendment to initial protocol.

Background

The use of early education and care services has substantially increased over the past four decades in most Western industrialized countries [1]. Early education and care services refer to regular group-based care of children prior to school entry (i.e., under age 5 years in North America) by someone other than the parents. Group-based child care centers (CCC) are one of the most important structured environments for early child socialization. Research suggests that exposure to high-quality child care in preschool settings has a positive effect on children's social and cognitive school preparedness [2][3][4]. Benefits are particularly evident among children raised in poverty or in low socio-economic status (SES) families [4][5][6][7]. Attending an early education and care setting is therefore an important preventive strategy for social adjustment and academic attainment problems [3,8]. During the preschool years, children are more likely to exhibit disruptive behaviors such as aggression, non-compliance with rules and negative affectivity, especially in social settings like CCCs [9]. This is because they are required to interact with many peers and educators for many hours each day and because they have not yet acquired sufficient self-control and the social skills necessary to communicate their needs and negative emotions [10,11].
Emotional and cognitive immaturity in CCC settings may also be compounded by a phenomenon known as social contagion, whereby preschoolers exposed to peers with disruptive behaviors mirror these behaviors or are forced to respond in similar ways in order to adapt to the social context (e.g., pushing, hitting, kicking) [12][13][14]. Children with disruptive behavioral problems tend to disrupt CCC daily routines, leading to conflict with peers and educators [15]. They are also more likely to be excluded from socially and cognitively stimulating activities and consequently to experience academic and social adjustment difficulties later on [15,16]. It is therefore vital to provide child care environments that promote the development of good social relationships with peers and educators as early as possible so that children can enter the formal education system with adequate social and cognitive abilities [17].

Children at higher risk of disruptive behavior problems

During the preschool years, boys and girls exhibit similar levels of disruptive behaviors, but males exhibit more problems after school entry [14,18]. Studies show that early preventive interventions delivered in CCC settings can yield short- and long-term benefits [19][20][21]. However, the question of whether boys and girls respond differently to these interventions is not well documented in the literature. Of five preschool intervention studies that targeted children's socio-emotional development [22], only one reported testing the interaction between the experimental conditions and the children's sex [23]. Girard and colleagues reported that an educator training intervention designed to scaffold peer interactions and use dramatic play reduced aggressive behaviors in boys but not girls [23]. This suggests that males and females may respond differently to disruptive behavioral intervention programs, and further investigation of sex as a putative moderator is therefore warranted. Another potentially important moderator of the effects of disruptive behavioral intervention programs is the SES of the child's family. Children from low-SES families are more likely to exhibit disruptive behaviors from preschool to pre-adolescence when compared with children from higher SES families [14,24]. Consequently, children from low-SES families are more prone to enter school with socio-emotional skills deficits that undermine school adjustment [15]. However, CCC attendance may counteract the influence of a socio-economically deprived family environment on children's socio-emotional skills by providing cognitive stimulation and socialization opportunities in a well-structured environment [25]. Children from low-SES families might therefore be more responsive to interventions delivered in CCCs that target social-emotional skills development.

Interventions on children's social development in the child care context

Behavioral and cognitive management strategies in the context of preschools have shown positive short- and long-term effects on social behaviors, academic readiness and cognitive abilities, especially in the context of Head Start programs [20,[26][27][28][29]. However, outside of the Head Start literature, few studies have investigated the role of child care interventions in children's socio-emotional development [22]. Doing so is important because the resources available to educators may vary between Head Start and community-based CCC settings.
Head Start is a highly structured, government-run preschool program in which teachers have formal training in early childhood education and follow a prescribed curriculum focused on improving school readiness [30]. Community-based child care services, in contrast, may be run by public or private agencies, in which child care educators may not endorse a structured curriculum and may or may not have received formal training. Consequently, educators' capacity to effectively implement social skills programs may vary widely between these contexts. Previous CCC interventions have typically targeted the caregiver-child relationship as their active ingredient and implemented a specific curriculum, i.e., activities around a certain theme [22]. One example is the Preschool Life Skills (PLS) program, which focuses on thirteen skills related to instruction-following, functional communication, delay tolerance, and friendship. Studies show that the PLS can significantly reduce disruptive behaviors in preschool children [21]. Additionally, educators reported that the social skills training was easy to incorporate into their daily routine and improved the social dynamics between children in their groups [21]. In this project, we evaluate a social skills training program similar to the PLS, the "Minipally" program, which focuses on social skills development in a group context. The Minipally program is distinct in that it is oriented less towards communication skills and preparedness for the school environment, and more towards social and emotional regulation skills.

Objectives

Using a cluster-randomized controlled trial, we tested the impact of a social skills training program, delivered by child care educators, on children's disruptive and prosocial behaviors. We also examined whether children's sex and family SES moderated the impact of the program. We expected children exposed to the social skills training program to exhibit lower levels of disruptive behaviors and higher levels of prosocial behaviors at post-intervention compared to children in the control condition. Given the lack of evidence showing that children's sex and family SES moderate the impact of social skills programs in CCC contexts, we did not have hypotheses about these variables.

Study design

Heads of 38 public CCCs of the greater Montreal region were invited to participate in the study, as they met our eligibility criterion for participation: providing services to a minimum of 25% of children from low-income families and being located in low-SES neighborhoods. Neighborhood SES was defined according to official provincial [31] and national criteria [32]. Lower-income families were those entitled to a special government subsidy program providing free child care access for families with an annual family income below CAN$20,000. After an information session, nineteen CCCs agreed to participate in the 8-month study. The CCCs were randomized at a 1:1 ratio to either 1) the intervention condition (receiving the program in year 1) or 2) the wait list control condition (receiving the program in year 2), using a computer-generated randomization sequence. Each CCC included between one and five groups (mean = 2.32), each comprising about eight preschoolers led by an educator. Forty-three groups (n = 361 children) from 19 CCCs were recruited in September 2013 and took part in the study (Fig. 1: Trial Flow Diagram). Written consent to participate in the study was obtained from parents, educators and directors of the CCCs.
The study was approved by the Sainte-Justine Hospital Ethical Research Committee (ref: 3738) and registered on a primary clinical trial registry prior to beginning data analysis. A detailed description of the study protocol describing the rationale behind the Minipally program and its evaluation was published shortly thereafter [33].

Minipally curriculum

The Minipally program is an adaptation of an earlier social skills training program for school-aged children, the Fluppy program, which was developed by our research team and has shown long-term benefits for academic achievement, employment, income, delinquency and substance abuse [34,35]. Over the past 20 years, experienced educational psychologists and psychoeducators have updated the Fluppy program to address the evolution of best practices in social skills training and adapt it to younger age groups, i.e., preschool-aged children. For example, in the school-aged program, children are taught how to deal with several emotions at the same time (e.g., feeling sad and upset) and to talk about their frustrations, while in the preschool version, children are taught to identify and name emotions and to manage their frustrations using age-appropriate stress-releasing techniques. Thus, while preschool-aged children are taught to use breathing techniques using the butterfly analogy, i.e., to breathe and raise their wings (arms) like a butterfly, school-aged children are taught to pause, withdraw from the situation if possible, and take five deep breaths. The Minipally curriculum is delivered by each educator to her own group of children using a puppet, via 16 play sessions over a period of 8 months. The puppet presents itself as a loyal and enthusiastic friend who visits the CCC to model prosocial behaviors and social inclusion by discussing/playing with his friends (other puppets) and with the children. The full curriculum includes the generic components of social skills training programs: introduction to social contact (making and accepting contact with others, making requests); problem solving (identifying the problem, generating solutions); self-regulation (deep breathing to calm down, accepting frustration, learning to share, tolerating frustration); and emotional regulation (identifying and expressing emotions, listening to the other). The skills taught in each workshop are presented in Table S1 in the supplementary material. Specifically, in each workshop, the educator calls on the Minipally puppet, who then directly solicits the participation of each child and models adaptive social skills. Like children, Minipally feels great joys, but also has some difficulties in his contacts with others. The workshops are lively, soliciting the participation and feedback of the children, as Minipally suggests ways for children to do things or asks them for suggestions. During the workshops, Minipally verbalizes a lot; he communicates everything he thinks and does in order to help children remember his actions, words, emotions and attitudes. Minipally is very attentive throughout the workshop, as he congratulates children who exhibit the desired behaviors (e.g., waiting their turn, helping another child) and encourages those who make efforts to practice the new skills presented. In other words, Minipally acts as a safe and friendly figure for children and a playful tool for child care educators to introduce new concepts and rules in a group context.
Child care educators are also invited to reinvest the strategies presented by Minipally in natural settings on a day-to-day basis: they are encouraged to observe children during free play, reinforce positive behaviors as they occur and invite children to refer to what they learned during the last Minipally visit.

Educator training and supervision

The program was implemented as follows. The 16 workshops of the Minipally curriculum were presented to the educators during a 2-day training delivered by trained professionals (i.e., psychoeducators). After the workshops, the psychoeducators remained available by telephone for additional questions during the implementation of the curriculum by the educators. CCC directors were financially compensated for the replacement of the educators while they were trained. Next, the educators delivered the Minipally intervention over 8 months (one session every 2 weeks) and received 12 h of group supervision (i.e., 4 × 3-h sessions in weeks 6, 12, 18 and 24 of the trial). During the supervision sessions, between 8 and 10 educators met with a psychoeducator to discuss the challenges associated with the implementation of the Minipally curriculum.

Outcomes: disruptive and prosocial behaviors assessed by educators

Educators completed the Social Behavior Questionnaire [36] for each child in their group at pre- and post-intervention. Two dimensions of the questionnaire were used: a) Disruptive Behaviors, which included five opposition items (e.g., has been defiant or has refused to comply with an adult request), four impulsivity/hyperactivity items (e.g., has had difficulty waiting for his/her turn in games) and six physical aggression items (three reactive, e.g., has reacted aggressively when teased, and three non-reactive, e.g., has gotten into fights) (Cronbach alpha = 0.86); and b) Prosocial Behaviors (e.g., has helped other children, has shared his toys with others, has comforted a child who was upset; 7 items) (Cronbach alpha = 0.79). Educators rated each item on a 3-point Likert scale according to the frequency of the behavior in the last 2 weeks (0 = never, 1 = sometimes, and 2 = often). For each dimension, we created a cumulative score varying from 0 to 10, with 0 indicating that the child did not exhibit this behavior and 10 indicating that the child often exhibited this behavior.

Covariates and moderators

Family sociodemographic characteristics. Before beginning the intervention, the child's parents completed a questionnaire about their child's CCC attendance details (e.g., number of hours per week, number of months since first attendance), the age and sex of their child, their family composition (e.g., number of siblings), and their socio-demographic background (education and income). A family SES score was then created by combining the maternal education and family income variables (i.e., total income in the household where the child lives most of the time). A low-SES score was assigned if the child lived in a household where the family earned less than CAN$20,000 per year and where the highest level of maternal education was a high school diploma. If the child was living in a household where the family was earning more than CAN$20,000, or where the mother had obtained any training following her high school diploma, the child was assigned to the middle-high SES group.

Statistical analysis

Sample size calculation. Prior to recruitment, we performed an a priori power analysis to determine the sample size needed for the trial.
The mean and standard deviation estimates for preschoolers' disruptive and prosocial behaviors were taken from the Quebec Longitudinal Study of Children's Development [24]. We did not have an estimate of the intra-class correlation (ICC) for CCCs, so we estimated different scenarios using 0.1, 0.15 and 0.20 as the ICC coefficient and potential effect sizes (i.e., 0.3, 0.4 and 0.5) based on the difference in mean levels of disruptive and prosocial behaviors between the intervention and control conditions. We used Heo's statistical procedure for cluster randomized trials with three-level units in our sample size estimation [37]. In other words, our calculation was based on the expected mean number of groups within each child care center, i.e., two groups per center. Using the 0.15 ICC scenario, our power calculation indicated that 19 child care services would allow us to detect a medium-size effect of the intervention on the selected outcomes, with 90% power at a 2-sided significance level of α = 5%. Our model can be stated as Y_ijk = β_0 + δX_i + u_i + u_j(i) + e_ijk, where Y_ijk is the post-intervention response of the i-th study participant in the j-th educator group nested in the k-th child care center, β_0 represents the baseline value of our primary outcome, δX_i is the main effect of the intervention (where X = 0 for the wait list group and X = 1 for the intervention group), and the last three terms are random effects at every level of the trial analysis [37]. This scenario was chosen in accordance with our financial resources and the feasibility of the study [33]. The cluster randomization ensured that children from the control wait list condition were not exposed to the intervention. After completion of data collection, all control CCCs received the social skills training.
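For readers who want to reproduce the flavor of this sample-size reasoning, the sketch below inflates a conventional two-arm sample size by the usual design effect DEFF = 1 + (m − 1) × ICC for clusters of average size m. This is a simplified two-level stand-in, not the Heo three-level procedure used in the paper, so it will not reproduce the paper's numbers; the cluster size and effect size are illustrative assumptions.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Per-arm sample size for an individually randomized trial, then inflated
# by the design effect to account for clustering (simplified two-level view).
def clustered_n_per_arm(effect_size, icc, cluster_size, power=0.90, alpha=0.05):
    n_flat = TTestIndPower().solve_power(effect_size=effect_size,
                                         power=power, alpha=alpha)
    deff = 1.0 + (cluster_size - 1) * icc
    return math.ceil(n_flat * deff)

# Scenarios mirroring the ICC values considered above, with a medium effect
# (0.4) and ~16 children per center (two groups of about eight, assumed).
for icc in (0.10, 0.15, 0.20):
    print(f"ICC = {icc:.2f}: n per arm = {clustered_n_per_arm(0.4, icc, 16)}")
```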
Preliminary analysis

Randomization balance analysis. Despite the use of cluster randomization, there is still the possibility that individual characteristics are unequally distributed between the two experimental conditions. We therefore performed a series of preliminary analyses to compare the intervention and control conditions at baseline on a host of variables that may directly or indirectly impact the effect of the intervention (see Table 1). Only children's age, the number of months of attendance at the CCC and family SES differed between the intervention and control groups. However, these variables were not significantly associated with any of the outcomes and were therefore not included as control variables based on the randomization balance analysis.

Attrition analysis. No CCC withdrew from the study over the course of the intervention. However, 25 children left their CCC between pre- and post-intervention, representing a 7% attrition rate. These children were replaced by 33 newcomers (14 in the control condition and 19 in the intervention condition). If the new children entered their CCC in the first half of the trial (i.e., before week 16 out of 32), they were included in the post-intervention assessment and in further analysis, after first obtaining parental consent. Children who entered the CCC after the 16th week of the intervention were not invited to participate in the study. In attrition analyses, we compared the 25 children who left the study with the 33 children who entered after the pre-intervention assessment (i.e., newcomers) and the 303 children who entered at pre-intervention and remained in the study. More children left the intervention condition than the control condition, but newcomers were equally distributed across both experimental conditions. There were no statistically significant differences between the children enrolled at baseline, those who left the study and those who entered later in terms of sex, age and number of siblings. However, children who entered the intervention group later were more likely to come from middle-high SES families, while children who entered the wait list group were more likely to come from low-SES families. We therefore controlled for family SES in all analyses.

Are there differences between experimental conditions at pre-intervention on children's disruptive and prosocial behaviors?

We used hierarchical linear mixed models to examine differences in disruptive and prosocial behaviors between children in the intervention and control conditions at pre-intervention. No differences were found with respect to pre-intervention disruptive and prosocial behaviors (see Supplementary material Table S2). However, girls in the intervention group exhibited significantly higher levels of prosocial behaviors compared to girls in the control group, and compared to boys from both the intervention group and the control group (β intervention × sex = 1.61, p < 0.01). We therefore controlled for pre-intervention levels of children's prosocial behaviors in post-intervention models, in addition to assessing a potential moderating effect of children's sex. For disruptive behavior, we did not find any significant interaction between the experimental condition and children's sex, and consequently did not control for pre-intervention levels of disruptive behaviors in subsequent models.

Main analysis

Hierarchical linear mixed models were used to estimate the main effects of the intervention on post-intervention disruptive and prosocial behaviors and to estimate whether the impact of the intervention varied according to children's sex and family SES. To account for variation in the number of children across CCCs, we used the restricted maximum likelihood estimator in every model. The analysis was performed in five steps. First, because randomization was performed at the CCC level, we had to account for clustering in our data, and we therefore ran an unconditional model to estimate the intra-class correlation (ICC) between clusters. The ICC is the proportion of variance in the outcome variable that is explained by the grouping structure of the hierarchical model [38]. It reports the amount of variation unexplained by any predictors in the model that can be attributed to the grouping variable, compared to the overall unexplained variance [38]. In the unconditional model, only the intercept was introduced as a fixed effect. Second, we introduced the experimental condition variable as a main fixed predictor, with and without the family SES covariate. Since the CCCs are the unit of randomization in this study, we expected variation between and within clusters and therefore accounted for this by introducing random effects.
In other words, because children's sex and family SES could vary within the same cluster, i.e., children from different SES backgrounds attended the same CCC, we introduced them as fixed and random effects in the adjusted and interaction models. Third, we added an interaction term between our hypothesized moderators (i.e., children's sex and family SES) and the experimental condition variable in the prediction of children's disruptive and prosocial behaviors. Once again, the random effects specified in these models were the intercept, as well as family SES and children's sex. Because of the baseline differences between the experimental conditions found in the preliminary analysis, we also added children's pre-intervention prosocial behavior score as a fixed and random effect when assessing the moderating effect of children's sex on the association between the experimental condition and post-intervention prosocial behavior. Fourth, we performed pairwise comparisons between the intervention and the control group according to children's sex and family SES, based on the hierarchical mixed model mean estimates. Finally, we estimated the effect sizes of the differences in means using the f2 fixed effect size estimation [39] for hierarchical linear mixed models recommended by Lorah (2018) [40]. The f2 effect size statistic represents the proportion of variance explained by the given fixed effects relative to the unexplained proportion of outcome variance. Effects of 0.02, 0.15 and 0.35 are considered small, medium and large, respectively [41].

Descriptive statistics

Participants. Children (n = 361) were distributed across 19 different CCCs. Table 1 shows that most children attended their CCC for 30 to 40 h per week and that the number of boys and girls was roughly equal in the intervention and control groups. Table 2 shows children's raw scores for disruptive and prosocial behaviors at pre- and post-intervention according to the experimental conditions.

Implementation of Minipally

All educators were female, and most (85%) had professional early education training. All educators in the intervention group received the two-day Minipally training. Implementation was monitored throughout the year via four half-day supervision sessions. At the last supervision session (week 24 out of 32 in the trial), all educators in the intervention group had implemented 12 of the 16 Minipally workshops. Thereafter, the exact number of workshops conducted by every educator was not monitored.

Did the intervention have an impact on children's social skills?

Disruptive behaviors. At post-intervention, the unconditional model showed that about 9% of the total variation in post-intervention disruptive behaviors was accounted for by differences between CCCs. When entering the experimental condition variable as a fixed effect, while adjusting for children's family SES (β = 0.27, p = 0.52), we found no main effect of the intervention on children's post-intervention disruptive behaviors (β = 0.39, p = 0.34). This suggests that the mean level of post-intervention disruptive behaviors did not differ between the intervention and the control group. The ICC for this model dropped to 0.05, indicating that we accounted for a larger portion of the variation among the different CCCs and that less variation remained in the random intercepts of our model. Coefficients for the post-intervention models and their associated ICCs are presented in Table 3.
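A minimal sketch of this type of model, fitted on synthetic data with statsmodels, is shown below. All variable names and data-generating numbers are our assumptions, and the educator-group level is collapsed into a single CCC random intercept for brevity; the paper's full model also nests educator groups within centers.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data loosely shaped like the trial (assumed column names).
rng = np.random.default_rng(0)
n = 360
df = pd.DataFrame({
    "ccc": rng.integers(0, 19, n),       # child care center (cluster)
    "condition": rng.integers(0, 2, n),  # 0 = control, 1 = intervention
    "sex": rng.integers(0, 2, n),        # 0 = boy, 1 = girl
    "ses": rng.integers(0, 2, n),        # 0 = low, 1 = middle-high
})
df["disruptive_post"] = (4 - 1.2 * df.condition * df.sex
                         + rng.normal(0, 2, n)).clip(0, 10)

# Random intercept for CCC; the interaction term tests the
# sex-by-condition moderation, as in the analysis described above.
model = smf.mixedlm("disruptive_post ~ condition * sex + ses",
                    data=df, groups="ccc")
result = model.fit(reml=True)
print(result.summary())

# ICC from an unconditional (intercept-only) model:
null = smf.mixedlm("disruptive_post ~ 1", data=df, groups="ccc").fit(reml=True)
icc = null.cov_re.iloc[0, 0] / (null.cov_re.iloc[0, 0] + null.scale)
print("ICC:", icc)
```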
Did children's sex or family socio-economic status moderate the impact of the intervention?

We found a significant interaction between the experimental conditions and children's sex (β = −1.19, p = 0.04, Fig. 2a), indicating lower levels of post-intervention disruptive behaviors in the intervention group compared to the control group for girls (F = 4.19, df = 43.08, p = 0.04; f2 effect size = −0.15). For boys, there was no difference in post-intervention disruptive behaviors between the intervention group and the control group (F = 0.37, df = 49.20, p = 0.55; f2 effect size = 0.04). We also investigated the potential moderating effect of family SES, but no significant interaction was found (β = 0.17, p = 0.86; f2 effect size for middle-high SES children < 0.01, f2 effect size for low-SES children < 0.01).

Prosocial behaviors. For prosocial behaviors, there was no main effect of the intervention and no moderating effect of children's sex or family SES. Coefficients and ICCs for all tested models are presented in Table 3. Figure 2b shows the prosocial scores according to the experimental conditions and children's sex.

Sensitivity analysis

We performed the same set of analyses with a restricted sample of children who had both pre- and post-intervention assessments (i.e., newcomers were excluded from the sensitivity analysis). We found the same pattern of results, namely that the intervention led to a decrease in disruptive behaviors among girls only and had no impact on prosocial behaviors for girls or boys.

Discussion

This study used a cluster-randomized controlled trial design to test the impact of a social skills training program on children's social behaviors in Child Care Centers in low-SES neighborhoods. Using hierarchical linear mixed models, we found that the sex of the child moderated the impact of the social skills training program, which reduced the level of disruptive behaviors for girls but not for boys. The failure to find an effect for prosocial behaviors may be due to the high levels of prosocial behaviors in both experimental conditions at pre-intervention, leaving little room for improvement (i.e., ceiling effects). Furthermore, we found no evidence that the SES of the child's family moderated the impact of the intervention.

Examination of the evaluated intervention

With respect to disruptive behaviors, our results are consistent with earlier findings from a similar social skills intervention developed by our research team for school-aged children, the Fluppy program [42], which found that disruptive behaviors at the end of the 8-month intervention were reduced for girls but not for boys [42]. One explanation for the observed sex differences is the highly verbal nature of these interventions. Sex differences in children's verbal abilities are well-documented, particularly early in development [43,44], so it is possible that the content and delivery of the interventions were not sufficiently accessible to boys. Indeed, the Minipally and Fluppy programs are specifically designed to improve social skills that frequently depend on verbal skills, such as the ability to articulate questions or describe emotions. Thus, while girls might be receptive to educator-led workshops that focus on enhancing social skills and reducing disruptive behaviors, this might not be the best approach for boys, who might instead benefit from educator-led dramatic play sessions, stronger educator-child relationships, and supervised peer play to scaffold social competences [23,45,46].
More broadly, our results corroborate the hypothesis that children's sex is an important moderator of the impact of a social skills training program during early childhood and possibly later. A further consideration for future studies is that adding a parenting component to the Minipally program could increase its impact. According to a recent meta-analysis, interventions with a parent component, either alone or in combination with other components, are more likely to benefit children who exhibit high levels of behavioral problems [47]. Future studies should therefore examine the unique and combined impact of child care-based and parenting-based interventions on children's social behaviors. Finally, previous work shows that social skills training programs for childhood disruptive behaviors are effective only if they are of moderate-to-high intensity [47]. It is possible that our intervention lacked the intensity necessary to significantly increase children's prosocial behaviors and reduce disruptive behaviors in boys. The educators in our trial conducted at least 12 out of 16 workshops in the Minipally child curriculum, but their reinvestment activities (i.e., follow-up activities throughout the week) were not monitored. A higher-intensity intervention with systematic reinvestment activities would arguably have had a greater impact on children's social skills, especially for those exposed to risk factors in their home environment.

Strengths and limitations

The strengths of this study are its cluster-randomized experimental design, low levels of cluster (0%) and individual (7%) attrition, and the use of hierarchical linear mixed models, which accounted for the nested structure of the randomization. The study had good ecological validity: it was implemented in community-based CCCs by educators who, apart from receiving a 2-day training and 12 h of supervision for the social skills program, had only a two-year professional degree (after high school) in early childhood and child care education. The study has several limitations. First, we underestimated the ICC of the data in our sample size calculation, which, when combined with our modest sample size, limited our capacity to detect small effects. Future studies should replicate the intervention using larger samples and test a putative interaction with children's sex and family SES, as well as other potential moderators, such as children's baseline levels of prosocial and disruptive behaviors. Second, the children's behavioral questionnaires were completed by the educator who also delivered the Minipally program. Child care educators are a reliable source of information on disruptive behaviors because of their established ability to distinguish between normative and atypical behaviors [48,49]. However, since the educators were involved in both the implementation of the intervention and the pre- and post-intervention behavioral assessments, this may have introduced a bias. For instance, due to their proximity to the project, educators in the intervention group may have noticed smaller improvements in children's behaviors than educators in the control group. Nevertheless, it is unlikely that such a bias would explain the different impact of the intervention on disruptive behaviors between boys and girls.
The decision to rely on the CCC educators who participated in the study was based on extensive literature showing that there is only weak to moderate agreement in social skills evaluations between raters [50]. Social skills are highly context-specific, and the skills necessary to function at home are considerably different from those required in the group contexts typical of CCC settings [50]. Future studies seeking to replicate our intervention should consider evaluating children's social competences based on assessments performed by independent raters. The use of objective tests, for example the "The white crayon does not work …" task by Ostrov et al. [51], in which children are asked to participate in a group drawing exercise, should be considered in future studies to examine the impact of a social skills training program on children's social behaviors. Also, a follow-up assessment at school entry with kindergarten teachers who have not been involved in the project may yield more reliable results. Finally, we did not track the number of workshops implemented by the child care educators; we only know that all educators performed 12 or more of the 16 workshops during the implementation year. Future studies should include a comprehensive implementation and content validity evaluation.

Conclusions

CCCs provide one of the earliest opportunities to equip children with social skills that will benefit them for the rest of their lives [52]. This study adds to a small but growing body of literature suggesting there may be important sex differences in children's responsiveness to early psychosocial interventions. Preschool programs that provide social skills training with higher intensity, a defined educative curriculum, and parent engagement may help reduce behavior problems and enhance social skills, with long-term benefits to individuals and society.

Additional file 1: Table S1. Skills taught in the Minipally program, by workshop. Additional file 2: Table S2. Linear mixed models linking intervention conditions to disruptive and prosocial behaviors at pre-intervention.
Safety of delivering bronchial thermoplasty in two treatment sessions

Background Bronchial thermoplasty (BT) is a novel endoscopic therapy for severe asthma. Traditionally it is performed in three separate treatment sessions, targeting different portions of the lung, and each requires an anaesthetic and hospital admission. Compression of treatment into two sessions would present a more convenient alternative for patients. In this prospective observational study, the safety of compressing BT into two treatment sessions was compared with the traditional three-treatment approach. Methods Sixteen patients meeting ERS/ATS criteria for severe asthma consented to participate in an accelerated treatment schedule (ABT), which treated the whole left lung followed by the right lung four weeks later. The short-term outcomes of these patients were compared with 37 patients treated with conventional BT scheduling (CBT). The outcome measures used to assess safety were (1) the requirement to remain in hospital beyond the electively planned 24-h admission and (2) the need for re-admission for any cause within 30 days of treatment. Results The total number of radiofrequency activations delivered in the ABT group was similar to CBT (187 ± 21 vs 176 ± 40, p = 0.326). With ABT, 11 in 31 admissions (37.9%) required prolonged admission due to wheezing, compared to 5.4% with CBT (p = 0.0025). The mean hospital length of stay with ABT was 1.8 ± 1.3 days, compared to 1.1 ± 0.4 days (p < 0.001). ICU monitoring was required on 5 occasions with ABT (16.1%), compared to 0.9% with CBT (p = 0.002). Subgroup analysis demonstrated that females were more likely to require prolonged admission (OR 11.6, p = 0.0025). The 30-day hospital readmission rate was similar for both groups (6.4% vs 5.4%, p = 0.67). All patients made a complete recovery after treatment, with similar outcomes at the 6-month follow-up reassessment. Conclusion This study demonstrates that ABT results in greater short-term deterioration in lung function, associated with a greater risk of prolonged hospital and ICU stay, predominantly affecting females. Therefore, in females, these risks need to be balanced against the convenience of fewer treatment sessions. In males, it may be an advantage to compress treatment.

Background

Bronchial thermoplasty (BT) is a bronchoscopic, non-pharmacological intervention for the management of asthma. It offers an alternative therapeutic option for those with severe asthma, defined by the Global Initiative for Asthma (GINA) as those with persistent symptoms requiring step 5 of controller treatment [1]. BT involves the delivery of radiofrequency energy to distal airways of 2-10 mm in diameter, using a catheter electrode introduced by a flexible bronchoscope [2]. The goal of treatment is to induce atrophy in the airway smooth muscle layer, which is known to be hypertrophied in severe asthma [3,4]. Treatment benefits have been established in three randomised controlled trials and three real-world registries, which have each demonstrated improved symptom control and quality of life scores, and reduced exacerbation frequency [5][6][7][8][9][10]. The major adverse effect of bronchial thermoplasty is short-term aggravation of asthma in the immediate post-operative period [2]. Following BT, an average fall in post-bronchodilator Forced Expiratory Volume in 1 s (FEV1) of 9% has been reported [11]. This is maximal 24 h post procedure, after which there is steady recovery.
The degree of fall in FEV1 is proportional to the quantum of radiofrequency treatment applied [11]. Historically, BT has been divided into three treatment sessions, separated by 3-4 weeks, each treating different portions of the lung [2]. However, this treatment plan does mean that patients require three separate hospital admissions with three separate anaesthetics to complete their treatment. This adds to the cost and the inconvenience of the treatment, particularly when patients live remotely from the treatment centre. If patients could be safely treated in two sessions rather than three, this would be a more attractive proposition for patients, doctors, hospitals and health funds. Therefore, the aim of this study was to investigate whether BT could be safely compressed into two treatment sessions.

Study subjects

This was a single-centre, prospective, observational study conducted at a tertiary referral centre. Patients were evaluated for BT at the request of their treating specialist respiratory physician, having already been evaluated to ensure that (1) comorbidities had been addressed, (2) biological treatments had been instigated where indicated, and (3) adherence with optimized asthma therapy, including high-dose inhaled corticosteroids and dual long-acting bronchodilator therapy, had been demonstrated. All patients were required to meet the European Respiratory Society/American Thoracic Society (ERS/ATS) definition of severe asthma [12]. During the 18 months from January 2019 to June 2020, 16 patients underwent BT using the accelerated two-treatment schedule. The outcomes of these patients were compared with those of the 37 patients in whom BT had been completed prior to January 2019, when conventional BT scheduling using three treatments had been used. All patients undergoing BT at our centre were included.

Procedure

Patients treated in two sessions had the left upper and lower lobes treated in the first session, and the right upper and lower lobes treated in the second session. As is standard practice, the right middle lobe was not treated. All patients received oral steroid premedication of 50 mg prednisolone/day for three days prior to the procedure and three days post procedure, as with conventional BT. Patients also received inhaled bronchodilators immediately prior to the procedure, and intraoperative intravenous dexamethasone and glycopyrrolate. They were routinely observed in hospital overnight following treatment, with expected discharge the next morning. The number of radiofrequency activations generated at each treatment session was recorded.

Outcomes

In this study, the primary outcomes related to adverse events and were defined by (1) admission to hospital beyond the planned 24 h and/or (2) readmission to any hospital for any cause in the 30 days following any treatment session. These events were established by medical record review and by direct patient enquiry. The frequency of adverse events was compared between the accelerated treatment group and the cohort of patients who had received conventional BT. In addition, a calibrated portable spirometer (Jaeger Vyntus Pneumo, Carefusion, Germany) was used to record the post-bronchodilator FEV1 immediately preoperatively in theatre, and then again in the ward 24 h later, in order to quantify the fall in FEV1 post procedure. These data were available for all 16 patients treated with the accelerated treatment plan, but only for 20 of the 37 patients treated with standard BT.
Secondary outcome measurements related to the therapeutic effects of BT. All patients were evaluated at baseline, 4 weeks prior to the initial BT procedure, by age, gender, BMI, medication history, exacerbation frequency, spirometry and the Asthma Control Questionnaire, 5-item version [13]. Permission to use this instrument had been specifically granted to us by its author, Elizabeth Juniper. Exacerbations were defined by the need for an increase in oral corticosteroids for 3 days. Evaluations were repeated 6 months after the completion of all BT procedures. Spirometry was undertaken in an accredited laboratory by experienced respiratory scientists, to ERS/ATS standards, using the Jaeger Vyntus Body (Carefusion, Germany) calibrated on the day of patient testing [14]. Predicted values were drawn from the Global Lung Initiative [15].

Ethics

This study was prospectively approved by the Peninsula Health Human Research Ethics Committee. Patients were enrolled only after informed consent had been obtained.

Statistical analysis

For normally distributed data, results are presented as mean ± standard deviation, and comparisons are made with a t-test. Where sample sizes are small, data are presented as median (interquartile range) and comparisons are made with a Wilcoxon signed rank test for paired data and a Mann-Whitney U test for unpaired data. A Fisher's exact test is used to compare categorical data. Statistical significance was taken at p < 0.05 for a two-tailed test.
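The sketch below shows how these comparisons map onto SciPy, using the prolonged-admission counts reported later in the Results (11 of 31 accelerated admissions vs 6 of 111 conventional admissions); the continuous values are placeholders for illustration, not study data.

```python
from scipy import stats

# Placeholder continuous data (e.g. length of stay, days) for illustration.
accelerated = [1.0, 1.5, 2.0, 3.5, 1.0, 4.0]
conventional = [1.0, 1.0, 1.5, 1.0, 2.0, 1.0]

print(stats.ttest_ind(accelerated, conventional))     # normally distributed data
print(stats.mannwhitneyu(accelerated, conventional))  # small unpaired samples
print(stats.wilcoxon(accelerated, conventional))      # small paired samples

# Fisher's exact test on categorical data: prolonged stay vs not,
# 11/31 admissions (accelerated) against 6/111 admissions (conventional).
odds_ratio, p = stats.fisher_exact([[11, 31 - 11], [6, 111 - 6]])
print(odds_ratio, p)
```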
Baseline characteristics

The clinical features of both sets of patients are summarized in Table 1. This was a group of very severe asthmatics, with severely impaired lung function and high medication and symptom burdens. Time-based differences were evident between the two groups, with the more recent BT patients being more severely affected, as demonstrated by higher ACQ scores, higher maintenance doses of oral steroids, and more frequent use of reliever medication. This was expected, because many of the conventionally treated patients underwent BT prior to the availability of anti-interleukin-5 monoclonal antibody therapy in Australia (January 2017). As a result, the patients undergoing BT in the latter years, and by the accelerated treatment approach, were more likely to already be receiving biological therapy and yet, despite this, still severely symptomatic. (Patients who had done well with biological therapy would not have needed BT.)

Treatment

In the accelerated treatment group, 15 patients completed both treatments, whilst one patient declined further treatment following the first treatment session. This particular patient was average for the group in terms of baseline FEV1 % predicted, ACQ, prednisolone dose and requirement for bronchodilators. However, they were of a particularly anxious predisposition, which the authors believe to be the main reason treatment was not continued. The 37 patients treated with conventional BT completed all 111 treatments. The total number of radiofrequency activations delivered was similar in both patient groups: 187 ± 21 in the accelerated treatment group, compared to 176 ± 40 with the conventional treatment approach (p = 0.326). Thus, both groups received a similar quantum of treatment, independent of the scheduling. In practice, this meant that when the whole left lung was treated in the accelerated treatment group, 100 ± 17 activations were administered in one session, by comparison with 49 ± 14 activations when just the left lower lobe was treated in the conventional approach (p < 0.001). Similarly, on the right side, 89 ± 21 activations were delivered in one session to the right lung in the accelerated protocol, compared with 48 ± 17 activations to the right lower lobe with the conventional approach (p < 0.001). In our centre, we allow 45 min of theatre time for each booked BT case. Although the operating time was 10 min longer when a whole lung was treated by BT, every case was completed within the allowed usual theatre time and without altering subsequent theatre scheduling. The mean fall in post-bronchodilator FEV1 24 h after BT was 403 ± 352 ml, or 22.6 ± 16.4%, with accelerated treatment. This was significantly greater than when either the right or left lower lobe was treated with conventional scheduling, where the fall in FEV1 after treatment was 114 ± 243 ml, or 5.0 ± 15.0% (p = 0.001). However, the fall in FEV1 after conventional upper lobe treatment was not statistically different from treating either the whole left or right lung (p = 0.203) (Table 2).

Adverse events

Patients remaining in hospital longer than 24 h after the procedure were deemed to have experienced an adverse event; with standard treatment, this occurred in 6 of 111 admissions (5.4%). By comparison, with accelerated treatment, there were 11 occurrences in 31 admissions (37.9%) in which patients remained in hospital after 24 h (p < 0.001). The medical notes recorded wheezing as the reason for these prolonged admissions. Two patients (6.4%) in the accelerated treatment group were readmitted within 30 days of a BT procedure, one for pneumonia and one for non-ischaemic chest pain. Both made a full recovery. This readmission rate was similar in the standard treatment group (5.4%, p = 0.670). In the accelerated treatment group, a subgroup analysis was conducted to compare the baseline characteristics of those patients who remained in hospital longer than 24 h with those who were discharged within 24 h as originally planned. These results are shown in Table 3. Across most parameters, there were no distinguishing differences. However, there appeared to be a gender difference. For males, there was a 1 in 14 admissions chance (7.1%) of remaining in hospital after 24 h with accelerated treatment, whilst for females this chance was 6 in 17 admissions (35.2%) (p = 0.090). Interestingly, in our standard treatment group, all 6 instances of prolonged admission were also in females. Hence, for the pooled group of 53 patients undergoing 142 BT procedures, there was one male admission (1.4%) and 12 female admissions (16.2%) longer than 24 h, resulting in an odds ratio for prolonged hospital stay of 11.6 for females relative to males (p = 0.0025). To explore why there was a higher adverse event rate in females, two further comparisons were made. The baseline characteristics of the 28 females were compared with those of the 25 males (Table 4). Overall, both groups of patients were found to be very similar, but, as expected, males had larger lungs. Therefore, the fall in lung capacity following BT treatment was compared by gender for the pooled group of 36 patients (19 female, 17 male) in whom FEV1 had been measured routinely 24 h post procedure. This comparison is shown in Table 5.
The data suggest that the percentage fall in FEV1 post BT is significantly less in males, who are protected by higher baseline lung volumes. The accelerated treatment group were more obese than their conventional comparators (Table 1). To ensure that the higher adverse event rate was not an effect of obesity, the baseline BMI of the 10 patients who stayed in hospital longer than 24 h (33.9 ± 2.0) was compared with that of the 43 patients who were discharged within 24 h (30.2 ± 7.3), and this difference was not statistically significant (p = 0.14).

Outcomes 6 months post procedure

The clinical responses to treatment were measured 6 months after the completion of BT, and these results are presented in Table 6. Substantive, clinically meaningful improvements were observed in ACQ, exacerbation frequency, and medication usage. A trend towards improvement in FEV1 was observed. The magnitude of the changes was similar, and not statistically different, in the two treatment groups. For example, the mean improvement in ACQ was 1.2 ± 1.2 in the accelerated treatment group, compared to 1.4 ± 1.3 with conventional treatment (p = 0.56), and the median (Q1, Q3) improvement in short-acting beta-2 agonist use was −7.0 (−12, −3.5) puffs/day in the accelerated treatment group compared to −4.0 (−9.0) puffs/day with conventional treatment (p = 0.19).

Discussion

This is the first study to examine the delivery of BT in two treatment sessions and to compare it with conventional treatment in three sessions. Whilst both groups of patients experienced favourable and comparable outcomes at six months, a higher prevalence of prolonged admission was observed in the accelerated group immediately post-procedure (37.9% vs 5.4%). The implications of this are explored below. This was a cohort of patients with very severe asthma. They had a high symptom burden despite biological therapies and oral steroids, and the mean FEV1 of 45.9% predicted was considerably lower than that of participants in the AIR and AIR2 trials (whose participants had an FEV1 of > 60% predicted) and the RISA trial (> 50% predicted) [16][17][18]. It is common for asthma symptoms to initially worsen as the result of acute airway inflammation and oedema from BT [19]. Those in the accelerated group had a larger number of airways treated in each session, and previous studies have demonstrated a relationship between a higher number of activations delivered and a greater decline in FEV1 [11]. This finding is supported by the greater decline in FEV1 observed 24 h post-procedure with the accelerated treatment in this study. Therefore, it is not surprising that these patients took longer to recover. Nevertheless, the effects seen in the accelerated group were transient. The mean hospital stay was 1.8 ± 1.3 days, and, following hospital discharge, the readmission rate was low and similar between the two treatment groups. The subgroup analysis suggests that the risk of prolonged hospital admission following BT pertained predominantly to females, and this occurred irrespective of the treatment regimen delivered. This has not been previously noted. The three randomized controlled trials of BT did not present a breakdown of adverse effects by gender. The odds ratio of 11.6 for prolonged hospital stay in females is so strong in this study that it seems unlikely to have occurred by chance. Table 4 demonstrates the similarity in clinical characteristics between the females and males, save for the expected anthropometric difference of a lower absolute FEV1 in females.
We postulate that this may be a factor in the higher post-operative adverse event rate in females. The volume change in FEV1 after BT appears to be at least as great in females as in males, but because females start from a lower baseline, the impact of the deterioration becomes substantially greater (Table 5). There are attractions to performing BT in fewer sessions. Hospital admission and post-operative recovery are disruptive to a patient's life. Patients in this study were generally enthusiastic about the concept of compressing treatment into two sessions, even when informed of a potential risk of a longer hospital stay. This became a particularly strong advantage for patients who lived interstate or remotely from the treatment centre. Some patients also felt it was a significant advantage to have two anaesthetics rather than three. This study demonstrates that there is obviously a trade-off between the convenience of two treatments and the inconvenience of greater short-term post-operative deterioration. There are also differing economic implications depending on the drivers in health services. In a country where access to theatre is a major constraint on the delivery of surgery, such as in a publicly funded healthcare system, the ability to perform BT in two treatments is a significant step forward and would reduce surgical wait times. On the other hand, in areas where the high cost of an overnight hospital bed is the predominant driver in healthcare delivery, such as the United States of America, bronchoscopists may do better to continue to offer traditional three-session BT on a day-case basis. In such a country, an alternative approach to improving patient convenience may be to offer single-treatment, limited BT targeted by pre-procedural hyperpolarized magnetic resonance imaging (MRI) [20]. This technique shows great promise, but is currently severely limited by the lack of availability of hyperpolarized MRI in most centres. As this was a feasibility study, the number of patients studied here was deliberately small, as we were concentrating on establishing patient safety in the first instance. The technique would need repeating on a larger scale before firm recommendations could be made. We must acknowledge that this study was non-randomized and that time-dependent differences between the two patient groups are evident. However, given that the accelerated treatment group were a more severe group of asthmatics, this would serve to exaggerate any differences in safety between the two techniques rather than provide false reassurance. In that sense, this is unlikely to be a limitation, and the results presented are more akin to a worst-case scenario. This study shows that it is possible to compress BT into two treatments, and it appears particularly safe to do so in males. However, there is a penalty to pay for taking this approach, namely a greater fall in FEV1 in the immediate postoperative period. Therefore, at our centre, we are not offering this approach to patients whose baseline FEV1 is less than 50% predicted, until further data become available. Improving and refining treatment procedures to minimise patient discomfort and maximise efficiency is a natural development in the evolution of any medical procedure. Further research on a larger scale is required to confirm our results, but accelerating the delivery of BT appears to be safe in some patients without compromising clinical outcomes.
Online Adaptive Learning Solution of Multi-agent Differential Graphical Games

Introduction

Distributed networks have received much attention in recent years because of their flexibility and computational performance. The ability to coordinate agents is important in many real-world tasks where it is necessary for agents to exchange information with each other. Synchronization behavior among agents is found in flocking of birds, schooling of fish, and other natural systems. Work has been done to develop cooperative control methods for consensus and synchronization (Fax and Murray, 2004; Jadbabaie, Lin and Morse, 2003; Olfati-Saber, and Murray, 2004; Qu, 2009; Ren, Beard, and Atkins, 2005; Ren, and Beard, 2005; Ren, and Beard, 2008; Tsitsiklis, 1984). See (Olfati-Saber, Fax, and Murray, 2007; Ren, Beard, and Atkins, 2005) for surveys. Leaderless consensus results in all nodes converging to a common value that cannot generally be controlled. We call this the cooperative regulator problem. On the other hand, the problem of cooperative tracking requires that all nodes synchronize to a leader or control node (Hong, Hu, and Gao, 2006; Li, Wang, and Chen, 2004; Ren, Moore, and Chen, 2007; Wang, and Chen, 2002). This has been called pinning control or control with a virtual leader. Consensus has been studied for systems on communication graphs with fixed or varying topologies and communication delays.

Game theory provides an ideal environment in which to study multi-player decision and control problems, and offers a wide range of challenging and engaging problems. Game theory (Tijs, 2003) has been successful in modeling strategic behavior, where the outcome for each player depends on the actions of himself and all the other players. Every player chooses a control to minimize, independently from the others, his own performance objective. Multi-player cooperative games rely on solving coupled Hamilton-Jacobi (HJ) equations, which in the linear quadratic case reduce to the coupled algebraic Riccati equations (Basar, and Olsder, 1999; Freiling, Jank, and Abou-Kandil, 2002; Gajic, and Li, 1988). Solution methods are generally offline and generate fixed control policies that are then implemented in online controllers in real time. These coupled equations are difficult to solve.

Synchronization and node error dynamics

2.1 Graphs

Consider a graph $\mathcal{G} = (V, \mathcal{E})$ with a nonempty finite set of $N$ nodes $V = \{v_1, \dots, v_N\}$ and a set of edges or arcs $\mathcal{E} \subseteq V \times V$. We assume the graph is simple, e.g. no repeated edges and no self-loops, $(v_i, v_i) \notin \mathcal{E}$. Denote the connectivity matrix as $E = [e_{ij}]$, with $e_{ij} > 0$ if $(v_j, v_i) \in \mathcal{E}$ and $e_{ij} = 0$ otherwise. The set of neighbors of node $i$ is $N_i = \{j : (v_j, v_i) \in \mathcal{E}\}$. Define the in-degree matrix $D = \mathrm{diag}(d_i)$, with $d_i = \sum_{j \in N_i} e_{ij}$ the weighted in-degree of node $i$ (i.e. the $i$-th row sum of $E$). Define the graph Laplacian matrix as $L = D - E$, which has all row sums equal to zero.

A directed path is a sequence of nodes $v_0, v_1, \dots, v_r$ such that $(v_i, v_{i+1}) \in \mathcal{E}$, $i \in \{0, 1, \dots, r-1\}$. A directed graph is strongly connected if there is a directed path from $v_i$ to $v_j$ for all distinct nodes $v_i, v_j \in V$. A (directed) tree is a connected digraph where every node except one, called the root, has in-degree equal to one. A graph is said to have a spanning tree if a subset of the edges forms a directed tree. A strongly connected digraph contains a spanning tree.

General directed graphs with fixed topology are considered in this chapter.
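To make the graph quantities above concrete, here is a minimal sketch (Python with NumPy, an assumed environment; the example digraph is invented for illustration and is not from the chapter) that builds the connectivity, in-degree, and Laplacian matrices:

```python
import numpy as np

# Connectivity matrix E = [e_ij]: e_ij > 0 iff there is an edge (v_j, v_i),
# i.e. node i receives information from node j. Three-node example digraph.
E = np.array([[0.0, 1.0, 0.0],   # node 1 listens to node 2
              [0.0, 0.0, 1.0],   # node 2 listens to node 3
              [1.0, 0.0, 0.0]])  # node 3 listens to node 1

d = E.sum(axis=1)        # weighted in-degrees d_i (row sums of E)
D = np.diag(d)           # in-degree matrix
L = D - E                # graph Laplacian; every row of L sums to zero

assert np.allclose(L.sum(axis=1), 0.0)
print(L)
```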
Synchronization and node error dynamics

Consider the $N$ systems or agents distributed on communication graph $\mathcal{G}$ with node dynamics

$$\dot{x}_i = A x_i + B_i u_i \qquad (1)$$

where $x_i(t) \in \mathbb{R}^n$ is the state of node $i$ and $u_i(t) \in \mathbb{R}^{m_i}$ its control input. Cooperative team objectives may be prescribed in terms of the local neighborhood tracking error $\delta_i \in \mathbb{R}^n$ (Khoo, Xie, and Man, 2009)

$$\delta_i = \sum_{j \in N_i} e_{ij}(x_i - x_j) + g_i(x_i - x_0). \qquad (2)$$

The pinning gain $g_i \ge 0$ is nonzero for a small number of nodes $i$ that are coupled directly to the leader or control node $x_0$, and $g_i \neq 0$ for at least one $i$ (Li, Wang, and Chen, 2004). We refer to the nodes $i$ for which $g_i \neq 0$ as the pinned or controlled nodes. Note that $\delta_i$ represents the information available to node $i$ for state feedback purposes as dictated by the graph structure.

The state of the control or target node is $x_0(t) \in \mathbb{R}^n$, which satisfies the dynamics

$$\dot{x}_0 = A x_0. \qquad (3)$$

Note that this is in fact a command generator (Lewis, 1992) and we seek to design a cooperative control command generator tracker. Note that the trajectory generator $A$ may not be stable.

The synchronization control design problem is to design local control protocols for all the nodes in $\mathcal{G}$ to synchronize to the state of the control node, i.e. one requires $x_i(t) \to x_0(t)$ for all $i$.

From (2), the overall error vector for the network is given by

$$\delta = \big((L+G) \otimes I_n\big)\,(x - \underline{x}_0) \qquad (4)$$

where the global vectors are $x = [x_1^T \ \cdots \ x_N^T]^T$ and $\underline{x}_0 = \mathbf{1}_N \otimes x_0$, with $\mathbf{1}_N$ the N-vector of ones. The Kronecker product is $\otimes$ (Brewer, 1978). $G = \mathrm{diag}(g_i)$ is a diagonal matrix with diagonal entries equal to the pinning gains $g_i$. The (global) consensus or synchronization error (e.g. the disagreement vector in (Olfati-Saber, and Murray, 2004)) is

$$\zeta = x - \underline{x}_0.$$
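As an illustration, the sketch below (Python/NumPy; the connectivity, pinning gains, and states are hypothetical values, not the chapter's examples) computes the local neighborhood tracking errors of Eq. (2) and verifies the global form of Eq. (4):

```python
import numpy as np

n, N = 2, 3                      # state dimension, number of agents
E = np.array([[0., 1., 0.],      # same example connectivity as before
              [0., 0., 1.],
              [1., 0., 0.]])
g = np.array([0., 0., 1.])       # only node 3 is pinned to the leader
L = np.diag(E.sum(axis=1)) - E
G = np.diag(g)

x  = np.random.randn(N, n)       # stacked agent states x_i
x0 = np.random.randn(n)          # leader state

# Local neighborhood tracking errors, Eq. (2)
delta = np.array([
    sum(E[i, j] * (x[i] - x[j]) for j in range(N)) + g[i] * (x[i] - x0)
    for i in range(N)
])

# Global form, Eq. (4): delta = ((L+G) kron I_n)(x - 1_N kron x0)
zeta = (x - x0).reshape(-1)
delta_global = np.kron(L + G, np.eye(n)) @ zeta
assert np.allclose(delta.reshape(-1), delta_global)
```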
The communication digraph is assumed to be strongly connected. Then, if $G \neq 0$, the matrix $(L+G)$ is nonsingular with all eigenvalues having positive real parts (Khoo, Xie, and Man, 2009). The next result therefore follows from (4), the Cauchy-Schwarz inequality, and the properties of the Kronecker product (Brewer, 1978).

Lemma 1. Let the graph be strongly connected and $G \neq 0$. Then the synchronization error is bounded by

$$\|\zeta\| \le \|\delta\| / \underline{\sigma}(L+G),$$

with $\underline{\sigma}(L+G)$ the minimum singular value of $(L+G)$. ■

Our objective now shall be to make small the local neighborhood tracking errors $\delta_i(t)$, which in view of Lemma 1 will guarantee synchronization.

To find the dynamics of the local neighborhood tracking error, differentiate (2) along (1) and (3) to obtain

$$\dot{\delta}_i = A\delta_i + (d_i + g_i) B_i u_i - \sum_{j \in N_i} e_{ij} B_j u_j. \qquad (8)$$

This is a dynamical system with multiple control inputs, from node i and all of its neighbors.

Cooperative multi-player games on graphs

We wish to achieve synchronization while simultaneously optimizing some performance specifications on the agents. To capture this, we intend to use the machinery of multi-player games (Basar, Olsder, 1999).

Cooperative performance index

Define the local performance indices

$$J_i\big(\delta_i(0); u_i, u_{-i}\big) = \int_0^\infty \Big( \delta_i^T Q_{ii}\delta_i + u_i^T R_{ii} u_i + \sum_{j \in N_i} u_j^T R_{ij} u_j \Big)\, dt \qquad (9)$$

where $u_{-i} = \{u_j : j \in N_i\}$ denotes the policies of the neighbors of node i, and all weighting matrices are constant and symmetric with $Q_{ii} > 0$, $R_{ii} > 0$, $R_{ij} \ge 0$. Note that the i-th performance index includes only information about the inputs of node i and its neighbors.

For dynamics (8) with performance objectives (9), introduce the associated Hamiltonians

$$H_i(\delta_i, p_i, u_i, u_{-i}) = \delta_i^T Q_{ii}\delta_i + u_i^T R_{ii} u_i + \sum_{j \in N_i} u_j^T R_{ij} u_j + p_i^T \Big( A\delta_i + (d_i+g_i)B_i u_i - \sum_{j \in N_i} e_{ij} B_j u_j \Big) \qquad (10)$$

where $p_i$ is the costate variable. Necessary conditions (Lewis, and Syrmos, 1995) for a minimum of (9) are the state equation (8), the costate equation

$$-\dot{p}_i = \partial H_i / \partial \delta_i, \qquad (11)$$

and the stationarity condition

$$\partial H_i / \partial u_i = 0. \qquad (12)$$

Graphical games

Interpreting the control inputs $u_i, u_j$ as state dependent policies or strategies, the value function for node i corresponding to those policies is

$$V_i(\delta_i(t)) = \int_t^\infty \Big( \delta_i^T Q_{ii}\delta_i + u_i^T R_{ii} u_i + \sum_{j \in N_i} u_j^T R_{ij} u_j \Big)\, d\tau. \qquad (13)$$

When $V_i$ is finite, using Leibniz' formula, a differential equivalent to (13) is given in terms of the Hamiltonian function by the Bellman equation

$$0 = H_i\Big(\delta_i, \frac{\partial V_i}{\partial \delta_i}, u_i, u_{-i}\Big). \qquad (14)$$

(The gradient is taken here as a column vector.) That is, solution of equation (14) serves as an alternative to evaluating the infinite integral (13) for finding the value associated with the current feedback policies. It is shown in the proof of Theorem 2 that (14) is a Lyapunov equation. According to (13) and (10), one equates $p_i = \partial V_i / \partial \delta_i$.

The local dynamics (8) and performance indices (9) only depend for each node i on its own control actions and those of its neighbors. We call this a graphical game. It depends on the topology of the communication graph $\mathcal{G} = (V, \mathcal{E})$. We assume throughout the chapter that the game is well-formed in the following sense.

Definition 2. The graphical game with local dynamics (8) and performance indices (9) is well-formed if $R_{ij} > 0$ for $j \in N_i$ and $R_{ij} = 0$ for $j \notin N_i$.

The control objective of agent i in the graphical game is to determine

$$V_i^*(\delta_i(t)) = \min_{u_i} J_i(\delta_i; u_i, u_{-i}). \qquad (15)$$

Employing the stationarity condition (12) (Lewis, and Syrmos, 1995), one obtains the control policies

$$u_i = -\tfrac{1}{2}(d_i + g_i)\, R_{ii}^{-1} B_i^T \frac{\partial V_i}{\partial \delta_i}. \qquad (16)$$

The game defined in (15) corresponds to Nash equilibrium.

Definition 3. (Basar, and Olsder, 1999) An N-tuple of policies $\{u_1^*, u_2^*, \dots, u_N^*\}$ is said to constitute a global Nash equilibrium solution for an N-player game if, for all $i \in N$,

$$J_i^* \triangleq J_i(u_i^*, u_{-i}^*) \le J_i(u_i, u_{-i}^*) \quad \text{for all } u_i. \qquad (17)$$

The N-tuple of game values $\{J_1^*, J_2^*, \dots, J_N^*\}$ is known as a Nash equilibrium outcome of the N-player game.
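For a quadratic value $V_i = \delta_i^T P_i \delta_i$ the gradient is $2 P_i \delta_i$, so the policy (16) has a closed form. A small sketch (Python/NumPy; the matrix $P_i$ and all other numerical values are arbitrary assumptions, purely illustrative):

```python
import numpy as np

def policy(delta_i, P_i, R_ii, B_i, d_i, g_i):
    """Control (16) for a quadratic value V_i = delta_i' P_i delta_i,
    whose gradient is dV_i/ddelta_i = 2 P_i delta_i."""
    grad_V = 2.0 * P_i @ delta_i
    return -0.5 * (d_i + g_i) * np.linalg.solve(R_ii, B_i.T @ grad_V)

# illustrative numbers only
delta_i = np.array([0.3, -0.1])
P_i = np.array([[2.0, 0.5], [0.5, 1.0]])   # assumed positive definite
R_ii = np.eye(1)
B_i = np.array([[0.0], [1.0]])
print(policy(delta_i, P_i, R_ii, B_i, d_i=1.0, g_i=1.0))
```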
The distributed multiplayer graphical game with local dynamics (8) and local performance indices (9) should be contrasted with standard multiplayer games (Abou-Kandil, Freiling, Ionescu, and Jank, 2003; Basar, and Olsder 1999), which have centralized dynamics

$$\dot{z} = A z + \sum_{i=1}^{N} B_i u_i \qquad (18)$$

where $z \in \mathbb{R}^n$ is the state and $u_i(t) \in \mathbb{R}^{m_i}$ is the control input of every player, and where the performance index of each player depends on the control inputs of all other players. In the graphical games, by contrast, each node's dynamics and performance index only depend on its own state, its control, and the controls of its immediate neighbors.

It is desired to study the distributed game on a graph defined by (15) with distributed dynamics (8). It is not clear in this scenario how global Nash equilibrium is to be achieved.

Graphical games have been studied in the computational intelligence community (Kakade, Kearns, Langford, and Ortitz, 2003; Kearns, Littman, and Singh, 2001; Shoham, and Leyton-Brown, 2009). A (nondynamic) graphical game has been defined there as a tuple $(\mathcal{G}, U, v)$, with $\mathcal{G}$ a graph, $U = \{U_1, \dots, U_N\}$, $U_i$ the set of actions available to node i, and $v$ the set of local payoff functions. It is important to note that the payoff of node i only depends on its own action and those of its immediate neighbors. The work on graphical games has focused on developing algorithms to find standard Nash equilibria for payoffs generally given in terms of matrices. Such algorithms are simplified in that they only have complexity on the order of the maximum node degree in the graph, not on the order of the number of players N. Undirected graphs are studied, and it is assumed that the graph is connected.

The intention in this chapter is to provide online real-time adaptive methods for solving differential graphical games that are distributed in nature. That is, the control protocols and adaptive algorithms of each node are allowed to depend only on information about itself and its neighbors. Moreover, as the game solution is being learned, all node dynamics are required to be stable, until finally all the nodes synchronize to the state of the control node. These online methods are discussed in Section V.

The following notions are needed in the study of differential graphical games.

Definition 4. (Shoham, and Leyton-Brown, 2009) Agent i is in best response to the fixed policies $u_{-i}$ of its neighbors if

$$J_i(u_i^*, u_{-i}) \le J_i(u_i, u_{-i})$$

for all policies $u_i$ of agent i.

For centralized multi-agent games, where the dynamics is given by (18) and the performance of each agent depends on the actions of all other agents, an equivalent definition of Nash equilibrium is that each agent is in best response to all other agents. In graphical games, if all agents are in best response to their neighbors, then all agents are in Nash equilibrium, as seen in the proof of Theorem 1. However, a counterexample shows the problems with the definition of Nash equilibrium in graphical games. Consider the completely disconnected graph with empty edge set, where each node has no neighbors. Then Definition 4 holds if each agent simply chooses his single-player optimal control solution $u_i^*$, since, for the disconnected graph case, one has

$$J_i(u_i^*, u_{-i}) = J_i(u_i^*, u'_{-i})$$

for any choices of the two sets $u_{-i}, u'_{-i}$ of the policies of all the other nodes. That is, the value function of each node does not depend on the policies of any other nodes.
Note, however, that Definition 3 also holds, that is, the nodes are in a global Nash equilibrium. Pathological cases such as this counterexample cannot occur in the standard games with centralized dynamics (18), particularly because stabilizability conditions are usually assumed.

Interactive Nash equilibrium

The counterexample in the previous section shows that in pathological cases when the graph is disconnected, agents can be in Nash equilibrium, yet have no influence on each others' games. In such situations, the definition of coalition-proof Nash equilibrium (Shinohara, 2010) may also hold, that is, no set of agents has an incentive to break away from the Nash equilibrium and seek a new Nash solution among themselves.

To rule out such undesirable situations and guarantee that all agents in a graph are involved in the same game, we make the following stronger definition of global Nash equilibrium.

Definition 5. An N-tuple of policies $\{u_1^*, u_2^*, \dots, u_N^*\}$ is said to constitute an interactive global Nash equilibrium solution for an N-player game if, for all $i \in N$, the Nash condition (17) holds and in addition, for every other player $k$, there exists a policy $u'_k$ such that

$$J_i\big(u_i^*, u_{-i}^*\big) \neq J_i\big(u_i^*, (u'_k, u^*_{-i,k})\big). \qquad (21)$$

That is, at equilibrium there exists a policy of every player k that influences the performance of all other players i.

If the systems are in interactive Nash equilibrium, the graphical game is well-defined in the sense that all players are in a single Nash equilibrium with each player affecting the decisions of all other players. Condition (21) means that the reaction curve (Basar, and Olsder, 1999) of any player i is not constant with respect to all variations in the policy of any other player k.

The next results give conditions under which the local best responses in Definition 4 imply the interactive global Nash of Definition 5. Consider the systems (8) in closed loop with admissible feedbacks (12), (16), perturbed by an input $v_k$ at a single node k:

$$\dot{\delta}_i = A_{ci}\,\delta_i - e_{ik} B_k v_k, \qquad (22)$$

with $A_{ci}$ the closed-loop matrices. The global closed-loop dynamics follow by stacking (22) over all nodes i. Let $[\,\cdot\,]_{ik}$ denote the block element (i,k) of a block matrix, and let M be the length of the shortest directed path from k to i, with the nodes along this path denoted $k = v_0, v_1, \dots, v_M = i$.

Assumption 1.
a. All shortest directed paths from node k to node i pass through a single neighbor $j_1 \in N_i$ of node i. An example case where Assumption 1a holds is when there is a single shortest path from k to i.

Lemma 2. Let $(A, B_j)$ be reachable for all $j \in N$ and let Assumption 1 hold. Then the i-th closed-loop system (22) is reachable from input $v_k$ if and only if there exists a directed path from node k to node i.

Proof: Sufficiency. If $k = i$ the result is obvious. Otherwise, the reachability matrix from node k to node i has an $n \times m_k$ block element in block row i and block column k that factors into a product of nonzero path-gain terms along the shortest path and a reachability-type matrix of the pair $(A, B_k)$. Under the assumptions, the path-gain factor is nonzero and the reachability-type factor has full row rank, so this block is nonzero and the system is reachable from $v_k$.

Necessity. If there is no path from node k to node i, then the control input of node k cannot influence the state or value of node i. ■

Theorem 1. Let $(A, B_i)$ be reachable for all $i \in N$. Let every node i be in best response to all his neighbors $j \in N_i$. Let Assumption 1 hold. Then all nodes in the graph are in interactive global Nash equilibrium if and only if the graph is strongly connected.

Proof: Let every node i be in best response to all his neighbors $j \in N_i$. Then $J_i(u_i^*, u_{-i}) \le J_i(u_i, u_{-i})$ for all $u_i$, and the nodes are in Nash equilibrium.

Necessity. If the graph is not strongly connected, then there exist nodes k and i such that there is no path from node k to node i. Then, the control input of node k cannot influence the state or the value of node i. Therefore, the Nash equilibrium is not interactive.

Sufficiency. Let $(A, B_i)$ be reachable for all $i \in N$. Then if there is a path from node k to node i, the state $\delta_i$ is reachable from $u_k$, and from (9) input $u_k$ can change the value $J_i$. Strong connectivity means there is a path from every node k to every node i, and condition (21) holds for all $i, k \in N$. ■
The reachability condition is sufficient but not necessary for interactive Nash equilibrium.

According to the results just established, the following assumptions are made.

Assumption 2.
a. $(A, B_i)$ is reachable for all $i \in N$.
b. The graph is strongly connected and at least one pinning gain $g_i$ is nonzero. Then $(L + G)$ is nonsingular.

Stability and solution of graphical games

Substituting the control policies (16) into (14) yields the coupled cooperative game Hamilton-Jacobi (HJ) equations

$$0 = H_i\Big(\delta_i, \frac{\partial V_i}{\partial \delta_i}, u_i^*, u_{-i}^*\Big), \quad i \in N, \qquad (25)$$

with $u_i^*$ given by (16), where the closed-loop dynamics are

$$\dot{\delta}_i = A\delta_i - \tfrac{1}{2}(d_i+g_i)^2 B_i R_{ii}^{-1} B_i^T \frac{\partial V_i}{\partial \delta_i} + \tfrac{1}{2}\sum_{j \in N_i} e_{ij}(d_j+g_j) B_j R_{jj}^{-1} B_j^T \frac{\partial V_j}{\partial \delta_j}.$$

There is one coupled HJ equation corresponding to each node, so solution of this N-player game problem is blocked by requiring a solution to N coupled partial differential equations. In the next sections we show how to solve this N-player cooperative game online in a distributed fashion at each node, requiring only measurements from neighbor nodes, by using techniques from reinforcement learning.

It is now shown that the coupled HJ equations (25) can be written as coupled Riccati equations. For the global state $\delta$ given in (4) we can write the dynamics as

$$\dot{\delta} = (I_N \otimes A)\,\delta + B u, \qquad (28)$$

where $u = [u_1^T \ \cdots \ u_N^T]^T$ is the global control, given in terms of the global costate $p$ by $u = -\tfrac{1}{2}\,\mathrm{diag}\big((d_i+g_i) R_{ii}^{-1} B_i^T\big)\, p$, with diag(.) denoting a (block) diagonal matrix of appropriate dimensions. Furthermore, the global costate dynamics follow from the necessary conditions (11). This is a set of coupled dynamic equations reminiscent of standard multi-player games (Basar, and Olsder, 1999) or single agent optimal control (Lewis, and Syrmos, 1995). Therefore the solution can be written without any loss of generality as

$$p = P\,\delta \qquad (31)$$

for some matrix $P \ge 0$, $P \in \mathbb{R}^{nN \times nN}$.

Lemma 3. HJ equations (25) are equivalent to coupled Riccati equations, or equivalently, in closed-loop form, to quadratic matrix equations

$$A_c^T P_i + P_i A_c + Q_i + D_i(P) = 0,$$

where $A_c$ is the global closed-loop matrix under the policies (16), $P$ is defined by (31), $Q_i$ places $Q_{ii}$ in the appropriate block, and $D_i(P)$ collects the quadratic terms in $P$ arising from the control policies.

Proof: Take (14) and write it with respect to the global state and costate. By definition of the costate one has $p_i = \partial V_i / \partial \delta_i$; substituting the linear costate solution (31) into the global Bellman equations yields quadratic forms in $\delta$ that must vanish for all $\delta$, which gives the coupled Riccati equations. ■

It is now shown that if solutions can be found for the coupled design equations (25), they provide the solution to the graphical game problem.

Theorem 2. Stability and Solution for Cooperative Nash Equilibrium. Let Assumptions 1 and 2a hold. Let $V_i > 0$, $V_i \in C^1$, be smooth solutions to HJ equations (25), and let the control policies $u_i^*$, $i \in N$, be given by (16) in terms of these solutions $V_i$. Then:
a. Systems (8) are asymptotically stable, so all agents synchronize.
b. $\{u_1^*, u_2^*, \dots, u_N^*\}$ are in global Nash equilibrium and the corresponding game values are $J_i^* = V_i(\delta_i(0))$, $i \in N$.

Proof: a. If $V_i$ satisfies (25) then it also satisfies (14). Take the time derivative to obtain

$$\dot{V}_i = \frac{\partial V_i^T}{\partial \delta_i}\,\dot{\delta}_i = -\Big( \delta_i^T Q_{ii}\delta_i + u_i^{*T} R_{ii} u_i^* + \sum_{j \in N_i} u_j^{*T} R_{ij} u_j^* \Big),$$

which is negative definite since $Q_{ii} > 0$. Therefore $V_i$ is a Lyapunov function for $\delta_i$ and systems (8) are asymptotically stable.

b. According to part a, $\delta_i(t) \to 0$ for the selected control policies. For any smooth functions $V_i > 0$ satisfying (25) and $u_i^*$ the optimal controls given by (16), by completing the squares one has

$$J_i\big(\delta_i(0); u_i, u_{-i}^*\big) = V_i(\delta_i(0)) + \int_0^\infty (u_i - u_i^*)^T R_{ii} (u_i - u_i^*)\, dt \ \ge\ V_i(\delta_i(0)) = J_i\big(\delta_i(0); u_i^*, u_{-i}^*\big).$$

Since this is true for all i, Nash condition (17) is satisfied. ■

The next result shows when the systems are in interactive Nash equilibrium. This means that the graphical game is well defined in the sense that all players are in a single Nash equilibrium with each player affecting the decisions of all other players.

Corollary 1. Let the hypotheses of Theorem 2 hold. Let Assumptions 1 and 2 hold, so that the graph is strongly connected. Then $\{u_1^*, u_2^*, \dots, u_N^*\}$ are in interactive Nash equilibrium and all agents synchronize.
Global and local performance objectives: Cooperation and competition

The overall objective of all the nodes is to ensure synchronization of all the states $x_i(t)$ to $x_0(t)$. The multi-player game formulation allows for considerable freedom of each agent while achieving this objective. Each agent has a performance objective that can embody team objectives as well as individual node objectives.

The performance objective of each node can be written as

$$J_i = J^{team} + J_i^{conflict},$$

where $J^{team}$ is the overall ('center of gravity') performance objective of the networked team and $J_i^{conflict}$ is the conflict of interest or competitive objective. $J^{team}$ measures how much the players are vested in common goals, and $J_i^{conflict}$ expresses to what extent their objectives differ. The objective functions can be chosen by the individual players, or they may be assigned to yield some desired team behavior.

Policy iteration algorithms for cooperative multi-player games

Reinforcement learning (RL) techniques have been used to solve the single-player optimal control problem online using adaptive learning techniques to determine the optimal value function. Especially effective are the approximate dynamic programming (ADP) methods (Werbos, 1974; Werbos, 1992). RL techniques have also been applied to multiplayer games with centralized dynamics (18). See for example (Busoniu, Babuska, and De Schutter, 2008; Vrancx, Verbeeck, and Nowe, 2008). Most applications of RL for solving optimal control problems or games online have been to finite-state systems or discrete-time dynamical systems. In this section a policy iteration algorithm is given for solving continuous-time differential games on graphs. The structure of this algorithm is used in the next section to provide online adaptive solutions for graphical games.

Best response

Theorem 2 and Corollary 1 reveal that, under Assumptions 1 and 2, the systems are in interactive Nash equilibrium if, for all $i \in N$, node i selects his best response policy to his neighbors' policies and the graph is strongly connected. Define the best response HJ equation as the Bellman equation (14) with control $u_i = u_i^*$ given by (16) and arbitrary neighbor policies $u_{-i} = \{u_j : j \in N_i\}$:

$$0 = H_i\Big(\delta_i, \frac{\partial V_i}{\partial \delta_i}, u_i^*, u_{-i}\Big), \qquad (38)$$

where the closed-loop dynamics are

$$\dot{\delta}_i = A\delta_i - \tfrac{1}{2}(d_i+g_i)^2 B_i R_{ii}^{-1} B_i^T \frac{\partial V_i}{\partial \delta_i} - \sum_{j \in N_i} e_{ij} B_j u_j.$$

Theorem 3. Solution for Best Response Policy. Given fixed neighbor policies $u_{-i} = \{u_j : j \in N_i\}$, assume there is an admissible policy $u_i$. Let $V_i > 0$, $V_i \in C^1$, be a smooth solution to the best response HJ equation (38) and let the control policy $u_i^*$ be given by (16) in terms of this solution $V_i$. Then:
a. Systems (8) are asymptotically stable, so that all agents synchronize.
b. $u_i^*$ is the best response to the fixed policies $u_{-i}$ of its neighbors.
Proof: a. If $V_i$ satisfies (38), then $V_i$ is a Lyapunov function for the closed-loop dynamics, exactly as in the proof of Theorem 2a, and systems (8) are asymptotically stable.

b. According to part a, $\delta_i(t) \to 0$ for the selected control policies. For any smooth functions $V_i > 0$ satisfying (38), let $u_i^*$ be the optimal controls given by (16) and $u_{-i}$ be arbitrary policies. By completing the squares one has

$$J_i\big(\delta_i(0); u_i, u_{-i}\big) = V_i(\delta_i(0)) + \int_0^\infty (u_i - u_i^*)^T R_{ii}(u_i - u_i^*)\, dt.$$

The agents are in best response to fixed policies $u_{-i}$ when $u_i = u_i^*$. Then clearly $J_i(\delta_i(0); u_i^*, u_{-i}) \le J_i(\delta_i(0); u_i, u_{-i})$ for all $u_i$. ■

Policy iteration solution for graphical games

The following algorithm for the N-player distributed games is motivated by the structure of policy iteration algorithms in reinforcement learning (Bertsekas, and Tsitsiklis, 1996; Sutton, and Barto, 1998), which rely on repeated policy evaluation (e.g. solution of (14)) and policy improvement (solution of (16)). These two steps are repeated until the policy improvement step no longer changes the present policy. If the algorithm converges for every i, then it converges to the solution to HJ equations (25), and hence provides the distributed Nash equilibrium. One must note that the costs can be evaluated only in the case of admissible control policies, admissibility being a condition for the control policy which initializes the algorithm.

Algorithm 1. Policy Iteration (PI) Solution for N-player distributed games.

Step 0: Start with admissible initial policies $u_i^0$, $\forall i$.

Step 1 (policy evaluation): Given the current policies $u_i^k, u_{-i}^k$, solve the Bellman equations for the values $V_i^k$:

$$0 = H_i\Big(\delta_i, \frac{\partial V_i^k}{\partial \delta_i}, u_i^k, u_{-i}^k\Big). \qquad (40)$$

Step 2 (policy improvement): Update the policies by

$$u_i^{k+1} = -\tfrac{1}{2}(d_i+g_i) R_{ii}^{-1} B_i^T \frac{\partial V_i^k}{\partial \delta_i}. \qquad (41)$$

On convergence - End ■

The following two theorems prove convergence of the policy iteration algorithm for distributed games in two different cases. The two cases considered are: i) only agent i updates its policy, and ii) all the agents update their policies.

Theorem 4. Convergence of the Policy Iteration algorithm when only the i-th agent updates its policy and all players $u_{-i}$ in its neighborhood do not change. Given fixed neighbor policies $u_{-i}$, assume there exists an admissible policy $u_i$. Assume that agent i performs Algorithm 1 and its neighbors do not update their control policies. Then the algorithm converges to the best response $u_i$ to the policies $u_{-i}$ of the neighbors, and to the solution $V_i$ of the best response HJ equation (38).

Proof: Using the next control policy $u_i^{k+1}$ and the current policies $u_i^k, u_{-i}^k$, one writes the orbital derivative (Leake, Wen Liu, 1967) of $V_i^{k+1}$ along the trajectories generated by the current policies. Because only agent i updates its control, comparing the Bellman equations of successive iterates shows that the values are monotonically non-increasing, $V_i^{k+1} \le V_i^k$, and bounded below. By integration it follows that the sequence converges, to $V_i^*$, the solution of the best response HJ equation (38). ■
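For linear dynamics and quadratic values $V_i^k = \delta_i^T P_i^k \delta_i$, each policy-evaluation step of Algorithm 1 becomes a Lyapunov-type matrix equation. The sketch below (Python/SciPy) is a minimal illustration under simplifying assumptions: the neighbor inputs are frozen at zero, so each agent's iteration reduces to the single-agent best-response case of Theorem 4; the matrices and gains are invented values, not the chapter's example.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def pi_single_agent(A, B, Q, R, c, K0, iters=20):
    """Policy iteration for one agent's best-response problem with the
    neighbor inputs set to zero. c = d_i + g_i scales the input channel
    as in the error dynamics (8)."""
    Bc = c * B                      # effective input matrix (d_i+g_i) B_i
    K = K0                          # admissible (stabilizing) initial gain
    for _ in range(iters):
        Ac = A - Bc @ K             # closed loop under the current policy
        # Policy evaluation (40): Ac' P + P Ac + Q + K' R K = 0
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        # Policy improvement (41): u = -(c/2) R^-1 B'(2 P delta) = -K delta
        K = np.linalg.solve(R, Bc.T @ P)
    return P, K

A = np.array([[0., 1.], [-1., 0.]])
B = np.array([[0.], [1.]])
P, K = pi_single_agent(A, B, Q=np.eye(2), R=np.eye(1), c=2.0,
                       K0=np.array([[1., 1.]]))
print(P, K)
```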
The next result concerns the case where all nodes update their policies at each step of the algorithm. Define the relative control weighting as

$$r_{ij} = \sigma_{\max}\big(R_{jj}^{-1} R_{ij}\big),$$

where $\sigma_{\max}(\cdot)$ is the maximum singular value.

Theorem 5. Convergence of the Policy Iteration algorithm when all agents update their policies. Assume all nodes i update their policies at each iteration of PI. Then, for small enough edge weights $e_{ij}$ and relative control weightings $r_{ij}$, $u_i$ converges to the global Nash equilibrium for all i, and the values converge to the optimal game values $V_i^*$.

Proof: For zero neighbor coupling ($e_{ij} = 0$ and $r_{ij} = 0$), each node's iteration reduces to the single-agent case covered by Theorem 4, and the algorithm converges. By continuity, convergence holds for small values of $e_{ij}$ and $r_{ij}$. ■

This proof indicates that for the PI algorithm to converge, the neighbors' controls should not unduly influence the i-th node dynamics (8), and the j-th node should weight its own control $u_j$ in its performance index $J_j$ relatively more than node i weights $u_j$ in $J_i$. These requirements are consistent with selecting the weighting matrices to obtain proper performance in the simulation examples. An alternative condition for convergence in Theorem 5 is that the norm $\|B_j\|$ should be small. This is similar to the case of weakly coupled dynamics in multi-player games in (Basar, and Olsder, 1999).

Online solution of multi-agent cooperative games using neural networks

In this section an online algorithm for solving the cooperative Hamilton-Jacobi equations (25), based on (Vamvoudakis, Lewis 2011), is presented. This algorithm uses the structure in the PI Algorithm 1 to develop an actor/critic adaptive control architecture for approximate online solution of (25). Approximate solutions of (40), (41) are obtained using value function approximation (VFA). The algorithm uses two approximator structures at each node, which are taken here as neural networks (NN) (Abu-Khalaf, and Lewis, 2005; Bertsekas, and Tsitsiklis, 1996; Vamvoudakis, Lewis 2010; Werbos, 1974; Werbos, 1992). One critic NN is used at each node for value function approximation, and one actor NN at each node to approximate the control policy (41). The critic NN seeks to solve Bellman equation (40). We give tuning laws for the actor NN and the critic NN such that equations (40) and (41) are solved simultaneously online for each node. Then, the solutions to the coupled HJ equations (25) are determined. Though these coupled HJ equations are difficult to solve, and may not even have analytic solutions, we show how to tune the NN so that the approximate solutions are learned online. The next assumption is made.

Assumption 2. For each admissible control policy the nonlinear Bellman equations (14), (40) have smooth solutions $V_i \ge 0$.

In fact, only local smooth solutions are needed. To solve the Bellman equations (40), approximation is required of both the value functions $V_i$ and their gradients $\partial V_i / \partial \delta_i$. This requires approximation in Sobolev space (Abu-Khalaf, and Lewis, 2005).

Critic neural network

According to the Weierstrass higher-order approximation theorem (Abu-Khalaf, and Lewis, 2005) there are NN weights $W_i$ such that the smooth value functions $V_i$ are approximated using a critic NN as

$$V_i = W_i^T \phi_i(z_i) + \varepsilon_i, \qquad (48)$$

where $\phi_i(\cdot)$ are the critic NN activation function vectors, with h the number of neurons in the critic NN hidden layer, and $z_i(t)$ is an information vector constructed at node i using locally available measurements, e.g. $\delta_i(t)$ and $\{\delta_j(t) : j \in N_i\}$. According to the Weierstrass theorem, the NN approximation error $\varepsilon_i$ converges to zero uniformly as $h \to \infty$. Assuming current weight estimates $\hat{W}_i$, the outputs of the critic NN are given by $\hat{V}_i = \hat{W}_i^T \phi_i(z_i)$. Then, the Bellman equation (40) can be approximated at each step k, with residual error

$$e_{H_i} = H_i\big(\delta_i, \nabla\phi_i^T \hat{W}_i, u_i, u_{-i}\big). \qquad (49)$$

It is desired to select $\hat{W}_i$ to minimize the squared residual error $E_i = \tfrac{1}{2} e_{H_i}^T e_{H_i}$. Then $\hat{W}_i \to W_i$, which solves (49) in a least-squares sense, and $e_{H_i}$ becomes small. Theorem 6 gives a tuning law for the critic weights that achieves this.
Action neural network and online learning

Define the control policy in the form of an action neural network, which computes the control input (41) in the structured form

$$\hat{u}_{i,N} = -\tfrac{1}{2}(d_i+g_i) R_{ii}^{-1} B_i^T \nabla\phi_i^T \hat{W}_{i,N}, \qquad (51)$$

where $\hat{W}_{i,N}$ denotes the current estimated values of the ideal actor NN weights $W_i$. The notation $\hat{u}_{i,N}$ is used to keep indices straight in the proof. Define the critic and actor NN estimation errors as $\tilde{W}_i = W_i - \hat{W}_i$ and $\tilde{W}_{i,N} = W_i - \hat{W}_{i,N}$.

The next results show how to tune the critic NN and actor NN in real time at each node so that equations (40) and (41) are simultaneously solved, while closed-loop system stability is also guaranteed. Simultaneous solution of (40) and (41) guarantees that the coupled HJ equations (25) are solved for each node i. The following notion of stability is used: system (8) is said to be uniformly ultimately bounded (UUB) if there exists a compact set $S \subset \mathbb{R}^n$ such that for all $\delta_i(0) \in S$ there exist a bound B and a time $T(B, \delta_i(0))$ for which $\|\delta_i(t)\| \le B$ for all $t \ge t_0 + T$.

Theorem 6. Select the tuning law for the i-th critic NN as the normalized gradient descent

$$\dot{\hat{W}}_i = -a_i \frac{\sigma_i}{(\sigma_i^T \sigma_i + 1)^2}\, e_{H_i}, \qquad (52)$$

where $\sigma_i$ denotes the Bellman-equation regression vector at node i, and select the tuning law for the i-th actor NN as

$$\dot{\hat{W}}_{i,N} = -a_{i,N}\Big\{ \big(F_i \hat{W}_{i,N} - \bar{F}_i \hat{W}_i\big) - \tfrac{1}{4}\, D_i(\delta_i)\, \hat{W}_{i,N}\, \frac{\sigma_i^T}{(\sigma_i^T \sigma_i + 1)^2}\, \hat{W}_i \Big\}, \qquad (53)$$

where $a_i, a_{i,N} > 0$ and $F_i, \bar{F}_i$ are tuning parameters. Let the error dynamics be given by (8), and consider the cooperative game formulation in (15). Let the critic NN at each node be given by (48) and the control input be given for each node by the actor NN (51). Let the tuning law for the i-th critic NN be provided by (52) and the tuning law for the i-th actor NN be provided by (53). Assume the normalized regression vector $\sigma_i / (\sigma_i^T \sigma_i + 1)$ is persistently exciting. Then the closed-loop system states $\delta_i(t)$, the critic NN errors $\tilde{W}_i$, and the actor NN errors $\tilde{W}_{i,N}$ are uniformly ultimately bounded. ■

Remark 1. Theorem 6 provides algorithms for tuning the actor/critic networks of the N agents at the same time to guarantee stability and make the system errors $\delta_i(t)$ small and the NN approximation errors bounded. Small errors guarantee synchronization of all the node trajectories.

Remark 2. Persistence of excitation is needed for proper identification of the value functions by the critic NNs, and nonstandard tuning algorithms are required for the actor NNs to guarantee stability. It is important to notice that the actor NN tuning law of every agent needs information of the critic weights of all his neighbors, while the critic NN tuning law of every agent needs information of the actor weights of all his neighbors.

Remark 3. NN usage suggests starting with random, nonzero control NN weights in (51) in order to converge to the coupled HJ equation solutions. However, extensive simulations show that convergence is more sensitive to the persistence of excitation in the control inputs than to the NN weight initialization. If proper persistence of excitation is not selected, the control weights may not converge to the correct values.

Remark 4. The issue of which inputs $z_i(t)$ to use for the critic and actor NNs needs to be addressed. According to the dynamics (8), the value functions (13), and the control inputs (16), the NN inputs at node i should consist of its own state, the states of its neighbors, and the costates of its neighbors. However, in view of (31) the costates are functions of the states. In view of the approximation capabilities of NN, it is found in simulations that it is suitable to take as the NN inputs at node i its own state and the states of its neighbors.
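A toy illustration of the critic-update idea (Python/NumPy): gradient descent on a squared Bellman residual with a normalized regressor. The scalar system, the single quadratic basis function, and all numerical values are assumptions chosen for brevity; the chapter's full laws (52)-(53) additionally include the actor network and its stabilizing terms.

```python
import numpy as np

# Scalar error dynamics: ddelta = a*delta + b*u, cost integrand q*delta^2 + r*u^2
a, b, q, r = 0.5, 1.0, 1.0, 1.0
W = 0.1          # critic weight for basis phi(delta) = delta^2, so V = W*delta^2
lr, dt = 1.0, 0.001
delta = 1.0

for step in range(20000):
    u = -0.5 * (b / r) * (2.0 * W * delta)      # policy (41) for this basis
    u += 0.1 * np.sin(0.1 * step)               # probing noise for excitation
    ddelta = a * delta + b * u
    # Bellman residual e_H = dV/ddelta * ddelta + q*delta^2 + r*u^2, cf. (49)
    sigma = 2.0 * delta * ddelta                # regressor: d(phi)/dt
    e_H = sigma * W + q * delta**2 + r * u**2
    W -= lr * dt * sigma / (sigma**2 + 1.0)**2 * e_H   # normalized gradient, cf. (52)
    delta += dt * ddelta

# Compare the learned W against the exact scalar Riccati solution
p = (a + np.sqrt(a**2 + b**2 * q / r)) * r / b**2
print(W, p)    # W should approach p as excitation and run length increase
```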
The next result shows that the tuning laws given in Theorem 6 guarantee approximate solution to the coupled HJ equations (25) and convergence to the Nash equilibrium.

Theorem 7. Convergence to Cooperative Nash Equilibrium. Under the hypotheses of Theorem 6, the critic estimates $\hat{V}_i$ and the control policies $\hat{u}_{i,N}$ converge to the approximate cooperative Nash equilibrium (Definition 2) for every i.

Proof: The proof is similar to (Vamvoudakis, 2011) but is done only with respect to the neighbors (local information) of each agent and not with respect to all agents. Consider the weights $\hat{W}_i, \hat{W}_{i,N}$ to be UUB, as proved in Theorem 6.

a. The approximate coupled HJ equations are

$$\varepsilon_{HJ_i} = H_i\big(\delta_i, \nabla\phi_i^T \hat{W}_i, \hat{u}_{i,N}, \hat{u}_{-i,N}\big), \qquad (55)$$

where the $\varepsilon_{HJ_i}$ are the residual errors due to approximation. After adding zero (the exact HJ equations (25)) in (55) and taking norms, one bounds $\|\varepsilon_{HJ_i}\|$ by terms in the weight estimation errors. All the signals on the right-hand side of the resulting inequality (56) are UUB, and convergence to the approximate coupled HJ solution is obtained for every agent. ■

Simulation results

This section shows the effectiveness of the online approach described in Theorem 6 for two different cases. Consider the three-node strongly connected digraph structure shown in Figure 1, with a leader node connected to node 3. The edge weights and the pinning gains are taken equal to 1. The weight matrices in (9) are selected as constant symmetric matrices satisfying the conditions of the cooperative performance index section. In the examples below, every node is a second-order system, so the state of every agent consists of two components. According to the graph structure, the information vector at each node consists of the node's own local error and the local errors of its neighbors. Since the value is quadratic, the critic NN basis sets were selected as the quadratic vector in the agent's components and its neighbors' components.

Position and velocity regulated to zero

For the graph structure shown, consider the node dynamics (1) and the command generator (3) chosen so that the leader's position and velocity are regulated to zero. The graphical game is implemented as in Theorem 6. Persistence of excitation was ensured by adding a small exponentially decreasing probing noise to the control inputs. Figure 2 shows the convergence of the critic parameters for every agent. Figure 3 shows the evolution of the states for the duration of the experiment.

Fig. 3. Evolution of the system states and regulation.

All the nodes synchronize to the curve behavior of the leader node

For the graph structure shown above, consider the same node dynamics with target generator

$$\dot{x}_0 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} x_0.$$

The command generator is marginally stable with poles at $s = \pm j$, so it generates a sinusoidal reference trajectory. The graphical game is implemented as in Theorem 6. Persistence of excitation was ensured by adding a small exponentially decreasing probing noise to the control inputs. Figure 4 shows the critic parameters converging for every agent. Figure 5 shows the synchronization of all the agents to the leader's behavior as given by the circular Lissajous plot.

Fig. 5. Synchronization of all the agents to the leader node.

Conclusion

This chapter brings together cooperative control, reinforcement learning, and game theory to solve multi-player differential games on communication graph topologies. It formulates graphical games for dynamic systems and provides policy iteration and online learning algorithms along with proof of convergence to the Nash equilibrium or best response. Simulation results show the effectiveness of the proposed algorithms.
SynPo-Net—Accurate and Fast CNN-Based 6DoF Object Pose Estimation Using Synthetic Training

Estimation and tracking of 6DoF poses of objects in images is a challenging problem of great importance for robotic interaction and augmented reality. Recent approaches applying deep neural networks for pose estimation have shown encouraging results. However, most of them rely on training with real images of objects, with severe limitations concerning ground truth pose acquisition, full coverage of possible poses, and training dataset scaling and generalization capability. This paper presents a novel approach using a Convolutional Neural Network (CNN) trained exclusively on single-channel Synthetic images of objects to regress 6DoF object Poses directly (SynPo-Net). The proposed approach comprises a network architecture specifically designed for pose regression and a domain adaptation scheme transforming real and synthetic images into an intermediate domain that is better suited for establishing correspondences. The extensive evaluation shows that our approach significantly outperforms the state-of-the-art using synthetic training in terms of both accuracy and speed. Our system can be used to estimate the 6DoF pose from a single frame, or be integrated into a tracking system to provide the initial pose.

Introduction

Robotic interaction plays an essential role in automatic production, showing a significant increase in demand in recent years [1]. At the same time, Augmented Reality (AR) has shown great potential in tasks such as maintenance and training [2,3], proving its ability to improve the efficiency of cognitive tasks. 6 Degree-of-Freedom (6DoF) pose estimation and tracking is a crucial technology for AR and robotic grasping tasks and has therefore recently received increasing attention from the computer vision and robotics communities. Approaches relying on depth images exclusively or in conjunction with RGB images have achieved remarkable results over the last years [4,5]. Depth information enables more reliable pose estimation for low-textured objects, especially under challenging lighting conditions. However, depth information, which can be obtained from stereo cameras or other sensors such as Time-of-Flight (ToF) cameras, is still limited to a small group of devices with specific cost and performance limitations. In contrast, monocular camera setups are low-cost and more compact. They are already available on most current mobile devices. Therefore, pose estimation algorithms relying only on RGB image data are of great importance, while posing significant challenges as well. Classical approaches with RGB images [2,6] extract hand-crafted features from images and use them in a predefined matching procedure. However, the image gradients required for feature extraction are sensitive to motion blur. Moreover, typical features used in image processing, such as ORB features [7], have limitations under scale, rotation and illumination variations of the targets. They also require target objects with strong edge features. Deep learning based approaches, and especially Convolutional Neural Networks (CNNs), have shown excellent results on many computer vision tasks, such as object detection and classification [8][9][10], image segmentation [11] or optical flow [12]. The works of Kendall et al. [13,14] were the first attempts to use CNNs for regression of 6DoF poses for place recognition and direct relocalization.
After that, several learning based approaches followed, achieving good results on the object pose estimation problem [15][16][17][18]. The use of 3D pose refinement methods such as Iterative Closest Point (ICP) [19] or 2D methods [20,21] to improve the initial estimate appears to be of crucial importance for the performance of these pose estimation methods.

The training dataset is a critical factor for the performance of deep learning based methods since a large amount of representative data is required. For tasks such as image classification or object detection, ground truth can be easily manually labeled. Obtaining ground truth data of object 6DoF poses is a more challenging task. It requires dedicated setups, such as a robotic arm or a tracker with additional markers [4,22]. Such approaches are time-consuming and can only cover a limited variation of the object poses, scene illumination and background. Apart from that, the use of real data can negatively impact the ability of trained networks to generalize well in new environments. Due to these reasons, the use of synthetic images rendered using 3D models of objects is very promising. Training with synthetic images simplifies creating datasets with a large number of images, while ground truth of the 6DoF object pose is given directly by the rendering system. However, new challenges arise from the use of synthetic data since trained models need to overcome the representation gap between real and synthetic data. Thus, a further step of domain adaptation is necessary. Therefore, the domain adaptation problem in deep learning is a highly active field of research.

In our previous work [18], we introduced a novel approach to overcome the representation gap between synthetic and real images. We suggested using the pencil filter as an image processing step. Both synthetic and real images are transferred to the pencil filter domain before the image processing. In the work presented here we improve several parts of our pipeline, from network architecture to rotation representation and synthetic training data preparation, to achieve a significant increase in accuracy that surpasses the current state-of-the-art, as shown in an extensive experimental evaluation on different datasets. In detail, we propose the following novel contributions extending our previous work:

• A CNN network architecture specifically designed for increasing accuracy in pose estimation regression through the replacement of pooling layers with convolutional layers.
• The use of Lie algebra rather than quaternions for angle representation and regression.
• An ablation study that quantitatively shows the positive effect of all main points of our proposed approach.
• An overall approach that outperforms the state-of-the-art in 6DoF object pose estimation under similar conditions (i.e., no depth images in training, training exclusively on synthetic images) while being computationally very efficient due to the revised network architecture.

Some examples of the estimated object pose can be seen in Figure 1. SynPo-Net can efficiently initialize a frame-to-frame tracking system like VisionLib [23] by providing an initial pose or relocalizing the system when tracking is lost. The rest of this paper is organized as follows: in Section 2, we summarize existing work in object pose estimation and domain adaptation. In Section 3, we formulate the addressed problem of our work in detail.
We introduce our approach in Section 4, discussing network architecture and training, dataset generation and domain adaptation. Subsequently, we present an extensive experimental evaluation of our approach and a comparison to the state-of-the-art in Section 5. Finally, we give concluding remarks in Section 6.

Figure 1. Examples of the estimated pose using only SynPo-Net (without pose refinement); the ground truth 3D bounding box and the predicted 3D bounding box are represented in red and blue, respectively.

Related Work

In this section, previous work related to our approach is classified and summarized. We first give a short overview of object pose estimation methods using depth and color information (RGB-D). Subsequently, we discuss state-of-the-art methods relying only on RGB images, which are directly comparable to our work. Additionally, we look at existing synthetic-to-real domain adaptation techniques, not limited to pose estimation problems but also covering problems of learning from images in general.

RGB-D Object Pose Estimation

In classical approaches, 2D and 3D features are extracted from the RGB-D source, and hypotheses are made and verified to match the object 3D model in the scene. In template-based approaches, for example in the work of Hinterstoisser et al. [4], templates are generated from different viewpoints of the object model. The template consists of color gradient features on the object contour and depth gradient features on the object surface. The combination of both intensity and depth information helps to provide a reliable matching result. ICP [19] and its variations are often applied to refine the estimated pose. The accuracy and speed of these template-based methods are heavily dependent on the number of used templates. In Reference [24], sub-linear matching complexity was achieved. However, this usually trades accuracy for speed. Tejani et al. [25] adapted Reference [4] into a scale-invariant framework to reduce the number of templates. Point pair feature based approaches [5,26] match local features instead of the whole template of the object. In this way, local details which may be discriminating are not ignored. However, such methods appear to be computationally more demanding. With the development of deep learning, the features for the template matching pipeline can be learned with CNNs. In Reference [27], a CNN was used to extract descriptors of the object from various viewpoints. The approach of Reference [28] achieved significant improvement by using learned local RGB-D features rather than gradients. Furthermore, Reference [29] proposed a CNN and a multi-view fusion framework to leverage the information from multiple images, which has advantages especially on video datasets. Additionally, the use of CNNs enables per-pixel matching or per-pixel prediction. In Reference [30], the proposed framework fuses both pixel-wise features from the image and point-wise features from the corresponding depth image. Predictions are made with each of those fused dense features, and the highest confidence pose is chosen as the final prediction. The work of Reference [31] can be seen as an extension of Reference [32] to the RGB-D case. It addressed the pose estimation problem by keypoint voting in the depth map. The pose can be calculated by fitting the detected 2D keypoints to their corresponding 3D keypoints in the object model.

RGB Object Pose Estimation

Most object pose estimation approaches using only RGB images are realized through deep neural networks.
The approaches can be broadly classified into three categories. The first category extends object detection algorithms. Based on the detected 2D bounding box, various methods can be used to estimate the object rotation. In SSD-6D [15], the rotation is treated as a discrete viewpoint classification problem. Other methods directly regress rotation in the form of quaternions [33] or a Lie algebra [34] representation. To deal with the occlusion problem, Sundermeyer et al. [17] proposed an autoencoder-decoder structure to determine the rotation, relying on the object representation in the neural network. However, as Su et al. point out in Reference [35], the appearance of the object depends not only on the rotation but also on the translation. Estimating the object rotation without considering the bounding box position is therefore not accurate. Reference [36] later solved this issue by introducing a perspective correction.

Another group of approaches regresses the object pose from the entire RGB image directly. A first attempt to use a CNN for regression of 6DoF poses was PoseNet [13]. The GoogLeNet [37] architecture was used for camera relocalization from images, showing moderate accuracy, but the method was not evaluated for object pose estimation. Following the idea of using a holistic CNN solution for pose estimation, in Reference [18] a similar network was applied for the regression of object poses. The pencil filter was used as a domain adaptation technique to enable training exclusively with synthetic images.

Finally, the third category of approaches determines 3D/2D point correspondences and solves a Perspective-n-Point (PnP) problem. In contrast to appearance-feature based keypoints, CNNs can detect keypoints in a more complex feature space. For instance, in the work of References [16,38], the 2D projections of the 3D bounding box corners are detected. However, the corners of 3D bounding boxes are virtual keypoints that physically do not belong to the object. In Reference [32], a CNN was trained to predict vectors pointing to the keypoints pixel-wise. A robust RANSAC based voting scheme was used to locate the 2D keypoints using these vectors. More recently, dense per-pixel 2D-3D correspondences could be obtained. Park et al. [39] used an autoencoder-decoder to generate color-coded object masks to obtain the dense 2D-3D correspondences, with the RGB value representing the predicted position in the model's local coordinate frame.

Domain Adaptation Techniques

Several methods have been introduced to deal with the domain adaptation problem when learning from synthetic images. Different methods are often combined in practice to obtain improved results. Creating synthetic images that resemble reality as much as possible (photo-realistic rendering) is probably the most obvious solution to the problem [40,41]. However, the material of objects and lighting in complex conditions are not easy to simulate. To increase the realism, rendering the object context-aware is also very popular [42]. For example, the object should be positioned on a table, which requires detecting the planes and their orientation in the background image. Therefore, such approaches are computationally expensive, while they tend to perform well only in controlled environments. Domain adaptation with the help of real images from the target domain is also common. Some techniques perform some form of post-processing on the synthetic data to increase the similarity to real images.
Learning approaches can be used with pairs of synthetic and real images [43], or Generative Adversarial Networks (GANs) [44] can be trained to generate realistic images from synthetic images [45,46]. Small amounts of real images can also be used to fine-tune the CNN [42]. The appearance of RGB images is strongly affected by the environment, so domain adaptation for RGB images is inherently difficult. In contrast, depth images provide only spatial information. The domain gap between real depth images and synthetic depth images is much smaller than for RGB images; texture and illumination have minimal effect on depth images. Rad et al. [47] use the features from depth images to predict the pose. This part can be trained using synthetic datasets, since the domain gap between depth images can be easily overcome. A mapping between the depth map features and the color image features can be trained using real RGB-D images obtained from depth sensors. Georgakis et al. [48] also learn keypoints with depth images, and then learn to match the color image features to the keypoint features from the depth image. Furthermore, it is common to perform random data augmentation as domain randomization, for example, random noise, random brightness and contrast, random backgrounds to object views and random textures on objects [15,49]. To further improve the diversity of data, Reference [50] also changed the shape of the 3D models to get more training images. Trained with images from different domains, the CNN is forced to focus on the really critical part of the image, which is not randomized, that is, the objects in our case.

Unlike other works that attempt to fit one domain into another, we use a different approach to solve the domain adaptation problem in this work. We transform both the real and synthetic images into a new domain where visual similarity is increased and adaptation is facilitated (details in Section 4.2). Our approach is a general approach; we do not require any images or prior knowledge from the target domain. Our domain adaptation method is used together with domain randomization for further improvement.

Problem Formulation

The 6DoF object pose can be described with a rotation and a translation from the object coordinate system O to the camera coordinate system C. The translation part can be expressed with a translation vector $O_c \in \mathbb{R}^3$ representing the position of the object coordinate system origin in the camera coordinate system. The rotation can be formulated in many different ways. In this work, we use a Lie algebra vector $\varphi_{co} \in \mathbb{R}^3$, where the subscript co denotes the rotation from the object coordinate system to the camera coordinate system. Within the scope of this work, we focus on object pose estimation relying only on the color image, that is, given a single image, the pose of the target object should be estimated. Training of the proposed approaches is done exclusively with synthetic data.

Method

We describe the entire proposed pipeline of our object pose estimation system in this section. We first present the architecture of our SynPo-Net, which is the CNN designed for the task at hand, together with the used loss function in Section 4.1. Subsequently, we discuss how the predicted pose can be further refined, and the relationship between pose refinement and object 6DoF tracking, in Section 4.1.3. In the end, we discuss the synthetic training data generation and the pencil filter as the proposed domain adaptation technique in Section 4.2.
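Since the rotation is regressed as a Lie algebra vector, recovering the rotation matrix requires the exponential map. A minimal sketch (Python/NumPy, using the standard Rodrigues formula; this is illustrative code, not the paper's implementation):

```python
import numpy as np

def so3_exp(phi):
    """Exponential map so(3) -> SO(3) via the Rodrigues formula:
    R = I + (sin t / t) K + ((1 - cos t) / t^2) K^2,
    with t = ||phi|| and K = [phi]_x the skew-symmetric matrix of phi."""
    K = np.array([[0.0, -phi[2], phi[1]],
                  [phi[2], 0.0, -phi[0]],
                  [-phi[1], phi[0], 0.0]])
    t = np.linalg.norm(phi)
    if t < 1e-8:
        return np.eye(3) + K          # first-order approximation near zero
    return np.eye(3) + (np.sin(t) / t) * K + ((1 - np.cos(t)) / t**2) * (K @ K)

# Example: a quarter turn about the z-axis
R = so3_exp(np.array([0.0, 0.0, np.pi / 2]))
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 3))  # -> [0, 1, 0]
```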
6DoF Object Pose Estimation

Currently, most neural network structures are designed for image classification tasks. In this paper, we argue that when using such network structures for object pose estimation, certain modifications can improve performance. Specifically, we used the Inception network [37] as a base network and investigated the impact of input resolution, the use of pooling layers, and the representation of rotation.

Input Resolution: Li et al. [51] first mentioned that it is not suitable to directly use an object classification CNN as the backbone for object detection. Object classification tasks only need to recognize the object class. For this purpose, a global overview of the image is of high interest, which can be achieved by applying a large downsampling factor. However, object detection methods still need to estimate the object position accurately, and the spatial resolution has more meaning in this case. This argument also applies to object pose estimation tasks. In Reference [51], the downsampling factor is reduced to double the output spatial resolution of the last CNN layers. Rather than reducing the downsampling factor, we suggest increasing the resolution of the input image.

Pooling Layers: Max pooling layers are widely used in CNNs for object classification [37,52]. To recognize object classes regardless of the object position in the image, these CNNs are required to be less sensitive to the object position. The max pooling layer selects only the maximum value in the receptive field of the kernel. In other words, if the input layer is shifted within half of the kernel size, the output layer does not change. This might be beneficial for classification tasks, but not for pose regression, which needs to be sensitive even to small changes of the target position. To avoid the use of max pooling layers, we replace them with convolutional layers. More specifically, for max pooling layers followed by a convolutional layer, we merge them into one convolutional layer, in which the kernel size and the stride are the same as in the max pooling layer and the number of output channels is the same as in the initial convolutional layer (see Figure 2 as an example). For max pooling layers followed by inception blocks, we replace the max pooling layer with a convolutional layer without changing the kernel size, the stride, and the input size, so that the output channel size of the convolutional layer equals the max pooling layer input channel size. We also replace the average pooling layers with convolutional layers. A convolutional layer is equivalent to an average pooling layer when its weights are all learned as 1/(kernel_size²), so we suggest that this replacement can further increase the representative capacity of the network.
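To illustrate the pooling replacement just described, here is a minimal PyTorch-style sketch (an assumed framework; the channel counts are invented for illustration): a max pooling layer followed by a convolution is merged into a single strided convolution that takes over the downsampling.

```python
import torch.nn as nn

# Before: pooling + convolution (shift-invariant, undesirable for pose)
before = nn.Sequential(
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),   # discards small shifts
    nn.Conv2d(64, 192, kernel_size=3, padding=1),
)

# After: one strided convolution, using the pooling layer's kernel size and
# stride and the original convolution's output channels, so that small
# translations of the input can still change the output.
after = nn.Conv2d(64, 192, kernel_size=3, stride=2, padding=1)
```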
Representation of Rotation: Rotation matrices, Euler angles, quaternions and the Lie algebra are the most common representations of rotation. Rotation matrices can be used directly to rotate 3D points through matrix manipulation. However, using 9 elements to represent a 3DoF transformation is unnecessarily redundant. Besides, rotation matrices need to remain orthonormal, which introduces additional constraints to the optimization process. Euler angles are easy to understand as a representation, and are therefore commonly used for human-machine interaction. However, this representation is ambiguous, which means the same rotation can be represented with various combinations of Euler angles. Additionally, the gimbal lock problem creates essentially discontinuous points during interpolation. These properties make Euler angles less suitable for optimization problems. Quaternions are compact representations that consist of only 4 parameters and are unambiguous except that every quaternion represents the same rotation as its negative. This representation also avoids the gimbal lock problem of Euler angles and allows a smooth interpolation of rotations [53]. Nevertheless, quaternions need to be normalized, which makes them suboptimal in regression tasks (details in Section 4.1.2). The Lie algebra so(3) is a representation of rotation extensively used in optimization problems. It is a compact 3-dimensional vector that can be mapped to a rotation matrix using the exponential map. At the same time, it is ambiguity-free within an arbitrary 0 to 2π interval and does not require additional constraints. Therefore, we propose using the Lie algebra as the representation of rotation for regression with a CNN.

Other CNN Structure Adjustments: To make sure the number of output channels of the convolutional layers increases smoothly, we added more layers. Additionally, the technique of batch normalization [54] has been applied to accelerate the training process, which was not used in our previous work. Our proposed SynPo-Net is graphically represented in Figure 3.

Loss Function Definition

We used L2-norm losses for both translation and rotation regression. In our previous work, we used quaternions to represent the rotation. In that case the loss function can be expressed as

$$L_{balanced\_O_c\_q} = \big\|\hat{O}_c - O_c\big\|_2 + \alpha_q \Big\|\hat{q}_{co} - \frac{q_{co}}{\|q_{co}\|}\Big\|_2,$$

where $O_c$ and $q_{co}$ are the predicted translation vector and rotation quaternion, and $\hat{O}_c$ and $\hat{q}_{co}$ are the respective ground truth values. Since the predicted quaternions are not restricted, we need to normalize them before they can be used to represent the rotation. $\alpha_q$ is the hyper-parameter used to balance the translation and rotation loss. Using the Lie algebra to represent the rotation, the additional normalization can be avoided. Then the loss function can be formulated as

$$L_{balanced\_O_c\_lie} = \big\|\hat{O}_c - O_c\big\|_2 + \alpha_l \big\|\hat{\varphi}_{co} - \varphi_{co}\big\|_2,$$

with $\varphi_{co}$ representing the predicted Lie algebra rotation and $\hat{\varphi}_{co}$ the ground truth rotation. Thus, the loss function with the Lie algebra is more straightforward for optimizing the object rotation (without the normalization step). We used $L_{balanced\_O_c\_lie}$ to train SynPo-Net. Meanwhile, we also trained a CNN using $L_{balanced\_O_c\_q}$ for comparison. The result can be found in the experimental section (see Section 5).

The loss functions discussed above are also calculated in the middle layers of the network as auxiliary losses and weighted into the primary loss of the network. Those auxiliary losses enable effective gradient propagation in the lower layers and facilitate the training of the deep neural network. The weighted loss, which is used for the back propagation training, is defined as

$$L_{weighted} = \gamma_1 L_{aux_1} + \gamma_2 L_{aux_2} + \gamma_3 L_{primary},$$

where $\gamma_1, \gamma_2, \gamma_3$ are the hyper-parameters to adjust the effect of the auxiliary and primary losses.
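A compact sketch of the two balanced losses (PyTorch-style, an assumed framework; the default weights and tensor shapes are illustrative assumptions):

```python
import torch

def loss_balanced_quat(t_pred, q_pred, t_gt, q_gt, alpha_q=30.0):
    """Translation L2 plus quaternion L2; the raw network output q_pred
    must be normalized before it represents a rotation."""
    q_pred = q_pred / q_pred.norm(dim=-1, keepdim=True)
    return (t_pred - t_gt).norm(dim=-1) + alpha_q * (q_pred - q_gt).norm(dim=-1)

def loss_balanced_lie(t_pred, phi_pred, t_gt, phi_gt, alpha_l=30.0):
    """Translation L2 plus Lie algebra L2; no normalization step needed."""
    return (t_pred - t_gt).norm(dim=-1) + alpha_l * (phi_pred - phi_gt).norm(dim=-1)

# Weighted total with two auxiliary heads, as in Inception-style networks:
# L = g1 * L_aux1 + g2 * L_aux2 + g3 * L_primary
```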
Pose Refinement

Pose refinement is often used to improve the pose after an initial estimate is available and can optionally be applied after the pose prediction from the CNN. This task has many similarities to frame-to-frame tracking. A tracking [21] or refinement [19,20] algorithm takes the information from the previous frame or an initial pose and performs an improvement step on the estimate, often using geometry-based approaches. If the frame rate is high enough, we can expect the pose difference between two consecutive frames to be minimal. In this case, the pose refinement algorithm can also be used for object tracking. 2D images are less sensitive to object translation in depth than to object translation within the image plane. Thus, if depth is available, pose refinement with depth information can help to estimate the pose very accurately. Since the LINEMOD dataset [4] also provides depth images, we report, similarly to other works, results with 3D refinement methods applied after our proposed 2D-based estimation. We used the ICP algorithm for the pose refinement. Only the visible surface of the object is taken into account in each ICP iteration, and the visible surface is updated after each iteration.

Training Dataset with Proposed Domain Adaptation Technique

A training dataset should cover as many viewpoints of the object as possible. Following the dataset generation pipeline described in Reference [18], we generated images of the objects from different viewpoints with random backgrounds from the PASCAL VOC [55] and the IKEA [56] datasets. We augment our rendered training data by randomly adding various effects, that is, Gaussian noise, random contrast and brightness adjustment, motion blur, and speckle noise (see Figure 4). In the previous work, the augmentations were applied during dataset generation. In this work, the augmentations are applied dynamically and randomly to the images every time they are loaded for training the network. Thus, the augmentations for a certain image are not fixed and the capacity of the dataset is increased further. Subsequently, the pencil filter is applied to the synthetic training dataset and the resulting images are then used to train the network following the method of Reference [18]. Unlike other domain adaptation techniques, which attempt to transform one domain into another, we transform both domains into a third, intermediate domain in which the similarity between synthetic and real images is increased. We avoid providing color information to the network, which can be volatile when applied across datasets with different illumination conditions or between synthetic and real images. Also, unlike 3D reconstructed models, the colors of CAD models usually differ from those of the final products. We use images in the pencil filter domain, where the more reliable edge information is enhanced. In Figure 5, we present several rendered and real images with their corresponding pencil filter versions to show the increased similarity in the pencil filter domain. This abstraction of information, apart from being effective for domain adaptation, also allows us to reduce the input to our network from an RGB image to a single-channel image, positively influencing training and forward pass time. To visualize the effect of the pencil filter, we render the object model with the same pose over the real image. However, it should be noted that for training our network we only rendered the models on random backgrounds.

Figure 5. First row: Cropped real images from the LINEMOD dataset [4]. Second row: Real images after applying the pencil filter. Third row: Rendered images with the same object pose and background as the first row. Fourth row: Rendered images after applying the pencil filter.
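The exact filter parameters follow Reference [18]; as an assumption of its general form, a common pencil-sketch construction (grayscale conversion, inverted Gaussian blur, dodge blend) is sketched below with OpenCV. The kernel size is an illustrative choice.

```python
import cv2
import numpy as np

def pencil_filter(img_bgr: np.ndarray, blur_ksize: int = 21) -> np.ndarray:
    """Map a color image into a single-channel 'pencil' domain that keeps
    mainly edge information; blur_ksize is an illustrative parameter."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(255 - gray, (blur_ksize, blur_ksize), 0)
    # Color-dodge blend of the grayscale image with its blurred inverse
    return cv2.divide(gray, 255 - blurred, scale=256)
```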
Evaluation

Our evaluation results are presented in this section. We performed an ablation study with selected objects from the LINEMOD dataset [4] to investigate the effects of each proposed CNN design and training decision separately. Subsequently, we compare against the state of the art by evaluating our proposed CNN on the entire LINEMOD and TUD-L [57] datasets. LINEMOD is the most commonly used benchmark for object pose estimation, and TUD-L is a dataset focusing specifically on lighting variations.

Implementation Details

We implemented our CNN with MXNet [58] and trained it on an Nvidia GeForce RTX 2080 Ti GPU (Nvidia, Santa Clara, CA, USA). We used the same CNN settings for evaluating on both datasets. We set the hyper-parameters α_q and α_l both equal to 30 to balance the translation and rotation losses, and {γ1, γ2, γ3} to {10, 10, 40} to balance the effects of the auxiliary and primary losses. We trained our models using the ADAM optimizer [59] with a learning rate of 0.0002 and parameters β1 = 0.9 and β2 = 0.99. The models were trained for 800 epochs with a batch size of 16. We created the training dataset with OpenGL. It consists of 35,000-40,000 random poses per object. For the LINEMOD training dataset, we rendered the images without any light effect (the object's color is provided directly from the 3D model). For the TUD-L training dataset, 30% of the images were rendered without any light effect. For the remaining 70% of the images, we applied only diffuse reflection (according to the Phong lighting model [60]) with a random light source position (no specular reflection). The pencil filter was applied for domain adaptation to both the training and evaluation datasets.

Error Metrics

The Average Distinguishable Distance (ADD) error was first proposed in Reference [4]. This error is the average distance between the model points transformed into the camera coordinate system using the predicted pose and the same model points transformed using the ground truth pose. ADD errors are compared to a threshold that is based on the object size (largest diameter) for fairness. To deal better with ambiguous cases, including symmetry and occlusions, the Visible Surface Discrepancy (VSD) error was proposed in Reference [61] and refined in Reference [57]. It measures the distance difference in the depth image using only the visible part of the object in the image. The Maximum Symmetry-Aware Surface Distance (MSSD) error, introduced in Reference [62], indicates the chance of a successful grasp with a robot arm by focusing on the maximum prediction error rather than the average error of ADD. In contrast, the Maximum Symmetry-Aware Projection Distance (MSPD) is more suitable for evaluating the RGB-only methods that participate in the Benchmark for 6DoF Object Pose Estimation (BOP) [63]. The model points are projected onto the image plane for the measurement, which compensates for the weakness of RGB-only methods. To take full advantage of the different metrics, a method's average performance over VSD, MSSD, and MSPD is used as the BOP performance score. In our experiments, we use the metrics that allow comparison to the related state-of-the-art works. More specifically, we use the ADD error for the evaluation on the LINEMOD dataset and the BOP performance score for the evaluation on the TUD-L dataset.
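For reference, the ADD error described above reduces to a few lines of NumPy (a sketch; pts denotes the Nx3 model points in the object frame):

```python
import numpy as np

def add_error(pts, R_pred, t_pred, R_gt, t_gt):
    """Mean distance between the model points under the predicted pose and
    under the ground truth pose (both expressed in camera coordinates)."""
    p_pred = pts @ R_pred.T + t_pred
    p_gt = pts @ R_gt.T + t_gt
    return np.linalg.norm(p_pred - p_gt, axis=1).mean()

# A pose is counted as correct if add_error is below 10% of the object's
# largest diameter, matching the 10% threshold used in Tables 1-3.
```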
The Pencil Filter Effect

In our previous work [18], we already presented qualitative evidence that the accuracy can be improved by applying the pencil filter for domain adaptation. In this work, we try to represent intuitively the effect of the pencil filter in reducing the difference between real and synthetic images. We performed the following experiment, similar to Reference [64]. We trained two networks separately with the same synthetic images of an object. One of them was trained with single-channel pencil images, and the other was trained with RGB images. We then rendered the object over the real images with the corresponding ground truth pose and tested how the CNN is activated differently for the two types of images (the rendered images and the original real images). We passed both types of images to the network to observe the absolute differences in the feature map after the third modified inception block, based on which the first auxiliary pose is predicted. We first calculated the average absolute difference of this layer over all the images for the camera (cam) and the watering can (can) objects of the LINEMOD dataset. The difference is represented qualitatively in Figure 6. It is evident that the maximum difference between the synthetic image and the real image is smaller in the pencil domain.

Figure 6. We compare the difference in activation of a CNN layer for a network trained with pencil images and with RGB images to illustrate the domain adaptation efficiency.

We also quantitatively report the {maximum absolute difference, mean absolute difference, standard deviation of the absolute difference} in the averaged feature map of Figure 6. For the cam object trained with pencil images, these values are {0.1254, 0.0126, 0.1146}, and trained with RGB images {0.1707, 0.0127, 0.1473}, respectively. Despite the fact that pose estimation for the cam object has relatively low accuracy for our approach (see Section 5.4), the pencil filter still helps to bridge the gap between the synthetic and the real images. We achieved an outstanding result with the can object, and naturally, the absolute difference is even smaller than in the case of the camera. For the network trained for the watering can, the values are {0.11309, 0.0159, 0.114} when trained with pencil images, and {0.1414, 0.0137, 0.1398} when trained with RGB images.

CNN Architecture Modification Effects

Our main ablation study results are presented in Table 1. Here, we evaluate the influence of each of the proposed ideas on the pose estimation accuracy. We use the driller object of the LINEMOD dataset as an example and evaluate all CNN modifications proposed in this paper. The pencil filter was applied in all experiments. In the first five experiments, batch normalization was not used, and we set the learning rate to 0.0001 and the batch size to 32 to make full use of the GPU memory. The CNN in the second experiment corresponds to our previous work [18]. The result of the second experiment is slightly better than reported in Reference [18] because we adjusted the {γ1, γ2, γ3} values. The last experiment, with all modifications, corresponds to our proposed SynPo-Net; the training settings are described in Section 5.1. According to Table 1, all the modifications proposed in this paper help improve the CNN's performance for object pose estimation. The results also indicate that the amount of data plays a crucial role: by applying random augmentations after the images have been loaded, we increase the dataset diversity further, with a positive effect on the results.

Table 1. We evaluated the proposed contributions in an ablation study. The networks were tested with the driller object of the LINEMOD dataset using the Average Distinguishable Distance (ADD) metric with a threshold of 10%. Dynamic augmentation means that the random augmentations are applied after the images have been loaded for training (as mentioned at the end of Section 4.2). The details of the other modifications are described in the sections above.
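The dynamic augmentation referred to in the Table 1 caption can be sketched as a per-load random transform over a subset of the effects listed in Section 4.2; the probabilities and parameter ranges below are illustrative assumptions, not the values used in our experiments.

```python
import numpy as np
import cv2

def augment_dynamic(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply a random subset of augmentations each time an image is loaded,
    so the effective dataset diversity grows with training time."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:  # random brightness/contrast adjustment
        out = out * rng.uniform(0.8, 1.2) + rng.uniform(-20, 20)
    if rng.random() < 0.5:  # additive Gaussian noise
        out = out + rng.normal(0.0, 8.0, out.shape)
    if rng.random() < 0.3:  # simple horizontal motion blur
        k = int(rng.integers(3, 9))
        kernel = np.zeros((k, k), np.float32)
        kernel[k // 2, :] = 1.0 / k
        out = cv2.filter2D(out, -1, kernel)
    return np.clip(out, 0, 255).astype(np.uint8)
```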
LINEMOD Dataset State of the Art Comparison

We summarize the results of different methods on LINEMOD in Table 2 for comparison. The methods are divided into two groups based on the type of data used for training. Our proposed CNN outperforms AAE [36], the state of the art among synthetically trained methods, by a large margin for most objects and on average. However, their approach deals better with symmetric objects such as the glue and the eggbox. Our approach removes color information and focuses more on edge information. The 3D model of the eggbox is coarse, which could also affect the quality of the synthetic training dataset. Our CNN also performs weakly on the camera object. According to Figure 6, we can attribute this to the difference between the real and synthetic images, as domain adaptation is not as successful in this case. Overall, our proposed method clearly shows the best results reported thus far when training with synthetic data, and even exceeds Brachmann [65] and BB8 [16], which were trained with real images. For further comparison, we also trained Pix2Pose [39] and YOLO6D [38] using the same synthetic images as ours (with all augmentations applied). For Pix2Pose [39], we provided the ground truth 2D detection bounding box. It is interesting to note that, in contrast to the results obtained when training with real images, Pix2Pose [39] performs poorly when trained on synthetic images. We think that dense-correspondence matching methods focus on the pixel-level appearance of the object; they therefore overfit more easily to the synthetic images and have problems generalizing to real images when trained solely on synthetic data.

Table 2. Evaluation results on the LINEMOD dataset using the ADD metric with a threshold of 10%, using RGB images only and no pose refinement. Higher is better. *: We trained YOLO6D [38] and Pix2Pose [39] using the same synthetic images as ours.

In Table 3, we also report the results when ICP pose refinement is applied using depth images. Our result is better than that of Reference [36] after applying pose refinement as well. However, the projective ICP applied in Reference [15] leverages both image and depth information and still performs best on average. This pose refinement approach is nevertheless not openly available for testing and experimentation. In any case, our approach still outperforms SSD-6D [15] on 9 out of 13 objects of the dataset.

Table 3. Results on the LINEMOD dataset using the ADD metric with a threshold of 10%, when depth information is used for pose refinement. Higher is better.

TUD-L Dataset State of the Art Comparison

The TUD-L dataset contains three household objects under challenging light conditions. We summarize the BOP performance scores [63] (as described in Section 5.2) in Table 4. Object 2 (Frog) does not exhibit pronounced contour differences across poses, and thus its pose is harder to estimate than that of the other two objects. Still, our method shows superior performance against the other methods.

Runtime Evaluation

The frame processing rates achieved by state-of-the-art methods are summarized in Table 5. The methods were tested with different hardware (GTX 1080 or Titan X Pascal). We used an RTX 2080 Ti, which is considered about 30% faster than the Titan X Pascal [66].
Taking the hardware difference into consideration, our CNN should be able to run at about 65/(1 + 30%) ≈ 50 fps on a Titan X. To summarize, our CNN for pose estimation performs favorably against the state of the art not only in terms of accuracy but also in terms of speed.

Conclusions

In this work, we proposed SynPo-Net, a novel CNN-based approach for 6DoF object pose estimation trained exclusively with RGB synthetic images reduced to single-channel images in pre-processing. We support the idea that neural network architectures need to be adjusted to the specific task of pose regression instead of relying on network layouts designed for classification. We address the domain adaptation problem by transforming synthetic and real images into a new domain with increased similarity. The results of an extensive experimental evaluation support our claims. In an ablation study, we showed how each proposed change to the network increases the pose accuracy. The comparison on the LINEMOD and TUD-L datasets shows that our method outperforms the existing state of the art in both accuracy and inference time. In future work, we plan to use our CNN as the backbone network for multi-object pose estimation.
Performance Analysis of Two-Way All-Optical Relay-Assisted PM-FSO over Different Weather Conditions

This paper proposes a novel method of relaying for Polarization Multiplexing (PM) in Free Space Optical (FSO) networks using amplify-and-forward and decode-and-forward relays with and without fixed gain. The idea of a multiplexing scheme combined with relaying helps improve the channel capacity and the link distance. To mitigate the inter-channel crosstalk that occurs in Wavelength Division Multiplexing (WDM), PM is proposed. To counter the degradation of system performance due to atmospheric turbulence, the QAM modulation scheme is used. The performance of the system is analyzed by considering various parameters such as BER, link distance, and the transmitted and received power. Monte Carlo simulations are used to validate the results.

Introduction

FSO is a trending technology for communication networks owing to its ease of deployment and the availability of license-free spectrum and bandwidth [1,2]. Much research is already available on increasing the capacity of FSO networks to meet the growing demands of communication and multimedia applications, with WDM as the major focus of interest [3][4][5]. Since WDM offers extensive capacity increases in terabits/s, it offers a solution to the demand of global digital communication. The major problem with WDM in FSO is inter-channel crosstalk, which degrades the system performance even more when combined with the turbulence characteristics of the atmosphere [5]. WDM is a technology in which a different wavelength is needed for each transmitted channel; for last-mile applications, adding more sources for each transmitted channel becomes costly. Since WDM also suffers from inter-channel crosstalk, PM is proposed in this paper. PM offers a solution in that it allows multiplexing of two channels on the same carrier wavelength by separating them into differently polarized beams [6]. To increase the link distance, relays are introduced.

The major contributions of this paper can be detailed as follows. In Section II, PM in an FSO network with an amplify-and-forward relay is proposed. This offers a cost-effective network with easy deployment, which can be combined with RF- or fiber-based networks to provide connectivity where RF and fiber solutions are not possible. In Section III, the channel model is explained and QAM is proposed as the modulation scheme, which provides a constellation with uniform probability and higher spectral efficiency but a shorter distance. In Section IV, the BER is analyzed. In Section V, we discuss the results, including all types of noise due to atmospheric turbulence. The Gamma-Gamma channel model is assumed as the propagation model for simulating the results.

System Model: PM-FSO with Relays

The existing relay-assisted WDM FSO system of [7] is shown in Fig. 1. It consists of single- and two-hop relays. Inter-channel crosstalk is the major issue with WDM; it is dealt with using OOK modulation in [8] and M-PPM in [7]. The existing system uses a multi-hop WDM FSO with relay nodes to increase the distance. That system uses M-PPM as the modulation scheme, which is more complex for real-time implementation and also requires a large bandwidth. The multiplexing method used in the existing system is WDM, which requires two different laser sources to send two channels and thus increases the cost of the system.
The proposed system uses a single laser source, with the transmitted optical beam split into two polarized beams that can carry two channels. The proposed system is shown in Fig. 2. The system provides two-way communication, both upstream and downstream, between users at different locations. The total transmission distance is increased using relay nodes placed at LOS distance from the transmitter and receiver.

In this proposed model, we consider a single laser source whose output is split into x and y components using a PBS (Polarization Beam Splitter). The two beams carry the data of two different channels. The transmitted beams are multiplexed using a PBC (Polarization Beam Combiner) instead of a WDM MUX. Each transmitted optical beam enters the relay node, which amplifies the received signal and retransmits it. The receiver section detects the optical signal, converts it to an electrical signal, and demodulates it to recover the original signal.

In the existing system, inter-channel crosstalk is prominent in the DEMUX due to its imperfections, and it becomes severe when atmospheric turbulence is also considered; hence, the received signal may suffer severe degradation. In the proposed system, each polarized beam is QAM modulated, whereby each polarized beam is again split into two carriers differing in phase by 90 degrees (the I and Q signals). Before transmission into free space, both signals are combined. At the receiver end, the carriers are separated for demodulation and the information bits are recovered.

Channel Model: Gamma-Gamma Atmospheric Turbulence Model

We adopted the Gamma-Gamma channel model to describe the characteristics of atmospheric turbulence [8]. In our proposed system, as shown in Fig. 2, the source node (S) communicates with the destination node (D) via an intermediate relay node (R). Both hops use FSO links. The relay uses the amplify-and-forward scheme, where the incoming signal is multiplied by a fixed gain using an EDFA amplifier. The signal received at the relay is modeled as

$$y_R = h_1 x + n_1, \qquad (1)$$

where $y_R$ is the received signal at the relay, $x$ is the transmitted signal, $n_1$ is the noise introduced in the first hop, and $h_1$ is the channel coefficient defined by the Gamma-Gamma model. The signal received at the destination is modeled as

$$y_D = G\, h_2\, y_R + n_2, \qquad (2)$$

with $G$ the fixed relay gain, $h_2$ the second-hop channel coefficient, and $n_2$ the second-hop noise. The instantaneous SNR at the destination, as defined in [8], is given by

$$\gamma = \frac{\gamma_1 \gamma_2}{\gamma_2 + C}, \qquad (3)$$

where $\gamma_1$ and $\gamma_2$ are the per-hop SNRs and $C$ is a constant determined by the fixed relay gain. The normalized received irradiance is defined in [7] as the product of two statistically independent random processes:

$$I = I_x I_y, \qquad (4)$$

where $I_x$ and $I_y$ arise from the large-scale and small-scale turbulent eddies, respectively. Their probability density functions are given by

$$f_{I_x}(I_x) = \frac{\alpha (\alpha I_x)^{\alpha - 1}}{\Gamma(\alpha)} \exp(-\alpha I_x), \qquad (5)$$

$$f_{I_y}(I_y) = \frac{\beta (\beta I_y)^{\beta - 1}}{\Gamma(\beta)} \exp(-\beta I_y). \qquad (6)$$

The Gamma-Gamma irradiance fluctuation is then given by

$$f_I(I) = \frac{2 (\alpha \beta)^{(\alpha + \beta)/2}}{\Gamma(\alpha)\Gamma(\beta)}\, I^{\frac{\alpha + \beta}{2} - 1}\, K_{\alpha - \beta}\!\left( 2 \sqrt{\alpha \beta I} \right), \quad I > 0, \qquad (7, 8)$$

where $\alpha$ and $\beta$ represent the effective numbers of large-scale and small-scale eddies of the scattering process, $K_\nu(\cdot)$ is the modified Bessel function of the second kind of order $\nu$, and $\Gamma(\cdot)$ is the gamma function. The values of $\alpha$ and $\beta$ are given by

$$\alpha = \left[ \exp\!\left( \frac{0.49\, \sigma_R^2}{\left( 1 + 1.11\, \sigma_R^{12/5} \right)^{7/6}} \right) - 1 \right]^{-1}, \qquad (9)$$

$$\beta = \left[ \exp\!\left( \frac{0.51\, \sigma_R^2}{\left( 1 + 0.69\, \sigma_R^{12/5} \right)^{5/6}} \right) - 1 \right]^{-1}, \qquad (10)$$

where $\sigma_R^2$ is the Rytov variance, defined as

$$\sigma_R^2 = 1.23\, C_n^2\, k^{7/6}\, L^{11/6}, \qquad (11)$$

with $k = 2\pi/\lambda$ the wave number, $\lambda$ the wavelength used [9], and $L$ the link distance. The refractive index structure parameter $C_n^2$ varies from $10^{-13}\ \mathrm{m}^{-2/3}$ to $10^{-17}\ \mathrm{m}^{-2/3}$ and is defined in [10].
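The channel parameterization of Equations (4)-(11) is straightforward to code. The NumPy/SciPy sketch below derives alpha and beta from the Rytov variance and draws Gamma-Gamma irradiance samples as the product of two unit-mean gamma variates; the example values for Cn2, wavelength, and link length are assumptions for illustration only.

```python
import numpy as np
from scipy.special import gamma as Gamma, kv

def gg_params(Cn2, wavelength, L):
    """Gamma-Gamma parameters alpha, beta from the Rytov variance, Eqs. (9)-(11)."""
    k = 2 * np.pi / wavelength
    sr2 = 1.23 * Cn2 * k ** (7 / 6) * L ** (11 / 6)  # Rytov variance, Eq. (11)
    alpha = 1.0 / (np.exp(0.49 * sr2 / (1 + 1.11 * sr2 ** (12 / 5)) ** (7 / 6)) - 1)
    beta = 1.0 / (np.exp(0.51 * sr2 / (1 + 0.69 * sr2 ** (12 / 5)) ** (5 / 6)) - 1)
    return alpha, beta

def gg_pdf(I, alpha, beta):
    """Gamma-Gamma irradiance pdf, Eqs. (7)-(8)."""
    c = 2 * (alpha * beta) ** ((alpha + beta) / 2) / (Gamma(alpha) * Gamma(beta))
    return c * I ** ((alpha + beta) / 2 - 1) * kv(alpha - beta, 2 * np.sqrt(alpha * beta * I))

def gg_sample(alpha, beta, n, rng=np.random.default_rng(0)):
    """Sample I = Ix * Iy as a product of unit-mean gamma variates, Eq. (4)."""
    return rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)

a, b = gg_params(Cn2=1e-14, wavelength=1550e-9, L=1000.0)  # illustrative values
print(a, b, gg_sample(a, b, 5))
```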
BER Analysis

Our proposed 32-QAM signal constellation can be expressed through two unconstrained 8-PAM signals, so the bit error rate of QAM, given in Equation (12), can be derived by analyzing the PAM signals. In the system developed in [10], the upstream transmission assumes that the signal and the interferer travel distinct paths, while in the downstream the signal and the interferer are assumed to experience the same atmospheric turbulence; crosstalk is treated as the interferer and the error probability is calculated accordingly. In our proposed model, since the signals travel on different polarizations, the signal does not experience the turbulence effects of the interferer. Thus, the system provides better performance compared with WDM FSO.

Performance Analysis in Upstream and Downstream Transmission

The mathematical model for the FSO channel is derived for a single parallel relay placed between the source node and the destination node. The distance between the source node and the destination node is varied and the performance is analyzed. The channel is assumed to be independent and randomly varying due to atmospheric turbulence. We consider three loss factors: attenuation due to absorption and scattering ($h_s$; Rayleigh scattering is considered for the simulation), attenuation due to geometric properties such as beam divergence ($h_g$), and attenuation due to pointing errors ($h_p$), which are caused by building sway. The channel between the source and relay node and between the relay and destination node is represented as

$$h_i = h_{s,i}\, h_{g,i}\, h_{p,i}, \quad i \in \{1, 2\}, \qquad (13)$$

and the probability of error can be written as

$$P_e = \int_0^{\infty} \!\! \int_0^{\infty} P(e \mid h_1, h_2)\, f(h_1)\, f(h_2)\, \mathrm{d}h_1\, \mathrm{d}h_2, \qquad (14)$$

where $h_1$ is the channel from the source to the relay node and $h_2$ is the channel from the relay to the destination node. The SNR for QAM was already defined in Equation (3).

Results and Discussions

The simulation parameters used in this work are tabulated in Table 1; among them, the photodetector responsivity is 1 A/W.

Fig. 3 shows the BER for a dual-hop FSO system with a single relay placed between the transmitter and receiver using QAM modulation. The figure is obtained by keeping the distance between the relay and the receiver constant and varying the distance of the relay node from 0.5 km to 2.5 km under clear weather conditions. We obtained good results up to 2 km; beyond 2.5 km, no signal is received.

Fig. 3. Avg. SNR vs. BER for a dual-hop FSO system.

Fig. 4 shows the BER for an FSO system with two relays placed between the transmitter and receiver using QAM modulation. The figure is obtained by keeping the distance between the second relay and the receiver constant and varying the distance of the first relay node from 0.5 km to 2.5 km under light fog conditions. We obtained good results up to 1.8 km; beyond 2 km, no signal is received.

Fig. 4. Avg. SNR vs. BER for a multi-hop (two-relay) FSO system.

Fig. 5 shows the BER for an FSO system with two relays placed between the transmitter and receiver using QAM modulation. The figure is obtained by keeping the distance between the first relay and the receiver constant and varying the distance of the second relay node from 0.5 km to 1.8 km under clear weather conditions. We obtained good results up to 0.8 km; beyond 1.2 km, no signal is received.

Fig. 5. Avg. SNR vs. BER for a multi-hop (two-relay) FSO system under clear weather conditions.

Fig. 6 shows the BER for an FSO system with two relays placed between the transmitter and receiver using QAM modulation. The figure is obtained by simultaneously varying the distance of the first relay from 0.5 km to 1.2 km and of the second relay node from 0.5 km to 1.5 km under clear weather conditions. We obtained the best results when keeping 0.5 km for the first relay and 1 km for the second relay.
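The Monte Carlo validation mentioned in the abstract can be sketched as follows. The Gray-coded M-QAM BER approximation and the unit-mean Gamma-Gamma sampling below are standard textbook forms standing in for the exact Equation (12), and the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.special import erfc

def qam_ber(snr, M=32):
    # Approximate BER of Gray-coded M-QAM in AWGN (a standard approximation,
    # used here in place of the exact Eq. (12))
    k = np.log2(M)
    return (4 / k) * (1 - 1 / np.sqrt(M)) * 0.5 * erfc(np.sqrt(3 * snr / (2 * (M - 1))))

def mc_ber_af(snr0_dB, alpha, beta, n=200_000, C=1.0, M=32,
              rng=np.random.default_rng(1)):
    """Monte Carlo average BER of the dual-hop fixed-gain AF link: sample
    unit-mean Gamma-Gamma irradiances per hop, form the end-to-end SNR of
    Eq. (3), and average the conditional QAM BER over the fading."""
    snr0 = 10 ** (snr0_dB / 10)
    I1 = rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)
    I2 = rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)
    g1, g2 = snr0 * I1 ** 2, snr0 * I2 ** 2  # intensity detection: SNR scales with I^2
    return qam_ber(g1 * g2 / (g2 + C), M).mean()

print(mc_ber_af(30, alpha=4.0, beta=2.0))  # moderate-turbulence example
```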
Fig. 8 shows the outage probability versus average SNR for both fixed- and variable-gain AF and DF relays. Here also, we considered a single parallel relay placed between the source and destination nodes; the distance between the relay and the source is 1.2 km and between the relay and the destination is 0.8 km. The figure shows better performance for AF relays than for DF relays. For fixed gain, AF and DF show similar performance, but for variable gain, DF shows better performance than the system with AF relays.

Fig. 1. Existing WDM/FSO system with relay nodes.

Fig. 2. (a) Transmitters/receivers at the OLT and receiver nodes; (b) remote nodes with MUX and DEMUX, providing two-way transmission.

Fig. 6. Avg. SNR vs. BER for a multi-hop (two relays with varying distance) FSO system under clear weather conditions.

Fig. 7. Gamma distribution vs. outage probability.
Market Level Price Analysis of Copra Trading in the Philippines

The dynamics of market-level prices were examined for Philippine copra trading. The analysis of the price formation process in the copra miller-dealer-farmer markets showed that a weak form of market integration characterized the trading of copra resecada between dealers and millers in all Philippine regions. In contrast, integration of any form was absent between miller and farmer and between dealer and farmer in all regions except Region V. Likewise, no integration was noted at any market level when dealers and millers used the copra resecada price while farmers were given the copra corriente (Pasa) price. Important factors were identified that contributed to the level of market integration. The recommendations made encompass coconut production and productivity, market infrastructure and facilities, and the pricing system in copra trading.

Introduction

The coconut industry in the Philippines encompasses about 3.5 million families directly working in the coconut farm sector and about 25 million Filipinos indirectly dependent on the industry, such as traders, exporters, and processors. Its importance is further reflected in the value of its economic contribution: next to the rice industry, it has the second largest contribution to Gross Value Added (GVA). In view of its immense importance, the Philippine government has issued policies and legislation over the past three decades to improve the coconut industry. Within a span of four decades since the 1940s, the industry evolved into a competitive agri-based commodity trading system supported by millions of coconut farmers selling in small lots to dealers for final delivery to millers. In view of how the industry evolved over time and the variety of factors affecting prices, there is a need to study how well copra markets at the different levels are integrated. The copra marketing system is price efficient if price changes are fully transmitted between market levels, preventing private traders from obtaining abnormal profits. This is possible only in markets that are well integrated.

Methodology

The study covered ten (10) regions in the Philippines, namely: Regions IVA, V, and NCR in Luzon; Regions VI, VII, and VIII in the Visayas; and Regions IX, X, XI, and XII in Mindanao. The primary data were collected through a survey of coconut farmers, dealers, and millers. The market-level copra price relationships were tested using the farm gate price, dealer price, and miller price within each region for the same period. Time series data for copra following the old classification (copra resecada and copra corriente) were used. These data were gathered from the Philippine Coconut Authority (PCA) and the Bureau of Agricultural Statistics (BAS). The stationarity of each variable was tested using Dickey-Fuller (DF) or Augmented Dickey-Fuller (ADF) tests (Sinharoy and Nair, 1994). The Ravallion model (Faminow and Benson, 1990) was used to test the dynamics in market-level price relationships. The analysis of factors at the different market levels in relation to market integration involved regression and correlation analyses.
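As an illustration of the stationarity pre-testing step, the ADF test is available in statsmodels; the file and column names below are hypothetical stand-ins for the PCA/BAS series, not the study's actual variable names.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

prices = pd.read_csv("copra_prices.csv", parse_dates=["month"])

for col in ["farmgate", "dealer", "miller"]:
    stat, pvalue, *_ = adfuller(prices[col].dropna(), autolag="AIC")
    print(f"{col}: ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
    # A p-value above 0.05 means a unit root cannot be rejected, so the
    # series should be differenced before the integration analysis.
```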
Results and Discussions

The copra product. Copra is a homogeneous product differentiated in terms of quality as indicated by its moisture content (MC). The old classification of copra followed the standards and grades set by the PCA and industry members. Copra resecada was defined as copra with 6-13% MC, while copra corriente has higher MC (>14%). The new copra classification standards of 1991 maintained the base price of copra at 12% moisture (semi-resecada) but, in addition, reduced the rejection level to 12%. In effect, this meant that the "Pasa" system (a system with an automatic deduction on copra having 14% or higher MC, judged only by its appearance) of trading would cease. Tapahan-dried copra was permitted to be traded within the moisture range of 12.1-14%, provided that this copra was dried down immediately to at least 12% by traders with drying facilities. A table with a price adjustment factor allowing for weight loss during drying from 12% to 7% moisture served as the basis for relating MC to price.

To further promote the quality and marketability of coconut oil and copra consistent with prevailing market prices, the 2003 revision of the price adjustment scale for MC in copra stated that the "on-the-spot" price of copra at the mill or farm gate shall apply to the weight of copra adjusted with a deduction calculated from the difference between the prescribed and actual MC. No deduction applies to trade copra resecada/bodega with 6% MC, but prescribed deductions apply within the MC range of 6.1-13.9%. Copra with 14% MC and above was deemed non-merchantable for export or for processing into other by-products. In June 2004, a new copra classification table was put in place after months of an information campaign to address the issues of high aflatoxin in copra meal and high free fatty acid content of the oil. The order aimed to prohibit the trading of high-moisture copra and to penalize prolonged storage, which results in very low moisture copra.
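The weight adjustment underlying these pricing rules can be illustrated with a simple mass-balance calculation. The sketch below normalizes a delivery to a base moisture content; it is an assumption of the general form, not the official PCA deduction table.

```python
def moisture_adjusted_weight(weight_kg: float, mc_actual: float,
                             mc_base: float = 0.06) -> float:
    """Convert a copra delivery to its base-MC equivalent weight by
    removing the excess water mass (simple mass balance)."""
    if mc_actual >= 0.14:
        raise ValueError("copra with 14% MC or more is non-merchantable")
    dry_matter = weight_kg * (1 - mc_actual)
    return dry_matter / (1 - mc_base)

# e.g., 100 kg delivered at 12% MC is worth about 93.6 kg at the 6% base
print(round(moisture_adjusted_weight(100, 0.12), 1))
```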
Price Differential of Market Level Prices Within Regions

Copra resecada. On average, the miller copra price was lower than the dealer price by PhP 0.03/kg (Table 1). Region VIII had a higher mill gate price than dealer price because the oil mills in the region were more cost efficient, owing to their proximity to the supply source and their application of cost-reducing measures in partnership with oil mills in other regions. For Region VII, the price differential was a low positive (PhP 0.001). This situation could be attributed to the practice of many big dealers based in Region VII who offered a competitively higher price to attract Region VIII dealers. Their strategies included price speculation, setting target volumes, negotiating prices based on volume, and calling on fellow dealers with an offer. Hence, although the Region VII mill gate price was lower than the dealer price, the negotiated price could go higher when the dealer had the volume. As the millers put it, "traders trade while millers accommodate". Another factor that may have contributed to the lower mill gate copra price relative to the dealer price in Region VII was the less aggressive stance of millers in competing for copra supply in the presence of palm oil supply. Accordingly, the copra procurement volume of certain millers in Region VII decreased in 2004 because of the low price of imported palm oil, which was highly demanded by industrial and fast food chain clients. The price differential between buying stations and dealers followed a similar trend. Except for Region IVA in Luzon, which posted a positive price differential in its mill gate price to attract dealers from other regions, buying stations had prices lower than dealers by PhP 0.24/kg on average. The price difference in the Visayas was mainly due to the inter-regional freight cost to the oil mill and the administrative cost of operating buying stations. On the other hand, a positive price differential was observed at the miller-farm (PhP 1.50/kg) and dealer-farm (PhP 1.19/kg) levels. Overall, the farm-dealer-mill average copra resecada prices of five regions showed that the miller-dealer price difference (PhP -0.03/kg) was much lower than the miller-farm (PhP 1.50/kg) or dealer-farm (PhP 1.19/kg) price difference.

Dynamics in Market Level Prices

Trading of copra resecada between dealers and millers in all regions was characterized by weak, long-run integration (Figure 1 and Table 3). A less than perfect basing-point system was in place, in which the dealer bases the price offered on the price set by the miller in all the regions. Overall, price formation between the dealers and millers was more connected than the price realization between the farmers and the dealers/millers. In an oligopolistic setting, however, this has other implications for the efficiency of price transmission. The price formation relationship of the farmer-miller pair in Region V was characterized by a less than perfect basing-point system in which the farmer bases on the miller, implying that the miller leads the price realization process. Further, the price system between the farmer and the dealer can be described as competitive FOB pricing, since the weak form of integration was rejected but the long-run form was accepted. In essence, Region V is a major coconut-producing region, but its copra production is only about 26% of the total milling capacity of the five oil mills in the region; therefore, aggressive copra procurement by millers may be imminent. One event that may have a bearing on the finding of a basing-point pricing system between the farmers and the miller in Region V was the role of the broker and big traders in the "toll crushing agreement" (TCA) in the area. On the other hand, no integration was observed at any level when dealers and millers used the copra resecada price while farmers were given the copra corriente or Pasa price (Fig 2).
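A simplified error-correction regression is one common way to operationalize Ravallion-type price dynamics of the kind described above; the column names are hypothetical and the study's exact specification may differ.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("copra_prices.csv")
d_farm = df["farmgate"].diff().rename("d_farm")
d_mill = df["miller"].diff().rename("d_miller")
spread = (df["farmgate"] - df["miller"]).shift(1).rename("lag_spread")

X = sm.add_constant(pd.concat([d_mill, spread], axis=1).dropna())
y = d_farm.loc[X.index]
print(sm.OLS(y, X).fit().summary())
# A short-run coefficient on d_miller near 1 indicates strong integration;
# a significant lag_spread term indicates long-run adjustment toward the
# miller price, i.e., a basing-point relationship.
```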
Factors Affecting Integration and Efficiency at the Market Level

Self-sufficiency in production. No integration was observed at the miller-farmer and dealer-farmer levels. This could be partly attributed to low production at the farm level. Most farmers surveyed had a low volume of nuts harvested (35% of the recorded harvests were below 3,000 nuts), and since farmers had limited sources of income, they often harvested and sold immediately regardless of the current price level. The results showed that market integration can be enhanced with increased trade flow and improved coconut production. Farmers were able to obtain a better price with a higher volume of nuts harvested and copra sold (Table 4). Improved coconut production was significantly correlated with the type of coconut variety planted and the age of the palms. The hybrids showed high potential even at a young average age of eight years.

Difference in copra quality. Variation in product quality, whether a result of a pricing system using the Pasa approach or a real inferiority in the quality of the copra traded, had a highly significant effect on the failure of market integration at the different market levels. As such, the farm-dealer and farm-miller copra marketing levels were grossly price-inefficient: prices at the miller and dealer levels were not efficiently transmitted to the farm level. These results imply that policies geared towards improving mill copra prices cannot raise farm income while the Pasa pricing system is still being practiced. About 43% of the farmers surveyed perceived that the quality of copra affected the copra price at the farm level. The correlation analysis also revealed that the use of an MC meter at the point of sale was significantly associated with a higher price. An average copra sale with the use of the moisture meter recorded a higher price (PhP 18.53/kg) compared to a copra sale using the visual approach (PhP 17.21/kg). However, the data showed that 82% of the copra sales for the period were made using the visual approach (Table 5). The reasons given by farmers for the dominance of the visual approach were: (1) the traders did not have an MC meter; (2) the traders had a moisture meter but did not use it; (3) the farmers did not want to wait for the result of the MC reading and opted for immediate cash payment; (4) the Pasa system was the traditional way and moisture reading was not practiced in the area; (5) the volume of copra to be sold was low, so there was no need for a moisture reading; (6) the buyers were just small traders; and (7) the buyers did not care.

Under the visual approach to buying copra, the Pasa system, with its automatic 14% deduction on the copra volume sold by the farmer, prevailed even if the copra was resecada. Moreover, if the copra was presumed by the buyers to be inferior in quality, further deductions were imposed. About 86% of the copra sales for the period covered were given deductions of 14% to 22%. Although millers and traders used the MC meter when trading with each other, they generally did not use it when dealing with farmers. About 81.5% of the traders still used the visual approach in buying copra from farmers, and only 18.5% of them used the moisture meter when buying copra from other traders. About 70% of the traders reported that the moisture content of copra from farmer-sellers was greater than 14% under the visual approach. Moreover, about 14% of these traders recorded 6.1-12% MC for copra sold by other traders. About 15% of the traders reportedly gave discounts of 10.9-14%, but the majority (60.5%) gave 14.1-25% discounts on copra sold by farmers.

At the miller-dealer level, traded copra usually underwent a moisture content reading using the moisture meter; hence, spot or contract prices with appropriate deductions were often ensured. About 90% of the traders used the moisture meter in selling, most often to millers. The use of the visual approach in copra trading (10%) was observed when traders (i.e., municipal) sold to other traders (i.e., municipal/provincial/regional traders). Under the visual approach, the copra traded was all classified as resecada. About 62% of the discounts ranged from 0-7%. At this level, the spot/contract price less discounts was appropriately implemented, indicating that market integration held. Hence, with a uniform standard of copra quality traded as a result of an acceptable pricing practice, policies geared towards improving mill copra prices also have a positive effect on increasing dealers' prices.

Long chain of intermediaries in the market structure. The correlation analysis of survey responses showed that the copra price farmers received was significantly associated with the type of buyer. The farmers surveyed usually sold to municipal and provincial traders.
A positive association was noted between the copra price that farmers received and the type of buyer, classified by geographical coverage in buying and selling. Hence, higher farmers' copra prices were associated with provincial/regional/inter-regional buyers. The relatively higher price could be attributed to the shorter marketing chain, in which fewer participants, each incorporating a profit, were involved. Moreover, the type of buyer by geographical coverage was also positively and significantly associated with the volume of copra sold. Hence, farmers with a higher volume of copra to trade often sold to bigger traders and negotiated for a higher copra price.

Although, on average, the price received by farmers (Pcf) from traders was lower than that from millers (PhP 17.35 vs. PhP 18.39), about 94% of the farmers sold to traders and only about 6% sold directly to millers. This result could be attributed to other marketing practices that closely linked the traders and the farmers. Farmers usually sold their copra to their regular buyers or "suki" (29%), to buyers who offered them credit/marketing tie-ups (24%), and to buyers who offered a higher copra price (20%). The "suki" buyers were described as those who gave minimum deductions, had a good relationship with the farmers, and normally offered cash advances. Credit/marketing tie-ups took the form of Direct Copra Marketing (DCM) arrangements between the cooperative and the farmers; free transport for hauling; provision of cash advances on the condition that payment would be made upon marketing of the copra; and, more importantly, an agreement that the copra be sold to the lender-buyers.

Related to this, the copra price given to farmers was negatively associated with their credit status, the mode of payment for their copra, and their source of capital or credit. The survey results showed that 60% of the farmers obtained credit from copra buyers and other sources, while the remaining 40% used their own capital and did not resort to credit. The results further showed the highly significant effect of the farmers' credit status on the copra price. Farmers without any credit from buyers and/or other sources were given a significantly higher copra price compared to farmers who were provided credit services by buyers (PhP 17.84 vs. PhP 17.18/kg copra). Likewise, farmers who used their own capital in copra production received a better price for their copra (PhP 17.84/kg) compared to those who sourced their capital from their copra buyers (PhP 17.26/kg). Moreover, farmers who sourced their capital from other sources got the lowest price for their copra (PhP 16.12/kg).

Bottlenecks in transportation and infrastructure facilities. Gravel and dirt roads connected the farms of 70% of the farmers surveyed to their copra selling points. Notably, farmers incurred higher freight costs of PhP 0.31/kg and PhP 0.34/kg when transporting via dirt and gravel roads, respectively, compared to a cost of PhP 0.28/kg when cemented or asphalted roads were used. Hence, the freight cost per kg per km was about PhP 0.12 for dirt roads and PhP 0.03 for cemented/asphalted roads. Farmers who had to traverse dirt roads and then transport copra by sea had a much higher freight cost of PhP 1.23/kg. On the other hand, about 29% of the farmers surveyed relied on the transportation provided by the traders, or they delivered and the buyer paid the freight cost.
Related to this, the copra price received by farmers was lower under the pick-up method than when copra was delivered to the buyers (PhP 16.87/kg vs. PhP 17.72/kg). About 63% of the copra sales were delivered to the buyers, while 37% were picked up by the traders. Notably, most of the traders were equipped with trucks for copra procurement operations. Other modes of transportation reported by farmers were public utility vehicles (29%), hired/private vehicles (14%), and horseback and animal-drawn transport (19%).

Pricing practices. Current price determination and pricing practices at the farm level could hinder farmers from getting the right price. At the farm level, oligopolistic pricing proved disadvantageous to the farmers: they tended to be mere price takers and did not contribute to the price formation of the copra they sold. The farmers' survey responses indicated that the buying prices of copra at the farm level were based on the prevailing price (79%), which was usually set by millers and traders. About 89% of the farmers indicated that the buyers set the price, while only 11% stated that the farmer and buyer negotiated the price together. Notably, the correlation analysis between the price received by farmers (Pcf) and selected marketing factors showed that Pcf was negatively associated with who sets the price. This indicates that Pcf is higher (PhP 17.97/kg) when both the farmer and the buyer negotiate the price and lower (PhP 17.37/kg) when only the buyer sets the price. Farmers can negotiate especially when they have volume; for example, a farmer with 10 t of copra can negotiate a price PhP 0.50-0.80/kg higher than the "on-the-spot" price.
The degree of integration between market levels was affected by low copra production at the farm level, difference in copra quality being traded, a long chain of intermediaries in the market structure, bottlenecks in transportation and infrastructure facilities, and oligopolistic pricing practices. Recommendations In view of the factors that contributed to the level of market integration, the following are recommended for the improvement of the Philippine copra markets. Increase Coconut Production/Productivity The level of self-sufficiency in production was conspicuous as results of the study pointed to its positive influence to market integration and efficiency. For the flow of price information to be efficient, coconut production should be immensely improved to meet the current and emerging demands in the coconut industry and to facilitate in raising copra farm gate prices. The study highlighted that although mill gate prices in Luzon regions were higher than those in the Mindanao regions, the farm gate prices of the former were lower than the latter. This is because millers involved in interregional procurement of copra imputed the freight cost in their buying price. Since 90% of the regions are deficit in copra production to meet the demand of millers and processors, helping the coconut regions to be self-sufficient will minimize imports of copra from other regions thus reducing inter-regional transfer cost that the millers had to shoulder to meet their milling requirements. This will also assist to narrow down the mill-farm price differential. Results further showed that improved coconut production was significantly correlated with the type of coconut variety planted and age of palm. Moreover, survey data highlighted that farmers who had planned to cut and convert their coconut plantation had renewed interests in conserving their plantation as they got involved in the production and distribution of new products like virgin coconut oil and coco sap sugar. Hence, programs to increase coconut production and productivity should be fast tracked and should incorporate areas on 1) planting and replanting of available improved and high-yielding coconut varieties; 2) application at the farm level of appropriate technologies on coconut-based farming systems; and 3) creation of demand and promotion of high-value coconut products with farmers as shareholders in the processing of their output. Increase Investment to Improve Market Infrastructure and Facilities The need for improved market infrastructure facilities and specifically farm-tomarket roads was made more evident by the absence of market integration between the coconut farmers and millers/dealers in all regions covered in the study. This area can be tackled with enhanced funding from the Local Government Units (LGUs) and government agencies like the Department of Agriculture (DA), Department of Public Works and Highways (DPWH), and Department of Agrarian Reform (DAR). Improve Pricing System in Copra Trading The study indicated that differences in copra quality as an offshoot of the Pasa approach (system where an automatic deduction in price on copra sale of an amount equal to or greater than 14% is given without moisture content reading), were a significant factor that led to the absence of market integration at the different market levels. But more glaring was the impact of this on the welfare of the coconut farmers. Price at the miller and dealer level was not efficiently transmitted to the farm level. 
This means that policies geared towards improving mill copra prices will not have an effect in raising the farm income while the Pasa system is practiced. Hence, the following recommendations are proposed: Strict implementation and monitoring of copra moisture standards. The mandatory moisture content (MC) reading was generally ignored at the dealer-farmer level during the period of analysis. Moreover, the Pasa system prevailed. The new copra moisture table for copra trading explicitly prohibits the trading of high moisture copra (14%-18% MC) and penalizes very low moisture copra arising from prolonged storage. A monitoring scheme by the PCA that would aid in the strict implementation of the new copra pricing system is a must. Otherwise, it is expected that despite the new copra moisture table, farmers will continue to be burdened with a minimum of 14% discount because copra is generally not rejected by traders and millers, because there is a deficit in copra supply. Employment of neutral operators of moisture meters would also be of immediate benefit to the farmers at the point of sale. Conduct a stakeholders' forum on copra pricing and quality improvement. It could be emphasized that the objective of getting high profit for the dealers and millers may not be compatible with the objective of providing high income for farmers. Hence, a more participatory approach can be facilitated to provide opportunities for the three groups of stakeholders to settle for a compromise in pricing and income so as to harmonize their incongruent objectives; to discuss solutions on how to correct the Pasa system to give incentives to farmers so that they will continue to survive as producers; and to implement drying technologies like the use of kukum dryer to encourage farmers to produce good copra. The noted aflatoxin problem can be easily solved by good drying, but farmers are dissuaded from delivering good and dried copra because of the Pasa system. Since the weight of copra was a main determinant of income, farmers tended to focus more on weight without fully realizing that deductions due to poor quality can reduce their financial gains. As of now, farmers are disadvantaged by having two systems in operation at the fieldthe Pasa and resecada systembecause anti-competitive dealers take advantage of the lax implementation and subject the farmers to tremendous discounts. With this system uncorrected, price information cannot serve as a guide to policy makers. However, with a uniform standard of copra quality traded, policies geared towards improving the mill copra price will have a positive effect on increasing both the dealers' and farmers' prices. Moreover, the need to adopt improved drying technology at the farm level should go hand in hand with a pricing method that can reward the quality improvement, serving as an incentive for farmers. Related to this, the PCA together with other stakeholders should educate farmers on how to produce good quality copra as well as how to come up with acceptable but more equitable strategies in processing and pricing. Strengthen Existing Coconut Farmers Organizations. One of the major observations of the study was that size, structure, and opportunities available to the key players had a significant bearing on the integration and price efficiency of the markets in the coconut industry. 
It was noted that small coconut farmers comprised about 89% of the coconut stakeholders but they got a miserly 25% of the potential income from coconut between the farm and the export market. On the other hand, the traders and processors who made up only 2.5% of the industry got 26% of the income while the large coconut farm owners which comprise only 8.5% of the industry get 49% of the income (CIIF and in Aragon, 2002). Coconut production and trading in the Philippines has been considered to be unsuitable for industrial use because of the inefficiency of having to deal with thousands of small holders and several layers of domestic traders. Despite this inefficiency, exporters still derive multimillion peso profits. Coconut exporters belong to the top 500 corporations in the Philippines. This can be attributed to the continued efforts of the milling sector and exporters to address areas of inefficiency. However, some innovative strategies can be implemented so that a more equitable benefit could flow down to the farmers. In any proposed developments, farmers should be factored in as partners in production and processing and not just providers of raw materials. Their active involvement would greatly distribute benefits and improve the welfare of farmers. Otherwise there will be a more skewed farm structure to the peril of small farmers. The survey revealed that dealers and millers employed several options or strategies to increase income. At the farm level, the coconut farmer organizations can be an effective vehicle to increase the income of farmers, if properly managed. Some marketing strategies that can be adopted by the farmers' organizations include serving as "co-producer of oil" in toll crushing agreements. This is an opportunity for farmers to increase their incomes since they do not need to be concerned with the volatile price of copra but they can sell a more valuable product -the coconut oil. However, crucial to any change is the efficiency and effective implementation so that benefits will be legitimately and equitably shared. Although results of this study indicated that there was an improvement in the price efficiency since competitive FOB pricing characterized salient marketing points, anticompetitive practices in the marketing of coconut also existed such that benefits did not accrue to the farmers but went to some unscrupulous traders. Hence, the government should not only aggressively implement programs for farmers and the coconut industry but more importantly, it should be morally vigilant in ensuring that checkpoints be in place to discourage anti-competitive practices.
2020-04-30T09:03:22.735Z
2010-04-01T00:00:00.000
{ "year": 2010, "sha1": "b312466a808f675ed88d1b3416668575ae5498b5", "oa_license": "CCBYSA", "oa_url": "https://journal.coconutcommunity.org/index.php/journalicc/article/download/133/120", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "60c80d1694d10076f60fae3b629b7cebf0bb9e02", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
271230671
pes2o/s2orc
v3-fos-license
Creep Failure Characteristics and Mathematical Modeling of High-Density Polyethylene Geomembranes under High Stress Levels

To explore the creep characteristics of geomembrane under different tensile stresses, a series of creep tests were carried out on high-density polyethylene (HDPE) geomembrane specimens. For the interpretation and fitting of the experimental data, refined approximation functions were proposed. Particular attention was paid to the creep failure behavior under high tensile stresses, i.e., 70%, 80%, and 90% of maximum peak stress. To investigate the effects of size on the mechanical response, experiments with two different membrane thicknesses were conducted. The results obtained under high stress levels were compared with creep tests at medium and low stress levels. Depending on load level, different creep characteristics can be distinguished. In the secondary creep state, the creep velocity is higher for higher load levels. In contrast to the medium and low load levels, the geomembrane under high stresses underwent the tertiary creep stage after instantaneous deformation and primary and secondary creep stages. In some tests, it was observed that under very high stress levels, creep velocity does not necessarily follow the expected trend and creep rupture can occur within a short time. For numerical simulation, an improved mathematical model was proposed to reproduce in a unified manner the experimental data of the whole non-linear evolution of creep elongation under different stress levels.

Introduction

High-density polyethylene (HDPE) geomembranes are widely used in many hydraulic and environmental geotechnical applications like, for instance, water reservoirs, leaching ponds, landfills, tunnels, and water canal construction [1][2][3][4]. As barriers against liquid and gas flow, geomembranes are often buried between cushion and protective layers composed of sand or soil in many applications. The material exhibits creep behavior when subjected to long-term loads, such as water pressure, the gravitational force of municipal solid waste, and interfacial friction. Large tensile deformation of geomembranes also tends to occur at local contact areas of concave or convex neighboring underlayers and anchoring expansion joints.

In particular, under high stress levels, creep behavior can lead to the recombination of the intrinsic stress state and the ultimate rupturing of geomembranes [5]. As a consequence of creep rupture, water or waste liquid can pass through the damaged membrane sealing, causing a significant safety risk for the anti-seepage control of geotechnical structures. Although the average maximum design stress for geomembrane structures is much lower than breaking strength, unexpected local conditions can lead to higher stress concentration and, consequently, to a reduction of the lifetime of the geomembrane [6]. Thus, investigating the long-term behavior of geomembranes under higher load levels is of practical importance with regard to the safety of geotechnical construction and the intended operation period.

The raw material of HDPE geomembrane is a high-molecular, semi-crystalline polymer formed by crystalline and amorphous regions [7][8][9][10][11]. The special structure of crystalline and interlayer amorphous chains gives HDPE geomembranes a viscoelastic-plastic material behavior [12]. Under different load levels, the creep stages of polymeric materials display various characteristics. According to Koerner et al. [13], the creep behavior of HDPE geomembranes can be divided into four stages, namely the instantaneous (O-A), primary (A-B), secondary (B-C), and tertiary creep stage (C-D-E), as illustrated in Figure 1. After instantaneous elongation, polymeric materials enter the primary creep stage first. Under low load levels, creep strain tends to stagnate in the secondary creep stage, while under medium load levels, creep strain increases almost linearly with time. Under high load levels, the material enters the tertiary creep stage after maintaining a relatively constant creep rate for a period of time in the secondary stage. Creep strain in the tertiary creep stage increases sharply with time until creep rupture occurs.

Considerable research has been conducted to appraise the results of creep tests of polymer materials and to propose mathematical models to simulate the experimental data [14][15][16]. However, only a few studies focus on creep failure characteristics under high load levels [17].

From Figure 1, it is obvious that for their numerical simulation, the different creep characteristics require appropriate approximation functions. In a simplified manner, elastic spring elements and damper elements are frequently combined to simulate the one-dimensional viscoelastic material behavior [18,19]. Depending on the arrangement of spring and damper elements, particular creep characteristics can be described. However, a more detailed inspection shows that the material parameters involved also depend on the load level within the range of a considered load characteristic [20]. The present paper proposes a refined concept to describe creep behavior for the whole range of loads within a particular load characteristic using a single set of material parameters. Moreover, the size effects observed in the creep experiments are also captured by the improved model.
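The staged behavior sketched in Figure 1 can be illustrated numerically. The following is a minimal sketch, not the authors' model: it composes an instantaneous jump, a decaying primary term, a linear secondary term, and an accelerating tertiary term into one synthetic creep curve. All parameter values are invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

def synthetic_creep(t, eps0=2.0, a=1.5, tau=5.0, v=0.05, c=0.8, t_rup=100.0):
    """Illustrative four-stage creep curve (strain in %, time in h).

    eps0   : instantaneous elongation (stage O-A)
    a, tau : primary creep amplitude and retardation time (A-B)
    v      : near-constant secondary creep rate (B-C)
    c      : tertiary term amplitude; diverges as t -> t_rup (C-D-E)
    """
    primary = a * (1.0 - np.exp(-t / tau))
    secondary = v * t
    tertiary = c * (t / t_rup) / (1.0 - t / t_rup)  # blows up near rupture
    return eps0 + primary + secondary + tertiary

t = np.linspace(0.0, 99.0, 500)
plt.plot(t, synthetic_creep(t))
plt.xlabel("time (h)")
plt.ylabel("creep strain (%)")
plt.title("Synthetic four-stage creep curve (illustrative only)")
plt.show()
```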
The paper is organized as follows: Section 2 deals with the experimental investigations of the elongation of HDPE sheets with two different thicknesses under plane stress conditions. In particular, the stress-strain behavior and peak strength of HDPE sheets with an initial size of 100 mm × 50 mm and sheet thicknesses of 0.5 mm and 1.5 mm carried out in strain-controlled tensile tests are shown in Section 2.1. The creep test equipment and the preparation of geomembrane specimens are described in Section 2.2. The results of the creep tests under high load levels of 70%, 80%, and 90% are outlined in Section 2.3. Particular attention was paid to the different stages of the non-linear evolution of creep strain. The relationship between critical creep time, failure time, and load level is discussed in depth. The experimental results obtained under high stress levels are compared with the results from low and medium load levels in Section 2.4. In Section 3, improved approximation functions are proposed to model creep behavior in a unified manner up to breaking elongation. Conclusions are given in Section 4.

Tensile Tests

In order to explore the one-dimensional tensile behavior of the HDPE geomembrane used, displacement-controlled tensile tests with a UTM4503 tensile tester were carried out on specimens with an initial length of 100 mm, a width of 50 mm, and two different thicknesses. The HDPE geomembrane material used in the laboratory experiments was manufactured by Material Co., Ltd. in Dezhou, Shandong, China. The tests were conducted with a constant displacement velocity of 0.334 mm/s. For both specimen thicknesses, namely 0.5 mm and 1.5 mm, the stress-elongation relations are shown in Figure 2, and the quantities of peak strength, breaking strength, and corresponding elongations are summarized in Table 1. For the representation of the test results, nominal stresses and strains are considered.

It is obvious that after the stress peak, the material exhibits strain softening and subsequently strain hardening up to breaking strength. Breaking strength is only a little lower than peak strength. There is a certain difference in peak strength and breaking strength for the specimen thicknesses of 0.5 mm and 1.5 mm. The results of repeated tensile tests showed similar differences lying within a range of less than 10%. For the thicker specimen, peak elongation is a factor of 1.36, and breaking elongation is a factor of 1.21 larger than for the thinner specimen. This indicates that the thicker membrane behaves slightly more compliantly under stretching. These differences may be explained by the fact that the quality of the manufactured geomembrane material is not perfectly even, and in the softening regime, the behavior is strongly influenced by inhomogeneous deformation and local plastification. It is worth noting that for a rate-dependent material, the stress-strain relation is also influenced by the prescribed loading velocity. Moreover, under higher loads, the phenomenon of significant necking and the development of crazing areas locally leads to higher stresses. Thus, the value of such local stress concentrations can be much higher than the computed nominal stress shown in Figure 2.

Creep Test Equipment and Preparation of Geomembrane Specimens

For creep tests on HDPE smooth geomembrane specimens with an initial length of 100 mm and an initial width of 50 mm, a dedicated test apparatus was developed at Hohai University, as shown in Figure 3. It is equipped with three different devices: the loading device, the clamping device, and the deformation measurement device. The geomembrane specimen is installed horizontally by a fixed clamp and a movable clamp. In order to avoid the influence of gravity, the clamps are mounted horizontally. The movable clamp can slide freely on the horizontal rail. The loading plate is connected to the movable clamp by a steel cable which is guided over a pulley. In each test, the weight was placed on the trays in one step and kept constant during the whole geomembrane creep test. The data acquisition device is composed of a WFS displacement sensor with a resolution of 0.1 mm and an acquisition device. The WFS displacement sensor is a product of Suzhou Fangyi Electric Co., Ltd., Suzhou, China. The acquisition device records measurements at intervals of 1 s. A personal computer is used to store the test data.

Creep Tests under Three Different High Stress Levels

To investigate the whole evolution of creep behavior under high stress levels, tests were conducted under constant load levels of 70%, 80%, and 90% of peak strength. In particular, the load level is defined as the ratio of the creep stress to the peak stress. The reference peak strength was taken from the tensile tests outlined in the previous subsection.
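Since all results are reported as nominal quantities, the conversion from raw readings is elementary. The sketch below uses invented force and elongation values, not the paper's data, to show the nominal stress, nominal strain, and load-level definitions at work.

```python
def nominal_stress_mpa(force_n, width_mm=50.0, thickness_mm=0.5):
    """Nominal (engineering) stress: force over the undeformed cross-section."""
    return force_n / (width_mm * thickness_mm)  # N/mm^2 == MPa

def nominal_strain_pct(delta_l_mm, l0_mm=100.0):
    """Nominal (engineering) strain relative to the initial gauge length."""
    return 100.0 * delta_l_mm / l0_mm

# Hypothetical readings for illustration only:
peak_stress = nominal_stress_mpa(force_n=450.0)   # assumed peak force
creep_stress = 0.8 * peak_stress                  # weight chosen for an 80% test
load_level = creep_stress / peak_stress           # the definition used in the paper
print(f"peak {peak_stress:.1f} MPa, creep {creep_stress:.1f} MPa, level {load_level:.0%}")
print(f"strain at 30 mm elongation: {nominal_strain_pct(30.0):.0f} %")
```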
The creep phenomenon of the geomembrane under the three different high load levels is similar for both specimen thicknesses, 0.5 mm and 1.5 mm. A visual inspection shows that the deformation of the geomembrane is inhomogeneous from the beginning of loading, caused by the rigid fixation of the ends of the specimen within the clamps. In particular, in a zone in the middle of the specimen, the width becomes smaller with continued creep elongation. Figure 4 shows that for an applied load level of 90% of peak load, the phenomenon of necking already becomes dominant when specimen elongation exceeds about 30%.

The appearance of whitening areas is an inherent phenomenon before the creep rupture of high polymer material, as reported by several authors [21][22][23][24][25]. In this experiment, it was observed that the whitening area on the surface of the geomembrane (Figure 5) was distributed with different sizes of crazing areas along the length of the specimen. In this context, crazing areas denote whitening areas with local micro-crack initiations. When geomembrane creep strain exceeded about 90%, the crazing areas continued to fracture, manifested macroscopically as pronounced cracks. Cracks starting near internal micro voids continued to be stretched and evolved into new crazing areas. The process occurred repeatedly, and ultimately, the cracks were interconnected, resulting in macroscopic creep rupture.

The creep behavior of HDPE geomembrane specimens under three different high load levels is shown in Figure 6 for a specimen thickness of 0.5 mm and in Figure 7 for a specimen thickness of 1.5 mm. For all three investigated load levels, the creep characteristic is qualitatively similar, but the duration until creep rupture takes place is different. Each creep curve has gone through four characteristic stages, namely instantaneous deformation (section O-A), primary creep stage (section A-B), secondary creep stage (section B-C), and tertiary creep stage (section C-E). The tertiary creep stage includes the creep transition stage (section C-D) and the rapid creep growth stage (section D-E).

Figure 8 shows the creep time and creep strain before the tertiary creep stage and in the tertiary creep stage. It is obvious that for higher load levels, creep rupture occurs significantly earlier. The creep strain at creep failure only shows moderate fluctuation but differs for different membrane thicknesses. In the third stage, namely the tertiary creep stage, creep time gradually decreases with an increase in load level. The dash-dotted lines and dotted lines denote the creep time ratio and the creep strain ratio, respectively. In particular, the creep time ratio is defined as the ratio of creep time to the time at the breaking state, and the creep strain ratio is defined as the ratio of creep strain to strain at the breaking state. Under 70%, 80%, and 90% load levels, the creep strain ratios of a 0.5 mm thick geomembrane in the tertiary stage are 90.9%, 90.3%, and 84.9%, and the creep time ratios are 49.0%, 49.5%, and 49.6%, respectively. For a 1.5 mm thick geomembrane, the creep strain ratios in the tertiary stage are 91.2%, 90.0%, and 92.0%, and the creep time ratios are 60.7%, 69.4%, and 41.9%, respectively. Therefore, the tertiary creep stage under high load levels occupies a dominant part of the whole creep process.

From Figures 6 and 7, it is clear that immediately after instantaneous deformation (section O-A), time-dependent creep strain develops in a non-linear manner until creep rupture takes place. In the primary creep stage (section A-B), the curve is concave, and at the turning point B, it assumes a convex shape, indicating an increase in creep velocity. In particular, within the secondary creep stage (section B-C), creep velocity is almost constant and can be approximated in a simplified manner by the red dotted line shown in Figures 6 and 7. The course of the rapid creep growth stage (section D-E) can also be approximated by a straight line. It is obvious that for all load levels, the inclination of the second line (D-E) is much steeper than that of the secondary creep stage (section B-C). Following the concept by Liu [26] and other scholars, the intersection of the extended lines is defined as the "critical creep point" (p_cr), and the corresponding time (t_cr) and strain (ε_cr) denote the "critical creep time" and "critical creep strain", respectively. The corresponding values under different load levels and for the 0.5 mm and 1.5 mm thick membranes are summarized in Table 2.

For a higher load level, the critical strain is higher and the critical time is lower. The values are different for different specimen thicknesses, which indicates a certain size effect of the deformation behavior under high load levels. With the exception of the experimental data obtained for the 1.5 mm thick specimen under the load level of 90%, the values of creep time shorten approximately by a factor of 10 for every 10% rise in load level. Under different high load levels, the time (t_f) when the creep strain of the geomembrane reaches creep failure is approximately 1.07~1.13 times the critical creep time (t_cr). However, the characteristic values obtained for the particular specimen thickness of 1.5 mm under the load level of 90% are far from the trend of the other experimental results. Such behavior can be explained by local inhomogeneities leading to the unstable evolution of microstructure effects, which are typical for materials with strain softening. As shown in Figure 2, strain softening is relevant for the HDPE material used, and thus, it can be concluded that for a very high load level, the stress-strain relation is no longer unique.
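The bilinear construction behind Table 2 reduces to intersecting two fitted straight lines. The following is a minimal sketch under that reading; the helper function and the data values are invented for illustration.

```python
import numpy as np

def critical_creep_point(t_sec, eps_sec, t_rapid, eps_rapid):
    """Intersect the secondary-stage (B-C) and rapid-growth (D-E) fit lines.

    Each stage is approximated by a least-squares line eps = m*t + b;
    the critical creep point is where the two lines cross.
    """
    m1, b1 = np.polyfit(t_sec, eps_sec, 1)      # undercritical line (B-C)
    m2, b2 = np.polyfit(t_rapid, eps_rapid, 1)  # overcritical line (D-E)
    t_cr = (b2 - b1) / (m1 - m2)
    eps_cr = m1 * t_cr + b1
    return t_cr, eps_cr

# Invented readings (time in h, strain in %) purely for illustration:
t_cr, eps_cr = critical_creep_point(
    t_sec=np.array([2.0, 4.0, 6.0, 8.0]), eps_sec=np.array([10.0, 10.6, 11.2, 11.8]),
    t_rapid=np.array([14.0, 15.0, 16.0]), eps_rapid=np.array([30.0, 42.0, 54.0]),
)
print(f"critical creep point: t_cr = {t_cr:.2f} h, eps_cr = {eps_cr:.2f} %")
```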
In Figures 6 and 7, the bilinear approximation of the creep curves allows a simplified distinction between undercritical and overcritical creep behavior for practical purposes. More precisely, undercritical creep behavior is approximated by the connecting line of stages (B-C) and overcritical creep behavior by the connection between stages (D-E). The inclination of the lines estimates a measure of creep velocity in these two stages, as outlined in Table 3. Compared with the undercritical creep velocity, the overcritical creep velocity is much higher, especially at higher load levels. The values for the 1.5 mm thick specimen under the load level of 90% are out of the expected range as a result of the inhomogeneous evolution of the microstructure, as previously discussed.

From Table 2, it can be concluded that critical creep time, t_cr, shows a significant downward trend with an increase in load level. The data can be fitted using the following power function:

t_cr = a·(σp)^b (1)

Here, σp is the load level, and a and b are the fitting parameters. In particular, for the initial geomembrane thicknesses of 0.5 mm and 1.5 mm, the fitted curves are shown in Figures 9a and 9b, respectively. For the different geomembrane thicknesses, the values of parameter a are different, which again indicates a size effect, as discussed above.
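As a sketch of how such a power-law fit can be obtained in practice: the (load level, critical creep time) pairs below are invented stand-ins, chosen only to mimic the roughly decade-per-10% trend described above; the paper's actual values are in Table 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(sigma_p, a, b):
    """Critical creep time as a power function of load level: t_cr = a * sigma_p**b."""
    return a * sigma_p**b

# Invented (load level, critical creep time in h) pairs for illustration:
levels = np.array([0.70, 0.80, 0.90])
t_cr = np.array([250.0, 25.0, 2.5])  # roughly one decade per 10% load increase

(a, b), _ = curve_fit(power_law, levels, t_cr, p0=(0.5, -18.0))
print(f"fit: t_cr ~ {a:.3g} * sigma_p**{b:.2f}")
```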
Comparison of Creep Curves under Low, Medium, and High Load Levels

The creep curves for 0.5 mm thick HDPE geomembranes obtained under low and medium load levels of 10%, 20%, 30%, 40%, 50%, and 60% are shown in Figure 10. Each of the creep curves only experiences instantaneous deformation, the primary creep stage, and the secondary creep stage, but not the tertiary creep stage. All curves exhibit a concave shape with decreasing creep velocity over time. It can, therefore, be concluded that under ordinary temperature, the creep characteristics of geomembrane typical of tertiary creep stages can only be observed under very high load levels. The experimental results for a 0.5 mm thick HDPE geomembrane reveal that under low load levels of up to 40%, creep strain after 100 h is already less than 3%, and it can thus be expected that the geomembrane will not enter the tertiary stage in a finite period of time and creep rupture will not occur within the usual lifetime of geotechnical structures.

The geomembrane under medium load levels, i.e., between 50% and 60%, displays an almost constant rate in the secondary creep stage. In order to analyze the evolution of the creep rate under low, medium, and high load levels, it is convenient to construct a Sherby-Dorn plot, as shown in Figure 11, for the 0.5 mm thick geomembrane. Independent of load level, the creep rate significantly decreases in the primary creep stage. In the secondary creep stage, the decrease in creep rate tends almost to zero under low load levels. Under high load levels, the creep rate in the secondary stage increases slightly, and in the tertiary creep stage, it increases until creep rupture takes place.

Under different load levels, a comparison of creep rates in the secondary creep stage and the rapid creep growth stage is shown in Figures 12a and 12b, respectively. The creep rate in the secondary creep stage increases slowly when the load level is less than 70%, but it increases rapidly when the load level is higher than 70%. Under high load levels of 70%, 80%, and 90%, the creep rate in the secondary creep stage increases with increasing load levels. The creep rate in the rapid creep growth stage increases significantly with increasing load level, and the creep rate is much greater than that in the secondary creep stage under the same load level.
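A Sherby-Dorn plot is simply the logarithm of the creep rate plotted against creep strain. A minimal sketch, assuming measured time and strain arrays (the sampled curve below is hypothetical), is:

```python
import numpy as np
import matplotlib.pyplot as plt

def sherby_dorn(t, eps):
    """Return (strain, creep rate) pairs for a Sherby-Dorn plot.

    t   : time stamps (h), monotonically increasing
    eps : creep strain (%) sampled at those times
    """
    rate = np.gradient(eps, t)  # numerical d(eps)/dt
    return eps, rate

# Hypothetical sampled creep curve for illustration only:
t = np.linspace(0.1, 50.0, 200)
eps = 2.0 + 1.5 * (1 - np.exp(-t / 5.0)) + 0.05 * t
strain, rate = sherby_dorn(t, eps)

plt.semilogy(strain, rate)  # log creep rate versus strain
plt.xlabel("creep strain (%)")
plt.ylabel("creep rate (%/h)")
plt.show()
```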
Modelling of Creep with Respect to High Load Levels

It was shown by several authors that for low and medium load levels, the course of the creep processes up to the secondary creep stage can be well approximated using the following four-element viscoelastic model [19,20], namely

ε(t) = σ0/E1 + (σ0/E2)·(1 − exp(−E2·t/η1)) + σ0·t/η2 (2)

where ε is the creep strain, σ0 is the constant creep stress, E1 is the elastic modulus, and E2, η1, and η2 are the material parameters. The first term is related to the instantaneous elongation at time t = 0. With an increase in time, Equation (2) describes an unlimited increase in creep strain, and for t→∞, the creep rate is σ0/η2. The calibration carried out showed that the values of E2, η1, and η2 strongly depend on the load level and the thickness of the geomembrane. The values of the parameters obtained from the calibration to the individual creep curves show a clear trend. In particular, E2 increases, and η1 and η2 decrease with an increase in load level and the thickness of the geomembrane. This observation gives reason to introduce an appropriate fitting function for each material parameter. It was found that for low load levels up to 40%, a quadratic fitting function, and for load levels 40% < L ≤ 60%, a cubic function can well capture the particular load ranges. The fitting parameters for different load ranges and different membrane thicknesses are summarized in Table 4. Here, L denotes the load level.

Figure 13 shows the fitting of creep curves under different load levels using the four-element viscoelastic model (2) and the material parameters of Table 4. It is obvious that the mathematical model (2) can capture rather well the instantaneous, primary, and secondary creep stages. For higher load levels, however, the description of the tertiary creep stage up to the breaking state requires an extension of the four-element model (2). To this end, a term similar to the one proposed by Segard et al. [27] is added. In the original term (Equation (9)), a, b, and c are the material parameters, and the standardized time t_N is defined as the ratio of creep time to the time when creep fracture occurs, with a range 0 < t_N < 1. The curve described by Equation (9) is flat when t_N is small but rises rapidly when t_N tends to 1. With respect to the experimental data from the present research, it was found that a better adaptation of the creep curve can be obtained when relation (9) is made proportional to the stress level σ0 and when the standardized time t_N is replaced by

t_s = t/(1.15·t_cr) (10)

Here, t_s is a dimensionless quantity depending on the current time (t) and the critical creep time (t_cr). The factor 1.15 in the denominator of relation (10) is chosen a little higher than the maximum failure time of the geomembrane, which is approximately 1.07~1.13 times the critical creep time (t_cr), as shown in Table 2. The improved expression for the additional term (Equation (11)) then involves the creep strain ε and the material parameters Ψ, m, and n. By adding the revised relation (11) to the classical four-element model (2), a five-element viscoelastic model relevant for high load levels is obtained (Equation (12)). The corresponding material parameters can be obtained by appropriate approximation functions in a similar manner as shown for the low and medium load levels. In particular, the fitting of the approximation functions is based on the three experiments carried out under the load levels of 70%, 80%, and 90% of the maximum stress peak. For geomembrane thicknesses of 0.5 mm and 1.5 mm and for load levels of 70% ≤ L ≤ 90%, the corresponding material parameters are summarized in Table 5.
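Under the Burgers-type reading of Equation (2) given above (a reconstruction from the stated asymptotics, not a transcription of the paper's typeset formula), calibrating E1, E2, η1, and η2 to a measured creep curve is a standard least-squares problem. All numerical values below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

SIGMA0 = 10.0  # constant creep stress (MPa); hypothetical value

def four_element(t, E1, E2, eta1, eta2):
    """Burgers-type four-element creep strain under constant stress SIGMA0.

    Instantaneous term + Kelvin (delayed elastic) term + viscous flow term;
    for t -> infinity the creep rate tends to SIGMA0/eta2.
    """
    return (SIGMA0 / E1
            + (SIGMA0 / E2) * (1.0 - np.exp(-E2 * t / eta1))
            + SIGMA0 * t / eta2)

# Hypothetical "measured" creep curve (time in h, strain dimensionless):
t = np.linspace(0.0, 100.0, 50)
eps_meas = four_element(t, 120.0, 60.0, 300.0, 8000.0) + np.random.normal(0, 1e-3, t.size)

params, _ = curve_fit(four_element, t, eps_meas, p0=(100.0, 50.0, 200.0, 5000.0))
print(dict(zip(["E1", "E2", "eta1", "eta2"], np.round(params, 1))))
```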
Figure 14 shows the course of the parameters depending on the load level. Figures 15 and 16 show the comparison between the experimental data and the curves obtained with the numerical model. The simulations with the extended model (12) are in good agreement with the experimental creep curves, indicating that the improved mathematical model can reasonably reflect the whole creep behavior of the geomembrane under high load levels. In the logarithmic time scale, the model curves and test curves under different high load levels display the characteristics of the initial period of flatness and the rapid rise in strain in the tertiary stage up to creep rupture. Thus, for high load levels, the improved mathematical description (12) can capture the initially slow creep process as well as the non-linear rapid increase in creep strain after passing the secondary stage.

Conclusions

In this study, the creep behavior of HDPE geomembrane specimens was investigated in tension tests and creep tests under different constant load levels. Particular attention was paid to creep failure behavior under high load levels, namely under 70%, 80%, and 90% of maximum peak stress. To investigate the effects of size on the mechanical response, experiments with two different membrane thicknesses were conducted. A refined mathematical model was proposed to simulate the whole process of different creep characteristics under low, medium, and high load levels. With respect to the assumption of nominal stresses and engineering strains, the following main conclusions can be drawn:

1. Displacement-controlled tensile tests under constant elongation velocity show that after the stress peak, the material undergoes strain softening and subsequently strain hardening up to the breaking state.

2. For different membrane thicknesses, the stress-strain curves are slightly different. Such a size effect can be explained by the inhomogeneous evolution of the microstructure of the material, particularly when the local necking of the membrane specimen becomes dominant.

3. The creep tests carried out show that the creep characteristic is strongly dependent on the applied load level. Under high load levels, the geomembrane experienced the tertiary creep stage, which did not occur under low and medium load levels. From the Sherby-Dorn plot, it can be concluded that the creep rate reaches its minimum value in the secondary creep stage and increases rapidly in the tertiary creep stage. The creep rate of the rapid creep growth stage is much greater than that in the secondary creep stage. The creep strain in the tertiary creep stage accounted for more than 80% of the strain at creep rupture. For higher load levels, the so-called critical creep time related to a bilinear approximation is lower. It was found that for very high load levels, the critical creep time and the failure time do not necessarily follow the expected trend. Therefore, in creep tests, significant size effects can also be detected under higher load levels.

4. For low, medium, and high load levels, refined fitting functions are proposed, which permit the simulation of the individual creep characteristics within the whole range of particular load levels.

Figure and table captions:
Figure 1. Schematic plot of typical time-dependent elongation under (I) low, (II) medium, and (III) high load levels.
Figure 3. The creep test equipment.
Figure 5. Whitening areas on a detail of the geomembrane surface before creep rupture occurs.
Figure 8. Creep time and creep strain depending on the applied load level for the initial membrane thickness of (a,b) 0.5 mm and (c,d) 1.5 mm.
Figure 9. Relation between critical creep time and load level for the geomembrane thickness of (a) 0.5 mm and (b) 1.5 mm.
Figure 10. Creep curves of a 0.5 mm thick HDPE geomembrane at low and medium load levels.
Figure 11. Creep rate changes for a 0.5 mm thick HDPE geomembrane under different load levels.
Figure 12. Creep rate values for a 0.5 mm thick geomembrane under different load levels: (a) secondary creep stage; (b) tertiary creep stage.
Figure 13. Creep curves under low and medium load levels for element thickness of (a) 0.5 mm and (b) 1.5 mm. (Shapes are experimental data, and solid curves are obtained from the four-element viscoelastic model.)
Table 2. Characteristic creep values under high load levels for 0.5 mm thick and 1.5 mm thick HDPE geomembranes.
Table 3. Creep velocity in stages B-C and stages D-E for 0.5 mm thick and 1.5 mm thick HDPE geomembranes.
Table 4. Creep parameters of geomembranes under load levels L in the range of 10% ≤ L ≤ 60%.
2024-07-17T15:19:20.986Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "6245d9ce031a9d6de629b721d8101832c1c16b75", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "ccb25a037e4d88857aa2ad8ad5787cefc36c9d11", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
14827240
pes2o/s2orc
v3-fos-license
Human stanniocalcin-2 exhibits potent growth-suppressive properties in transgenic mice independently of growth hormone and IGFs.

Stanniocalcin (STC)-2 was discovered by its primary amino acid sequence identity to the hormone STC-1. The function of STC-2 has not been examined; thus we generated two lines of transgenic mice overexpressing human (h)STC-2 to gain insight into its potential functions through identification of overt phenotypes. Analysis of mouse Stc2 gene expression indicates that, unlike Stc1, it is not highly expressed during development but exhibits overlapping expression with Stc1 in adult mice, with heart and skeletal muscle exhibiting highest steady-state levels of Stc2 mRNA. Constitutive overexpression of hSTC-2 resulted in pre- and postnatal growth restriction as early as embryonic day 12.5, progressing such that mature hSTC-2-transgenic mice are approximately 45% smaller than wild-type littermates. hSTC-2 overexpression is sometimes lethal; we observed 26-34% neonatal morbidity without obvious dysmorphology. hSTC-2-induced growth retardation is associated with developmental delay, most notably cranial suture formation. Organ allometry studies show that hSTC-2-induced dwarfism is associated with testicular organomegaly and a significant reduction in skeletal muscle mass likely contributing to the dwarf phenotype. hSTC-2-transgenic mice are also hyperphagic, but this does not result in obesity. Serum Ca2+ and PO4 were unchanged in hSTC-2-transgenic mice, although STC-1 can regulate intra- and extracellular Ca2+ in mammals. Interestingly, severe growth retardation induced by hSTC-2 is not associated with a decrease in GH or IGF expression. Consequently, similar to STC-1, STC-2 can act as a potent growth inhibitor and reduce intramembranous and endochondral bone development and skeletal muscle growth, implying that these tissues are specific physiological targets of stanniocalcins.

STC-2 was initially identified as a stanniocalcin by virtue of its 50% identity and 73% amino acid homology to a stretch of 76 amino acids located between positions 24 and 101 of human (h)STC-1 (8,12,18,32). hSTC-2 amino acid sequence downstream of position 101 shows less identity (23%) to hSTC-1, and it is 45 amino acids larger even though the genes encoding these proteins have identical intron/exon junctions (18). Unlike with STC-1, studies examining the function of STC-2 have not been reported. It is tempting to assume that the function(s) of STC-1 and STC-2 overlaps because of the similarity in primary amino acid sequence and conservation of the cysteine residues found in hSTC-1, such that hSTC-2 probably exists in the native state as a disulfide-linked dimer, as does hSTC-1 (32). However, there are distinct differences between these proteins, including the fact that STC-2 is 55 amino acids larger, most of which is present in the form of a histidine-rich COOH-terminal region (32).

With regard to expression patterns in the mouse, STC-1 appears ubiquitous but is most highly expressed during development and in the adult ovary, with significantly lower levels in other adult tissues (7,50). The mouse (m)STC-2 expression pattern is unclear because different reports do not agree on STC-2 mRNA size (8,18). Our preliminary studies indicate that, unlike STC-1, STC-2 is not detectable in mouse embryo tissue RNA by Northern blotting.
Collectively, these data suggest that the biological role(s) of STC-2 in mammals may differ from that of STC-1, and this is further supported by the fact that STC-2 is unable to displace STC-1 from its putative receptor (24,29).

It is clear that STC-1 functions as an anti-hypercalcemic hormone in fish (51). A number of studies have focused on where and when mammalian STC-1 is produced and make inferences regarding its function on the basis of localization data, but relatively few studies have directly assessed the function of STC-1 (7). There is good evidence for a role for STC-1 in mammalian mineral metabolism because it can significantly decrease renal phosphate excretion (37,52), decrease intestinal Ca2+ uptake, and concomitantly increase PO4 reabsorption (26) and result in significantly higher serum PO4 levels in hSTC-1 gain-of-function transgenic mice (49). There are also data supporting a role for STC-1 in the control of intracellular Ca2+ in rat cardiomyocytes (41) and neuronal cells (61). These studies suggest that the action of STC-1 in directly regulating mammalian Ca2+ levels may lie primarily on intracellular pools rather than systemic Ca2+ regulation.

We and others have attempted to gain further insight into the function(s) of STC-1 through the generation of gain-of-function transgenic mice (13,49). It was anticipated that overexpression of STC-1 would result in a deleterious phenotype(s) that would point to specific organ systems particularly sensitive to STC-1 signaling and therein provide the basis for further study of STC-1 action in a specific cell type. This approach was predicated on the fact that STC-1 is not detectable in serum, other than during pregnancy in the mouse (11). Our studies (49) indicated that ubiquitous overexpression of hSTC-1 results in permanent and severe postnatal dwarfism, with transgenic mice achieving body weights 30-45% less than their wild-type counterparts. Using a similar hSTC-1-transgenic mouse model, others (13) observed a less severe dwarf phenotype along with decreased bone formation and somewhat altered bone structure. Congruent with a role for STC-1 in bone formation, a recent study (59) showed that it can stimulate the differentiation of rat calvaria cell cultures, implying that STC-1 is an autocrine/paracrine modulator of osteoblast development. These studies strongly suggest that STC-1 signaling can affect multiple organ systems and that it is a regulator of organ development with potent growth inhibitory properties when present in excess.

Because it is increasingly clear that STC-1 plays important physiological roles in mammals, we postulate that a related protein, STC-2, has an equally important function(s) in mammals. Given the distinct biochemical and gene expression pattern differences between STC-1 and STC-2, we chose mouse transgenesis to determine whether STC-2 could regulate physiological pathways separate from those controlled by STC-1 and thus produce unique phenotypes. Our data indicate that STC-2 mRNA levels are markedly lower than those of STC-1 during mouse development and that it is most highly expressed in heart followed by skeletal muscle, uterus, and prostate. We have generated transgenic mice that ectopically produce hSTC-2 early in development, when it is not normally expressed, and throughout adulthood. Here, we describe that STC-2 can cause early intrauterine growth restriction, developmental delay, and severe postnatal growth retardation with disproportional organ growth.
This growth inhibition appears to be independent of the growth hormone and IGF-I and -II growth regulatory pathway. These results are the first to show that STC-2 can, in an overexpression model, induce physiological effects similar to STC-1, resulting in similar but more severe phenotypes.
MATERIALS AND METHODS
Generation of hSTC-2-transgenic mice. The hSTC-2 transgene was constructed by ligation of a 1,247-bp hSTC-2 cDNA fragment from expressed sequence tag (EST) H98185 (Genome Systems, St. Louis, MO) encompassing the 909-bp coding sequence into the XhoI site of the previously described pCAGGS expression vector (34). Expression of the transgene is controlled by the bipartite promoter, consisting of the 384-bp cytomegalovirus immediate early (IE) enhancer fused to the 284-bp chicken β-actin promoter contiguous with 83 bp of the chicken β-actin 5′-untranslated sequence (UTR), followed by the 917-bp chicken β-actin intron 1. The XhoI cloning site is located in the rabbit β-globin 3′-UTR of the expression cassette. The expected size of the transgene-derived hSTC-2 transcript is ~1.8 kb, and it is easily distinguishable by Northern analysis from the ~4-kb primary endogenous mSTC-2 mRNA. The pCAGGS/hSTC-2-transgenic mice were generated in the Transgenic and Gene Targeting Facility of the London Regional Cancer Centre by microinjection of purified transgene DNA into the pronuclei of fertilized C57BL/6 × CBA oocytes, as previously described (4). Transgenic founder mice were identified and genotyped by dot-blot hybridization with genomic DNA from tail biopsies and a radiolabeled 634-bp PstI/StyI hSTC-2 cDNA fragment encompassing 617 bp of the coding sequence and 17 bp of 3′-UTR (49). Southern blot analysis was performed with hSTC-2-transgenic mouse genomic DNA isolated from adult kidney to confirm intact integration of the transgene. hSTC-2- and hSTC-1-transgenic mouse lines were maintained on a C57BL/6 × CBA background. All studies were performed with mice hemizygous for the pCAGGS/hSTC-2 transgene. Mice were housed and used in accordance with protocols approved by the University Council on Animal Care at the University of Western Ontario. Serum hSTC-2 determination and blood chemistry. Antibodies to hSTC-2 were prepared in rabbits after three monthly immunizations, each of 200 µg of Chinese hamster ovary cell-expressed recombinant hSTC-2 dissolved in Freund's adjuvant in saline (1:1, vol/vol). Whole blood was taken from transgenic and wild-type mice immediately after CO2 asphyxiation by opening the abdomen and collecting blood, using a 25-gauge needle, from the caudal vena cava. Blood was allowed to coagulate at room temperature, and serum was collected as the supernatant from two consecutive 20-min centrifugations at 15,000 g and 18°C. Mouse serum hSTC-2 was characterized by separating 3 µl of a 1:10 dilution or 1 µl of serum proteins by SDS-PAGE in 12% gels. These gels were Western blotted with the use of a 1:5,000 dilution of rabbit anti-hSTC-2 antiserum raised by Veterinary Services, University of Western Ontario, against recombinant hSTC-2 provided by Human Genome Sciences (Rockville, MD) (60). The apparent molecular weight of transgene-derived hSTC-2 was compared with hSTC-2 produced by the MCF-7 human breast carcinoma cell line by Western blotting. MCF-7 cells were cultured under serum-free conditions for 3 days, and the conditioned medium was collected and concentrated sixfold with Centricon YM-10 centrifugal filter devices (Millipore, Billerica, MA).
Mouse serum IGF-I was measured with a rat IGF-I RIA (Diagnostic Systems Laboratories, Webster, TX) after acid-ethanol extraction of 50-µl serum samples from 6-wk-old wild-type and hSTC-2-transgenic mice. Serum IGF-I measurements are presented as means ± SE. Blood chemistry determinations (Ca2+, PO4, alkaline phosphatase) were carried out with the Synchron Clinical System CX7 and LX20 (Beckman Coulter, Brea, CA) at the London Health Sciences Center (London, Canada). Northern blot analysis. Mice were CO2 asphyxiated, and tissues were removed and extracted in TRIzol (Life Technologies, Grand Island, NY) for the isolation of total RNA. All RNA samples were subjected to Northern blot analysis using the 32P random primer-labeled 634-bp hSTC-2 cDNA fragment described above (50). Total RNA was pooled from three to five animals or embryos for transgene or endogenous mSTC-2 expression studies. Total RNA, isolated from 5-10 pooled male pituitaries, was blotted and hybridized with a mouse growth hormone (GH) cDNA fragment. IGF-II mRNA levels were assessed by blotting pooled total RNA from three to five embryos at embryonic day (E)14.5, E16.5, and E18.5 and hybridization with a mouse IGF-II cDNA fragment. Pooled liver total RNA from three to five mice aged 5, 10, and 15 days was blotted and hybridized with a rat IGF-I cDNA fragment to determine postnatal IGF-I mRNA levels. To normalize for RNA loading and to determine fold changes in steady-state mRNA levels, blots were hybridized to 18S ribosomal DNA, and the signal was quantified using PhosphorImager and ImageQuant software (Amersham Biosciences, Piscataway, NJ). Major urinary protein analysis. Urine was collected from age-matched hSTC-2-transgenic and wild-type mice, and 2 µl of urine were analyzed by SDS-PAGE in 12% gels, which were then stained with Coomassie blue. Analysis of hSTC-2-transgenic and wild-type mouse weight gain. Growth studies were conducted on each hSTC-2-transgenic line with wild-type littermates as controls. Hemizygous transgenic male mice were bred with wild-type females to generate timed-pregnant females carrying mixed litters of wild-type and transgenic pups. Embryos were harvested at E12.5, E14.5, E16.5, and E18.5 and dissected from the conceptus, and amniotic fluid was removed by blotting on paper towels. Wet weights were recorded, and the embryos were genotyped as described above using a portion of the embryo for DNA isolation. Human STC-1-transgenic embryo weights were determined similarly, except that wild-type females were bred to homozygous hSTC-1-transgenic males from lines 2 and 1A to generate litters of hemizygous transgenic embryos, and wild-type embryo weights were obtained from wild-type crosses. For postnatal growth studies, pups were numbered at birth with a surgical marker and weighed between 8:00 and 10:00 AM on postnatal day (P)1-P17 and on P20, P21, P25, P30, and P45. Toes were clipped on P10, and numbered ear tags were applied on P21 to track the mice. Mice were weaned at 21 days of age and transferred to separate wild-type and transgenic mouse stock cages for the remainder of the study period. For line 314 (L314), 25 transgenic females and 30 transgenic males were followed, and for line 372 (L372), 40 transgenic females and 39 transgenic males were included in the growth study. Seventy wild-type males and an equivalent number of wild-type females were used in this study. The growth study was extended to 80 days for L314 hSTC-2-transgenic mice only.
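The derived growth metrics reported in RESULTS (weight gain per day and percent growth rate) reduce to simple array arithmetic on these daily weights. The following is a minimal sketch of that bookkeeping, using hypothetical weight values that are placeholders rather than data from the study; group comparisons mirror the unpaired t-test analysis described below.

```python
import numpy as np
from scipy import stats

# Hypothetical daily body weights (g) from P1 onward; placeholders only.
wt = np.array([1.4, 1.9, 2.5, 3.2, 4.0, 4.9])  # wild-type littermates
tg = np.array([1.0, 1.3, 1.7, 2.2, 2.8, 3.5])  # hSTC-2 transgenics

# Weight gain per day: today's weight minus the previous day's weight.
wt_gain_per_day = np.diff(wt)
tg_gain_per_day = np.diff(tg)

# Growth rate: percent increase in weight from the first to the last
# day of a growth period.
def growth_rate(weights):
    return 100.0 * (weights[-1] - weights[0]) / weights[0]

# The study used the unpaired Student's t-test with significance
# assumed at P < 0.05.
t_stat, p_value = stats.ttest_ind(wt_gain_per_day, tg_gain_per_day)
print(f"growth rates: wt {growth_rate(wt):.0f}%, tg {growth_rate(tg):.0f}%; "
      f"gain/day t-test P = {p_value:.3f}")
```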
Neonatal morbidity. New litters were carefully monitored from four different matings: hemizygous hSTC-2 matings were established within each transgenic line to produce pregnant transgenic females; wild-type C57BL/6 × CBA females were mated with hemizygous hSTC-2-transgenic males from L314; and hemizygous L314 transgenic females were mated with wild-type C57BL/6 × CBA males. Pregnant females were housed individually and monitored twice daily for the presence of a new litter; subsequently, each litter was monitored to harvest dead pups for genotyping. To determine whether hSTC-2-transgenic neonate morbidity was linked to nursing competition with wild-type neonates, another study was established in which hemizygous transgenic females from both lines and C57BL/6 × CBA females were mated with hemizygous transgenic males from both lines. Pregnant females were housed individually and monitored as described above. When litters were discovered, wild-type pups (~1.4 g) were distinguished from transgenic pups (~1 g) by their weight and eliminated from the litter. The average transgenic litter size was 3.8 for each line, and 25 litters were followed from L372 and 27 from L314. Litters were monitored daily, and dead pups were retrieved for genotyping to confirm transgenic status. Organ allometry. Fourteen-week-old wild-type and L314 hSTC-2-transgenic male mice were CO2 asphyxiated and weighed, and external body dimensions (nose-to-tail tip, nose-to-anus, and anus-to-tail lengths) were determined, followed by the removal of the internal organs for wet weight measurements. Organs were weighed on a Mettler PB303 DeltaRange balance immediately upon dissection, with the exception of the heart and lungs, which were briefly blotted on paper towel to remove excess blood. The remaining viscera were removed, briefly blotted dry, and weighed, and then the remaining carcass was weighed. Muscle weights of 12-wk-old animals were determined by weighing the anterior and posterior muscle groups of the hindlimbs and combining the weights of the left and right leg muscles. Tail weight for each animal was also determined after dissection from the body at the base of the tail. Organ and muscle weights were normalized to the intact body weight of the mouse and expressed as a relative percentage of body weight. This was done because a comparison of raw measurements between wild-type and transgenic mice was not feasible, given that the transgenic animals were dwarves. For the weight-matched wild-type mouse organ analysis, 26-day-old wild-type males were used because their body weight most closely matched that of 14-wk-old hSTC-2-transgenic males. Organs were dissected and weighed as described above. The dry weight of testes harvested from hSTC-2-transgenic and age-matched wild-type animals was obtained by placing pairs of testes from each animal in glass drying dishes and first obtaining wet weight. Testes were then incubated at 65°C until weights were constant (3 days), indicating no further detectable loss of water. To obtain an indication of the relative mass of the skeleton, 14-wk-old hSTC-2-transgenic males and females from L314 and age-matched wild-type animals were subjected to dual-energy X-ray absorptiometry (DEXA) using a PIXI-mus Small Animal Densitometer (Lunar, Madison, WI). This analysis was performed at the Centre for Modeling Human Disease Physiology Core at the Samuel Lunenfeld Research Institute, Mount Sinai Hospital, Toronto, Canada. The procedure includes the mouse tail and head in the image and analysis. DEXA bone mineral content measurements show high correlation to the total ashed weight of bone (r = 0.99) (5,33); therefore, when normalized to body weight, DEXA provided a measurement of ashed skeletal mass derived from mineral content. The percent fat mass and lean mass of each animal were also determined using DEXA.
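Because the transgenic animals were dwarves, raw organ weights are not directly comparable between genotypes; as described above, each organ was therefore expressed as a percentage of intact body weight. A minimal sketch of that normalization, using invented weights purely for illustration:

```python
# Hypothetical wet weights in grams; illustrative placeholders only.
body_weight = 18.5  # intact body weight at dissection
organ_weights = {"heart": 0.11, "liver": 1.05, "testes": 0.19}

# Express each organ as a relative percentage of body weight so that
# dwarf transgenic and full-size wild-type mice can be compared.
relative_weights = {organ: 100.0 * grams / body_weight
                    for organ, grams in organ_weights.items()}

for organ, pct in relative_weights.items():
    print(f"{organ}: {pct:.2f}% of body weight")
```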
Analysis of mouse embryo fibroblast proliferation. To establish an in vitro model of the intrauterine growth retardation phenotype observed for transgenic embryos, we prepared mouse embryonic fibroblasts (MEFs) from E14.5 hSTC-1 hemizygous transgenic litters and C57BL/6 × CBA wild-type litters (16). Cell number was obtained using an electronic particle counter (Beckman Coulter, Hialeah, FL), and MEFs were seeded at a density of ~5 × 10⁶ cells/ml and allowed to attach to 175-cm² culture flasks overnight. MEFs were grown to confluency, and this was considered passage 0. All experiments were conducted using passage 0 MEFs unless otherwise indicated. For the low-density proliferation assay, wild-type and hSTC-1-transgenic MEFs were plated at 1 × 10⁵ cells/25-cm² flask in triplicate. The zero time point cell number was obtained 5 h after the initial plating to take into consideration potential plating efficiency differences and to obtain starting cell numbers based on adherent cells rather than the cell seeding number. Day 0 mean cell numbers were 61,760 and 70,380 for wild-type and transgenic cultures, respectively. Thereafter, the cell number was determined every 24 h over the 11-day assay. For the high-density proliferation study, MEFs were plated at 1 × 10⁶ cells/10-cm dish in triplicate and assayed as described above. MEF proliferation characteristics over the study period were determined by calculating the fold change in cell number per day, dividing the total cell number on a specific day by the total cell number on the previous day. All experiments were performed with three to five different preparations of MEFs. Assessment of cranial suture development. Skeletal preparations of newborn P1 pups were prepared as previously described (28) after CO2 asphyxiation. Once the skeletons had been stained with Alcian blue and Alizarin red, the heads were removed below the base of the skull. The anterior surface of the skulls was photographed with an Olympus microscope (Olympus America, Melville, NY) equipped with a digital camera (Dage-MTI, Michigan City, IN) and Image-Pro Plus 4.5.1 computer software (MediaCybernetics, Silver Spring, MD). The magnification of all skull images was ×10. The area of cranial patency, or the open area between the cranial suture edges, was determined as pixel area by tracing along the edge of the sutures enclosing the space between the skull plates using Openlab 3.1 software (Improvision, Lexington, MA). Food intake. To compare food consumption in hSTC-2-transgenic and wild-type mice, we used 14-wk-old L314 transgenic males, 14-wk-old wild-type males, and 27-day-old wild-type males for the weight-matched comparison. Mice were housed in individual shoebox cages equipped with a pellet-feeding tube attached with a tube clip (Bio-Serv, Frenchtown, NJ). The mice were allowed to acclimatize in the cage for 2 days with ad libitum access to regular mouse chow and water. On day 0 of the study, grain-based pellets (Bio-Serv) were loaded into the feeding tube, and the tube/food apparatus was weighed. Each morning for 7 days, the tube containing the food pellets was weighed to calculate the difference from the previous day, providing the amount of food eaten. Chewed pellets were removed from the tube, and new pellets were added to fill the tube. The wild-type mice were weighed each morning to determine weight change during the study, and the hSTC-2-transgenic mice were weighed at the start of the study and at the end.
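The daily intake measurement just described is a running difference of tube weights, later expressed relative to body weight so that dwarf transgenics can be compared with larger wild types (see Fig. 7). A short sketch of that arithmetic with made-up numbers; the values are not data from the study:

```python
# Hypothetical daily weights (g) of the tube/food apparatus over the
# 7-day study; day 0 is the freshly loaded tube. Placeholders only.
tube_weights = [120.0, 116.4, 112.9, 109.5, 106.0, 102.4, 98.9, 95.3]

# Food eaten on each day = previous morning's tube weight minus today's.
daily_intake = [prev - cur for prev, cur in zip(tube_weights, tube_weights[1:])]

# Normalizing intake to body weight (% body weight per day) makes small
# transgenic mice comparable to larger wild-type mice.
body_weight = 17.0  # g, an assumed value
intake_pct = [100.0 * grams / body_weight for grams in daily_intake]

print(f"total eaten: {sum(daily_intake):.1f} g; "
      f"mean daily intake: {sum(intake_pct) / len(intake_pct):.1f}% of body weight")
```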
Statistical analysis. All statistical analyses of data were performed with the unpaired Student's t-test, using PRISM 3.0a (GraphPad Software, San Diego, CA). Statistical significance was assumed at P < 0.05 for all experiments.
RESULTS
Expression pattern of STC-2 in mature wild-type mice. Our analysis of mouse embryos indicated that, unlike STC-1, STC-2 mRNA was not detectable in whole embryo RNA from E10.5 to E18.5 (data not shown). In adult mouse tissues, Stc2 gene expression was detectable as two distinct mRNA species of ~4 and 2 kb (Fig. 1). STC-2 mRNA appeared most abundant in heart, prostate, uterus, and skeletal muscle. Lower, but detectable, STC-2 mRNA was present in seminal vesicle, ovary, mammary gland, and white fat depots. Generally, our observations of STC-2 gene expression indicate overlap with STC-1 expression, but the level of expression for each hormone was distinct and tissue dependent. For example, we could not detect STC-2 mRNA in the mouse ovary, whereas STC-1 mRNA is readily detectable by Northern blot (data not shown). Generation of transgenic hSTC-2 mice. hSTC-2 overexpression was achieved by use of the bipartite cytomegalovirus (CMV) IE enhancer fused to the chicken β-actin promoter (Fig. 2). Southern blotting to confirm intact integration of the transgene revealed that two independent lines of transgenic mice were successfully created, L314 and L372 (data not shown). Both founders stably transmitted the transgene to progeny, thereby establishing two independent lines of hSTC-2-transgenic mice. hSTC-2 transgene expression is widespread. Our first objective was to determine the pattern of hSTC-2 transgene expression and whether hSTC-2 was detectable in the serum of transgenic mice. Transgene expression was detectable by Northern blotting of whole embryo RNAs from E10.5 onward in both lines of mice, whereas no signal was observed in the wild-type embryo (Fig. 3A). Northern blotting of adult transgenic tissue RNAs from both lines indicated that all tissues tested contained the expected ~2-kb mRNA corresponding to the transgene-derived hSTC-2 transcript (Fig. 3, B-D). The highest levels of transgene expression were seen in the heart and skeletal muscles of both lines of hSTC-2-transgenic mice. Moreover, an analysis of transgene expression between the sexes in either line did not reveal a sex-specific difference (Fig. 3, B and C), as seen previously for hSTC-1-transgenic mice (49). The antiserum raised to hSTC-2 was characterized by Western blotting, using conditioned medium from human cell lines (Fig. 4A). The predicted and observed molecular mass of the mature secreted form of hSTC-2 is ~30.6 kDa (32). A major band of ~33 kDa and two larger minor bands were seen in concentrated conditioned media from the human MCF-7 breast carcinoma cell line and not from HeLa cells, which corroborates our Northern analysis of hSTC-2 gene expression in these cell lines. The antiserum detected hSTC-2 in conditioned media from the rat GC pituitary tumor cell line, indicative of its ability to cross-react with rodent STC-2 (Fig. 4A), but did not detect STC-2 in wild-type mouse serum, as we have reported previously for STC-1 (11).
However, in transgenic mouse serum, the antiserum revealed 31- and 35-kDa hSTC-2 bands, possibly due to different posttranslational modifications (Fig. 4B). By Western blot analysis, L314 serum contained approximately fivefold more circulating hSTC-2 than that of L372 (Fig. 4B). hSTC-2-transgenic neonate viability is significantly reduced. While breeding the hSTC-2-transgenic mouse lines, it was clear that a large number of transgenic pups were lost soon after birth. Hemizygous transgenic males were bred with wild-type and hemizygous transgenic females to determine whether neonatal morbidity was associated with transgene homozygosity in both lines. Regardless of the mating genotypes, neonate morbidity ranged from 26 to 33.8% within the first few days of life (Table 1), indicating that neonatal death was not dependent on transgene homozygosity. Importantly, this was the first indication that excess production of a stanniocalcin could cause death. Moreover, the death of transgenic neonates was not due to an inability to compete with wild-type pups for breast milk, as the removal of wild-type pups from mixed litters did not alter neonatal morbidity (31.3% for L314 and 28.4% for L372). Overexpression of stanniocalcins causes intrauterine growth restriction and postnatal growth retardation. This overt phenotype was first noticed in P1 pups; therefore, we examined embryo weights from E12.5 to E18.5 to determine whether hSTC-2 caused intrauterine growth restriction. The weights of hSTC-2-transgenic embryos were significantly less than those of wild-type embryos from E12.5 onward (Table 2). The difference in embryo size can easily be seen in the images of representative embryos and is evident as early as E10.5 (Fig. 5). In a previous study with hSTC-1-overexpressing transgenic mice (49), we also observed postnatal growth restriction. Therefore, in this study, we reassessed the growth of transgenic embryos from hSTC-1-transgenic lines 1A and 2 for intrauterine growth restriction. In the lower-expressing line 1A, significantly smaller embryos were first detected at E14.5, and at E13.5 for line 2. Therefore, both stanniocalcins had similar growth inhibitory effects on embryonic growth that correlated with the onset of transgene activation. To assess hSTC-2-induced postnatal growth restriction, we measured the body weights of wild-type and transgenic animals from both lines for a total of 45 days. Statistical analyses of the weight determinations indicated that the transgenic offspring remained significantly smaller than their wild-type littermates throughout the study (Fig. 6A). By the end of the study, the L314 and L372 transgenic females were 43.5 and 41% smaller, respectively, than their wild-type female siblings. Similarly, the transgenic males were 40.3% (L314) and 45.2% (L372) smaller than their wild-type male littermates. The growth study was extended to 80 days for L314, during which time the weight differential with wild-type littermates was maintained (P < 0.0001), such that transgenic males and females were 46.8 and 43.2% smaller, respectively (data not shown). These weight measurements were then used to assess weight gain per day to determine whether the pattern of growth of hSTC-2 transgenics was distinctly different from that of wild-type littermates (Fig. 6B). Statistical analysis of the data revealed that the amount of weight gained per day by hSTC-2-transgenic mice was significantly (P = 0.003) less than that of their wild-type littermates from P2 to P15.
From P17 to P25, hSTC-2-transgenic pups experienced a slowdown in growth relative to their wild-type counterparts, and this corresponded to the adjustment period associated with weaning at P21. From P17 to P45, the gap between wild-type and transgenic mice in the amount of weight gained per day increased with age and remained statistically significant (P < 0.0001). Plotting the body mass data as a growth rate curve was done to determine whether the hSTC-2-transgenic mice exhibited significantly slower growth rates compared with their wild-type littermates (Fig. 6C). The transgenic mice generally exhibited a significantly slower rate of growth from P1 to P30; however, from P30 to P45, both sexes from each transgenic line displayed a higher growth rate than wild-type mice. Then, from P45 to P52, the growth rates of both transgenic and wild-type mice decreased to a constant rate. Interestingly, the growth rates of the wild-type and hSTC-2-transgenic mice were not significantly different from P45 to P73 but became different from P73 to P80, likely because wild-type mice gained additional weight due to fat deposition. hSTC-2 overexpression can affect organ size. To determine whether hSTC-2 affected the growth and morphology of specific tissues, we performed organ allometry studies on 14-wk-old hSTC-2 L314 males and normalized the wet organ weights to body weight for comparison with sex- and age-matched wild-type littermates. The hSTC-2-transgenic males were 45% smaller than their wild-type littermates. hSTC-2-transgenic males were significantly smaller than their age-matched wild-type counterparts in total length (3 cm), body length (1.6 cm), and tail length (1.7 cm) (Table 3). Statistical analyses of the normalized data showed that the brain, kidney, liver, heart, lung, and viscera of hSTC-2-transgenic males comprised a larger percentage of body weight compared with age-matched wild-type males (Table 4). The largest difference in normalized organ weights was seen for hSTC-2-transgenic testes, which were 72% larger than expected and essentially identical in wet weight to those from wild-type male mice. Fluid accumulation in the transgenic testes did not account for the greater-than-expected weight, as the dry weights of transgenic and wild-type testes were 0.022 and 0.024 g, respectively (P = 0.2961). After removal of the major organs and viscera, the remaining carcass consisting of skin, muscle, and skeleton was weighed, and the normalized data indicated that the hSTC-2-transgenic carcass was 8% smaller than expected (P < 0.0001). This small but statistically significant difference implied that the skeleton and/or skeletal muscle was negatively affected by hSTC-2 overexpression. To analyze the bone, we performed an age-matched comparison of whole skeletons from wild-type and hSTC-2-transgenic mice with DEXA. The data, when normalized to the whole body mass (43), revealed that there were no differences between wild-type and transgenic mice of either sex, indicating that the change in carcass mass was not due to a disproportionately smaller skeleton (data not shown). However, skeletal muscle clearly showed a 15-32% reduction (P < 0.05-0.0001), depending on the muscle group (Table 4). This implied that the growth-restrictive effect of hSTC-2 on the whole animal was in part due to a reduction in normal muscle growth. Organ allometry was also performed with weight-matched wild-type mice to determine whether hSTC-2-transgenic mouse body composition was equivalent to that of a similar-size wild-type mouse.
Because of the large difference in weight between 14-wk-old wild-type and hSTC-2-transgenic mice, this study necessitated the use of 26-day-old wild-type males to provide appropriately weight-matched mice (Table 4). In terms of linear measurements, the 14-wk-old hSTC-2-transgenic males were slightly bigger (0.53 cm) than the weight-matched wild types because of a difference in tail length (Table 3). It is notable that, despite the longer tail length, hSTC-2-transgenic tail mass was 45% less than in weight-matched wild-type males (Table 3). We found that only the lung and heart absolute weights were not statistically different between transgenic and wild-type mice. Collectively, these data suggest that hSTC-2 expression results in a growth restriction phenotype in which a 14-wk-old transgenic male typifies a 26-day-old wild-type mouse in terms of body proportions and organ mass.
Fig. 3. hSTC-2 transgene expression. A: transgene-derived hSTC-2 mRNA was detectable by Northern blotting with 50 µg of whole embryo RNA, starting at embryonic day (E)10.5 in both lines. B-D: transgene hSTC-2 mRNA was detected in all tissues tested, with the highest levels found in the heart and skeletal muscle (B, C). L314 mice (B) exhibited higher levels of transgene expression than L372 mice (C), with no apparent difference in tissue expression pattern or between the sexes after 18S rRNA normalization (not shown). Total RNA (30 µg) pooled from 3 to 5 mice was analyzed, and the autoradiograms are from 24-h exposures. M, male; F, female; wt, wild type; Sem Ves, seminal vesicle.
Food intake by hSTC-2-transgenic mice exceeds that of wild-type mice. It is possible that the dwarf phenotype induced by hSTC-2 expression could in part be due to a significant change in food intake. On account of their age (14 wk), the average weight of age-matched wild-type and hSTC-2-transgenic mice did not change significantly over the 7-day study period, whereas weights of the younger weight-matched wild-type mice increased by 5 g. When food consumption was examined as a function of body weight, it was clear that hSTC-2-transgenic males ate 22% more food than their age-matched wild-type counterparts (P < 0.05; Fig. 7). This result eliminates the possibility that higher-than-normal STC-2 production led to a behavioral defect that altered food consumption and ultimately caused undernourishment and delayed growth. DEXA analysis also revealed that female and male transgenic mice on average carried 1.8 and 2.4% more fat, respectively (Table 5). The higher fat content was not statistically significant compared with wild types and may be associated with their greater rates of food intake. MEF proliferation rate and cell size. The obvious intrauterine and postnatal growth restriction induced by hSTC-1 or hSTC-2 suggested that the ectopic expression of these hormones possibly results in reduced cell proliferation, alterations in cell size, or increased cell death. To examine these mechanistic possibilities, we generated primary MEF cultures from E14.5 wild-type and hSTC-1-transgenic embryos and evaluated their growth characteristics. Growth of high-density or low-density MEF cultures was monitored over an 11-day period. We did not observe a significant difference in cell number that correlated with an altered growth rate (data not shown).
Forward light scatter flow cytometry was used to determine whether the trend of reduced hSTC-1 MEF cell numbers at high density reflected an increase in cell volume, but no significant difference was detected (data not shown). An increase in cell death due to ectopic hSTC-1 expression was also not likely, because the growth characteristics of transgenic and wild-type MEFs were not significantly different. Therefore, the reduced growth potential caused by overexpression of STCs was not replicated in vitro using embryonic mixed mesenchymal cell cultures. This suggests that the growth-restrictive phenotype is not cell autonomous but rather caused by an inherent change in developmental programming. Intramembranous and endochondral bone formation. To assess whether the dwarfism exhibited by the overexpression of hSTC-2 manifested itself at the level of gross skeletal development, the cartilage and bones of newborn (P1) mice were stained with Alizarin red and Alcian blue. On visual inspection it was strikingly apparent that the intramembranous bones of hSTC-2-transgenic neonates were less developed than those of the wild types (Fig. 8A). Transgenic mice exhibited severe cranial patency between the leading edges of the frontal and parietal bones compared with wild-type mice. Cranial patency area in hSTC-2-transgenic skulls was found to be 26,010 pixels compared with 12,620 pixels for wild-type skulls (P < 0.0001) (Fig. 8B). hSTC-1-overexpressing lines were also analyzed in this manner and found to exhibit a greater degree of cranial patency compared with wild-type mice (data not shown). With regard to endochondral bone formation, developmental delay was observed at E16.5 (Fig. 8C). The ilium, ischium, and pubic bones of the hip were ossified in the wild-type embryo, whereas little or no ossification is apparent in the hSTC-2-transgenic littermates (Fig. 8C). Ossification is also lacking in the sacral vertebrae of the transgenic embryos. hSTC-2 overexpression does not significantly affect serum Ca2+ and PO4. A number of reports have described effects of STC-1 on mineral metabolism (26,37,52). Therefore, we sought to determine whether excess hSTC-2 significantly altered serum levels of Ca2+ and PO4. To this end, sera from 9-wk-old hSTC-2-transgenic and wild-type mice were analyzed, and only L314 transgenic females showed a significant decrease in serum Ca2+ (Table 6). However, serum alkaline phosphatase levels were significantly lower in hSTC-2-transgenic males and females, but only for the higher-expressing L314 mice. GH and IGF production in hSTC-2-transgenic mice is not altered. The obvious growth restriction induced by hSTC-2 overexpression suggested that STC-2 may interfere with expression of the primary growth-promoting trophic factors, GH, IGF-I, and IGF-II, thus accounting for the dwarf phenotype. To examine this possibility, we performed Northern analysis of steady-state GH mRNA levels in 6-wk-old males and observed no difference between transgenic and wild-type animals (Fig. 9A). It is well established that the production of major urinary proteins is dependent on a normal GH secretion pattern and signaling pathway, and alterations in the abundance of major urinary protein (MUP) in the urine are indicative of perturbed GH production (2,31,35,53). A comparison of transgenic and wild-type mouse urinary MUP levels by SDS-PAGE (Fig. 9B) indicated that hSTC-2 overexpression had no effect on GH signaling.
Fig. 4. Western blot analysis of cell line and transgene-derived hSTC-2 protein levels. A: hSTC-2 was detected in concentrated conditioned medium from the human MCF-7 breast carcinoma cell line but not in conditioned medium from HeLa cells. Two immunoreactive bands were seen in hSTC-2-transgenic serum at ~31 and 35 kDa, whereas no signal was observed in wild-type mouse serum. Cross-reactivity to rodent STC-2 was demonstrated using conditioned medium from the rat GC cell line. B: loading of equal volumes (1 µl) of serum from hSTC-2-transgenic mice from both lines showed that L314 contained ~5-fold higher levels of hormone compared with L372. Tg, transgenic; TgF, transgenic female; TgM, transgenic male.
IGF-I and IGF-II mRNA levels were examined in transgenic and wild-type tissues, and serum IGF-I was measured in 6-wk-old mice to determine whether hSTC-2 overexpression caused a significant reduction in the production of these growth factors. IGF-II expression is greatest during development. As expected, steady-state IGF-II mRNA levels increased with developmental age, but a significant difference between transgenic and wild-type IGF-II mRNA abundance was not observed (Fig. 9C). Liver IGF-I gene expression was assessed during early postnatal development. As for IGF-II, we did not observe a significant change in IGF-I mRNA levels in transgenics compared with wild-type littermates (Fig. 9D). To confirm the Northern data for IGF-I expression, we measured serum IGF-I by RIA and found higher levels of circulating IGF-I in hSTC-2-transgenic male serum relative to wild-type littermates: 571.2 vs. 355.5 ng/ml, respectively (P < 0.0001). hSTC-2-transgenic female serum IGF-I levels (452.9 ng/ml) were not different from wild-type levels (401.2 ng/ml) (P = 0.3105). Collectively, these data indicate that hSTC-2-induced growth restriction during development and during postnatal life is not likely due to a change in production of GH or the IGFs.
DISCUSSION
The initial objective of our study was to determine whether hSTC-2 would generate phenotypes distinct from those elicited by hSTC-1 in transgenic mice and therein suggest that these factors perform distinct functions in mammals. Our results are the first to show that hSTC-2 is bioactive in mammals and, when constitutively expressed in mice, produces a phenotype similar to that generated by hSTC-1. The primary difference was that serum PO4 and Ca2+ levels in hSTC-1-transgenic mice were significantly altered, whereas we found little evidence for such changes in hSTC-2-transgenic mice. Moreover, high neonate morbidity was caused by hSTC-2 overexpression, whereas this was not reported for hSTC-1-transgenic mice. Collectively, the data show that STC-1 and STC-2 can exert powerful growth-suppressive effects in pre- and postnatal life. We first determined the expression pattern for the Stc2 gene because previous reports do not agree on the size of the mSTC-2 mRNA or its distribution (8,18). Our Northern analysis detected ~4- and 2-kb STC-2 mRNAs. The ~4-kb mRNA was expected because of the long 3′-untranslated sequence (~3.1 kb) found in hSTC-2 and mSTC-2 EST sequences. The smaller transcript was likely due to use of alternative polyadenylation signals, because there is no evidence for a larger coding sequence among mSTC-2 EST sequences. Although the expression patterns of the stanniocalcins overlap, the abundance of each mRNA in mature mouse tissues is significantly different.
For example, STC-2 is barely detectable in ovary, kidney, and whole embryo RNAs, whereas STC-1 is highly expressed in these tissues (50). This must have important implications for the tissue-specific function of these proteins because, unlike STC-1 in fish, they are not typically found in the circulation. Consequently, their specific sites of synthesis may be important determinants of their paracrine/autocrine functions.
Table 2 note: Values are mean differences in embryo weights (mg); values in parentheses are the no. of embryos weighed at each time point. hSTC, human stanniocalcin; L314, L2, and L1A, lines 314, 2, and 1A, respectively; E12.5, E13.5, E14.5, E16.5, and E18.5, embryonic days 12.5, 13.5, 14.5, 16.5, and 18.5, respectively; ND, not determined. *P < 0.001 (Student's t-test).
Among the phenotypic consequences of hSTC-2 overexpression, the first to be observed was the ~30% loss of transgenic neonates in mixed litters, which clearly indicated that hSTC-2 hyperstimulation can be lethal. The physiological basis for transgenic pup mortality was not immediately evident, because transgenic pups did not manifest any obvious physical abnormalities other than being significantly smaller than their wild-type littermates. It is notable that hSTC-1-transgenic mice produced by Filvaroff et al. (13) did not exhibit transgenic neonate morbidity, even though transgene expression was specific to and high in skeletal muscle, similar to the hSTC-2-transgenic mouse model described here. This obviously points to distinct differences in the physiological properties of STC-1 and STC-2; however, like STC-1, STC-2 can exert significant developmental effects.
Fig. 6. Postnatal growth restriction and changes in growth rate exhibited by L314 hSTC-2-transgenic mice. A: weights of transgenic and wild-type pups were followed for 45 days after birth. Transgenic weights were significantly less than those of wild types on all days (P < 0.0001). B: weight gain/day was calculated by subtracting the weight obtained on the previous day and plotting the difference. C: growth rate was determined as the percent increase in weight from the 1st day to the last day in the growth period. Black solid line, wild-type male; blue line, transgenic male; red line, wild-type females; pink line, transgenic females.
We observed two significant phenotypic consequences of hSTC-2 and hSTC-1 overexpression that were previously unrecognized embryologically based defects: intrauterine growth restriction and developmental delay. hSTC-2 transgene expression was evident by E10.5, when the embryonic circulation was just beginning to function (30), possibly allowing secreted hSTC-2 to act systemically. The net effect was that transgenic embryos were significantly smaller in weight from E12.5 onward. A similar intrauterine growth-restrictive phenotype was seen in hSTC-1-transgenic embryos. In all of the transgenic lines analyzed for intrauterine growth restriction, we observed a similar degree of weight reduction in E18.5 embryos despite differing levels of transgene expression. This could be indicative of a maximal growth-suppressive effect caused by excess stanniocalcins, such that even higher levels of STCs do not result in further restrictions in growth. The growth-suppressive effects of hSTC-2 overexpression in utero were maintained and progressed during postnatal life in both sexes of each transgenic line.
Unlike other growth retardation mouse mutants in which postnatal catch-up growth is evident, such as the growth hormone antagonist-transgenic mouse (10), the otx1 null (1), the EGF overexpresser (6), and the STAT3 knock-in phosphorylation mutant (42), hSTC-2-transgenic mouse growth retardation became more severe with age, progressing from an ~25% reduction in body weight at P1 to ~42% at P45. The pattern of weight gain by the hSTC-2 transgenics also differed, most notably over the prepubertal weaning period, where the wild types of both sexes showed an increase in weight gain over the period P17-P21 and transgenics did not. This may be indicative of a developmental delay whereby the transgenics were unable to derive adequate nutrition or feed appropriately from solid mouse chow due to a metabolic problem or a behavioral defect. Another developmental anomaly was apparent from P25 onward, a time period that is characterized by a well-documented growth hormone-dependent pubertal growth spurt (23,25). The wild-type mice exhibited this growth peak at P30, whereas the transgenic growth peak was at P45. This growth pattern lends further support to the possibility that hSTC-2 overexpression results in developmental delay that is also manifested during gonadal maturation.
Fig. 7. hSTC-2-transgenic mice exhibit increased food intake. Calculating the amount of food consumed as % body weight over the 7-day study showed that hSTC-2 transgenics consume more food (22%) than age-matched wild types and less food (17%) than the younger, weight-matched, wild-type mice.
Table 4. Overview of absolute organ weights and organ-to-body weight ratios in wild-type and L314 transgenic male mice.
We also examined embryos and neonates for gross morphological indications of developmental delay. The most significant observation related to this is the degree of skull patency in hSTC-2-transgenic pups, which in some respects resembles cleidocranial dysplasia (58). This intramembranous bone phenotype was reminiscent of the sagittal and parietal foramen observed in msx2 null mice (19,40), mice homozygous for the hypomorphic cbfb GFP allele (21), and mice in which the bone morphogenetic protein antagonist noggin has been applied to the developing cranium (56). In all of these mutant mouse models, the degree of patency along the sagittal and parietal suture lines is significantly greater than in wild-type littermates. Filvaroff et al. (13) found that the cranial suture lines in their adult hSTC-1-transgenic mice were morphologically distinct, and TRAP staining suggested increased osteoclast activity along the cranial suture lines. Interestingly, they also showed that hSTC-1 overexpression decreased the rate at which bone mineralization occurred (13).
Fig. 8. Cranial patency and endochondral ossification: delay in bone formation exhibited by the hSTC-2-transgenic mice. A: Alizarin red- and Alcian blue-stained hSTC-2 postnatal day 1 skulls revealed a reduction in the amount of bone formation in the transgenic skull (arrows). The space between the wild-type posterior frontal and sagittal sutures is very narrow compared with that found in transgenic mice (black outline). B: pixel area of the space between the edges of the skull bones in the transgenics is more than twice that of wild-type mice. C: stained E16.5 wild-type and transgenic embryos from the same litter. Star, difference in ossification (red stain) of hip bones (within black circle) and vertebrae.
Collectively, these data point to developmental delay caused by a decreased rate of bone formation due to the overexpression of either hSTC-1 or hSTC-2. In support of this, our hSTC-2-transgenic embryos showed delayed ossification in different endochondral skeletal elements. Interestingly, these results would not have been predicted on the basis of the recent report from Yoshiko et al. (59) showing that, in vitro, STC-1 actually augments the differentiation of mature rat calvaria osteoblasts, as shown by enhanced expression of osteoblast markers. Therefore, the delayed bone development we report here may result from systemic exposure to hSTC-1 or hSTC-2 that may alter early signaling pathways necessary for the normal temporal development of bone and proliferation of osteoprogenitors. Further studies are currently underway to investigate the mechanism by which hSTC-1 and -2 exert their effect on bone development. Obviously, this contributed to the severe postnatal growth restriction seen in our hSTC-1- and hSTC-2-transgenic mice. However, our data also show that growth retardation occurs early in development, before significant bone ossification begins. Therefore, the growth-restrictive properties of STC-1 and STC-2 likely impact other, earlier developmental processes and/or cell types. To begin assessing these possibilities, we performed a series of growth studies with E14.5 MEFs from hSTC-1 and wild-type embryos, because the proliferation rates of MEFs from mouse mutants that exhibit embryonic growth restriction have been successfully replicated in vitro (47,55,57). However, experiments with hSTC-1 MEFs did not mimic the growth-restrictive phenotype seen in vivo. Results similar to ours were obtained using MEFs prepared from p57Kip2-deficient mice, which also exhibit skeletal abnormalities and growth retardation (44), and with MEFs from cdk4 null mice, which display growth retardation to the same extent as our transgenic mice (48). The lack of a proliferation defect in hSTC-1-transgenic MEFs suggests that the reduced-growth phenotype is restricted to a different tissue compartment during development and that these effects are cumulative, causing a progressive and proportional reduction in embryo mass. Many mouse mutants exhibiting a dwarf phenotype are the result of disrupted signaling of the GH/IGF pathway (9,15,25,36). Changes in the levels of GH/IGF signaling components can also occur secondary to the primary mutation and result in growth restriction (3,14,20,22). However, as in the case of hSTC-1 overexpression (49), we did not detect changes in GH/IGF signaling in response to hSTC-2 overexpression, leading us to conclude that intrauterine and postnatal growth retardation occurred independently of this pathway.
Table 5 note: Values represent means ± SE of 10-15 mice with multiple parameters measured from each mouse. *P < 0.05 compared with sex-matched wild types of the same line.
Clearly, our data indicate that an intact IGF-I signaling system cannot counteract the effects of excess circulating levels of hSTC-2.
Fig. 9. Assessment of the somatomedin pathway. A: growth hormone (GH) mRNA expression was analyzed by Northern blot using 5 µg of pooled total pituitary RNA, and steady-state mRNA levels were not different between wild types (wtM) and transgenics (TgM) of both lines. B: downstream GH signaling was assessed by Coomassie-stained SDS-PAGE of transgenic and wild-type urine for major urinary protein (MUP). The level of MUP was not decreased in the transgenics compared with wild types in both sexes. C: embryonic IGF-II mRNA expression, determined by Northern blotting of whole embryo total RNA (30 µg), showed no difference in its expression between transgenics and wild types. D: postnatal IGF-I Northern analysis also showed no observable difference in expression levels in liver total RNA (30 µg).
Recently, McCudden et al. (29) showed that STC-1 and its receptor can localize to mitochondria in a variety of tissues and that STC-1 can increase the respiration rate of isolated liver mitochondria in vitro. It is possible that constitutively elevated hSTC-2 resulted in altered metabolism and that this had a contributing effect on the growth-restrictive phenotype. Food consumption by the hSTC-2-transgenic mice was 22% greater than that of age-matched wild-type mice. Filvaroff et al. (13) also found a comparable increase in food intake, along with a 10% increase in oxygen consumption, in their hSTC-1-transgenic mice. Obviously, hyperphagia, in combination with increased oxygen requirement, is highly suggestive of a metabolic defect and appears to be a hallmark of excess circulating hSTC-1 or hSTC-2. Hyperphagia has been documented in a number of growth-restricted mouse mutants and is usually associated with aberrant expression or signaling of proteins that regulate various aspects of metabolism, including GH (10), melanocortin-1 and -4 receptors (17,27), and fibroblast growth factor-19 (46). These data are consistent with a role for stanniocalcins in controlling certain aspects of metabolism. On the basis of the data presented here, we propose that the stanniocalcins represent a new class of trophic factors that play a role in growth regulation independently of the GH/IGF axis. When present in supraphysiological levels, stanniocalcins cause developmental delay, especially in intramembranous bone. Our findings to date, along with those of Yoshiko et al. (59), advocate a role for stanniocalcins in regulating bone development. The fact that hSTC-2 overexpression is lethal to a proportion of neonates implies that this hormone has deleterious effects on mammalian development that are linked to cell proliferation and differentiation. The precise mechanism by which stanniocalcins reduce growth and cause death is critical to our understanding of their importance in mammalian physiology, and it appears that tissue-specific effects on cellular metabolism may be key to their mechanism of action.
2017-04-01T15:28:23.473Z
2005-01-01T00:00:00.000
{ "year": 2005, "sha1": "13b1d0fac220e466c2c39e63fe6e5303825a88cb", "oa_license": "CCBY", "oa_url": "https://zenodo.org/record/896307/files/article.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "5badb1dfb06c2183b94209c9e3890b451e8a2a8f", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
41321989
pes2o/s2orc
v3-fos-license
Revised Adult Immunization Guideline Recommended by the Korean Society of Infectious Diseases, 2014 The Korean Society of Infectious Diseases (KSID) published the 1st edition of Adult Immunization (Koonja Publishing, Inc., Seoul) in October, 2007. Five years later, in May of 2012, the 2nd edition of the book was published (M.I.P, Seoul). The KSID decided to make small-scale revisions every two years before the publication of the new edition of the adult immunization textbook, due to rapid changes in the environment related to adult immunization. In August, 2012, the KSID set up the Committee on Adult Immunization to develop and revise the guideline, and to conduct research on adult immunization. This is the revised version of the existing adult immunization recommendations, reflecting the latest research results and trends on each vaccine after the publication of Adult Immunization 2nd Edition in 2012. This revision provides information about vaccines against Streptococcus pneumoniae; tetanus-diphtheria-pertussis; herpes zoster; human papillomavirus; influenza; meningococcus; Japanese encephalitis; and yellow fever. Partial revisions have been made to recommendations for vaccines against S. pneumoniae, tetanus-diphtheria-pertussis, herpes zoster, and human papillomavirus. For vaccines against influenza, meningococcus, Japanese encephalitis, and yellow fever, the Committee on Adult Immunization has summarized its opinions on recent issues regarding the vaccines. There are no changes from Adult Immunization 2nd Edition for vaccines not mentioned in this revised edition. Special Article Approval of PCV13 for adults In October 2013, Korea's Ministry of Food and Drug Safety (MFDS) granted approval for the administration of the 13-valent pneumococcal conjugate vaccine (PCV13) to adults 18 years of age or older. The 23-valent pneumococcal polysaccharide vaccine (PPV23) has been widely used for a long time.
It contains 23 serotypes and is known to have some preventive effect against invasive pneumococcal disease (IPD) [1,2]. However, PPV23 has not been proved effective against pneumonia, and it has not been effective enough for patients with underlying diseases who are highly prone to developing pneumococcal disease. Furthermore, PPV23 can stimulate only a T-cell-independent immune response, which makes the duration of protection short, around 5 years, and it cannot induce herd immunity. In contrast to PPV23, PCV13 induces an anamnestic reaction by stimulating a T-cell-dependent immune response. It not only prevents IPD but also clearly decreases the occurrence of pneumonia, otitis media, and nasopharyngeal colonization of vaccine serotypes in children [1]. PCV13 has reduced antibiotic-resistant IPD caused by vaccine serotypes by more than 90% among infants and children. Moreover, it is reported that the vaccine induces herd immunity, such that adults who have not received it also see a decrease in the number of IPD cases [1]. A double-blind, randomized study (Community-Acquired Pneumonia Immunization Trial in Adults, CAPiTA) conducted on 85,000 adults aged 65 years and older found that vaccination with PCV13 reduced vaccine-type community-acquired pneumococcal pneumonia, vaccine-type nonbacteremic pneumococcal pneumonia, and vaccine-type IPD by 46%, 45%, and 75%, respectively [3]. In a clinical trial on adults over 18 years of age, PCV13 demonstrated superior or similar immunogenicity compared to PPV23, with similar rates of adverse reactions [4][5][6]. In a study conducted on patients with AIDS, PCV13 showed immunogenicity superior to PPV23 and demonstrated preventive effects against recurrence of IPD [7,8]. Studies that conducted cost-effectiveness analyses of pneumococcal vaccines agree that the most important elements affecting the analysis are the level of herd immunity provided by PCV13 administration in children and the efficacy against nonbacteremic pneumococcal pneumonia. This indicates that if PCV13 administration in children does not induce enough herd immunity, but the vaccine works better among adults in preventing nonbacteremic pneumococcal pneumonia, PCV13 may be more cost-effective than PPV23 for adults aged ≥65 years and high-risk groups. In fact, after the U.S. and some European countries introduced the protein-conjugated vaccine into the national immunization program (NIP) for children, IPD in those age groups decreased significantly, as did IPD among adults due to herd immunity. However, adults 65 years of age and older did not see as evident a decrease as the children did [9,10]. Patterns of diseases caused by S. pneumoniae and serotype distributions The occurrence of IPD by age is U-shaped: incidence is high under the age of 5, then decreases, and increases again after the age of 50. In the U.S., serotypes contained in PCV13 cause one-third of cases of IPD, and serotypes only contained in PPV23 cause 25% of cases of IPD [11]. Among adults 19 years of age or older with immunocompromising conditions, 50% of IPD cases are attributed to PCV13 serotypes, whereas serotypes only contained in PPV23 are responsible for 21% of cases of IPD [12]. In Korea, it was found that 0.36 out of 1,000 inpatients 18 years of age or older had IPD. The fatality rate was reported to be around 30%, and both the number of patients and the fatality rate tended to be higher in older patients.
Among IPD patients, 64.9% were found to have underlying disease [13]. According to research on the serotype distribution of IPD among adults 19 years of age or older before the introduction of PCV13, serotypes contained in PCV7, PCV13, and PPV23 accounted for 39.8%, 67.3%, and 73.4% of IPD, respectively [14]. However, after the introduction of PCV13, the serotype coverage rates for IPD decreased for both PCV13 and PPV23, and the difference between the two vaccines as the cause of the disease ranged between 15 and 20% [2]. Also, there was an increase in the proportion of IPD cases caused by serotypes not contained in any vaccine. Recommendations on pneumococcal vaccine for adults by the KSID In 2014, in its revised guidelines on pneumococcal vaccination, the U.S. Advisory Committee on Immunization Practice (ACIP) advised all adults 65 years of age and older to receive PCV13, followed by PPV23 [15]. This was the Committee's conclusion after considering the following factors: the insufficient level of herd immunity among adults aged 65 years and older induced by vaccinating children; the efficacy of PCV13 against pneumococcal pneumonia proven in CAPiTA; and cost-effectiveness analysis of the strategy. Pneumonia is a growing cause of death in Korea, and S. pneumoniae is reported as the most common cause of the disease. This necessitates administration of PCV13, which prevents non-invasive pneumococcal disease including pneumonia. PCV13 and PPV23 should be given consecutively because the gap in serotype coverage for pneumococcal disease is widening between PCV13 and PPV23, as in the U.S. and Europe. However, there is insufficient evidence to recommend consecutive administration of PCV13 and PPV23 to all persons 65 years of age and older across the board, as there has been no assessment of the cost-effectiveness of PCV13 in this age group. When administering PCV13 and PPV23, another area that requires consideration is the immune response according to the order in which the vaccines are administered. A booster effect may be induced if PCV13 is administered first, while hypo-responsiveness occurs if PPV23 is given first. Thus, it may be more beneficial to administer PCV13 first [16,17]. The Committee on Adult Immunization has stated that either PCV13 or PPV23 should be given to healthy adults aged 65 years and older. However, the Committee has recommended that adults in the same age group with chronic medical conditions should receive PCV13 first, followed by PPV23 6 to 12 months later, because these individuals have a high risk of severe pneumococcal disease caused by various serotypes. However, PCV13 and PPV23 should not be administered at the same time, and there must be an interval of at least 8 weeks between the administrations (Fig. 1). Tetanus-diphtheria-pertussis vaccine <Recommendations on Tetanus-Diphtheria-Pertussis (Tdap) Vaccine for Pregnant Women or Women who Plan on Pregnancy> A. Women without previous Tdap vaccination are recommended to receive a dose before pregnancy or right after delivery. Women between 27 and 36 weeks of pregnancy can be given the vaccine as well.* Recommendations on Tdap vaccine for pregnant women in overseas countries Since 2013, the U.S. ACIP has recommended that women receive a Tdap vaccine during each pregnancy [18]. The U.S. has seen increasing nationwide outbreaks of pertussis since the early and mid 2000s.
Since 2006, the Committee had recommended that pregnant women without previous Tdap vaccination receive the vaccine right after delivery, as should any family members or caregivers who would contact the newborn. However, such a cocooning strategy did not turn out to be effective, and failed to protect newborns from exposure to pertussis before they received their first dose of the infant diphtheria-tetanus-acellular pertussis vaccine (DTaP) two months after birth. In June 2011, the U.S. ACIP sought to remedy the problem by advising women to receive the Tdap vaccine during pregnancy (after 20 weeks). However, only 2.6% of the advised women took the advice, making it difficult to assess whether the new guideline was effective. The number of pertussis patients jumped to 48,277 in 2012, two to three times higher than the average. Considering the disease burden of pertussis in newborns, the safety of the Tdap vaccine in adults, and the weakening over time of immunity acquired from childhood vaccination, the ACIP decided to strengthen the guideline and advise women to receive the Tdap vaccine between 27 and 36 weeks of pregnancy regardless of their Td/Tdap vaccination history. The decision was based on the judgment that, through vaccination during pregnancy, mothers can acquire antibodies that are passed on to their newborns. The grounds for this judgment are as follows: vaccination after giving birth to the first child maintains a sufficient antibody titer not only for the mother but also for the child in the second pregnancy [19]; Tdap vaccination received at any point in pregnancy provides a high antibody titer at the time of childbirth [20]; vaccination during the third trimester provides the highest concentration of vaccine-specific antipertussis antibodies transferred from mother to infant [21]; and vaccination during pregnancy does not cause a critical adverse reaction in mothers or newborn infants [22]. Vaccination guidelines for family members and caregivers have been strengthened as well. In 2011, vaccination was recommended for adults 64 years of age and younger without previous vaccination history, and for adults older than 64 years of age if there is a child younger than 12 months in the household [23]. The guidelines were subsequently revised [24], and the 2014 edition expanded the range to everyone 11 years of age and older without previous vaccination, directly and indirectly strengthening the cocooning strategy [25]. In the UK, the number of pertussis cases was reported to be 8,819 in 2012, 10 times higher than in 2011, and 13 newborn infants died from the disease. Since 2013, the country has been advising women to get vaccinated in every pregnancy, while the recommended time of vaccination (between 28 and 38 weeks gestation, ideally before 32 weeks) varies slightly from that in the U.S. [26]. Canada recommends that pregnant women receive a dose of vaccine right after delivery as the best plan, or during the third trimester of pregnancy as the second-best plan, if they have had no previous Tdap vaccination in adulthood [27]. In Australia, the best recommendation is vaccination before pregnancy or right after childbirth, and the second-best is vaccination during the third trimester of pregnancy. The country recommends that women get vaccinated again, even if they had received a Tdap vaccine before, if more than 5 years will have passed since their last vaccination at the time of delivery [28].
Recommendations on Tdap vaccine for pregnant women or women who plan on pregnancy by the KSID

In its Adult Immunization 2nd Edition, the KSID recommends that women receive a dose of Tdap vaccine before pregnancy or right after delivery. These guidelines resemble earlier versions of those of the U.S. and the UK, as Adult Immunization 2nd Edition was published before their guidelines were revised. The number of pertussis cases in Korea has increased in a manner similar to that of the U.S. [29]. The average was 11.4 cases per year from 1995 to 2008, but increased to 66 in 2009, 27 in 2010, and 97 in 2011. The number hit 230 in 2012 due to a pertussis epidemic in Jeollanam-do [30], but fortunately dropped to 45 in 2013. Considering the pertussis dynamics in Korea, there is currently insufficient evidence to recommend that women receive vaccination in every pregnancy, as in the U.S. and the UK, although the idea could be reconsidered depending on the pertussis trend and possible future epidemics. Therefore, the recommendation regarding the Tdap vaccine for women remains the same: women without previous Tdap vaccination should receive a single dose before pregnancy or right after delivery. However, pregnant women between 27 and 36 weeks gestation can be vaccinated, because there are solid grounds supporting the benefits and safety of vaccination during pregnancy. Furthermore, as there is clear evidence that pertussis among family members can cause pertussis infection in infants, relevant government agencies should actively promote Tdap vaccination for parents and grandparents who have not received the vaccine previously. Primary healthcare professionals should be encouraged to help further this cause.

Herpes zoster vaccine

<Recommendations on Herpes Zoster Vaccine>
A. Adults 60 years of age and older should receive shingles vaccination unless a contraindication or precaution exists.
B. Adults aged between 50 and 59 may be vaccinated depending on individual health conditions.

In July 2011, the MFDS lowered the age limit for herpes zoster vaccine (ZOSTAVAX®) usage from over 60 to over 50, expanding the possible subjects of vaccination. However, the KSID held off recommending the vaccine for adults aged between 50 and 59 in the immunization guidelines of the same year. That was because the results of the ZOSTAVAX Efficacy and Safety Trial (ZEST), which provided the grounds for the age limit reduction, had not been published officially, making verification of those grounds impossible. However, the results were officially disclosed in 2012, and verification thus became possible. According to ZEST, herpes zoster vaccination reduced the number of zoster cases in adults aged between 50 and 59 by 69.8% (95% confidence interval [CI], 54.1-80.6) [31]. More than one adverse reaction was observed in 73% of the vaccine group and 42% of the placebo-controlled group, but the difference was largely due to local injection-site reactions, as in the previous Shingles Prevention Study (SPS). However, ZEST did not include enough cases in which herpes zoster was followed by postherpetic neuralgia, making it impossible to assess the vaccine's preventive effect against that condition. The KSID Committee on Adult Immunization officially evaluated the ZEST results and concluded that the study provides grounds as sound as those suggested by the previous SPS. However, ZEST did not verify the vaccine's preventive effect against neuralgia following herpes zoster, and there are still controversies regarding long-term immunogenicity and revaccination with zoster vaccine.
Therefore, the Committee advised making the vaccination decision after closely considering the cost and benefit of vaccination according to individual health conditions. The U.S. ACIP likewise did not recommend vaccinating every adult aged between 50 and 59. The organization stated that the vaccination decision should be made considering the health factors of each individual, which may include chronic pain, severe depression, and underlying diseases [32]. Furthermore, there has still been no study conducting a cost-benefit analysis of shingles vaccine in Korea, and there are disagreements over various factors that require further study: the long-term immunogenicity of the vaccine; the need for revaccination; how long an adult should wait before vaccination if he or she has had shingles before; and the cost-efficiency of the vaccine for chronic patients.

HPV vaccination guidelines for men

<Recommendations on Human Papillomavirus (HPV) Vaccine for Men>
A. Men aged between 9 and 26 can receive HPV4 vaccine to prevent anal cancer, genital warts, and premalignant or dysplastic lesions.

Males aged 9 through 15 could receive the 4-valent HPV vaccine (HPV4) to prevent penile cancer, oral cancer, oropharyngeal cancer, and anal cancer related to HPV 16 and 18, and genital warts and recurrent respiratory papillomatosis related to HPV 6 and 11. Theoretically, HPV4 vaccination of boys is also expected to protect their future sex partners against cervical cancer. According to a recent phase III clinical trial conducted on more than 4,000 men aged between 16 and 26, HPV4 reduced the cases of anal cancer, genital warts, and premalignant or dysplastic lesions caused by HPV 6, 11, 16, and 18 [33,34]. The MFDS expanded the eligible age for HPV4 vaccination to men up to 26 years of age. Therefore, in Korea, males aged between 9 and 26 can receive HPV4 vaccine to prevent anal cancer, genital warts, and premalignant or dysplastic lesions. In the U.S., all boys aged 11 and 12 are recommended to receive HPV vaccine; men aged between 13 and 21 are advised to receive catch-up vaccination; and men aged between 22 and 26 are advised to consider receiving vaccination [35]. Everyone with immunocompromising conditions, including HIV infection, and homosexual men are advised to receive HPV4 before the age of 26. However, in Korea, it is difficult to customize vaccination recommendations by age group, because there has not been sufficient research conducting cost-benefit analyses of different recommendations for people of different ages. The studies that proved the HPV vaccine's efficacy in preventing the aforementioned diseases were conducted mostly in male homosexuals, who, along with HIV patients, have a higher risk of diseases caused by HPV infection. Therefore, Korea should give more consideration to vaccinating homosexual men and those with HIV.

Safety of HPV vaccine

Recently, there have been concerns regarding the safety of the HPV vaccine. However, the World Health Organization (WHO), through the Global Advisory Committee on Vaccine Safety (GACVS), stated twice, in July 2013 and February 2014, that a worldwide, comprehensive analysis of safety information has revealed no safety risk associated with cervical cancer vaccine or HPV vaccine [36,37]. Moreover, in March 2014, in response to concerns raised by some researchers from a Japanese institute, the MFDS stated that the adjuvant contained in the HPV vaccine has not shown any safety risk [38].
Aluminum hydroxide, the cause of the concerns, is an adjuvant (immune-enhancing supplement) widely used to improve the effectiveness of vaccines against hepatitis, pneumococcus, and tetanus-diphtheria-pertussis, and its safety is well established. In fact, the U.S. Food and Drug Administration (FDA) has stated that the maximum amount of aluminum an infant can be exposed to through vaccination cannot influence his or her health [39]. WHO has also stated that the aluminum contained in vaccines is harmless [40].

Priorities regarding pregnant women

Pregnant women have a higher risk of influenza infection and complications compared to the general population. According to a study conducted in 1918 on 1,350 pregnant women with influenza-like illness (ILI), 43% (585 women) developed pneumonia as a complication, of whom 52% (302 women) had a miscarriage [41]. The fatality rate of the subjects was 27%, and it was highest among those in the third trimester of pregnancy. During the 1957 influenza pandemic, influenza accounted for 20% of deaths related to pregnancy, and half of the infected women of childbearing age were pregnant [42]. During the 2009 influenza pandemic, pregnant patients were estimated to be 4 times more likely to be hospitalized than non-pregnant patients, and they had a higher risk of fatality from critical illness [43]. Twenty percent of pregnant women who were hospitalized in the intensive care unit died [44]. Moreover, influenza was found to increase the risk of preterm delivery, low birth weight, and stillbirth [45]. From these findings, WHO designated pregnant women as the highest-priority group for influenza vaccination and advised pregnant women to receive inactivated influenza vaccine regardless of gestation weeks [46]. Research on the disease burden of influenza in Korea is still insufficient. In the 2009 pandemic, out of the 19,727 women of childbearing age who visited 8 hospitals for ILI, 150 pregnant patients were diagnosed with A(H1N1)pdm09, and none of them were critically ill [47]. However, in a study conducted at a teaching hospital, one fatality was reported out of 5 pregnant women hospitalized for A(H1N1)pdm09 [48]. There is no research in Korea showing whether influenza infection increases the risk of preterm delivery, low birth weight, or stillbirth, and more research is needed. Studies evaluating the safety of influenza vaccination during pregnancy for the fetus found that vaccination of a pregnant woman was safe and did not increase the risk of preterm delivery or low birth weight, and that it protected not only the vaccinated women but also their newborn infants from influenza [49-52]. In fact, indirect immunization from the mother is highly beneficial to infants, because infants under 6 months cannot receive influenza vaccination despite their high disease burden. Therefore, pregnant women and women of childbearing age must have the highest priority for influenza vaccination during an influenza epidemic. Pregnant women are advised to receive the vaccination according to the recommended immunization schedule (from October to December) regardless of gestation weeks.
However, the rate of influenza vaccination among Korean pregnant women is only 4 to 20.9%, substantially lower than the vaccination rate among the elderly and people with chronic medical conditions [53-55]. There should be more efforts to raise awareness among pregnant women of the importance of influenza vaccination through public campaigns, education, and free vaccination programs.

Quadrivalent influenza vaccine

The influenza vaccine used in Korea is a trivalent vaccine that contains antigens representing three influenza viruses: A/H3N2, A/H1N1, and B. However, influenza B viruses can be categorized into two lineages according to the type of antigen (B/Victoria and B/Yamagata), and the cross-reactive immunogenicity between the two lineages is insignificant. Every year, WHO recommends vaccine compositions for the Northern and Southern Hemispheres, each according to the lineage expected to be epidemic for the year. However, in the last decade, there were lineage-level mismatches between vaccines and circulating strains of influenza B viruses in 50% of cases. In many cases, influenza B viruses from the two different lineages circulated at the same time [56]. In the U.S., five out of 10 seasons from 2001 to 2011 saw lineage-level mismatches between influenza B viruses in vaccines and the circulating B strains. Similarly, in Europe, there were mismatches between influenza B vaccine strains and circulating B viruses in four out of eight seasons from 2003 to 2011. Moreover, 58% of isolated viruses belonged to a lineage different from the vaccine viruses [56]. According to data provided by the Korea Centers for Disease Control and Prevention (KCDC), Korea has been seeing influenza B viruses of two different lineages in the same season since the 2009 flu pandemic [57-59]. The lineage-level mismatch and simultaneous circulation of influenza B viruses are considered among the most important factors that undermine the effectiveness of influenza vaccines. Starting from the 2013-2014 season, WHO recommends quadrivalent influenza vaccine, which contains influenza B virus strains of both lineages [60]. In clinical trials conducted for approval of quadrivalent influenza vaccine, the vaccine showed non-inferior immunogenicity and no difference in local or systemic adverse reactions compared to trivalent vaccine [61,62]. Also, in a large-scale clinical trial conducted on 5,168 children aged between 3 and 8 years to test the efficacy of quadrivalent vaccine, it prevented 59.3% of influenza cases and was highly effective in preventing moderate or severe influenza infection (74.2%) [63]. Since 2012, four kinds of egg-based quadrivalent influenza vaccines have been developed and approved by the U.S. FDA [56]. Of the four, three are inactivated influenza vaccines: Fluarix® Quadrivalent (GlaxoSmithKline), Fluzone® Quadrivalent (Sanofi Pasteur), and FluLaval® Quadrivalent (ID Biomedical Corporation/GlaxoSmithKline); FluMist® Quadrivalent (MedImmune) is a live-attenuated influenza vaccine. Korea is repeatedly experiencing influenza B epidemics in March and April with mismatches between vaccine lineages and circulating virus strains, which calls for the introduction of quadrivalent vaccines. Quadrivalent influenza vaccines have been developed in Korea and are undergoing clinical trials.
Further studies should be conducted to assess the disease burden of the influenza B virus in children and adults and the cost-effectiveness of quadrivalent vaccines.

Meningococcal vaccine

The meningococcal vaccine available on the Korean market is MenACWY-CRM (Menveo®). Menveo® is a quadrivalent meningococcal conjugate vaccine approved by the MFDS for people aged between 11 and 55 in May 2012. In Korea, Menveo® has been administered to all new recruits in the military since November 2012. In March 2013, the Ministry approved the vaccine for use in children aged between 2 and 11, and in May 2014, the eligible age was expanded to include infants over 2 months old. Menactra®, another quadrivalent meningococcal conjugate vaccine, is expected to enter the Korean market as well. After the approval of Menveo® in Korea, more and more children and adolescents have been receiving vaccination. Individuals with risk factors for meningococcosis, such as complement deficiency, history of splenectomy, or hyposplenism, who received a conjugate vaccine between 2 and 6 years of age need revaccination three years after the first vaccination. If the first vaccination was received after 7 years of age, the person should be revaccinated 5 years later. If the risks continue, revaccination every 5 years is recommended. While there are limited data on meningococcal outbreaks in Korea, more active meningococcal vaccination seems necessary in some age groups in addition to military recruits. According to age analysis of meningococcosis in Korea, the age groups of 0-2 and 10-16 years have high numbers of meningococcosis cases, as in other countries [64,65]. Meningococcus is hard to culture, PCR testing is not widely used, and meningococcal infections tend not to have characteristic symptoms; because of these traits, meningococcus is considered an important cause of bacterial meningitis in adolescents [66]. The U.S. ACIP recommends that people who received a conjugate vaccine between the ages of 11 and 15 be given a dose of revaccination 5 years after their first vaccination. That is because the risk of bacterial meningitis is highest during adolescence, more than 4 years after such an early vaccination, and the antibody titer should be maintained during that period. If the first vaccination was between the ages of 16 and 18, the antibody is maintained for 5 years, until 21 years of age, so revaccination is not recommended [67]. Korea also needs to initiate meningococcal vaccination of adolescents aged between 10 and 16, and related research should be conducted.

Japanese encephalitis vaccine

Following schedule changes for the live-attenuated vaccine, Japanese encephalitis vaccination is now given first to infants 12 to 23 months old, with the second dose 12 months after the first [68]. In the previous schedule, infants 12 to 23 months old received their first vaccination, followed by the second vaccination 12 months later and a third vaccination at 6 years of age. The three-dose schedule was determined not on the basis of research results but on the inactivated vaccination schedule. China reduced the number of doses from three to two (at 8 months and 2 years of age), based on study results. Korea has also changed the schedule from three to two doses, based on the conclusion that the third dose at 6 years of age is unnecessary, since two doses provide sufficient antibodies.
According to one study, children aged between 5 and 7 were found to have antibodies even before the third vaccination. Moreover, Japanese encephalitis vaccination is included in the NIP of Korea [69]. While there were concerns over the safety of the cell substrates used in producing the live-attenuated vaccine, WHO has concluded that it is safe and effective, which allowed it to be included in the NIP. Inactivated Vero cell-derived vaccines (Beijing-1 strain) were approved in Korea in 2013 and have been used in the country since 2014. A Vero cell-derived, genetically recombinant live-attenuated vaccine is produced from a chimeric virus (ChimeriVax-JE), generated by using YF17D, the yellow fever vaccine strain, as a vector in which the genes encoding the prM and E proteins are replaced with the corresponding genes of SA14-14-2. It was approved in 2013 and is scheduled to be available on the market from 2014.

Yellow fever vaccine

In May 2013, the Strategic Advisory Group of Experts (SAGE) of WHO stated that yellow fever vaccination is effective for the lifetime of an individual, and there is no need for revaccination every 10 years [70]. However, there are quarantine requirements associated with yellow fever vaccination, and whether revaccination is required depends on the requirements of each country. Therefore, revaccination every 10 years is still required for travel to some countries.
2018-04-03T05:01:01.519Z
2015-03-01T00:00:00.000
{ "year": 2015, "sha1": "ee9a89d5307ec59e9a327710035532493ba53a1e", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3947/ic.2015.47.1.68", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ee9a89d5307ec59e9a327710035532493ba53a1e", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
49570357
pes2o/s2orc
v3-fos-license
Mouse MRI shows brain areas relatively larger in males emerge before those larger in females

Sex differences exist in behaviors, disease and neuropsychiatric disorders. Sexual dimorphisms, however, have yet to be studied across the whole brain and across a comprehensive time course of postnatal development. Here, we use manganese-enhanced MRI (MEMRI) to longitudinally image male and female C57BL/6J mice across 9 time points, beginning at postnatal day 3. We recapitulate findings on canonically dimorphic areas, demonstrating MEMRI's ability to study neuroanatomical sex differences. We discover, upon whole-brain volume correction, that neuroanatomical regions larger in males develop earlier than those larger in females. Groups of areas with shared sexually dimorphic developmental trajectories reflect behavioral and functional networks, and expression of genes involved with sex processes. Also, post-pubertal neuroanatomy is highly individualized, and individualization occurs earlier in males. Our results demonstrate the ability of MEMRI to reveal comprehensive developmental differences between male and female brains, which will improve our understanding of sex-specific predispositions to various neuropsychiatric disorders.

The models were compared with a likelihood ratio test to assess whether group affected righting reflex time or eye opening score. A third model was run with fixed effects of group, postnatal day, group-postnatal day interaction, sex, and a sex-postnatal day interaction, with a random effect of individual mouse. This third model was compared to the first model with a likelihood ratio test to assess whether there was a group-by-sex interaction. For the open field, time spent in the centre of the open field and total ambulatory distance were analysed using linear models.

Affine and Non-affine Registrations

We illustrated the registration procedure in Supplementary Figure 11 (top row), which shows the registration of the p3 average (source image) to the p5 average (target image). The registration procedure was composed of two stages: affine registration performed using the mni autoreg tools [1], and non-affine registration performed using the ANTs toolkit [2]. The two images were overlaid (middle image, first row), demonstrating that the source image needed to be distorted to fit the target image. After the affine registration was performed, the resultant transformation was used to transform the source image and overlay it on the target image (middle image, second row). The alignment clearly improved but was still unsatisfactory in some brain regions (third row). After the non-affine registration was performed, the source image was transformed using both the affine and the non-affine transformations. This procedure resulted in satisfactory alignment between the two images (fourth and fifth rows).

Jacobian Determinants

Jacobian determinants were used to quantify the volumetric changes caused by deformations. In Supplementary Figure 12, we illustrated the concept of absolute Jacobian determinants. Gridlines in the target image cerebellum were warped upon transformation to the source image. The determinants of this transformation are called absolute Jacobian determinants. They were computed for every voxel, and the resulting voxel map was overlaid on the target image. This illustrated how absolute Jacobian determinants capture the extent to which regions in the source image are smaller or larger than the corresponding regions in the target image.
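As an aside, the voxelwise determinant computation can be sketched numerically. The following toy example in R is not the study's pipeline: it builds a hypothetical 2-D deformation (a radial shrink near the image centre, standing in for a real 3-D registration output) and computes its Jacobian determinant map by finite differences; values below 1 mark regions that are smaller in the source image.

# Toy sketch: Jacobian determinant map of a hypothetical 2-D deformation
# phi that maps target coordinates (x, y) to source coordinates.
n <- 64
x <- matrix(rep(seq(0, 1, length.out = n), each = n), n, n)  # varies along columns
y <- t(x)                                                    # varies along rows
shrink <- 1 - 0.3 * exp(-((x - 0.5)^2 + (y - 0.5)^2) / 0.02) # radial contraction
phi_x <- 0.5 + (x - 0.5) * shrink
phi_y <- 0.5 + (y - 0.5) * shrink

h <- 1 / (n - 1)   # grid spacing
ddx <- function(m) (cbind(m[, -1], m[, n]) - cbind(m[, 1], m[, -n])) / (2 * h)
ddy <- function(m) (rbind(m[-1, ], m[n, ]) - rbind(m[1, ], m[-n, ])) / (2 * h)

# det J = (dphi_x/dx)(dphi_y/dy) - (dphi_x/dy)(dphi_y/dx), voxel by voxel
detJ <- ddx(phi_x) * ddy(phi_y) - ddy(phi_x) * ddx(phi_y)
image(detJ, main = "Jacobian determinants (< 1 where source is smaller)")

The same finite-difference logic extends to 3-D displacement fields, where the determinant of the full 3x3 Jacobian matrix is taken at each voxel.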
A similar procedure was applied to find relative Jacobian determinants. Gridlines in the target image cerebellum were warped upon transformation to the affine-transformed source image. The determinants of this transformation are called relative Jacobian determinants, and their voxel map was overlaid on the target image. Relative Jacobian determinants capture volumetric changes after scaling the brains to the same size.

Generating Atlas Labels

To test for biases, we compared three different sets of atlases (illustrated in Supplementary Figure 13). The first, called the consensus atlas, was used throughout the main study (detailed in Methods). In the second atlas set (called the resampled atlases), we resampled the consensus atlas (the atlas on the p65 average) to each of the other 8 time points using the Level 2 transformations and nearest-neighbour interpolation. This atlas set specifically removes biases in structure volumes associated with resampling done in the Level 2 registration. However, it does not remove biases associated with using a single atlas as the starting point for structure analysis. The bias associated with using a single atlas can only be removed by manual segmentation of age-consensus averages. Since this process is quite labour-intensive, we instead chose to use the MAGeT [3] pipeline to generate atlases for each age from multiple intermediate atlases. First, the p65 atlas and its associated MRI average were transformed to each individual p65 image using the transforms obtained from the p65 Level 1 registration. Each of the individual p65 resampled atlases was then used as a starting atlas in the MAGeT pipeline to segment the average from the earlier adjacent time point: the p36 average. In this pipeline, each atlas was registered to the p36 average, followed by a voxel-voting step to determine the label given to each voxel. At the end of these steps, the p36 average had an atlas overlaid on it, and this atlas never used information that came from Level 2 of our registration. The same procedure was then applied to the p36 atlas to generate the p29 atlas, the p29 atlas was used to generate the p23 atlas, and so on until all time points had atlases associated with them. While a p65 atlas began this procedure, Level 2 registration information was never used, multiple atlases were generated in intermediate steps, and each time point's atlas depended only on the atlas of the immediately older time point. While these measures do not eliminate longitudinal registration bias in this third atlas set (called the voted atlases), they greatly reduce it.

Gene Expression

The underlying gene expression changes associated with sexually dimorphic neuroanatomy (Figure 6) remain unknown. We wanted to identify candidate genes that might be associated with these dimorphisms. To do so, we used the genome-wide spatial gene expression data available from the Allen Brain Institute [4] and compared them to our neuroanatomical results. There are, however, four caveats associated with using gene expression data for our purpose. The first caveat is that genome-wide gene expression data were only collected in males at p56. While some genes have developmental gene expression data at p4, p14, and p28 [5], most genes do not, and there are no gene expression data for females. The second caveat is that for most genes the expression data come from only one mouse.
The third caveat is that the majority of gene expression data were collected using ISH (in situ hybridization) on sagittal slices spanning only one hemisphere. Only a small subset of genes had ISH conducted on coronal slices spanning the whole brain. Lastly, despite extensive quality-control steps taken by the Allen Brain Institute, several regions of the brain were missing gene expression data. To compensate for this, whenever a gene had multiple replicates, we chose the replicate with the least missing data. Furthermore, we excluded experiments where expression data spanned less than 20% of the brain. While these caveats limit the conclusions that can be made from this analysis, spatial gene expression data are still useful in identifying candidate genes associated with sexual dimorphisms for further exploration.

Neuroanatomy Prediction

We wanted to predict the absolute volumes of structures. To model growth, we used natural spline functions. $N$th-order natural splines are characterized by $N$ basis functions of age $t$, where the $k$th basis function is represented by $f_k(t)$. For each structure, the structure volume $y_{ij}$ in the training data was fit with Model (12); the variables in this model are described in the main text. While increasing the order $N$ of the natural splines allows one to model more complex growth curves, it can also overfit the data, leading to inaccurate predictions outside the training set. We computed Bayes Factors, using the BayesFactor package [6], to decide which order $N$ of natural splines to use. Bayes Factors quantify the evidence for a model such as Model (12) against an intercept-only model ($y_{ij} = \alpha_1 + \varepsilon_{ij}$), assuming a reasonable set of priors on the predictors. We used the BayesFactor package's default Jeffreys prior for our analysis. We found the Bayes Factor associated with Model (12) for values of spline order $N$ ranging from one (linear growth) to the number of time points in the data minus one, and the spline order $N$ chosen for the structure's model was the one with the highest associated Bayes Factor. Once we determined the order $N$ of the natural splines modelling fixed growth effects, we similarly modelled random growth effects with $M$th-order natural splines and optimized $M$: the training data were fit with the extended model, Model (13). After fitting multiple models with different values of $M$, we chose the model with the lowest Bayesian information criterion (BIC) [7] to predict structure volumes. Lastly, we also placed a Gaussian weighting on data in the training set depending on age. This step was motivated by the fact that, when predicting volumes at a particular age, the time point closest to that age is the most informative. However, considering only the closest time point may be less informative than also taking some information from the other time points. To balance the two extremes, we placed a Gaussian weighting on the data, centered on the time $t$ at which we want to predict, with a spread parameter ($\sigma^2$) that can be optimized. A high $\sigma^2$ implies that data over all time points are weighted equally by the model, and a low $\sigma^2$ implies that data closest to the prediction time $t$ are weighted more heavily than data further away. This spread parameter was optimized using leave-one-out cross-validation. The model described above was used primarily in our study. However, we also explored two improvements to our model to check for consistency.
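Before turning to those improvements, the spline-order selection step can be sketched in R. This is a minimal illustration rather than the study's exact code: the data frame dat and its columns volume, age, and id are hypothetical stand-ins for the training data, and the Bayes factors use the BayesFactor package's default priors.

library(splines)       # ns(): natural spline basis functions
library(BayesFactor)   # lmBF(): Bayes factors for linear models

# dat: hypothetical training data with columns volume, age, and id (a factor)
orders <- 1:(length(unique(dat$age)) - 1)   # candidate fixed-effect orders N
bf <- sapply(orders, function(N) {
  basis <- as.data.frame(ns(dat$age, df = N))  # N natural-spline basis functions
  names(basis) <- paste0("f", seq_len(N))
  d <- cbind(dat, basis)
  rhs <- paste(c(names(basis), "id"), collapse = " + ")
  # Evidence for the spline model versus an intercept-only model,
  # treating subject id as a random (nuisance) factor
  extractBF(lmBF(as.formula(paste("volume ~", rhs)),
                 data = d, whichRandom = "id"))$bf
})
N_best <- orders[which.max(bf)]   # spline order with the strongest evidence

The random-effect spline order $M$ would then be chosen analogously, by refitting the mixed model at several values of $M$ and keeping the fit with the lowest BIC.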
The first improvement was adding a covariate for total brain volume $V_{i,j}$ at the time point prior to the one being predicted ($j \rightarrow j-1$), which was done to control for whole-brain volume effects in subjects. The model with this covariate extends Model (13), and the fixed spline order $N$ and random spline order $M$ are optimized as detailed above. The second improvement was to use a random forest. Thus far, structures were modelled independently of each other, i.e., a structure's volume at a certain time $t$ was predicted from the same structure's volume at earlier times. Using the random forest machine learning method, we can instead predict a structure's volume from other structures at an earlier time. To do so, we first fit the primary model described above to the training data and obtained the residuals of this model at the age $t$ we wanted to predict. We then took the volumes of all 182 structures in the brain at the immediately earlier time point and used them to model these residuals with a random forest from the randomForest package [8]. The random forest contained 500 trees and randomly sampled 5 structures at each tree split. The final predicted value was the sum of the predicted values from the initial model and the random forest.

Optimizing Growth Models for Absolute Determinants

Absolute volumes required more complex growth curves than relative volumes. To model this growth, we used a procedure similar to that of the previous section. We found the Bayes Factor associated with Model (12) for values of spline order $N$ (fixed effect of growth) ranging from 1 to 8. For 80% of structures, the Bayes Factor was maximized by splines of order $N \leq 6$. The data were then fit with the model defined by Model (13) with $N = 6$, and the optimized $M$ (spline order associated with random-effect growth) was determined by finding the model with the minimum BIC. We found that 95% of structures were best fit by $M \leq 2$. Thus, we chose order-6 natural splines for fixed effects of age and order-2 natural splines for random effects of age to fit absolute Jacobian determinants. We computed the likelihood-ratio statistic comparing this optimized model to a similar one without sex and sex-age interactions to ascertain the significance of sex on absolute determinants.

Registration Bias

The first level in our longitudinal registration consists of nine independent registrations, each of which registers all the scans from one time point to an age-consensus average. As such, nine age-consensus averages, one for each age, are created from the first-level registration. In the second level, each time point's age-consensus average is registered to the age-consensus average of the immediately next time point, with the exception of p65, the final time point. We chose to make the p65 age-consensus average the registration consensus average, i.e., the common space in which all subjects and time points are compared for statistical analysis. While any time point can be chosen for analysis, we picked this age as it is close to the MRI atlas (p60) and the Allen Brain Gene Expression atlas (p56). While it is a necessary part of our analysis, it is important to note that the practice of picking a consensus time point can lead to biases. For example, interpolation bias may be a factor, as one time point (p65 in our case) receives less interpolation than the other time points. Furthermore, these biases may not affect all groups equally [9].
Below, we show that our registration is not biased across sex or individuals, and that the statistics maps generated in our study are similar regardless of the time point chosen as the consensus time point.

Interpolation bias has little effect on voxelwise statistics

We regenerated our statistics map (Figure 4) after picking p17 as the registration consensus average (Supplementary Figure 9). This time point was chosen as it was the median time point in our study, which follows more closely the recommendations in the literature [9]. This map was then resampled to p65 space to facilitate comparison of the two statistics maps. The high similarity of Figure 4 and Supplementary Figure 9 shows that the effect of choosing p65 versus the p17 median time point on the sexual dimorphisms detected is small. Next, we tested whether detection of sexual dimorphisms at any age would be influenced by picking the p65 age-average as the registration consensus average. The two-level registration generates two sets of determinants: one set from Level 1 and one from Level 2. In our main study, we use the determinants from Level 2, as these are all transformed to a consensus p65 space. For every time point, we computed the effect sizes of sex from the Level 1 determinants and from the Level 2 determinants transformed back to the corresponding age-consensus space, and the correlations between the two spatial patterns were high (Supplementary Figure 10). Furthermore, instead of computing effect sizes associated with sex, we repeated the above procedure with arbitrary spatial statistics patterns, which we computed by calculating effect sizes after random permutations of the sex label. The correlations between Level 1 and transformed Level 2 spatial patterns were also quite high and are reported as a density plot (Supplementary Figure 10). Taken together, this indicates that the biases in the statistics maps generated from our longitudinal registration (whether they are associated with sex or not) are minimal.

Atlas-based bias does not discriminate individuals or sex

We compared structure volumes for every subject and at every time point using each of the three sets of atlases: the atlas placed on the p65 brain (called the consensus atlas), the resampled atlases from each age, and the voted atlases from each age. Supplementary Figure 14A shows the volume of the Lobule 1-2 white matter located in the cerebellum. While the consensus and resampled atlases were slightly different from each other, the voted atlas was very different from them both. In fact, according to the voted atlas, the structure does not exist before 5 days of age. This demonstrates that the particular atlas chosen does play a role in the volumes calculated. However, we sought to test specifically whether this effect would apply differently across individuals or sex. To do so, we computed the z-score of the structure, i.e., we subtracted the mean volume at every age and scaled by the volume standard deviation at every age (Supplementary Figure 14B). We observed that the atlas method did not play a significant role in affecting the volumes of individuals or sexes. This was tested using three linear mixed-effects models; the first had z-scored volumes as the response variable, and the full model specifications are given in the caption of Supplementary Figure 14.

Supplementary Fig. 1: Time to right, eye opening, as well as time spent in the centre and total ambulatory distance travelled in the open field, were assessed neonatally across scanned mice (S), their non-scanned littermates (L), and non-scanned controls (C). A) There was no effect of group ($\chi^2_8 = 14.54$, $P = 0.07$), nor was there a group-sex interaction ($\chi^2_4 = 6.59$, $P = 0.16$), on the time it took for pups to right themselves across postnatal days 4, 5, and 6.
B) There was also no effect of group ($\chi^2_{32} = 20.61$, $P = 0.94$) or a group-sex interaction ($\chi^2_{16} = 9.30$, $P = 0.90$) on when eyes opened across postnatal days 10 to 17. For both A and B, linear mixed-effects models were used to create trendlines and bars representing standard error. C) There was no effect of group ($F_{2,29} = 0.47$, $P = 0.63$) or a group-sex interaction ($F_{2,29} = 0.64$, $P = 0.54$) on time spent in the centre of the open field at postnatal day 16, nor was there an effect of group on total ambulatory distance travelled ($F_{2,29} = 1.16$, $P = 0.33$) or a group-sex interaction ($F_{2,29} = 0.17$, $P = 0.85$). Thus, no neonatal behavioural metrics collected were significantly impacted by scanning, and the results were the same across both males and females. These mice were kept for further testing in the open field as adults (postnatal day 65). D) Although there was no group-sex interaction on centre time ($F_{2,26} = 0.91$, $P = 0.41$), there was a significant effect of group ($F_{2,26} = 6.81$, $P = 0.004$), as scanned mice spent less time in the centre of the open field compared to non-scanned controls (post hoc Tukey test, $P_{adj} = 0.003$). Total ambulatory distance, however, did not show any significant differences across group ($F_{2,26} = 1.37$, $P = 0.27$) or by sex within each group ($F_{2,26} = 1.34$, $P = 0.28$). For both C and D, means and standard errors (bars) were calculated using linear models.

Supplementary Fig. 2: Top 5% of largest voxels (relative to whole brain) in males and females, clustered by their effect sizes over time. Clusters 1 and 2 correspond to regions larger in males in adulthood, and these sexual dimorphisms emerge early. Cluster 3 corresponds to regions larger in females and emerges around puberty.

Supplementary Fig. 3: Top 5% of largest voxels in males and females, clustered by their effect sizes over time. As with relative volumes, Clusters 1 and 2 correspond to regions larger in males in adulthood, and these sexual dimorphisms emerge early. Cluster 3 corresponds to regions larger in females and emerges around puberty.

Supplementary Fig. 4: Top 5% of largest vertices in males and females, clustered by their effect sizes over time. As with relative and absolute volumes, Cluster 1 corresponds to regions larger in adult males, with dimorphisms occurring early in development, while Cluster 3 corresponds to regions larger in females, with dimorphisms occurring around puberty. Similarly, Cluster 2 in both the thickness analysis and the volume analysis corresponds to regions larger in males whose dimorphisms emerge after the regions in Cluster 1. However, while the volume analysis had both Cluster 1 and Cluster 2 dimorphisms emerging in the first 10 days of life, the cortical thickness analysis shows dimorphisms in Cluster 2 emerging around male puberty.

Supplementary Fig. 5: Preferential spatial expression of genes involved with sex processes in sexually dimorphic regions. A) Estrogen receptor 1 (Esr1) has biased expression in the BNST, MPON, and MeA; its expression is 2.1 times higher in sexually dimorphic regions than its average expression in the brain. B) GABA-A receptor, subunit theta (Gabrq) has biased expression in the BNST, MPON, and MeA (regions larger in males), as well as the midbrain and hindbrain (regions larger in females), with a fold change of 1.94 relative to the mean.
C) Solute carrier family 6 (neurotransmitter transporter, serotonin), member 4 (Slc6a4) had the highest preferential expression of any gene measured (fold change 3.7), and D) tryptophan hydroxylase 2 (Tph2) had the second highest preferential expression (fold change 3.4).

Supplementary Fig. 6: Using RMSPD to measure accuracy shows a pattern similar to that of RMSD in Figure 8. A) Matrices show RMSPD values between each set of predicted structure volumes (columns, 1 per subject) and observed structure volumes (rows, 1 per subject), shifted such that the diagonal (prediction-observation RMSPD for the same subject) is 0. Red off-diagonals indicate that predictions for Subject X match observations for another subject better than observations for Subject X; the more red off-diagonal terms, the less specific the predictions. Blue off-diagonals indicate specific predictions for Subject X, as they match observations of Subject X better than those of other subjects. Off-diagonal RMSPD values are shown in a density plot (grey), and the diagonal RMSPD values are given by points on the same plot (the median is the vertical line). As data from further in development are included, model predictions become more specific. For example, model prediction specificity for p3 (which only uses data from Subject X at p3 to make predictions for Subject X at p36) is poor (58% chance of off-diagonal terms being blue), while prediction specificity for p10 (which uses data from Subject X at p3, p5, p7, and p10 to make predictions for Subject X at p36) is much better (75% chance of blue off-diagonal terms). B) RMSPD decreases (accuracy increases) as subject data from later in development are used in prediction. Male accuracy improves earlier than female accuracy.

Supplementary Fig. 7: Predicting p65 structure volumes showed patterns of individualisation similar to those for predicting p36 volumes (Figure 8). A) Observed and predicted structure volumes for three representative structures, with averages for each sex given by horizontal lines. The prediction for Subject X at p65 was trained on all data excluding Subject X at p65. The model does not memorize the average, but instead fits individualised patterns in neuroanatomy. B) The matrix shows RMSD values between each set of predicted structure volumes (columns, 1 per subject) and observed structure volumes (rows, 1 per subject), shifted such that the diagonal (prediction-observation RMSD for the same subject) is 0. Red off-diagonals indicate that predictions for Subject X match observations for another subject better than observations for Subject X; the more red off-diagonal terms, the less specific the predictions. Blue off-diagonals indicate specific predictions for Subject X, as they match observations of Subject X better than those of other subjects. Off-diagonal RMSD values are shown in a density plot (grey), and the diagonal RMSD values are given by points on the same plot (the median is the vertical line). C) As data from further in development are included, model predictions become more specific. For example, model prediction specificity for p3 (which only uses data from Subject X at p3 to make predictions for Subject X at p65) is poor (53% chance of off-diagonal terms being blue), while prediction specificity for p17 (which uses data from Subject X at p3, p5, p7, p10, and p17 to make predictions for Subject X at p65) is much better (76% chance of blue off-diagonal terms). D) RMSD decreases (accuracy increases) as subject data from later in development are used in prediction. Male accuracy improves earlier than female accuracy.
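The specificity summary behind these matrices can be sketched in a few lines of R. Here obs and pred are hypothetical subjects-by-structures matrices of observed and predicted volumes with matching row order, and the shift convention (centring each column on its own diagonal entry) is our reading of how the figures were constructed.

# Sketch of the prediction-specificity matrix described above.
rmsd <- function(a, b) sqrt(mean((a - b)^2))

S <- nrow(obs)                            # number of subjects
D <- matrix(NA_real_, S, S)
for (i in 1:S)
  for (j in 1:S)
    D[i, j] <- rmsd(obs[i, ], pred[j, ])  # observation i vs prediction j

Dshift <- sweep(D, 2, diag(D))            # subtract diagonal: own-subject RMSD = 0

# Positive ("blue") off-diagonals mean prediction j fits its own subject
# better than it fits subject i; their fraction summarises specificity.
off <- Dshift[row(Dshift) != col(Dshift)]
specificity <- mean(off > 0)

Replacing rmsd() with a percent-difference version gives the RMSPD variant used in Supplementary Fig. 6.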
Supplementary Fig. 8: Plot of within-group sum of squares versus the number of k-means clusters. Increasing the number of clusters decreases the within-group sum of squares, indicating that cluster members are more similar to each other. However, beyond 4 clusters there are diminishing returns on the within-group sum of squares. This elbow at 4 implies that 4 clusters are appropriate for k-means analysis [10] of our data.

Supplementary Fig. 9: Sexually dimorphic areas in the mouse brain calculated after choosing the p17 age-consensus average as the registration consensus average. Voxelwise statistics were computed as in Figure 4, except that the p17 age-consensus average was chosen as the registration consensus instead of p65. The resultant statistics map was in p17 age-consensus space and was transformed to p65 age-consensus space for comparison to Figure 4. A high correlation was observed between these two maps (r = 0.992), indicating that choosing p65 as the consensus versus p17 (the median time point) incurs little bias in identifying sexually dimorphic regions.

Supplementary Fig. 10: Statistics maps generated without longitudinal registration are similar to those generated with longitudinal registration. Random statistical maps were generated for each time point by permuting sex labels and computing effect sizes comparing males and females. For every permutation, the effect sizes were calculated for Level 1 determinants (agnostic to longitudinal data and in age-consensus space) and Level 2 determinants (dependent on longitudinal registration and in p65 age-consensus space). The effect sizes from Level 2 determinants were transformed to the age-consensus space corresponding to the effect sizes from Level 1 determinants. Correlations were computed between the transformed Level 2 effect-size map and the Level 1 effect-size map, and the correlation was observed to be high for all time points. The dotted lines indicate correlations when the sex labels are not permuted and correspond to true volumetric effect sizes between males and females.

Supplementary Fig. 11: Registration of a source image (p3 average) to a target (p5 average). The native images (after rigid alignment) are in the top row, and their overlay is in the middle column. Poor alignment can be found in structures like the cerebellum, where there is rapid neonatal growth. Affine registration scales and shears the source image to better align it with the target image. The affine transformation (generated from the affine registration) is applied to the source image and shown in the second row. The overlay shows a good match between the affine-transformed source and target images. However, zooming into the cerebellum of the affine-transformed source and target images (third row) shows that affine registration does not produce proper alignment of the cerebellum. This is illustrated by applying a red contour to the cerebellum of the target image and overlaying this contour on the source image. The non-affine registration corrects this discrepancy (fourth row) and produces the best alignment between source and target images (fifth row).

Supplementary Fig. 12: Visualizing deformations caused by transformation of the target image using grids and determinants. As illustrated in the left figure, upon transformation of the target image to the source image (this transformation is the inverse of the transformation in Supplementary Figure 11), gridlines in the target image become warped.
In the top row, the gridlines warp under the transformation to the source image; in the bottom row, the gridlines warp under the transformation to the affine-transformed source. Volumetric changes caused by the transformation can be qualitatively assessed by observing how the volume of a square region (the open space between gridlines) changes after transformation. It is clear from the convergence of gridlines in the cerebellum that much of the cerebellum decreases in size after transformation. This implies that the cerebellum is smaller in the source image than in the target image. Volumetric changes can also be quantified by calculating the determinants (right figure). If a region in the source image is smaller than the corresponding region in the target image (i.e., gridlines converge), the region has determinants between 0 and 1. Conversely, regions larger in the source image (i.e., gridlines diverge) have determinants larger than 1. Absolute determinants (top row) characterize volumetric changes upon transformation from target to source images and measure the true volumetric differences between target and source images. Relative determinants (bottom row) characterize volumetric changes upon transformation from target to affine-transformed source images and measure the volumetric differences between target and source images upon removal of a scaling factor (this scaling factor makes the source and target images the same size, as seen in Supplementary Figure 11, second row). The advantage of absolute determinants is that they can be used to calculate the volumes of regions in canonical units like mm³. Relative determinants, on the other hand, calculate volumes relative to total brain volume. However, relative determinants remove whole-brain size variability (which is the largest source of variability among mice [11]) to expose more subtle variations in neuroanatomy.

Supplementary Fig. 13: Three different sets of atlases used to check for registration bias in structures. The consensus atlas is the MRI atlas registered to the registration consensus average (which is also the p65 age-consensus average). Since all images are registered to this consensus average, we use this atlas alone to quantify structure volumes in our main study. The two additional sets of atlases test for different types of biases. The resampled atlases are created by transforming the consensus atlas to every single age in a single interpolation step, with transformations obtained from Level 2 of our registrations. This atlas set allows us to check for resampling bias as, with these atlases, volumetric information need not be transformed to p65 space prior to quantification of structure volumes. For a given time point, its voted atlas is created by aligning its age-consensus average to the atlas overlaid on every subject of the immediately older time point. A voxel-voting procedure is then taken across all subject atlases to create the voted atlas on the time point's age-consensus average. This method greatly reduces the bias of choosing a single starting adult atlas and transforming it to younger time points: there are multiple intermediate atlases, and each time point is only responsible for creating an atlas for the immediately younger time point.

Supplementary Fig. 14: Registration bias exists with atlases but does not discriminate individuals or sex. A) The volume of the Lobule 1-2 white matter estimated using the three sets of atlases: consensus, resampled, and voted (see Supplementary Figure 13).
Trendlines and standard errors (shaded regions) were obtained by fitting a linear mixed-effects model. We see that registration bias does play a role in volume estimation of small structures like the Lobule 1-2 white matter: before p7, the voted atlases say this structure does not exist, and the other atlases do not agree. B) We computed z-scores, removing the overall mean and standardizing variability across the three sets of atlases at each age. We tested whether registration bias of structure volumes applies equally across all individuals using two linear mixed-effects models: the first model had z-scored volumes as the response variable; fixed effects of time point, sex, and atlas (consensus, resampled, or voted), as well as all interactions; and random effects of individual and the individual-atlas interaction. The second model was the same but lacked the random effect of the individual-atlas interaction. A likelihood ratio test showed that registration bias does not discriminate individuals ($\chi^2_5 = 0.33$, $P > 0.99$). To test whether registration bias was the same between the sexes, we performed a similar analysis, except that the second model lacked all interactions between sex and atlas. We found that registration bias does not discriminate sex ($\chi^2_{12} = 0.08$, $P > 0.99$). All other structures likewise showed little effect of registration bias affecting each individual differently (all uncorrected $P > 0.93$) or each sex differently (all uncorrected $P > 0.999$). This supports our conclusion that, while registration bias may exist in structure volume measurements, this bias applies equally to all individuals and sexes.

Supplementary Fig. 15: Sexually dimorphic voxels when using time point instead of age as a predictor. Our time points are not evenly spaced in development, with more time points concentrated in early life. We assessed whether this has an effect on the sexually dimorphic regions identified by using time point as a categorical predictor. We found more sexually dimorphic voxels compared to Figure 4, but many of the same regions are implicated in both figures.

Supplementary Fig. 16: Sexually dimorphic voxels in the mouse brain after removal of all data corresponding to p65. The figure was generated in a manner similar to Figure 4, except that all data from p65 were removed prior to statistical analysis. This was done because p65 is almost a month after the previous p36 time point, and we wanted to assess whether this type of sampling would have any effect. Since the results are similar to Figure 4, we conclude that it does not affect our study.

Supplementary Fig. 17: Voxels with sexually dimorphic absolute Jacobian determinants. The analysis was similar to Figure 4, except that absolute Jacobian determinants were used as the dependent variable instead of relative determinants. In addition, 6th-order natural splines were used to model the fixed effect of age, and 2nd-order natural splines were used to model the random effect of age. We observed that absolute volumes capture sexual dimorphisms in similar regions of the brain as relative volumes; however, relative volumes capture dimorphisms larger in females in regions such as the somatosensory cortex, midbrain, and pons.
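The bias test described in the caption of Supplementary Fig. 14B can be sketched in R with lme4 (a stand-in for whatever mixed-model software was actually used; the data frame volumes and its column names are hypothetical):

library(lme4)  # lmer(): linear mixed-effects models

# volumes: long-format data with columns volume, age, sex, atlas, id
volumes$age <- factor(volumes$age)   # time point treated as categorical
# z-score volumes within each age so the three atlas sets are comparable
volumes$z <- ave(volumes$volume, volumes$age,
                 FUN = function(v) (v - mean(v)) / sd(v))

# Full model: all fixed-effect interactions plus a random intercept per
# individual and a random individual-by-atlas interaction (the bias term)
m_full <- lmer(z ~ age * sex * atlas + (1 | id) + (1 | id:atlas),
               data = volumes, REML = FALSE)
# Reduced model drops the individual-by-atlas random effect
m_red  <- lmer(z ~ age * sex * atlas + (1 | id),
               data = volumes, REML = FALSE)

# Likelihood ratio test: does atlas choice bias individuals differently?
anova(m_red, m_full)

The sex test proceeds analogously, with the reduced model's fixed effects written as age * sex + age * atlas so that all sex-atlas interactions are dropped.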
2018-07-06T13:12:23.971Z
2018-07-05T00:00:00.000
{ "year": 2018, "sha1": "1b17ad2e6b0f8ad5174bcf7e54b2d73abf42d3e3", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-018-04921-2.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fbc7c55b80ffe367c7f772d666d1439df87875f0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
118574809
pes2o/s2orc
v3-fos-license
PT Symmetric Aubry-Andre Model. The PT symmetric Aubry-Andre model describes an array of N coupled optical waveguides with position-dependent gain and loss. We show that the reality of the spectrum depends sensitively on the degree of disorder for a small number of lattice sites. We obtain the Hofstadter butterfly spectrum and discuss the existence of the phase transition from extended to localized states. We show that rapidly changing periodical gain/loss materials almost conserve the total intensity. I. INTRODUCTION The recent experimental realization of PT symmetric optical systems with balanced gain and loss has attracted a lot of attention [1][2][3]. PT symmetric optical systems lead to interesting results such as unconventional beam refraction and power oscillation [4][5][6], nonreciprocal Bloch oscillations [7], unidirectional invisibility [8], an additional type of Fano resonance [9], and chaos [10]. In PT symmetric optical systems, the net gain or loss of particles vanishes due to the balanced gain and loss mechanism. These systems are described by non-Hermitian Hamiltonians with real energy eigenvalues provided that the non-Hermitian degree is below a critical number, γ_PT. If it is beyond the critical number, spontaneous PT symmetry breaking occurs. This implies that the eigenfunctions of the Hamiltonian are no longer simultaneous eigenfunctions of the PT operator, and consequently the energy spectrum becomes either partially or completely complex. The critical value of the non-Hermitian degree has been shown to be different for planar and circular array configurations [11], and it can be increased if impurities and tunneling energy are made position-dependent in an extended lattice [12]. However, γ_PT decreases with an increasing number of lattice sites [13][14][15][16]; hence the PT symmetric phase is fragile. An important consequence of PT symmetric optical systems is power oscillations. It was shown that the beam power in a one-dimensional tight-binding chain does not depend on microscopic details such as disorder and periodicity [17]. The probability-preserving time evolution in terms of the Dirac inner product for a PT symmetric tight-binding ring was considered [18]. It is interesting to note that the PT operator coincides with the time evolution operator at certain times, which allows perfect state transfer in a PT symmetric optical lattice with position-dependent tunneling energy [19]. The equivalent Hermitian Hamiltonian for a tight-binding chain can also be constructed to understand the non-Hermitian system [20]. In this paper, we investigate a disordered array of PT symmetric tight-binding chains [21][22][23][24]. It is well known that disorder in quantum mechanical systems induces localization. Here, we show that localization occurs if certain conditions are satisfied. Our system is described by the PT symmetric extension of the Aubry-Andre model [25]. The energy spectrum associated with the Hermitian Aubry-Andre model at certain parameter strengths has a fractal structure, known as the Hofstadter butterfly spectrum [26,27]. We also investigate the Hofstadter butterfly spectrum in the presence of non-Hermitian impurities. II. MODEL Consider an array of N coupled optical waveguides with position-dependent gain and loss and constant tunneling amplitude J, through which light is transferred from site to site. We adopt open boundary conditions.
The beam propagation in the tight-binding structure can be described by a set of coupled equations for the electric field amplitudes c_n,

$$i\,\frac{dc_n}{dz} + J\,(c_{n+1} + c_{n-1}) + i\gamma_n\, c_n = 0, \qquad (1)$$

where n = 1, 2, ..., N is the waveguide number and the position-dependent non-Hermitian degree γ_n describes the strength of the gain/loss material, which is assumed to be balanced, i.e., Σ_{n=1}^{N} γ_n = 0. The field amplitude transforms as c_n → c_{N−n+1} under the parity transformation, and complex numbers transform as i → −i under the anti-linear time reversal transformation. Thus the global PT symmetry is lost unless a precise relation between the γ_n holds. To model disorder, the γ_n can be chosen randomly with zero mean [17]. In this case, the system would no longer be PT symmetric and the corresponding energy eigenvalues are not real. Bendix et al. studied a disordered system by considering a pair of N coupled dimers with impurities (γ_n, −γ_n) [14]. They noted that the system is not PT symmetric as a whole (global symmetry), but it possesses a local P_dT symmetry that admits a real spectrum. Here, we consider a disordered system with global PT symmetry and study localization, which is well known to occur in a disordered Hermitian lattice. Consider the following gain/loss parameter

$$i\gamma_n = V \cos(2\pi\beta n + \phi_N) + i\gamma_0 \sin(2\pi\beta n + \phi_N), \qquad (2)$$

where V and γ_0 are constants, β determines the degree of the disorder, and φ_N is a constant phase difference which depends on the total number of sites. We require that gain and loss are balanced, so we demand φ_N = −πβ(N + 1) + φ_0, where the constant φ_0 is an integer multiple of π. Without loss of generality, we take φ_0 = 0. We emphasize that the system is PT symmetric globally. Equation (1) with (2) can be called the PT symmetric Aubry-Andre model [25], which can now be engineered experimentally [1][2][3]. The most interesting result for the Hermitian Aubry-Andre model is that the states at the center of the lattice are localized (Anderson localization) for irrational values of β when V > 2. Apparently, the non-Hermitian character of the Aubry-Andre equation (1) could change the physics of this system dramatically. Note also that the Aubry-Andre model coincides with the Harper model when V = 2 and γ_0 = 0, and the energy spectrum as a function of β is known as the Hofstadter butterfly spectrum, an example of a fractal structure appearing in physics [26,27]. Here, we study the Hofstadter butterfly spectrum and the localization effect for the PT symmetric Aubry-Andre model. It is sufficient to analyze the region 0 < β < 1, since the system repeats itself in equal intervals of β. Furthermore, the energy spectrum is symmetric with respect to the β = 0.5 axis, and the spectrum does not depend on the sign of γ_0. As a special case, if β is either 0 or 1, then the gain/loss terms vanish. If β = 1/2 and N is even, the system has gain and loss with amplitudes ∓iγ_0 at alternating lattice sites. The gain/loss materials change periodically if β is a rational number and quasi-periodically if β is an irrational number. In the latter case, the gain/loss impurities are disordered. Note that β can only be given with a finite number of digits in a real experiment. To increase the incommensurability of β = p/q (p, q being two coprime positive integers), one can choose sufficiently large p and q. Then the system becomes strongly disordered. We look for stationary solutions of equation (1). Suppose first that V = 0. In the absence of gain and loss, the system has the well-known energy spectrum of width 4J: E = −2J cos(nπ/N).
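As a rough numerical companion to the model above, the following sketch builds the finite open-chain Hamiltonian for the stationary problem implied by the reconstructed equations (1)-(2) and checks the reality of the spectrum. Parameter values are illustrative, not taken from the paper, and the hopping sign is immaterial for the spectrum of an open chain:

```python
# Hedged sketch: spectrum of the PT symmetric Aubry-Andre chain.
import numpy as np

N, J, V, gamma0 = 30, 1.0, 0.0, 2.0
p, q = 1, 3                        # beta = p/q; rational -> periodic gain/loss
beta = p / q
phi_N = -np.pi * beta * (N + 1)    # balanced gain/loss, taking phi_0 = 0

n = np.arange(1, N + 1)
onsite = (V * np.cos(2 * np.pi * beta * n + phi_N)
          + 1j * gamma0 * np.sin(2 * np.pi * beta * n + phi_N))

# Open-boundary tight-binding matrix with the complex on-site term (2).
H = np.diag(onsite) - J * (np.eye(N, k=1) + np.eye(N, k=-1))
E = np.linalg.eigvals(H)

print("gain/loss balanced:", np.isclose(onsite.imag.sum(), 0.0))
print("max |Im E|:", np.abs(E.imag).max())   # ~0 in the unbroken PT phase
```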
In the presence of gain and loss, the real parts of the energy eigenvalues, Re{E}, are still contained in [−2J, 2J] for any N. More precisely, the energy width is a decreasing function of |γ_0|. The distribution of Re{E} crucially depends on the strength of disorder through the value of β. It consists of a finite number of bands when β is rational. In this case, Re{E} is a union of bands, and the length of the gap between any two bands depends on q (β = p/q). On the other hand, a fractal structure appears and the spectrum is a Cantor set when β is irrational (for mathematicians, this property is known as the Ten Martini conjecture [29] in the Hermitian limit). Such a fractal structure can be seen in Fig. 1, where we plot the PT symmetric Hofstadter butterfly spectrum at γ_0 = 2, V = 0 (a) and at V = 2, γ_0 = 0.1 (b). An important difference between the V = 0 and V ≠ 0 cases is that the symmetry with respect to the zero-energy axis is lost in the latter case. However, the real part of the energy eigenvalues is symmetric with respect to the β = 0.5 axis for any V. Note also that the width of Re{E} increases with V and takes its maximum value when γ_0 = 0. We show the nice fractal picture for the real part of the spectrum. As can be seen below, the PT symmetry breaking point is very small for large N, and thus the corresponding energy spectrum is not real. However, there exist some special values of β for Fig. 1(b) with an entirely real spectrum. For example, the spectrum is real when β = 1/5. To gain more insight into the role of disorder, let us study how the real and imaginary parts of the energy change with γ_0 for weakly and strongly disordered systems. Fig. 2 plots Re{E} as a function of γ_0 for weak disorder (β = 1/3) and strong disorder (β = 11/30) when N = 30. As can be seen from the figures, the degree of disorder in the lattice has a dramatic effect for large values of γ_0. The real parts of the energy shrink to zero (they become degenerate and the energy width becomes zero) for very large values of γ_0 if the system is periodic, while this is not the case if it is quasi-periodic. However, the corresponding imaginary parts of the energy eigenvalues are different from zero for such large values of γ_0. If the impurity strength γ_0 exceeds a critical point, γ_PT, PT symmetry is spontaneously broken, and thus the energy eigenvectors are not simultaneous eigenvectors of the Hamiltonian and PT operators. In this case, the energy eigenvalues become partially or entirely complex. An important consequence for our system is that strong disorder increases the critical point γ_PT considerably. Fig. 3 plots the imaginary parts of the energy eigenvalues for various values of N at β = 0.6 and at the inverse of the golden ratio, β = (√5 − 1)/2 ≈ 0.618, which is the common choice in the study of the Aubry-Andre model. We find numerically that, due to the disorder, γ_PT increases by a factor of nearly 1.6 when N = 25. However, the number of lattice sites N has the dominant effect on γ_PT, and thus increasing the degree of disorder only slightly changes γ_PT for large N. The critical point decreases with increasing N and approaches zero when N is large. The PT symmetric phase is said to be fragile, since γ_PT tends to zero as N → ∞ [13]. It is well known that the Hermitian Aubry-Andre model, γ_0 = 0, displays a phase transition from extended to exponentially localized states (also known as the metal-insulator transition [28]).
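The dependence of γ_PT on disorder and system size discussed above can be probed numerically. A hedged sketch follows; the helper function, the bisection bracket, and the numerical thresholds are our choices, not the paper's method:

```python
# Locate gamma_PT as the smallest gamma_0 producing a complex eigenvalue.
import numpy as np

def spectrum(N, J, V, gamma0, beta):
    n = np.arange(1, N + 1)
    phi_N = -np.pi * beta * (N + 1)
    onsite = (V * np.cos(2*np.pi*beta*n + phi_N)
              + 1j*gamma0*np.sin(2*np.pi*beta*n + phi_N))
    H = np.diag(onsite) - J * (np.eye(N, k=1) + np.eye(N, k=-1))
    return np.linalg.eigvals(H)

def gamma_pt(N, J, V, beta, tol=1e-4, hi=5.0):
    lo = 0.0   # bisect on max|Im E| crossing a small numerical threshold
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.abs(spectrum(N, J, V, mid, beta).imag).max() > 1e-8:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Compare weak vs. strong disorder at fixed N, in the spirit of Fig. 2.
for beta in (1/3, 11/30):
    print(f"beta = {beta:.4f}  gamma_PT ~ {gamma_pt(30, 1.0, 0.0, beta):.4f}")
```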
Of particular importance is the self-dual point V/J = 2, where the localization transition occurs [25]. Let us now study whether localization takes place for the PT symmetric Aubry-Andre model. Suppose first V = 0. We take the inverse of the golden ratio β = (√5 − 1)/2, J = 1 and N = 49, with the initial condition |c_n(z = 0)|² = δ_{n,25}. We find numerically that the initially localized wave packet delocalizes in time when γ_0 = 2. We repeat the numerical solution for large values of γ_0, but exponentially localized states do not emerge for the PT symmetric Aubry-Andre model. It is interesting that disorder does not induce localization for the PT symmetric Aubry-Andre model, contrary to the Hermitian one. This is because of the fragile nature of the PT symmetric phase. Before the metal-insulator phase transition takes place, the system enters the broken PT symmetric phase and the corresponding intensity grows exponentially; here the total intensity is given by I = Σ_n |c_n|², whose growth rate is controlled by the terms Re{γ_n}|c_n|². The intensity grows exponentially in the broken PT symmetric case, while it oscillates when the energy spectrum is entirely real. Suppose now V ≠ 0. We find numerically the time evolution of a single-site excitation. It is well known that the metal-insulator transition occurs at V = 2 and γ_0 = 0, and the wave packet is localized around the single site when V > 2. If V < 2, the probability |c_25|² goes to zero rapidly with z. The presence of gain/loss changes the dynamics significantly. Although the probability |c_25|² does not rapidly go to zero when γ_0 ≠ 0, this cannot be considered localization in the rigorous sense. This is because the introduction of gain/loss into the system does not conserve the total intensity, and the generated particles enter the system not only at the n = 25th lattice site but also at the other lattice sites. Thus the occupation of the waveguides away from n = 25 is not negligible. For large z, particles are generated even at the edges of the system. A question arises: does the phase transition from extended to exponentially localized states occur if we somehow find a way to make the total intensity bounded for large values of γ_0? [Fig. 4: The total intensity as a function of z for the parameters ω = 0 (dashed) and ω = 3 (solid) at fixed V = 4. It grows exponentially for the static case while it is almost constant for large values of ω. We plot σ(z) for V = 4 (solid) and V = 0 (dashed) at fixed ω = 3. Localization takes place when σ(z) oscillates (non-periodically) and ballistic expansion occurs when σ(z) increases linearly. We take β = (√5 − 1)/2, N = 49, J = 1, and γ_0 = 2 for both plots. We assume that only the n = 25th well is occupied initially.] To answer this question, consider the z-dependent periodic impurity strength [30][31][32][33][34][35]

$$i\gamma_n = V \cos(2\pi\beta n + \phi_N) + i\gamma_0 \cos(2\pi\omega z)\,\sin(2\pi\beta n + \phi_N), \qquad (4)$$

where ω is a constant. Note that the corresponding Hamiltonian is still PT invariant. The gain and loss are also locally balanced after one period. The intensity oscillates in time when ω = 0 if γ_0 < γ_PT. The oscillation is in general not periodic. Introducing a periodically changing impurity, ω ≠ 0, makes the oscillation periodic in z. Increasing ω decreases the period of the intensity. We assert that the intensity is in principle conserved in the limit ω → ∞, since the impurities do not have enough time to transfer intensity to the system. So we expect that rapidly changing impurities practically conserve the intensity. To check this argument, we solve equation (1) numerically.
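To make the diagnostics of this paragraph concrete, here is a hedged propagation sketch for equation (1) with the z-dependent impurity (4). The integrator, z-range, and tolerances are our choices; the variance is computed with the text's definition (see below), without a square root:

```python
# Propagate eq. (1) with the z-periodic gain/loss (4); monitor I(z), sigma(z).
import numpy as np
from scipy.integrate import solve_ivp

N, J, V, gamma0, omega = 49, 1.0, 4.0, 2.0, 3.0
beta = (np.sqrt(5) - 1) / 2
n = np.arange(1, N + 1)
theta = 2 * np.pi * beta * n - np.pi * beta * (N + 1)   # 2*pi*beta*n + phi_N

def rhs(z, c):
    onsite = (V * np.cos(theta)
              + 1j * gamma0 * np.cos(2*np.pi*omega*z) * np.sin(theta))
    hop = np.zeros(N, dtype=complex)
    hop[:-1] += c[1:]
    hop[1:] += c[:-1]
    return 1j * (J * hop + onsite * c)  # from i dc/dz + J(...) + onsite*c = 0

c0 = np.zeros(N, dtype=complex)
c0[24] = 1.0                            # single-site excitation at n = 25
zs = np.linspace(0.0, 20.0, 401)
sol = solve_ivp(rhs, (zs[0], zs[-1]), c0, t_eval=zs, rtol=1e-8, atol=1e-10)

P = np.abs(sol.y) ** 2                  # |c_n(z)|^2
I = P.sum(axis=0)                       # total intensity I(z)
nbar = (n[:, None] * P).sum(axis=0) / I # z-dependent average site
sigma = ((n[:, None] - nbar) ** 2 * P).sum(axis=0) / I
print("I(0) =", I[0], " I(z_max) =", I[-1])  # nearly constant at large omega
```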
We find that the intensity is almost constant when ω = 3, as can be seen from Fig. 4. The disorder has nothing to do with the intensity conservation, and the intensity is almost conserved for any value of β. To predict localization, let us define the variance of the probability distribution as σ(z) = Σ_n (n − n̄)² |c_n|² / P, where n̄ = Σ_n n |c_n|² / P is the z-dependent average site occupation and P = Σ_n |c_n|² [35]. We plot the variance in Fig. 4. A linearly increasing σ(z) with respect to z implies that the wave packet delocalizes (spreads ballistically with z). On the contrary, an oscillating σ(z) shows that the wave packet is localized. We find that the onset of localization appears for the PT symmetric Aubry-Andre model provided that V/J > 2 and the system is disordered. We emphasize that localization does not take place if the system is ordered, i.e. if β is a rational number. In the localization regime, the occupation at the n = 25th well oscillates periodically with z and is almost constant for large values of V. We note that the underlying mechanism of the localization studied here is essentially the same as that of Anderson localization. To summarize, we have studied a PT symmetric tight-binding optical lattice with disordered impurities. We have considered the complex extension of the Aubry-Andre model. We have plotted the complex Hofstadter butterfly spectrum and shown that the reality of the spectrum depends sensitively on the impurity strength and on β. We have shown that the critical point γ_PT increases with the increasing degree of disorder. We have demonstrated that the transition from extended to localized states does not occur for the system described by the static PT symmetric Aubry-Andre model. The metal-insulator transition occurs if the impurities change periodically with z at each site. We have also shown that rapidly changing periodic impurities practically conserve the total intensity.
2014-02-12T07:37:22.000Z
2014-02-12T00:00:00.000
{ "year": 2014, "sha1": "25c42956509145e6bc510edd604ab8bb128dde33", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1402.2749", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "25c42956509145e6bc510edd604ab8bb128dde33", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
9134206
pes2o/s2orc
v3-fos-license
Utility of arterial phase of dynamic CT for detection of intestinal ischemia associated with strangulation ileus. AIM To clarify the usefulness of arterial phase scans in contrast computed tomography (CT) imaging of strangulation ileus in order to make an early diagnosis. METHODS A comparative examination was carried out with respect to the CT value of the intestinal tract wall in each scanning phase, the CT value of the content in the intestinal tract, and the CT value of ascites fluid in the portal vein phase for a group in which ischemia was observed (Group I) and a group in which ischemia was not observed (Group N), based on the pathological findings or intra-surgical findings. Moreover, a comparative examination was carried out in Group I subjects for each scanning phase with respect to average differences between the CT values of the intestinal tract wall where ischemia was suspected and of the intestinal tract wall in non-ischemic areas. RESULTS There were 15 subjects in Group I and 30 subjects in Group N. The CT value of the intestinal tract wall was 41.8 ± 11.2 Hounsfield units (HU) in Group I and 69.6 ± 18.4 HU in Group N in the arterial phase, with the CT value of the ischemic bowel wall being significantly lower in Group I. In the portal vein phase, the CT value of the bowel wall was 60.6 ± 14.6 HU in Group I and 80.7 ± 17.7 HU in Group N, with the CT value of the ischemic bowel wall again being significantly lower in Group I; however, no significant differences were observed in the equilibrium phase. The CT value of the solution in the intestine was 18.6 ± 9.5 HU in Group I and 10.4 ± 5.1 HU in Group N, being significantly higher in Group I. No significant differences were observed in the CT value of the accumulated ascites fluid. The average difference in the CT values between the ischemic bowel wall and the non-ischemic bowel wall for each subject in Group I was 33.7 ± 20.1 HU in the arterial phase, being significantly larger compared to the other two phases. CONCLUSION This is a retrospective study using a small number of subjects; however, it suggests that CT scanning in the arterial phase may be useful for the early diagnosis of strangulation ileus. INTRODUCTION Strangulation ileus is an intestinal obstruction associated with ischemia of the intestinal tract which, if left untreated, results in intestinal necrosis and could become fatal if it progresses through perforation to peritonitis; therefore, it requires timely treatment [1][2][3][4][5]. If it is diagnosed before the development of intestinal necrosis and surgery is performed, it is possible to avoid enterectomy by simply releasing the strangulation. If time elapses, it results in intestinal necrosis, leaving no choice but to perform enterectomy, which may cause complications such as suture rupture and anastomotic stricture. We routinely perform dynamic CT in clinical practice for patients who are suspected of having bowel obstruction in order to rule out mesenteric vascular disease such as superior mesenteric arterial thrombosis, which usually presents with symptoms similar to those of bowel obstruction. Among our surgical cases of bowel obstruction that needed bowel resection, there were some cases demonstrating hypo-attenuating bowel in the arterial phase that showed equivalent attenuation in the other phases.
We therefore hypothesized that the arterial phase of dynamic CT for patients with bowel obstruction is more useful for the early detection of ischemic bowel change than conventional enhanced CT. The objective of this study was to retrospectively review pre-surgical contrast CT images of subjects who underwent ileus surgery at our department and to clarify the diagnostic performance of each scanning phase, particularly the usefulness of arterial phase scans. Subjects: Between January 2004 and January 2011, among 139 subjects in whom a laparotomy was carried out based on the diagnosis of an intestinal obstruction at our department, contrast CT scanning (including the arterial phase) was performed prior to surgery in 65 subjects. Among these, it was difficult to evaluate the blood flow of the intestinal tract in a total of 20 subjects: 11 subjects in whom intestinal tract expansion was not observed after an ileus tube was inserted, 4 subjects in whom it was difficult to evaluate the contrast effect of the intestinal tract after ingestion of the oral contrast agent, and 5 subjects in whom a sufficient amount of contrast agent could not be injected due to their poor general condition. As a result, the remaining 45 subjects were selected for the study. These 45 subjects comprised 31 male and 14 female subjects. The average age was 61.2 years (range: 14-85 years). There were 43 subjects (95.6%) with an intestinal obstruction believed to be caused by a previous laparotomy, composed of: 25 subjects with digestive cancer, 5 subjects with gynecologic cancer, 5 subjects with inflammatory bowel disease, 2 subjects with vascular lesions, and 6 subjects with other causes. Regarding the two subjects having no previous laparotomy, one subject had an intra-abdominal abscess caused by appendicitis, and the intestinal obstruction was caused by adhesions around the abscess. The remaining subject had an intestinal obstruction caused by ileal invasion of rectal cancer. Methods. Scanning method: All patients were scanned on a 16-row multidetector computed tomography (MDCT) scanner (LightSpeed Ultra 16, GE Healthcare), and 300 mgI/mL of a non-ionic contrast agent was used, injecting 130 mL to 140 mL at 3-4 mL/s; scanning was carried out over the range from the diaphragm to the pubic symphysis after 30 s in the arterial phase at a slice thickness of 1.25 mm and after 80 s in the portal vein phase. Scanning was performed after 180 s in the equilibrium phase in 27 subjects. The radiation dose of these series ranged from 40 mGy to 80 mGy. Image analysis method: Sites with no or poor contrast effect, or an intestinal tract with prominent edema, were determined based on agreement between two physicians (Ohira G and Tohma T) experienced in abdominal image diagnosis. The CT value of the intestinal tract wall was measured using an MDCT workstation (Advantage Workstation, GE Healthcare, United States; Virtual Place Advance, Aze, Tokyo). The CT value was measured continuously from the inner side to the outer side of the intestinal tract with poor contrast effect or with marked edema, and the highest value was taken as the CT value of the wall. This was carried out at three arbitrary sites in each scanning phase, and the mean value was calculated (Figure 1). For subjects without poor contrast effect or edema, the CT value was measured in the enlarged intestinal tract in a similar manner.
Moreover, a circular region of interest was set inside the enlarged intestinal tract in the portal vein phase, the CT value of the fluid inside the enlarged intestinal tract was measured at three arbitrary sites, and the mean value was calculated. The same was done for cases in which ascites fluid had accumulated, and a mean value was calculated. Method of determining intestinal tract ischemia: For subjects in whom enterectomy had been performed, ischemia was determined based on the pathological findings. Specifically, necrosis or internal bleeding in the intestinal tract wall was determined as ischemia. For subjects in whom no enterectomy had been performed, the presence of ischemia was determined based on the intestinal tract findings during surgery. Specifically, it was comprehensively determined based on the color of the serosal surface of the bowel and the presence of peristalsis. Test items: A comparative examination was carried out with respect to the CT value of the intestinal tract wall in each scanning phase, the CT value of the solution in the intestinal tract, and the CT value of ascites fluid for a group in whom ischemia was observed (hereafter referred to as Group I) and a group in whom ischemia was not observed (hereafter referred to as Group N), based on the pathological findings or intra-surgical findings. Moreover, a comparative examination was carried out in Group I subjects for each scanning phase with respect to the average differences between the CT value of the intestinal tract wall where ischemia was suspected and that of the intestinal tract wall in non-ischemic areas. Statistical analysis: Comparisons of the CT value of the intestinal tract wall, the CT value of the solution in the intestine, and the CT value of ascites fluid were performed using the Mann-Whitney U test. Differences in the CT value of the intestinal tract wall between scanning phases in Group I were also compared using the Mann-Whitney U test. When the P value was less than 0.05, the difference was considered statistically significant. Analysis was performed using commercially available statistical software (Dr. SPSS II, SPSS for Windows, United States). RESULTS There were 15 subjects in Group I; among these, ischemia was confirmed based on the pathological findings of the resected specimen in 8 subjects and determined based on the intra-surgical findings in 7 subjects. There were 30 subjects in Group N. The CT value of the intestinal tract wall was significantly lower in Group I than in Group N in the arterial and portal vein phases, whereas no significant difference was observed in the equilibrium phase (Table 1). It was possible to evaluate the CT value of the fluid in the intestinal lumen in 40 subjects (13 subjects in Group I and 27 subjects in Group N), and it was significantly higher in Group I. It was possible to evaluate the CT value of accumulated ascites fluid in 30 subjects (12 subjects in Group I and 18 subjects in Group N), and no significant differences were observed (Table 1). The average differences in the CT values between the ischemic bowel wall and the non-ischemic bowel wall in each subject in Group I were measurable in 14 subjects, excluding one subject in whom ischemia was observed throughout the small intestine due to axial rotation (volvulus) of the small intestine. Among these, the equilibrium phase was scanned in 10 subjects. The differences in the CT value between the ischemic bowel wall and the non-ischemic bowel wall in the arterial phase, portal venous phase, and equilibrium phase were 33.7 ± 20.1 HU, 12.4 ± 15.0 HU, and 15.0 ± 13.0 HU, respectively.
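For illustration, the group comparison just described can be reproduced in outline as follows. The arrays are hypothetical stand-ins matching only the reported summary statistics for the arterial phase; the authors used SPSS, not Python:

```python
# Hedged sketch: Mann-Whitney U test on wall CT values (HU) per group.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Simulated values roughly matching the reported means/SDs (arterial phase).
group_i = rng.normal(41.8, 11.2, size=15)   # ischemic group (Group I)
group_n = rng.normal(69.6, 18.4, size=30)   # non-ischemic group (Group N)

u, p = mannwhitneyu(group_i, group_n, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4g}")          # p < 0.05 -> significant
```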
Compared to the other two phases, the difference in the arterial phase was significantly greater. DISCUSSION There are a number of reports claiming that CT is useful in the diagnosis of strangulation ileus. Regarding the CT findings of strangulation ileus, poor contrast of the intestinal tract wall [6][7][8][9][10], edema and thickening of the mesentery [6], pneumatosis of the intestinal tract [7], convergence of the mesentery or whirl-like (axle-shaped) change [9], accumulation of ascites fluid [11], elevation of the CT value of the solution in the intestinal tract [19], etc., have been reported. Strangulation ileus is associated with ischemia of the intestinal tract and is an indication for emergency surgery. It is comparatively easy to diagnose cases clearly showing necrosis or cases that have progressed to necrosis, making it less difficult to determine the surgical indication; however, it may be difficult to decide on emergency surgery for cases with atypical imaging findings or poor physical findings. In these cases, enterectomy can be avoided by diagnosing the intestinal tract ischemia at an early stage and performing surgery, making it possible to reduce the mortality rate. In this retrospective study, no significant difference was observed between the CT value of the ischemic bowel wall and that of the non-ischemic bowel wall in the equilibrium phase; however, the value was significantly lower in the arterial phase and the portal vein phase. This suggests that early intestinal tract ischemia, which cannot be captured by scanning in the equilibrium phase, can be captured by scanning in the arterial phase and the portal vein phase. Moreover, when the mean differences in the CT value between the ischemic bowel wall and the non-ischemic bowel wall were compared between scanning phases for cases of ischemia, the difference in the arterial phase was significantly larger than the differences in the portal vein phase and the equilibrium phase. The larger the difference in the CT value, the more contrast is provided on the image, making the finding visually detectable; therefore, scanning in the arterial phase was considered most useful for the early diagnosis of intestinal tract ischemia (Figure 2). In the study of the CT value of ascites fluid and of the solution in the intestinal tract, no significant differences were observed between Group I and Group N for the ascites fluid; however, the value was significantly higher in Group I for the solution in the intestinal tract. This is considered to be indicative of a texture change of the liquid contents associated with ischemia and necrosis of the mucous membrane appearing in the initial stage of strangulation ileus. However, the difference in the mean CT value is small, at less than 10 Hounsfield units, making it impossible to detect visually. The mechanism by which the contrast effect of the arterial phase decreases in the ischemic bowel wall is not clear. Chou et al [20] reported that the hemodynamics of the strangulated, closed-loop intestinal tract wall reach a state similar to venous occlusion of the mesentery. That is, the arterial blood flow in the strangulated mesentery is at a higher pressure, causing the venous blood flow to be interrupted first while the arterial blood flow continues to flow in, resulting in an increase in the pressure inside the intestinal tract wall and a subsequent decrease in the arterial blood flow.
By scanning in the arterial phase, it is believed to be possible to capture this blood flow change at an early stage, leading to an early diagnosis of strangulation. There are some limitations associated with this study. The scanning timing of the arterial phase was set at 30 s after injection of the contrast agent; however, it has been noted that the actual arterial phase varies depending on age and general condition [21][22][23][24][25][26]. It would have been possible to evaluate the arterial phase more accurately if a computer-assisted automatic bolus-tracking technique had been used. Moreover, this is a retrospective study of patients in whom treatment had been completed; therefore, it is necessary to study the diagnostic performance of arterial phase scanning for strangulation ileus in prospective clinical studies in the future. This is a retrospective study using a small number of subjects; however, our findings suggest that CT scanning in the arterial phase may be useful for making an early diagnosis of strangulation ileus. Background: Strangulation ileus is an intestinal obstruction associated with ischemia of the intestinal tract which, if left untreated, results in intestinal necrosis and could thus become fatal if it leads to perforation followed by peritonitis; it therefore requires timely treatment. Research frontiers: Computed tomography (CT) is reported to be useful for diagnosing strangulation ileus, but few reports have so far discussed differences in diagnostic effectiveness between the scanning phases of CT. In this study, the authors demonstrate the usefulness of the arterial phase of dynamic CT for the early detection of strangulation ileus. Innovations and breakthroughs: This is the first study to show the usefulness of the arterial phase of dynamic CT by measuring the CT value of the ischemic bowel walls. Applications: By scanning the arterial phase of dynamic CT in ileus patients, it may be possible to diagnose strangulation earlier and thereby avoid performing unnecessary bowel resections in some cases. Peer review: It is worth publishing except for a few minor corrections.
2018-04-03T00:05:59.405Z
2012-11-28T00:00:00.000
{ "year": 2012, "sha1": "3395aabcc7b9af03aaba110c73b083cf2c2c3679", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4329/wjr.v4.i11.450", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "0896a6d3bcc39cb54ff76dec108ea671fded3dfe", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245651105
pes2o/s2orc
v3-fos-license
Plant-based dietary changes may improve symptoms in patients with systemic lupus erythematosus Introduction Previous studies have reported that patients affected by systemic lupus erythematosus (SLE) are interested in using diet to treat fatigue, cardiovascular disease and other symptoms. However, to date, there is insufficient information regarding how patients can modify their diet to improve SLE symptoms. We investigated the relationship between the eating patterns of SLE patients and their self-reported disease symptoms and general aspects of health. Methods A UK-based, online survey was developed, in which patients with SLE were asked about their attitudes and experiences regarding their SLE symptoms and diet. Results The majority (>80%) of respondents who undertook new eating patterns with increased vegetable intake and/or decreased intake of processed food, sugar, gluten, dairy and carbohydrates reported benefiting from their dietary change. Symptom severity ratings after these dietary changes were significantly lower than before (21.3% decrease, p<0.0001). The greatest decreases in symptom severity were provided by low/no dairy (27.1% decrease), low/no processed foods (26.6% decrease) and vegan (26% decrease) eating patterns (p<0.0001). Weight loss, fatigue, joint/muscle pain and mood were the most cited symptoms that improved with dietary change. Conclusion SLE patients who changed their eating patterns to incorporate more plant-based foods while limiting processed foods and animal products reported improvements in their disease symptoms. Thus, our findings show promise for the use of nutrition interventions in the management of SLE symptoms, setting the scene for future clinical trials in this area. Randomised studies are needed to further test whether certain dietary changes are effective for improving specific symptoms of SLE. Lay Research Summary How you can help influence a new research proposal. We are a group of researchers at University College London (UK) studying patients with Lupus. We want to increase our understanding of what causes the disease so that we can improve and develop new treatments for patients. We have a new and exciting idea that we believe could help to reduce disease severity. Before we start this project, we would like to find out from you whether you have any experiences that could help us improve our research. We are interested in how diet can influence the immune system (the body's natural defense system). We have found that certain foods are linked with harmful inflammation in the body and disease flares. We would like to find out whether altering the diet could have a beneficial effect on Lupus by decreasing inflammation. Therefore, greater knowledge of how diet has affected patient experience with Lupus would help us understand the potential of this new idea. We hope this approach could reduce the dependence of patients on drugs. Before we begin this research, we would like to ask you some short questions to help us understand your experience with diet and Lupus. Your input is of huge value to our research. Please click "Next" below to answer these questions. All of us at the UCL Centre for Rheumatology Research would like to extend a grateful thank you for your time and support in answering these questions; it is only together that we can really make progress in understanding this disease. By completing this questionnaire, you are consenting to us sharing your responses. All of your answers will be completely anonymous.
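As an illustration only: a paired before/after severity comparison of the kind summarized in the Results above could be computed as sketched below. The ratings are hypothetical, and the Wilcoxon signed-rank test is our assumption, since the excerpt does not state which paired test produced p<0.0001:

```python
# Hedged sketch of a paired before/after symptom-severity comparison.
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical 0-10 severity ratings for the same respondents.
before = np.array([7, 6, 8, 5, 7, 9, 6, 7, 8, 6], dtype=float)
after  = np.array([5, 5, 6, 4, 5, 7, 5, 6, 6, 5], dtype=float)

stat, p = wilcoxon(before, after)
pct_decrease = 100 * (before.mean() - after.mean()) / before.mean()
print(f"W = {stat}, p = {p:.4g}, mean decrease = {pct_decrease:.1f}%")
```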
Supplementary Data. Free-text responses to: "How did you change your eating habits? Please explain." (responses reproduced verbatim; numbering gaps reflect the excerpt)
1. Changed to a low carb diet
2. Restricted red meat and cut out spicy and acidic foods. Also omitted fizzy drinks.
3. Saw a nutritionist who prescribed an exclusion diet and used paleo diet - gluten free dairy free plus plenty of good protein omega 3 leafy green vegetables. I kept a food diary. Avoided bought gluten free foods as full of additives and sugar. I avoided sugar and this significantly reduced inflammation
4. Avoid eating mushrooms. Take meal replacements protein shake for dinner to reduce food intake
5. By avoiding certains kinds of food and adding more of others.
6. fasting
7. Cut out carbs completely and all meat except fish.
9. Generally eat a mediterranean diet. Lowered levels of processed foo
10. Went on the autoimmune protocol
11. cut out alcohol & caffiene reduced sugar content
12. Reduced sugar and carbohydrate intake.
14. Gluten free more plant based less processed foods
15. Eating more healthy, less fatty foods
16. Tried Gave up meat and processed foods. Just trying to become more healthy and balanced all round
70. I try to avoid immune system boosting foods
72. Vegetarian for health and ethical reasons, avoid garlic and alfalfa sprouts because they trigger lupus flares. I limit processed sugar and white flour.
73. Had pancreatitis a few times because of lupus and pbc, so keep to a low fat diet, avoid spicy foods
75. Green smoothies and trying to eat a Plant base diet
76. Avoided buying junk food and processed foods often and cooking more rather than eating out/or ordering takeout
77. Changed to a vegetarian diet with limited process foods
78. Low

Table 2 (reconstructed from flattened layout). Proposed mechanisms by dietary change:

Weight loss: High intake of fibre and water reduces the caloric density of the overall diet while increasing satiety and energy expenditure (1). The resultant decrease in white adipose tissue (WAT), an active and inflammatory organ that releases adipokines, which are molecules that contribute to inflammation in rheumatic disease (2).

Restricting processed foods, refined carbohydrates and sugar: Commonly used industrial food additives such as gluten, glucose, salt and emulsifiers breach the integrity of the intestinal-epithelial barrier, resulting in entry of foreign immunogenic antigens and activation of the autoimmune cascade (3). Regular consumption of excess free fructose contributes to intestinal accumulation of advanced glycation end-products that cross the intestinal-epithelial barrier and promote inflammation in tissues (4). High intake of processed foods leads to excess calorie intake and weight gain and is associated with increased biomarkers of inflammation (5-9).

Restricting meat/animal products: Large intake of ω6-polyunsaturated fatty acids (PUFA), saturated and trans-fatty acids has pro-inflammatory and aggravating effects on SLE symptoms (10). Moderate protein intake is associated with better immune function and delay in autoimmunity (10,11).

Increasing intake of vegetables, fruit, legumes and wholegrains: Fibre, vitamins, minerals, isoflavones, phytochemicals, PUFA and other plant metabolites have anti-inflammatory effects. Many of these components positively diversify the gut microbiota and mediate metabolic, inflammatory and immunity pathways (12). High dietary fibre improves the synthesis of short chain fatty acids (SCFAs) in the metabolome and decreases the level of harmful free radicals involved in the disease state (13,14).
High intake of ω3 PUFA reduces levels of pro-inflammatory cytokines such as IL-1, IL-6 and TNF (15). ω3 PUFA are also essential for the synthesis of eicosanoids, regulators of the inflammatory cascade (13). Polyphenols from plants such as flavonoids have anti-inflammatory and antioxidant activity (16).

Mood improvement: Plant metabolites such as phytochemicals boost mood through their serotonergic, noradrenergic and dopaminergic effects (17). ω3 PUFA modulate serotonin receptors in the cortex and hippocampus and increase brain-derived neurotrophic factor (BDNF) expression (17). Vitamin C and magnesium antagonise activity at N-methyl-D-aspartate (NMDA) receptors and increase BDNF (18,19). A decrease in pro-inflammatory molecules lessens their negative impact on the circuitry of depression-related brain regions, such as the anterior cingulate cortex, amygdala and insula (20)(21)(22).

Table 2 caption: The literature identifies the main mechanisms through which WFPB (whole-food, plant-based) diets could contribute to decreased inflammation and reduction in symptoms in SLE patients.
2022-01-04T06:22:56.859Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "263a5c24045e2a61f3ca902168d4ee75ef70d670", "oa_license": "CCBY", "oa_url": "https://discovery.ucl.ac.uk/10141408/1/09612033211063795.pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "842dd6c41e27696669b99e8b4aefabd0736ae30c", "s2fieldsofstudy": [ "Medicine", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
253841620
pes2o/s2orc
v3-fos-license
Editorial: Use of small peptides in the treatment of inflammatory diseases Inflammation is essential in the resolution of infection or tissue damage. However, excessive inflammation, which sometimes lasts for years, leads to numerous inflammatory disorders such as cardiovascular, neurodegenerative, or gastrointestinal diseases, cancer, diabetes mellitus, or even infections. Small peptides are promising therapeutic tools, as they have been shown to present higher specificity (especially peptides with allosteric properties) and binding affinity than small molecules, and reduced immunogenicity and toxicity compared with biologics. Moreover, some of them have a dual anti-inflammatory and antimicrobial activity. The source of therapeutic peptides is almost inexhaustible, as they can have natural or designed origins. Interestingly, peptidic compounds entering clinical trials are more likely to be approved. In the last 20 years, novel designs, delivery strategies, and improvements in peptide production and modification have led to a total of 33 approved peptide drugs, and more than 170 peptides are in clinical trials (Wang et al., 2022). In their comprehensive review focused on the state of clinical trials on noncancer dermatological biologics in China, Zhu et al. show that the number of dermatological biologic trials in China surged between 2016 and 2020, primarily driven by psoriasis trials.
To control undesirable inflammation with reduced side effects, it is desirable to target a particular signaling pathway, cell type, tissue, or organ, without increasing susceptibility to infections or to diseases secondary to the treatment. In addition to those considerations, the development of promising immunomodulatory peptide candidates needs to contemplate when and how they will best contribute to inflammation resolution. As a first step to comply with those requirements, it is necessary to elucidate the origin and potential functions of the diverse immune cells present in a tissue at a given time. In particular, macrophages' multiple functions in wounds or infections, such as the induction and resolution of inflammation, the removal of apoptotic cells, cell proliferation, and tissue repair, but also the promotion of excessive inflammation leading to various disorders, make macrophages a promising therapeutic target to control the balance between necessary and excessive inflammation. In their article, Golden et al. investigated the origin of the different macrophage populations present in the acute lung injury and resolution phases of the intratracheal bleomycin mouse model. In this model of acute inflammation, tissue-resident macrophages in the lung are downregulated in the acute phase of inflammation before being regenerated in the resolving phase of inflammation. In contrast, monocyte-derived macrophages are recruited to the lung during the acute phase of inflammation and are responsible for the expression of iNOS and the excess inflammation that leads to acute lung injury. These results stress the concept that, depending on the inflammatory stage, macrophages exhibit a particular phenotype and activation state, making individual populations of macrophages a promising target for modulating different inflammation stages. In another study, Lee et al. developed a mouse leukocyte migration assay using a lower uterine extract chemoattractant that could be used as a diagnostic tool for pre-term birth. With this test, as a proof of concept, the authors showed that IL-1beta stimulated pre-term birth by activating neutrophils, leading to increased uterine and fetal brain activation. This IL-1beta stimulation was inhibited by the rytvela peptide, an allosteric antagonist of the IL-1 receptor that selectively inhibits IL-1R signaling through the mitogen-activated protein kinase p38 pathway, but not the nuclear factor kappa B pathway, enabling immunosurveillance. Again, this work underscores how a better appreciation of the biological role of potential therapeutic targets is fundamental in the search for and use of new immunomodulatory candidates. Another challenge in controlling undesirable inflammation is directing the therapeutic peptide to a specific tissue or organ at the right moment. One of the drawbacks of using peptides as therapeutic agents is their poor membrane permeability, pinpointing the drug delivery system as key to the success of the therapy. Different strategies have been developed in past years, such as coformulation with permeation enhancers or implantable pumps (Farra et al., 2012; Knudsen et al., 2019). Due to their biochemical properties and skin-mimicking structure, hydrogels loaded with bioactive peptides are a promising strategy for wound healing and tissue restoration. Hao et al.
show that a chitosan/alginate hydrogel combined with short-chain peptides isolated from velvet antler blood (a remedy used in traditional Chinese medicine) contributes to rapid wound healing and skin repair. Indeed, the biochemical and biophysical properties of the hydrogel loaded with the velvet antler blood peptides give it a combined biological activity, i.e., antibacterial capacity and inhibition of excessive inflammation, resulting in local skin repair. Another limitation in peptide drug development is poor in vivo stability. Due to their composition and structure, small peptides are susceptible to degradation by various enzymes and are rapidly eliminated in vivo. In recent years, significant breakthroughs in chemical peptide synthesis, as well as rational design, the use of all-D peptides, and phage display, have been developed to address this limitation. In their article, to overcome the limitation of poor in vivo stability, Luo et al. used a strategy based on the inhibition of endogenous enzymes. Using a model of mouse colitis in which the anti-inflammatory role of enkephalins contributes to lessening the inflammation of the colon, the authors showed that central administration of human opiorphin, the natural inhibitor of enkephalinase, suppresses the activity of natural endo- and amino-peptidases, thus favoring higher serum levels of enkephalins and improvement of the colitis. In conclusion, this research topic highlights some of the key points to be considered for the development of successful peptide drugs: the identification of therapeutic targets and understanding of their mechanisms of action, and the improvement of in vivo stability and delivery. Author contributions All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
2022-11-24T14:59:39.813Z
2022-11-24T00:00:00.000
{ "year": 2022, "sha1": "680011a11844e0dfc035bbd03fed171b725eb7d0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "680011a11844e0dfc035bbd03fed171b725eb7d0", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
238259095
pes2o/s2orc
v3-fos-license
Conditional stability up to the final time for backward-parabolic equations with Log-Lipschitz coefficients. We prove logarithmic conditional stability up to the final time for backward-parabolic operators whose coefficients are Log-Lipschitz continuous in $t$ and Lipschitz continuous in $x$. The result complements previous achievements of Del Santo and Prizzi (2009) and Del Santo, Jäh and Prizzi (2015), concerning conditional stability (of a type intermediate between Hölder and logarithmic) arbitrarily close to, but not up to, the final time. Introduction In real-world models, deterministic diffusion processes are often irreversible. Consider for example the heat equation $\partial_t u = \Delta u$ with Cauchy data $u(0,x) = u_0(x)$. The forward initial value problem is well posed in an appropriate space of physically meaningful configurations, but the evolution has a strong regularizing effect, so when one tries to reconstruct an initial configuration $u(0,x)$ from a final observation $u(T,x)$ at a positive time $T$, one needs to impose regularity conditions on $u(T,x)$, while in general the backward problem with Cauchy data at $T$ has no solution. However, in a physical context an observation at a final time $T$ records the configuration resulting from an actual evolution, so the problem of existence is less relevant than those of uniqueness and sensitivity to errors in measurements. In [22] John introduced the notion of a well-behaved problem for ill-posed problems. According to John, a problem is well-behaved if "only a fixed percentage of the significant digits need be lost in determining the solution from the data" [22, p. 552]. More precisely, a problem is well-behaved if its solutions in a space $\mathcal{H}$ depend Hölder continuously on the data belonging to a space $\mathcal{K}$, provided the solutions satisfy a prescribed a priori bound. Following the literature, we call conditional stability any continuous dependence (possibly weaker than Hölder) which is subordinated to a prescribed a priori bound. In this paper we carry on the investigation of the conditional stability of backward solutions for a general parabolic equation. For ease of notation we reformulate the problem by inverting the sign of the time variable, so we deal with (forward) solutions of the backward-parabolic equation

$$\partial_t u + \sum_{i,j=1}^{n} \partial_{x_i}\!\big(a_{ij}(t,x)\,\partial_{x_j} u\big) = 0 \qquad (1.1)$$

on the strip $[0,T]\times\mathbb{R}^n$. We assume throughout the paper that the matrix $(a_{ij})_{i,j=1}^n$ is symmetric and positive definite and that the coefficients $a_{ij}$ are at least Lipschitz continuous in $x$ and Hölder continuous in $t$. These are the standard regularity assumptions which guarantee the (forward) well-posedness of forward-parabolic equations in $H^s$, $0 \le s \le 2$ (see e.g. [2]). We denote by $\mathcal{H}$ the space of admissible solutions of (1.1). In [1] Agmon and Nirenberg proved, among other things, that the Cauchy problem for (1.1) on the interval $[0,T]$ is well-behaved in the space $\mathcal{H}$ with data in $L^2(\mathbb{R}^n)$ on each subinterval $[0,T']$ with $T' < T$, provided the coefficients $a_{ij}$ are sufficiently smooth with respect to $x$ and Lipschitz continuous with respect to $t$. In order to achieve their result they developed the so-called logarithmic convexity technique. The main step consists in proving that the function $t \mapsto \log\|u(t,\cdot)\|_{L^2}$ is convex for every solution $u \in \mathcal{H}$ of (1.1). In the same year Glagoleva [17] obtained essentially the same result for a concrete operator like (1.1) with time-independent coefficients. Her proof rests on energy estimates obtained through integration by parts.
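To make the logarithmic convexity step concrete, here is a minimal version of the computation in the model autonomous case $u'(t) = Au(t)$ with $A$ self-adjoint on $L^2$ (our simplification; [1,17,19] treat the general time-dependent operator):

```latex
% Model computation behind the logarithmic convexity technique.
\[
E(t) = \|u(t)\|_{L^2}^2, \qquad
E'(t) = 2\,(Au,u), \qquad
E''(t) = 4\,\|Au\|_{L^2}^2,
\]
\[
\frac{d^2}{dt^2}\log E(t)
= \frac{E''E - (E')^2}{E^2}
= \frac{4\big(\|Au\|^2\|u\|^2 - (Au,u)^2\big)}{E^2} \;\ge\; 0
\]
% by the Cauchy--Schwarz inequality, so t \mapsto \log\|u(t)\|_{L^2} is
% convex. Convexity then yields the interpolation bound
\[
\|u(t)\|_{L^2} \;\le\; \|u(0)\|_{L^2}^{\,1-t/T}\;\|u(T)\|_{L^2}^{\,t/T},
\qquad t \in [0,T].
\]
```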
Some years later Hurd [19] developed the technique of Glagoleva to cover the case of a general equation of type (1.1), with coefficients depending Lipschitz continuously on time. The results of [1,17,19] can be summarized as follows: Theorem A. Assume the coefficients $a_{ij}$ are Lipschitz continuous with respect to $t$. For every $T' \in (0,T)$ and $D > 0$ there exist $\rho > 0$, $0 < \delta < 1$ and $K > 0$ such that, if $u \in \mathcal{H}$ is a solution of (1.1) on $[0,T]$ with $\|u(0,\cdot)\|_{L^2} \le \rho$ and $\|u(t,\cdot)\|_{L^2} \le D$ on $[0,T]$, then

$$\sup_{t\in[0,T']} \|u(t,\cdot)\|_{L^2} \;\le\; K\,\|u(0,\cdot)\|_{L^2}^{\delta}.$$

The constants $\rho$, $K$ and $\delta$ depend only on $T'$ and $D$, on the positivity constant of the matrix $(a_{ij})_{i,j=1}^n$, on the $L^\infty$ norms of the coefficients $a_{ij}$ and of their spatial derivatives, and on the Lipschitz constant of the coefficients $a_{ij}$ with respect to time. As $T'$ approaches $T$, the constant $K$ above blows up, while $\delta$ decays to 0, so one cannot expect the solutions to be well-behaved up to the final time $T$. From the physical point of view, going back to the forward parabolic equation, this means that the reconstruction of the past from observations at the final time $t = T$ worsens more and more as one gets closer to the initial time $t = 0$. Yet, as was proved by various authors (e.g. Imanuvilov and Yamamoto [20], Yamamoto [27], Isakov [21]), some kind of conditional stability for the backward-parabolic equation (1.1) up to the final time $T$ can be recovered if one settles for integral estimates rather than pointwise estimates. Moreover, pointwise estimates can be recovered by imposing stronger a priori bounds on the solutions. In any case, however, one does not get Hölder dependence but only logarithmic dependence on the data. The results of [20,27,21] can be summarized as follows: Theorem B. Assume the coefficients $a_{ij}$ are Lipschitz continuous with respect to $t$. For every $D > 0$ there exist $\rho > 0$, $0 < \delta \le 1$ and $K > 0$ such that, if $u \in \mathcal{H}$ is a solution of (1.1) on $[0,T]$ with $\|u(0,\cdot)\|_{L^2} \le \rho$ and $\|u(t,\cdot)\|_{L^2} \le D$ on $[0,T]$, then $u$ obeys a logarithmic conditional stability estimate up to the final time $T$. The constants $\rho$, $K$ and $\delta$ depend only on $D$, on the positivity constant of the matrix $(a_{ij})_{i,j=1}^n$, on the $L^\infty$ norms of the coefficients $a_{ij}$ and of their spatial derivatives, and on the Lipschitz constant of the coefficients $a_{ij}$ with respect to time. In all the above-mentioned results, the Lipschitz continuity of the coefficients $a_{ij}$ with respect to time plays an essential role. The possibility of replacing Lipschitz continuity by simple continuity was ruled out by Miller [26] and more recently by Mandache [23]. They constructed examples of operators of the form (1.1) which do not enjoy the uniqueness property in $\mathcal{H}$. In the example of Miller the coefficients $a_{ij}$ are Hölder continuous in time, while in the more refined example of Mandache the modulus of continuity $\mu$ of the coefficients $a_{ij}$ with respect to time needs only to satisfy $\int_0^1 (1/\mu(s))\,ds < +\infty$. On the other hand, in [9,11,12] it was proved that if $\mu$ satisfies the Osgood condition, i.e. $\int_0^1 (1/\mu(s))\,ds = +\infty$, then equation (1.1) enjoys the uniqueness property in $\mathcal{H}$. Therefore it would be natural to conjecture that if the Osgood condition is satisfied, then the Cauchy problem for (1.1) is well-behaved in $\mathcal{H}$ with data in $L^2(\mathbb{R}^n)$. Unfortunately this is not true, as shown by a counterexample in [10]. Nevertheless, if the coefficients $a_{ij}$ are Log-Lipschitz continuous in time, it was shown in [10,8] that a weaker conditional stability result holds: Theorem C. Assume the coefficients $a_{ij}$ are Log-Lipschitz continuous with respect to $t$.
For every $T'\in(0,T)$ and $D>0$ there exist $\rho>0$, $0<\delta<1$, $K>0$ and $N>0$ such that, if $u\in\mathcal{H}$ is a solution of (1.1) on $[0,T]$ with $\|u(0,\cdot)\|_{L^2}\le\rho$ and $\|u(t,\cdot)\|_{L^2}\le D$ on $[0,T]$, then $u$ satisfies on $[0,T']$ a conditional stability estimate of a type intermediate between Hölder and logarithmic. The constants $\rho$, $K$, $N$ and $\delta$ depend only on $T'$ and $D$, on the positivity constant of the matrix $(a_{ij})_{i,j=1}^n$, on the $L^\infty$ norms of the coefficients $a_{ij}$ and of their spatial derivatives, and on the Log-Lipschitz constant of the coefficients $a_{ij}$ with respect to time. Moreover, in [5] a (very feeble) conditional stability result was proved even when the coefficients $a_{ij}$ are just Osgood continuous with respect to $t$, provided they depend only on time. The proof of Theorem C relies on weighted energy estimates in the spirit of [17,19,20,27], but in order to overcome the obstructions created by the lack of time differentiability of the coefficients $a_{ij}$ it is necessary to introduce a weight function tailored to the modulus of continuity of the $a_{ij}$ (see Proposition 2.4), together with a microlocal approximation procedure originally developed by Colombini and Lerner in [6] in the context of hyperbolic equations with Log-Lipschitz coefficients. In this paper we exploit the same type of weighted energy estimates to extend Theorem B to the case of parabolic equations whose coefficients are Log-Lipschitz continuous in time (Theorems 5.1 and 5.3). Our results can be summarized as follows: Theorem D. Assume the coefficients $a_{ij}$ are Log-Lipschitz continuous with respect to $t$. For every $D>0$ there exist $\rho>0$, $0<\delta\le1$ and $K>0$ such that, if $u\in\mathcal{H}$ is a solution of (1.1) on $[0,T]$ with $\|u(0,\cdot)\|_{L^2}\le\rho$ and $\|u(t,\cdot)\|_{L^2}\le D$ on $[0,T]$, then a stability estimate of logarithmic type holds up to the final time $T$. The constants $\rho$, $K$ and $\delta$ depend only on $D$, on the positivity constant of the matrix $(a_{ij})_{i,j=1}^n$, on the $L^\infty$ norms of the coefficients $a_{ij}$ and of their spatial derivatives, and on the Log-Lipschitz constant of the coefficients $a_{ij}$ with respect to time. Our results therefore complement the achievements of [10,8], and en passant improve them in some crucial technical points related to the regularity of the coefficients $a_{ij}$ with respect to the $x$ variable (see the discussion in the final part of Section 2). Finally, in Section 6 we illustrate some applications of the main results. Proposition 2.4 (Weighted energy estimate). Assume Hypothesis 2.1 is satisfied. There exists a constant $\alpha_1>0$ (depending only on $A_{LL}$, $A$ and $\kappa$) and, setting $\alpha:=\max\{\alpha_1,T^{-1}\}$, $\sigma:=\frac{1}{\alpha}$ and $\tau:=\frac{\sigma}{4}$, there exist constants $\bar\lambda>1$, $\bar\gamma>0$ and $M>0$ (depending on $A_{LL}$, $A$, $\kappa$ and $\alpha$, and hence on $T$) such that, for all $\beta\ge\sigma+\tau$, $\lambda\ge\bar\lambda$ and $\gamma\ge\bar\gamma$, and whenever $u\in\mathcal{H}$ is a solution of equation (2.1), the weighted energy estimate (2.4) holds. Remark 2.5. If one wishes to include lower order terms in (2.1), one has to suppose that the corresponding coefficients are $L^\infty$ with respect to $t$ and Lipschitz continuous with respect to $x$. The constants in Proposition 2.4 will then depend also on the norms of the coefficients of the lower order terms. In [10] estimate (2.4) was used to deduce the following local conditional stability result: Theorem 2.6 ([10, Thm. 1]). Assume Hypothesis 2.1 is satisfied. Let $\alpha_1$, $\alpha$ and $\sigma$ be as in Proposition 2.4. Then there exist constants $\rho$, $\delta$, $K$ and $N$ such that, whenever $u\in\mathcal{H}$ is a solution of (2.1) with $\|u(0,\cdot)\|_{L^2}\le\rho$, a local conditional stability estimate holds true on $[0,\sigma]$. The constants $\rho$, $\delta$, $K$ and $N$ depend on $A_{LL}$, $A$, $\kappa$ and $\alpha$, and hence on $T$. The fact that $\alpha_1$ is independent of $T$ and $\sigma=\min\{\alpha_1^{-1},T\}$ allows one to iterate the local result of Theorem 2.6 a finite number of times, and to obtain conditional stability in the large (Theorem 2.7). Remark 2.8.
Notice that, following Remark 2.2, it would be sufficient to impose an a priori bound on $\|u(T,\cdot)\|_{L^2}$, which automatically implies the a priori bound for $\|u(t,\cdot)\|_{L^2}$, $t\in[0,T]$. Estimate (2.4) was proved in [10] when the coefficients $a_{ij}(t,x)$ are of class $C^2$ with respect to $x$ (in this case the constant $A$ contains also the $L^\infty$ norm of the second order spatial derivatives of the $a_{ij}$). Actually, in [10] $C^2$ regularity was imposed to overcome a technical difficulty in managing a commutator term appearing in the dyadic decomposition of equation (2.1). However, once estimate (2.4) is achieved, Theorems 2.6 and 2.7 follow directly from it, and the additional regularity in $x$ of the $a_{ij}$ plays no role. The $C^2$ requirement is somewhat "non-natural", since Lipschitz continuity in $x$ of the $a_{ij}$ is sufficient in order that the domain of the operator $-\sum_{j,k=1}^{n}\partial_{x_j}(a_{jk}\,\partial_{x_k}\,\cdot\,)$ be $H^2$ (see [16, Thms. 8.8 and 8.12]). In [8] a weaker version of estimate (2.4) was obtained by means of Bony paraproducts (see [4]), when $C^2$ regularity in $x$ is replaced by the more natural Lipschitz regularity. In this weaker version of (2.4) the spaces $L^2$ and $H^{1-\alpha t}$ were replaced by $H^{-\bar\theta}$ and $H^{1-\bar\theta-\alpha t}$ respectively, where $0<\bar\theta<1$, and the estimate holds for $s\in[0,\frac{7}{8}\sigma]$. Such a weaker version of (2.4), together with some nontrivial modifications of the arguments in [10], eventually led to recovering the continuity results of Theorems 2.6 and 2.7. However, the weaker weighted energy estimate of [8] turns out to be unfit for the purpose of reaching any kind of stability up to the final time $T$, especially because in that version of the estimate one cannot integrate up to $s=\sigma$ in the left-hand side of (2.4), but has to stop at $s=\sigma'<\sigma$. Therefore we go back to the strong weighted energy estimate (2.4) and prove it in the Lipschitz continuous case, using some ideas contained in [8] and performing a more careful and precise analysis of some terms in the paramultiplication procedure. Littlewood-Paley theory and Bony's paraproduct In this section we review some elements of the Littlewood-Paley decomposition which we use throughout this paper to define Bony's paraproduct. The proofs which are not contained in this section can be found in [10], [11] and [25]. In the following two propositions we recall the characterization of the classical Sobolev spaces and of Lipschitz-continuous functions via the Littlewood-Paley decomposition; the constant $C_\theta$ appearing there remains bounded for $\theta$ in compact subsets of $\mathbb{R}$, and there exists a positive constant $C$ such that, if $a\in\mathrm{Lip}(\mathbb{R}^n_x)$, the corresponding estimate holds with constants controlled by the Lipschitz norm of $a$. For the proof of our conditional stability result it is essential that $T_a$ be a positive operator. Unfortunately, this is not implied by $a(x)\ge\kappa>0$. Therefore, we have to modify the paraproduct a little. Following [7, Sect. 3.3] we introduce the operator $T^m_a$, where $m\in\mathbb{N}_0$; note that $T^0_a=T_a$. As will be shown below, the operator $T^m_a$ is a positive operator for positive $a$, provided that $m$ is sufficiently large. The next results were proved for $T_a$, but Lemma 3.10 in [7] guarantees that they hold also for $T^m_a$. Proposition 3.4 ([25, Prop. 5.2.1 and Thms. 5.2.8 and 5.2.9]). Let $m\in\mathbb{N}\setminus\{0\}$ and let $a\in L^\infty(\mathbb{R}^n_x)$. Let $\theta\in\mathbb{R}$. Then $T^m_a$ maps $H^\theta$ into $H^\theta$ and there exists $C_{m,\theta}>0$, depending only on $m$ and $\theta$, such that, for all $u\in H^\theta$,
$$\|T^m_a u\|_{H^\theta}\le C_{m,\theta}\,\|a\|_{L^\infty}\,\|u\|_{H^\theta}.$$
The constant $C_{m,\theta}$ can be chosen independent of $\theta$ when $\theta$ belongs to a compact subset of $\mathbb{R}$. Let $m\in\mathbb{N}\setminus\{0\}$ and let $a\in\mathrm{Lip}(\mathbb{R}^n_x)$.
Then:
• $a-T^m_a$ maps $L^2$ into $H^1$, and there exists $C_1>0$, depending only on $m$, such that, for all $u\in L^2$, $\|(a-T^m_a)u\|_{H^1}\le C_1\,\|a\|_{\mathrm{Lip}}\,\|u\|_{L^2}$;
• the mapping $u\mapsto(a-T^m_a)\,\partial_{x_i}u$ extends from $L^2$ to $L^2$, and there exists $C_0>0$, depending only on $m$, such that, for all $u\in L^2$, $\|(a-T^m_a)\,\partial_{x_i}u\|_{L^2}\le C_0\,\|a\|_{\mathrm{Lip}}\,\|u\|_{L^2}$.
Corollary 3.5. Let $\theta\in[0,1]$. Then for every $i=1,\dots,n$ the mapping $u\mapsto a\,\partial_{x_i}u-T^m_a\partial_{x_i}u$ extends from $H^\theta$ to $H^\theta$, and for all $u\in H^\theta$, $\|(a-T^m_a)\,\partial_{x_i}u\|_{H^\theta}\le C\,\|a\|_{\mathrm{Lip}}\,\|u\|_{H^\theta}$. Proof. By Proposition 3.4 the operator $(a-T^m_a)\,\partial_{x_j}$ is continuous from $H^0$ to $H^0$ and from $H^1$ to $H^1$. The result follows by interpolation (see e.g. Theorems B.1, B.2 and B.7 in [24]). Next we state a positivity result for $T^m_a$ (Proposition 3.6): there exists $m_0$ such that a positive lower bound for $\langle T^m_a u\mid u\rangle_{L^2}$ holds for all $u\in L^2(\mathbb{R}^n_x)$ and $m\ge m_0$. A similar result holds for vector-valued functions if $a$ is replaced by a positive symmetric matrix. The next proposition is needed since $T^m_a$ is not self-adjoint; however, the operator $(T^m_a-(T^m_a)^*)\,\partial_{x_j}$ is of order $0$ and, if $a$ is Lipschitz, maps $L^2$ continuously into $L^2$. We end this section with a property of the commutators $[\Delta_k,T^m_a]$ which will be crucial in the proof of the weighted energy estimate; the constant $C_{m,\theta}$ appearing there can be chosen independent of $\theta$ when $\theta$ belongs to a compact subset of $\mathbb{R}$. Proof of the weighted energy estimate For ease of notation, we write the proof only in one space dimension. We divide the proof into several steps. – Microlocalization and approximation. Let $u\in\mathcal{H}$ be a solution of (2.1), and let $w$ denote the associated weighted unknown. We add and subtract $\partial_x T^m_a\partial_x w$, where $T^m_a$ is the paramultiplication operator defined in (3.2), with $m\ge m_0(\kappa,A)$, according to the positivity result of Proposition 3.6. We set $u_\nu=\Delta_\nu u$, $w_\nu=\Delta_\nu w$ and $v_\nu=2^{-\alpha t\nu}w_\nu$; then the function $v_\nu$ satisfies the localized equation (4.2). Next we take the scalar product of (4.2) with $(t+\tau)\,\partial_t v_\nu$ in $L^2(\mathbb{R}_x)$ and obtain (4.3). To proceed further, we need to regularize the coefficient $a(t,x)$ with respect to $t$. We take a regular mollifier, i.e. an even, non-negative $\rho\in C^\infty_0(\mathbb{R})$ with $\mathrm{supp}(\rho)\subseteq[-\frac12,\frac12]$ and $\int_{\mathbb{R}}\rho(s)\,ds=1$, and for $\varepsilon\in(0,1]$ we set $a_\varepsilon(t,x):=(a(\cdot,x)*\rho_\varepsilon)(t)$. A straightforward computation yields, for all $\varepsilon\in(0,1]$, the standard approximation bounds for $a_\varepsilon(t,x)$ in terms of the modulus of continuity of $a$; from these properties and Proposition 3.4 we immediately get the corresponding bounds for the associated paraproducts. We set $a_\nu(t,x):=a_\varepsilon(t,x)$, with $\varepsilon=2^{-2\nu}$. We replace $T^m_a$ by $T^m_{a_\nu}+T^m_a-T^m_{a_\nu}$ in the third term of the right-hand side of (4.3) and obtain (4.6); replacing $\partial_t v_\nu$ by the expression on the right-hand side of (4.2), we obtain (4.7). By (4.6) and (4.7), and by a straightforward computation using the Leibniz rule with respect to $t$, we refine the resulting identity. Next we consider the term $-(t+\tau)\,\langle\partial_x(T^m_{a_\nu}\partial_x v_\nu(t))\mid\partial_t v_\nu(t)\rangle_{L^2}$. From (3.2) it can be seen that $\partial_t T^m_{a_\nu}=T^m_{\partial_t a_\nu}+T^m_{a_\nu}\partial_t$, and a simple computation then allows us to handle this term as well. Eventually we obtain the identity (4.8), with a suitable remainder. – Estimate for $\nu=0$. Setting $\nu=0$ in (4.8), by Proposition 3.6 we obtain a positive lower bound for the principal term; using Propositions 3.1, 3.4 and Lemma 4.1, for $N_1,N_2>0$ we bound the remainders. We now choose $N_1$ and $N_2$ so large that the remainders can be absorbed for $\gamma\ge\bar\gamma$. Further, we recall that $\Phi$ satisfies equation (2.3) for $\lambda>1$; from this we deduce a differential inequality and, integrating in $t$ over $[0,s]\subseteq[0,\sigma]$, we obtain the desired estimate for $\nu=0$, where we have used bounds which follow from Propositions 3.4 and 3.6 respectively. – Estimates for $\nu\ge1$. Now we consider (4.8) for $\nu\ge1$. From Lemma 4.1 and Proposition 3.7, for $N_3,N_4>0$, we obtain (4.9). Using again the positivity estimate in Proposition 3.6 as well as Proposition 3.1, we bound the remainders. We then choose $N_3$ and $N_4$ so large that the corresponding coefficient becomes negative, and we set $\alpha:=\max\{T^{-1},\alpha_1\}$.
With this choice we get (4.13), and hence the corresponding term is absorbed. Now we consider the next term: if $\nu\ge\bar\nu_1$, with $\bar\nu_1$ comparable to $\frac{1}{\log 2}\log\frac{16\alpha}{\log 2}$, it is absorbed directly, while if $\nu\le\bar\nu_1$ we choose a possibly larger $\bar\gamma$ such that the absorption holds for all $\gamma\ge\bar\gamma$; consequently, (4.17) is absorbed by the positive terms. The term $-\alpha\gamma\log 2\,(t+\tau)\,\nu\,\|v_\nu(t)\|_{L^2}^2$ can be neglected, since it is negative; we stress, however, that it is a crucial term in order to achieve our energy estimate for an equation including also lower order terms. Recalling also Propositions 3.1 and 3.6, and integrating over $[0,s]\subseteq[0,\sigma]$, we obtain the desired estimate for $\nu\ge1$. – End of the proof. Now we sum over $\nu$. By Corollary 3.5, Proposition 3.8 and Proposition 3.2 we bound the commutator contributions, and in the same way one can prove the analogous bound for the remaining ones. The last term is then absorbed, directly for high frequencies and, for low frequencies, by choosing $\bar\gamma$ larger if necessary. All in all, going back to $u_\nu$ and using Proposition 3.2, the weighted energy estimate (2.4) follows. Conditional stability up to the final time In this section we state and prove two global stability theorems for solutions of (2.1) up to the final time $T$. The first result gives a logarithmic-type control of $\|u\|_{L^2((0,T),L^2)}$ in terms of $\|u(0)\|_{L^2}$. Proof of Theorem 5.1. First we observe that, due to Theorem 2.7, it is not restrictive to assume that $\alpha_1\le T^{-1}$. Indeed, if this is not the case we can take $T'$, $0<T'<T$, such that $T-T'<\alpha_1^{-1}$, and then on $[0,T']$ we apply the pointwise estimate given by Theorem 2.7, so that we just need to estimate $\int_{T'}^{T}\|u(t)\|_{L^2}^2\,dt$ in terms of $\|u(T')\|_{L^2}$. Under this assumption we can apply Proposition 2.4 with $\alpha=1/T$, $\sigma=T$ and $\tau=T/4$, and find $\lambda>1$, $\gamma>0$ and $M>0$ such that (2.4) holds for all $\beta\ge T+\tau=5\tau$ whenever $u\in\mathcal{H}$ is a solution of equation (2.1). Now for any $r\in(0,T)$ we estimate $\int_r^T\|u(t)\|_{L^2}^2\,dt$, using the fact that $\|u(t,\cdot)\|_{L^2}\le\|u(t,\cdot)\|_{H^{1-\alpha t}}$. The function $\Phi_\lambda$ is increasing, and consequently the function $t\mapsto e^{-2\beta\Phi_\lambda((t+\tau)/\beta)}$ is decreasing; we deduce a bound with constant $M'=2M\gamma Te^{2\gamma T}$. Then, using the facts that $\Phi'_\lambda(\frac{\tau}{\beta})\ge1$ and $\Phi_\lambda(\frac{T+\tau}{\beta})\le0$, together with the concavity of $\Phi_\lambda$ and Lemma 2.3, we arrive at an explicit bound; we recall that $\tau=T/4$, so that $\frac{T+\tau}{\tau}=5$. We now choose $\beta$ in such a way that $e^{-\beta\Phi_\lambda(\tau/\beta)}$ matches $\|u(0,\cdot)\|_{L^2}$, and we obtain $\beta=\tau\,\Lambda_\lambda^{-1}\big(\frac{1}{\tau}\log\|u(0,\cdot)\|_{L^2}\big)$, where $\Lambda_\lambda(y)=y\,\Phi_\lambda(1/y)$. If $\|u(0,\cdot)\|_{L^2}\le\tilde\rho:=e^{\tau\Lambda_\lambda(5)}$, then $\beta\ge T+\tau$, and we obtain a bound of the desired logarithmic form in terms of $\|u(T,\cdot)\|_{L^2}^2+1$. By Lemma 2.3 the relevant quantity is controlled if $z<0$ and $|z|$ is sufficiently large; it follows that there exists $\rho\le\tilde\rho$ such that the bound holds whenever $\|u(0)\|_{L^2}\le\rho$. On the other hand, combining this with the estimate on $[0,r]$, it follows that the claimed control holds for all $r>0$; finally we choose $r$ appropriately. The proof is complete. Under a stronger a priori bound on admissible solutions on $[0,T]$, namely an a priori bound in $H^1$ rather than in $L^2$, we can prove a pointwise stability estimate of logarithmic type up to the final time $T$. Theorem 5.3. Assume Hypothesis 2.1 is satisfied. Then for all $D_1>0$ there exist positive constants $\rho'''$, $\delta'''$ and $K'''$, depending only on $A_{LL}$, $A$, $\kappa$, $T$ and $D_1$, such that if $u\in\mathcal{H}$ is a solution of (2.1) satisfying $\sup_{t\in[0,T]}\|u(t,\cdot)\|_{H^1}\le D_1$ and $\|u(0,\cdot)\|_{L^2}\le\rho'''$, then a pointwise stability inequality of logarithmic type holds up to the final time $T$. Remark 5.4. Notice that, following Remark 2.2, it would be sufficient to impose an a priori bound on $\|u(T,\cdot)\|_{H^1}$, which automatically implies the a priori bound for $\|u(t,\cdot)\|_{H^1}$, $t\in[0,T]$. Proof of Theorem 5.3.
We begin by noticing that, since $u$ solves (2.1), its time derivative is controlled by the a priori bounds. It then follows from Morrey's inequality that $t\mapsto u(t,\cdot)$ is Hölder continuous with values in a Sobolev space of negative order. The conclusion follows by observing that, for each fixed $t\in[0,T]$, $\|u(t,\cdot)\|_{L^2}$ can be estimated by interpolating between a negative-order norm and the a priori $H^1$ bound. Reconstruction of the initial condition for parabolic equations In view of applications it is convenient to rephrase Theorem 5.3. Consider the (forward) parabolic equation (6.1) on the strip $[0,T]\times\mathbb{R}^n_x$ and assume Hypothesis 2.1 is satisfied. Then we have: Corollary 6.1. Let $D>0$. There exist positive constants $\rho_D$, $\delta_D$ and $K_D$, depending only on $A_{LL}$, $A$, $\kappa$, $T$ and $D$, such that if $u,v\in C^0([0,T],H^1)\cap C^1((0,T],L^2)$ are solutions of (6.1) satisfying $\|u(0,\cdot)\|_{H^1}\le D$, $\|v(0,\cdot)\|_{H^1}\le D$ and $\|u(T,\cdot)-v(T,\cdot)\|_{L^2}\le\rho_D$, then $\|u(0,\cdot)-v(0,\cdot)\|_{L^2}$ is controlled by a logarithmic modulus of $\|u(T,\cdot)-v(T,\cdot)\|_{L^2}$. Corollary 6.1 can be exploited to reconstruct the initial condition of an unknown solution $u(t)$ of (6.1), provided we can measure with arbitrary accuracy its final configuration $u_T:=u(T)$. More precisely, suppose that for every $\theta>0$ we can perform a measurement $v_{\theta,T}$ of $u_T$ such that $\|v_{\theta,T}-u_T\|_{L^2}\le\theta$. Moreover, suppose that we know a priori that $\|u(0)\|_{H^1}\le D$ for some $D>0$. We are interested in finding a computable approximation of $u(0)$. If it were possible to solve equation (6.1) backward in time with final condition $v(T)=v_{\theta,T}$, then by Corollary 6.1 we would get that $v(0)$ is close to $u(0)$, provided $\|v(0)\|_{H^1}\le D$ and $v_{\theta,T}$ is sufficiently close to $u_T$. However, equation (6.1) with final condition $v(T)=v_{\theta,T}$ in general has no solution, due to the regularizing effect of equation (6.1) forward in time, and to the fact that $v_{\theta,T}$ does not possess any regularity, since it is the output of a measurement. There are various strategies to overcome this major obstruction. We mention the technique of quasi-reversibility (see e.g. [13]), which consists in perturbing the equation to make it solvable backward in time, and the technique of Fourier truncation, which consists in approximating $v_{\theta,T}$ with a very regular function obtained by truncating its Fourier transform. We illustrate the second technique through an example inspired by [14] (see also [18]). We consider the equation $\partial_t u=\sum_{j,k=1}^n a_{jk}(t)\,\partial_{x_j}\partial_{x_k}u$; setting $a(t,\xi):=\sum_{j,k=1}^n a_{jk}(t)\,\xi_j\xi_k$, we assume that $\frac12|\xi|^2\le a(t,\xi)\le2|\xi|^2$, $(t,\xi)\in[0,T]\times\mathbb{R}^n_\xi$. As above, suppose we know a priori that $\|u(0)\|_{H^1}\le D$. Moreover, suppose that for every $\theta>0$ we can perform a measurement $v_{\theta,T}$ of $u_T$ such that $\|v_{\theta,T}-u_T\|_{L^2}\le\theta$.
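As a rough illustration of this truncation step (a sketch under the bounds just stated, not the precise construction of [14]), write $P_R$ for the frequency cut-off at radius $R$ and solve the backward problem only on the retained frequencies:
$$P_Rv_{\theta,T}:=\mathcal{F}^{-1}\big(\mathbf{1}_{\{|\xi|\le R\}}\,\widehat{v_{\theta,T}}\big),\qquad \widehat{v}(0,\xi)=e^{\int_0^T a(s,\xi)\,ds}\,\mathbf{1}_{\{|\xi|\le R\}}\,\widehat{v_{\theta,T}}(\xi),$$
which is well defined because the coefficients depend only on $t$. The reconstruction error then splits as
$$\|v(0,\cdot)-u(0,\cdot)\|_{L^2}\ \le\ e^{2R^2T}\,\theta+\frac{D}{R},$$
where the first term uses the amplification bound $a(t,\xi)\le2|\xi|^2$ on $\{|\xi|\le R\}$ and the second uses the a priori $H^1$ bound to control the discarded high frequencies. Choosing $R\sim(\log(1/\theta))^{1/2}$ balances the two contributions and produces a logarithmic rate in $\theta$, in accordance with Corollary 6.1.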
2021-10-05T01:16:23.811Z
2021-10-04T00:00:00.000
{ "year": 2021, "sha1": "5cfb636ac19bf323d2310ab47ff4367c9eb59e10", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e2886eba73ca55cde7f63a514b7db7dd359a2e26", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
199576662
pes2o/s2orc
v3-fos-license
Global extracellular vesicle proteomic signature defines U87-MG glioma cell hypoxic status with potential implications for non-invasive diagnostics Purpose Glioblastoma multiforme (GBM) is the most common and lethal of primary malignant brain tumors. Hypoxia constitutes a major determining factor for the poor prognosis of high-grade glioma patients, and is known to contribute to the development of treatment resistance. Therefore, new strategies to comprehensively profile and monitor the hypoxic status of gliomas are of high clinical relevance. Here, we have explored how the proteome of secreted extracellular vesicles (EVs) at the global level may reflect hypoxic glioma cells. Methods We have employed shotgun proteomics and label-free quantification to profile EVs isolated from human high-grade glioma U87-MG cells cultured at normoxia or hypoxia. Parallel reaction monitoring was used to quantify the identified, hypoxia-associated EV proteins. To determine the potential biological significance of hypoxia-associated proteins, the cumulative Z score of identified EV proteins was compared with GBM subtypes from the HGCC and TCGA databases. Results In total, 2928 proteins were identified in EVs, out of which 1654 proteins overlapped with the ExoCarta EV-specific database. We found 1034 proteins in EVs that were unique to the hypoxic status of U87-MG cells. We subsequently identified an EV protein signature, "HYP_SIGNATURE", encompassing nine proteins that strongly represented the hypoxic situation and exhibited close proximity to the mesenchymal GBM subtype. Conclusions We propose, for the first time, an EV protein signature that could comprehensively reflect the hypoxic status of high-grade glioma cells. The presented data provide proof-of-concept for targeted proteomic profiling of glioma-derived EVs, which should motivate future studies exploring its utility in non-invasive diagnosis and monitoring of brain tumor patients. Electronic supplementary material The online version of this article (10.1007/s11060-019-03262-4) contains supplementary material, which is available to authorized users. Introduction Glioblastoma multiforme (GBM) is the most common and malignant type of primary brain tumor in adults, with a median survival of approximately 15 months [1][2][3]. GBM is distinguished from less malignant, low-grade gliomas by extensive regions of hypoxia [4] that directly correlate with its aggressive behaviour [5]. Hypoxia results from the high proliferative and metabolic activity of malignant cells [6] and is associated with pseudopalisading necrosis as well as vascular hyperproliferation [7]. Tumor hypoxia modulates stromal cell interactions in the microenvironment that further support the survival and dissemination of malignant cells [4,[8][9][10][11]. Numerous studies have previously shown that tumor progression is driven by hypoxic signaling [12], and the expression of hypoxia-related markers correlates with poor patient outcome in several tumor types, including GBM [13]. However, the development of strategies for non-invasive monitoring of brain tumor hypoxic signalling remains a challenge of high clinical relevance, especially with regard to the relative inaccessibility and spatiotemporal heterogeneity of GBM tumors. Extracellular vesicles (EVs) are excessively secreted by tumor cells into the circulation, and are emerging as a promising candidate for liquid biopsy-based approaches in cancer [14][15][16].
Exosomes and microvesicles are lipid-bilayer EVs [17] that have come to be recognized in intercellular communication, promoting the development and progression of various disease conditions [18]. Numerous studies have shown that exosome-like EVs may mediate hypoxia-dependent intercellular signaling in GBM [19]. Moreover, pilot studies based on an antibody array targeted at angiogenesis-related proteins suggested that the EV proteome may reflect the tumor oxygenation status in GBM [20]. To further develop EV-based strategies for non-invasive tumor diagnosis and monitoring of hypoxia, it is essential to comprehensively identify proteins that are efficiently sorted to EVs and that reflect the hypoxic status of the cell or tissue of origin. In this study, we employed label-free quantification (a non-targeted method) and parallel reaction monitoring (a targeted method) to globally characterize the proteome of EVs derived from U87-MG high-grade glioma cells, with the aim to understand how EV profiling can be exploited to non-invasively define the hypoxic status of glioma tumors. Global proteome identification in EVs derived from high-grade glioma cells EVs from U87-MG, i.e. the most well-characterized human glioma cell line [21,22], grown under normoxic (EV_NORM) or hypoxic (EV_HYP) conditions were isolated by standard sequential ultracentrifugation [20]. The size distribution and morphology of EVs were analyzed by transmission electron microscopy (TEM); EV_NORM and EV_HYP were predominantly found in the size range of 50-150 nm in diameter, with no apparent difference in their morphology (Fig. 1a, b). Nanoparticle tracking analysis (NTA) showed a similar size distribution, where both EV_NORM and EV_HYP were found in the size range of 80-150 nm (Fig. 1c, d), which is consistent with the typical size distribution profile of exosomes [23]. We found significantly increased secretion of EVs by U87-MG cells cultured under hypoxia as compared to normoxia (Fig. 1d), in accordance with previous findings [24,25]. Currently, in addition to the mechanism of biogenesis and size [26], EVs are generally referred to as exosomes also based on the expression of the CD9, CD63, and CD81 proteins [27], which were all found to be present in U87-MG-derived EVs, together with a strong enrichment of the membrane raft marker Flotillin 1 (Fig. 1e). We then employed shotgun proteomics by data-dependent acquisition to comprehensively determine the proteome of EV_NORM and EV_HYP derived from U87-MG cells. We identified a total of 2089 EV_HYP and 2035 EV_NORM proteins (Fig. 1f; Supplementary Tables 1, 2). There were 1034 protein groups unique to EV_HYP (Fig. 1f; Supplementary Table 3) and 1055 protein groups common to both EV_NORM and EV_HYP (Fig. 1f; Supplementary Table 4). We next created a multiconsensus list combining EV_NORM and EV_HYP protein identities (Supplementary Table 5) and compared the multiconsensus protein group to the ExoCarta public EV database [28]. The multiconsensus EV identities (2928 proteins) showed an extensive overlap of 1654 common identities with the ExoCarta database and also included 1274 unique identities (Fig. 1g), which supports the sensitivity of detection of the EV proteome with the current approach. Processing of the EV proteome by label-free quantification (LFQ) Discovery MS analysis resulted in the identification of thousands of proteins, and it is not feasible to analyze the abundance signature of each individual protein by targeted MS/MS.
Therefore, to filter the proteins identified in EV_NORM and EV_HYP based on their significance in hypoxia, we subjected the discovery MS-identified proteins to non-targeted LFQ in Proteome Discoverer (PD) version 2.2 (Fig. 2a). We could then obtain the abundance value of each protein in EV_HYP and EV_NORM in terms of the LC/MS precursor peak quantification of the unique peptides for a particular protein. Subsequently, a ratio of the abundance values of each protein in EV_HYP over EV_NORM was calculated, which identified a total of 580 hypoxia-significant (H_significant) proteins (log2 fold change cut-off > 0.01); the remaining proteins, below this cut-off, were taken as hypoxia-downregulated (H_nonsignificant) proteins (Supplementary Table 6). Validation of the H_significant profile by parallel reaction monitoring (PRM) To validate the H_significant proteins identified above by LFQ, we next performed PRM (Fig. 3a). A set of selection criteria specific for targeted PRM analysis, as described in Rauniyar [32], was applied, including peptide length, uniqueness, miscleavage, modification, precursor charge, chromatographic peak, and signal intensity, to further filter the identified protein groups and select appropriate quantotypic peptides for proteins of interest using Skyline version 3.1. In addition, we added a few protein groups based on their relevance in glioma. Consequently, we selected a total of 135 protein groups, with 5 unique quantotypic peptides per protein group, for quantification by targeted PRM. First, we performed an unscheduled PRM run on EV_NORM and EV_HYP samples to analyze the ionization of the selected peptides and optimize their retention time and transition charge state. The chromatogram output was analyzed in Skyline, and the 2 to 3 most quantotypic flyable peptides and appropriate transition states per protein were selected for the scheduled PRM run for all 135 protein groups (Supplementary Tables 7 and 8). On analyzing the fold change, we found 17 proteins significantly differentially expressed in EV_HYP as compared to EV_NORM (Fig. 3b; Supplementary Table 8). We further applied peptide significance and normalized peak area restrictions on the hypoxia response of the H_significant EV proteins (N = 17) and filtered them down to a signature of 9 proteins that included Insulin-like Growth Factor-Binding Protein 3 (IGFBP3), Tissue Factor (F3), Carbonic Anhydrase 9 (CA9), Solute Carrier Family 2 Facilitated Glucose Transporter Member 1 (SLC2A1), Nucleolin (NCL), Osteopontin (SPP1), Monocarboxylate Transporter 1 (SLC16A1), Membrane-Associated Progesterone Receptor Component 1 (PGRMC1), and Annexin A5 (ANXA5) (Fig. 3c). These proteins defined a profile of unique proteins (N = 9) efficiently sorted from donor cells to EVs and enriched at hypoxic conditions, hereafter referred to as "HYP_SIGNATURE" (the PAN of the replicates of the different peptides is given in Supplementary Fig. 1). We assayed the pathways enriched by the HYP_SIGNATURE proteins using the ConsensusPathDB-human interaction database [33]. This identified HYP_SIGNATURE to be closely associated with the Hypoxia-Inducible Factor-1α (HIF-1α) transcription factor network (adjusted P value = 0.00012) and the HIF-1 signalling pathway (adjusted P value = 0.0057) with high significance (Fig. 4a). Tissue Factor (F3) was previously shown by our group to be enriched in hypoxia-derived EVs [20]. The hypoxic enrichment of other top candidates of the HYP_SIGNATURE (Fig. 3c) was supported by immunoblotting, which showed increased levels of IGFBP3 (Fig. 4b) and CA9 (Fig. 4c). Immunoblotting analysis was unable to detect other candidate proteins (NCL, SLC16A1, SPP1, ANXA5) in EVs, either from normoxia or hypoxia (Supplementary Fig. 2b). A potential limitation of these results is the lack of EV housekeeping proteins, as equal protein loading relied on the BCA total protein concentration. However, gene array analysis showed increased expression of IGFBP3 (P = 0.0012), F3, CA9, SLC2A1 and PGRMC1 mRNA in hypoxic as compared with normoxic cells (Supplementary Fig. 2a). Several studies have established the association of the GBM mesenchymal subtype with hypoxia and an aggressive tumor phenotype [34][35][36]. To address how the HYP_SIGNATURE may associate with the mesenchymal phenotype, we compared the cumulative Z score of HYP_SIGNATURE with different subtypes of primary GBM cells obtained from the Human Glioblastoma Cell Culture (HGCC) resource, i.e. classical, proneural, neural and mesenchymal (Fig. 5a). The cumulative HYP_SIGNATURE Z score (1.78) was in closest proximity to the HGCC mesenchymal subtype (0.24), evident by their average positive Z scores as compared with the classical (-0.18), proneural (-0.28), and neural (-0.41) subtypes (Fig. 5b). Next, we compared the HYP_SIGNATURE cumulative Z score with GBM subtypes obtained from The Cancer Genome Atlas (TCGA) program using the GlioVis portal, which again showed the proximity of the HYP_SIGNATURE Z score to the mesenchymal (1.26) as compared with the classical (0.94), proneural (0.89), and neural (0.83) GBM subtypes (Fig. 5c). Discussion In this study, we used an optimized combination of non-targeted and targeted quantitative proteomics to comprehensively profile hypoxia-regulated proteins associated with high-grade glioma cell derived EVs. We have identified a protein signature, "HYP_SIGNATURE", in EVs secreted by U87-MG cells that is associated with the HIF hypoxic signaling response and exhibits close proximity to the mesenchymal GBM subtype. Importantly, of the nine proteins encompassing the HYP_SIGNATURE, seven are known plasma membrane integrated proteins with an extracellular domain available for specific recognition by antibodies and other targeting agents. Together, our findings thus propose that the hypoxic status of GBM tumors can be defined by the EV HYP_SIGNATURE, which may be utilized not only to non-invasively immunophenotype glioma tumors but also as a source of potential therapeutic targets. The utility of EVs across diverse cellular functions, including recent investigations that support the application of EVs as non-invasive biomarker tools [14,16,37,38], strongly motivates improved efforts to comprehensively profile the proteome of EVs derived from cells grown at disease-mimicking conditions. Using discovery proteomics, a previous study [39] identified a total of 844 proteins in EVs isolated from GBM cells. In comparison, we identified approximately 3000 proteins in EVs, out of which 1034 proteins were unique to hypoxic EVs. Importantly, the major aim of the present study was to specifically identify an EV signature that mimics the hypoxic situation, i.e. a pathognomonic feature of GBM tumors associated with disease aggressiveness and treatment resistance. Although the studies are limited to one glioma cell line, it may be argued that the obtained results have general relevance given the substantial overlap between the EV protein identities found here and the ExoCarta EV proteome database.
Moreover, the hypoxic response is a universal phenomenon of high-grade gliomas as well as other highly malignant tumors. Clearly, future studies will have to further assess the generalizability of the present data, including validation in primary GBM cell models as well as in vivo. LFQ has become a widely accepted analytical approach for comparing the relative abundance of proteins across multiple samples [40][41][42]. The possibility to analyse untreated proteins or peptides in a large number of samples makes LFQ a preferred protocol over other relative quantification approaches. However, previous studies have shown that sample preparation for the LFQ approach is highly susceptible to variability [43]. Therefore, to reduce this variability, we used 9 replicates of normoxia and 12 replicates of hypoxia samples for LFQ. In addition, the conforming pattern of differential levels of most proteins analyzed by LFQ (Supplementary Table 6) and PRM (Supplementary Table 8) suggests a high degree of sample preparation consistency. In support of the EV proteomics data, immunoblotting showed an enrichment of top candidates of the HYP_SIGNATURE, and gene array analysis showed increased expression of IGFBP3, F3, CA9, SLC2A1 and PGRMC1 mRNA in hypoxic as compared with normoxic U87-MG cells. We were unable to detect other candidate proteins (NCL, SLC16A1, SPP1, ANXA5) in EVs by immunoblotting analysis, either from normoxia or hypoxia, and did not detect a hypoxic enrichment of these proteins in U87-MG cells. A potential explanation for the discrepancy between an induction of these proteins in EVs collected over a cumulative time period of 48 h of hypoxia, and cells analyzed at a fixed time-point, is the well-known temporal dynamics of the hypoxic response. Several previous studies have associated tumor cell expression of HYP_SIGNATURE proteins with increased GBM aggressiveness. For example, F3 expression was demonstrated to be hypoxia-dependent in highly aggressive P7 GBM cells, leading to increased F3 activity [44], and F3-positive EVs were shown to induce angiogenesis [20]. Hypoxia also induced increased SLC16A1 plasma membrane expression in glioma cells, both in in vitro and in vivo models [45]. Additionally, SLC16A1 plasma membrane expression was associated with HIF-1α and CA9 positivity in hypoxic regions. Further, SLC16A1 was found to be upregulated in GBM as compared with normal tissues [46]. NCL was also found to be overexpressed in patient-derived GBM tumors and cells as compared with normal brain [47]. ANXA5 has been found to promote invasion and chemoresistance to the alkylating drug temozolomide in GBM cells [48]. Since hypoxic cells and components in the hypoxic niche have been increasingly implicated in resistance to temozolomide [49], it is conceivable that ANXA5 is associated with the hypoxic component of drug resistance. SPP1 was shown to be induced by hypoxia both in vitro and in vivo [50] and is predominantly observed in the microvasculature of GBM [51]. Several studies have implicated SPP1 in invasion [52] and malignant gliomas [53]. In several glioma cell models, CA9 strongly co-localized with HIF-1α, indicating its induction in hypoxic regions of this tumor type. Clinically, CA9 is minimally expressed in normal brain tissue, whereas its high expression in brain tumors strongly correlates with the level of malignancy [54]. SLC2A1 is another well-established hypoxia-induced protein that has been associated with hypoxic regions of GBM [55].
These studies support a functional role of HYP_SIGNATURE protein expression in tumor cells, and future studies that define the tumor-promoting role of these proteins when associated with EVs, especially in the context of e.g. pH regulation (CA9), metabolite transport (SLC2A1, SLC16A1), and coagulation activation (F3), will be of high interest. To conclude, our data strongly support that a specific subset of mostly membrane-intercalated EV proteins could define the hypoxic status of high-grade glioma cells. The proteins identified as part of the HYP_SIGNATURE warrant further clinical examination using a targeted approach to validate their capacity to differentiate the highly heterogeneous nature of high-grade glioma tumors from e.g. low-grade gliomas and other brain lesions that are challenging to define by imaging alone. This proof-of-principle study to non-invasively define the glioma hypoxic status utilizing advanced proteomics is a significant step in this direction. EV isolation Normoxic and hypoxic EVs were isolated in parallel from U87-MG cells at a given passage by standard procedures, using differential ultracentrifugation [20]. Routinely cultured U87-MG cells at sub-confluency were grown in DMEM supplemented with 1% BSA at normoxic or hypoxic conditions for 48 h. Conditioned media were collected after 48 h and centrifuged twice at 300×g to eliminate cell debris. Supernatant fractions were then centrifuged at 100,000×g for 2 h to pellet EVs, followed by washing twice with PBS at 100,000×g for 2 h. EVs were then resuspended in 6 M urea for downstream proteomics experiments. Nanoparticle tracking analysis, transmission electron microscopy, trypsin digestion and peptide preparation, discovery LC-MS/MS, label-free quantification, and quantitative LC-PRM-MS/MS were performed as described in the Supplementary Materials and Methods. Data analysis The Gene Ontology functional classification of H_significant proteins was performed using PANTHER (https://www.pantherdb.org/). Enriched pathways of the EV_HYP signature proteins were determined using the ConsensusPathDB-human interaction database (https://cpdb.molgen.mpg.de/). The Wilcoxon test was employed for pathway enrichment analysis with a P value cut-off of 0.01. For the HYP_SIGNATURE comparison in U87-MG cell-derived EVs, the Z scores of the 9 HYP_SIGNATURE candidates were individually calculated for their protein levels with the respective normoxic values as reference, as shown by the formula below:

Z score = (EV_HYP − EV_NORM) / SD EV_NORM

where "EV_HYP" is the mean protein level measured in hypoxic EVs; "EV_NORM" is the mean protein level measured in normoxic EVs; and "SD EV_NORM" is the standard deviation of the protein level measurements in normoxic EVs. Generation of a cumulative score was done by taking the arithmetic mean of the Z scores of all 9 HYP_SIGNATURE proteins. For the Z score calculation on the TCGA dataset, subtype classification of GBM patients was performed with the GlioVis portal, and gene expression values for all 9 HYP_SIGNATURE candidates were downloaded. Low Grade Glioma (LGG) expression data for the 9 HYP_SIGNATURE protein genes were downloaded and used as reference values for the Z score calculations, as indicated in the formula below:

Z score = (GBM subtype − TCGA-LGG) / SD TCGA-LGG

where "GBM subtype" is the mean gene expression value in a subtype such as Classical, Mesenchymal, or Proneural GBM; "TCGA-LGG" is the mean gene expression value for the corresponding gene in LGG patients; and "SD TCGA-LGG" is the standard deviation of the analyzed gene among the LGG patients.
Generation of the cumulative score for each GBM subtype was done by taking the arithmetic mean of the Z scores of all 9 HYP_SIGNATURE candidates. For the HGCC data analysis, the gene expression Z score for each HYP_SIGNATURE candidate in each subtype (Classical, Mesenchymal, Proneural, or Neural) was directly extracted from the HGCC database. The cumulative Z score was generated as described for the TCGA dataset. Statistical analyses Data are expressed as mean ± STDEV. Statistical analyses were done using the unpaired Student t test. All values with P < 0.05 were considered statistically significant.
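As a concrete illustration of the scoring described above, the following is a minimal Python sketch of the Z-score and cumulative-score computation; the replicate counts match the study design, but the intensity values are hypothetical placeholders, not the actual measurements.

```python
import numpy as np

def z_scores(hyp_levels, norm_levels):
    """Per-protein Z score of hypoxic levels against the normoxic reference.

    hyp_levels, norm_levels: 2D arrays of shape (n_replicates, n_proteins).
    Returns one Z score per protein: (mean_hyp - mean_norm) / sd_norm.
    """
    mean_hyp = hyp_levels.mean(axis=0)
    mean_norm = norm_levels.mean(axis=0)
    sd_norm = norm_levels.std(axis=0, ddof=1)  # sample SD of normoxic replicates
    return (mean_hyp - mean_norm) / sd_norm

# Hypothetical replicate-level intensities for the 9 signature proteins:
# 9 normoxic and 12 hypoxic replicates, as in the LFQ design.
rng = np.random.default_rng(0)
norm = rng.normal(loc=100.0, scale=10.0, size=(9, 9))
hyp = rng.normal(loc=120.0, scale=10.0, size=(12, 9))

z = z_scores(hyp, norm)
cumulative = z.mean()  # arithmetic mean over the 9 proteins
print(f"per-protein Z: {np.round(z, 2)}; cumulative: {cumulative:.2f}")
```

The TCGA variant of the score follows the same pattern, with the LGG cohort playing the role of the normoxic reference.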
2019-08-15T14:36:54.842Z
2019-08-14T00:00:00.000
{ "year": 2019, "sha1": "9cf7f44ae31e596f9c30b585018d5285cad29702", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11060-019-03262-4.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "ab9aedd204ecab48d29b93a64a0702df778d92bc", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
61549299
pes2o/s2orc
v3-fos-license
Analysis of Soybean Production and Demand to Develop Strategic Policy of Food Self-Sufficiency: A System Dynamics Framework The gap between soybean production and demand in Indonesia has for decades triggered dependence on imported soy products. Demand for soy protein-based foods grows with the annually increasing population. Various efforts have been made by the government, but overall they have not yet shown maximum results. The aim of this study is to find an alternative solution using a system dynamics approach. Real conditions are captured in a model, and a series of decision scenarios is then performed with computer assistance to obtain the best results. The scenario results show that soybean production could meet Indonesian soybean demand for the next 20 years by expanding the planting area by at least 70% per year, using seeds with a minimum productivity of 2.4 tons/hectare or short-duration seeds able to raise the cropping index to at least 2.0, and using biological fertilizers that can increase seed productivity by at least 125%. Introduction The availability of food is essential for the stability of a country. The ability to be self-sufficient saves foreign exchange that can be used for other strategic purposes. In fact, to date, Indonesia's import needs, especially in the food sector, keep increasing because demand exceeds the available food supply. This condition arises because there are gaps between the production and consumption of soy every year. The gaps occur because the rate of domestic soybean production is unable to keep pace with soybean demand. Demand for soybeans increases due to a growing population, rising incomes, changes toward healthier lifestyles, and progress in the agro-industry and farming sectors. Such conditions ultimately trigger the shortcut of importing soybeans to meet public soy consumption. In addition, the increase in import volume is also driven by demand for imported soybeans, which are cheaper than local soybeans [4]. Resolving these issues certainly requires a number of steps to obtain the expected solution. However, every action requires time and cost and carries unexpected risks. We therefore need to simulate actions before applying them in the real world; in this research, the simulation process uses a system dynamics approach because it can provide more reliable estimates than statistical models for determining the significant and sensitive forecasting factors [5]. The main purpose of this research is to examine how to analyze and quickly find an alternative solution for supplying soybean requirements for the next 20 years. This research also tries to improve on previous research by including additional conditions such as soybean imports, sales, etc. The outcomes of the model scenarios can be used as supporting material for government and stakeholder decisions in developing strategies for national soybean availability. Thus, stakeholders can utilize the results of these simulations to make strategic decisions. About Indonesian Soybean. Soybean (Glycine max L) is a highly nutritious food commodity, a source of vegetable protein with low cholesterol at an affordable price. Soybean is also an important food commodity after rice and maize. Soy consumption, fresh or processed, can improve nutrition.
Soybean plants grow well in areas with rainfall of around 100-400 mm/month, temperatures between 21-34 °C, and elevations of no more than 500 m above sea level. In Indonesia, soybeans are processed into various foodstuffs, such as tempeh, soy milk, tofu, bean curd, soy sauce, oncom, tauco, soybean cake, ice cream, edible oil, and soy flour. In addition, soybean is also widely used as an animal feed ingredient [6]. Indonesian soybean grows more easily in wetlands than in other land types. The soybean varieties grown in Indonesia are yellow soybean and black soybean. System Dynamics System dynamics is a unique method that can help managers and decision makers find policies and decisions that are beneficial and can be implemented well over a certain period of time. System dynamics is a methodology for studying and managing complex feedback systems, and it can be used as an analytical tool to evaluate the short-term and long-term impact of policies. The final goal of building a simulation model is the validation of the model and of decision scenarios. The purpose of validation is to ensure that the created model approximates the original system and is credible. The credibility of the model is established by the results of its verification and validation. Credible models can be simulated with computer assistance to see predicted results quickly. Previous Research Research related to soybean self-sufficiency in Indonesia has been conducted by previous researchers using a variety of methods, including the system dynamics approach. A strategy to achieve soybean self-sufficiency in 2015 through the implementation of synergic policies [2] combined an extended-area program, increased productivity, and reductions in population growth, soybean consumption, and postharvest losses. Another study [1] concluded that self-sufficiency could be achieved in 2014 by increasing the planting area through the use of suboptimal land and by increasing productivity through improved seeds, fertilizer, farmer education, and reduced harvest losses. Research Steps This research consists of four stages: data collection, analysis of the existing condition, design of the computer model, and the simulation process with scenarios. Research Information and Data This study uses secondary data from BPS and the Ministry of Agriculture, as well as related research previously conducted in the areas of Java, Bali, Sumatra, Nusa Tenggara and Sulawesi. Analysis of Existing Condition In an attempt to achieve soybean self-sufficiency, the Indonesian government has conducted various agricultural programs to increase production since the 1960s. To date, however, national soybean needs are still not fulfilled. National soybean production currently meets only 35% of the market, while the rest is filled by soybean imports. Some causes of this condition are rising soybean demand due to high population growth, lack of land, and production quality problems. Population growth leads to increased consumption of soy-based foods, while agricultural land decreases as it is converted into residential land. Of the agricultural land available, on average only about 23% is used for soybean cultivation, still less than the area for rice and corn. To date, farmers tend to use wetland rather than other land because it needs no additional processing and is more efficient. On the other hand, with Indonesia's increasing economic growth, awareness of healthy lifestyles is growing.
Soybeans, as a source of vegetable protein, began to be used as a raw material for healthy processed foods, causing soybean demand to grow over time. Soy demand for food raw materials is higher than for other purposes such as animal feed. Based on SUSENAS 2013 data, soy-based foods are dominated by tempeh, tofu, soy sauce, oncom, tauco, and fresh soybeans. On the production side, the national harvest is still inadequate, and the quality of local soybean is not yet fully able to compete with imported soybean. Soybean land productivity currently stands at 1.4 tonnes/ha. This is caused by the use of seeds of uneven quality, fertilizer problems, and the high level of pest attack. Soybean planting is also done only at certain times of the year and is still only the third priority after rice and maize. Grain losses during traditional harvesting also reduce the amount harvested. Another factor is price instability, which makes farmers switch to crops other than soybean. Many factors influence price instability, one of which is competition in price and quality with imported soybeans. Domestic soybean tends to be expensive because middlemen buy soybeans at low prices, or because of high transport costs when farmers try to sell soybeans to wholesalers. Lower selling prices encourage farmers to use their land to cultivate other food crops. In a systems approach, these conditions can be described in the Causal Loop Diagram below (Fig. 1). System Dynamics Model Based on the causal loop diagram, a system dynamics model of soybean availability can be composed as a stock-flow diagram together with the formulation of its functions and parameters. The holistic flow diagram of the model, constructed using Vensim, is shown in Fig. 2. The model is composed of four sub-models: a demand submodel, a production submodel, a farming cost submodel, and an imported soybean cost submodel. Overall, the linkages of the model variables are shown in Table 1. Model Simulations Model simulations were carried out to obtain the results and the behavior of the system during the simulation period. Simulations are done by entering the input parameter values and changing the structure of the model if necessary. The simulation period used for the model was from 2015 to 2035. To reach the soybean self-sufficiency goal in the next 20 years, the following scenarios were conducted: (1) productivity improvement under the existing land conditions; (2) a combination of productivity improvement with expansion of the planting area; (3) stabilization of soybean prices to increase farmer productivity through the implementation of policies related to soybean imports. Result of Model Simulation The simulation process is done using the Vensim software. The data used came from the Central Bureau of Statistics, the Ministry of Agriculture, and other relevant government agencies. Scenario of productivity improvement under the existing land conditions. As with other food crops, soybean productivity is influenced by seed factors. For maximum growth, the seed is affected by soil fertility and sufficient water, whereas the final yield is affected by the level of damage caused by pests and by seed losses at harvest. The assumptions of this scenario are no land conversion, farmers using wetland, and a pest attack rate of 5-8%. The planting area used is 640.85 thousand hectares (a minimal illustrative sketch of this kind of stock-flow computation is given below).
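To make the stock-flow mechanics concrete, here is a minimal Python sketch of a production-versus-demand simulation of the kind the Vensim model performs; all parameter values (population, per-capita consumption, growth rates) are hypothetical placeholders roughly scaled to the figures in the text, not the study's calibrated inputs.

```python
# Minimal stock-flow sketch: soybean demand vs. production, 2015-2035.
# Hypothetical parameters; the actual Vensim model uses calibrated data.

area = 640.85e3          # planting area (hectares)
productivity = 1.4       # yield (tonnes/hectare)
cropping_index = 1.0     # harvests per year
population = 255e6       # people
per_capita = 0.010       # soybean consumption (tonnes/person/year)
pop_growth = 0.012       # annual population growth rate
area_growth = 0.0        # scenario lever: e.g. 0.70 for +70%/year expansion

for year in range(2015, 2036):
    production = area * productivity * cropping_index   # tonnes/year
    demand = population * per_capita                    # tonnes/year
    gap = demand - production                           # >0 means imports needed
    print(f"{year}: production={production/1e6:.2f} Mt, "
          f"demand={demand/1e6:.2f} Mt, gap={gap/1e6:.2f} Mt")
    population *= 1 + pop_growth                        # demand-side stock update
    area *= 1 + area_growth                             # supply-side stock update
```

Raising area_growth, productivity, or cropping_index then reproduces, in miniature, the levers examined in scenarios (1) and (2).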
From various combinations of scenarios, the best results were obtained with seed types having an average productivity of at least 2.5 tonnes/ha (example: Gamasugen) and biological fertilizer giving an average productivity increase of at least 125% (example: Bio P2000Z), as shown in Fig. 3. Another alternative is the use of seeds with a short growing season, so that soybean can be planted at least twice a year (Fig. 4). The second scenario additionally involves the expansion of the planting area in order to increase production. With the use of quality seeds as in the previous scenario, the best results are obtained by expanding the planting area by at least 70% every year. As shown in Figure 5, in the first year of the simulation the self-sufficiency condition is still not met, but in subsequent years the condition improves according to the amount of available land. Besides the availability of land, the availability of farmers willing to cultivate soybeans is important. Farmers are not willing to plant soybeans if they cannot profit from the farm, and they will suffer losses if the community prefers imported soybeans because of their cheaper price. The establishment of a government purchase price does not help if imported soybeans remain cheaper than local soybeans. To overcome these problems it is necessary to keep the price of imported soybeans under control. Based on the simulation, the percentage of import cost to be charged should be set at least relative to the government fixed price (HPP), with reference to Eq. (1):

C = 1 − (Gc × T) / V (1)

where C = percentage of the import cost, Gc = HPP of soybean, T = import volume, and V = import value in Rupiah. Conclusions Based on the simulation results, to improve national soybean production to meet the needs of the next 20 years, the government needs to take the following actions: 1. Increase the planting area by at least 70% every year to obtain sufficient land to increase production. 2. Provide quality seeds with a productivity level of at least 2.4 tons/hectare and biological fertilizer that can increase seed productivity by at least 125%. 3. Control the price of imported soybeans by imposing import costs corresponding to Eq. (1), in order to keep prices stable so that soybean farmers remain productive.
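To illustrate how Eq. (1) behaves, here is a small worked example in Python; the HPP, volume, and value figures are hypothetical placeholders chosen only to show the arithmetic, not the study's data.

```python
def import_cost_share(hpp_per_ton, import_volume_tons, import_value_rupiah):
    """Compute C from Eq. (1) as given in the text: C = 1 - (Gc * T) / V."""
    return 1 - (hpp_per_ton * import_volume_tons) / import_value_rupiah

# Hypothetical figures: HPP of 8,500,000 Rp/ton, 2.0 Mt imported,
# total import value of 20 trillion Rp.
c = import_cost_share(8_500_000, 2_000_000, 20_000_000_000_000)
print(f"import cost share from Eq. (1): {c:.2%}")  # prints 15.00%
```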
2019-02-15T14:22:16.776Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "1a0667c3aee4928abb76db2d1a9b26137098c5ff", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.procs.2015.12.169", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d564096a1a6684d807a41ac056ca251c9939413e", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Computer Science" ] }
257653577
pes2o/s2orc
v3-fos-license
A Clinical Prediction Model of Overall Survival for Patients with Cervical Cancer Aged 25-69 Years Aims: This study aims to develop a prediction tool for the overall survival of cervical cancer patients. Methods: We obtained 4116 female patients diagnosed with cervical cancer aged 25-69 during 2008-2019 from the Surveillance, Epidemiology, and End Results Program. Overall survival between groups was illustrated by the Kaplan-Meier method and compared by a log-rank test adjusted by the Bonferroni-Holm method. We first performed a multivariate Cox regression analysis to evaluate the predictive values of the variables. A prediction model was then created using Cox regression based on the training set, and the model was presented as a nomogram. The proposed nomogram was designed to predict the 1-year, 3-year, and 5-year overall survival of patients with cervical cancer. Besides the C-index, time-dependent receiver operating characteristic curves and calibration curves were created to evaluate the accuracy of the nomogram at the time points of one year, three years, and five years. Results: With a median follow-up of 54 (28, 92) months, 1045 (25.39%) patients were deceased. Compared with alive individuals, the deceased were significantly older, and their tumors were more likely to be of the cervix uteri primary site, larger size, higher grade, and higher combined summary stage (all p values < 0.001). In the multivariate Cox regression, age at diagnosis, race, tumor size, grade, combined summary stage, pathology, and surgery treatment were significantly associated with all-cause mortality for patients with cervical cancer. The proposed nomogram showed good performance with a C-index of 0.82 in the training set. The 1-year, 3-year, and 5-year areas under the curve (with 95% confidence intervals) of the receiver operating characteristic curves were 0.88 (0.84, 0.91), 0.84 (0.81, 0.87), and 0.83 (0.80, 0.86), respectively. Conclusions: This study develops a prediction nomogram model for the overall survival of cervical cancer patients with good performance. Further studies are required to validate the prediction model. Introduction Cervical cancer is the fourth most common malignant tumor in women, posing a substantial health threat worldwide [1,2]. Owing to advances in prevention, diagnosis, and treatment, the incidence and mortality of cervical cancer have decreased by at least half in the past three decades in developed countries [1]. Still, the disease remains a significant health burden on a global scale. It was reported that 569,847 patients were newly diagnosed with cervical cancer, and 311,365 deaths were caused by cervical cancer, worldwide in 2018. Squamous cell carcinoma accounts for the major histological subtype (about 70%), and adenocarcinoma is the second most common subtype, accounting for about 25% [1,3]. Many factors have previously been reported to be associated with the survival of this malignancy, such as lymph node metastasis, histologic type, tumor size, etc. [4]. However, overall survival varies among cervical cancer patients at the individual level, even for those with the same disease stage and histologic type. A single predictive biomarker alone is insufficient to evaluate the disease's survival comprehensively. As a class of artificial intelligence, machine learning uses algorithmic methods to make machines perform disease prediction without explicit programming [5]. Applying machine learning to big data provides a powerful method for evaluating complex healthcare information [6].
Therefore, this study aims to develop a prediction tool for the overall survival of cervical cancer patients based on machine learning. Data Source The Surveillance, Epidemiology, and End Results (SEER) Program collects population-based cancer incidence and survival data from US cancer registries, which cover about 48% of the total US population. Patient demographics, tumor site, morphology, stage at diagnosis, treatment, and follow-up survival status are routinely collected in the SEER registries. This study obtained data from the "Incidence-SEER Research Data, 8 Registries, Nov 2021 Sub (1975-2019)". The cervical cancer diagnosis was based on the International Classification of Diseases for Oncology, 3rd Edition (ICD-O-3). We included participants who were (1) pathologically diagnosed with cervical cancer, (2) aged between 25-69, (3) with complete survival records, and (4) newly diagnosed between 2008-2019. Exclusion criteria were (1) diagnosed only by autopsy or death certificate, (2) without race, tumor site, size, grade, or stage records, and (3) missing surgery records. In the SEER database, the surgery records of participants were recorded as (1) Yes (received surgery treatment), (2) No (did not receive surgery treatment), or (3) Not available (no information about surgery treatment). We excluded patients without available information on surgery. It should be noted that a "missing surgery record" indicates that we cannot be sure whether the participant received surgery treatment, rather than that they did not receive surgery. Follow-up time was defined as the time from diagnosis to death or the last contact date. Finally, 4116 female patients diagnosed with cervical cancer during 2008-2019 were included in this study. In this study, sociodemographic, pathologic, and clinical variables were obtained for further analysis, including age at diagnosis, race (White, Black, and other races), primary site (cervix uteri, endocervix, exocervix, and overlapping lesion), tumor size, grade (Grade I, well-differentiated; Grade II, moderately differentiated; Grade III, poorly differentiated; Grade IV, undifferentiated), combined summary stage (regional, localized, and distant), pathology (squamous cell carcinoma, adenocarcinoma, and others), and surgical treatment. All data were acquired from the SEER database with the SEER*Stat software (version 2.4.0). Development and Validation of the Prediction Model We first performed a multivariate Cox proportional hazards regression model to evaluate the predictive values of the variables. Multiple biomarkers were input to the Cox regression model, including age at diagnosis, race, primary site, tumor size, grade, combined summary stage, pathology, and surgical treatment. The results are shown as hazard ratios (HRs) with 95% confidence intervals (CIs). Multicollinearity refers to a high correlation between two or more predictor variables in a regression model. Multicollinearity can lead to unstable estimates of the regression coefficients, which makes it difficult to determine the true effect of each predictor variable on the outcome variable. The variance inflation factor (VIF) is a measure widely used to assess the degree of multicollinearity in a regression model. We calculated the variance inflation factor of each variable to evaluate multicollinearity. A variance inflation factor of 1 indicates no correlation, between 1 and 5 indicates moderate correlation, and above 5 indicates high correlation.
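For readers who want to reproduce this check, the following is a minimal Python sketch of a VIF computation using statsmodels; the data frame and its column names are hypothetical placeholders, since the SEER extract itself is not distributed with the paper.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# Hypothetical design matrix: dummy-coded predictors as in the paper.
df = pd.DataFrame({
    "age": [45, 60, 52, 38, 67, 49],
    "tumor_size_mm": [20, 55, 40, 15, 70, 35],
    "grade_II": [0, 1, 1, 0, 0, 1],      # moderately differentiated
    "grade_III": [1, 0, 0, 0, 1, 0],     # poorly differentiated
})

X = add_constant(df)  # compute VIFs against a model with an intercept
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
for col, v in vifs.items():
    print(f"{col}: VIF = {v:.2f}")  # values above 5 flag high multicollinearity
```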
The input data were randomly divided into a training set and a testing set at a 7:3 ratio. The training set (n = 2882) was used to create the prediction model, while the testing set (n = 1234) was used to validate the model performance. The prediction model was created by Cox regression and was presented as a nomogram. The proposed nomogram was designed to predict the 1-year, 3-year, and 5-year overall survival of patients with cervical cancer. Besides the C-index, time-dependent ROC curves and calibration curves were created to evaluate the accuracy of the nomogram at the time points of one year, three years, and five years.

Statistical Analysis

Descriptive statistics were used to describe the baseline characteristics. We represented continuous variables as mean ± standard deviation and categorical variables as percentages. The baseline characteristics were compared by the Kruskal-Wallis test or chi-square test as appropriate. The overall survival between groups was illustrated by the Kaplan-Meier method and compared by a log-rank test adjusted by the Bonferroni-Holm method. We used bootstrapping with 1000 resamples to internally validate the performance of the proposed model. p values < 0.05 were considered statistically significant. All statistical analyses were performed in R software (version 4.0).

Participant Characteristics

With a median follow-up of 54 (28, 92) months, 1045 (25.39%) patients were deceased. Compared with individuals who were alive, the deceased were significantly older and more likely to have a primary tumor at the cervix uteri site, a larger tumor size, a higher grade, and a higher combined summary stage (all p values < 0.001). The baseline participant characteristics are shown in Table 1.

Cox Regression Analysis

In the multivariate Cox regression, age at diagnosis, race, tumor size, grade, combined summary stage, pathology, and surgery treatment were significantly associated with all-cause mortality for patients with cervical cancer. The results of the multivariate Cox regression analysis are shown in Table 2. Compared with the white race, black race patients were at a 1.37-fold (95% CI 1.14-1.65) risk of all-cause death. Furthermore, Figure 1 illustrates the overall survival of cervical cancer patients of different races (log-rank p value < 0.0001). The survival was significantly lower in the black race than in the white (BH-adjusted p value < 0.001) and other races (BH-adjusted p value < 0.001). However, no statistical difference was observed between the white race and other races (BH-adjusted p value = 0.62). Additionally, the variance inflation factors of each variable are provided in Supplementary Table S1. The moderately differentiated and poorly differentiated grades showed high multicollinearity, with variance inflation factor values of 5.4 and 5.8, respectively. Nevertheless, we input all variables into the prediction model.

Figure 1. Kaplan-Meier plot of overall survival for patients with cervical cancer of different races (log-rank p value < 0.0001).

Development and Validation of the Prediction Model

A prediction model was created by Cox regression based on the training set, and the proposed model was presented as a nomogram in Figure 2. Multiple variables were input: age at diagnosis, race, primary site, tumor size, grade, combined summary stage, pathology, and surgical treatment. The C-index in the training set was 0.82.
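A minimal sketch of this model-building step, written with the Python lifelines library, is shown below; it is not the authors' actual code (the paper's analyses were done in R), and the DataFrame `data` and its column names are assumptions for illustration.

```python
# Minimal sketch of fitting a Cox model on a 7:3 split, assuming a DataFrame
# `data` with dummy-coded covariates plus follow-up time in months ("time")
# and an event indicator ("dead"); the column names are illustrative.
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index
from sklearn.model_selection import train_test_split

train, test = train_test_split(data, test_size=0.3, random_state=42)  # 7:3 split

cph = CoxPHFitter()
cph.fit(train, duration_col="time", event_col="dead")
print(cph.concordance_index_)  # C-index on the training set

# Validate on the held-out set; a higher partial hazard means higher risk,
# so it is negated before computing the concordance index.
c_test = concordance_index(test["time"], -cph.predict_partial_hazard(test), test["dead"])
surv = cph.predict_survival_function(test, times=[12, 36, 60])  # 1-, 3-, 5-year
```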
In the testing set, the nomogram showed good performance with a C-index of 0.81. Figure 3 shows that the performance remained satisfactory at the time points of one year, three years, and five years, with areas under the curves (AUCs) of 0.88 (0.84, 0.91), 0.84 (0.81, 0.87), and 0.83 (0.80, 0.86), respectively. Additionally, the calibration plots of the nomogram are shown in Figure 4, and the sensitivity and specificity of the model in the testing set are shown in Table 3. Our results showed that the nomogram had good calibration when predicting the 1-year, 3-year, and 5-year overall survival probability.
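The time-dependent AUC evaluation could be sketched as follows with the scikit-survival package, reusing the `train`/`test` frames and fitted `cph` from the previous snippet; this is an illustrative assumption of one possible implementation, not the authors' code.

```python
# Sketch of time-dependent AUCs at fixed horizons, assuming the `train`/`test`
# DataFrames and the fitted `cph` model from the previous snippet.
from sksurv.util import Surv
from sksurv.metrics import cumulative_dynamic_auc

y_train = Surv.from_dataframe("dead", "time", train)
y_test = Surv.from_dataframe("dead", "time", test)
risk = cph.predict_partial_hazard(test)  # higher risk = worse expected survival

times = [12, 36, 60]  # 1, 3 and 5 years, in months
auc, mean_auc = cumulative_dynamic_auc(y_train, y_test, risk, times)
print(dict(zip(times, auc)))
```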
Discussion

In this study, we obtained 4116 female patients diagnosed with cervical cancer during 2008-2019 from the SEER Program. Based on Cox regression analysis, we developed a prediction model and presented it as a nomogram. In the validation, the model showed good performance with a C-index of 0.82. The ROCs show that the performance remained satisfactory at one year, three years, and five years, with AUCs of 0.89, 0.86, and 0.84. Previous studies have revealed many risk factors to predict overall survival for patients with cervical cancer (e.g., lymph node metastasis, histologic type, tumor size) [4]. However, cervical cancer patients show distinct prognoses, even those with the same histologic type. Prediction using a single biomarker alone is insufficient to comprehensively evaluate the disease survival. Nomograms based on machine learning integrate multiple biomarkers to comprehensively evaluate disease prognosis [7][8][9]. This visualized method is designed to generate a precise prediction tailored to an individual patient, providing a simple-to-use tool for clinicians to predict overall survival [10]. Recently, many nomograms have been developed for cancer diagnosis and prognosis, which showed better performance than the traditional clinical stage system [11][12][13]. Few studies have proposed prediction tools to evaluate the overall survival of cervical cancer [14,15]. Polterauer et al. [14] developed a nomogram to predict overall survival in cervical cancer using 528 consecutive patients. The International Federation of Gynecology and Obstetrics (FIGO) stage, tumor size, age at diagnosis, histologic subtype, lymph node ratio, and parametrial involvement were input into the prediction model as nomogram covariates. This model was internally validated using 1000 bootstrap resamples, and the C-index for overall survival was 0.72 (25th and 75th percentiles, 0.70 and 0.74) [14]. In another study by Kidd and colleagues [15], 234 cervical cancer patients were included to develop the nomograms. The proposed nomograms showed reliable performance for recurrence-free survival, disease-specific survival, and overall survival, with C-indexes of 0.741, 0.739, and 0.658, respectively [15]. Compared with previous studies, we included a large population-based sample, which makes our results more reliable and potentially applicable to the general population. Importantly, our nomogram showed good performance with a C-index of 0.82 and 1-year, 3-year, and 5-year AUCs of 0.89, 0.86, and 0.84, respectively. The proposed nomogram is convenient and can easily be converted into an online prediction tool, which would help clinicians to make treatment decisions. Tumor stage and histology subtype are well-demonstrated risk factors for worse survival of cervical cancer patients. However, it remains uncertain whether older age reduces overall survival [16][17][18]. The median age of cervical cancer diagnosis is 49 years, and cervical cancer is mostly diagnosed in patients aged between 35 and 44 years. Current guidelines recommend cervical cancer screening for women below 65 but not above 65 years [19]. Still, many patients are diagnosed at an older age (above 65 years), accounting for about 20% of all patients [20]. Therefore, research is required to investigate the risk factors predicting the survival of cervical cancer. In a previous study of 43,350 cervical cancer patients, Quinn et al. [21] reported that increased age (particularly > 70 years) was significantly associated with decreased survival. The trend remained consistent when stratified by tumor stage and histology subtype [21]. In our study, patients aged 65-69 were at a 1.6-fold risk of all-cause death compared with those aged 25-29 years. Despite these advantages, some limitations should be noted. First, this study is based on the SEER program, which is performed in the US; it remains unclear whether the model can be applied to other populations. Second, the predictive factors input into this model were all taken from SEER records; the database does not collect some important predictive biomarkers for cervical cancer (especially recently proposed ones). Third, the prediction model was only validated internally; further validation on an external dataset would be necessary. Last but not least, this study excluded participants with missing records, which might induce additional selection bias.

Conclusions

In the present study, we developed a prediction nomogram model with good performance for the overall survival of cervical cancer patients. Further studies are required to validate the prediction model externally.
Improved mutation tagging with gene identifiers applied to membrane protein stability prediction

Background: The automated retrieval and integration of information about protein point mutations, in combination with structure, domain and interaction data from literature and databases, promises to be a valuable approach to study structure-function relationships in biomedical data sets. Results: We developed a rule- and regular expression-based protein point mutation retrieval pipeline for PubMed abstracts, which shows an F-measure of 87% for the mutation retrieval task on a benchmark dataset. In order to link mutations to their proteins, we utilize a named entity recognition algorithm for the identification of gene names co-occurring in the abstract, and establish links based on sequence checks. Vice versa, we could show that gene recognition improved from 77% to 91% F-measure when considering mutation information given in the text. To demonstrate practical relevance, we utilize mutation information from text to evaluate a novel solvation energy based model for the prediction of stabilizing regions in membrane proteins. For five G protein-coupled receptors we identified 35 relevant single mutations and associated phenotypes, of which none had been annotated in the UniProt or PDB database. In 71% of cases, the reported phenotypes were in compliance with the model predictions, supporting a relation between mutations and stability issues in membrane proteins. Conclusion: We present a reliable approach for the retrieval of protein mutations from PubMed abstracts for any set of genes or proteins of interest. We further demonstrate how amino acid substitution information from text can be utilized for protein structure stability studies on the basis of a novel energy model.

The interactions between proteins are of central importance for almost all processes in living cells, and are described by numerous distinct pathways in databases such as KEGG [2]. Malfunctions or alterations in such pathways can be the cause of many diseases, for instance when the biosynthesis of involved proteins is repressed or proteins are not interacting the way they should. The latter can be due to structural changes in one of the interacting proteins caused by point mutations, i.e. single wild-type amino acid substitutions. Indeed, it is already well known that such mutations are the cause of many hereditary diseases. Thus the large-scale analysis of point mutation data in combination with information about protein interactions, protein structure, and disease pathogenesis might facilitate the study of still unresolved phenotypes and diseases. Despite the availability of numerous biomedical data collections, valuable information about mutation-phenotype associations is still hidden in non-structured text in the biomedical literature. This knowledge can be extracted by text mining, stored in a homogeneous data store, and integrated with already available data from suitable databases. Combining all data, new hypotheses can be formulated, such as the prediction of phenotypic effects induced by mutations. Genomic variation data have already been collected for many years. Single nucleotide polymorphisms (SNPs), which make up about 90% of all human genetic variation and occur every 100 to 300 bases along the 3-billion-base human genome [3], are available as large collections. Single amino acid polymorphisms (SAPs), originating from wet lab experiments, are often manually extracted from the literature and curated into databases.
Additionally, some structures of such mutations may be revealed in crystallography experiments and might eventually end up as distinct structures in the Protein Data Bank (PDB). Of particular interest is the identification of mutations which have a strong influence on the stability of proteins. Therefore, the biomedical literature can be systematically searched for information about mutation-phenotype associations by text mining, which may lead to new insights beyond the information in existing databases. For the text-mined data it is additionally possible to weight or prioritize information according to publication date, the involved authors, and journals. Consideration of such meta data can be relevant for detecting that an already published assumption has been proven wrong in a more recent publication, or for determining whether a protein only recently attracted interest or whether the information has been available for years. Furthermore, it is possible to receive a more detailed view of a protein's characteristics, for example, if a certain interaction only takes place under specific conditions, or if an interaction is prevented by the conformational change of a protein domain triggered by a point mutation.

Databases

Data on mutations have been collected for years, for numerous species and by different organizations for diverse purposes. There are many efforts to cope with the data, which is being made available in a growing number of databases. The Human Genome Variation Society [4] promotes the collection, documentation and free distribution of genomic variation information. New mutation databases are reported in the journal Human Mutation on a regular basis. There are manually curated databases like OMIM [5] and the UniProt Knowledgebase [6,7], and general central repositories like the Human Gene Mutation Database HGMD (now part of BIOBASE) [8], the Universal Mutation Database [9], the Human Genome Variation Database [10], or MutDB [11]. Besides these central repositories, there are small specialized databases, such as the infevers autoinflammatory mutation online registry [12], the GPCR NaVa database for natural variants in human G protein-coupled receptors [13], or the Pompe disease mutation database with 107 sequence variants [14]. An overview is given in Table 1.

Table 1. Mutation databases. Most available mutation databases focus on mutations from human or on specific protein families (e.g. G protein-coupled receptors). Some lack well-defined information on mutant phenotypes and only few link to interaction data. Half of the databases also contain data retrieved by text mining methods.

In contrast, unpublished SNPs normally make their way into large locus-specific data repositories. Since August 2006, there has also been a wiki-based approach, SNPedia (http://www.snpedia.com/index.php/SNPedia), collecting information on variations in human DNA, in contrast to classical databases.

Text mining

Despite the availability of numerous biomedical data collections, valuable information about mutation-phenotype associations is still hidden in non-structured text in the biomedical literature. Hence, text mining methods are implemented to automatically retrieve these data from the 18 million articles referenced in PubMed [15][16][17][18][19]. Text mining aims to generate new hypotheses through the automatic extraction and integration of information spread over several natural language texts. One of the key prerequisites for finding new facts
(e.g. interactions or mutations) is named entity recognition (NER) in text [20,21]: the assignment of a class to an entity (e.g. protein), as well as of a preferred term or identifier in case an entry exists in a database such as UniProt or in a controlled vocabulary like the Gene Ontology (GO) [22]. For the task of named entity recognition, usually a dictionary is used which contains a list of all known entity names of a class (e.g. human proteins), including synonyms. For the recognition of patterns (e.g. database identifiers like NM_12345), regular expressions can be defined. For the analysis of whole sentences, natural language processing (NLP) techniques are used, which aim to understand text on a syntactic and semantic level. This approach is often paired with systems which are based on a set of manually defined rules or which make use of (semi-)supervised machine learning algorithms. In recent years, there have been diverse examples of the successful application of text mining to the mutation retrieval task. Early examples are the automatic extraction of mutations from Medline and cross-validation with OMIM [23], and mining OMIM for phenotypic and genetic information to gain insights into complex diseases [24]. More recently, a concept recognition system based on regular expressions was applied to the mutation mining task [25]. GraB, a tool for the automatic extraction of protein point mutations using a graph bigram association [26], was reported to reliably find gene-mutation associations in full text. For identifying gene-specific variations in biomedical text, the ProMiner system, developed for the recognition and normalization of gene and protein names, was integrated with a conditional random field (CRF)-based recognition system [27]. As an answer to the diverse approaches developed over the past years, a framework for the systematic analysis of mutation extraction systems was proposed [28]. A growing number of groups are working on protein mutations and their involvement in diseases; a recent overview is given in [29]. Kanagasabai et al. [30] developed mSTRAP (Mutation extraction and STRucture Annotation Pipeline) for mining mutation annotations from full-text biomedical literature, which they subsequently used for protein structure annotation and visualization. Worth et al. [31] use structure prediction to analyse the effects of non-synonymous single nucleotide polymorphisms (nsSNPs) with regard to diseases. Focusing on Alzheimer's disease, Erdogmus et al. [32] developed MuGeX to extract mutation-gene pairs, with an estimated recall of 91.3% and a precision of 88.9%. Lage et al. [33] realized a human phenome-interactome network of protein complexes implicated in genetic disorders by integrating quality-controlled interactions of human proteins with a validated, computationally derived phenotype similarity score. We compared the above-mentioned mutation extraction approaches with regard to their strengths and weaknesses. MutationFinder is still used as a reference system for the pure mutation extraction task, although it does not distinguish between mutations on the DNA and protein level, and does not support grounding to genes. MuGeX finds textual descriptions of mutations and distinguishes between DNA and protein mutations, but its mutation grounding relies only on proximity and does not consider sequence information.
The mutation grounding approach used in mSTRAP considers sequence information, but allows only mutation-protein pairs that co-occur in one sentence, and its mutation extraction relies on simple regular expressions. Finally, GraB is a successful approach which implements the grounding and disambiguation techniques discussed above, but might be computationally too expensive for large data sets. Towards the development of an automated system for the interpretation of structure-function relations in the context of genetic variability data, we chose to design our own protein mutation retrieval system. We aim at a system which identifies and grounds protein mutations based on sequence information and proximity at a high recall. At the same time, we need a flexible system that can be applied to diverse biomedical questions and has moderate computational requirements.

Methods

As we have motivated above, novel gene-disease associations or the influence of mutations on protein-protein interactions can be discovered through the combination of data from literature and databases. Hence, we designed a generic mutation-centred approach that can be applied to any kind of genetic data for answering disease-centred questions. As a prerequisite, we consider available high-quality data on protein point mutations from curated databases and from peer-reviewed literature. For the latter, we present a flexible approach for both the specific and the high-throughput retrieval of mutations. In detail, the following tasks have to be performed: (1) Identify genes/proteins in abstracts. (2) From this subset of abstracts consider only those which additionally contain information about mutations. (3) Propose potential protein-mutation pairs. (4) Filter proposed pairs by sequence checks. (5) Utilize this information for the refinement of the original gene/protein identifier.

Entity recognition

Gene normalization. This module allows for the automated named entity recognition of genes and proteins. Our approach performs gene name disambiguation by using background knowledge to match a gene with its context against the text as a whole [34]. A gene's context contains information on Gene Ontology annotations, functions, tissues, diseases etc., extracted from the databases Entrez Gene and UniProt. A comparison of gene contexts against the text gives a ranking of candidate identifiers, and the top-ranked identifier is taken if it scores above a defined threshold. This approach has recently been extended for inter-species gene normalization and achieves an 81% success rate on a mixed dataset of 13 species [35].

Mutation tagging. We implemented an entity recognition algorithm (MutationTagger) to automatically extract protein point mutation mentions from PubMed abstracts. Wild-type and mutant amino acid, as well as the sequence position of the substitution, are extracted by means of both a set of regular expressions for pattern recognition of 1- or 3-letter notations (e.g. E312A or Glu(312) → Ala) and rules for the more complex identification of textual mutation descriptions (e.g. "Glu312 was replaced with alanine"). Problems concerning the full-text representations (detecting the correct sequence position of the mutated residue and unravelling enumerations) have been addressed by additional extraction algorithms and the implementation of a sequence check. An evaluation of our method on the test data from MutationFinder [36] showed comparable success rates of 88% F-measure for mutation mention extraction (see Table 2).
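To illustrate the pattern-based part of such an extraction, the following minimal Python sketch tags 1- and 3-letter substitution mentions; the patterns are deliberately simplified assumptions and do not reproduce the actual rule set of MutationTagger.

```python
# Illustrative sketch of regex-based mutation tagging; simplified assumptions,
# not the actual MutationTagger rules.
import re

AA1 = "ACDEFGHIKLMNPQRSTVWY"
AA3 = "Ala|Arg|Asn|Asp|Cys|Gln|Glu|Gly|His|Ile|Leu|Lys|Met|Phe|Pro|Ser|Thr|Trp|Tyr|Val"

patterns = [
    re.compile(rf"\b([{AA1}])(\d+)([{AA1}])\b"),                     # 1-letter: E312A
    re.compile(rf"\b({AA3})\(?(\d+)\)?\s*(?:-+>|to)\s*({AA3})\b"),   # 3-letter: Glu(312) -> Ala
]

def tag_mutations(text):
    """Return (wild-type, position, mutant) triples found in `text`."""
    hits = []
    for pat in patterns:
        for wt, pos, mut in pat.findall(text):
            hits.append((wt, int(pos), mut))
    return hits

print(tag_mutations("The E312A mutant and the Glu(312) -> Ala substitution..."))
```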
Mutation grounding

In the process of recognizing mutations in text, the direct association to specific proteins and genes remains a challenge. This is due to the fact that the abstracts of relevant publications typically mention more than one mutation or protein, respectively. Thus a mutation-protein association purely based on co-occurrence in one abstract is not sufficient, as the consideration of all possible combinations of mutations and proteins would result in a significant number of false positive predictions. The problem becomes even more evident when considering that both gene and mutation tagging are imperfect, achieving a precision of 80 to 90% each. We are aiming at an approach that disambiguates the relations of candidate mutations with proteins, and at the same time filters out false positives from the underlying mutation and gene recognition tasks. Approaches have already been developed which apply a word distance metric for assigning a mutation to its nearest occurring protein term. This is error-prone, as a matching mutation and protein do not necessarily occur close to each other in the abstract, or even in the same sentence. The statistical approach GraB for the automatic extraction of protein point mutations using a graph bigram association [26] achieves good results of up to 79% F-measure for mutation-protein association, but alone would not fulfil the second aspect of filtering out false positives.

Sequence checks

Mutations are commonly described as the substitution of a wild-type by a mutant amino acid at a given position. Our method compares the wild-type residue as described in a mutation mention with the UniProt/Swiss-Prot and PDB sequences of the candidate proteins [37]. Only associations between mutations and proteins with matching amino acids are considered, whereas the score of mismatches is set to 0. Matching pairs are scored based on their proximity, favouring pairs that co-occur in the same sentence. We assign the score to the gene-mutation pair, but also keep track of the particular Swiss-Prot and/or PDB sequence (including chain information) that matched the mutation. In the case of a shift between Swiss-Prot and PDB sequences, we calculate the correct numbering for the shifted sequence utilizing the mapping table by Martin et al. Through the consideration of both sequence and proximity information, for each mutation exactly one gene match is determined, even if more than one protein-mutation pair is possible. A minimal sketch of this check is given below.

Annotation pipelines

The developed mutation retrieval pipeline can be accessed through two different interfaces (see Figure 1), which offer either a systematic or a quick and flexible solution, depending on the annotation task. The following approaches have been implemented:

Organism-centred approach (database). All available mutations for a given organism are retrieved in one literature screening and stored in the mutation database. This approach relies on the large-scale identification of gene mentions in PubMed abstracts, which have to be compiled for the organisms of interest prior to a mutation screening. As of now, gene mention data is available for Human, Mouse, Yeast, Rat, Fruit Fly, E. coli, A. thaliana, C. elegans, Zebrafish, and H. pylori. However, data for additional relevant organisms will be added on a regular basis in the near future.
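The following minimal Python sketch illustrates the sequence check and proximity-based grounding described above; the function names, data layout and scoring are simplified assumptions, not the actual implementation.

```python
# A minimal sketch of sequence-based mutation grounding, assuming protein
# sequences were already fetched (e.g. from UniProt); helper names and the
# scoring scheme are illustrative assumptions.
def sequence_check(mutation, sequence):
    """True if the wild-type residue matches the 1-based mutation position."""
    wt, pos, _mut = mutation
    return pos <= len(sequence) and sequence[pos - 1] == wt

def ground_mutation(mutation, candidates):
    """Pick the candidate protein whose sequence supports the mutation.

    `candidates` maps protein identifiers to (sequence, proximity_score)
    pairs; mismatching sequences score 0, matches are ranked by proximity.
    """
    scores = {
        pid: (prox if sequence_check(mutation, seq) else 0)
        for pid, (seq, prox) in candidates.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```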
Protein-centred approach (on-the-fly). It is possible to retrieve relevant data for a single gene or a list of genes/proteins of any organism. For this purpose, relevant documents are obtained via keyword searches in the PubMed library using the Entrez Programming Utilities. As for the large-scale identification of gene mentions in PubMed abstracts in the organism-centred approach, the result is a set of abstracts, which is subsequently processed by the MutationTagger.

Figure 1. Mutation retrieval workflow. Workflow of mutation data retrieval with MutationTagger. A: PubMed IDs of abstracts mentioning proteins of given species are retrieved from a local database (gene2pubmed), which contains the results of our gene normalizing approach. Mutations are identified in the abstracts and stored (mutation2pubmed). The gene and mutation data are joined, filtered by sequence checks, and stored (mutation2gene). B: For a queried protein or gene, relevant articles are retrieved from the Entrez database. Mutations are identified in the abstracts, sequence checks against the queried protein are performed, and the checked mutation data are exported to HTML or SQL.

Improvement of gene normalization

As described above, we defined the input set of documents for the organism-centred mutation mining approach by scanning the whole PubMed database for abstracts mentioning at least one gene or protein of a pre-defined species. For this filtering step, we relied on the gene normalization techniques of our gene normalizer, which was applied to all PubMed abstracts in advance and has shown 85% F-measure for human genes and slightly lower values for other species [35]. However, the gene normalization proposes only one identifier per gene mention, even if a set of different candidate identifiers was computed. According to internal ranking mechanisms, only the top-scoring candidate is considered. This leads to a possible scenario where in some cases the correct identifier is ranked lower and would be neglected in any subsequent data processing. In the case of our mutation mining algorithm, we assume that some mutations cannot be associated with the correct protein because the gene tagging task already failed. On the other hand, it should be possible to improve the performance of both entity recognition techniques, for genes and for mutations, by combining their results. The idea is to run both approaches with low precision, thus receiving a high recall, to associate all genes with all mutations, and then to consider the intersection of all combinations that fit. Mutation and gene product are considered to be a valid pair if the wild-type residue at the mutated position in the protein sequence matches the one in the reported mutation (as described in section Sequence checks). For all proposed gene identifiers, protein sequences are obtained and checked for compliance with the reported wild-type amino acid. The scores of identifiers that show a match are increased, which might lead to a re-ranking of the identifiers for one gene entity. This can further improve the original gene normalization approach for candidate entities which are reported to show a mutation.

Example. As shown in Figure 2, our gene normalizer identified CCP (human crystallin, gamma D) with EntrezGene ID 1421 as the top candidate gene for abstract PMID 8142383. MutationTagger identified a replacement of tryptophan with glycine at position 191 as the only mutation mentioned in the paper. None of the protein sequences retrieved for human CCP showed a tryptophan residue at position 191, which means that this gene identifier was not supported by mutation information.
However, besides human crystallin, cytochrome-c peroxidase in yeast (EntrezGene ID 853940) was also proposed as an alternative identifier, ranked lower. As the product of this gene showed a tryptophan residue at position 191 (according to PDB sequencing), its score was increased, making it the new top candidate. Indeed, manual curation of the corresponding literature confirmed that the only gene mentioned in the abstract is cytochrome-c peroxidase in yeast.

Mutation database

We are establishing a mutation database, which is intended to store all protein point mutations mentioned in PubMed abstracts for all organisms of interest. We realized an early version, comprising a MySQL database and a web interface to access the data. It is envisaged to apply the data to diverse biological problems, such as the detection of mutation-centred gene-disease associations in human. To populate the database, in a first step the PubMed corpus is filtered for abstracts mentioning at least one gene or protein, using the named entity recognition algorithm described in section Gene normalization. Currently, data for 10 model organisms is available: Human, Mouse, Yeast, Rat, Fruit Fly, E. coli, A. thaliana, C. elegans, Zebrafish, and H. pylori. This led to a set of 1,564,124 abstracts proposing more than 3 million potential protein candidates. In the second step, the mutation extraction system is applied to this corpus and the retrieved information is transferred into the database. In total, 240,057 mutation mentions were found in 68,983 abstracts. Subsequently, for all candidate genes found in these abstracts, the corresponding sequences are obtained and checked for compliance with the wild-type amino acid at the position of the mentioned mutation. Out of 451,474 potential protein-mutation pairs, 106,360 are supported by sequence checks (59,991 if multiple mentions of the same mutation in one abstract are counted as one), in contrast to 345,114 (188,878) mutations which have not passed the sequence filter. In summary, of all 240,057 mutation mentions initially identified by the algorithm, 100,681 (42%) could be supported by gene associations based on sequence checks. These data were retrieved from 30,458 (44%) of the 68,983 abstracts in total. Figure 3 shows the content of the database for the different species and compares the text mining results with mutation data retrieved from UniProtKB. We made the mutation data for the ten model organisms available in GoGene [38] at http://www.gopubmed.org/gogene.

Evaluation

We evaluated our approach on two different tasks: the pure identification of a mutation in a text, and the identification of correct mutation-protein pairs. An evaluation of our method on the test data from MutationFinder [36] showed comparable success rates of 88% F-measure for pure mutation mention extraction (see Table 2). The test set comprises 508 abstracts which are manually annotated with point mutations; 183 out of the 508 abstracts contain at least one mention of a point mutation. It should be noted that the annotation does not contain any information about genes or proteins.

Figure 2. Improvement of gene normalization. Example of gene name normalization with the help of mutation mining. Initially, our gene normalizer proposed the human gene CCP, as its context fits the text best (abstract not fully shown).
However, when comparing the recognized mutation at position 191 with the sequences of all three candidates, only CCP in yeast contains the wild-type tryptophan at the specified position (PDB entry). After checking the full text of this publication, we found that CCP indeed refers to the gene in Saccharomyces cerevisiae.

Our approach (MutationTagger in recall mode) found mutations in 166 of the 183 abstracts, whereas 7 additional abstracts were wrongly predicted to contain mutation information. On the mutation level, 776 out of 907 mutation mentions were identified, alongside 73 false positives. We found 33 more correct mutations than MutationFinder. The higher false positive rate is secondary with regard to the mutation grounding task, as we could observe that most of the falsely predicted mutations are discarded in the subsequent filter check. To assess the mutation grounding and gene name normalization improvement as motivated in the Methods section, we ran our gene normalization approach on the 183 abstracts that contained mutations. We were able to identify gene mentions of any of the 10 supported species in 22 abstracts. It should be noted that the majority of the 183 abstracts contained genes from species that are not yet supported by our approach. In the initial run, the gene name normalizer identified the correct gene as the top-ranked candidate in 17 of 22 abstracts (77%). However, after the gene tagging refinement by applying the mutation-sequence filter to all candidate genes, the genes in three more papers were identified correctly, replacing the false top candidates. The following genes could be correctly identified after re-ranking: cytochrome-c peroxidase of yeast in PubMed abstract 8142383 (see also Figure 2), human TP53 in abstract 11254385, and human amylase alpha in abstract 15182367. This led to the correct normalization of all genes in 20 out of 22 (91%) abstracts. For the remaining two publications, the correct genes could not be identified, as they belong to species which are not yet supported by our system. These abstracts became part of our validation subset, as the gene normalizer falsely predicted mouse genes. However, these genes were subsequently not supported by the sequence checks, and the proposed identifiers were ranked below the threshold. Showing no gene identification at all, these two abstracts turned from "false positives" into "true negatives". The results on the test set indicate that our grounding approach performs reliably and can improve gene name normalization. In contrast to our approach of first performing sequence checks and using proximity as secondary information, most related grounding mechanisms either do not consider sequence information, like MuGeX [32], or utilize it only as secondary information after proximity, like mSTRAP [30]. In addition, we consider both UniProt and PDB sequences for sequence checks, as both are used by authors when describing mutations in the literature.

Figure 3. Mutation database content. Mutations and their genes extracted from text for ten model organisms. A: For each organism, the number of distinct genes (red) and genes with mutations (orange) extracted from PubMed abstracts is shown. Of the 6,000 distinct mutated genes found in total, more than half were human (3,170), which corresponds to 25% of all extracted human genes. B: The distribution of text-mined mutations across organisms. More than 70% of all mutations reported in literature abstracts are from human. C: The Venn diagram shows text-mined mutations (blue) in comparison to variant (green) and mutation (orange) annotations from UniProtKB as of version 1.47: information on an additional 26,981 mutations was obtained through text mining.
Sequence checks are surprisingly specific already for single mutations, with increasing precision for double and triple mutants. However, the presence of orthologous proteins in one abstract complicates the grounding of mutations.

On-the-fly vs. database approach

We evaluated the results of the two approaches (database and on-the-fly) for human Aquaporin-1, as part of the stability analysis of membrane proteins (see section Application). The precision of the on-the-fly approach is expected to be lower, as the document retrieval part relies on the more general free-text queries of the Entrez ESearch utility. We chose this approach to be independent of our gene normalization approach, which so far only supports 10 model organisms. Indeed, in comparison to the 20 unique mutations found by the organism-centred approach, 9 additional mutations were found when querying for "(Chip28 OR Aquaporin-1) AND human". All of these additional mutations turned out to be false positives, actually appearing in Aquaporin-2 or 4. We found that a slightly modified query, "(Chip28 OR "Aquaporin-1") AND human", did not produce false positives, and conclude that query building might not work fully automated but needs human interaction. Similar problems could be observed when short gene names or synonyms were part of queries, and could be overcome by removing them from the query. On the other hand, this supports the good precision of our gene normalization approach.

Application

Predicting effects of mutations based on sequence. Integral membrane proteins play an important role in all organisms, especially as transporters. Due to their striking importance, mutations in membrane proteins are known to be the cause of many hereditary diseases, such as cystic fibrosis or retinitis pigmentosa. The reason is often a conformational change in a protein, which may lead to malfunction of a whole protein complex. Unfortunately, identified structures of membrane proteins are still rare. For this reason, we used a coarse-grained model presented in [39] that assesses the influence of mutations on protein structure based on sequence information. The approach considers the solvation energy, which is based on the probability distribution for each amino acid within the integral part of a membrane protein to be facing the lipids of the membrane or the neighbouring proteins. The amino acid specific property "inside" or "outside" reflects the orientation of the amino acid side chains with respect to the centre of mass of the neighbouring residues. For a given mutation in an integral part of a membrane protein, the approach compares the solvation energies of the wild-type and mutant residues. If the energies differ significantly, a destabilizing effect is predicted, especially if the energies change from negative to positive or vice versa. To quantify the ability of this model to predict the influence of mutations on the stability of membrane proteins, we compared already examined and published effects of mutations with the predictions of the sequence-based model.
For this purpose, we screened the literature for single point mutations reported for five membrane proteins from the family of G protein-coupled receptors (bacteriorhodopsin and halorhodopsin from Halobacterium salinarum, bovine rhodopsin, the Na+/H+ antiporter from Escherichia coli, and human aquaporin-1). As described in the section Protein-centred approach and in Figure 1B, articles relevant for these proteins were identified by searching PubMed via the NCBI Entrez Programming Utilities. Abstracts for each protein were queried by the protein and gene name, including the synonyms as derived from the corresponding PDB/UniProt entry. The MutationTagger was applied to these five sets of abstracts for the extraction of mutation information. The application of sequence checks brought the results down to a reasonable number of proposed mutations, which were presented as HTML documents and subsequently manually curated. In the manual curation phase, we only considered publications where a clear relationship between a single point mutation and stability or stability-related function was described. Double or multiple mutations were not considered if the determination of a direct relation between the reported effect and one of the mutations was not possible. If an appropriate mutation was found in the literature, we compared the solvation energies of the wild-type and mutant residues, calculated according to [39], to decide whether the mutation is stabilizing, slightly stabilizing, slightly destabilizing, or destabilizing.

Example. Mutation T93P in bovine rhodopsin was reported to lead to a conformational change of the protein [40]. Considering the two solvation energies of wild-type threonine (-0.66 a.u.) and mutant proline (0.08 a.u.), a destabilizing effect can be predicted, although both amino acids are actually classified as neutral. Without the change of sign from - to +, only a slightly destabilizing effect would have been hypothesized.

Relevance. We were able to show the ability of our mutation mining approach to retrieve publications containing mutation information for given proteins at a good precision. Due to the quick and precise retrieval of mutation data, we were able to assess the soundness of the coarse-grained model for the prediction of stabilizing regions in membrane proteins. Across these five membrane proteins, 25 out of 35 mutational effects reported in the literature correlate with the predictions based on the solvation energy (see Table 3). These cases suggest a relation between mutations and stability issues in membrane proteins. It should be noted that none of these mutations were annotated in the UniProt and PDB databases.
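The classification rule described above can be summarized in a few lines of Python; the significance threshold below is an illustrative assumption, since the exact cutoff used with the model from [39] is not stated here.

```python
# Sketch of the stability classification based on wild-type and mutant
# solvation energies (in a.u., following [39]); the 0.5 threshold is an
# illustrative assumption.
def classify_mutation(e_wild, e_mutant, threshold=0.5):
    """Classify a mutation's predicted effect on membrane protein stability."""
    delta = e_mutant - e_wild
    sign_change = (e_wild < 0) != (e_mutant < 0)  # e.g. -0.66 -> 0.08 for T93P
    if sign_change or abs(delta) > threshold:
        return "destabilizing" if delta > 0 else "stabilizing"
    return "slightly destabilizing" if delta > 0 else "slightly stabilizing"

print(classify_mutation(-0.66, 0.08))  # T93P in bovine rhodopsin -> "destabilizing"
```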
Conclusion

We developed a rule- and regular expression-based approach that allows for the retrieval of protein point mutations from the whole PubMed database specifically for any given protein. This flexibility makes it a powerful tool for immediately finding relevant data for follow-up studies, as we showed in the application to five membrane proteins. In addition, MutationTagger can be utilized for the species-wide identification of mutations in proteins mentioned in PubMed. We started to set up a mutation database which allows for systematically querying mutation-related information and finding relevant literature for subsequent studies. The sequence checks applied to identified mutations and candidate proteins have proven to be an efficient, yet not sufficient, filter for mutation-protein associations. The filter shows good sensitivity but improvable specificity, especially at the species level. Furthermore, we were able to show that mutation information from the literature can further improve the quality of the gene normalization algorithm, which already showed very good results.
Clothing Design Model for High Temperature Work based on Heat Conduction

By analyzing the temperature changes of the three fabric layers of high-temperature work clothing and of the fourth-layer air gap, the temperature distribution of the clothing is calculated. The temperature change of the clothing is related to the heat flow; heat radiation within the clothing and internal heat sources between the layers are ignored. Defining the three variables time, temperature and thickness, the heat conduction equation is obtained from the law of conservation of energy and Fourier's law of heat conduction. The forward difference method, a finite difference method, is used to solve the equation and obtain the numerical model, from which relationship diagrams of temperature, time and thickness are drawn. The results show that the heat absorbed from outside is consumed layer by layer in the process of heat transfer, and that the air layer in contact with the human body finally tends to 48 °C. Finally, based on the physical heat absorption formula Q = cmΔu, a further analysis is carried out and the optimal thicknesses of layers II and IV are obtained using an optimization algorithm.

Introduction

When workers work in a high-temperature environment for a long time, they feel hot and dizzy and experience palpitations, irritation, thirst, weakness and fatigue. A series of physiological changes may occur, which endangers their health. People therefore need to wear special clothes to avoid the impact of high temperature. Existing models of high-temperature work clothes are divided into single-layer and multi-layer models, based on single-layer or multi-layer materials. In the single-layer model, thermal protective clothing only has a shell. Gibson proposed a heat and mass transfer model for single-layer porous media at high temperature; the defect of this model is that the influence of thermal radiation is neglected [1]. In order to improve the model, Torvi proposed a heat transfer model for the shell material of thermal protective clothing considering different radiation conditions [2]. Later, on the basis of the single-layer model, many scholars studied multi-layer models of heat and moisture transfer in high-temperature work clothes [3][4][5]; these studies are based on the Torvi model. The special high-temperature work clothes in this paper are usually composed of three layers of fabric material, classified as layers I, II and III. Layer I contacts the external environment, and there is a gap between layer III and the skin, classified as layer IV. In order to design the special clothing, a dummy whose body temperature is controlled at 37 °C is placed in the high-temperature environment of the laboratory to measure the temperature at the outside of the dummy's skin. In order to reduce the cost of research and development and shorten the development cycle, a mathematical model is used to determine the temperature changes on the outside of the dummy's skin and to solve the related problems. Re-description of the same heat transfer process by different models deepens the understanding of heat transfer in textile materials, which is conducive to further research and provides a theoretical basis for ensuring safe working times at high temperature and improving the performance of high-temperature work clothing.
Figure 1 shows that people need to wear special clothes to avoid burns when working at high temperature. The special clothing is usually composed of three layers of fabric material, marked as layers I, II and III, in which layer I contacts the external environment; there is still a gap between layer III and the skin, marked as layer IV. For this system, the following assumptions are made. The special fabric material for high-temperature operation is assumed to be isotropic. The system only considers heat transfer by conduction, ignoring heat radiation, moisture transfer and internal heat sources between layers. Heat transfer is perpendicular to the skin and can be regarded as one-dimensional. It is assumed that there is no melting or decomposition of the thermal protective fabrics during heat transfer [3]. The thickness of the air layer in layer IV is not more than 6.4 mm, so the influence of heat convection is neglected. The temperature distribution between layers is continuous, but the temperature gradient jumps at the layer interfaces.

Layer Distribution of Heat Conduction Model

It is necessary to calculate the temperature distribution of the high-temperature work clothes. After analyzing each layer, it is found that the temperature change of the clothing is related to heat. Time, temperature and thickness are defined as the variables to find the law of temperature change. According to the law of conservation of energy and Fourier's law of heat conduction, a preliminary model is established, and the forward difference method, a finite difference method, is used to obtain the temperature distribution of each layer. Let the thickness coordinate of the high-temperature work clothes be x and the temperature be u. Neglecting heat sources and radiation inside the clothing, the heat intake of the clothing is equal to the heat required by the medium when the temperature rises [1][2]. Taking the section from x to x + dx on the x-axis, its mass can be expressed as m = ρ dV. Let the heat in the work clothes conduct vertically along the x-axis with heat flux Q(x, t) and temperature distribution u(x, t), as shown in Figure 2. According to Fourier's law of heat conduction, the heat q passing through a unit area perpendicular to the x direction in unit time is proportional to the spatial rate of change of the temperature:

q = -k ∂u/∂x. (2)

Substituting equation (2) into the energy balance (1), the heat conduction equation is obtained:

∂u/∂t = (k/(cρ)) ∂²u/∂x², (3)

where c is the specific heat capacity, ρ is the density, Q is the heat absorbed by the fabric in J, k is the thermal conductivity, and u is the temperature.
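The following minimal Python sketch illustrates the explicit forward-difference solution of this equation; the paper's own computations were done in MATLAB, and all parameter values below (uniform diffusivity, total thickness, fixed boundary temperatures) are illustrative assumptions, not the actual fabric data.

```python
# Minimal sketch of the explicit forward-difference solution of the 1-D heat
# equation du/dt = (k / (c * rho)) * d2u/dx2; all values here are assumptions.
import numpy as np

alpha = 1e-7           # thermal diffusivity k/(c*rho) in m^2/s (assumed, uniform)
L, T = 0.015, 3600.0   # total thickness 15 mm (assumed), 60 min of exposure
nx = 151
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha          # keep s = alpha*dt/dx^2 <= 0.5 for stability
s = alpha * dt / dx**2

u = np.full(nx, 37.0)             # initial temperature: body temperature everywhere
for _ in range(int(T / dt)):
    u[0], u[-1] = 65.0, 37.0      # outer ambient / inner skin boundary (assumed fixed)
    u[1:-1] += s * (u[2:] - 2.0 * u[1:-1] + u[:-2])  # forward-difference update

print(u[::30])                    # temperature profile across the layers after 60 min
```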
Optimal thickness of layer II. When the ambient temperature is 65 °C and the thickness of layer IV is 5.5 mm, the optimal thickness of layer II is to be determined, so as to ensure that the outer skin temperature of the dummy does not exceed 47 °C, and that the time above 44 °C does not exceed 5 minutes, when working for 60 minutes. The heat absorbed from the high-temperature environment is consumed layer by layer in the process of heat transfer. Let Q1 be the heat absorbed by layer I, Q3 the heat of layer III, and Q4 the heat of the layer-IV air gap. Assuming that the temperature change of layer III is the same as that of layer IV, the relation for the heat Q2 of layer II can be deduced from the three variables Q1, Q3 and Q4, and the optimal thickness of layer II can then be obtained with an optimization algorithm. Let the heats of layers I, II, III and of the layer-IV air gap be Q1, Q2, Q3 and Q4, respectively, where Q1 is the heat the work clothing absorbs from the environment. According to the law of conservation of energy:

Q1 = Q2 + Q3 + Q4. (1)

According to the heat absorption formula in physics:

Q = cmΔu, (4)

where m is the mass.

Optimum Thickness of Layers II and IV. When the ambient temperature is 80 °C, the optimum thicknesses of layers II and IV are to be determined, so as to ensure that the temperature of the outer skin of the dummy does not exceed 47 °C, and that the time above 44 °C is not more than 5 minutes, when working for 30 minutes. In the layer-by-layer heat consumption, the heats of layers II and IV are taken as unknown quantities, while those of layers I and III are known. According to the law of conservation of energy, the preliminary model of this problem is obtained as above: Q1 = Q2 + Q3 + Q4, with Q = cmΔu for each layer. Further analysis with Q = cmΔu then yields the optimal thicknesses of layers II and IV via the optimization algorithm.

Layer Distribution of Heat Conduction Model. In this paper, the finite difference method is used to solve the heat conduction equation; according to the analysis of the problem, the forward difference method is chosen. The forward difference scheme is as follows [8]:

u_i^(n+1) = u_i^n + s (u_(i+1)^n - 2 u_i^n + u_(i-1)^n),

with the stability factor s = kΔt/(cρΔx²); the scheme is stable for s ≤ 1/2.

Optimum Thickness of Layer II. From the problem statement, the temperature of the layer-IV air gap varies from 37 °C to 47 °C. The temperature of layer III is assumed to vary in accordance with that of layer IV, although in fact it varies from 37 °C to 45 °C. Substituting the layer heats from equation (4) into the energy balance (1) yields Q2, from which the thickness of layer II follows.

Optimum Thickness of Layers II and IV. The temperature of the layer-IV air gap varies from 37 °C to 47 °C, and the temperature of layer III is assumed to vary in the same range. The heats of layers II and IV then follow from equation (4) as Q2 = c2·m2·Δu2 and Q4 = c4·m4·Δu4.
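To make the heat-balance calculation concrete, a small sketch follows; every numeric value (density, specific heat, area, temperature rise, and the known heats) is an illustrative assumption, not a value from the paper.

```python
# Sketch of the layer-thickness calculation implied by Q = c*m*du and the
# energy balance Q1 = Q2 + Q3 + Q4; all numeric values are assumptions.
c2, rho2, area = 1377.0, 862.0, 1.0   # layer II specific heat, density, area (assumed)
du2 = 10.0                            # temperature rise of layer II in K (assumed)
Q1, Q3, Q4 = 5.0e4, 1.2e4, 0.8e4      # known heats in J (assumed)

Q2 = Q1 - Q3 - Q4                     # energy balance: Q1 = Q2 + Q3 + Q4
# Q2 = c2 * (rho2 * area * d2) * du2  ->  solve for the layer II thickness d2
d2 = Q2 / (c2 * rho2 * area * du2)
print(f"required layer II thickness: {d2 * 1000:.2f} mm")  # about 2.5 mm here
```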
Layer Distribution of Heat Conduction Model. Using MATLAB to plot the heat conduction model [6], a three-dimensional relationship diagram of time, temperature and thickness (i.e. the temperature distribution diagram) is obtained.

Optimum Thickness of Layers II and IV. For a working time of 30 minutes the temperature must not exceed 47 °C, and the temperature may only reach 44 °C after 25 minutes of work, as shown in Figure 6.

Fig. 6 Temperature-time relationship of layers II and IV

Concluding Remarks

In this paper, a three-dimensional model of time, temperature and thickness is established through analysis, so that the temperature change of high-temperature work clothes can be shown in a more intuitive, three-dimensional form. The paper makes full use of the law of conservation of energy and Fourier's law to optimize the model of high-temperature work clothes under heat conduction, which provides a useful approach for research on high-temperature work clothing. At the same time, a layer-by-layer analysis method is used to build the model, which makes the model more specific. However, in the process of building the model, factors such as heat radiation, moisture transfer and internal heat sources between layers are neglected, which makes the model somewhat idealized; these factors should be fully considered when applying it to practical problems.

Outlook

In recent years, research on the design of special clothing for high-temperature work has gradually become a hot topic of academic research. Research on strengthening functional protective materials and clothing for high-temperature work is one of the important measures for national security development and the revitalization of the textile industry. Therefore, the research and design of special clothing for high-temperature work is very promising. In practice, taking into account heat radiation, moisture transfer, and internal heat sources between layers, the model can be applied more widely. Good thermal insulation performance can better assist high-temperature workers in performing their tasks, which can bring a qualitative leap for national security development and the textile industry.
2019-09-11T02:02:52.295Z
2019-08-01T00:00:00.000
{ "year": 2019, "sha1": "3449abe8deb3d6af98066443f8484eba67174f4e", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1300/1/012023", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "576885d2205598cba818df21c52d4e751af4892c", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
233391703
pes2o/s2orc
v3-fos-license
POST-COVID-19 UNIVERSITY GOVERNANCE IN GERMANY. Researchers admit that there will not be a quick return to "business as usual", especially in relation to internationalization, the financing of studies and universities, research and administration. The enabling research question is formulated as follows: What is the relationship between the COVID-19 pandemic and university governance? The aim of the article is to examine the relationship between the COVID-19 pandemic and university governance, underpinning the elaboration of implications for higher education. The study was of a qualitative nature. An exploratory study was implemented. The study was carried out in Germany on the 22nd September 2020. A focus group interview served as the method of data collection. The data were interpreted via structuring and summarising content analysis. The theoretical finding is that the COVID-19 pandemic is a factor that influences university governance. The COVID-19 pandemic is an external factor in regard to university governance. The factor's impact can be regulated: it can be increased or decreased according to the situation's requirements. The empirical data allow concluding that the COVID-19 pandemic accelerates the changes in governing the universities in Germany. Implications for higher education are presented. Research limitations are identified. Directions for further research are proposed. Introduction The outbreak of COVID-19 in the world has led to unprecedented changes in people's lives. Many people have experienced rapid transformations in many aspects of their lives: working conditions, shopping, travelling, finance, etc. Higher education has also had to adapt significantly to the new situation created by the COVID-19 pandemic. Higher education is conventionally delivered by higher education institutions. "Higher education institutions" and "universities" are used synonymously in this work. The first university reaction to the COVID-19 pandemic in March 2020 was expressed by the full lockdown: university staff and students were not allowed to enter university campuses and premises. The coronavirus lockdown interrupted usual university processes in almost all areas, namely teaching, university management, laboratory work, etc. Then, in April 2020, face-to-face lectures were replaced by their digital equivalents. Since the global spread of the coronavirus pandemic, universities, as higher education providers, have gradually shifted their on-campus activities to on-line delivery. On-line lectures, department on-line meetings and student on-line project work are only some of the university measures in response to the COVID-19 pandemic. Against this background, many researchers admit that there will probably not be a quick return to "business as usual", especially in relation to internationalization, the financing of studies and universities, research and administration. The enabling research question is formulated as follows: What is the relationship between the COVID-19 pandemic and university governance? The aim of the article is to examine the relationship between the COVID-19 pandemic and university governance, underpinning the elaboration of implications for higher education. The study was of a qualitative nature. An exploratory study was implemented. The study was carried out in Germany on the 22nd September 2020. The focus group interview served as the method of data collection. The data were interpreted via structuring content analysis as well as summarising content analysis.
Conceptual Framework University governance is defined as the constitutional forms and processes through which universities govern their affairs (Shattock, 2006), among others. University governance is opposed to university management. Figure 1 reflects the inter-relationship between university governance and university management. University management ensures the implementation of the university's constitutional forms and processes by which the university's objectives are achieved. University governance relies on university policies. University policies include guidelines, rules, and procedures established to support efforts and encourage work towards stated objectives (Aboy, 2018).

Figure 1. The relationship between university governance and university management

University governance is shaped by factors. By factor, a cause of change in a phenomenon is meant. Factors are conventionally differentiated (Zaščerinska, Zaščerinskis, Andreeva, & Aļeksejeva, 2013), as shown in Figure 2, into external and internal.

Figure 2. The relationship between factor, external factor and internal factor

The COVID-19 pandemic in this work is certainly an external factor that shapes university governance. It should be noted that, depending on the research focus, the COVID-19 pandemic can also be analysed as • a research object, • a criterion and indicator, • a structural element, etc. Identification of the COVID-19 pandemic as a factor leads to the research goal of mitigating or diminishing the effect of the COVID-19 pandemic on universities. Scientific literature analysing how to decrease the impact of the COVID-19 pandemic on universities reveals that there is no "right answer", no "one-size-fits-all" response (Peregrine, DeJong, DiVarco, McDermott & Emery LLP, 2020). The ultimate consideration in play post-pandemic is the expectation that university governance policies and procedures will periodically be re-evaluated in the light of their particular facts and circumstances (Peregrine, DeJong, DiVarco, McDermott & Emery LLP, 2020). There is a need to refocus thinking away from short-term and urgent issues and look to the medium and long term (McVitty, 2020). Though each university may have its own internal discussions and scenarios, there is a loss of quality and efficiency of strategic thinking about the sector as a whole (McVitty, 2020). This is not only about institutional autonomy but about sourcing the best ideas and insight about the role of higher education in a post-COVID-19 world (McVitty, 2020). Traditionally, effective governance has been framed as protecting the longevity and sustainability of one's own institution (McVitty, 2020). But governors can offer strategic leadership on how education institutions can fulfil their mission to serve their place as well (McVitty, 2020). The most appropriate approach is for university governors to pursue such a self-analysis from a good faith, informed perspective (Peregrine, DeJong, DiVarco, McDermott & Emery LLP, 2020). Hence, the analysis of scientific literature results in the following findings: • university governance changes due to particular facts and circumstances, • the approach of self-analysis from the informed perspective should be practised, • long-term issues and sustainability should be prioritised, instead of dealing with short-term and urgent issues, • the focus should be put on the whole sector of higher education, not on each university.
Empirical Study Design The empirical study was enabled by the research question: What is the impact of the COVID-19 pandemic on university governance in Germany? The study purpose was to investigate the effect of the COVID-19 pandemic on university governance in Germany. The qualitative study was implemented. The exploratory study was employed in the present work. The interpretive research paradigm was used in the study. The interpretive paradigm is characterized by the researcher's practical interest in the research question (Cohen, Manion, & Morrison, 2003). The interpretive paradigm is featured by the researcher's interest in a phenomenon. The interpretive paradigm is aimed at analysing the social construction of the meaningful reality. Meanings emerge from the interpretation. The researcher is the interpreter (Ahrens, Purvinis, Zaščerinska, Miceviciene, & Tautkus, 2018). The data were collected via the focus group interview. The focus group interview examined how knowledge, and more importantly, ideas, develop and operate within a given cultural context, as well as explored exactly how opinions were constructed (Kitzinger, 1995). A focus group usually includes from five to 10 participants (Krueger, 2002). The choice of participants for a focus group interview is conventionally based on three criteria (Zaščerinska, Aļeksejeva, Aļeksejeva, Gloņina, Zaščerinskis, & Andreeva, 2015): • participant's knowledge of a given topic, • participant's cultural and educational diversity (scientific direction, occupation, training, etc.), and • participant's hierarchy in the group. The number of participants depends on the heterogeneity of the focus group: the greater the heterogeneity of the group, the fewer the number of participants (Okoli & Pawlovski, 2004). Further on, smaller groups show a greater potential (Krueger & Casey, 2000) to examine the process of the construction of knowledge and opinion. The focus group interview was video-recorded, and detailed notes were made. The interview was relatively open and exploratory until novel concepts and ideas stopped emerging. Full transcripts of the focus group interview were made, and a thematic analysis was carried out to elucidate common themes and topics of the discussion. The structuring content analysis was used to assess the material according to particular criteria that are strictly determined in advance (Mayring, 2004, p. 269). The summarizing content analysis seeks to reduce the material in such a way that the essential contents are preserved, but a manageable short text is produced (Mayring, 2004, p. 269). Empirical Study Results The focus group interview was carried out within the Forum of University Councils organised on the 22nd September 2020 in Berlin, Germany. The Forum was entitled "Corona and the Consequences - What is Next for Universities?". The questions discussed included the following: • How and when can international exchange be restarted? • What effects does the corona-related economic slump have on university funding? A respondent of the focus group interview stated that, in order to prepare universities to work in emergency situations, more focus should be put on the elaboration of a strategy, and not on defining separate actions. Another respondent of the focus group interview agreed that by 2025 universities in Germany have to be able to practise hybrid (partly on-campus and partly on-line) models in regard to teaching, learning, research and other activities.
In terms of teaching, specific (professional) knowledge within a study programme will be delivered mostly on-line, while the knowledge for students' personal development will be delivered face-to-face. It was pointed out that the preparation of a face-to-face lecture requires the same time as the preparation of an on-line lecture. It was also noted that on-line lectures can be given by university staff without the obligation for the teaching staff to do so from the university campus. On-line lectures can be delivered from any place and location. It was discussed that lectures for big student groups would move on-line, as the coronavirus pandemic restricted the number of participants per event. On-line lectures for big student groups raise the problem of the use and management of big lecture halls in university buildings. The transfer to the hybrid model of university studies increases the students' load. The respondents expressed the opinion that students' work within the university studies will only grow. As students' work within the university studies is tightly connected to students' learning, learning will increasingly take an individual format. Individual learning will allow students to set their own pace within the university studies. The delivery of lectures mostly on-line raised the issue of the use and management of university buildings. Currently, administrative staff members work in office rooms which are mostly occupied by one person. The respondents highlighted the possibility of creating co-working spaces for administrative staff members at universities. Another important issue that received a lot of attention was universities' internationalization. The respondents opined that university internationalization should be enhanced, as internationalization facilitates the financing of studies and universities. The universities in Germany expect an increase in international team members after the introduction of the vaccine against coronavirus. The interview respondents pointed out an interesting fact: during the first wave of the COVID-19 pandemic, more international students applied for an Erasmus+ exchange in Germany. The respondents assumed that the exchange students wished to stay in Germany during the first wave of the COVID-19 pandemic, despite Germany at that time being on the list of countries with a high infection rate, because Germany has a good health system that is highly ranked in Europe and worldwide. The interview respondents stressed that a new university ranking criterion, namely a country's health system, will appear in the future. University internationalization was also related to the universities' networking activities. Networking was considered in a wider context, not only as part of university internationalization. The respondents underlined that a network cannot be established via digital tools alone; personal contacts and meetings are significant for network creation and maintenance. Empirical Study Findings The analysis of the data collected through the focus group interview was based on a criterion, namely affairs/areas of university governance at the institutional level. The structuring content analysis allows finding that the most affected affairs/areas of university governance in Germany are − Personal contacts. It should be pointed out that the respondents of the focus group interview highlighted that networking cannot be established without personal contacts and meetings. The respondents primarily had in mind the professional networks of teaching staff.
In relation to university students, the university studies are an opportunity for students to start their own professional networking. Professional networking is also useful for group learning and peer learning, which are found to be a necessary part of the students' study process (Zascerinska, 2013). However, the use of mostly digital tools for studies at university does not promote the establishment of a student's professional network. This allows finding that the student's study process has to include all the parts, namely teaching, peer learning and learning (Zascerinska, 2013). The summarising content analysis allows identifying that the university governance in Germany has been externally affected at the institutional level. The emergency situation related to the COVID-19 pandemic has greatly impacted the university governance processes, which had to be shifted in a short period of time • from face-to-face and hybrid • to fully digital. However, it should be pointed out that hybrid studies at the universities in Germany were planned in advance and already partly introduced. The COVID-19 pandemic has only increased the pace and shortened the planned period of time for the transformation from face-to-face and hybrid to fully digital formats. Along with the changing university governance processes, the global spread of COVID-19 has had an enormous influence on the re-elaboration and updating of university governance policies, rules and guidelines. Conclusions The theoretical finding is that the COVID-19 pandemic is a factor that influences university governance. Further on, the COVID-19 pandemic is an external factor in regard to university governance. Another theoretical finding is that the factor's impact can be regulated: the factor's impact can be increased or decreased according to the situation's requirements. The empirical findings reveal that changes in the university governance in Germany are in full compliance with the shifts in university governance described in scientific literature. The empirical data allow concluding that the COVID-19 pandemic accelerates the changes in governing the universities in Germany. For example, the introduction of the hybrid teaching and learning model was planned by 2025. The introduction of the hybrid teaching and learning model in Germany has already started, and the COVID-19 pandemic only speeded up this process. This leads to the conclusion that university governance in Germany is well planned and oriented to the sustainability of both higher education and higher education institutions. The organisation of the Forum of German Universities' Councils implies that issues of the whole sector of higher education in Germany are being discussed and solved, and not the problems of each university alone. The university governance in Germany changes due to the COVID-19 pandemic effect. Another empirical finding is that Germany sets 2025 as the year of the introduction of hybrid teaching and learning at universities. This discloses that the university governors in Germany concentrate on long-term issues and sustainability. Raising the issues of the relationship between higher education and the state health system, as well as the management of university buildings, makes evident that the university governors practise the approach of self-analysis from the informed perspective, as these issues have not been investigated in scientific literature.
Implications for higher education have been formulated: • the universities' governors are advised to work together when dealing with the COVID-19 pandemic, in universities in particular and in higher education in general, • teaching in the study process at university has to be combined with peer learning in order to support the establishment of students' professional networks, • students need face-to-face classes to start their professional networking in order to build and strengthen their professional capacity and competence, • administrative premises have to be re-organised in accordance with the available working space, • the use of university buildings and halls has to be re-structured, • the university ranking system is to be updated with the state health system criterion. The present research has some limitations. One limitation is that only the relationship between the COVID-19 pandemic and university governance has been examined. Another limitation is that the empirical study was carried out in only one country, namely Germany. The group of respondents was limited to the participants of only one forum. Further research will focus on identifying internal factors that influence university governance. The list of external factors that influence university governance will be extended. Future work will aim to increase the number of respondents and widen the range of groups and countries represented. The search for methods of data collection, analysis and interpretation is planned. Comparative studies of the impact of COVID-19 on the university governance of different countries are also proposed.
2021-04-26T03:28:40.077Z
2021-02-04T00:00:00.000
{ "year": 2021, "sha1": "e7cc292ca87549828c0b8432da4953a746a1a61d", "oa_license": "CCBY", "oa_url": "http://journals.rta.lv/index.php/ER/article/download/5336/5594", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "be9645c5f2d8f71a8d4bf8454b79c28f2108cb4f", "s2fieldsofstudy": [ "Education", "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
211036253
pes2o/s2orc
v3-fos-license
A comprehensive analysis of #Enuresis conversation on Twitter. Introduction Enuresis is defined by the International Children's Continence Society (ICCS) as discrete episodes of nocturnal urinary incontinence in children greater than the age of 5 [1] and is a common problem among the pediatric population. It is estimated that 16% of children at the age of 5 will experience enuresis, which decreases to 13% at the age of 6, 10% at the age of 7, and 1% to 2% after the age of 15 [2]-[4]. Some studies have found that enuresis can have a detrimental impact on childhood development. For instance, children with enuresis were found to have lower perceptions of self-esteem and self-image [5], [6]. These children may therefore have difficulties adjusting to social situations due to an inability to participate in common activities such as sleepovers, residential school trips, and camping trips [7]. Although there are many management options for enuresis, including bedwetting alarms, motivation therapy, and pharmacologic interventions, the impact of these treatments on self-esteem and patient mental health is an area of active investigation [8]. The Internet has become a widely used resource for patients to obtain medical information, share personal experiences, and garner peer support. Online support groups for conditions such as cancer, mental health disorders, and human immunodeficiency virus (HIV) have been found to be effective in alleviating psychosocial burdens [9]-[11]. While no studies have examined the role of online support groups in pediatric urology, several studies have investigated the role of the internet and social media in this field. Routh et al. in 2009 studied internet content for 10 different pediatric urology conditions, including enuresis, and found that the available online content was high quality for both common and uncommon conditions [12]. Rowe et al. in 2018 demonstrated that social media can be employed as a novel tool for undertaking pediatric urology-focused patient-centered outcomes research [13]. Twitter, a microblogging platform, is a social media service that has emerged as a popular discussion forum for healthcare topics [14]. Conversations on Twitter use hashtags that effectively serve as keywords for topics. Twitter has gained popular appeal amongst both medical professionals and patients. O'Kelly et al. in 2017 identified that parents of pediatric urology patients use social media accounts of medical journals, physicians, and hospitals to access health education information [15]. Many open source efforts, including the Urology Tag Ontology Project, have aimed to structure the conversation for pediatric urological conditions via hashtags. #Enuresis was established as the official hashtag for Twitter discussions by the Urology Tag Ontology Project, and has been recognized by urological organizations such as the Urology Care Foundation as well as by academic urology journals sponsored by the American Urological Society and European Association of Urology [16]. The goal of this study was to examine the content contained within conversations using #Enuresis by analyzing users contributing to the conversation and the content of tweets incorporating the hashtag. Twitter analysis We analyzed the use of #Enuresis using Symplur, a Twitter analytics service (www.symplur.com), between June 28, 2016 and November 28, 2018. This time frame included all tweets containing #Enuresis since the Symplur service began monitoring #Enuresis.
Tweet activity was analyzed by examining the number of total users, new users per month, and tweets per month. Tweet metric analysis was performed by obtaining information about retweets as well as tweets with links, embedded media, mentions, and replies. User information was aggregated via Symplur based on publicly available information. A user profile was generated based on geographic location, occupation, and organizational affiliations. The numbers of users in North America (Canada, Mexico, United States) were compared against Europe and the rest of the world. Twitter users employing #Enuresis were classified into healthcare categories based on profession, organizational affiliation, or credentials using Symplur category definitions [17], [18]. All Symplur classifications were manually verified and corrected if necessary to confirm that stakeholders were accurately identified. Users were also classified based on influence on the #Enuresis Twitter discussion. Influence was determined via the SymplurRank metric. SymplurRank is a proprietary score that is similar to the Impact Factor measurements used by academic journals and controls for Twitter activity that is corrupted by spammers, such as number of tweets, retweets and mentions [19]. The Top 100 users with the highest SymplurRank were reported as key influencers of the online discussion. Tweet content was determined by analyzing words, hashtags, links and the presence of media attachments. Each of these categories was separately investigated to further understand the content within #Enuresis conversations. The 100 most common words were analyzed along with the Top 25 hashtags and Top 10 links used in tweets containing #Enuresis. A survey of tweets containing #Enuresis as well as any associated hashtags was performed, given the fact that tweets may have multiple hashtags. Statistical analyses All statistical tests were undertaken using the R Programming Language 3.5.0 (https://cran.r-project.org/). Two separate analysis of variance (ANOVA) tests were performed to determine differences in #Enuresis tweet volume and new user adoption in the study time frame. Specifically, the first ANOVA was performed comparing the average number of tweets per month across the 3-year interval, and the second ANOVA was performed comparing the average number of new users per month across the same time interval. The change in number of users was modeled using a linear regression, and the regression coefficient was tested for statistical significance. Discussion Social media provides a platform for providers, patients, and healthcare organizations to communicate and share information. Smailhodzic et al. found that social media encouraged equal communication between the patient and physician and increased the rapport of patient-physician relationships [20]. Laranjo et al. found that interventions for patients using social network sites were able to effectively promote health-related behavior change [21]. Farpour et al. described how patients with chronic medical conditions were able to improve their mental health by participating in healthcare interventions that incorporated social media tools [22]. Our study was focused on analyzing Twitter conversations employing #Enuresis in order to understand existing discussion patterns and highlight avenues to more effectively leverage this platform for improving management of the condition.
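To illustrate the statistical analyses described above (the study itself used R 3.5.0), a minimal equivalent sketch in Python follows; the monthly tweet and user counts are synthetic placeholders, not the study's data.

```python
# Sketch of the two ANOVAs (monthly tweet volume and new-user adoption across
# years) and the linear regression of user growth. Counts are fabricated
# placeholders for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tweets_y1 = rng.poisson(120, 7)    # hypothetical Jun-Dec 2016 monthly counts
tweets_y2 = rng.poisson(130, 12)   # hypothetical 2017 monthly counts
tweets_y3 = rng.poisson(125, 11)   # hypothetical Jan-Nov 2018 monthly counts

# One-way ANOVA: do average monthly tweet volumes differ across the 3 years?
f_stat, p_val = stats.f_oneway(tweets_y1, tweets_y2, tweets_y3)
print(f"ANOVA on monthly tweet volume: F = {f_stat:.2f}, p = {p_val:.3f}")

# Linear regression of cumulative user count on month index; the slope's
# p-value tests whether the rate of user adoption differs from zero.
months = np.arange(30)
users = np.cumsum(rng.poisson(50, 30))   # hypothetical cumulative user counts
res = stats.linregress(months, users)
print(f"user growth: slope = {res.slope:.1f} users/month, p = {res.pvalue:.3g}")
```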
There was no significant difference in the average number of monthly tweets containing #Enuresis across our time period of June 2016 to November 2018 (p = 0.292). This is in contrast to other reported urology Twitter discussions such as #TesticularCancer [14] and #KidneyStones [17], which both reported increases over their study periods. One likely explanation is that both testicular cancer and kidney stones impact an older population than enuresis. Testicular cancer in particular is the most common malignancy among young men [23]. As a result, those patients are more likely to have access to and be active on the internet and social media compared to pediatric patients afflicted with enuresis. When analyzing the locations of users tweeting with #Enuresis, the majority of known users were found to be in Spain and other European countries. There was no difference in the average number of users in North American countries compared to European countries (p = 0.328). This marks a contrast with other hashtag analyses, where the majority of users were located in the United States (US) [14], [17]. The adoption of #Enuresis across different countries is evidence for the global appeal of Twitter-based healthcare conversations. Our results may suggest that users from European and other foreign countries are more willing to engage in enuresis discussion and research compared to users from the US. There are several reasons why #Enuresis might have higher engagement levels outside of the US. World Bedwetting Day, for example, was launched in 2015 by a coalition of international agencies including the ICCS and the European Society for Pediatric Urology [24]. Additionally, the ICCS, one of the main research/advocacy groups for the condition, has a strong international presence, as 9 of the 11 board members reside outside of the US [25]. The fact that these advocacy efforts are driven by European members might be an underlying reason for this distribution of users. In addition, the US healthcare system operates largely as a fee-for-service (FFS) model, where payment is distributed based on the quantity of care that is delivered [26]. Since the management of enuresis is non-surgical [27], the economic incentives for enuresis awareness and management may differ from other countries that promote pay-for-performance and integrated care models. Countries with healthcare systems in place that subsidize health maintenance and long-term follow-up may draw more awareness to chronic conditions such as enuresis. We observed an increase in the number of users from 6 to 1,555 across our study period. Physicians comprised the largest share of the top 100 influencers (14%), followed by medical device organizations (13%) and advocacy/support organizations (11%). 79% of these physicians and 64% of these advocacy/support organizations tweeted in Spanish, and words such as "niños" (children) and "cama" (bed) were in the list of the top 10 most commonly used words. These findings are consistent with the international adoption of this hashtag. The relatively high percentage of medical device organizations is likely attributable to the popularity of enuresis prevention technologies. A majority of medical device organizations were related to the manufacture and sale of enuresis alarms, which can be used as a primary treatment for enuresis [28]. A majority of tweets (72%) were sent with links.
The most commonly tweeted links were affiliated with medical device websites that sold enuresis prevention tools (thebedwettingdoctor.com, www.tenscare.co.uk, https://www.dri-sleeper.com/, malemmedical.com). The next most commonly tweeted websites were advocacy/support sites (pisenlacama.com.ar, www.eric.org.uk, www.guiainfantil.com). Currently there are no studies that have evaluated the effect that online support websites or groups have on alleviating the psychosocial burdens of enuresis. The popularity of these websites in our analysis supports future work to investigate the impact of these internet tools on enuresis management. We acknowledge that our study has certain limitations. First, we recognize that conversation regarding the condition might exist outside the #Enuresis hashtag. Less than 33% of #Enuresis tweets included more colloquial hashtags such as #Bedwetting or #PisEnLaCama. This demonstrates that the conversation surrounding #Enuresis is substantially distinct from the conversation involving these alternative hashtags. Furthermore, we wanted to investigate the Twitter conversation surrounding discrete episodes of nocturnal urinary incontinence via hashtags incorporating formal medical terminology in lieu of hashtags using colloquial language such as bedwetting. Second, we recognize that our analysis might be limited by the fact that some Twitter users might not be following traditional Twitter norms and thereby failing to append #Enuresis to tweets pertaining to this condition. Consequently, our study might underestimate the volume of tweets and number of users discussing this condition on Twitter. Last, due to limitations of the Symplur software, we are unable to correlate patient engagement with tweet quality. Emerging evidence has suggested that publications receiving the most media attention may not be the most scientifically rigorous, or that the public may place greater value on different subjects than the scientific community [29]. As a result, future research is necessary to determine the quality of information that patients are interacting with. Conclusions Our analysis demonstrates that Twitter is a popular forum for discussions about enuresis and that many users are employing #Enuresis to converse about the condition. Our results show that there have been steady increases in the total number of users utilizing this hashtag. Our study indicates that the majority of the conversation about #Enuresis is driven by various influencers including physicians, advocacy groups, and medical device companies. We demonstrate that #Enuresis has received strong international adoption and that Twitter is a widely used platform for discussing the condition around the globe.

Fig. 1. Analysis of tweet activity (A) and user influx (B) from June 28, 2016 to November 28, 2018.
2020-02-06T09:08:44.390Z
2020-02-04T00:00:00.000
{ "year": 2020, "sha1": "b44022806247061ecc95181a9194c585d6c39380", "oa_license": null, "oa_url": "https://cuaj.ca/index.php/journal/article/download/6260/4312", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "62faee4943a0e8865fab36b2a1ce8495ce94513c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16930170
pes2o/s2orc
v3-fos-license
Severe symptomatic hyponatremia during citalopram therapy - a case report Background Hyponatremia secondary to the syndrome of inappropriate secretion of antidiuretic hormone is an uncommon complication of treatment with the new class of antidepressant agents, the selective serotonin reuptake inhibitors. The risk of hyponatremia seems to be highest during the first weeks of treatment, particularly in elderly females and in patients with a lower body weight. Case Presentation A 61-year-old diabetic male was admitted to the hospital because of malaise, progressive confusion, and a tonic/clonic seizure two weeks after starting citalopram, 20 mg/day. On physical examination the patient was euvolemic and had no evidence of malignancy, cardiac, renal, hepatic, adrenal or thyroid disease. Laboratory test results revealed hyponatremia, serum hypoosmolality, urine hyperosmolarity, and an elevated urine sodium concentration, leading to the diagnosis of inappropriate secretion of antidiuretic hormone. Citalopram was discontinued and fluid restriction was instituted. The patient was discharged after serum sodium increased from 124 mmol/L to 134 mmol/L. Two weeks after discharge the patient denied any new seizures, confusion or malaise. At that time his serum sodium was 135 mmol/L. Conclusions Because the use of serotonin reuptake inhibitors is becoming more popular among elderly depressed patients, the present paper and other reported cases emphasize the need for greater awareness of the development of this serious complication and suggest that serum sodium levels should be monitored closely in elderly patients during treatment with citalopram. Background Hyponatremia secondary to the syndrome of inappropriate secretion of antidiuretic hormone (SIADH) is an uncommon complication of treatment with the new class of antidepressant agents, the selective serotonin reuptake inhibitors (SSRIs) [1,2]. Estimates of the occurrence of hyponatremia during treatment with SSRIs range between 0.5% and 25%, and the risk of hyponatremia seems to be greatest during the first weeks of treatment with an SSRI, in the elderly, in female patients and in patients with lower body weights [3,4]. However, severe consequences of hyponatremia caused by SSRIs, such as tonic/clonic seizure, have not been reported. We describe the case of a 61-year-old male with a tonic/clonic seizure caused by SSRI-induced hyponatremia. Case Presentation We recently saw a 61-year-old male referred to us because of a 3-day history of malaise, progressive confusion, and a tonic/clonic seizure. Two weeks before, he had been started on a regimen of citalopram 20 mg at bedtime. The patient and his wife reported that he became progressively confused, lethargic and had difficulty performing simple tasks. He is a type 2 diabetic treated with metformin 500 mg twice daily and glyburide 2.5 mg once daily. Upon admission, the patient was afebrile with normal vital signs. He appeared euvolemic without evidence of congestion or dehydration. A diagnosis of SIADH was made based on clinical euvolemia in the presence of hyponatremia with a urine osmolarity and sodium that were inappropriately high. Normal renal, thyroid and adrenal function with relative hypouricemia all supported SIADH. Extensive investigations ruled out malignancy, pulmonary, hepatic, cardiac or renal disease or any other known causes of SIADH.
On the day of admission, citalopram was discontinued and the patient was treated with 2 liters of intravenous 0.9% sodium chloride, phenytoin (5 mg/kg), and subcutaneous insulin. Approximately 24 hours after admission the patient's serum sodium increased to 129 mmol/L (reference range 136-145 mmol/L) and the chloride increased to 89 mmol/L (reference range 98-106 mmol/L); thereafter, fluids were restricted to 1200 ml/day. His mental status improved over the next 48 hours. Five days after admission serum sodium was 134 mmol/L (136-145 mmol/L) and serum chloride was 99 mmol/L (98-106 mmol/L). The patient was fully alert, had no more seizures and was subsequently discharged. At this time phenytoin treatment was stopped. A follow-up serum sodium three weeks after discharge was 135 mmol/L (136-146 mmol/L). This patient's seizures appear to have been induced by hyponatremia that was secondary to SIADH, a diagnosis that is supported by the low serum sodium concentration, concentrated urine, and clinical evidence of euvolemia. The laboratory values and history were inconsistent with a diagnosis of psychogenic polydipsia. The finding of SIADH secondary to citalopram use may reflect dysregulation of serotonergic control of ADH release or metabolism. Experimental evidence in rodents has demonstrated the presence of serotonergic neurons in the hypothalamic supraoptic nucleus, which is where the ADH prohormone is synthesized [5]. Other studies suggest that serotonin may be involved in the regulation of ADH release [6]. The occurrence in this case of a seizure secondary to SIADH-associated hyponatremia suggests a possible mechanism for citalopram-induced convulsions and corroborates previous reports of citalopram-induced SIADH. Conclusions The present case and others previously reported emphasize the need for greater awareness of the development of this serious and potentially fatal complication in association with citalopram therapy. Review of the present and previous cases has shown that the onset of citalopram-induced hyponatremia or SIADH ranges from 6 to 20 days after the therapy has been started [7][8][9][10][11][12][13][14][15][16]. Potential risk factors for SIADH due to citalopram include advanced age, female gender, concomitant use of medications known to cause SIADH or hyponatremia, and possibly higher citalopram doses [7,8,17]. Therefore, a high level of suspicion and close, careful monitoring of the serum sodium concentration, particularly in elderly patients during the first month of therapy with citalopram, may reduce the incidence of this serious and likely not rare adverse effect. Although the information is not conclusive, other SSRIs should also be avoided if treatment with an antidepressant has to be restarted in patients with a past medical history of hyponatremia or SIADH induced by citalopram [17,18].
2014-10-01T00:00:00.000Z
2004-01-16T00:00:00.000
{ "year": 2004, "sha1": "069fe1057afed8daf5535ac54754371aab8f28dd", "oa_license": "CCBY", "oa_url": "https://bmcnephrol.biomedcentral.com/track/pdf/10.1186/1471-2369-5-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "069fe1057afed8daf5535ac54754371aab8f28dd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270875833
pes2o/s2orc
v3-fos-license
A non‐invasive approach to measuring body dimensions of wildlife with camera traps: A felid field trial Abstract Dimensions of body size are an important measurement in animal ecology, although they can be difficult to obtain due to the effort and cost associated with the invasive nature of these measurements. We avoid these limitations by using camera trap images to derive dimensions of animal size. To obtain measurements of object dimensions using this method, the size of the object in pixels, the focal length of the camera, and the distance to that object must be known. We describe a novel approach of obtaining the distance to the object through the creation of a portable distance marker, which, when photographed, creates a "reference image" to determine the position of the animal within an image. This method allows for the retrospective analysis of existing datasets and eliminates the need for permanent in‐field distance markers. We tested the accuracy of this methodology under controlled conditions with objects of known size resembling Felis catus, our study species, validating the legitimacy of our method of size estimation. We then apply our method to measure feral cat body size using images collected in Tasmania, Australia. The precision of our methodology was evaluated by comparing size estimates across individual cats, revealing consistent and reliable results. The average height (front paw to shoulder) of the feral cats sampled was 25.25 cm (CI = 24.4, 26.1) and the average length (base of tail to nose) was 47.48 cm (CI = 46.0, 48.9), suggesting wild feral cats in our study area are no larger than their domestic counterparts. Given the success of its application within our study, we call for further trials with this method across a variety of species. Obtaining body size measurements traditionally requires the live capture or killing of an animal (Richard-Hansen et al., 1999). These methods allow for a range of biometric measurements, including weight, tissue and blood samples, and the specific dimensions of size. However, this direct contact with wild animals may alter their behavior, cause high levels of stress, and potentially result in injury or death (Zemanova, 2020). Live captures are also laborious and financially costly, meaning an intense sampling effort is required to obtain sufficient sample sizes (De Bondi et al., 2010; Mills et al., 2016). In contrast, camera trapping offers a cost-effective, non-invasive monitoring method that provides information on the presence, activity, and potential density of a target species (Kays et al., 2020), although there are limited opportunities to gain quantitative biometric information through this approach. Considering the visual nature of the data provided by camera traps, there is the potential to opportunistically measure animal size using this non-invasive methodology. Past research has explored estimating animal size directly from camera trap images. A relatively simple method is described by Tarugara et al.
(2019). They deployed carcasses atop fallen trees and secured steel pegs 20 cm apart on the underside of these logs, so that as leopards (Panthera pardus) climbed the trees to retrieve the carcasses, their body dimensions could be estimated from this permanent scale. This methodology worked because the animal could be continuously captured in the same position and distance from the camera, allowing the scale to provide an accurate reference point. However, such effective designs are not always possible, as animals may vary in their distance from the camera trap, limiting the information a permanent scale can provide. Leorna et al. (2022) demonstrate the utility of the pinhole camera approach in circumstances where a permanent scale cannot be used. They employed the pinhole method to provide accurate measurements of reindeer (Rangifer tarandus) body dimensions from camera trap images. To use the method, one must have measurements of the body dimension in pixels, the focal length of the camera, and the distance of the animal to the camera trap (Johanns et al., 2022; Leorna et al., 2022). Measuring the distance of an animal to the camera trap from within camera trap images can prove problematic. One approach has been to place distance markers at regular intervals in the camera's field of view for the duration of the monitoring period (Corlatti et al., 2020; Hofmeester et al., 2017). However, these markers are conspicuous and could increase the risk of theft or alter the behavior of the study species (Corlatti et al., 2020). These markers also tend to bin distances at quite wide intervals (i.e., 1- to 2-meter intervals) (Corlatti et al., 2020; Leorna et al., 2022). A potential alternative is to use a laser rangefinder to derive distance (Leorna et al., 2022), although this option is expensive (~$500 USD per unit), therefore limiting the number of potential stations that can be deployed and inflating the consequences of theft and vandalism. In this study, we describe a cost-effective, low-effort, and inconspicuous method for estimating size utilizing the pinhole camera approach, allowing for reliable measurements whilst keeping the risk of disturbance and theft low. Our method can be implemented in pre-existing camera trap surveys and does not require permanent distance markers. Our approach can also be retroactively applied to historical data. We demonstrate the utility of this method with feral cats, an ecologically damaging, trap-shy invasive predator in Australia. In estimating their size, we also seek to uncover whether feral cats in Tasmania (our study region) are larger than domestics, addressing the common anecdotal reports of "giant cats" in the wilds of Australia (Menkhorst & Morison, 2012). | Estimating size from a camera trap image The following steps are required for our approach: (i) source or calibrate the focal length of the camera trap; (ii) create a portable reference marker; (iii) take a reference image of the marker at each camera trap location; (iv) overlay the image with animal images from the same site in photo-editing software; (v) measure desired dimensions of the animal, in pixels, as well as the distance to the animal; and (vi) convert these measurements from pixels to meters. Each of these steps is described in detail below, with illustrations.
| Information required to calculate size The method described within this paper employs the pinhole camera approach, which relates a two-dimensional image to a three-dimensional scene via the following equation: S_i / d_i = S_o / d_o, where S_i is the size of the object on the image in pixels, d_i is the distance of the camera sensor to the aperture (i.e., the focal length expressed in pixels), S_o is the physical size of the object (in meters), and d_o is the physical distance of the object from the camera in meters (Leorna et al., 2022). Three pieces of information are needed to rearrange this equation to obtain the physical size of an object/animal from a camera trap image. These are the camera trap's focal length (d_i), measurements in pixels of the animal's dimensions, and the distance of the animal to the camera trap (d_o). Focal length can occasionally be sourced from a camera trap manufacturer, but can more reliably be obtained by following the methods of Megalingam et al. (2016), which describe a calibration procedure to derive focal length in pixels. The focal length is a static specification that will not change between calculations, provided that the researcher is using the same camera model throughout their monitoring. In contrast, the pixel measurements of an animal's dimensions and the distance of the animal to the camera trap will be different for each photograph and camera trapping site. As such, a distance marker is required to provide a reference point for the distance of an animal to the camera trap. | Distance marker and in-field protocol Implementing this method requires researchers to create a distance marker that can be readily taken into the field. This distance marker needs to maintain a straight line from the camera trap and provide visible indicators of distance at regular intervals so that it can be readily discerned from the camera trap images. The design of the distance marker is flexible and largely dependent on the resources and requirements of the researcher (and camera model). For our study, we created a portable distance marker using a tape measure marked with different colors every 10 cm, to a total length of 230 cm (Figure 1). This length was determined as the longest distance at which the colors on the tape were still reliably distinguishable within a camera trap image taken by a Cuddeback X-Change Colour Model 1279 with 20-megapixel resolution (the standard device we used throughout this work). This maximum distance may vary for other cameras, depending on their specifications. A reference image of the distance marker must be taken at each camera trap site (e.g., Figure 1, panel a). This image will later be overlain with animal images taken at the same site, aligned using stones, trees, the horizon, or other landmarks visible in both images. The same model of camera trap must also be used. After this reference image is taken, the distance marker can be removed from the site.
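A minimal sketch of the pinhole size calculation described above, expressed in Python; the function name and the example pixel, distance, and focal-length values are illustrative assumptions, not values from the study.

```python
# Rearranging the pinhole relation S_i / d_i = S_o / d_o for the physical
# size S_o of the photographed object.

def physical_size(size_px: float, distance_m: float, focal_length_px: float) -> float:
    """Return the physical size (in meters) of an object that spans size_px
    pixels in an image, at distance_m meters from a camera whose focal
    length is focal_length_px pixels."""
    return size_px * distance_m / focal_length_px

# Example: a cat whose shoulder height spans 310 px, standing 1.5 m from the
# camera (read off the overlaid distance marker), with a calibrated focal
# length of 1850 px (all assumed values).
height_m = physical_size(size_px=310, distance_m=1.5, focal_length_px=1850)
print(f"estimated shoulder height: {height_m * 100:.1f} cm")  # ~25.1 cm
```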
| Pixel measurements and conversion to centimeters Images of the study species should feature the animal in clear view of the camera trap and within the range of the distance marker. Not all images need to be measured, as the sample size required will depend on the researcher's question and data availability. Representative images of the study species must be overlain with a reference image of the distance marker at the same site. This can be done using the transparency function in Photoshop, or with any equivalent image editing software that allows for image overlay and pixel measurements. To ensure accurate overlap, the reference image and the wildlife image should be aligned using objects in the background, such as trees, stones, or the edges of pathways (Figure 1). These objects can be traced to make this process easier (e.g., red drawn lines in Figure 1c). A wildlife image should only be used if the animal's whole flank is perpendicular to the orientation of the camera trap. Using the "ruler" tool in the photo editing software, a straight, horizontal line is then drawn between the animal's front foot and the distance marker (Figure 1c). The point where this line meets the distance marker gives the distance of that animal to the camera trap. The ruler tool must be set to measure "pixels," and then the height and length of the animal, or any other parameters of interest (e.g., head length, tail length, and flank width), can be measured. These measurements must be taken consistently (e.g., for height, starting from the front foot and measuring to the shoulder every time for each individual). The pixel measurements are converted to physical dimension measurements using the following equation, as per Leorna et al. (2022): S_o = (S_i × d_o) / d_i. | Test of accuracy in controlled conditions: Camera calibration We calibrated camera focal length following the methods of Megalingam et al. (2016), photographing a piece of white paper mounted on a poster board at 40-cm intervals up to a distance of 200 cm. We took three images of the paper at each interval, ensuring the paper was at the center of the image and approximately perpendicular to the face of the camera trap. We measured the height and length of the paper in pixels using the ruler tool in Photoshop. We then estimated the focal length using the following equation: d_i = (S_i × d_o) / S_o. From these estimates, we calculated the average focal length with 95% confidence intervals and employed the derived average focal length in all further equations. | Tests of accuracy The accuracy of the pinhole camera method for estimating animal size has been effectively demonstrated by Leorna et al. (2022). However, as we are using a different model of camera trap and a new method to derive the distance of the animal to the camera trap, we validated the accuracy of our method in controlled conditions before undertaking our field measurements. To do this, we created silhouettes of Felis catus of four different sizes (Table 1). We collected one image of each silhouette at intervals of 20 cm between 110 and 210 cm from the camera trap. We also collected images of the silhouettes at unknown distances from the camera trap to determine how much additional error is incurred through the use of the portable reference marker. We measured height (front paw to shoulder) and length (base of tail to nose) for each silhouette and calculated percent relative error (RE) for each measurement (i.e., RE = ([estimated − actual]/actual) × 100). We calculated the average RE with 95% confidence intervals for images with known distances and for images with unknown distances.
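The calibration and relative-error calculations above can be sketched in a few lines of Python; the paper dimensions and pixel measurements below are hypothetical placeholders, not the study's calibration data.

```python
# Focal-length calibration via d_i = (S_i * d_o) / S_o, followed by the
# percent relative error used in the accuracy trial. All values assumed.
import statistics

PAPER_HEIGHT_M = 0.297  # assumed A4 sheet height in meters

# (distance to paper in m, measured paper height in px) from hypothetical
# calibration images taken at 40-cm intervals.
calibration = [(0.4, 1380.0), (0.8, 690.0), (1.2, 459.0), (1.6, 346.0), (2.0, 276.0)]

# One focal-length estimate per image, then the mean across images.
focal_estimates = [s_px * d_m / PAPER_HEIGHT_M for d_m, s_px in calibration]
focal_px = statistics.mean(focal_estimates)
print(f"mean focal length: {focal_px:.0f} px")

# Percent relative error of a size estimate against a silhouette's true size.
def relative_error(estimated: float, actual: float) -> float:
    return (estimated - actual) / actual * 100.0

print(f"RE: {relative_error(estimated=26.1, actual=25.0):+.1f}%")  # +4.4%
```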
| Application of methodology using feral cats as a case study The physiology of F. catus, or the domestic cat, is well studied in the context of veterinary science (Courchamp et al., 2000), but understudied within the feral populations of Australia, with feral cats defined here as "a cat that lives in the wild and can survive without human reliance or contact" (Garrard et al., 2020). There are many anecdotal reports of large cats published within social and popular media (Menkhorst & Morison, 2012), but limited published empirical evidence on their body size in an Australian context. Feral cat size may also have some relevance to the threat posed by feral cats to Australian wildlife, as it has been suggested that as feral cats get bigger, they consume a greater quantity and more diverse prey items (Yip et al., 2015). Camera traps have been revolutionary for monitoring feral cats (Bengsen et al., 2012) and could be used to provide information on cat size. In this case study we estimate feral cat size with camera trap images using data collected in a temperate rainforest/wet-eucalypt forest, an environment where the shooting and trapping of live cats is infeasible. The frequent capture and re-capture of feral cats within a pre-existing camera trap survey in the south-east of Tasmania provided a large dataset for us to test and demonstrate the utility of our size methodology, while also giving us the opportunity to evaluate the size of individuals within this wild population. | Existing field sites and camera deployment We reviewed images from an existing dataset of 54 trail cameras, set with a minimum lapse time between successive triggers of 30 s during the day and 1 min at night. Considering that all camera traps were deployed more than 5 km from the nearest town (population 125 people), all cats observed were assumed to be feral. | Processing camera trap images Reference images were taken at all 54 camera sites during the final service in 2021 using the methodology as described above (see section: Distance marker and in-field protocol). Images for size estimation were included only if the cat was within 2.3 m of the camera and had its flank parallel to the camera trap. Of the 54 sites, 32 (60%) met these criteria, having images of cats close enough to the camera trap to reliably obtain distance to the camera, and where cats were at an appropriate angle to be measured accurately. From the chosen sites, we had access to 327 images of cats for measurement. Of these, 157 featured black cats, and 170 photos could be identified at the individual level.
Although our method does not require individual identification for size estimation, employing it enables us to scrutinize the standard errors attributed to each individual and thereby assess the precision of our methodology. We identified individual cats by their unique coat markings where possible, excluding black cats from our analysis. To mitigate potential bias from site-specific camera characteristics (such as road width and camera angle), a maximum of 10 images per individual cat was processed for each camera. If only a single image was captured for an individual across all sites, then this individual was excluded from our analysis and method demonstration. Using this protocol, 32 unique individual cats were identified and measured. Cat height (front paw to shoulder) and length (base of tail to nose) were measured in pixels. To provide an indication of precision, measurements for each individual cat were averaged and confidence intervals were calculated using the bootstrap method in R (Canty & Ripley, 2017). | Field trial results: Feral cat size estimates Average cat height for the population was 25.25 cm (CI = 24.4, 26.1) and the average length was 47.48 cm (CI = 46.0, 48.9) across all measured cat images. The average standard error for each individual was 1.58 cm for height (CI = 1.2, 2.0) and 0.82 cm for length (CI = 0.66, 0.97). The tallest individual's average height was 29.3 cm (CI = 28.0, 30.5), and the longest individual's average length was 54.6 cm (CI = 49.0, 60.1). The shortest individual had an average height of 18.6 cm (CI = 15.4, 21.8), and the individual with the shortest length measured 37.7 cm (CI = 35.3, 40.0) (Figure 3). There was a strong relationship between height and length across all measured images (R² = .87) (Figure 4). | DISCUSSION Our initial tests of accuracy under controlled conditions showed that our novel method to derive animal distance to the camera trap resulted in a consistent overestimation of animal size of around 2%-8%. While greater than our relative error when distance was known, this margin of error compares favorably with past studies utilizing the pinhole approach (Leorna et al., 2022). As such, we were able to confidently derive consistent estimates of height and length for 32 unique cats as calibrated against repeated images. Notably, our measurements were close to the expected range of body size for domestic F. catus globally: 46 cm for length and 23-25 cm for height (Sunquist, 2002). The key advantages of our methodology for measuring body size in camera trap images are two-fold: (i) the distance marker does not need to remain at the field site, thus lowering costs and mitigating theft risk, and (ii) as a consequence, this approach can be used to measure animal body size from historical datasets, by revisiting the site, replicating the position of the previously placed camera, and taking a reference image with the distance marker in place.
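The per-individual confidence intervals reported above were computed with the bootstrap method in R (Canty & Ripley, 2017); a minimal equivalent sketch in Python, using hypothetical per-image height measurements for a single cat, would be:

```python
# Percentile bootstrap CI for the mean height of one individual cat.
# The per-image measurements are illustrative assumptions, not study data.
import numpy as np

rng = np.random.default_rng(1)
heights_cm = np.array([24.8, 25.9, 26.3, 24.1, 25.5, 26.8, 25.0])  # assumed

# Resample with replacement many times and take percentiles of the means.
boot_means = [rng.choice(heights_cm, size=heights_cm.size, replace=True).mean()
              for _ in range(10_000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean height {heights_cm.mean():.2f} cm, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```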
We were able to integrate our method of estimating body size into a pre-existing camera trap survey as our camera traps were still operational in the field. For researchers wishing to apply this method to a historical dataset, sufficient information must be available regarding the camera trap's location and positioning. This includes the height of the camera trap, its angle, and the tree/location of the post on which the camera was deployed. It is recommended that the height and angle of the camera trap are reported in all camera trap studies, along with habitat information (Meek et al., 2014). Researchers who have photographed their camera trap deployments when recording this information will likely find it easier to apply our method to their historical datasets.

The margin of error varied among individuals: some exhibited consistent size estimates across images, while others showed greater variability. This variability could be attributed to differences in the position of the animal in each photograph (Tarugara et al., 2019), or some foreshortening introduced by slightly oblique angles (e.g., Figure 5, left panels). This corroborates the findings of Leorna et al. (2022), who noted a decline in measurement accuracy for reindeer as the distance to the camera trap increased or when photographed at an angle. Additional variation in our measurements was also introduced by the non-exact nature of our distance marker, which measured distance in intervals of 10 cm. This is because the resolution of the camera trap images was too low to examine more precise intervals (e.g., 5 cm and 1 cm). Despite these potential sources of error, the confidence intervals were tight for most individuals, indicating that our method provided consistent measurements despite uncertainties in angles and distance among camera sites.

[Figure 3: Comparisons of average cat length (left) and height (right) for each individual cat (labelled 1-32) with standard error bars. The x-axis is in descending order for length, and the cat IDs on the x-axis for average height match that order to allow for comparisons between individuals. The dashed red line in each graph displays the average length and height across all individuals, and the faint dashed red lines above and below this show the average +/- one standard error.]

Obtaining pixel measurements for each image was done by hand within our study, although the utilization of deep-learning image processing software by future researchers could provide an automated alternative. For example, technology in aquaculture has been developed to automatically measure fish snout-to-fork length in pixels (Tseng et al., 2020), and while an initial set of training data would need to be provided to facilitate a similar approach here, this approach could reduce some of the labor involved with our current method.

In our case study, camera traps placed off-trail captured fewer cats than road cameras, and these cats were often approaching the camera or walking away from it, making the images unmeasurable.
Lures have been used in past studies to overcome this problem, increasing capture rates and ensuring the animal is perpendicular to the camera at the time of capture so that dimensions can be measured (Tarugara et al., 2019). Our study shows that roads and trails can be used in the same capacity. Predators have high rates of detection on roads and trails (Wysong et al., 2020), making these locations a sort of "passive lure" for feral cats. In addition, we note that the road locations measured in our study consistently yielded images of cats positioned with their flanks parallel to the camera trap, as they were following a directed linear path of movement (Figure 5), and these animals were also generally close to the camera trap (i.e., within 230 cm). Our portable distance marker is advantageous in this context, as permanent distance markers cannot be placed on active roads and may increase the risk of theft on walking tracks by making the camera trap's location more obvious to human observers. This was a particular concern in Tasmania, where 20% of camera traps from the broader monitoring network were stolen over a four-year study period (L. M. Cardona, B. W. Brook, Z. Aandahl, and J. C. Buettel, unpublished). As such, our distance marker methodology can be applied to predator surveys already utilizing these locations of high predator traffic where distance markers have previously been unsuitable.

Our field trial provided the first estimates of the size of feral cats living in the dense rainforests and tall wet forests of south-east Tasmania, in areas remote from human settlements. The average estimates of height and length for cats in our study were 25.3 cm (CI = 24.4, 26.1) and 47.5 cm (CI = 46.0, 48.9), which is similar to that of typical domestic cats at 46 cm for length and 23-25 cm for height (Sunquist, 2002). This is particularly true when one considers that our accuracy trial indicated that our methodology consistently over-estimates dimensions. As such, our findings do not contain any evidence that supports the phenomenon of Australian "panthers" and "big cats," which is commonly reported in the media, but not currently supported by any scientific literature (Menkhorst & Morison, 2012). However, we have only sampled a small pocket of the Tasmanian wilderness herein. The pinhole camera approach we employed provides an opportunity to substantiate claims of giant cats in Australia where shooting and trapping fail or are unavailable, particularly considering this method can be applied to historic data.

| CONCLUSION

The pinhole camera approach is a cost-effective method to estimate animal body size if using pre-existing camera trap surveys, allowing researchers to exploit past data. There are several caveats associated with our method. A researcher must return to the site of a camera trap survey and replicate deployment to obtain a reference image if using past data, and a wildlife image should only be used if the animal photographed is parallel to the camera trap, such that the whole flank can be seen. This can limit the amount of data available to be used. Additionally, pixel measurements need to be taken consistently, although this step could be aided by the integration of a measurement AI. Nonetheless, this method provides a non-invasive alternative to live capture or killing that consistently provides precise animal dimensions. We encourage other researchers to test the pinhole approach with other models of camera trap, species, or captive populations to further validate this method.
In cases where these landmarks are not available, researchers may consider creating an artificial landmark, such as a strategically placed stone or log, which they can use to align their reference image and wildlife images at the desktop. The reference image should contain the distance marker laid flat and out in front of the camera trap, directly in line with the lens. One should ensure that the camera trap used to take this reference image is in the same location and position as the camera trap used to collect the wildlife images, following Megalingam et al. (2016). We collected 15 images of a 25 cm × 25 cm calibration target, and object size was derived from the pinhole relationship:

Object size (cm) = Object size (pixels) × Distance to object (cm) / Focal length (pixels)

[Figure 1: An example of the field photo taken of the measuring tape with 10 cm color blocks out to 230 cm (panel a), a suitable image of a cat close enough to the camera for analysis (panel b), and how the images are superimposed, aligned, and sanity checked to ensure the tape measure is in an accurate position to read the distance of the cat to the camera (panel c). The red lines in panel (c) show reference points in the background that were traced to ensure accurate overlay, and the white line indicates the distance of the cat from the camera trap on the distance marker.]

[Table 1: Summary of results from the accuracy trial of Felis catus silhouettes in controlled conditions. Confidence intervals (CI) are reported with lower limits (LL) and upper limits (UL).]

Cameras (model: Cuddeback X-Change 1279) were deployed in the Picton region of Tasmania as part of a broader camera trap network (B. W. Brook and J. C. Buettel, unpublished) (Figure 2). Cameras were secured to trees at animal shoulder height, 30-50 cm, as per Apps and McNutt (2018). Of the 54 cameras, 48 were set on unsealed forestry roads and six off the road in nearby natural arenas or on game trails, and no lures were used. All cameras used a white flash with a passive infra-red sensor (being triggered in response to movement and heat). The mean derived focal length for the Cuddeback X-Change 1279 model with an image resolution of 20 MP was 6747.9 px (95% CI = 6711.5, 6784.2).

[Figure 2: Map of Tasmania showing the camera trap sites used in this study. Status indicates whether a site provided measurable data, with red points (1) as sites that provided data suitable for size measurements of cats, and white (0) with no useable photos.]

[Figure 4: Relationship between estimated length and height (centimeters) within each image. Points represent each image measured, with the regression line provided in blue and standard error in gray.]

[Figure 5: On the left, two examples of cats on an angle in camera trap images, making them inappropriate to measure for dimensions like length, which require a full view of the animal's flank. On the right, two examples of cats that are parallel and close to the camera trap, making them good specimens for measurement.]
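To make the pinhole relationship above concrete, here is a minimal Python sketch that applies the formula using the mean derived focal length reported for this camera model; the pixel span and distance are hypothetical example inputs, and the function name is ours.

```python
# Minimal sketch of the pinhole size calculation described above.
# The focal length (6747.9 px) is the mean derived value reported for
# the Cuddeback X-Change 1279; the pixel height and distance below are
# hypothetical example inputs.
FOCAL_LENGTH_PX = 6747.9

def object_size_cm(size_px: float, distance_cm: float,
                   focal_length_px: float = FOCAL_LENGTH_PX) -> float:
    """Object size (cm) = size (px) * distance (cm) / focal length (px)."""
    return size_px * distance_cm / focal_length_px

# A cat whose shoulder height spans 950 px at 180 cm from the camera:
print(f"{object_size_cm(950, 180):.1f} cm")  # ~25.3 cm
```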
Prognostic value of c-Met overexpression in pancreatic adenocarcinoma: a meta-analysis

The overexpression of c-Met protein has been detected in pancreatic adenocarcinoma (PAC). However, its prognostic impact remains unclear. We performed this meta-analysis to evaluate the prognostic value of c-Met overexpression in PAC. A systematic computerized search of electronic databases such as PubMed, Embase, and Google Scholar was carried out. From 5 studies, 423 patients who underwent surgical resection for PAC were included in the meta-analysis. Compared with patients with PAC showing low c-Met expression, patients with c-Met-high tumors had significantly worse disease-free survival (hazard ratio = 1.94 [95% confidence interval, 1.46-2.56], P = 0.00001) and overall survival (hazard ratio = 1.86 [95% confidence interval, 1.19-2.91], P = 0.006). In conclusion, this meta-analysis demonstrates that c-Met overexpression is a significant prognostic marker for poor survival in patients who underwent surgical resection for PAC.

INTRODUCTION

Despite the recent advances in diagnostic and therapeutic modalities, pancreatic adenocarcinoma (PAC) is still among the most lethal malignancies, with 5-year survival rates of less than 10% [1,2]. Surgical resection with or without adjuvant therapy is the potentially curative therapy for patients with localized disease, but patients usually present with unresectable advanced disease at the time of diagnosis. Moreover, most patients who undergo complete resection develop recurrent disease during the course of their illness [3,4]. For advanced or metastatic PAC, systemic chemotherapy can prolong survival compared with best supportive care, but unfortunately median overall survival (OS) is less than ten months [5,6]. Thus, the development of more effective treatment is mandated. With a better understanding of the molecular mechanisms of carcinogenesis, novel molecular agents targeting the epidermal growth factor receptor, vascular endothelial growth factor receptor, or c-Met have been proposed for the treatment of PAC [7,8]. However, the identification of biomarkers associated with response is essential to improve the therapeutic outcomes of these molecular agents. Therefore, it is still necessary to accumulate our knowledge at the genomic and molecular levels. MET is a proto-oncogene that encodes the tyrosine kinase receptor for hepatocyte growth factor (HGF) [9]. HGF, also known as scatter factor, binds to the c-Met protein (the product of the MET gene) and initiates autophosphorylation of an intracellular kinase on the beta-subunit of the receptor. This interaction allows the binding and activation of multiple signaling molecules such as Src, PI3K, Gab1, SOS, or MEK1/2 [9,10]. This multi-faceted activation results in cellular alterations that contribute to carcinogenesis. The HGF/c-Met signaling pathway ultimately leads to tumor differentiation and proliferation, cellular invasion, angiogenesis, and metastasis [11,12]. The enhanced expression of c-Met protein has been observed in various tumors such as breast cancer [13], lung cancer [14], gastric cancer [15], colorectal cancer [16], cervix cancer [17], and hepatocellular carcinoma [18]. Several meta-analyses demonstrated that c-Met was a strong prognostic indicator of poor survival [13-17]. The overexpression of c-Met protein has also been detected in PAC [19-25]. However, most studies had a small number of patients, and its prognostic role remains unclear.
We performed this meta-analysis to evaluate the prognostic value of c-Met overexpression in PAC.

RESULTS

Figure 1 shows the flowchart of our study. A total of 158 potentially relevant studies were initially found, but 151 of them were excluded after screening the titles and abstracts. Of the remaining 7 potentially eligible studies, 2 were further excluded by the inclusion criteria because the required hazard ratio (HR) with 95% confidence interval (CI) stratified by c-Met expression was not extractable from the presented data [19,20]. Finally, 5 studies were included in the meta-analysis [21-25]. Table 1 summarizes the main characteristics and clinical outcomes of the five included studies. All the studies were performed retrospectively in patients with PAC who underwent radical resection. From the 5 studies, 423 patients were included in the meta-analysis. In one study with 92 patients [25], 56 (60.8%) received preoperative chemoradiotherapy. Except for two studies [21,22], three provided data on adjuvant treatment. Of 311 patients from these 3 studies [23-25], 214 (68.8%) received adjuvant chemotherapy with or without radiation.

c-Met expression assignation

c-Met expression was assessed by immunohistochemistry (IHC). There was marked heterogeneity between the thresholds used to dichotomize c-Met status (c-Met-low or c-Met-high). The IHC criteria are briefly summarized in Table 1. The rate of high c-Met expression ranged from 27.5% [24] to 60.6% [22].

Publication bias

Visual inspection of the funnel plots for DFS and OS showed symmetry, indicating there were no publication biases (Figure 3A and 3B).

DISCUSSION

In this meta-analysis, we evaluated the prognostic impact of c-Met overexpression in patients with resected PAC. The results show that high c-Met expression is associated with significantly poorer DFS and OS. To our knowledge, this is the first meta-analysis suggesting that c-Met overexpression represents an adverse prognostic marker in patients with PAC. PAC shows unfavorable prognosis with the most aggressive tumor biology. The traditional post-operative prognostic factors such as tumor size, lymph node involvement, or status of the resection margin are insufficient to identify patients at high risk of recurrence or metastasis. Therefore, the identification of reliable predictive markers and potential therapeutic targets is essential to guide individual treatment strategies and improve prognosis in patients with PAC. c-Met has been proven to play a critical role in the pathogenesis and progression of many tumor types [9-12]. The enhanced expression of c-Met has also been observed in PAC [19-26]. Because most studies had a small number of patients and adopted various IHC scoring methods, however, they could not reach a consensus regarding the prognostic value of c-Met. Multiple studies demonstrated that high expression of c-Met was associated with poor survival in various cancers [13-18]. Thus, interference with c-Met activation may provide an effective therapeutic approach for cancers with c-Met overexpression [27]. Several c-Met inhibitors are currently under active investigation in various cancer types [10,28-31]. The efficacy of c-Met-targeting agents has been associated with high c-Met expression in non-small-cell lung cancer and hepatocellular carcinoma [28,29].
Therefore, patients with PAC overexpressing c-Met protein might be good candidates for c-Met inhibitors. Indeed, it has been demonstrated that targeting c-Met impairs tumor growth and improves the activity of gemcitabine in PAC [29-32]. However, the major challenge for the clinical development of c-Met inhibitors is that there are no standardized methods and criteria for c-Met overexpression. A variety of methods such as IHC, Western blot, fluorescence in situ hybridization, or real-time quantitative PCR are currently used for assessing c-Met status [13]. In this meta-analysis, the included studies adopted various IHC methods with different criteria for c-Met overexpression. The discrepancies in the prognostic value of c-Met overexpression in the previous reports on PAC might be attributable to the different c-Met scoring methods. Therefore, the definition of a reliable guideline for c-Met status is an essential prerequisite for assessing the prognostic role of c-Met expression and developing c-Met inhibitors in solid tumors. Our study has inherent limitations that should be noted. First, the meta-analysis included a small number of studies with a limited sample size. Second, the included studies were all retrospectively performed. Third, of the five studies, four were conducted in Asia. Finally, as already mentioned, the IHC criteria used to stratify c-Met status varied among studies. In conclusion, our meta-analysis demonstrates that c-Met overexpression is a significant prognostic marker for poor survival in patients who underwent surgical resection for PAC. However, larger studies using standardized methods are still needed to verify the prognostic role of c-Met expression in PAC.

Publication searching strategy

This study was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [33]. We performed a systematic computerized search of the electronic databases PubMed, Embase, and Google Scholar (up to April 2017). The search was carried out using the following keywords: 'c-Met' or 'Met' and 'pancreatic cancer' or 'pancreas neoplasm' or 'pancreatic adenocarcinoma'. The related-articles function in PubMed was also used to identify all relevant articles.

Inclusion criteria

Eligible studies had to meet the following inclusion criteria: (i) patients had a diagnosis of PAC; (ii) DFS and/or OS were analyzed by c-Met expression status; (iii) HRs with 95% CIs for DFS or OS were reported or could be calculated from the data provided; (iv) papers were written in English.

Data extraction

Data extraction was carried out independently by two investigators (BJK and HSK). If these two authors did not agree, other investigators (JHK and HJJ) were consulted to resolve the dispute. The following data were extracted from all eligible studies: first author's name, year of publication, country, number of patients, tumor stage, treatment, methodology of IHC, the criteria used to dichotomize c-Met expression as 'high' or 'low', and HR with 95% CIs for DFS or OS.

Statistical analysis

Statistical values used in this meta-analysis were obtained directly from the original articles. When papers reported no HR and 95% CI, Engauge Digitizer version 9.1 was used to estimate the needed data from Kaplan-Meier curves. The effect sizes for DFS and OS were combined through the HR and its 95% CI.
Heterogeneity among studies was estimated using the chi-square-based Cochran's Q statistic and the I² inconsistency test: P < 0.1 and I² > 50% indicated the presence of significant heterogeneity. The fixed-effects model (Mantel-Haenszel method) was selected to calculate the pooled HR when substantial heterogeneity was not observed. When significant heterogeneity was detected across studies, we adopted the random-effects model (DerSimonian-Laird method). RevMan version 5.2 was used to combine the data. The plots show a summary estimate of the results from all the studies combined. The size of the squares represents the estimate from each study and reflects the statistical 'weight' of the study (its relative contribution to the summary estimate). Results are presented as forest plots, with diamonds representing the estimate of the pooled effect and the width of the diamond representing its precision. The line of no effect is the number one for binary outcomes, which depicts statistical significance if not crossed by the diamond [34]. All reported P-values were two-sided, and P < 0.05 was considered statistically significant. Publication bias was assessed graphically by the funnel plot method [35].
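As an illustration of the pooling procedure described in this section (the original analysis was run in RevMan 5.2), here is a minimal Python sketch of generic inverse-variance pooling of hazard ratios on the log scale, with Cochran's Q, I², and a DerSimonian-Laird random-effects estimate; the input HRs and CIs are hypothetical, not the extracted study data.

```python
import numpy as np

def pool_hazard_ratios(hr, ci_lo, ci_hi):
    """Inverse-variance pooling of hazard ratios on the log scale, with
    Cochran's Q, I-squared, and a DerSimonian-Laird random-effects
    estimate. Inputs are per-study HRs and 95% CI limits."""
    y = np.log(hr)                                     # per-study log-HR
    se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)  # SE from CI width
    w = 1 / se**2                                      # fixed-effect weights

    y_fe = np.sum(w * y) / np.sum(w)                   # fixed-effect pooled log-HR
    q = np.sum(w * (y - y_fe)**2)                      # Cochran's Q
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # DerSimonian-Laird between-study variance, then random-effects pooling.
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (se**2 + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    return np.exp(y_fe), np.exp(y_re), q, i2

# Hypothetical per-study HRs with 95% CIs (not the actual extracted data):
hr_fe, hr_re, q, i2 = pool_hazard_ratios(
    hr=[1.8, 2.1, 1.6], ci_lo=[1.1, 1.3, 0.9], ci_hi=[2.9, 3.4, 2.8])
print(f"fixed HR = {hr_fe:.2f}, random HR = {hr_re:.2f}, I2 = {i2:.0f}%")
```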
A Meta Classification Model for Stegoanalysis using Generic NN

The core idea behind deep learning is that comprehensive feature representations can be efficiently learned with deep architectures, which are composed of stacked layers of trainable non-linear operations. However, because of the diversity of image content, it is hard to learn effective feature representations directly from images for steganalysis. Steganalysis can generally be formulated as a binary classification problem. This approach, called universal/blind steganalysis, has become the mainstream among current steganalytic algorithms. In the training phase, effective features that are sensitive to message embedding are extracted to highlight the changes possibly introduced by a steganographer. Then, a binary classifier is trained on pairs of cover images and their corresponding stegos, aiming to find a boundary that recognizes steganography. In the testing phase, the trained classifier is used to predict the labels of new input images. Past research showed that it is rather critical that cover features and stego features be paired, i.e., the steganalytic features of cover images and of their stego images should both be preserved in the training set. Otherwise, breaking cover-stego pairs into different sets might introduce bias and lead to suboptimal performance. Proposed approaches fix the kernel of the first layer as a high-pass filter (HPF); this is the so-called pre-processing layer. We propose another technique for feature reduction in which feature selection and extraction and classifier training are performed simultaneously using a genetic algorithm. The genetic algorithm optimizes a feature weight vector used to scale the individual features in the original pattern vectors. A mask vector is also used for simultaneous selection of a feature subset. We use this technique in combination with ResNet, and compare the results with classical feature selection and extraction methods.

INTRODUCTION

For a long time, steganography and steganalysis have developed in a struggle with each other. Steganography seeks to hide secret data in a specific cover as much as possible while changing the cover as little as possible, so that the stego is close to the cover in terms of visual quality and statistical characteristics [1-3]. Meanwhile, steganalysis uses signal processing and machine learning theory to analyze the statistical differences between stego and cover. It improves detection accuracy by increasing the number of features and enhancing classifier performance [4]. Currently, existing steganalysis methods include specific steganalysis algorithms and universal steganalysis algorithms. Early steganalysis methods aimed at the detection of specific steganography algorithms [5], while general-purpose steganalysis algorithms usually use statistical features and machine learning [6]. Commonly used statistical features include the binary similarity measure feature [7], DCT [8,9] and wavelet coefficient features [10], co-occurrence matrix features [11], and so on. In recent years, higher-order statistical features based on the associations between neighbouring pixels have become the mainstream in steganalysis.
These features improve detection performance by capturing the complex statistical characteristics associated with image steganography, such as SPAM [12], Rich Models [13], and their several variants [14,15]. However, these advanced strategies are based on rich models that incorporate many thousands of features. Dealing with such high-dimensional features inevitably leads to increased training time, overfitting, and other issues. Besides, the success of a feature-based steganalyzer in detecting the subtle changes of a stego largely depends on the feature construction, which requires a great deal of human involvement and expertise. Benefiting from the development of deep learning, convolutional neural networks (CNNs) perform well in various steganalysis detectors [16]. A CNN can automatically extract complex statistical dependencies from images and improve detection accuracy. Considering the GPU memory limitation, existing steganalyzers are typically trained on relatively small images (usually 256×256). But real-world images are of arbitrary size. This leads to the problem of how an arbitrarily sized image can be steganalyzed by a CNN-based detector with a fixed-size input. In traditional computer vision tasks, the size of the input image is usually adjusted directly to the required size. However, this would not be good practice for steganalysis, as the relations between pixels are very weak and independent; resizing before classification would compromise detector accuracy.

In this paper, we propose a new generic network structure named "meta classification" to improve the accuracy of spatial-domain steganalysis. The proposed generic NN performs well in both detection accuracy and compatibility, and shows some distinctive characteristics compared with other NNs, which are summarized as follows.

(1) In the pre-processing layer, we modify the size of the convolution kernel and use the 30 basic filters of SRM [13] to initialize the kernels, to reduce the number of parameters and optimize local features. Then, for extraction of the best features, we apply the GA for best feature selection (i.e., meta features); the convolution kernel is subsequently optimized by training to achieve better accuracy and to accelerate the convergence of the network.

(2) We use two separable convolution blocks to replace the traditional convolution layer. Separable convolution can be used to extract the spatial correlation and channel correlation of residuals, to increase the signal-to-noise ratio, and to noticeably improve the accuracy.

(3) We use spatial pyramid pooling [20] to deal with arbitrarily sized images in the proposed network. Spatial pyramid pooling can map feature maps to fixed lengths and extract features through multi-level pooling.

We design experiments to compare the proposed CNN network with [17], Ye-Net [19], and Yedroudj-Net [21]. The proposed CNN shows excellent detection accuracy, which even exceeds the most advanced manual feature sets, such as SRM [13].

II. RELATED WORK

(1) A CNN is composed of two parts: the convolution layers and the fully connected layers (ignoring the pooling layers, etc.). The function of a convolution layer is to convolve its input and output the corresponding feature map. The input of a convolution layer does not need to be a fixed-size image, and its output feature maps can be of any size. The fully connected layer, however, requires a fixed-size input; hence, the fully connected layer imposes the fixed-size constraint on the network. The two existing solutions are as follows. The first is resizing the input image directly to the desired size. However, the relationship between image pixels is fragile and independent in the steganalysis task; detecting the presence of steganographic embedding changes really means detecting a very weak noise signal added to the cover image, so resizing the image directly before inputting it to the CNN will greatly affect the detection performance of the network. The second is using a fully convolutional network (FCN), because convolutional layers do not require a fixed image size. In this paper, we propose a third solution: mapping the feature map to a fixed size before sending it to the fully connected layer, as in SPP-Net [20]. The proposed network can map feature maps to a fixed length by using an SPP module, so as to steganalyze arbitrarily sized images (a sketch of such a module follows).
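As a minimal sketch of the SPP idea just described (not the authors' implementation), the following PyTorch module maps arbitrary-sized feature maps to a fixed-length vector; the pyramid levels (4×4, 2×2, 1×1) are assumptions borrowed from SPP-Net.

```python
import torch
import torch.nn.functional as F

class SpatialPyramidPooling(torch.nn.Module):
    """Pools an arbitrary-sized feature map into a fixed-length vector
    by concatenating adaptive max-pooling results at several grid sizes."""
    def __init__(self, levels=(4, 2, 1)):
        super().__init__()
        self.levels = levels

    def forward(self, x):                       # x: (batch, channels, H, W)
        pooled = [
            F.adaptive_max_pool2d(x, output_size=n).flatten(start_dim=1)
            for n in self.levels                # each: (batch, channels*n*n)
        ]
        return torch.cat(pooled, dim=1)         # fixed length: C*(16+4+1)

# Two inputs of different spatial size map to the same output length:
spp = SpatialPyramidPooling()
print(spp(torch.randn(1, 64, 37, 53)).shape)   # torch.Size([1, 1344])
print(spp(torch.randn(1, 64, 96, 96)).shape)   # torch.Size([1, 1344])
```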
(2) The accuracy of CNN-based steganalysis relies heavily on the signal-to-noise ratio of the feature maps: a CNN favors a high signal-to-noise ratio for detecting the small differences between stego signals and cover signals. Many steganalyzers extract the residuals of images to increase the signal-to-noise ratio. However, some existing schemes directly convolve the extracted residuals without considering the cross-channel correlations of the residuals, and so do not make good use of them. In this paper, we increase the signal-to-noise ratio in three ways: optimizing the convolution kernels by reducing the kernel size together with the proposed "forward-backward gradient descent" method; using group convolution to process the spatial correlation and channel correlation of residuals separately; and greatly improving the accuracy of steganalysis by reducing the feature dimension.

III. STEGANALYSIS

The development of data communication provides users great convenience for exchanging information. A key issue for data communication on the web is to transmit data from a sender to its receiver safely, without it being eavesdropped on, illegally accessed, or tampered with. Steganography, which is the art or science of hiding a secret message in a suitable media carrier, including text, image, audio, or video [3], provides an effective solution. In contrast to steganography, steganalysis is concerned with revealing the presence of secret messages embedded in digital media. These two techniques are widely used in many important fields, such as business communications and military communications. Image steganography and image steganalysis have attracted great interest in recent years. Early studies of image steganography hid secret messages in image regions that are insensitive to the human visual system, on the rationale that salient areas of digital images should be avoided for message hiding. Later research has extended image steganography and steganalysis to a more general case, which is illustrated in the following figure. For image steganography, the sender hides the message m in the cover image X. By applying the message embedding algorithm Emb(X, m, k) with the key k on X, the stego image Y is generated and then passed to the recipient. By applying the message extraction algorithm Ext(Y, k), the recipient recovers the hidden message m.
As different components x_{k,l} have different ranges and different descriptors x_k have different dimensionality, each x_{k,l} is normalized so that every feature contributes equally when computing the similarity measure based on Euclidean distance. Let max_{k,l} and min_{k,l} be the maximum and the minimum of x_{k,l} over the database, respectively. We normalize x_{k,l} as:

x'_{k,l} = (x_{k,l} - min_{k,l}) / (max_{k,l} - min_{k,l})

Fitness calculation: we use k-nearest-neighbour classification accuracy as the fitness function. The k-nearest-neighbour classifier is based on learning by analogy; it is carried out under the assumption that similar images belong to the same category. Given a set of d instance-label pairs (X_i, L_i), i = 1, 2, ..., d, where X_i is a feature vector in R^n and L_i is the class label of X_i, each image represents a point in an n-dimensional feature space and is used as a query image to compute the 'closeness' of the other images. The k nearest neighbours of the query image are returned, and the query image is assigned to the most common category among its k nearest neighbours. The classification accuracy of k-NN can be calculated as:

accuracy = t / d

where t is the number of images correctly classified and d is the number of images in the set. 'Closeness' is defined in terms of a similarity measure; several similarity measures are based on standard distance functions such as the Euclidean or Mahalanobis distance. We use the Euclidean distance, where the distance between two points X = {x_{k,l}} and Y = {y_{k,l}} is defined as:

D(X, Y) = sqrt( sum_{k,l} (x_{k,l} - y_{k,l})^2 )

Since a different weight may be assigned to each feature descriptor, a weighted Euclidean distance is used to compute the similarity measure as follows:

D_w(X, Y) = sqrt( sum_k w_k * sum_l (x_{k,l} - y_{k,l})^2 )

where w_k is the weight of the k-th feature descriptor (a sketch of this fitness computation is given below).

As a random search algorithm inspired by natural evolutionary laws, the GA was first proposed by Holland in 1975. To tackle a problem with the GA, the first step is to build the initial population. Each member of the initial population is called an "individual" (or chromosome), corresponding to a solution to a certain problem. Typically, fitness is used to represent a chromosome's adaptability to the environment, so each chromosome is evaluated by a certain objective function. A selection operation is then carried out: it picks the individuals with higher fitness values, which are used to generate new offspring. After this, crossover is a key step that produces new individuals by randomly recombining the selected parent chromosomes at a random crossover point with a specific probability. Finally, a mutation operation is implemented with a relatively small probability, which can reduce the occurrence of local optima by randomly displacing one or more genes of the current chromosomes. The crossover and mutation operators of the GA are illustrated in the figure.
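Before turning to the crossover details, here is a minimal Python sketch of the weighted-distance k-NN fitness described above, using leave-one-out accuracy t/d; for simplicity each descriptor is a single feature, and all names and data are illustrative.

```python
import numpy as np

def weighted_distance(x, y, weights):
    """Weighted Euclidean distance D_w(X, Y) described above; one weight
    per feature (descriptors are simplified to single features here)."""
    return np.sqrt(np.sum(weights * (x - y) ** 2))

def knn_fitness(features, labels, weights, k=3):
    """Fitness = leave-one-out k-NN accuracy t/d under the weighted
    distance; `weights` is the chromosome being evaluated by the GA."""
    d = len(labels)
    correct = 0
    for i in range(d):
        idx = [j for j in range(d) if j != i]
        dists = [weighted_distance(features[i], features[j], weights)
                 for j in idx]
        nearest = [labels[idx[j]] for j in np.argsort(dists)[:k]]
        # Majority vote among the k nearest neighbours.
        if max(set(nearest), key=nearest.count) == labels[i]:
            correct += 1
    return correct / d

# Toy example: 6 samples, 4 features, binary labels (cover=0, stego=1).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
y_lbl = [0, 0, 0, 1, 1, 1]
print(knn_fitness(X, y_lbl, weights=np.ones(4)))
```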
In our GA, individuals are first randomly paired so that parent pairs are obtained. Second, we randomly generate two numbers a (0 < a < m) and b (0 < b < m - a), where m is the length of each chromosome, a is the start position of the crossover operation, and b is the crossover length. Finally, consider a parent pair C1^t = {w1_k} and C2^t = {w2_k}, where t indexes the generation. The genes in the range [a+1, a+b] are recombined to produce two new individuals with crossover rate p_c as follows: C1^{t+1} = {w1'_k} and C2^{t+1} = {w2'_k}, where

w1'_k = γ · w1_k + (1 - γ) · w2_k,  w2'_k = γ · w2_k + (1 - γ) · w1_k,

and the crossover variable γ is a predefined constant. Mutation operations are very important for keeping the diversity of the population. We place the individuals produced by crossover into the pool of parent individuals; the K worst-fit individuals are selected with a small mutation rate p_m, and we randomly select genes of each individual for the mutation operation. Assume gene w_k (w_k in [0,1]) is mutated, with offspring w'_k. The mutation operation is the following:

w'_k = w_k + Δ(t, 1 - w_k) if a random bit equals 0, and w'_k = w_k - Δ(t, w_k) otherwise,

with Δ(t, y) = y · (1 - r^((1 - t/M)^p)),

where t is the iteration number, r is a random number in the range [0,1], M is the maximum number of iterations, and the mutation parameter p is a predefined constant. This schedule adapts over the course of the genetic algorithm, giving the mutation operation larger mutation ranges in the earlier stages and smaller ones later (a sketch of these operators is given at the end of this section).

Let B be the original value of an image block. If only the second-lowest bit-plane is modified, the change between the test image block and the prepared block can be recognized as a change matrix A1 or A2. The modified image blocks would be B'1 = B + A1 and B'2 = B + A2. Here, we use one example to illustrate this procedure. For the original block B, f(B) = 99 and f(F−(B)) = 120, where F− is the non-positive flipping. For the modified block B'1, f(F−(B'1)) = 90. For another modified block B'2, f(F−(B'2)) = 150. In summary, the type (regular or singular) of the block can be changed by a suitable modification.

The genetic algorithm is a general optimization algorithm. It models an optimization or search problem as the process of chromosome evolution. When the best individual is selected after several generations, the optimal or suboptimal solution has been found. The three most important operations of a genetic algorithm are reproduction, crossover, and mutation. The fitness values influence the reproduction operation: in general, individuals with larger fitness values have higher chances to be selected to breed the next generation.
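The following is a minimal Python sketch of the arithmetic crossover and non-uniform mutation just described, assuming genes in [0, 1]; the constants γ, p, and M are illustrative values, not the paper's settings.

```python
import random

GAMMA = 0.5   # crossover constant gamma (illustrative)
P = 2.0       # mutation shape parameter p (illustrative)
M = 100       # maximum number of iterations (illustrative)

def crossover(parent1, parent2, a, b):
    """Arithmetic crossover over the b genes starting at position a."""
    c1, c2 = parent1[:], parent2[:]
    for k in range(a, a + b):
        c1[k] = GAMMA * parent1[k] + (1 - GAMMA) * parent2[k]
        c2[k] = GAMMA * parent2[k] + (1 - GAMMA) * parent1[k]
    return c1, c2

def delta(t, y):
    """Non-uniform mutation step Delta(t, y): shrinks as t approaches M."""
    r = random.random()
    return y * (1 - r ** ((1 - t / M) ** P))

def mutate(chrom, t):
    """Mutate one randomly chosen gene w_k, staying inside [0, 1]."""
    k = random.randrange(len(chrom))
    if random.random() < 0.5:
        chrom[k] = chrom[k] + delta(t, 1 - chrom[k])
    else:
        chrom[k] = chrom[k] - delta(t, chrom[k])
    return chrom

w1, w2 = crossover([0.2, 0.8, 0.5, 0.9], [0.6, 0.1, 0.7, 0.3], a=1, b=2)
print(mutate(w1, t=10))
```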
IV. RESNET

The proposed system for image steganalysis is organized as follows. The preprocessing sub-network consists of the high-pass filter (HPF) layer and the truncation layer, where the HPF layer extracts the noise component from input images and the truncation layer constrains the element range of the resulting feature map. The feature map is then passed to the genetic algorithm to reduce the features. The feature learning sub-network contains weight matrices with ReLU activations and produces the features for image steganalysis. ResNet permits the use of deeper networks thanks to the use of shortcuts.

In Xu-Net, the preprocessing block takes as input dequantized (real-valued) images, then convolves the image with 16 DCT bases (in the same spirit as the Zeng et al. system [105,106]), and then applies an absolute value, a truncation, and a set of convolution, BN, and ReLU layers until obtaining feature maps of 384 dimensions, which are given to a fully connected block. We may note that max pooling or average pooling is replaced by convolutions. This network is therefore quite simple and was, in 2017, the state of the art. In a way, this kind of result shows that networks proposed through machine learning are very competitive, and there is not so much domain knowledge to incorporate into the taxonomy of a network in order to obtain an efficient system.

We now present the proposed HNN model for image steganalysis. First, we display the overall architecture of the HNN in detail; then, we describe the parameter learning of the HNN model.

Network architecture. The figure illustrates the architecture of the HNN in this paper. The network contains three sub-networks: the high-pass filtering (HPF) sub-network, the deep residual learning sub-network, and the classification sub-network. These sub-networks have distinct roles in transforming the data within the overall model, which are described as follows. The HPF sub-network extracts the noise components from input cover/stego images. Past investigations demonstrate that pre-processing input images with an HPF can largely suppress their content, leading to a narrowed dynamic range and a large signal-to-noise ratio (SNR) between the weak stego signal and the image signal. As a result, statistical descriptions of the filtered image become more compact and robust. For this reason, we do not directly feed original images into the network, but rather their noise components. Mathematically, the noise component N of an image is the convolution between the image I and an HPF kernel K:

N = I * K

where * denotes the convolution operator. We follow the general setting and choose K to be the KV kernel. In the HPF sub-network, a 5 × 5 KV kernel pre-processes input cover/stego images to obtain their noise components.
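As a minimal sketch of this pre-processing step, the following Python code convolves an image with the standard 5 × 5 KV kernel from the SRM literature (the kernel values are as commonly published; the helper function is ours):

```python
import numpy as np
from scipy.signal import convolve2d

# The standard 5x5 KV high-pass kernel from the SRM literature,
# used here as K in N = I * K.
KV = np.array([[-1,  2,  -2,  2, -1],
               [ 2, -6,   8, -6,  2],
               [-2,  8, -12,  8, -2],
               [ 2, -6,   8, -6,  2],
               [-1,  2,  -2,  2, -1]], dtype=float) / 12.0

def noise_component(image):
    """Noise residual N = I * K: suppresses image content and raises
    the SNR between the weak stego signal and the image signal."""
    return convolve2d(image, KV, mode="same", boundary="symm")

# A flat image region yields a zero residual (the coefficients sum to 0):
img = np.full((8, 8), 128.0)
print(np.abs(noise_component(img)).max())  # 0.0
```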
In the residual learning sub-network, there are two kinds of building blocks: the residual learning block (ResL) and the size-increasing block. In the architecture figure, n1, n2, n3, or n4 means that there are n1, n2, n3, or n4 ResL blocks following the current layer, and p@q × q means that there are p filters of size q × q; the ReLU activation layers, the max pooling layers, and the batch normalization layers are not shown. The classification sub-network finally maps features to labels.

The residual learning sub-network extracts effective features for discriminating cover images and stego images. The sub-network first uses 64 convolution filters (of size 7 × 7) to convolve the input images, generating many feature maps for subsequent processing. Following the convolution layer, there are a ReLU activation layer, a max pooling layer, and a batch normalization layer. This processing captures many different kinds of dependencies among pixels in the noise-component images; its purpose is to make the network extract enough statistical properties to detect the secret message accurately. The residual learning layers are constituted by two kinds of building blocks: the non-bottleneck block and the bottleneck block, shown in Fig. 4. A non-bottleneck block has two convolution layers of size 3 × 3, and every convolution layer is followed by a ReLU activation layer, a max pooling layer, and a batch normalization layer. In a bottleneck block, the number of convolution layers is three, and two sizes of convolution filters are used within the block: 1 × 1 and 3 × 3. In practice, a bottleneck block is more economical for building CNN models of large depth. In conventional residual learning, both the input and the output of the two building blocks have the same sizes. For size increasing, the output has double the number of feature maps of the input; to keep every block at the same complexity, the feature map is down-sampled by a factor of 2 in the size-increasing block. In our HNN model, there are four stages of processing, which increase the number of feature maps from 64 to 512. The final classification sub-network consists of a fully connected neural network model, mapping the features extracted by the residual learning sub-network to binary labels. To guarantee the modeling capability of this sub-network, we set the number of neurons to 1000.

Network training. Parameters of the residual learning sub-network and the classification sub-network are learned by minimizing the softmax loss:

L(θ) = -(1/N) Σ_{i=1}^{N} Σ_{k=1}^{K} δ(y_i = k) · log o_{ik}(x_i, θ)

where y_i denotes the label of the sample x_i, δ(·) represents the delta function, N is the number of training samples, K is the number of labels (K = 2), o_{ik}(x_i, θ) denotes the output for the i-th sample x_i at the k-th label, and θ is the parameter of the network. For a neural network model, θ generally represents the weight matrices W and the bias vectors b. The weight matrix and bias vector for each layer are updated by gradient descent:

θ ← θ - η · ∂L(θ)/∂θ

where η is the learning rate (see the sketch below).
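A minimal numpy sketch of the softmax loss and gradient-descent update defined above, using a single linear layer as a stand-in for the full network (an assumption made for brevity):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss_and_grad(W, b, X, y):
    """Softmax loss L(theta) = -(1/N) sum_i log o_{i,y_i} for a single
    linear layer (a stand-in for the full network), plus its gradients."""
    N = X.shape[0]
    O = softmax(X @ W + b)                 # o_ik(x_i, theta)
    L = -np.log(O[np.arange(N), y]).mean()
    G = O.copy()
    G[np.arange(N), y] -= 1                # dL/dz before averaging over N
    return L, (X.T @ G) / N, G.mean(axis=0)

# One gradient-descent step: theta <- theta - eta * dL/dtheta
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
y = np.array([0, 1, 0, 1])                 # K = 2 labels (cover/stego)
W, b, eta = rng.normal(size=(3, 2)), np.zeros(2), 0.1
L, dW, db = loss_and_grad(W, b, X, y)
W, b = W - eta * dW, b - eta * db
print(f"loss = {L:.3f}")
```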
V. EXPERIMENTAL RESULTS

This experiment demonstrates the adequacy of the features automatically learned by the proposed HNN. In the main experiment, we select S-UNIWARD steganography at 0.4 bpp for evaluation. The last feature map preceding the output node in the HNN model is chosen as the automatically learned feature, and we select the traditional spatial rich model (SRM) feature [8] for comparison.

[Table: Performance comparisons with prior arts]

VI. CONCLUSION

This paper presented a novel convolutional neural network model for image steganalysis. The proposed model has two clear differences from existing works. First, the proposed network has a considerably larger depth than present CNN-based models. Second, a novel learning method known as residual learning is used to actively preserve the weak stego signal. Experiments on a standard dataset have demonstrated the following contributions of the proposed network: a CNN with large depth shows a superior capacity to model natural images and can extract complex statistical features for classifying cover images and stego images; residual learning proves to be effective in preserving the weak stego signal, letting the proposed model capture the difference between cover images and stego images; in addition, features automatically discovered by the proposed network are more easily classified than established rich-model-based features. The present work shows that a deep network with residual learning can detect spatial-domain steganography effectively. We will extend this work to detect compressed-domain steganographic algorithms. Furthermore, like existing CNN models that are computationally expensive, the proposed model also needs sufficient computational resources to support its effectiveness. We will also concentrate on improving its training efficiency in future work.
Bone Tissue is an Integral Part of the Fascial System

Bone tissue is not considered an integral part of the fascial system as per the current definition of fascia. Bodily fasciae derive from the mesoderm, while the fasciae associated with the cranial-cervical area derive from the ectoderm. Bone tissue or specialized connective tissue follows the same development process, but with a greater admixture between the two embryological sheets. Bone tissue is the largest organ capable of producing autocrine and paracrine substances, influencing its own metabolism and that of other organs. This article reviews the functions of bone, the anatomy that determines its shape, and its relationships within an organism. The objective of the article is to provide a scientific rationale for incorporating bone tissue within the definition of fascia, using the most up-to-date scientific knowledge.

Introduction And Background

No one definition of the fascial system has yet been accepted by all researchers. One of the most commonly used definitions derives from the Fascia Nomenclature Committee (2014), created by the Fascia Research Society and founded in 2007: "The fascial system includes adipose tissue, adventitia, neurovascular sheaths, aponeuroses, deep and superficial fasciae, dermis, epineurium, joint capsules, ligaments, membranes, meninges, myofascial expansions, periostea, retinacula, septa, tendons (including endotendon/peritendon/epitendon/paratendon), visceral fasciae, and all the intramuscular and intermuscular connective tissues, including endomysium/perimysium/epimysium [1]."

The fascial system is an anatomical continuum that connects every part of the body. A recent study showed, for example, that the thoracolumbar fascia is in contact with the fascia of the abdominal muscles [2]. Microscopically speaking, we know that there are no discontinuities in the fascia because there is an absolute anatomical and functional continuity [3,4]. In a previous paper, we attempted to establish a new definition of the fascial system, viewing tissue from a functional and embryological point of view, including the epidermis and the bone tissue, which had been excluded from previous classifications: "The fascia is any tissue that contains features capable of responding to mechanical stimuli. The fascial continuum is the result of the evolution of the perfect synergy among different tissues, capable of supporting, dividing, penetrating and connecting all the districts of the body, from the epidermis to the bone, involving all the functions and organic structures. The continuum constantly transmits and receives mechano-metabolic information that can influence the shape and function of the entire body. These afferent/efferent impulses come from the fascia and the tissues that are not considered as part of the fascia in a biunivocal mode [5]."

In addition to the solid state of the fascia, we recently tried to introduce the concept of a liquid fascia, that is, a specialized connective tissue that constitutes an integral part of the fascial system: blood and lymph. To do this, we developed a new theoretical model incorporating liquids into the biotensegrative vision: Rapid Adaptability of Internal Network (RAIN) [6]. Currently, only the periosteum, a connective tissue sheath (dense connective tissue) that covers the richly vascularized bone, is considered by scholars as an integral part of the fascial system. It is divided into two layers.
The first, outer layer is rich in blood vessels and nerves, fibroblasts, elastin, and collagen; this layer determines the mechanical stability of the periosteum [7]. The second, deepest layer consists of osteoblasts and smaller fibroblasts of homogeneous (isodiametric) diameter, with adult mesenchymal skeletal progenitor cells; this layer is fundamental for regenerative processes [7]. This article reviews bone tissue in terms of its local and systemic functions, with the aim of developing a potential new definition of fascia, as bone is a connective tissue.

Review

Embryological derivation of bone tissue

Bodily fasciae derive from the mesoderm, while the fasciae associated with the cranial-cervical area derive from the ectoderm [5]. Bone tissue or specialized connective tissue follows the same development process, but with a greater admixture between the two embryological sheets. The bones of the skull and the first cervical vertebrae originate from the mesoderm and the ectoderm. To give examples from the bones of the skull, the sphenoid bone originates from the cephalic mesoderm (the orbitosphenoid and basal-post-sphenoid portion) and from neural crest cells (the alisphenoid and basal-pre-sphenoid portion) [8]. The occipital bone is formed from the paraxial mesoderm (basiocciput, jugular tubercles, foramen magnum, anterior tubercle of the clivus, occipital condyles) and from the neural crests of the notochord (the remaining parts of the occipital bone) [9,10]. The facial bones or splanchnocranium derive mainly from the cells of the ectoderm, except for some parts of the mandible (mesoderm); the bones of the cranial vault originate from both the ectoderm (frontal bone) and the mesoderm (parietal bones) [11,12]. The first vertebra in particular develops from the notochord, while the remaining part of the vertebral column derives from the paraxial mesoderm, as do the ribs and the scapula [10,13]. The sternal bone derives from the lateral plate of the mesoderm; the cells migrate from a different area of the mesoderm, laterally towards the center of the mesoderm [14]. The bones that will constitute the limbs derive from the lateral plate of the mesoderm [15].

Bone tissue is an organ

Bone is traditionally regarded as a target for different hormonal substances (1,25-dihydroxyvitamin D, calcitonin, sex hormones and growth hormones, thyroid hormones), growth factors [transforming growth factor beta (TGF-B), insulin-like growth factor (IGF-1), fibroblast growth factor (FGF), bone morphogenic proteins (BMPs), and platelet-derived growth factor (PDGF)], as well as inflammatory substances [interleukins (IL-1β, IL-6) and tumor necrosis factor α (TNF-α)] [16-18]. Bone tissue is the largest organ capable of producing autocrine and paracrine substances, influencing its own metabolism and that of other organs [16]. The osteocyte is the most abundant bone cell capable of secreting sclerostin; the latter influences bone metabolism (autocrine action) and systemic metabolism (paracrine action) [16]. An increase in blood sclerostin is found in particular when the bone receives decreased loading stimulation. The autocrine action stimulates reduced remodeling of the bone (osteoporosis), while the paracrine action influences insulin action [16]. Another molecule produced by osteocytes is a phosphatonin, more precisely, fibroblast growth factor 23 (FGF23). The transmembrane receptor of FGF23 (a protein known as Klotho) is found in the osteocytes and other tissues, such as the thyroid gland and the kidneys [16].
A reduced level of FGF23 and its Klotho receptor is correlated with premature aging and systemic endothelial dysfunction, while an optimal level of FGF23 positively influences renal function, protecting the kidney from phosphate retention and excessive production of parathormone [16]. An excess of FGF23 is detrimental to the health of the central nervous system. FGF23 is also associated with alcohol abuse, and increased FGF23 beyond normal physiological threshold values causes an alteration in hippocampal morphology and a cognitive decline [16]. Osteocalcin, a peptide hormone synthesized by osteoblasts, is essential for optimal adaptation of muscle fibers after exercise, probably owing to an increase in insulin sensitivity in myofibers [16]. It is able to stimulate, through a membrane receptor (G protein-coupled receptor family C group 6 member A, GPRC6A), the production of insulin from the pancreatic beta cells, and to influence the lipid metabolism of the liver [16]. A recent study using an animal model demonstrated the ability of osteocalcin/GPRC6A to stimulate the synthesis of luteinizing hormone (LH), as well as the production and release of testosterone from Leydig cells. In this way, bone can control male hormones, utilizing a path independent of the hypothalamus-pituitary axis [16]. Bone tissue is fundamental for the general health of the individual, influencing different organs and systems through the paracrine hormonal production of bone cells [17,18].

Bone tissue cells

Adult bone contains three major cell types: the osteocyte, which accounts for about 90%-95% of all bone cells; the osteoblast, which derives from mesenchymal stem cells; and the osteoclast, which derives from hematopoietic progenitor cells [19]. Osteocytes develop from the osteoblasts and are found in the bone matrix and on the bone surface. They are considered essential for maintaining bone turnover, through the production of the sclerostin protein and its receptor [nuclear factor (NF)-kB ligand and receptor activator of NF-kB ligand (RANKL)] [19]. Osteocytes control the activity of osteoblasts (which create and repair bone tissue) and osteoclasts (which disassemble bone tissue), allowing the bone to adapt and respond to any mechano-metabolic stimuli [19]. Osteocytes are the main sensors of mechanical stimuli. The osteocytes form a network within the entire bone tissue (lacunar-canalicular), so that any stimulus can be transported and sensed by the entire bone area; mechanical energy is converted into electrical energy or a biochemical signal [20]. This transductive mechanism of the osteocyte is favored by the canonical Wnt biochemical pathway, involving proteins that help to transport the signal inward [21]. Each osteocyte senses what is happening to the entire bone, owing to the presence of junction or gap-junction proteins, in particular connexin-43; together, they constitute the osteocyte or lacunar-canalicular network [20]. How are mechanical signals, with respect to the whole bone, managed by the cells? The osteocytes are dispersed throughout the bone matrix, which consists of type I collagen (and other non-collagenous proteins such as osteopontin, bone sialoproteins, and proteoglycans), minerals (carbonated apatite crystals), and water [22]. The osteocytes act as mechanical sensors, but water also plays a fundamental role in the transduction of the mechanical signal [22].
At the ultrastructural (nanometer) level of bone are collagen fibrils and hydroxyapatite crystals, whose bonds constitute a large part of the matrix; this bonding makes it possible to create a state of pre-tension [22][23]. When the mechanical message to the bone is transmitted through the osteocyte via Wnt, the bone matrix becomes deformed, generating electrical charges or movements of water (fluid flow shear stress). The water moves through the entire bone, deforming the structures along its passage, owing to slippage between the collagen fibers and the hydroxyapatite crystals (bone elasticity), interacting simultaneously with all the osteocytes [22][23]. Hydration is an important component for the proper maintenance of bone tissue; water represents about 15%-25% of the total volume of bone, thereby establishing a variable pressure gradient [24][25]. Vascular system and bone innervation Inside the matrix in the cortical area (the dense outer layer of bone) are pores known as Haversian canals, located inside the lacunar-canalicular network (also called "active osteonal bone") [26]. In these Haversian canals, surrounded by a thin bony lamella (known as a cement line), we find the blood vessels and nerves [25]. Haversian canals have a longitudinal pattern but lie at an angle of about 15-30 degrees from the median line of the bone [26][27]. Volkmann canals, positioned transversely, connect the Haversian canals, creating a shared blood network [27]. The entire bone system is richly vascularized, from the bone marrow to the periosteum [27]. It is the heart and the movements of the muscles that enable the entry and exit of blood to and from the bones [27]. The function of the lymphatic system inside the bone is not clear; it is probable that waste metabolites are transported by the outgoing venous system [27]. Bone is innervated by parasympathetic fibers, which communicate with bone acetylcholine (ACh) receptors, contributing to bone growth. Vagal innervation to bone is induced via stimulation of the central nervous system by interleukin-1; in this modality, the vagus stimulates apoptosis of the osteoclasts [28]. Precise data on the topographic presence of the vagus nerve in bone are lacking. Bone is also affected by innervation of the sympathetic system, with a more complex penetration of the tissue, involving the cortical area and the bone marrow, than occurs with the parasympathetic system [29]. Sympathetic activation of the bone suppresses bone growth, stimulating the activity and production of osteoclasts via various inflammatory substances released from nerve endings (for example, prostaglandins, bradykinins, endothelin, and nerve growth factor) [29]. In bone and in periosteum there are mechanosensitive fibers of the nociceptive type, which respond quickly to mechanical distortions of the tissue [29]. In bone tissue, there is a direct relationship between the autonomic and the central nervous system. Bone marrow Red bone marrow or myeloid tissue (as distinct from yellow bone marrow, which consists mainly of adipose tissue, determining its color) is a key component of the lymphoid system, producing the lymphocytes that form part of the body's immune system. Myeloid cells are recruited in the presence of inflammation by the sympathetic nervous system [30]. The leukocytes and neutrophils produced by the bone marrow are released into the systemic circulation, starting from the venous sinuses of irregular caliber known as sinusoids (or vascular sinuses).
Immune cells will then be recruited to the inflamed or injured site [31]. Bone marrow thus participates in the repair and defense of the body, both for bone tissue and for all other organs and tissues. Improving the current definition of the fascial system Bone tissue corresponds perfectly to the definition of fascia [5]. It is able to remodel in response to mechanical stimuli, and it works in synergy with other structures of the human body, influencing the systemic health of the individual. Each osteocyte communicates with all the other osteocytes in the bone where it resides. Bone is part of the fascial continuum. As an example, consider the mechanical stimulus of a voluntary movement such as walking, where the tension felt by the epidermis of the foot passes through all the tissues to the bone, which participates in the adaptation of the whole body in a biunivocal mode through autocrine and paracrine actions. Comparing our previous definition of fascia with our current definition, we added the term "feeding," as arterial blood nourishes the fascia and is an integral part of the fascial continuum. Regarding the concept of nurturing, the action of venous blood and lymph is inherent, these being integral parts of the definition of fascia; an adequate metabolic environment is created to best utilize the nutrients that the arteries carry [4,6]. But it is not only the arteries that contribute to a satisfactory metabolic environment. The entire fascial system transmits substances between different tissues and between cells to inform what happens from a mechano-metabolic perspective and to facilitate an adequate mechanotransduction process [32][33]. The ability to receive information is vital to be able to adapt and survive, from the whole tissue to the single cell. We can define it as an informational "nutrition." Finally, we added two other words to improve the definition of the fascia: liquids and solids. The fasciae inside the human body exist as both a solid structure and a liquid structure [4,6]. We reiterate our previous definition of fascia, with some words added (highlighted in italics): "The fascia is any tissue that contains features capable of responding to mechanical stimuli. The fascial continuum is the result of the evolution of the perfect synergy among different tissues, liquids and solids, capable of supporting, dividing, penetrating, feeding and connecting all the districts of the body, from the epidermis to the bone, involving all the functions and organic structures. The continuum constantly transmits and receives mechano-metabolic information that can influence the shape and function of the entire body. These afferent/efferent impulses come from the fascia and the tissues that are not considered as part of the fascia in a biunivocal mode." Conclusions This article reviewed the main functions of bone and its related anatomy, as well as the capacity of bone to adapt in response to mechano-metabolic stimuli. We have emphasized skeletal relationships in relation to the systemic health of the individual, with biunivocal modalities, inserting the skeletal network into the definition of fascia. We have added to the description the term "feeding," because the liquid fasciae, like the blood and the lymph, have peculiarities that allow the nourishment of different tissues. The same tissues feed on mechano-metabolic information, which is mutually exchanged with the ultimate aim of adapting and surviving.
Other words added to enrich the definition of fascia are "liquids and solids," because the fascial tissue is composed of both solid and liquid material. We believe that further research is needed to achieve a truly complete definition of fascia, in light of pressing and constantly emerging new scientific information.
2019-01-31T14:12:44.146Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "0632c2fa6d2566e1187dae4eb842e3734e309da7", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/16943-bone-tissue-is-an-integral-part-of-the-fascial-system.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1025c9a8cea478148e4fcba7cf7059f926b1716e", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
240416506
pes2o/s2orc
v3-fos-license
The Combination of Oral PDE5-Inhibitor (Sildenafil) And Oral Prostacyclin Analogue (Beraprost) Therapy for Increasing Quality of Life in Adults with Pulmonary Arterial Hypertension Related to Uncorrected Secundum Atrial Septal Defect Background: Sildenafil, an oral phosphodiesterase type-5 inhibitor, has vasodilatory effects through a cyclic guanosine 3',5'-monophosphate-dependent mechanism, whereas beraprost, an oral prostacyclin analog, induces vasorelaxation through a cAMP-dependent mechanism. This combination has often been used, but there has been little detailed study of it. Objectives: To investigate whether the combination of oral sildenafil and beraprost is superior to sildenafil alone in adult patients with pulmonary arterial hypertension (PAH) related to uncorrected secundum atrial septal defect (ASD). Methods: Patients with secundum ASD who developed PAH were divided into two groups. Group A received oral sildenafil 3x40 mg and oral beraprost 3x20 mcg. Group B received oral sildenafil alone, 3x40 mg, over a 12-week period. Health-related quality of life (HRQoL) was recorded by patients using the Medical Outcomes Study 36-item short form (SF-36) questionnaire at baseline and after 12 weeks of therapy. Therapy adherence was supported through a series of phone calls and four-weekly hospital visits. Every routine follow-up appointment included an examination of side effects and a dosage modification based on the clinical situation. Results: We did not find any significant difference in the proportions of confounding factors between groups. Compared with Group B, Group A had better functional capacity, limitation due to physical health, energy/fatigue, pain, and health change (P=0.00, P=0.03, P=0.044, P=0.026, P=0.008, respectively).
Conclusion: The combination of oral sildenafil 40 mg three times per day and beraprost 20 mcg two times per day significantly increased HRQoL in PAH patients with uncorrected secundum ASD compared with sildenafil alone. Keywords: Pulmonary Hypertension; Secundum ASD; Quality of Life Pulmonary arterial hypertension (PAH) is a term used to classify a variety of conditions that have in common an injury to the pulmonary vasculature that produces elevations in pulmonary arterial pressure. PAH is a common (9-35%) consequence of congenital heart disease that primarily affects patients with a left-to-right shunt, including an atrial septal defect (ASD) that has or has not been repaired. It is characterized by vascular remodeling, elevated pulmonary vascular resistance, and raised pulmonary arterial pressure. 1 PAH confers twice the risk of mortality, three times the risk of a cardiac event, and five times the chance of ICU admission in ASD patients. Patients with PAH caused by uncorrected ASD are currently managed in tertiary care centers, where the development of advanced specific medications is expected to improve prognosis. 2,3 ASD patients who develop PAH can have symptoms that reduce their quality of life, such as dyspnea on effort, hemoptysis, dizziness, chest pain, palpitations, and peripheral edema. These impact physical mobility and mental state and may deteriorate the patients' health-related quality of life (HRQoL). 4 HRQoL is an indicator of personal satisfaction with one's life that is influenced by one's health condition, including physical stamina, learning function, working relationships, emotional well-being, and spirituality. It is subjective, multifaceted, and transient. Quality of life in various chronic illnesses may be assessed using specialized and validated questionnaires such as the Short-Form 36 Health Survey (SF-36). This highly repeatable, well-known, non-invasive, and extensively utilized survey is available in a variety of languages. 4,5 Sildenafil, an oral phosphodiesterase type-5 inhibitor, causes vasodilation via a cyclic guanosine 3',5'-monophosphate-dependent pathway, whereas beraprost, an oral prostacyclin analog, generates vasodilation via a cAMP-dependent mechanism. When compared to treatment with either medication alone, combined treatment with sildenafil and beraprost exhibited additive effects on increases in plasma cAMP and cyclic guanosine 3',5'-monophosphate levels, resulting in additional improvement in pulmonary hemodynamics. 6 The combination of sildenafil and epoprostenol (an intravenous prostacyclin analogue), moreover, has demonstrated synergistic effects. The PACES trial evaluated the benefit of sildenafil in patients on background epoprostenol therapy. This study included a total of 267 patients, randomized to sildenafil or placebo. Sildenafil improved 6MWD by 28.8 meters (95% CI, 13.9 to 43.8 meters), and there were improvements in cardiac index and reductions in mean PA pressures. Combined therapy yielded improvement in quality of life and time to clinical worsening, although there were increased rates of headaches and dyspepsia. 7 HRQoL improvement with specific therapy has been reported in patients with PAH related to uncorrected ASD, but the evidence is not consistent for the combination of an oral PDE5 inhibitor and an oral prostacyclin analogue.
Thus, we aimed to investigate whether there are HRQoL differences between sildenafil monotherapy and sildenafil combined with an oral prostacyclin analogue (beraprost) in adult patients with PAH related to uncorrected secundum ASD. Method This was an observational prospective cohort study undertaken in the Saiful Anwar General Hospital, a tertiary hospital associated with Universitas Brawijaya in Malang, East Java, Indonesia. Adult patients (more than 18 years old) with PAH and uncorrected secundum ASD who had enrolled on the Saiful Anwar-PH registry and signed the informed consent form were included in this study. Transthoracic echocardiography and transesophageal echocardiography were used to identify secundum ASD, whereas right heart catheterization was used to diagnose PAH. Exclusion criteria included failing to complete follow-up, having another congenital heart defect, being in WHO NYHA functional class I, being pregnant, or having chronic pulmonary illnesses. Demographic and clinical data such as age, gender, WHO functional class, marital status, and concomitant illness were collected on a case report form. Subjects completed an HRQoL questionnaire before and after receiving the optimum dose of oral sildenafil 3x40 mg (Group B) or the same in conjunction with an oral prostacyclin analogue (beraprost 3x20 mcg) (Group A). Therapy adherence was supported through a series of phone calls and four-weekly hospital visits. Every routine follow-up appointment included an examination of side effects and a dosage modification based on the clinical situation. Assessment of HRQoL HRQoL in varied cardiac situations was assessed using the Short Form Survey (SF)-36 questionnaire. The HRQoL assessment was carried out to assess physical functioning, physical health limitations, emotional problems, energy/fatigue, emotional well-being, and social functioning. Right heart catheterization (RHC) After ASD was verified and recorded by TTE and/or TOE, right heart catheterization (RHC) was performed in all individuals. Before treatments, cardiology experts performed RHCs on non-sedated patients using conventional techniques. The goals of RHC were to compute hemodynamics, diagnose pulmonary arterial hypertension (PAH), and assess the septal defect/shunt for repair. The shunt ratio was calculated as pulmonary blood flow (Qp)/systemic blood flow (Qs) = (aortic saturation − mixed venous (MV) saturation)/(pulmonary vein (PV) saturation − pulmonary artery (PA) saturation). MV saturation was calculated as ((3 × superior vena cava saturation) + inferior vena cava saturation)/4. The pulmonary vascular resistance index (PVRi) was calculated as (mPAP − mean left atrial pressure (mLAP), or mPAWP)/Qp. Qp was calculated using the formula: O2 uptake (mL/min)/(1.36 × 10 × hemoglobin level × ((PV saturation − PA saturation)/100)). 7 PVR was calculated as PVRi/body surface area. The PAH diagnosis was established when mPAP was ≥ 25 mmHg, PVR was greater than 3 WU, and PAWP or mLAP was 15 mmHg or less. 8 Eisenmenger syndrome was characterized by Qp/Qs ≤ 1 and PVRi > 8 WU·m². A vasoreactivity test was done on a subset of individuals (at the discretion of cardiologist consultants). The vasoreactivity result was determined using established recommendations (reduction in PVR > 20% and final PVRi ≤ 6 WU·m²). 9 Shunt correctability was defined as appropriate defect anatomy (for surgery and/or device), Qp:Qs > 2, and PVRi ≤ 6 WU·m². 8
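To make the oximetry and resistance formulas above concrete, here is a minimal Python sketch of the stated calculations; the function names and the worked numbers are ours (hypothetical), not values from the study.

```python
def mixed_venous_saturation(svc_sat, ivc_sat):
    """MV saturation = ((3 x SVC saturation) + IVC saturation) / 4, as stated above."""
    return (3 * svc_sat + ivc_sat) / 4

def qp_qs_ratio(ao_sat, mv_sat, pv_sat, pa_sat):
    """Shunt ratio Qp/Qs = (aortic - MV saturation) / (PV - PA saturation)."""
    return (ao_sat - mv_sat) / (pv_sat - pa_sat)

def qp_fick(o2_uptake_ml_min, hb_g_dl, pv_sat, pa_sat):
    """Pulmonary blood flow Qp (L/min) by the Fick method:
    O2 uptake / (1.36 x 10 x Hb x (PV - PA saturation)/100)."""
    return o2_uptake_ml_min / (1.36 * 10 * hb_g_dl * (pv_sat - pa_sat) / 100)

def pvri(mpap, mlap, qp_index):
    """Pulmonary vascular resistance index (WU.m2) = (mPAP - mLAP or mPAWP) / indexed Qp."""
    return (mpap - mlap) / qp_index

# Hypothetical worked example (saturations in %, pressures in mmHg)
mv = mixed_venous_saturation(svc_sat=65.0, ivc_sat=70.0)                        # 66.25 %
shunt = qp_qs_ratio(ao_sat=95.0, mv_sat=mv, pv_sat=98.0, pa_sat=85.0)           # ~2.2
qp = qp_fick(o2_uptake_ml_min=250.0, hb_g_dl=14.0, pv_sat=98.0, pa_sat=85.0)    # ~10.1 L/min
print(f"MV saturation {mv:.1f}%, Qp/Qs {shunt:.2f}, Qp {qp:.1f} L/min")
```

With a Qp/Qs above 2 and a sufficiently low PVRi, such a hypothetical patient would meet the correctability criteria quoted above.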
Blood was drawn from each patient through venipuncture of peripheral veins and during RHC. Blood gas samples were measured using cuvette analysis. Hemoglobin and hematocrit levels were determined using a standard hemocytometer. Statistical Analysis Continuous variables are represented by the mean or the median (interquartile range). Categorical variables are represented by numbers and percentages. For comparisons between SF-36 domain subgroups (symptoms, activities and quality of life), the independent t-test was used. Subgroups were established for continuous variables based on their median values. All statistical analyses were carried out using SPSS software, version 21 (SPSS Inc., Chicago, IL, USA). Results were considered significant if p < 0.05. Result Forty-four patients from the Saiful Anwar Pulmonary Hypertension Registry were included in the analysis between January 2019 and December 2020. As of 31 December 2020, the mean age at diagnosis was 30.4 ± 8.2 years. The main symptoms were dyspnea on effort (35.9%), easy fatigability (16.3%), chest pain/discomfort (10.8%) and palpitations (9.3%). Twenty patients received sildenafil 3x40 mg, and 24 patients received the combination of beraprost 3x20 mcg and sildenafil 3x40 mg. We did not find any significant difference in the proportion of comorbid conditions between groups. Hemodynamic variables obtained during RHC (Table 2) showed no significant differences between groups. We found significant increases in quality of life in Group A compared with Group B, with significant differences in physical functioning, limitation due to physical health, energy/fatigue, pain, and health change (P=0.00, P=0.03, P=0.044, P=0.026, P=0.008, respectively). Discussion The mean age of patients with ASD who developed PAH was similar to that in other studies, which stated that PAH development in secundum ASD mostly occurs in the third decade. The incidence of PAH in secundum ASD is increased in patients aged 18 to 40 years. 10 Dyspnea on effort was the most common symptom in this study, similar to previous studies, which stated that the most common symptom in uncorrected ASD with PAH is breathlessness due to volume overload of the right ventricle (RV). 11 The assessment of medication effects on HRQoL is a key component in assessing the impact of medicines on clinical outcomes and health care. Several prior studies that assessed HRQoL following sildenafil and beraprost administration yielded results compatible with this study. A study by Joanna et al. (2008) found that after 12 weeks of treatment, sildenafil-treated participants exceeded placebo-treated patients in terms of exercise ability (p < 0.001). Increases in all SF-36 categories were seen in sildenafil-treated patients from baseline to week 12, with statistically significant increases in physical functioning (p < 0.001), overall health (p < 0.001), and vitality (p < 0.05) compared to placebo-treated control individuals. 12 Nazzareno Galiè et al. (2002) stated that patients who received beraprost improved their exercise ability and symptoms, resulting in a higher quality of life. 13,14 In this study, we found that combined treatment with sildenafil and beraprost yielded better quality of life than sildenafil alone, improving HRQoL as measured by the Medical Outcomes Study Short Form 36 (SF-36).
Patients in Group A had better physical functioning, lower limitation due to physical health, less energy fatigue and less pain. The SF-36 questionnaire is the most extensively used instrument for assessing HRQoL in various cardiac conditions; it covers eight health concepts. The improvement in HRQoL can be achieved in several ways. Sildenafil relaxes the pulmonary vasculature via a cGMP-dependent pathway, whereas beraprost dilates the pulmonary arteries via a cAMP-dependent process. As a general pharmacologic concept, when various medicines that cause comparable effects via distinct pathways are combined, they may have additive or synergistic effects. In fact, as compared to therapy with either medication alone, the combination of oral sildenafil and beraprost substantially reduced increases in RV systolic pressure and RV/BW (right ventricular weight to body weight ratio). These data imply that the combination of oral sildenafil and beraprost is more effective than either medication alone in preventing the development of monocrotaline (MCT)-induced pulmonary hypertension. 13,6 Although drug administration significantly improved the SF-36 statistically, the benefit may not be clinically significant for patients. Thus, the score difference must exceed the MCID (minimal clinically important difference), which represents the minimal amount of benefit recognized by the patient. Using the distribution-based method, Koichi et al. (2020) stated that the SF-36 MCIDs of the PCS (physical component summary) and MCS (mental component summary) were 5 and 5 by half the SD, and 6 and 5 by the standard error of the measurement. We believe that our result can be used as a treatment consideration for pulmonary arterial hypertension cases, especially in developing countries. We noticed several limitations in this study. First, the sample size was small; further study with a larger number of participants may be required. Second, we did not assess the effect of supportive treatments (such as diuretics, digoxin, and oral anticoagulants), which are among the PAH patient care methods specified in the ESC PAH recommendations. 8 Fluid retention, elevated central venous pressure, hepatic congestion, ascites, and peripheral edema are all symptoms of right heart failure. Although clinical experience suggests that diuretics might help minimize fluid retention symptoms, there have been no randomized studies of diuretic usage in PAH patients. Diuretic treatment is advised in PAH patients who show symptoms of right heart failure and fluid retention, according to a class I, level C recommendation. To avoid hypokalemia and pre-renal kidney disease, aldosterone antagonist treatment, combined with monitoring of plasma electrolyte levels and renal function, may be explored. 9 Conclusion This study concluded that the combination of oral sildenafil 40 mg three times per day and beraprost 20 mcg two times per day in PAH patients with uncorrected secundum ASD significantly improved physical functioning, limitation due to physical health, energy/fatigue, pain, and health change, resulting in increased HRQoL compared with sildenafil alone. Ethics Approval and Consent to participate This study was approved by the local Institutional Review Board, and all participants provided written informed consent prior to involvement in the study. Consent for publication Not applicable. Availability of data and materials Data used in our study are presented in the main text. Competing interests Not applicable. Funding source Not applicable.
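As an illustration only, the between-group SF-36 comparison described in the statistical analysis section (an independent t-test per domain) could be reproduced as follows; the scores below are randomly generated placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical SF-36 physical-functioning scores after 12 weeks of therapy
group_a = rng.normal(loc=70, scale=12, size=24)  # sildenafil + beraprost
group_b = rng.normal(loc=58, scale=12, size=20)  # sildenafil alone

# Independent two-sample t-test, as used for each SF-36 domain
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would be reported as significant
```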
2021-11-02T15:08:18.672Z
2021-10-30T00:00:00.000
{ "year": 2021, "sha1": "a99d7a64473ca1a53bbfa70066dc5eff88d4818a", "oa_license": "CCBY", "oa_url": "https://heartscience.ub.ac.id/index.php/heartscience/article/download/217/148", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "2869b1967a8fd0a6e58264f100841a326dc5b0b7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
235998210
pes2o/s2orc
v3-fos-license
Curcumin-loaded nanocomplexes: Acute and chronic toxicity studies in mice and hamsters Nanoparticles with a particle size range of 10-600 nm can enhance the therapeutic efficacy of curcumin [22]. Delivery of nanoparticles of curcumin can improve oral bioavailability due to enhanced solubility [24,25] or increased mucosal permeation and cell-selective uptake [26,27]. Although several formulations of nanoparticle-based curcumin have been developed, only a few of them, such as curcumin-loaded polymeric nanoparticles of Eudragit S100 [28] and a curcuminoid-essential oil complex [29], have been evaluated for the safety of the nanoparticle delivery system in animal models. These formulations were non-toxic in acute toxicity studies at doses equivalent to 2 g curcumin/kg of body weight [28]. Daily administration of a curcuminoid-essential oil complex (CEC) at a dose of 1 g CEC/kg body weight [29] or of curcumin-loaded polymeric nanoparticles of Eudragit S100 at 0.1 g curcumin/kg of body weight [28] also revealed no toxicity after 90 and 28 days, respectively. However, further evaluation of safety, particularly in the event of long-term consumption of nanoparticles, is required. It is widely accepted that the toxicity of nanodelivery systems varies according to nanoparticle size [30], the nanomaterials used and their exact formulation [23], dosage [31] and cell types targeted [32]. Therefore, although curcumin has been shown to be safe in several studies [33,34], development of a novel nanoparticle-based delivery system for curcumin still requires safety assessment. Nanoparticles taken orally are generally absorbed through the intestinal mucosa or lymphatics, whereupon they are distributed and excreted through clearance systems, i.e., renal clearance, hepatobiliary clearance, the reticuloendothelial system or the mononuclear phagocyte system, depending on several factors such as particle size [35], surface morphology, charge, and the properties of the nanomaterials [36]. Renal clearance, through glomerular filtration and tubular secretion, deals with small particles (about 5 nm) [36]. The epithelial lining of the hepatic sinusoids deals with nano-sizes up to 200 nm; however, other hepatic mechanisms, i.e., endocytosis, metabolism and enzymatic cleavage, also play roles in hepatic clearance of foreign particles and eliminate them via the bile duct. Nanoparticles with sizes of >200 nm tend to remain in the body until degradation [35,37]. Previously, mucoadhesive polymers were used to form nanoparticles of curcumin and were shown to promote its release and absorption [38], with chemoprevention potential for opisthorchiasis-associated cholangiocarcinoma (CCA) [39]. To address the poor dispersibility of the curcumin nanoparticles in water, hydrophilic solid dispersions using arabic and xanthan gums were used to develop curcumin-loaded nanocomplexes (CNCs) with particle sizes in the range of 400-1,000 nm. These showed improved oral delivery of curcumin by enhancing gastrointestinal mucoadhesion and potentially extending curcumin retention [40]. Because CNCs tend to be retained in the body for long periods, it is essential to determine their safety for human use as a novel drug. Non-clinical risk assessment in an animal model is a necessary step on the path for translation of CNCs to use in human patients. The objective of this study was to investigate acute and chronic toxicities of oral CNCs in two different animal species, mice and hamsters.
Different doses of CNCs were administered to evaluate their toxicity as indicated by physiological and biochemical parameters, ultrastructural effects and histopathological changes in various organs. Safety assessment indicated that CNCs have very low toxicity in non-clinical trials and are ready for clinical study. CNCs preparation Arabic gum, xanthan gum and isoflurane were purchased from Sigma-Aldrich (St. Louis, MO, USA). Curcumin (>98 % purity w/w) was purchased from ACROS Organics (Geel, Belgium). Powdered CNCs (WellCap® Kaminn, with an encapsulation efficiency of 80 % and a loading capacity of 28 % [40]) and blank nanocomplexes (BNCs, WellCap® Capsule) were kindly gifted by Welltech Biotechnology Co. Ltd., Bangkok, Thailand, and stored following the manufacturer's instructions. In brief, the curcumin-encapsulated nanoparticles, prepared using an ethylcellulose and methylcellulose formula and dispersed in deionized water, were mixed with a solution containing 1 % each of arabic gum and xanthan gum and then subjected to a spray-drying process as previously described [40]. Morphology Focused ion beam-field emission scanning electron microscopy (FIB-SEM, FEI Helios Nanolab G3CX, USA) was used to observe dry CNC powder at 10 kV, after the powder was spread on adhesive tape and then vacuum-coated with a thin layer of gold at 15 kV for 90 s. CNCs were also dispersed in deionized water, dropped onto an analytical glass plate, desiccated, and then gold-coated before visualization using a JSM-IT100 SEM (JEOL, Japan). CNCs in deionized water were dropped onto a 200-mesh grid and visualized by transmission electron microscopy (TEM, JEM-1010, JEOL, Japan) at 100 kV. Stability Ten light-proof foil/polyamide-sealed packets containing CNCs (5 g each) were randomly sampled from three batches of thirty packs (ten packs/batch) and stored at 25 ± 2 °C/60 ± 5 % RH for stability monitoring. After 6 or 12 months, samples were diluted with ethyl acetate for UV spectroscopic determination of curcuminoid concentration at 416 nm (Spectroquant UV2400PC, China) in comparison to standard solutions of curcuminoid (1-5 μg/mL). Data were presented as the average % w/w of curcumin in CNCs and as % of the initial amount (a worked numerical sketch of this calculation appears below). Ethics statement and animals used This study has been reviewed and approved by the Animal Ethics Committee of Khon Kaen University based on the Ethics of Animal Experimentation of the National Research Council of Thailand (ACUC-KKU-59/2559). Two species of animals were used in this study. Swiss albino mice of both sexes (JcL:ICR) (4-5 weeks old, 25-40 g, total 197 mice) were purchased from Nomura Siam International Co. Ltd., Bangkok, Thailand and reared at the Northeast Laboratory Animal Center, Khon Kaen University. This strain of mice has been used in tests for sensitivity to chemicals, susceptibility to toxic substances and for tumor induction, and is commonly used for preclinical toxicity testing. Syrian golden hamsters (Mesocricetus auratus), a susceptible animal model for opisthorchiasis-associated cholangiocarcinoma [41], of both sexes (4-6 weeks old, 80-100 g, total 207 hamsters) were reared at the Animal Unit, Faculty of Medicine, Khon Kaen University, Khon Kaen, Thailand. All female rodents were nulliparous and non-pregnant, as recommended by the Organization for Economic Co-operation and Development (OECD) Guidelines for Testing of Chemicals (Sections 423 and 452). This study was performed under good laboratory practice according to the OECD good laboratory practice guideline [42].
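As a worked illustration of the stability assay just described, the calculation reduces to a linear calibration of absorbance at 416 nm against the curcuminoid standards; all absorbance values below are hypothetical, and the helper name is ours.

```python
import numpy as np

# Hypothetical A416 readings for curcuminoid standards (1-5 ug/mL)
std_conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # ug/mL
std_abs = np.array([0.152, 0.301, 0.455, 0.603, 0.752])

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear (Beer-Lambert) fit

def curcuminoid_conc(absorbance):
    """Concentration (ug/mL) of a diluted CNC extract from its A416 reading."""
    return (absorbance - intercept) / slope

# Percent of initial content after storage (hypothetical readings)
initial = curcuminoid_conc(0.620)
after_12_months = curcuminoid_conc(0.605)
print(f"{100 * after_12_months / initial:.1f}% of initial")  # ~97.6%, i.e. > 97%
```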
All animals were maintained under clean conventional conditions at 23 °C (± 2 °C) with relative humidity 30-60 % and 12 h light/dark cycles, and fed ad libitum a commercial pellet diet (CP-SWT, Thailand) with unlimited access to food and drinking water. All animals were randomly assigned to cages for at least 5 days before experimentation. All cages were monitored every day and bedding material was changed thrice a week. Acute toxicity study Female animals (in total, 29 mice and 23 hamsters) were randomly assigned to non-treated (normal control) or intervention groups. Acute toxicity testing was carried out following OECD Guideline 423 for Testing of Chemicals with slight modifications, including dose level, rodent species and number of animals used. The acute oral toxicity of CNCs was classified based on the Globally Harmonized System of Classification and Labelling of Chemicals (GHS) 2003, in which the most severe toxicity is classified in category 1 (LD50 ≤ 0.005 g/kg bw) and relatively low toxicity in category 5 (LD50 > 2-5 g/kg bw). Agents that have very low toxicity (LD50 > 5 g/kg bw) [43] are placed into the unclassified hazards category, as indicated by the ∞ symbol, based on OECD guidelines [44]. Most previous articles have reported the LD50 of curcumin to be approximately 2 g/kg bw (in GHS category 5 of OECD guidelines for oral toxicity studies) in rats and mice [45,46]. Therefore, in this study, we used the higher dose levels from OECD Guideline 423 to determine the actual dose for translation from animal to human. For rodent species, mice and hamsters were selected as described above. CNCs were given on a single occasion at low, medium and high doses, which were 0.1, 1.1 or 11 g/kg bw (equivalent to 0.03, 0.3 or 3 g/kg bw of curcumin, respectively) for mice, and 0.2, 2.1 or 21.4 g/kg bw (equivalent to 0.06, 0.6 or 6 g/kg bw of curcumin, respectively) for hamsters. Mice and hamsters in the blank nanocomplexes (BNCs) group received a single oral dose of 7.9 g/kg bw BNCs and 15.4 g/kg bw BNCs, respectively. Administration of the single dose was performed by oral gavage in a volume of 10 mL/kg bw. The BNCs or CNCs powder to be administered was diluted with distilled water at a ratio of 10:1 w/v. The diluted samples were given through gavage tubes within 45 min. Animals in the negative control group received no intervention. Clinical signs of toxidromes (depression, raised fur, tremors, excitability, twitching, salivation, morbidity) and mortality were observed and recorded twice daily for 14 days post-treatment. Chronic toxicity study The protocol was performed based on the OECD Guidelines for Testing of Chemicals (Section 452) with slight modifications, including rodent species, dose levels and number of animals used. Swiss albino mice (12/sex/group, total 168, body weight 25-40 g) and hamsters (13/sex/group, total 182, body weight 80-100 g) were randomly divided into seven groups as follows: Group 1 (control), normal diet without any treatment; Group 2, daily oral gavage with BNCs (0.58 g/kg bw/day in mice or 1.16 g/kg bw/day in hamsters); Groups 3-5, daily oral gavage of CNCs at low, medium and high doses in mice (0.09, 0.27 and 0.8 g/kg bw, equivalent to 0.025, 0.075 and 0.225 g/kg bw of curcumin, respectively) and at low, medium and high doses in hamsters (0.18, 0.54 and 1.61 g/kg bw, equivalent to 0.05, 0.15 and 0.45 g/kg bw of curcumin, respectively), for 6 months.
Groups 6 and 7 of both species (n = 24 mice and 26 hamsters), termed the recovery groups, were respectively given 0.58 g/kg bw/day of BNCs or the high-dose CNCs regimen daily for 6 months. Following cessation of the treatments at 6 months, animals in Groups 6 and 7 were held for a further 28 days. The dose volume in all animals was 10 mL/kg bw. The dose levels used for chronic toxicity testing were based on the results of the acute toxicity testing. The high-dose level in the chronic toxicity test was approximately equivalent to the medium dose level used in the acute toxicity tests; the medium and low doses used in the chronic toxicity tests were successive approximately three-fold reductions of the high dose. During the experiment, all animals were checked daily for overall health condition, body weight, morbidity and mortality. All animals were starved for 1 day before euthanasia. Sample collection and histopathological study Animals were anesthetized using isoflurane inhalation and euthanized by cardiac puncture. Blood samples obtained were immediately divided into three portions: one for hematological analysis (stored in an EDTA tube), one for coagulation analysis (in a citrate tube) and one for serum biochemistry. Internal organs including liver, lung, kidney, heart, spleen, pancreas, stomach, intestine and ovaries or testes were collected and immediately fixed in 10 % buffered formalin for histopathological study. The tissues were processed using an automatic tissue processor (Hestion, England) and embedded in paraffin (Bio-Optica, Italy). The paraffin-embedded tissues were cut using a Microm HM 315 microtome (Thermo Fisher Scientific, USA) and stained with hematoxylin and eosin (H&E). The slides were observed under a light microscope. Hematological and biochemical parameters All analyses of blood samples, with results reported as means ± SD, were conducted at the Laboratory Unit, Srinagarind Hospital, Faculty of Medicine, Khon Kaen University, Thailand. A Sysmex XS-800i/1000i automated hematology analyzer (Sysmex Corporation, Kobe, Japan) provided hematological parameters. A Cobas 8000 chemistry autoanalyzer (Roche Diagnostics International Ltd., Scotland) was used to determine serum biochemistry parameters, among which were glucose levels, activity of liver function enzymes and lipid profile. An automated blood coagulation analyzer, ACL TOP 550 (A Werfen Company, Germany), provided coagulation parameters. Scanning electron microscopy (SEM) To achieve our ultimate goal of CNCs use in opisthorchiasis-associated CCA patients, we focused on hamsters, a susceptible animal model for O. viverrini infection [41]. Tissue samples of hamster stomach and intestine (1 mm³ each) from Groups 5 and 7 in the chronic toxicity study were excised, fixed with Karnovsky's fixative, washed twice with buffered solution for 10 min, post-fixed using osmium tetroxide (OsO4) for 2 h and then re-washed twice in buffered solution for 10 min. The samples were dehydrated using ethyl alcohol for 10 min at each of the following concentrations: 50 %, 70 %, 80 %, 90 % and 95 %. Final dehydration was achieved using three changes of absolute alcohol (10 min each) followed by amyl acetate for 15 min. All tissues were dried using a K850 critical-point drier (CPD; Ashford, Kent, UK) with liquid carbon dioxide and coated with a thin layer of gold under vacuum at 15 kV for 90 s (EMITECH K550X, England), and the mucosal side of each sample was observed and imaged using SEM (JSM-IT200, JEOL, Japan).
Transmission electron microscopy (TEM) Samples of liver (1 mm³ each) of hamsters from Groups 1, 2 and 5 in the chronic toxicity study were fixed in Karnovsky's fixative and dehydrated using an ethanol concentration gradient. The samples were embedded in resin via propylene oxide at 60 °C for 48 h. Sections were cut using an Ultracut N ultramicrotome (Reichert-Nissei). The TEM grids holding samples were viewed using a JEM-1010 TEM (JEOL, Japan). TEM images of CNCs were obtained with an accelerating voltage of 100 kV. Statistical analysis Survival rates were statistically analyzed using Kaplan-Meier analysis and Cox regression. All parameters were compared statistically between the normal (control) group and each treatment group. Blood parameters were analyzed using analysis of variance (one-way ANOVA) with a post-hoc Tukey's HSD (Honestly Significant Difference) test, implemented in the IBM SPSS Statistics 19 program (SPSS, Inc., Chicago, IL, USA), and reported as means ± SD. Comparisons yielding p values of 0.05 or lower were regarded as statistically significant. Characteristics of CNCs The segregated nanoparticles of CNCs forming nanoencapsulated curcumin were similar to those in our previous report [40]. The segregated nanoparticles of CNCs were within a size range of 400-1,000 nm. The TEM image, Fig. 1(C), reveals a loose surface detaching from the nanoparticle. After storage at 25 ± 2 °C/60 ± 5 % RH in light-protected and sealed conditions for 6 and 12 months, curcuminoid contents of CNCs remained higher than 97 % of the initial amount, with little or no change in pH and bulk densities (Table 1). Acute toxicity In general, mice engage in high-energy activities, such as climbing around in their cage [47]. Similarly, hamsters spend much time running around their habitat and do a lot of chewing [48]. In acute toxicity testing, all animals treated with high doses (11 g/kg bw in mice and 21.4 g/kg bw in hamsters) exhibited behaviors different from the normal group: they moved more slowly and squeezed or curled themselves against the walls or into corners of their cages a few minutes after the dosing. Half of the animals died within 24 h after administration of a single high dose of CNCs (3 of the 5 hamsters and 3 of the 6 mice), enabling the determination of LD50. The surviving animals recovered after some hours and showed no further signs of toxicity, and none died during the remaining 14 days of observation. Based on Lorke's method [49], the estimated oral LD50 values of CNCs were 8.9 and 16.8 g/kg bw (equivalent to 2.5 and 4.7 g/kg bw of curcumin) for mice and hamsters, respectively. There was no significant change in body weights of mice or hamsters throughout the period of study relative to initial values (p > 0.05 for both) (Fig. S1). Gross pathology and histopathology revealed no necrosis or other severe abnormal changes, but inflammation was found in the liver after high doses (11 g/kg bw in mice and 21.4 g/kg bw in hamsters) (Fig. S2). In the animals treated with a high dose of CNCs, as shown in Table 2, there were significant increases in organ-weight to body-weight ratios of the spleen in mice and of the heart and stomach in hamsters (p < 0.05 for all). Liver weight was significantly increased in hamsters compared to the normal group, but not in mice. Mice treated with a high dose of CNCs exhibited significantly elevated total protein, globulin and BUN, with an increasing trend of ALT and alkaline phosphatase but a decreasing trend of AST.
In hamsters receiving the high dose, there was a non-significant increase of ALT, AST and alkaline phosphatase compared to the normal group (Table 3).
[Fig. 1. Images of the CNCs-based nanodelivery system: (A) CNC powder visualized using FIB-SEM; (B) CNCs after dispersal in water, visualized using SEM; and (C) TEM.]
[Table 1. Physicochemical characteristics of curcumin-loaded nanocomplexes (CNCs) and their stability following storage for 6 or 12 months. Appearance: yellowish-orange, tasteless powder with a mild turmeric odor. Condition: CNCs stored in foil-sealed 5-g packs at 25 ± 2 °C/60 ± 5 % RH (n = 3 lots).]
[Acute-study table note: BNCs = blank nanocomplexes (7.9 g/kg bw in mice or 15.4 g/kg bw in hamsters); CNCs = curcumin-loaded nanocomplexes at low doses (0.1 g/kg bw in mice or 0.2 g/kg bw in hamsters), medium doses (1.1 g/kg bw in mice or 2.1 g/kg bw in hamsters) or high doses (11.0 g/kg bw in mice or 21.4 g/kg bw in hamsters); data are mean ± SD; *p < 0.05, one-way ANOVA; n = number of animals.]
Notably, BNCs-treated mice showed significant elevation of total protein, globulin, ALT, AST, alkaline phosphatase and BUN compared to the normal control group. Significantly elevated levels of total protein, globulin, ALT and alkaline phosphatase were also observed in BNCs-treated hamsters. Survival, clinical observations and body weights Kaplan-Meier plots, Fig. 2(A) and (B), illustrate significant decreases in survival rates of both species of animals treated for 6 months with high doses of CNCs (0.8 g/kg bw in mice and 1.61 g/kg bw in hamsters). The number of animals in each group surviving until euthanasia (251 of 350) is shown in Table 4. Interestingly, BNCs (0.58 g/kg bw in mice and 1.16 g/kg bw in hamsters), as well as the low- and medium-dose CNCs regimens, did not affect survival of the animals. No abnormal clinical signs were evident, nor was food and water intake affected, in any treated animals. There was no significant difference in either species in body weights of experimental groups relative to controls at any time point throughout the period of study (p > 0.05 for both) (Fig. S3). Organ-weight to body-weight ratios Daily consumption of both BNCs and CNCs for 6 months had some effects on the organ weights and organ-weight to body-weight ratios of mice and hamsters (Table 4). Compared to the normal control group, organ-weight to body-weight ratios of stomach, intestine, pancreas, lung, heart and testes were increased in both species according to dose and the conditions of treatment. These changes were most marked in animals receiving the high CNCs doses. Although high-dose CNCs treatment induced significant changes in the weights of some organs, histological study did not show any severe abnormalities. Moreover, most organ-weight to body-weight ratios reverted to normal after 28 days in the high-CNCs recovery groups.
[Table 3. Serum chemistry parameters (mean ± SD) of animals that consumed a single dose of blank nanocomplexes (BNCs) or curcumin-loaded nanocomplexes (CNCs) in the acute toxicity study; doses as above; superscript U, units are mg/dL; NA = not available; *p < 0.05, one-way ANOVA; n = number of animals.]
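The group comparisons reported above (one-way ANOVA with post-hoc Tukey's HSD, per the Statistical analysis section) can be sketched with open-source tools as follows; the organ-weight-to-body-weight ratios below are invented for illustration, and the group sizes are arbitrary.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)

# Hypothetical liver-weight/body-weight ratios (%) for three groups of 12 mice
control = rng.normal(4.8, 0.3, 12)
low_cnc = rng.normal(4.9, 0.3, 12)
high_cnc = rng.normal(5.4, 0.3, 12)

# One-way ANOVA across the groups
f_stat, p_value = stats.f_oneway(control, low_cnc, high_cnc)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc Tukey HSD to identify which pairs of groups differ
values = np.concatenate([control, low_cnc, high_cnc])
labels = ["control"] * 12 + ["low CNC"] * 12 + ["high CNC"] * 12
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```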
Particle adherence SEM photographs of the stomach and small intestine of hamsters in the group treated with high doses of CNCs and in the high-dose CNCs recovery group are shown in Fig. 3. The deposition of CNCs on the stomach and small intestine walls 24 h after oral administration, due to the mucoadhesive effects of the gum polymers, was still apparent 28 days after the last high-dose CNCs treatment. Biochemical analysis, hematological, and coagulation parameters Treatment with blank nanocomplexes and high doses of CNCs yielded elevated levels of blood glucose and creatinine in mice and of blood urea nitrogen in hamsters, as shown in Table 5. The liver function markers ALT and AST, but not alkaline phosphatase, were significantly increased in both animal species under high-dose CNCs treatment compared to the normal control group. In contrast, triglycerides significantly decreased in the BNCs and BNCs recovery groups in mice and in the low-dose CNCs group in hamsters compared with the normal group. Likewise, cholesterol significantly decreased in the high-dose CNCs recovery group relative to the high-dose and normal groups of hamsters. BNCs-recovered hamsters exhibited a significant decrease of cholesterol compared with the BNCs and normal groups. The hematological analysis of male and female mice (Table S1) revealed that RBC and HCT in female mice of the high-dose CNCs group were significantly increased (p < 0.05). The WBC values for female mice were significantly higher in the low-dose and medium-dose CNCs groups (p < 0.05). Similarly, RDW was significantly higher in the female medium-dose, high-dose and high-dose (recovery) CNCs groups (p < 0.05). In female mice of the medium-dose group, the MCH value was significantly lower (p < 0.05). The percentage of monocytes was significantly higher in mice treated with BNCs and low-dose CNCs for 6 months (but returned to normal 28 days after the last dose), while there were significant increases in eosinophils in male mice treated with low-dose CNCs. In addition, eosinophils of hamsters were significantly elevated after recovery from BNCs and high-dose CNCs treatment. Basophils were significantly higher in medium-dose-treated males. Blood coagulation parameters, including PT and aPTT of male and female hamsters, were within normal ranges for all experimental groups. However, mice yielded insufficient whole blood for us to perform this analysis (Table S1). Gross and histopathology There was no significant necrosis and no severe abnormal gross pathology of the liver, kidney, lung, heart, spleen, stomach, intestine, pancreas, testes and ovaries in any of the treatment groups. Histopathological examination did not identify any dose-related abnormalities or lesions, such as severe damage or necrosis, in any animal group, but animals given the high-dose CNCs treatment exhibited slight changes in tissues with some precipitates, indicating mild inflammation (Fig. S4).
[Table 4. Relative organ weights of animals in the chronic toxicity study. BNCs = blank nanocomplexes (0.58 g/kg bw in mice or 1.16 g/kg bw in hamsters); CNCs = curcumin-loaded nanocomplexes at low doses (0.09 g/kg bw in mice or 0.18 g/kg bw in hamsters), medium doses (0.27 g/kg bw in mice or 0.54 g/kg bw in hamsters) or high doses (0.8 g/kg bw in mice or 1.61 g/kg bw in hamsters); data are mean ± SD; *p < 0.05, one-way ANOVA; n = number of animals.]
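Referring back to the survival analysis above (Kaplan-Meier with Cox regression, per the Statistical analysis section), the product-limit estimate itself is simple enough to sketch directly; the event times below are hypothetical and assume that no death and censoring coincide at the same time.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.
    times: follow-up time per animal; events: 1 = death observed, 0 = censored."""
    times, events = np.asarray(times), np.asarray(events)
    order = np.argsort(times)
    times, events = times[order], events[order]
    at_risk, survival, curve = len(times), 1.0, []
    for t, e in zip(times, events):
        if e == 1:                        # survival drops only at observed deaths
            survival *= (at_risk - 1) / at_risk
        curve.append((t, survival))
        at_risk -= 1                      # one fewer animal at risk after each time
    return curve

# Hypothetical high-dose group: deaths at days 60, 95, 140; three survivors censored at day 180
for t, s in kaplan_meier([60, 95, 140, 180, 180, 180], [1, 1, 1, 0, 0, 0]):
    print(f"day {t}: S(t) = {s:.2f}")     # curve ends at S(180) = 0.50
```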
Ultrastructural changes of liver tissue The liver, as one of the main target organs for nanoparticle clearance, was shown to gain weight in hamsters after a single high-dose CNCs treatment (Table 2). Also, increases in biochemical parameters (ALT, AST, globulin and total protein) indicated alterations in liver function in hamsters (Table 4). Thorough observation by TEM occasionally found intense endoplasmic-reticulum stress and destruction of mitochondria in liver samples from the BNCs-treated group and the high-CNCs group compared to the normal group (Fig. 4). Organelle structures, including the endoplasmic reticulum, nucleus and mitochondria, were clearly seen in normal liver samples. Discussion CNCs, formed as a complex of nanoparticles of curcumin in solid dispersion form, could be well dispersed in water to give segregated nanoparticles and mucoadhesive gums, providing sustained release of curcumin in the GI tract. Curcuminoid contents were stable in the CNCs for at least 12 months under the specified storage conditions. This demonstrates the advantage of CNCs, as curcumin is prone to rapid degradation [40]. It has been reported that curcumin can induce DNA damage both in vitro and in vivo [13], potentially due to pro-oxidant effects [50]. Nanoparticles are speculated to enhance absorption [25] and could potentially promote the toxicity of the encapsulated curcumin. The toxicity of nanoparticles might differ according to the encapsulated substances [51]. Here, we assessed the actual LD50 for curcumin in CNCs form in vivo in mice and hamsters. The oral LD50 values of CNCs estimated using Lorke's method were 8.9 and 16.8 g/kg bw (equivalent to 2.5 and 4.5 g/kg bw of curcumin) for mice and hamsters, respectively. Our data could provide the actual LD50 of nanoparticles of curcumin in vivo, which was 4.5-fold higher than the LD50 of curcumin in mice (approximately 2 g/kg bw) [46]. A single dose of CNCs followed by an observation period of 14 days to assess acute toxicity, and daily oral CNCs treatment for 6 months to test chronic toxicity, did not produce obvious toxicologically significant changes in clinical observations or blood chemistry in groups given the low-dose and medium-dose treatments. However, a high dose could produce toxicity in tissues via inflammation-mediated injury in both animal species. A possible mechanism of CNCs-induced toxicity is shown in Fig. 5. Increases in the weights of some organs, in both the acute (Table 2) and chronic (Table 4) toxicity tests, may be due to the mucoadhesive characteristics of CNCs, with additional effects of arabic and xanthan gums [40]. Interestingly, hamsters in the high-CNCs chronic recovery group showed significant increases in the weights of the stomach (1.89-fold) and intestine (1.27-fold) (Table 4; p < 0.05), which was supported by the SEM results (Fig. 3). Notably, this effect persisted even at 28 days after the last dose, as a result of the extended release of CNCs [40,52] and their mucoadhesive properties [38]. In contrast, in mice the increase in weight of the stomach and intestine in the high-dose CNCs-treated group (0.8 g/kg bw) was reversed by 28 days after the last dose (Table 4). The reasons for these species-specific differences are unknown.
[Table 5. Serum chemistry parameters (mean ± SD) of animals that consumed blank nanocomplexes (BNCs) daily or curcumin-loaded nanocomplexes (CNCs) at various doses daily for 6 months in the chronic toxicity study; doses as in Table 4; superscript U, units are mg/dL; NA = not available; *p < 0.05, one-way ANOVA; n = number of animals.]
Moreover, CNCs that were apparent in the stomach and intestinal mucosa of hamsters 24 h after oral administration were still present in both organs, in reduced quantities, 28 days after the last dose, suggesting extended release of CNCs (Fig. 3). Increased organ-weight to body-weight ratios, particularly of the liver and spleen after acute CNCs treatments, and of the pancreas, lung, heart, kidney and testes after 6 months of daily CNCs intake (Tables 2 and 4), might be related to the pharmacokinetics of the nanoparticles, their biodistribution and clearance-related side effects [36]. Those processes occur after absorption of nanoparticles into the bloodstream and distribution into other organs [36]. The clearance of nanoparticles from the body depends on many factors, including particle size, shape, materials and surface modifications [35]. Perhaps the CNCs product, with particle sizes of 400-1,000 nm after dispersal in water as previously described [40], may be eliminated by the reticuloendothelial system or the mononuclear phagocyte system (MPS) [35]. Macrophages might phagocytose CNCs or BNCs and excrete them into the blood circulation at smaller sizes. The kidney and liver are the main clearance organs eliminating them from the body. An overdose of nanoparticles might prolong the elimination process in some organs [36], leading to tissue damage via enhanced production of reactive oxygen species (ROS) by inflammatory cells [53,54] and thus to increased organ weight. The distribution of curcumin to various organs after uptake, such as the liver, kidney, stomach, small intestine and blood, is supported by a previous report [55]. In contrast, some MPS phagocytes that carry CNCs may exert an inhibitory effect on BNCs-induced inflammation via curcumin [56], as proposed in Fig. 5. Daily oral administration of CNCs at low, medium and high doses (0.09, 0.27 and 0.8 g/kg bw, equivalent to 0.025, 0.075 and 0.225 g/kg bw of curcumin, respectively) was defined as the appropriate dosing for testing in mice for 6 months. Those dose levels are close to levels used in a previous study of solid-lipid curcumin particles (SLCP) containing approximately 30 % curcumin (0.18, 0.36 and 0.72 g SLCP/kg bw for the low, medium and high doses, respectively) in rats [46]. High-dose administration of CNCs for 6 months led to 1.86- and 1.47-fold increases of ALT and AST levels in mice, respectively. In hamsters, the same treatment led to a 1.19-fold increase in ALT and a significant 1.42-fold increase in AST, indicating that high-dose and long-term CNCs treatment induced inflammation-mediated liver injury. Notably, these changes were restored to normal levels in the recovery groups by 28 days after the final dose. In addition, although high-dose CNCs-treated mice and hamsters exhibited slight changes in blood glucose (a 1.46-fold increase in mice and a decrease to 0.74-fold in hamsters), these levels were still within the normal range.
Increases in blood glucose observed in the high-dose CNCs groups might be an effect of gum arabic, which has been reported to interfere adversely with electrolyte balance and vitamin D in mice, and to induce hypersensitivity in humans [52]. This, however, requires further studies to identify the exact mechanism. BNCs at a high dose had the same result as high-dose CNCs treatment, producing significant changes in total protein and globulin in the acute toxicity test (Table 3) and showing the same trend in the chronic toxicity test (Table 5), suggesting that a high BNCs dose might induce toxicity. BNCs are composed of cellulose-based materials, mainly ethylcellulose and methylcellulose. These are popular in pharmaceutical technology and their side effects seem to be limited [57], although an overdose of cellulose-based materials can induce cellular damage [58]. Alternatively, an overdose of curcumin could enhance glycolysis by upregulation of metabolism-related genes [59], leading to increased glucose levels. We found that liver-injury markers such as ALT and AST were increased by high-dose and long-term CNCs treatment, but not by low- and medium-dose treatments. Oral consumption of high-dose CNCs for 6 months decreased survival rates in animals (Fig. 2). Adverse effects of CNCs treatment are likely to be dose dependent. In agreement, overdose and long-term administration of curcumin (100 mg/kg for 90 days) could induce redox imbalance in rats, including overproduction of ROS, increased production of the pro-oxidant cytokine IL-6 and decreased antioxidant enzymes (i.e., SOD and GST), leading to oxidative stress-mediated liver injury and inflammatory disorders, which, however, recovered to normal 1 month after cessation of treatment [59]. Likewise, a higher dose of curcumin (400 mg/kg for 15 days) mediated ROS induction, leading to myocardial damage in rats. On the other hand, curcumin can enhance endogenous antioxidant systems at lower doses [60]. Taken together, based on our results and previous studies, it seems likely that high-dose CNCs treatment might induce toxicity in both animal species due to excessive deposition of curcumin and of the nanomaterials (gums and methyl- and ethylcellulose). Several studies have recommended doses of curcumin for various disease conditions ranging from 0.02 up to 2.5 g/person/day (up to around 42 mg/kg bw/day for an individual weighing 60 kg) [61][62][63]. Our chronic toxicity trial revealed that CNCs at doses of up to 0.27 and 0.54 g/kg bw/day (the NOAELs) do not cause obvious toxicity in mice and hamsters, respectively. Based on the results of our study, the NOAEL of curcumin from CNCs might be converted to a maximum recommended starting dose of CNCs for clinical trials of 13.13 and 43.7 mg/person/day (for an individual weighing 60 kg). Also, toxicokinetic/exposure data to correlate with the findings of the study should be obtained in future work; the safety of CNCs will thus be confirmed. Conclusion CNCs segregated to provide nanoparticles after dispersion in water and showed the potential to stabilize curcuminoid contents for at least 12 months under the storage conditions used. Acute and chronic toxicity studies were conducted to confirm the safety of CNCs. A single low or medium dose of CNCs is safe in both mice and hamsters. Likewise, low and medium daily CNCs doses are safe for long-term administration.
We observed that CNCs treatment has the potential to produce toxicity at high doses, but most abnormal parameters returned to normal levels by 28 days after the final dose. Therefore, CNCs exhibit relatively low toxicity and are ready for clinical study in human beings, although their toxicity at high doses requires further study. Declaration of Competing Interest Welltech Biotechnology Co., Ltd, Thailand, the producer and supplier of CNCs and BNCs, had no part in planning and conducting the acute and chronic toxicity testing in this project. All authors declare no conflict of interest. Fig. 5. Possible mechanism by which a high dose of CNCs induces toxicity in mice and hamsters. Curcumin-loaded nanocomplexes (CNCs) with arabic gum and xanthan gum adhere to the stomach wall after oral administration. After dispersal in water, CNCs form an amorphous nanoprecipitate partially covered by gums. In this form, the material moves onwards to the small intestine and is then absorbed into the bloodstream via many pathways. Curcumin-loaded nanoparticles and curcumin released from the nanocomplexes are engulfed by the mononuclear phagocyte system and then distributed to various organs. Phagocytes containing nanoparticles accumulate primarily in the liver, kidney, lung, spleen, pancreas and testes, leading to injuries in those organs via inflammation-mediated ROS production. These injuries were resolved within 28 days after cessation of treatment. Acknowledgements We would like to thank Welltech Biotechnology Co., Ltd, Thailand for producing CNCs and BNCs. Chanakan Jantawong is a graduate student supported by the Cholangiocarcinoma Research Institute (CARI), Khon Kaen University, Khon Kaen, Thailand (No. CARI 02/2561). We would like to thank Prof. David Blair for invaluable suggestions and for editing the manuscript via the publication clinic, Khon Kaen University, Thailand.
2021-07-18T05:27:15.382Z
2021-06-30T00:00:00.000
{ "year": 2021, "sha1": "b0f3baa355298715d8ff6b2c81867261f9d7fdbd", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.toxrep.2021.06.021", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b0f3baa355298715d8ff6b2c81867261f9d7fdbd", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
233875729
pes2o/s2orc
v3-fos-license
Composite grafts for fingertip amputations: a systematic review There is debate in the literature surrounding the management of fingertip amputations. The role of composite grafts lacks clarity in terms of outcomes and complications. Hence, there is a need for an evidence synthesis to guide practice. A search of the databases OVID MEDLINE, PubMed, EMBASE, SCOPUS, The Cochrane Library, and clinical trial registries was conducted, from 1946 to January 2020, using the key terms "fingertip," "digital tip," "digit," "finger," "thumb," "amputation," "replantation," "reattachment," "reimplantation," and "composite graft." Studies reporting primary data on the outcomes of composite grafts of 5 or more digits were included. The studies included in this systematic review ranged in year of publication from 1959 to 2019. Data extraction included demographic details and functional, esthetic and adverse outcomes. Twenty-three articles were included. Outcome data on composite grafts are heterogeneous and little standardization of measurements exists, making interpretation challenging. Identified factors associated with improved outcomes include lower age, more distal amputation levels, a cut mechanism of injury, and decreased time to operation. Smoking is associated with poorer composite graft outcomes. Although survival rates vary greatly, composite grafting may be useful in certain cases and provide good functional and sensation outcomes with good patient satisfaction. This review is compliant with PRISMA guidelines [12]. A systematic review protocol was published [1], and the systematic review was registered a priori: https://www.researchregistry.com. Studies included Original research studies of levels 1-5 of the Oxford Centre for Evidence-based Medicine [13] were considered for inclusion if they reported data concerning the relevant outcomes, as were unpublished data if methods and data were accessible. Duplicate articles and articles not reporting primary data were excluded. Participants The patient population included children and adults receiving non-microsurgical replantation following distal fingertip amputations, with the aim of reviewing outcomes in these cases in order to elucidate the role of non-microsurgical replantation in the management of distal finger amputations. Intervention The interventions included were composite grafting of the distal tip via non-microsurgical methods following fingertip amputation. Any studies in which microsurgical reconstruction was used were excluded. Articles were included if they reported on the survival outcomes of distal fingertip amputations treated with primary composite grafting of the amputated tip. All articles using subcutaneous pocket techniques, "pulp flaps" or microsurgical replantation were excluded, as were articles reporting on data of <5 cases, following previous research [9]. Outcomes The primary outcome measured was graft survival. Secondary outcomes are detailed below. Identification and selection of studies Two independent reviewers (M.R.B. and M.L.L.) screened the title and abstract of each of the published articles for inclusion according to the criteria listed in Tables 1-2. Full-length manuscripts were reviewed for articles which met the inclusion criteria, if no abstract was published or if the abstract did not have sufficient information to determine eligibility. Quality scoring The Grading of Recommendations Assessment, Development and Evaluation (GRADE) system was used to assess the methodological quality of included studies.
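Since the exact database syntax is not reported, the sketch below illustrates how the listed key terms might be combined into a boolean search string; the grouping into "site" and "intervention" blocks is our assumption, not the authors' published strategy.

```python
# Illustrative reconstruction of a boolean search string from the key terms
# listed above; the review's actual database syntax is not reported.
site_terms = ["fingertip", "digital tip", "digit", "finger", "thumb"]
intervention_terms = ["amputation", "replantation", "reattachment",
                      "reimplantation", "composite graft"]

query = "({}) AND ({})".format(
    " OR ".join(f'"{t}"' for t in site_terms),
    " OR ".join(f'"{t}"' for t in intervention_terms),
)
print(query)
# ("fingertip" OR "digital tip" OR ...) AND ("amputation" OR "replantation" OR ...)
```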
Analysis Characteristics of included studies are presented as counts and percentages. Continuous data are expressed as means (or median values where stated). Meta-analysis was not performed as only one study reported comparative data on outcomes of composite grafting compared to other methods of managing distal fingertip amputations. Results The search yielded a total of 5790 articles; after 2061 duplicates were removed, 3729 underwent title and abstract screening (stage 1), and 119 articles underwent full-text screening (stage 2). A total of 23 articles met the full inclusion criteria (Fig. 1) [10, . Article demographics The articles included covered data collection from 1959 to 2019 (Table 3). The majority of the work published on composite grafting outcomes was conducted in Japan (n = 5), followed by the United Kingdom (n = 4) and the USA (n = 4), then Korea (n = 3), Italy (n = 3), Australia (n = 1), Taiwan (n = 1), Turkey (n = 1) and France (n = 1). The highest level of evidence among our included studies was 1b, corresponding to the randomized controlled trial (RCT) by Kusuhara et al [29]. In terms of article quality, every study had a GRADE score of "very low", with the exception of the aforementioned RCT conducted by Kusuhara et al [29], which was graded as "moderate". Patient demographics In total, the number of reported patients included across all studies was 810, with 264 females (Table 4). In addition, Urso-Baiarda et al [35] reported on 108 digits and Imaizumi et al [26] on 18 digits, with the number of patients not specified. The mean age of participants per study ranged from 2.4 [32] to 43.2 years [28] (range 0-74) [28,32], and each article reported on anywhere from 7 to 108 digits [33,35], with a mean of 41.5 digits. The majority of included studies reported outcomes of composite grafting of a single digit per study participant, with five articles reporting outcomes of more than one digit per patient [17,22,24,28,34]. Surgical technique Surgical technique and the reporting of specific operative details varied (Table 5). Classic composite grafting (ie, no modifications) was the most commonly used method, with 19 of the included articles adopting this technique [10,[14][15][16][18][19][20][21][22][23][24][25][26][28][29][30][31][32]34]. The cap technique, whereby the proximal stump is de-epithelialized and the amputated part modified so as to allow maximal contact between the stump and the amputated part, was adopted in three studies [17,27,33]. Fingertip amputations (ie, distal to the DIPJ) almost always involve the nailbed; however, only 11 of the 23 studies specifically describe repair of the nail bed [14][15][16][17][18]20,22,[25][26][27]31], and Murphy et al [32] describe its removal. Part of the management (and "preservation") of the nailbed involves management of the nail; the nail may be removed and sutured back onto the nailbed to act as a splint to guide new nail growth, or discarded due to contamination. When the nail is discarded, other material (most commonly foil) can be used as a splint, or surgeons may not use a splint at all. Three of the 12 articles mentioning nailbed management describe removing and resuturing the nail [22,26,31]. Dagregorio and Saint-Cast [18] and Chen et al [17] stated that the nail bed was preserved. Proximal part trimming was reported in only 3 articles, that is, those using the cap technique [17,27,33]. Functional outcomes In total, ten studies reported on functional outcomes following composite grafting [14][15][16][17][18][19]25,27,30,31] (Table 9).
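As a quick consistency check on the screening flow reported in the Results above, a minimal bookkeeping sketch (our illustration only; the review itself used no code):

```python
# Sketch of the PRISMA screening funnel reported in the Results
# (counts taken from the text; purely illustrative bookkeeping).
identified, duplicates = 5790, 2061
stage1_screened = identified - duplicates      # title/abstract screening
assert stage1_screened == 3729                 # matches the reported figure
stage2_fulltext, included = 119, 23

for label, n in [("records identified", identified),
                 ("duplicates removed", duplicates),
                 ("stage 1: title/abstract screened", stage1_screened),
                 ("stage 2: full-text screened", stage2_fulltext),
                 ("included in the review", included)]:
    print(f"{label:35s} {n:5d}")
```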
Losco et al [30] were the only authors to use an objective measure, grading functional recovery using the Q-DASH score and measuring movement at the IPJ. The results indicate minimal disability [30,41,42] but with lessened motion at the IPJ [43]. The other studies recorded functional outcomes with questionnaires; however, each study used a unique questionnaire with different questions [15][16][17]30,31]. Results based on clinician reports showed that all patients used their hands normally or that all digits were functional [14,18,25,27], with the exception of Douglas [19], who only reported on the functional outcomes of 2 patients. Of the 4 articles that reported on patient satisfaction with the results, the responses were favorable and showed that the majority of patients were pleased with the end result [15,17,27,30]. Discussion Composite grafting is a simple technique for restoring the amputated fingertip in cases where microvascular replantation is not possible. This technique has most frequently been used to repair pediatric fingertip amputations due to the small caliber of affected vessels and the relative regenerative capacity of juvenile tissues [7]. To date, there has been no formal synthesis of results across individual studies. Therefore, we conducted the first systematic review of composite grafting for distal fingertip amputations to investigate whether it is a viable and worthwhile technique and which factors are most predictive of graft survival. A total of 23 individual studies were reviewed in this systematic review. Across all studies, the success rates of composite grafting were highly variable, ranging from 7.7% [20] to 93.5% [17]. Adverse outcomes were common, with infection rates as high as 17% [15] and reoperation rates of up to 56.3% [23]. The functional and sensory outcomes were favorable, with high patient satisfaction. However, cosmetic outcomes were not optimal, as detailed in the questionnaire responses and clinical reports, which show that finger shortening and nail deformities are common. Importantly, however, the evidence available to date was of poor quality. Indeed, only one study reached level 1b (the highest level among those included) according to the Oxford criteria: the RCT by Kusuhara et al [29]. However, this study did not compare composite grafting to alternative methods for managing fingertip amputations not suitable for replantation (ie, stump management by primary closure), but rather compared the success of grafting with and without application of b-FGF. In fact, no comparative studies looked at outcomes of composite grafts versus not grafting, and the majority of published articles were retrospective case series (level 4) [10,14,15,[17][18][19][20][21]23,[25][26][27][28][30][31][32]35]. Another limiting factor was the low number of participants; only a minority of the available studies included > 50 patients [15,16,21,22,31,32,34,35]. A major aim of this systematic review was to investigate factors predictive of graft survival. Smoking status and comorbidities are relevant when using composite grafting in adult patients. Of the 17 studies reporting results in adults, only 7 reported on smoking or comorbidity status [10,15,21,22,27,30,35]. The studies that did report on smoking found, not surprisingly, that smoking was associated with poorer outcomes. A multivariable analysis [22] found that smoking was an independent factor associated with poorer graft healing.
Better graft survival has been linked to decreased time to operation [31], lower age [15,16], clean-cut injuries [21,28], and more distal amputation levels [16,28]. These findings, together with future research, should help clinicians stratify patients at high risk of poor outcomes from composite grafting. A variety of operative techniques were described, including classic composite grafting and the cap technique. The cap technique has been shown to aid healing by providing an increased contact surface between the stump and the amputated part. However, the main limitation of this technique is the resulting finger shortening, which, depending on patient and injury factors, may be significant. A secondary outcome investigated was predictors of poor postoperative outcomes. Adverse events following composite grafting were inconsistently reported among the included studies, and only 17 articles reported adverse events [10,[15][16][17][18][19][20][21][22][23][24][26][27][28]30,32,33]. The overall complication rate was 15.6%. The data on the recovery of composite grafts indicate that adverse effects such as infection and necrosis are common and that reoperation mostly consists of debridement or the use of an additional skin graft or flap procedure [10,[15][16][17][18][19][20][21]23,24,27,30,32,33].
Table 6. Adverse outcomes.
Study | Assessment method | 2PD (mean, mm) | Follow-up | Reported findings
Moiemen and Elliot [31] | Parental questionnaire | - | - | Tender tip: 10 (26%); pain cutting nail: 8 (21%)
Adani et al [14] | Clinician report & 2PD | <7 in all patients | 2 y | No patient complained of dysesthesia or cold symptoms
Kankaya et al [27] | Questionnaire & 2PD | 7.26 | 6 mo | Zone 1 (n = 2): pain and cold intolerance ameliorated after 2 mo; zone 2 (n = 15): patient satisfaction on pain, sensibility and cold intolerance achieved; zone 3 (n = 6): neither pain nor cold intolerance by the third postoperative month
Eo et al [10] | 2PD | 5.5 | - | Some complained of persistent paraesthesia
Chen et al [17] | Questionnaire & 2PD | 6.3 | 6 mo | Numbness over the fingertip: 19 (65.5%); fingertip tenderness: 4 (13.8%)
Butler et al [16] | Parental questionnaire | - | - | Scar tender: 3 (7%); cold intolerance: 7 (17%); hypersensitive: 3 (7%)
Idone et al [25] | 2PD | <5 in all | - | No patient complained of dysesthesia or cold intolerance
Borrelli et al [15] | Questionnaire | … | … | …
One striking finding of this review is the huge variety across the small number of published studies. Interestingly, in the 23 studies, 6 different classification schemes were used to describe the level of amputation. One of the more commonly used, the Ishikawa classification adapted to distal fingertip amputations, categorizes amputations in terms of zones of the fingertip based on the nail. It comprises four zones distal to the DIPJ and takes into account the angle of the amputation [36]. The Hirase classification [23,24] is based on the course of the digital artery, whereas the Allen classification includes reference to bony fragments in the amputated stump and advice for management based on the level [37]. Moreover, the types of injuries sustained were not reported in a standardized fashion, and five articles did not classify the mechanism of injury [23,24,27,29,35]. Finally, the definition of graft survival, the main outcome investigated, also varied significantly between studies. One of the main limitations in the data is the reporting of composite graft healing: success or failure of graft take is defined differently across the included studies, making comparisons of success rates difficult.
Table 9. Functional outcomes and patient satisfaction.
Study | Measurement method | Results | Patient satisfaction
Douglas [19] | Clinician report | Case 3: negligible stiffness; case 4: ankylosis at the distal joint | -
Moiemen and Elliot [31] | Parental questionnaire | Difficulty cutting nail: 11 (29%); digit use "normal": 34 (90%) | -
Adani et al [14] | - | All patients used their hands normally | -
Kankaya et al [27] | Clinician report | - | Zone 1: full functional and aesthetic satisfaction; zone 2: satisfaction with aesthetic and sensation outcomes; zone 3: -
Dagregorio and Saint-Cast [18] | Clinician report | All fingers were functional | -
Chen et al [17] | Questionnaire | 4 (13.8%) experienced limitation in use of the hand | Very satisfied: 24 (82.8%); moderately satisfied: 2 (6.9%); slightly satisfied: 1 (3.4%); completely unsatisfied: 2 (6.9%)
Butler et al [16] | Parental questionnaire | 2 parents (5%) reported a functional deficit | Parents reported ~45% complete graft survival
Idone et al [25] | Clinician report | All patients were able to use their digits normally, including for pinching and picking up small objects | -
Borrelli et al [15] | Questionnaire | Time before using hand/finger in normal activities: … | …
Figure 2. Mean percentage of composite graft survival/take/success [10,[14][15][16][17][18][19][20][21][22]25,[27][28][29][31][32][33][34]. Figure 3. Mean revision operation rate (%) [10,[15][16][17][18][19][20][21]23,24,27,30,32,33].
As an example of this, a few studies define complete or partial take as success, while others do not. This is reflected in the broad range of success rates across the data, which vary from 7.7% [20] to 93.5% [17]. Details of postoperative care, such as assessments of recovery and postoperative instructions, also varied and could add significant variability. Despite this heterogeneity making it difficult to compare results and synthesize data across studies, the results from the 23 articles included in this review suggest that composite grafting is a successful management technique for distal fingertip amputations not suitable for microsurgical reconstruction, and often yields good functional and sensation outcomes. Cosmetic outcomes may not be optimal; however, this must be weighed against the outcomes of primary closure of the stump, which results in loss of the nail complex. Future studies should build on or adopt previously used classification systems, such as the Ishikawa classification, which has the advantage of detailing the angle of amputation, which may be significant. Furthermore, future work should use clear definitions of graft success to facilitate homogeneity. Conclusions Composite grafting may be a useful technique in the management of distal fingertip amputations in adults and children when microsurgical anastomosis is not possible, and may yield good functional and sensation outcomes with good patient satisfaction. However, cosmetic outcomes are less successful, with nail deformity and digit shortening commonly reported. Adverse outcomes are also commonly reported. The currently available evidence suggests that composite grafting success is higher in children with more distal amputation levels by a cut mechanism who undergo composite grafting within a few hours of injury. The currently available data on composite grafting for distal fingertip amputations are extremely heterogeneous, and synthesis of results is difficult for this reason. Little standardization exists for detailing injury, amputation, operative or follow-up information, and several classification systems are in use.
How optimal healing is defined is also a major limitation to interpreting the success of composite grafting. This is reflected in the rates of composite graft take, which vary widely. Further research should aim to address this by using standardized methods of collecting data. Figure 4. Mean digit shortening (mm) [15,27,30,33]. Figure 5. Mean two-point discrimination (mm) [10,17,27,30,33].
2021-05-07T13:14:37.990Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "b5a141abd7743a60fa2b1a23e52dcc5aaaa70698", "oa_license": "CCBY", "oa_url": "https://journals.lww.com/ijsshortreports/Fulltext/2021/01010/Composite_grafts_for_fingertip_amputations__a.4.aspx", "oa_status": "HYBRID", "pdf_src": "WoltersKluwer", "pdf_hash": "b5a141abd7743a60fa2b1a23e52dcc5aaaa70698", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
238582688
pes2o/s2orc
v3-fos-license
Multi-parameter analysis of the obstacle scattering problem We consider the acoustic field scattered by a bounded impenetrable obstacle and we study its dependence upon a certain set of parameters. As usual, the problem is modeled by an exterior Dirichlet problem for the Helmholtz equation $\Delta u + k^2 u = 0$. We show that the solution $u$ and its far field pattern $u_\infty$ depend real analytically on the shape of the obstacle, the wave number $k$, and the Dirichlet datum. We also prove a similar result for the corresponding Dirichlet-to-Neumann map. Introduction Understanding how the shape of an object impacts a certain property is a very old problem and has a variety of applications. We may think, for example, of the problem of finding the best design to maximize some sort of efficiency, or of problems related to non-destructive testing methods, like the problem of finding the shape of an inclusion from a set of measurements taken on the outer boundary of an object, or the inverse scattering problem, where the shape of an obstacle is inferred from measurements of a scattered wave. In mathematics, the property that one wants to analyze is often associated with the solution of a boundary value problem or with a quantity related to the solution by a certain functional. Then, understanding how the shape impacts a specific property amounts to studying the dependence of the solution of the boundary value problem upon perturbations of the domain of the partial differential equation. In mathematical jargon, the problem of finding an optimal configuration that maximizes a shape functional goes under the name of shape optimization. The reader may find some references in the monographs by Henrot and Pierre [16], Novotny and Sokołowski [30], and Sokołowski and Zolésio [34]. The problems of inferring a shape from measurements on the boundary of an outer domain or from a scattered wave are known as inclusion detection and inverse scattering problems, respectively, and are both examples of inverse problems. For some references we mention the books of Colton and Kress [4] and Kirsch [20]. A preliminary task that is common to shape optimization and the above-mentioned inverse problems is that of understanding the regularity of the map that associates the shape of an object with the solution of the boundary value problem and with the specific quantity under consideration. For most techniques, indeed, it is desirable to have at least some sort of differentiability (as in Kirsch [19], where the differentiability of the far field pattern is used in the numerical analysis of an inverse scattering problem). It is not surprising, then, that many papers deal with the differentiability properties of shape functionals, and our paper is one of these. More specifically, we examine an acoustic obstacle scattering problem and study the dependence of the solution and of its far field pattern upon perturbations of the wave number, the Dirichlet datum, and the shape of the obstacle. We also consider the pullback of the Dirichlet-to-Neumann operator and its dependence on the wave number and the shape of the obstacle. Among the works that precede our paper with similar results we mention those of Potthast [31,32,33], where the aim is to prove that the layer potentials of the Helmholtz equation are Fréchet differentiable functions of the support of integration. Potthast's results are obtained in the framework of Schauder spaces, and the final goal is that of analyzing the domain derivative of the far field pattern.
Related problems are studied in the papers of Haddar and Kress [14], Hettlich [17], Kirsch [19], and Kress and Päivärinta [21]. For similar differentiability results, but for the elastic scattering problem, we mention Charalambopoulos [1]. Finally, the case of Lipschitz domains has been studied by Costabel and Le Louër [5,6,29] in the framework of Sobolev spaces. The novelties that we bring to this list are of two kinds. On the one hand, the regularity properties that we prove are stronger than Fréchet differentiability. More specifically, we obtain real analyticity results. On the other hand, we do not confine ourselves only to the shape of the obstacle, but consider the joint regularity upon the wave number, the Dirichlet datum, and the shape. So, for example, we prove that the far field pattern is a real analytic map of the wave number, the Dirichlet datum, and the shape of the obstacle (a triple that we think of as a single variable in a certain product Banach space). Incidentally, we observe that there are very few results in the literature that go beyond the differentiability of shape functionals. Remarkable examples are some recent works on shape holomorphy: by Jerez-Hanckes, Schwab, and Zech [18], which deals with the electromagnetic wave scattering problem; by Cohen, Schwab, and Zech [2], about the stationary Navier-Stokes equations; and by Henríquez and Schwab [15], on the Calderón projector for the Laplacian in $\mathbb{R}^2$. We now introduce the geometry of the problem. We fix $\alpha \in\,]0,1[$ and a bounded open connected subset $\Omega$ of $\mathbb{R}^3$ of class $C^{1,\alpha}$ such that $\mathbb{R}^3 \setminus \overline{\Omega}$ is connected. Here we note that, if $\Omega$ is a set, the symbol $\overline{\Omega}$ denotes its closure. Also, if $z \in \mathbb{C}$, we denote by $\overline{z}$ the conjugate of the complex number $z$. For the definition of sets and functions of the Schauder class $C^{j,\alpha}$ ($j \in \mathbb{N}$) we refer, e.g., to Gilbarg and Trudinger [13]. We also note that, if not otherwise specified, all the functions in the paper are complex-valued. To consider perturbations of the shape of the obstacle, we take the set $\Omega$ of (1) as a reference set. Then we introduce a specific class $\mathcal{A}^{1,\alpha}_{\partial\Omega}$ of $C^{1,\alpha}$-diffeomorphisms from $\partial\Omega$ to $\mathbb{R}^3$: $\mathcal{A}^{1,\alpha}_{\partial\Omega}$ is the set of functions of class $C^{1,\alpha}(\partial\Omega,\mathbb{R}^3)$ that are injective and have injective differential at all points of $\partial\Omega$. Condition (3) is known as the outgoing Sommerfeld $(k)$-radiation condition. From the point of view of physics, solutions of the Helmholtz equation that satisfy the outgoing Sommerfeld condition describe waves that scatter from a source situated in a bounded domain. In particular, waves with sources situated at infinity do not satisfy condition (3). For $k \neq 0$, the Sommerfeld condition implies the decay at infinity of $u(x)$, and thus it is stronger than the last condition of problem (2) (cf., e.g., Colton and Kress [3, Chap. 3, Rem. 3.4]). For $k = 0$, this is no longer the case, as one can easily verify by taking $u$ identically constant. In particular, for $k = 0$ a solution $u$ of (2) is a harmonic function that, by the last condition of the system, is also harmonic at infinity (see Folland [12, Chap. 2]). Then, in this case it is the Sommerfeld condition that follows from the decay at infinity of $u(x)$ (see, e.g., Folland [12, Prop. 2.75]). Either way, from the Sommerfeld condition if $k \in \mathbb{C} \setminus \{0\}$ and $\mathrm{Im}\,k \geq 0$, or from the decay of $u(x)$ if $k = 0$, we can see that problem (2) has a unique solution in $C^{1,\alpha}_{\mathrm{loc}}(E[\phi])$ for every choice of $\phi \in \mathcal{A}^{1,\alpha}_{\partial\Omega}$, $k \in \mathbb{C}$ with $\mathrm{Im}\,k \geq 0$, and $g \in C^{1,\alpha}(\partial\Omega)$ (cf. Colton and Kress [3, Chap. 3] for the case $k \neq 0$ and Folland [12, Chap. 3] for $k = 0$).
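For reference, the standard form of the outgoing Sommerfeld $(k)$-radiation condition for the Helmholtz equation in $\mathbb{R}^3$ — in the usual Colton–Kress convention, which we assume is the form of (3) — reads

\[
\lim_{r \to \infty} r\left(\frac{\partial u}{\partial r}(x) - ik\,u(x)\right) = 0, \qquad r = |x|,
\]

uniformly with respect to the direction $x/|x|$.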
From now on, we denote such a solution by $u[\phi,k,g]$. We stress that we decided to state problem (2) including both the Sommerfeld condition and the decay at infinity for exactly this reason, that is, to have a unique solution both when $k \neq 0$ and when $k = 0$. Doing so, we can study the dependence of the solution $u[\phi,k,g]$ upon the wave number $k \in \mathbb{C}$ with $\mathrm{Im}\,k \geq 0$ in a unified way, without the need of introducing two different problems for $k \neq 0$ and $k = 0$. We also observe that there exists a function $u_\infty[\phi,k,g]$, defined on the boundary $\partial B_3(0,1)$ of the three-dimensional unit ball $B_3(0,1)$ and with values in $\mathbb{C}$, which appears in the asymptotic expansion of the solution at infinity. For $k \neq 0$, $u_\infty[\phi,k,g]$ is known as the far field pattern of $u[\phi,k,g]$ (see, e.g., Colton and Kress [3, Chap. 3]), while for $k = 0$ it admits an expansion in spherical harmonics ($Y_j$ denotes a spherical harmonic of degree $j$). Both for $k \neq 0$ and $k = 0$, $u_\infty[\phi,k,g]$ can be computed from the solution $u[\phi,k,g]$ by formula (4), where $R > 0$ has to be taken large enough so that $I[\phi] \subseteq B_3(0,R)$ and where $\nu_{B_3(0,R)}$ denotes the outward unit normal to $\partial B_3(0,R)$. By the divergence theorem, we can also verify that the integral on the right-hand side of (4) does not depend on the specific choice of $R$. From the point of view of physics, the far field pattern represents the main directional (angular) part of a wave away from a scattering object. In inverse scattering theory, one of the main problems is that of reconstructing the properties of an object starting from the knowledge of the far field pattern. Moreover, if $\phi \in \mathcal{A}^{1,\alpha}_{\partial\Omega}$ and $k \in \mathbb{C}$ with $\mathrm{Im}\,k \geq 0$, we introduce the pullback of the Dirichlet-to-Neumann operator $D_{(\phi,k)}$ from $C^{1,\alpha}(\partial\Omega)$ to $C^{0,\alpha}(\partial\Omega)$ as the linear operator that takes the Dirichlet datum $g$ to the normal derivative of the solution $u[\phi,k,g]$. Our aim is to investigate the dependence of the solution $u[\phi,k,g]$ and of its far field pattern $u_\infty[\phi,k,g]$ upon the triple $(\phi,k,g)$, and of the Dirichlet-to-Neumann operator $D_{(\phi,k)}$ upon the pair $(\phi,k)$. As mentioned above, the rationale of this paper is to prove regularity properties that go beyond Fréchet differentiability. More specifically, we do not confine ourselves to the dependence on the shape: we study the joint dependence on the triple $(\phi,k,g)$ and we prove (joint) real analyticity results. So, for example, in Theorem 4.5 we show that the map taking $(\phi,k,g)$ to the far field pattern is real analytic; here $\mathbb{C}_+$ denotes the set of complex numbers $k$ with $\mathrm{Im}\,k \geq 0$. In Theorems 4.1 and 4.3 we prove similar results for the solution $u[\phi,k,g]$ and for its normal derivative. In Corollary 4.4 we deduce from Theorem 4.3 a corresponding result for the pullback of the Dirichlet-to-Neumann map. We stress here that for us the word "analytic" always means "real analytic." For the definition and properties of real analytic operators we refer to Deimling [11, §15]. Our analysis relies on the results of [8], where the authors consider layer potentials associated with a family of fundamental solutions of second order differential operators with constant coefficients depending on a parameter. The authors prove the real analytic dependence of the layer potentials upon variations of the diffeomorphism, the density, and the parameter. In the present paper we apply the results of [8] to the $k$-dependent fundamental solution $-\frac{1}{4\pi|x|}e^{ik|x|}$, $x \in \mathbb{R}^3 \setminus \{0\}$, of the Helmholtz equation $\Delta u + k^2 u = 0$.
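Under the standard Colton–Kress normalization — an assumption about the convention adopted for the expansion defining $u_\infty[\phi,k,g]$ above — the far field pattern appears in the asymptotic behavior

\[
u[\phi,k,g](x) = \frac{e^{ik|x|}}{|x|}\left(u_\infty[\phi,k,g]\!\left(\frac{x}{|x|}\right) + O\!\left(\frac{1}{|x|}\right)\right) \qquad \text{as } |x| \to \infty .
\]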
We also mention the work of Lanza de Cristoforis and Rossi [25] on layer potentials of the Helmholtz equation, where a different family of fundamental solutions is considered (see also the previous work [24] by the same authors, which deals with harmonic layer potentials, and [7] for the case of higher order operators). Moreover, analyticity results for integral operators and methods of potential theory have been exploited in the monograph [9] to obtain real analytic continuation properties of the solutions of singularly perturbed boundary value problems. Finally, we point out that an analysis similar to the one of the present paper has been carried out by the authors for other physical quantities arising in fluid mechanics and in material science (see [10,27,28] for the longitudinal fluid flow along a periodic array of cylinders and the effective conductivity of a periodic two-phase composite material). The paper is organized as follows. Section 2 contains preliminaries of classical potential theory for the Helmholtz equation. In Section 3 we transform problem (2) into an equivalent integral equation. Finally, in Section 4 we prove our main results on the analyticity of functions related to problem (2). Preliminaries of potential theory Let $\alpha \in\,]0,1[$ and let $\tilde\Omega$ be a bounded open connected subset of $\mathbb{R}^3$ of class $C^{1,\alpha}$. We denote by $\nu_{\tilde\Omega}$ the outward unit normal to $\partial\tilde\Omega$ and by $d\sigma$ the area element on $\partial\tilde\Omega$. We remark that, in this section, $\tilde\Omega$ is a generic open subset of $\mathbb{R}^3$ that we use as a dummy to define some notation and write some general results. Instead, the set $\Omega$ introduced in (1) is a reference domain that we keep fixed for the whole paper. Our method is based on classical potential theory. In order to construct layer potentials, we introduce for $k \in \mathbb{C}$ the function $S(k,x) \equiv -\frac{1}{4\pi|x|}e^{ik|x|}$, $x \in \mathbb{R}^3 \setminus \{0\}$. For $k \neq 0$, $S(k,x)$ is a standard fundamental solution of the Helmholtz equation $\Delta u + k^2 u = 0$ that satisfies the outgoing Sommerfeld $(k)$-radiation condition. For $k = 0$, $S(0,x)$ is a standard fundamental solution of the Laplace equation $\Delta u = 0$, that is, $S(0,x) = -\frac{1}{4\pi|x|}$, and is harmonic at infinity. Then, we introduce the layer potentials associated with the fundamental solution $S(k,\cdot)$; we set them for all $\mu \in C^{0}(\partial\tilde\Omega)$. Here above, $DS(k,\xi)$ denotes the gradient of $S(k,\cdot)$ computed at the point $\xi \in \mathbb{R}^3 \setminus \{0\}$. We also clarify that, in this paper, $\partial/\partial\nu_{\tilde\Omega}(y)$ is the partial derivative (in the normal direction) with respect to the $y$ variable, whereas $\partial/\partial\nu_{\tilde\Omega}(x)$ denotes the partial derivative with respect to the $x$ variable. This is why a $-$ (minus) sign appears in front of the last integral. We also set the corresponding boundary operators for all $\mu \in C^{0}(\partial\tilde\Omega)$. In Theorems 2.1 and 2.2 below, we collect some well-known properties of layer potentials (cf., e.g., [8] with Lanza de Cristoforis, Colton and Kress [3], and Lanza de Cristoforis and Rossi [24,25]). Theorem 2.1. Let $\alpha \in\,]0,1[$ and let $\tilde\Omega$ be a bounded open connected subset of $\mathbb{R}^3$ of class $C^{1,\alpha}$. Let $k \in \mathbb{C}$ be such that $\mathrm{Im}\,k \geq 0$. Then the following statements hold. Our approach is based on integral equations. More precisely, in order to study problem (2), we convert it into an equivalent integral equation. We do so by exploiting a representation formula for the solution $u[\phi,k,g]$ in terms of single and double layer potentials. Therefore, we now show the validity of the following variant of the result of Colton and Kress [3, Thm. 3.33] regarding the solvability of the exterior Dirichlet problem for the Helmholtz equation by means of a combined double and single layer potential. Theorem 2.3.
Let $\alpha \in\,]0,1[$ and let $\tilde\Omega$ be a bounded open connected subset of $\mathbb{R}^3$ of class $C^{1,\alpha}$ such that $\mathbb{R}^3 \setminus \overline{\tilde\Omega}$ is connected. Let $k \in \mathbb{C}$ be such that $\mathrm{Im}\,k \geq 0$. Then the following statements hold. (i) The integral operator $T$ from $C^{1,\alpha}(\partial\tilde\Omega)$ to itself defined below, where $I$ denotes the identity operator, is a linear homeomorphism. (ii) Let $\Gamma \in C^{1,\alpha}(\partial\tilde\Omega)$. Then problem (5) has a unique solution $u \in C^{1,\alpha}_{\mathrm{loc}}(\mathbb{R}^3 \setminus \tilde\Omega)$, where $\mu \in C^{1,\alpha}(\partial\tilde\Omega)$ is delivered by the corresponding integral equation. Proof. We first consider statement (i). We modify the proof of Colton and Kress [3, Thm. 3.33]. We first note that, by Theorems 2.1 (ii) and 2.2 (iii), by the continuity of the single layer potential, by the compactness of the embedding of $C^{1,\alpha}(\partial\tilde\Omega)$ in $C^{0,\alpha}(\partial\tilde\Omega)$, and by the continuity of the restriction operator from $C^{1,\alpha}(\overline{\tilde\Omega})$ to $C^{1,\alpha}(\partial\tilde\Omega)$, the relevant operator is compact from $C^{1,\alpha}(\partial\tilde\Omega)$ to itself. Therefore, $T$ is a Fredholm operator of index 0. As a consequence, to show that $T$ is invertible, it suffices to prove that it is injective. So let $\psi \in C^{1,\alpha}(\partial\tilde\Omega)$ be such that $T\psi = 0$. Then, by the continuity of the single layer potential and by the jump formula for the double layer potential (see Theorem 2.2 (iii)), the corresponding function $u \in C^{1,\alpha}_{\mathrm{loc}}(\mathbb{R}^3 \setminus \tilde\Omega)$ solves the homogeneous exterior Dirichlet problem with $\lim_{x \to \infty} u(x) = 0$, and thus, by the uniqueness of the solution of problem (5) (cf. Colton and Kress [3, Chap. 3] for the case $k \neq 0$ and Folland [12, Chap. 3] for $k = 0$), $u$ vanishes. Next we set $u^{\#}$ accordingly. Clearly, $u^{\#} \in C^{1,\alpha}(\overline{\tilde\Omega})$ and, by the jump relations for the layer potentials (see Theorems 2.1 and 2.2), we have the corresponding boundary values. Then, the first Green identity (cf., e.g., Colton and Kress [3, (3.4), p. 68]) implies identity (6). Taking the real part in (6), we obtain (7), and taking the imaginary part in (6), we get (8). Now, if $\mathrm{Re}\,k \neq 0$, then equation (8) implies that $\psi = 0$ (we also remember that $\mathrm{Im}\,k \geq 0$), and if $\mathrm{Re}\,k = 0$, then equation (7) implies that $\psi = 0$. Either way, we have $\psi = 0$ and statement (i) follows. The validity of statement (ii) follows from statement (i), from the jump formulas for the double layer potential of Theorem 2.2, and from the continuity of the single layer potential. We now introduce a technical lemma about the real analytic dependence upon the diffeomorphism $\phi$ of some maps related to the change of variables in integrals and to the outer normal field. For a proof we refer to Lanza de Cristoforis and Rossi [24, p. 166]. Moreover, the map $\tilde\sigma[\cdot]$ from $\mathcal{A}^{1,\alpha}_{\partial\Omega}$ to $C^{0,\alpha}(\partial\Omega)$ is real analytic. (ii) The map from $\mathcal{A}^{1,\alpha}_{\partial\Omega}$ to $C^{0,\alpha}(\partial\Omega,\mathbb{R}^3)$ that takes $\phi$ to $\nu_{I[\phi]} \circ \phi$ is real analytic. By the results of [8] and the definition of $S(k,\cdot)$, we deduce the following lemma on the real analyticity of some maps related to the $\phi$-pullback of layer potentials and their derivatives (see also Lanza de Cristoforis and Rossi [25] and Lanza de Cristoforis [26, §3]). Proof. By a straightforward computation, one verifies that the $k$-dependent families of fundamental solutions $S(k,\cdot)$ and of differential operators $P[k](u) \equiv \Delta u + k^2 u$ satisfy the assumption in [8, (1.1)]. Then the validity of statements (i)-(iv) follows by [8, Thm. 5.6].
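As an illustration of the energy argument behind (6)-(8) — a standard computation, sketched here in our own notation and not claimed to reproduce the authors' exact displays — the first Green identity applied to $u^{\#}$, which solves $\Delta u^{\#} + k^2 u^{\#} = 0$ in $\tilde\Omega$, gives

\[
\int_{\partial\tilde\Omega} \overline{u^{\#}}\,\frac{\partial u^{\#}}{\partial \nu_{\tilde\Omega}}\,d\sigma
= \int_{\tilde\Omega}\left(|\nabla u^{\#}|^{2} - k^{2}|u^{\#}|^{2}\right)dx .
\]

Taking real and imaginary parts of such an identity yields two relations from which $\psi = 0$ is deduced, according to whether $\mathrm{Re}\,k$ vanishes.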
where $\theta \in C^{1,\alpha}(\partial\Omega)$ is the unique solution of the corresponding integral equation. In view of the previous Proposition 3.1, we find it convenient to introduce, for all $(\phi,k) \in \mathcal{A}^{1,\alpha}_{\partial\Omega} \times \mathbb{C}$, the auxiliary operator $\Lambda(\phi,k)$, acting on all $\theta \in C^{1,\alpha}(\partial\Omega)$. Then we can rewrite the integral equation (9) as (10). We plan to show that (10) has a unique solution $\theta[\phi,k,g]$ that depends analytically on $(\phi,k,g)$. To do so, we will show that the map that takes $(\phi,k)$ to $\Lambda(\phi,k)$ is real analytic and invertible, and then we will exploit the real analyticity of the inversion map. We begin by proving that $(\phi,k) \mapsto \Lambda(\phi,k)$ is real analytic from $\mathcal{A}^{1,\alpha}_{\partial\Omega} \times \mathbb{C}$ to the space $\mathcal{L}\big(C^{1,\alpha}(\partial\Omega), C^{1,\alpha}(\partial\Omega)\big)$ of linear bounded operators from $C^{1,\alpha}(\partial\Omega)$ to itself, equipped, as usual, with the operator norm. Proof. By Lemma 2.5 the relevant maps are real analytic. We deduce that the corresponding map is real analytic. Since $\tilde\Lambda$ is linear and continuous with respect to the variable $\theta$, the right-hand side equals a partial Fréchet differential of an analytic map, and is therefore analytic. Hence $\tilde\Lambda$ is analytic on $\mathcal{A}^{1,\alpha}_{\partial\Omega} \times \mathbb{C} \times C^{1,\alpha}(\partial\Omega)$ and, since it does not depend on $\theta$, we conclude that $\Lambda$ is analytic on $\mathcal{A}^{1,\alpha}_{\partial\Omega} \times \mathbb{C}$. Now we find it convenient to introduce the set $\mathbb{C}_+$ of complex numbers with nonnegative imaginary part. In the following proposition we see that $\Lambda(\phi,k)$ is an isomorphism for all $(\phi,k) \in \mathcal{A}^{1,\alpha}_{\partial\Omega} \times \mathbb{C}_+$. Proposition 3.3. Let $\alpha$, $\Omega$ be as in (1). For all $(\phi,k) \in \mathcal{A}^{1,\alpha}_{\partial\Omega} \times \mathbb{C}_+$ the operator $\Lambda(\phi,k)$ is an isomorphism (i.e., a linear homeomorphism) from $C^{1,\alpha}(\partial\Omega)$ to itself. Proof. Since $\Lambda(\phi,k)$ is linear and continuous, it suffices to show that it is bijective; then, by the open mapping theorem, we deduce that it is an isomorphism. The fact that $\Lambda(\phi,k)$ is a bijection follows by Theorem 2.3 and by noting that the map from $C^{1,\alpha}(\phi(\partial\Omega))$ to $C^{1,\alpha}(\partial\Omega)$ that takes $\mu$ to $\theta \equiv \mu \circ \phi$ is a bijection. By Proposition 3.3 it makes sense to define the map that takes a triple $(\phi,k,g)$ to the unique solution $\theta[\phi,k,g]$ of equation (10). We now prove that the map above is real analytic. Since $\mathbb{C}_+$ is not open, we clarify that this means that the map has a real analytic continuation on an open neighborhood of every point of its domain. Proposition 3.4. Let $\alpha$, $\Omega$ be as in (1). Then the map from $\mathcal{A}^{1,\alpha}_{\partial\Omega} \times \mathbb{C}_+ \times C^{1,\alpha}(\partial\Omega)$ to $C^{1,\alpha}(\partial\Omega)$ that takes $(\phi,k,g)$ to $\theta[\phi,k,g]$ is real analytic. 4 Analysis of the solution of problem (2) and of associated functionals We are now ready to exploit the intermediate result of Proposition 3.4 on the solutions of the equivalent integral equation (10) to prove our main theorems. In particular, Proposition 3.1 gives a representation of the solution of problem (2) by means of layer potentials with a density that, by Proposition 3.4, depends analytically upon $(\phi,k,g)$. Then we can use Proposition 3.4 to prove a series of results on the analyticity of functions related to problem (2). We start with a result on the analyticity of the solution $u[\phi,k,g]$. Remark 4.2. We note that in Theorem 4.1 we have chosen the target space $C^2(\overline{\Omega'})$ for the sake of simplicity. Indeed, by standard elliptic regularity theory, the solution $u[\phi,k,g]$ is real analytic in the interior of its domain. Therefore, we can easily replace the target space $C^2(\overline{\Omega'})$ with $C^j(\overline{\Omega'})$ for any $j \in \mathbb{N}$, or even with a suitable space of analytic functions. Next we consider the normal derivative of the solution.
2021-10-12T01:34:16.652Z
2021-10-11T00:00:00.000
{ "year": 2021, "sha1": "20e868fe580871dbcea0d888b05953aebbcdaafa", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2110.05393", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "20e868fe580871dbcea0d888b05953aebbcdaafa", "s2fieldsofstudy": [ "Physics", "Mathematics", "Engineering" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
73563315
pes2o/s2orc
v3-fos-license
Magnetic ordering and non-Fermi-liquid behavior in the multichannel Kondo-lattice model Scaling equations for the Kondo lattice in the paramagnetic and magnetically ordered phases are derived to two-loop order with account of spin dynamics. The results are applied to describe various mechanisms of the non-Fermi-liquid (NFL) behavior in the multichannel Kondo-lattice model, where a fixed point occurs in the weak-coupling region. The corresponding temperature dependences of electronic and magnetic properties are discussed. The model naturally describes formation of a magnetic state with a soft boson mode and a small moment value. An important role of Van Hove singularities in the magnon spectral function is demonstrated. The results are rather sensitive to the type of magnetic ordering and space dimensionality, the conditions for NFL behavior being more favorable in the antiferromagnetic and 2D cases. Introduction A number of 4f- and 5f-compounds, including so-called Kondo lattices and heavy-fermion systems, possess anomalous electronic properties, e.g., giant values of the T-linear electronic specific heat C(T) and of the magnetic susceptibility χ_m(T) [1]. The magnetism of such systems also demonstrates unusual features, including formation of an antiferro- or ferromagnetic state with a small ordered moment value. A common explanation of the heavy-fermion behavior is based on the Kondo effect. Unlike the one-impurity situation, the competition between the Kondo screening of regularly arranged magnetic moments and intersite magnetic interactions is of great importance in the lattice case. As a result, smearing of the Kondo singularities occurs on the scale of the characteristic spin-dynamics frequency ω. At the same time, ω itself acquires renormalizations due to the Kondo screening. A scaling consideration of this renormalization process in the s-f exchange model [2] yields, depending on the values of the bare parameters, both the "usual" states (a non-magnetic Kondo lattice or a magnet with weak Kondo contributions) and the peculiar magnetic Kondo-lattice state. There are a number of theoretical mechanisms proposed to describe the NFL state, both single-site and intersite effects being discussed. In particular, proximity to magnetic quantum phase transitions [6] should be mentioned. The NFL behavior in the M-channel Kondo model (especially in the large-M limit) was extensively investigated in the one-impurity case [7,8,9,10,11]. Physically, this behavior is connected with overscreening of the impurity spin by conduction electrons. The model permits a consistent scaling investigation since the fixed point is within the weak-coupling region (however, the marginal case M = 2 has some peculiarities). On the other hand, the lattice case is more difficult, and only special approaches were used, in particular for one-dimensional models [8,12] and for infinite space dimensionality [13]. In the present paper we start from the standard microscopic model of a periodic Kondo lattice and treat the interplay of the on-site Kondo screening and intersite exchange interactions within a scaling approach. We will demonstrate that, besides the standard one-impurity NFL mechanism, "soft" boson branches can be formed during the renormalization process, the role of singularities in the spin spectral function being important for the NFL behavior. Earlier a similar consideration was performed in Refs.
[2,14], where the NFL behavior in M = 1 and large-M Kondo lattices was treated within a simple approximation corresponding to one-loop scaling (in the pseudofermion representation). This approach yields NFL behavior in the formal limit M → ∞ (where the coupling constant is unrenormalized, which is similar to the occurrence of a fixed point), but for any realistic M the NFL regime is achieved only in a very narrow interval of the bare coupling constant (near the critical value for the magnetic quantum phase transition). Thus this approximation is insufficient to describe the NFL state consistently. In the present work we perform the next-leading scaling analysis, which radically changes the situation. In Sect. 2 we write down the scaling equations in the one-impurity case and in the lattice situation (i.e., with account of spin dynamics). In Sect. 3, results of numerical calculations are presented. In Sect. 4 we discuss the physical consequences. Details of the derivation of the scaling equations are presented in the Appendices. Scaling equations To describe a Kondo lattice, we use the degenerate-band (multichannel) periodic s-f exchange model (1), where t_k is the band energy, S_i are spin-1/2 operators, I is the s-f exchange parameter, σ are the Pauli matrices, and m = 1...M is the channel index. For the convenience of constructing perturbation theory, we explicitly include the Heisenberg f-f exchange interaction H_f in the Hamiltonian, although in fact this interaction is usually the indirect RKKY coupling. In the more general SU(N) ⊗ SU(M) model we have σ = 1...N and the Hamiltonian can be written as in [10]. A somewhat more realistic model including angular momenta is discussed in Ref. [2]; generalization to arbitrary spin is also possible (see, e.g., Ref. [15]). Similar to Ref. [2], we use the "poor man's scaling" approach [16]. In this method one considers the dependence of the effective (renormalized) model parameters on the cutoff parameter C < 0, which arises when picking out the Kondo singular terms on approaching the Fermi level. To describe the renormalization process we introduce the dimensionless coupling constants, where ρ is the bare electron density of states per channel at the Fermi level. In the one-impurity case the scaling behaviour is governed by the beta function. At M > N the fixed point g* = N/M (the zero of β(g)) lies in the weak-coupling region, which makes possible the successful application of perturbation and renormalization-group approaches. The scaling equation reads as given, where the cutoff energy D is defined by g_ef(−D) = g. Solving this equation yields (6), with ∆ = N/M and the Kondo temperature T_K. It should be noted that we have no divergence of g_ef(ξ), and the power-law critical behavior in (6) takes place in a wide region, including |C| > T_K [9]. Generally, the critical exponents are defined by the slope ∆ = β′(g*) of the beta function at the fixed point. Taking into account higher orders in 1/M one has (8), the latter value being in agreement with the exact results of the Bethe ansatz and conformal field theory (see Ref. [10]). The corresponding value of g* for N = 2 reads as in [9], which differs weakly from ∆. Using the results of Appendices A and B, we can write down the system of scaling equations for the paramagnetic (PM), ferromagnetic (FM) and antiferromagnetic (AFM) phases in the lattice case. Similar to Ref. [2], but taking into account next-leading contributions, we find the equation for I_ef by picking up in the sums in the corresponding self-energies the contribution of intermediate electron states near the Fermi level with C < t_k < C + δC.
We derive Eq. (10), where ω is a characteristic spin-fluctuation energy and η(x) is the scaling function satisfying the condition η(0) = 1, which guarantees the correct one-impurity limit (see Appendix C). The third-order term, proportional to M, comes from corrections which contain a summation over the orbital index m (in the diagram approach, they correspond to diagrams containing a closed electron loop). The leading renormalization of the spin-fluctuation frequencies is already of order M, Eq. (11), where the parameters a for a concrete lattice and magnetic structure are expressed in terms of averages over the Fermi surface (see Refs. [2,14] and Appendices A, B). It turns out that, owing to the structure of perturbation theory for magnetic characteristics, the M² corrections do not occur in the third order in I, so that Eq. (11) is sufficient. Replacing g → g_ef(C), ω → ω_ef(C) in the right-hand sides of (10) and (11), we obtain the system of scaling equations (12), (13) with γ = M/N. Writing down the first integral of the system (12), (13) yields (15); thus we have a soft-mode situation on approaching the fixed point. Provided that ω_ef(C) is weakly renormalized (e.g., a ≪ 1 at small k_F), we recover the one-impurity-type result, cf. the treatment of the large-N limit [2]; in particular, in the paramagnetic state, (6) holds, cf. the discussion in Refs. [17,18]. However, in the general case the scaling behavior is much richer and more interesting. Introducing the function Ψ, the scaling equation takes the form (20). In Ref. [2], an approximation was proposed for the magnetically ordered cases which takes into account not only the magnon pole but also the incoherent contribution, namely a decomposition where η_coh corresponds to the magnetic phase and the function η_incoh is unknown; for estimates we may put η_incoh = η_PM. The quantity Z = Z(−ω_ef(C)/C) is the residue at the magnon pole. Then, instead of (20), we have Eq. (23). Scaling behavior Our scaling equations are written in terms of γ rather than M and N separately. Therefore, to establish properly the correspondence with the one-impurity case (8), we may put γ = M/N + 1 = 1/∆. This yields, at least for M > 2, the correct critical exponents for the magnetic susceptibility, specific heat and resistivity. The important case M = 2 is more difficult from the theoretical point of view, see [10,8,11]. However, a fixed point is still present for M = 2, the resistivity being satisfactorily described by the simple scaling approach [10]. Since Ψ(ξ > 1) ≃ 1, in the PM phase χ(ξ) increases according to (6), (15). Provided that g is not too small, at large ξ we can put, for rough estimates, g_ef(ξ) ≃ g* = 1/γ to obtain (24). Thus a power-law behavior occurs which corresponds to the standard one-impurity NFL behavior (see the discussion of physical properties below). Note that the scale of T_K occurs here, unlike in the lowest-order scaling in the large-M limit [2]. The dependence (24) takes place up to the point where the corresponding argument becomes small; then g_ef(ξ) increases slowly, tending at ξ → ∞ to an asymptotic value which is, however, smaller than the one-impurity g*, since χ(ξ) remains finite. Note that the lowest-order (one-loop) scaling for finite M yields the NFL behavior only in a very narrow interval of the bare coupling constant g, since with increasing g we rapidly come to the strong-coupling regime, where g_ef(ξ > λ) → ∞. Unlike the lowest-order scaling, such a critical g value does not occur in the present calculation for the paramagnetic case: g_ef(ξ) remains finite for any g.
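To illustrate the approach to the fixed point discussed above, the toy integration below may help. It assumes a schematic two-loop beta function dg/dξ = g²(1 − γg) with γ = M/N, chosen only so that its zero reproduces the quoted fixed point g* = 1/γ; the actual system (12)-(13) additionally couples g_ef to the spin-fluctuation frequency ω_ef and the scaling function Ψ, which this sketch ignores.

```python
# Toy integration of the flow towards the intermediate-coupling fixed point.
# Assumption: schematic two-loop beta function dg/dxi = g^2 (1 - gamma*g),
# gamma = M/N, so that beta(g*) = 0 at g* = 1/gamma as quoted in the text.
# Linearizing near g* gives beta'(g*) = -1/gamma = -Delta (Delta = N/M),
# i.e. a power-law approach to the fixed point governed by the exponent Delta.
import numpy as np

def flow(g0, gamma, xi_max=40.0, n=40000):
    xi = np.linspace(0.0, xi_max, n)
    g = np.empty(n)
    g[0] = g0
    dxi = xi[1] - xi[0]
    for i in range(1, n):              # forward Euler is adequate here
        g[i] = g[i - 1] + dxi * g[i - 1]**2 * (1.0 - gamma * g[i - 1])
    return xi, g

xi, g = flow(g0=0.1, gamma=2.0)        # e.g. N = 2, M = 4, so g* = 0.5
print(g[-1])                           # -> approaches 1/gamma = 0.5
```

Note that g_ef(ξ) stays finite for any starting value in this sketch, in line with the absence of a critical g in the paramagnetic case discussed above.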
The dependences g_ef(ξ) and χ(ξ) in the paramagnetic phase are shown in Fig. 1 for the 3D case (the results for the 2D case differ very weakly here). The behavior of g_ef(ξ) between ξ1 and ξ2 may be described as nearly linear, but is somewhat smeared since Ψ(ξ) differs considerably from the asymptotic values 0 and 1 in a rather large interval of ξ. Remember that Fig. 1a also demonstrates the behavior of the magnetic moment according to Eq. (15). In magnetically ordered phases, the behavior for ξ < ξ1 is similar, but the situation for ξ > ξ1 changes since the Van Hove singularity of Ψ_coh(ξ) at ξ = 0 plays an important role. Instead of decreasing, Ψ(λ + χ − ξ) starts to increase on approaching ξ1. At sufficiently large g, provided that at ξ > ξ1 the argument of the function Ψ_coh in (20) becomes almost constant (fixed), we obtain (28). Thus, instead of the divergence of χ(ξ) found in the one-channel model [2], we have a linear NFL behavior, since g_ef(ξ) remains finite. Such a behavior has a critical nature and corresponds to g = g_c in the one-channel Kondo model. Unlike the PM case, a sharp crossover occurs here with changing g, since we do not reach the regime (28) at small g < g_c. The value of g_c is determined by the value of δ, i.e., the magnon damping, see (74). One can see that the influence of the singularity is considerably stronger, and the conditions for NFL behavior more favorable, in the 2D rather than the 3D case, and in the AFM rather than the FM case. Above the critical value g_c, the picture of the scaling trajectories (in particular, the size of the NFL-behavior region) practically does not depend on g. In the case of equation (20), the linear behavior takes place up to ξ = ∞. On the other hand, when taking into account the incoherent contribution, the increase of χ stops at aγg*²Ψ_coh^max = 1/Z = 1 + χ/a. The dependences χ(ξ) for a 3D and a 2D antiferromagnet are shown in Figs. 2-3 (figure caption: the solid lines correspond to Eq. (20), and the dashed lines to the account of the incoherent contribution, Eq. (23); the parameter values are λ = 7, δ = 2·10⁻⁴, a = 1, g = 0.1, 0.12, 0.14 from below to above; note a shallow maximum of g_ef(ξ) for small g, which is due to the sign change in η(x)). In the presence of the incoherent contribution, the region where the linear dependence (28) holds is sensitive to the value of δ and is not wide, especially in the 3D case; the width does not increase with further increase of g. However, a more exact treatment of spin dynamics may change the results considerably. Probably, using the spin diffusion approximation underestimates the coherence, and the picture should be somewhat intermediate between the solid and dashed lines. The non-Fermi-liquid behavior in physical properties Now we discuss the NFL behavior of physical properties for the most important case N = 2. The temperature dependences of the magnetic moment and the magnetic susceptibility in the PM case are obtained directly from the above results by the replacement |C| → T. However, unlike the one-impurity case, such dependences are somewhat smeared and take place only up to temperatures determined by (26). A similar dependence is obtained for the specific heat [9]. As discussed above, the logarithmic factor in χ_m for M = 2 (∆ = 1/2) is not described by our approach; an accurate treatment is obtained by more sophisticated methods, e.g., the Bethe ansatz and conformal field theory.
In the spin-wave region, for an AFM structure with the wavevector Q, we write down the corresponding expression in terms of a retarded Green's function. On replacing ω → ω(C), S → S_ef(C) with |C| ∼ T in the spirit of scaling arguments, we obtain the results for the regimes corresponding to (25) and (28), respectively. Note that the spin-wave description of the electron-magnon interaction can be adequate not only in the AFM phase, but also for systems with strong short-range magnetic order, including 2D and frustrated 3D systems at finite temperatures. According to (67), the non-universal exponent ζ is determined by the details of the magnetic structure and can be both positive and negative. For a qualitative discussion, following Ref. [14], the temperature dependence of the electronic specific heat in magnetic phases can be estimated from second-order perturbation theory, C_el(T)/T ∝ 1/z(T), where z(T) is the residue of the electron Green's function at the distance T from the Fermi level (cf. Ref. [17]). Then we have the result in the AFM case. The dependence C_el(T)/T ∝ χ_m(T) was obtained experimentally for a number of NFL systems [7,10]. In the paramagnetic case the temperature correction to the magnetic resistivity can be calculated from (6) [9]. The T^{1/2} dependence (which corresponds to M = 2) is indeed observed in a number of f-systems [10]. For electron-electron scattering one has another temperature dependence. Conclusions To conclude, we have treated various mechanisms of the NFL behavior in the multichannel Kondo lattice. In comparison with the one-impurity model, the lattice version provides a richer picture. The NFL phenomenon seems to have a complicated nature, being influenced by both the single-site Kondo effect and spin dynamics. The corresponding dependences of physical properties can be different in different temperature intervals. Moreover, various scattering mechanisms can give different temperature dependences. The most important result is the occurrence of an intermediate-coupling fixed point, which means formation of a reduced magnetic moment, or even its vanishing in the NFL regime, the dependence on the bare coupling parameter being weak. The details of the scaling behavior are determined by the magnetic structure (the parameter a) and the scaling function η(x), its singularities being essential. Peculiarities of the electron and magnon spectrum can also play a role, similar to the consideration in Refs. [14,19]. An important problem is the stability of the fixed point: lifting the degeneracy of the electron subbands with different m in the Hamiltonian (1) should result in a change of the scaling behavior, so that anomalous temperature dependences may take place in a restricted region. Possible applications of the two-channel model to rare-earth and actinide systems, including the corresponding difficulties of interpretation, are discussed in Ref. [10]. For uranium systems, realization of this model is possible due to the time-reversal symmetry of the subbands. The model used naturally describes formation of a magnetic state with a small moment value. Besides that, our consideration provides an example of essential renormalization of the coupling constant according to (15). This may be of interest for the general theory of metallic magnetism (in particular, for weak itinerant ferro- and antiferromagnets): the magnetic state is determined by the renormalization process rather than by a bare Stoner-like criterion (cf. the discussion in Ref. [18]). The author is grateful to Prof. M. I. Katsnelson for useful discussions.
This work was supported in part by the Division of Physical Sciences and the Ural Branch of the Russian Academy of Sciences (project no. 15-8-2-9).

Appendix A. Renormalization in the paramagnetic phase

The Kondo-lattice problem in the paramagnetic state describes the process of screening of localized magnetic moments. The correction to the effective magnetic moment is obtained from the static magnetic susceptibility [17,2], with J_q(ω) the spectral density of the spin Green's function for the Hamiltonian H_f, which is normalized to unity. We use the simple spin diffusion approximation (D is the spin diffusion constant), which corresponds to dissipative spin dynamics. The spin-fluctuation frequency in the paramagnetic phase is determined from the second moment of the spin Green's function, with the result [17,2] in which α_q is expressed in terms of the exchange integrals (40). In the approximation of nearest neighbors at the distance d we may use a single renormalization parameter. It should be stressed that we do not need to search here for higher-order corrections to magnetic properties (the leading corrections are already proportional to M).

To construct a self-consistent theory of Kondo lattices we have to find the renormalization of the effective s−f exchange parameter. To this end, we calculate the Kondo correction to the electron self-energy with account of spin dynamics. We use the method of irreducible Green's functions (see Ref. [20] and the review paper [21]), which enables one to construct a consistent perturbation expansion in a small parameter. We write down the self-energy in terms of H_int, the s−f interaction term. In the second order in I we obtain the result (43). The next-order singular contributions, Eqs. (44)-(45), contain P = 1 − 1/N², with n_k = n(t_k) the Fermi function. When neglecting spin dynamics, Eqs. (43)-(45) agree with the one-impurity results [9]. The Kondo renormalization of the s−f parameter, I → I_ef = I + δI_ef, is determined by "incorporating" Im Σ^(3)_k(E) into Im Σ^(2)_k(E). The required imaginary parts are simplified accordingly. Note that the structure of Im Σ^(4)_k(E) is similar to that for the magnetic susceptibility and the magnetic moment (37). Averaging over t_k = t_{k′} = t_{k″} = E_F = 0 we obtain to leading accuracy the result (10).

Appendix B. Renormalization in magnetically ordered phases

Now we investigate the renormalization of the s−f interaction in the FM and AFM phases. For simplicity we treat only the s−f model with N = 2 (a more general case is discussed in Ref. [2]). For a ferromagnet the electron spectrum possesses the spin splitting, E_kσ = t_k − σMIS. The second-order correction to I_ef is determined by the corresponding electron self-energies. As described in Ref. [20], using the equation-of-motion method, we write down the self-energy in terms of the irreducible Green's function, with δA = A − ⟨A⟩. Writing down the equations of motion for the Green's function (48), we derive the result with account of the singular terms. The next-to-leading singular contribution, similarly to (44) (second term in the brackets), comes from static correlators and is formally reduced to a renormalization of the occupation numbers. Calculating the corresponding Green's function and using the spectral representation for the retarded Green's function, we obtain Eq. (53), the coefficient of the δ-function being just the contribution of the layer t_{k+q↓} − t_{k↑} = ω = C. Note that the corrections to the magnon frequency and the magnetization can be obtained in the same manner via the magnon damping (cf. Ref. [17]).
This just gives the singular correction to Σ_k↓(E). Note that this does not survive in the limit of large N. At the same time, the corrections to Σ yield the required cutoffs at the magnon frequency in (12). The correction to the magnon frequency is the same as in the one-loop consideration [2],

δω_q/ω_q = 2(1 − α_q) δS/S.   (56)

For an antiferromagnetic structure with the wavevector Q the electron spectrum contains the AFM gap IS. The renormalization of I is obtained from the second-order correction to the anomalous Green's function in the local coordinate system (cf. Ref. [22]). The calculation of the off-diagonal self-energy (we consider for simplicity a two-sublattice situation with ω_q = ω_{q+Q}) and of the Green's function needed leads to δΣ^(2)_{k,k+Q}(C)/δC = 2I²S, in agreement with (10).

Appendix C. Scaling functions

For the paramagnetic phase we have the expression (69). In the spin-diffusion approximation (38) we obtain a form in which ω̄ = 4Dk_F² and the averages go over the Fermi surface; integration then yields the explicit result. In the FM and AFM phases, for simple magnetic structures we have

η(ω̄_ef/|C|, δ) = ⟨Re [1 − (ω_{k−k′} + iδ)²/C²]^{−1}⟩_{t_k = t_{k′} = 0},   (71)

where δ is a cutoff owing to damping. (Note that in the FM case with N > 2 this expression should be generalized, since the spin-up and spin-down contributions are asymmetric [2].) For an isotropic 3D ferromagnet, integration in (71) with the quadratic spin-wave spectrum ω_q ∝ q² yields a closed-form result. For an antiferromagnet, integration with the linear spin-wave spectrum ω_q ∝ q gives, in the d = 2 case, a contribution proportional to Im [x² − (1 + iδ)²]^{−1/2}.   (73)

Thus Ψ becomes bounded from above, see Eq. (74). One can see that the scaling functions for the ordered phases contain Van Hove singularities at x = 1. The presence of such singularities is a general property which does not depend on the spectrum model. The function η_AFM(x) (d = 3) changes its sign at x = √2. For d = 2, η_AFM(x) vanishes discontinuously at x > 1, but a smooth contribution occurs for more realistic models of the magnon spectrum. A more detailed analysis of the scaling function singularities is presented in Ref. [14].
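As an illustration of the diffusive input used in (38) and (69) (a sketch under the standard assumption that "spin diffusion approximation" means a Lorentzian spectral density; the explicit formulas are not reproduced in this text), one may take

$$J_q(\omega) = \frac{1}{\pi}\,\frac{Dq^2}{\omega^2 + (Dq^2)^2}, \qquad \int_{-\infty}^{\infty} J_q(\omega)\, d\omega = 1,$$

which is normalized to unity as required. Averaging q over the Fermi surface, the maximum momentum transfer 2k_F then gives the characteristic cutoff frequency ω̄ = D(2k_F)² = 4Dk_F² quoted above.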
Road Traffic Injuries: Social Change and Development

In the course of the twentieth century road traffic injuries (RTIs) became a major public health burden. RTI deaths first increased in high-income countries and declined after the 1970s, and they soared in low- and middle-income countries from the 1980s onwards. As motorisation took off in North America and then spread to Europe and to the rest of the world, discussions on RTIs have reflected and influenced international interpretations of the costs and benefits of 'development', as conventionally understood. Using discourse analysis, this paper explores how RTIs have been constructed in ways that have served regional and global development agendas and how 'development' has been (re-)negotiated through the discourse of RTIs and vice versa. For this purpose, this paper analyses a selection of key publications of organisations in charge of international health or transport and places them in the context of (a) the surrounding scientific discussion of the period and (b) relevant data regarding RTI mortality, development funding, and road and other transport infrastructure. Findings suggest that constructions of RTIs have shifted from being a necessary price to be paid for development to being a sign of development at an early stage or of an insufficiently coordinated development. In recent years, RTI discussions have raised questions about development being misdirected and in need of fundamental rethinking. At present, discussions are believed to be at a crossroads between different evaluations of developmental conceptualisations for the future.

Introduction

All causes of death are equal in as much as they cause death. But some are less equal than others by being more overlooked, more taken for granted or more accepted as an inevitable part of life than others, which are more scandalising. Road traffic injuries (RTIs) must be counted among the less equal. Their global health burden of more than 1.2 million deaths per year, corresponding to 2.4% of all global deaths, establishes them as a major global health threat, in a similar league to tuberculosis. However, until recently the response of the international health community appeared strangely muted. The 'combined effort of the global community towards funding road safety is roughly estimated to be between US$10-25 million per year', a fraction of the sums spent on other public health issues of comparable significance. 1 International health organisations have never attempted eradication, have invested only limited research funds and have, in fact, only discovered the issue about fifteen years ago, although it had been around for decades. How can this incongruity between health burden and effective response be explained? Cars stand for a way of life which existed as the uncontested goal of all economic development for decades and against which alternatives continue to have a difficult time competing. Cars have therefore possessed a powerful significance as both signs and preconditions of supposed modernisation, progress and development. However, from the beginning, RTIs as a destructive side-effect of the motorisation of society have threatened this image. The spread of cars and of RTIs as a public health issue has been accompanied by a parallel story of efforts to prevent, limit or control RTI morbidity and mortality, their tangible reality as well as their public image. In fact, in many ways, the rise of a car culture depended on the construction of an acceptable RTI narrative.
As cars spread from industrialised countries to the rest of the world, RTI narratives inevitably became part of the global development and modernisation discourse. In the process, discussions on RTIs have been both a reflection of and a contribution to the international construction of the costs and benefits of a key element of development. 'Development' and 'modernisation' are controversial expressions, fraught with interpretations of politics and history. Recent research underlines the socially constructed character of both concepts. 2 Critics see development as an imperialist policy by which high-income countries have fortified their economic superiority and their political control over low-income and often formerly colonised countries. 3 They call attention to the dramatically growing economic inequalities between high- and low-income countries despite (or because of?) decades of 'development' efforts allegedly designed to mitigate such disparity. Meanwhile, other scholars point to perceived successes of 'development', measured in health or social indicators such as life expectancy, infant mortality, gender equality or literacy rather than purely monetary data. 4 Other concepts do not easily lend themselves to this type of tally at all, notably Amartya Sen's view of 'development as freedom' or Herman Daly's insistence on 'development' as a strictly qualitative notion, to be distinguished from economic growth. 5 This paper uses 'development' in a non-judgmental sense, as an expression of the notion policy makers, stakeholders or the general public have had about the direction their country, other countries or the world at large should take and according to which they seek to make decisions. Both high-income and low-income countries develop in the sense that they evolve over time, and concepts about desirable socio-economic conditions, viewed as the target of 'development', influence this process. Concepts differ according to their perceived benefits and drawbacks at specific times and places, and this paper argues that RTIs have been an important element of the perceived advantages and disadvantages of motorised transport and the socio-economic system it represents. At the same time, the construction of RTIs has been affected by the perceived benefits of motorised transport and, in a larger sense, by the perceived benefits of the specific type of development which prevailed at the time when RTIs became an issue in the early twentieth century. The development in North America after 1900, understood as 'modernisation', was marked by mechanisation, motorisation, increased provision with material goods, increased urbanisation, increased need for and ways of travel, and life at an accelerating pace. These changes resulted from industrialisation in Europe and North America, a complex social transformation based on changing modes of production and consumption, changing power structures, changing forms of rural and urban living, and changes in medicine, food and technology. These changes relied heavily on the exploitation of fossil fuels, coal and increasingly oil, which made an unprecedented amount of energy available for a broad range of activities, especially for heavy industry used for warfare and transportation. Transport increased exponentially because railways and steamships provided the means and because intensified migration, trade and military campaigns created the need.
The process entailed benefits and disadvantages for various groups of people at different times and places in volatile and sometimes contradictory ways. It improved nutrition, medical care, housing and opportunities for social mobility and political freedom for some and worsened them for others, within but especially between countries. 6 For the first time in world history, societies in Europe and North America were tangibly wealthier, more powerful and apparently healthier (though the reality was more complicated) than people in other parts of the world, many of whom, notably in Asia, had enjoyed similar or higher living standards than their European counterparts only a few generations earlier. 7 Consequently, this development came to be perceived as 'development' in the sense of undergoing the combination of processes which encompassed a European-type industrialisation. After 1945, the countries in Europe and North America which were industrialised in this way were regarded as 'developed'; others were believed to be in need of such 'development' and therefore became categorised as 'developing countries'. Generally, 'development' was perceived as an economic concern and was placed under the responsibility of economists. 8 Thereby, it often came to be seen as synonymous with economic growth, defined as an increase in Gross National Product (GNP), a concept which had just been invented. 9 Since this entire process appeared to go hand in hand with increased wealth, power and welfare, it became widely accepted as a desirable process, or indeed the natural and only process towards improving well-being. This view was held by the vast majority of actors both in 'developed' and 'developing' countries, although motives ranged from the altruistic to the self-serving, and it is still held by leading economists today. 10 Considerations about how much of this process depended on finite fossil fuels and on making use of other regions' resources through imperialism were voiced repeatedly but generally marginalised or excluded from the mainstream discourse on 'modernisation', thus glossing over the fact that this process would be impossible to repeat on a global scale. Meanwhile, RTIs are similarly complicated. While the deaths of people who die on the road are real enough, the concept of RTIs is no less of a construction than that of 'development'. RTIs are the result of a complex interplay of a series of components, including various traffic participants, vehicles, roads, the spatial, legal and logistic organisation of road traffic, and medical care. Accordingly, RTIs can be constructed as the result of individual misbehaviour, corporate irresponsibility, lack of administrative regulation or control, insufficient public maintenance and medical services, misguided transportation arrangements, poverty, fundamentally flawed working and living configurations or just bad luck. These constructions are not trivial since they determine which RTI prevention strategies are chosen, placing the financial and political responsibility on some actors while relieving others. They determine how much money is spent where and for whom, or even if any money is spent at all. Depending on how well those respective constructions reflect the reality of deaths on the road, they also determine how many people die or stay alive.
Using discourse analysis, this paper explores how RTIs have been constructed in ways that have served local, regional and global development agendas and how 'development' has been (re-)negotiated through the discourse of RTIs and vice versa. For this purpose, the paper analyses a selection of key publications of organisations in charge of international health or transport and contextualises them within (a) the surrounding scientific discussion of the period and (b) relevant data regarding RTI mortality, development funding, and road and other transport infrastructure.

The Early Years of RTIs

Motorisation advanced in three waves. Between ca. 1910 and 1950 cars were concentrated in the USA, between 1950 and 1975 they became widespread in Europe, and from 1960 onwards in the rest of the world, especially in Asia. In the process, the number of cars, trucks and motorbikes exploded from roughly one million cars in 1910 to 50 million (1930), 100 million (1955), 500 million (1985) and 777 million in 1997. It surpassed one billion in 2010. In addition, average driving distances expanded, further increasing overall traffic exposure. 11 The modern automobile, driven by a combustion engine, was developed and first used in France and Germany, but it was in the USA that it first became part of everyday life. 12 The beginning hardly foreshadowed upcoming developments. Two deaths due to motor vehicles were registered in Great Britain in 1896 and one in the USA in 1899. 13 Indeed, initially it did not seem that cars would spread beyond a small group of eccentrics. Early drivers in the USA were wealthy sportsmen who used cars to demonstrate 'conspicuous leisure'. Their attitude outraged many Americans, some of whom reacted with 'intense anger, and even acts of violence - often tinged with class hostility'. 14 Part of this resentment reflected the frustration of the have-nots watching the haves flaunting their riches, but another part resulted from the experience that speeding motorists were becoming a danger to other street users. The number of cars grew, and stories of children being run over by reckless drivers or 'joyriding' chauffeurs made frequent, sometimes exaggerated, headlines in the press. 15 Thus, RTIs formed part of discussions on automobiles from the beginning. They presented a more difficult problem than railway traffic, which had also caused new dangers to which travellers were unaccustomed, but where a strict separation of railway lines from the rest of the traffic, notably from pedestrians, had proved successful at reducing victims. 16 Such a separation was not practical for motorised cars, and the question involved a renegotiation of space. The issue was less acute in the countryside, where other road users were few and where the car was often the only means of rapid transportation and its advantages were obvious. It was here that the car, in the form of an affordable and unpretentious Model-T Ford, turned from a symbol of aggressive luxury to one of the social success of the hard-working man. Country doctors, who had the most need for fast and reliable transportation, often served as promoters of cars as respectable products. 17 Mass production brought the car within reach of most Americans, and between 1909 and 1920 the number of registered cars increased by 2750%. 18 In the cities, which often had efficient means of public transportation already, cars had fewer obvious benefits and were more of a nuisance.
Throughout the 1910s and 1920s, an angry animosity greeted cars, as traditional street users resented drivers who were perceived as disturbing the public order and as endangering people's lives. Most urban RTI victims were pedestrians, and most of these were children. In marked contrast to later times, early observers blamed motorists for their deaths, not children who had played on the streets or parents who had failed to watch them. Children were expected to play on the streets without needing surveillance. 19 Motorist interest groups fought their negative image. They portrayed drivers as a persecuted minority deserving protection and succeeded in redefining the issue away from one of injuries and death to one of freedom. Any restriction of the use of the car was constructed as inhibiting the people's rights to choose their preferred means of transport and of street use. 20 In the process, they effectively reconstructed the street from a place of public service to a 'marketplace for transportation demands', an expression of a more modern economic outlook. 21 In a deliberate campaign the car lobby ridiculed pedestrians as 'jaywalkers' and thereby as less legitimate users of street space. 22 By the 1930s, years before cars became majority traffic participants, most people had accepted that streets were primarily for them. 23 In addition to determined lobby activities, the democratisation of car ownership changed attitudes. Increasingly, 'people bought their first cars, not just because they were useful as well as fun, but because their self-respect demanded it'. 24 But the spread of motorised individual transportation was more than the sum of private decisions. Everywhere, governmental planning and the construction of an extensive system of overland roads preceded mass ownership of the cars which could make use of it. In the USA, President Wilson signed the first large-scale highway building programme into law in 1916 with the Federal Aid Road Act, at a time when less than 4% of the population owned any type of motorised vehicle. 25 In Europe, widespread motorisation did not begin until the 1950s or even later, but various measures spurred and anticipated future needs. In Germany, National Socialist policies of mass production of a 'people's car' and highways largely served purposes of military preparedness. 26 The Italian government similarly initiated a high-profile programme of highway building years before there were sizable numbers of drivers but discontinued the scheme during the economic depression. 27 In Great Britain and France, car ownership was no longer restricted to the most privileged class by 1938. 28 The increase in cars went hand in hand with an increase in the burden of road traffic accidents. 29 In the USA the number of RTI deaths increased drastically, both in absolute and relative terms, between the First World War and the 1930s. While rates declined slightly and remained relatively stable afterwards, the absolute number of RTI deaths decreased only during the years of war and then continued to climb until more than 50 000 people were killed in 1970.

29 For some years, the suitable terminology has been the object of debate, discussed further down in the text, as some authors prefer using 'crashes' or 'collisions' to 'accidents'. In order to avoid the impression that the difference of words in quoted sources and in the text somehow refers to different phenomena, this paper uses all three expressions synonymously.
They all describe an unintended collision involving at least one motorised vehicle.

100 000. The numbers subsided afterwards, due to a drastic reduction of car ownership and available fuel. 30 Although these numbers were well below the rates experienced in the USA at the time, they were impressive when road casualties were compared to war casualties: 370 000 people were killed and wounded during the war, while 588 000 people were killed and injured on the roads during the same period. 31 The latter number never truly entered the collective memory. After 1950, mass motorisation took root in Europe, and widespread car ownership formed part of a series of massive socio-economic transformations, based on cheap fossil fuels. Within one generation, patterns of housing, shopping, work and leisure time all reflected a society that relied increasingly on cars. Rising RTI numbers were part of this development, and they changed shape in the process. While initially accidents had primarily involved one vehicle and fixed objects or pedestrians, the rising presence of cars on the streets meant that collisions increasingly engaged several vehicles. In the UK, crashes involving three or more vehicles represented a mere 1.5% in 1936-7 but 4.7% in 1953. In the USA, deaths resulting from collisions of vehicles with fixed objects increased by 80% from 720 in 1930 to 1300 in 1952, but during the same period deaths resulting from collisions between two or more vehicles increased by 140% from 5880 to 14 100. In the USA, as the number of cars increased, death rates in proportion to vehicles decreased, as did the proportion of victims resulting from collisions with pedestrians, first in relative and eventually in absolute terms. From 1950 onwards, more people died in collisions between two motorised vehicles than in accidents involving vehicles and pedestrians. Similarly, the number of pedestrian deaths per registered motor vehicle roughly fell by half in Switzerland, Sweden, the UK and Ireland between 1947 and 1953. 32 These changes led to a shift of concern about RTIs from pedestrians to drivers but also to an increasing acceptance of cars and RTIs. Even in pre-war Great Britain, pro-car and pro-pedestrian associations had competed for dominance while government policies tended to favour the affluent, i.e. car drivers. 33 After the Second World War, the widespread perception that the expanding car industry was instrumental to economic reconstruction as well as to social recovery provided automobiles with a positive connotation. This existing pro-car bias and the growing democratisation of car ownership complicated a perspective which focused on cars as the culprits of RTI mortality. Instead, discussions were muted and concentrated on 'the human factor'. In Great Britain, the discourse focused on pedestrian behaviour, and pedestrians accepted that it was primarily up to them to adapt to an increasingly motorised environment. This discourse was complemented by a new perspective of speedy cars no longer as the source of road dangers but as a proud sign of modernity, which required similarly modern road systems. 34 Meanwhile, in the USA, attention still concentrated on drivers. Automobile associations and insurance companies organised driver education courses, designed to turn car users into 'safe drivers', while local administrations established, and police enforced, regulations which should ensure that driver behaviour was conducive to road safety. 35
RTIs were perceived as a form of dysfunction, caused by individual traffic participants who were insufficiently adapted to the demands of modern life.

The Construction of a Public Health Issue

Remarkably, during the first half of the twentieth century RTIs never appeared as an international public health issue. The League of Nations, which pioneered data collection and assessments of the state of public health around the world, completely ignored RTIs. 36 It was only in the 1950s, after the USA had recorded its millionth RTI death (in 1951), that the international community began to take note. 37 In line with prevailing attitudes, recommendations on road safety measures focused on the weakest of potential victims, calling for better education of pedestrian road users, especially of children. 39 Meanwhile, the World Health Organisation (WHO) discovered RTIs as a health issue and began defining it in those terms. As a start, it carried out a survey regarding motor vehicle accidents. Forty-seven member states returned questionnaires: between them they had recorded 102 552 deaths that year (79 810 of them males) out of a population of 650 million people, and numbers were rising. No records existed regarding the much larger number of people who were injured, often seriously. It became clear that RTIs affected predominantly young males and children, where they took a staggering toll:

. . . in Canada, the United States, Austria, the Netherlands, Australia, and New Zealand, deaths from motor vehicle accidents in males in 1958 exceeded those due to tuberculosis (all forms), acute poliomyelitis, typhoid fever, diphtheria, and diabetes mellitus added together. Among females in these countries fatal road traffic accidents were fewer but were still prominent among the causes of death. 40

In the UK, RTIs killed fifteen times as many children as poliomyelitis in 1956, and twice as many as during the worst polio epidemic after 1945. In a complaint that was to be repeated many times during the following decades, the author of a study observed that these numbers aroused a fraction of the interest directed at other epidemics. 41 The WHO tried to raise awareness of this invisible health issue, dedicating its 1961 World Health Day to the theme of 'Accidents and Their Prevention'. 42 It also commissioned a study on Road Traffic Accidents. This report, written by the Chief Medical Officer of the London Transport Executive, was published by the WHO in 1962 under the sub-title of 'Epidemiology, Control, and Prevention' and declared that RTIs constituted 'a public health problem of the first magnitude'. 43 But comparisons between countries, and, indeed, an assessment of the burden, were difficult since definitions varied widely. In Belgium, for instance, road traffic deaths described deaths which occurred at the site of the accident only, while in England, deaths occurring up to 30 days after the accident entered into the statistics. 44 What was clear, however, was that there were gender- and age-specific differences regarding RTI risks. The most striking element then, as later, was the gender gap. In all participating countries, the ratio of male to female road traffic deaths ranged between three and five and, the author of the 1962 WHO study observed, resembled a 'biological or sociological law', so far little understood. 45 The age distribution differed between traffic participants.
Among pedestrians, children between roughly 1 and 10 were particularly at risk; prevalence then declined and rose again from age 65 onwards. The increase in old age resulted in part from the high probability of elderly people dying from injuries from which younger people recovered. 46 However, within the age bracket of young adults, RTIs represented a major cause of death, causing 'a serious economic loss to the community'. 47 While RTIs, therefore, slowed down economic development in tangible, although unquantified, ways, the essential role of the motorisation industry for economic growth was obvious. The report commented that evidently RTIs took an increasing part of national mortality as a country became 'more highly developed and therefore more highly motorised'. 48 The connection between development and motorisation appeared too obvious even to suggest a cost-benefit analysis. Despite his urgent calls to address RTIs as a public health threat, the author was ready to accept a certain level of RTIs as an inevitable side-effect of modernisation, and he merely suggested that 'a balance between man and the new element in his environment, the motor vehicle, is being reached, and that a mortality of roundly 20 per 100 000 per annum is the price of introducing the motor vehicle on a large scale'. 49 Thus, in the early 1960s, a cost of 20 lives lost for every 100 000 people seemed an acceptable price for the benefit of modernisation, a rate reached in the USA during the late 1930s and then briefly again in 1948, but which had consistently been surpassed since. Meanwhile, the perception of RTIs as a public health issue remained concentrated in North America and Europe, where it gradually changed its character. In the late 1950s, a diverse group of public health experts, politicians, lawyers and social activists began arguing that 50 000 RTI victims annually were unacceptable and that car designs, which did not prioritise safety, bore a large part of the responsibility for the number of casualties. They argued that getting drivers to act more responsibly was clearly not succeeding, while making cars 'crashworthy' by supplying them with padded dashboards and stronger door locks would have instant effects. The political climate of social discussions and the civil rights movement of the time were propitious for the argument that people's well-being went beyond the control of the individual. The public was introduced to the new arguments through congressional hearings and, above all, through Ralph Nader's book Unsafe at Any Speed, published in 1965. The car industry, fearing expensive construction changes and liability claims, tried to portray the issue as one of personal freedom. This strategy remained unsuccessful, in part because insurance companies could be drawn over to the other side of the argument. 50 One year later, Lyndon Johnson signed into law two bills raising safety standards in cars and roads, and established a new federal agency, the National Highway Safety Bureau (NHSB), in charge of RTI control. 51 These events marked an important shift in how the causes of RTIs were defined, with ambivalent effects. On the one hand, it weakened a perspective which blamed RTI victims for their fate. Thus, shifting attention to cars can be seen as a correction to a situation where the industry had largely been exempt from any accountability. On the other hand, a growing focus on vehicle safety spurred the expectation of technical fixes.
This new reductionist view of the problem became obvious during the more recent controversy regarding airbags, which became mandatory in the USA and whose overall benefits remained unclear. This development has been blamed for a neglect of behavioural factors and, ultimately, for a relatively less positive development in the USA compared to other industrialised countries. 52 The change of perspective paved the way for a view of economic development not as a cause of RTIs, i.e. the problem, but as its solution. And eventually, the focus on vehicles established the view of drivers as the primary victims of RTIs deserving protection. No similar technical considerations were directed at protecting pedestrians, who, by that time, had been driven out of a large part of public spaces. Implicitly, this was criticised by William Haddon, an American epidemiologist who developed a systemic approach to RTIs which integrated considerations of infrastructural factors with vehicles and users in the pre-crash, crash and post-crash stages relevant to road traffic accidents. 53 This 'Haddon Matrix' was criticised but spurred a shift towards a more comprehensive perspective on traffic and RTIs in the 1970s and would re-emerge in a transformed shape in the late 1990s. As urban centres in industrialised countries experienced an increasing burden of traffic congestion, with its economic, environmental and social costs as well as ever-increasing RTIs, many cities took measures designed to reduce the motorised traffic on their territory. A study by the Organisation for Economic Cooperation and Development (OECD) of twelve cities in as many countries revealed that, by the mid-1970s, eleven had restricted parking in their centres, eight had increased the frequency of public transportation services, ten had provided preferential treatment for public services (such as bus lanes etc.), ten had established pedestrian zones and five had made provisions for cyclists. These measures had reduced RTIs substantially, sometimes spectacularly. Thus, Nagoya reduced RTIs by 61% and RTI mortality by 59% in its central business district within a few years. Even Paris, which saw its overall RTI rate go up by 52%, enjoyed a reduction of RTI mortality by 24%. In Ottawa, where overall RTIs increased by 19%, the RTI rate decreased by 40% in those areas where measures to reduce transit traffic had been put in place. 54 These changes could not gloss over the fact that private cars were still considered 'the predominant mode of personal travel in North America' and were gaining ground in many areas of Europe and Japan. 55 Nevertheless, in a curiously contradictory development, the number of cars kept increasing while cars were no longer automatically considered the best or the most modern means of transportation everywhere in high-income countries, and pedestrians regained some of the urban ground from which they had been evicted some decades before. At about that time, the WHO began addressing RTIs more seriously. In 1974, the World Health Assembly acknowledged the 'extensive and serious individual and public health problems resulting from road traffic accidents' and urged national health authorities to provide leadership in the issue. In its proposals of tangible measures the resolution remained well within the paradigm of the preceding decades, calling for 'improved driver licensing standards and traffic safety education programmes' and the application of 'safety principles in the development of new types of vehicles'. 56
The Regional Office for Europe was made responsible for the WHO programme on road traffic accidents. 57 However, this office probably observed rather than caused the substantial RTI decline which was unfolding around it. Between 1975 and 1998, RTI mortality decreased drastically in virtually all high-income countries: by 63.4% in Canada, by 58.3% in Sweden and by 27.2% in the USA. 58 In 1987, a public health expert stated that a 200-fold increase in cars had been accompanied by only a twenty-fold increase in RTI deaths and that the RTI death rate was lower in 1985 than at any time during the preceding sixty years except 1948. Impressed, he commented: 'The epidemic of road traffic deaths may be most remarkable for the way it has been controlled.' 59 Unfortunately, this assessment overlooked the majority of global regions and people. In fact, RTIs were fast developing into a global issue with an immense public health burden concentrated in middle-income countries. Between 1975 and 1998, RTI deaths increased by 237.1% in Colombia, by 243% in China and by 383.8% in Botswana. In 1990, RTIs ranked ninth among contributors to the global burden of disease. Globally, road traffic deaths increased from ca. 990 000 per year in 1990 to nearly 1.2 million in 2002. 60 In the late 1980s, international organisations gradually became aware of the extent of the evolving problem and of the need to address it. Given that North America and Europe looked back on decades of experience and that RTIs appeared to show a positive downward trend, there was a natural tendency to apply lessons learned in the 'developed' world to problems encountered in 'developing' countries. But what exactly were they? Had RTI rates declined because of better drivers, more stringent regulations, safer vehicles, better protection for cyclists and pedestrians, more alternatives to travelling by car, medical progress, or simply because the sheer number of cars provided the majority of traffic participants with a protective frame, so that cars served as both protection against and threats to other cars and their passengers? Or any combination of those factors? And which of these findings were applicable to the rest of the world, to whose benefit and at what price? Discussions evolved around competing claims to historical analysis and their conclusions for development policies in those countries where car traffic was still low but rising. In many ways, the situation in low-income countries at the end of the twentieth century resembled that of countries in Europe and North America at its beginning: RTI victims were primarily pedestrians and cyclists, vulnerable traffic participants who competed with an increasing number of motorised vehicles for road space. But there were also distinct differences. Researchers listed a fateful combination of reasons for the rising RTI burden, including a 'traffic mix of incompatible users (pedestrians, cyclists, motorbikes, cars, and trucks) with, for example, communities living within the vicinity of roads or the lack of pavement along large urban streets'. 61 Another factor was the widespread use of passenger vehicles such as overloaded mini-buses or taxis, so that individual accidents involved more people than the typical accident in a high-income country, which had usually affected the driver(s) of one or several cars involved. When RTIs in different parts of the world were compared in 2000, this difference was exacerbated by the lack of effective first aid and emergency medical care in low-income countries.
According to a Harvard study, 10 000 crashes resulted in 66 deaths in the US, but 1786 in Kenya and 3181 in Vietnam. 62 Gradually, international institutions became aware of the issue. Given the complex nature of RTIs, they could not but become part of the tension between different international health agendas. By the 1980s, international health was no longer the domain of the WHO alone. A new major player was the World Bank, which had been alerted by WHO efforts at the conference of Alma-Ata in 1978 to connect public health policies to demands for increased global economic equality and regulations, while growing neoliberal tendencies among key members, notably the USA, called for further economic deregulation. During the 1980s and 1990s, the World Bank steadily increased its role on the international health scene. Through substantial investment in health projects and the generation of health-related data it integrated health into its overall programme of fostering a market-driven form of economic development. 63 Part of this strategy was to identify poverty reduction through economic growth as the primary means of improving public health in low-income countries, and research programmes were designed accordingly. 64 This situation gave rise to the Global Burden of Disease Project, which the World Bank launched in preparation for its 1993 World Development Report Investing in Health, which successfully established the work of the World Bank as a reference point for global health. 65 The Global Burden of Disease Project introduced disability-adjusted life years (DALYs) as a new measurement unit, which combined mortality, morbidity and injury into a single number. The resulting data revealed RTIs as an unexpectedly important health burden. Road traffic accidents ranked ninth among the leading causes of DALYs, accounting for 2.4% of the total. 66 Even more surprising, at least to the authors of the study, road traffic accidents were the second leading cause of DALYs for men between 15 and 44 years of age in high- as well as in low-income countries. 67 These data prompted a growing number of research publications dedicated to RTIs in the following years. In line with the perspective begun in the 1962 WHO report, publications constructed RTIs in medical terms as a 'neglected epidemic', 68 a 'global epidemic', 69 a 'global road trauma pandemic' 70 or compared RTIs to Aids, insisting that one epidemic provided lessons for fighting the other. 71 Some publications invoked warfare, describing RTIs as 'vehicular manslaughter' 72 or a 'war on the roads'. 73 However, this health discourse had a hard time competing against the entrenched view of roads and road traffic as essential economic infrastructure. The 1994 World Bank World Development Report on Infrastructure for Development was a case in point. It acknowledged that RTIs were a leading cause of death in low-income countries and accorded governmental 'regulation to preserve safety standards in infrastructure service provision and delivery' an important priority. 74 Another paragraph observed the 'very low rates of traffic accidents' which the Brazilian city of Curitiba had achieved through 'carefully designed public transport routes'. 75 However, those were the only references to road crashes, and they could easily be overlooked in the midst of a report which otherwise overwhelmingly discussed the need for optimal cost-effectiveness of infrastructure services, ideally through privatisation.
Ironically, this preference for market solutions for development challenges should have provided grounds for the World Bank to favour investments in railway lines and other forms of collective transport rather than roads. Railways, bus services, subways etc. could easily be supplied by private operators or under concessions, working under conditions of competition and, therefore, supposedly more efficiently and cost-effectively. The privatisation of roads was sometimes possible in the form of toll roads, but in general roads would necessarily remain public spaces. 76 But these theoretical considerations clearly had little effect on real lending decisions. Since the 1970s, World Bank commitments for transportation infrastructure had increasingly concentrated on highway building, which effectively dwarfed the sums spent on railway lines. Investments in urban transport were non-existent or a fraction of those for highways, and also consisted largely of road construction. This tendency would continue and even intensify at the beginning of the twenty-first century. The sums invested in rural and urban roads amounted to 80% of all infrastructure lending (Figure 1(a) and (b)). The World Bank was not the only source of global investments in transportation infrastructure, but its distribution of funds was reflected in the development of road and railway lines in many countries, including El Salvador, Malawi, Brazil and India (Figure 2(a) and (b)). Invariably, the length of paved roads increased substantially, sometimes dramatically, between 1960 and 2000, while that of railway lines stagnated. If any funds were spent on infrastructure supporting alternatives to private motorised transportation, such as buses, trams, subways or even secure sidewalks or bicycle lanes, they were considered too insignificant to be mentioned in World Bank publications. Clearly, although a large part of the population in low-income countries had no chance of ever owning cars, motorised road traffic had become the accepted model for transport development. Meanwhile, RTIs were becoming an increasingly pressing concern. A precise understanding of the situation was difficult since data were, and still are, often patchy or simply non-existent, but those that were available were impressive enough. Examples included El Salvador and Mauritius.

[Figure: annual data, 1981-2005.]

Within some years it supported practical road safety projects in several countries around the world. Activities ranged from targeting drunk-driving and speeding, to promoting the use of helmets and seat-belts, to separating motorised and non-motorised traffic or promoting public transportation. 80 However, the main focus was on the behaviour of potential victims, portrayed as those most responsible for RTI deaths. Children, particularly, were described as lacking sufficient skills and knowledge to cope with complex road traffic situations. This line of argument had formed part of the conventional wisdom in high-income countries since approximately the 1940s, but it was quickly becoming outdated. A 2002 meta-study of controlled trials of pedestrian education programmes in high-income countries showed no evidence that such programmes reduced the risk of road accidents involving child pedestrians. No similar studies existed for low-income countries. 81
Including representatives of the automobile industry in this group brought technical expertise and assured close contact with those actors whose actions inevitably formed an important component of any road safety programme. But obviously their presence also precluded solution strategies which questioned motorisation in principle. In 2002, the secretariat for the Steering Committee was shifted to the Task Force for Global Health, an institution which had been founded to coordinate international activities of Primary Health Care after 1978, when the World Bank succeeded in shifting the Health for All approach of the Alma-Ata conference from a political strategy aimed at increasing economic equality to technical programmes of oral rehydration, breastfeeding and immunisation. 82 A similar strategy to define the issue as a technical problem appeared to be underway with regard to RTIs. However, even this met little response in the car industry, as the European Enhanced Vehicle Safety Committee (EEVC) realised. Its proposals regarding changes to the fronts of vehicles, designed to make them less dangerous to pedestrians, were not on the agenda of car manufacturers in 2002. 83

The Competition of Development Concepts

At the end of the twentieth century, discussions on RTIs evolved into a more fundamental debate about global development. It was stimulated by the introduction of the Kuznets curve into RTI research. The Kuznets curve went back to a theory presented by Simon Kuznets in 1955. Drawing on historical data of industrialised countries, he maintained that social inequality increased in the early phase of modernisation but decreased from a turning point onward as national income continued to grow. The relation between social inequality and economic growth evolved, therefore, along a curve in the shape of an inverted U. 84 In the 1990s, the model received a second life as a description of the relationship between economic growth and some pollutants, notably SO2 and smoke, and became known as the environmental Kuznets curve. This finding provided a welcome defence against accusations that the existing growth-oriented economic system was responsible for large-scale environmental destruction, and it was readily accepted by advocates of the contemporary economic system. In 1992, the World Bank incorporated it into its World Development Report on the environment. 85 In the following years, further research challenged the curve: similar developments could not be reproduced for other forms of environmental burden. Besides, it was doubtful to what extent the effect resulted from the transfer of polluting industries to low-income countries, a strategy which could obviously not be imitated on a global scale. 86 In 2000, van Beeck et al. appear to have been the first to observe a Kuznets curve of RTIs in relation to 'prosperity levels'. Economic growth, they argued, was 'not only associated with growing numbers of motor vehicles in the population, but also seems to stimulate adaptation mechanisms, such as improvements in the traffic infrastructure and trauma care'. 87 This view suggested that economic growth would in itself lead to a reduction of RTIs, making a rise in gross domestic product (GDP) the best strategy to reduce the health burden of widespread motorisation. It also implied that RTIs as a health problem were largely solved in high-income countries. This view was taken up by the World Bank.
In 2003, the World Bank Development Research Group on Infrastructure and Environment issued a Policy Research Working Paper on Traffic Fatalities and Economic Growth. 88 Analysing vehicles per person (V/P) and fatalities per vehicle (F/V) data from eighty-eight countries for the period 1963-99, they found a confirmation of the Kuznets curve with a turning point at a per capita GNP of $8600 in 1985 international dollars. On the basis of these data and of prognoses of population and income growth, they projected that it would take many years for developing countries to achieve the low RTI fatality rates of existing high-income countries. RTIs in India, for instance, which had a per capita income of only $2900 in 2000, would only begin to decline in 2042, after a peak of at least twenty-four fatalities per 100 000 persons, or thirty-four when adjusted for estimated underreporting. Brazil would 'already' peak in 2032 and would experience an RTI mortality rate of twenty-six deaths per 100 000 persons as late as 2050, compared to a rate of around eleven enjoyed by high-income countries in 2000. Only on the last page did the text mention, almost in passing, that these projections were based on a continuation of ongoing policies, while measures such as mandatory helmet wearing or effective traffic separation might lower those numbers. 89 Predictably, the paper provoked different reactions among researchers. Its projections led Nitin Garg and Adnan Hyder to urge that countries like India should take active steps to curb RTIs along the lines of WHO recommendations well before the theoretical economic threshold. 90 Other researchers accepted and confirmed a Kuznets-curve behaviour of RTIs as the statistical representation of a development whereby increasing national income would allow investments in safer roads and vehicles. 91 In an analysis of data from forty-one countries for the years 1992-6, David Bishai et al. found that in low-income countries a ten per cent increase in GDP increased RTIs by 4.7% and RTI deaths by 3.1%. By contrast, GDP increases in high-income countries reduced the number of deaths, although not of crashes or injuries. The turning point appeared to be between $1500 and $8000 per capita income. 92 Other projects were more clearly tied to corporate interests. In a study that was financially supported by the automobile industry, Walter McManus of the University of Michigan Transportation Research Institute calculated the lives that would be saved by lowering either vehicles per capita or fatalities per vehicle. Both would save lives but, he concluded: 'Reducing motorisation (vehicles per capita) is unlikely to be used as a policy to reduce fatalities because it is inextricably linked to economic growth. Consequently, the focus should be on reducing fatalities per vehicle.' 93 Clearly, this approach limited anti-RTI strategies to those not harmful to the large sector of the global economy which depended in some way on the construction or use of motorised vehicles. It also portrayed RTIs as a regrettable but temporary side-effect of modernisation. However, new approaches to the problem emerged. One sign of changing attitudes appeared in a debate about terminology. In 1987, a group of intensive care specialists in New Zealand proposed to change 'the discourse on road traffic injuries by rejecting the concept of "accidents" . . . '. 94 In a widely publicised campaign they succeeded in changing the choice of words in the media. Soon, the debate spread to Europe.
In 1993, the editor of the prestigious British Medical Journal (BMJ) adopted the same attitude, arguing that numerous injuries, including RTIs, were not accidental and, therefore, should not be called 'accidents'. Instead, he proposed using 'crash'. 95 The change of terminology was slow to be accepted. Maybe the expression 'traffic accident' was too deeply ingrained in the general vocabulary, maybe the BMJ was insufficiently influential, or maybe people were unconvinced by the idea that 'accident' should be reserved for those rare events which involved no human responsibility at all. Eight years later, 'accident' continued to be widely used, including in articles published by the BMJ. In reaction, the BMJ, in its self-proclaimed position as 'a leading communicator in medicine' with a responsibility 'to establish or follow standards in language', banned the 'inappropriate use of "accident"' in its pages. 96 The change in terminology remained controversial. When the psychologist Alan Steward (University of Georgia) and the social worker Janice Lord, former National Director of Victim Services of Mothers Against Drunk Driving, argued against 'accidents', since crashes 'caused by intoxicated, speeding, distracted, or careless drivers' were no accidents and such misnaming might create extra stress for crash victims and impede their recovery, several colleagues disagreed. 97 Indeed, prior studies had suggested the opposite effect: patients who blamed themselves for car accidents recovered more rapidly than those who blamed others. 98 Nevertheless, the new terminology was adopted by many researchers and was also used in WHO publications. Some authors clearly considered this change an important move, which they took pains to explain. 99 Meanwhile, the current classification of causes of death and disease, ICD-10, kept 'accident', and so did other researchers as late as 2012. 100 In fact, the change of words in itself was of limited consequence. It did underscore that RTIs resulted from preventable causes rather than fate and thereby contradicted a view of RTIs as an inevitable consequence of development. But the new wording offered no new perspective as long as it continued to refer to well-rehearsed factors such as child restraints or driver behaviour, which stayed well within the conventional discourse. Other initiatives had more far-reaching consequences. In 1999, Kåre Rumar, professor at the Swedish Road and Transport Research Institute, insisted that, despite falling RTI rates in Europe, the problem was far from solved. To illustrate its continued significance he pointed out that in most European countries one out of three citizens would need hospital treatment after a road traffic accident at some time in their lives and that one in twenty people would be killed or injured in a road accident. 101 He urged a systemic approach. In what appeared like an expanded form of the Haddon Matrix he described a web of factors, including age, gender, traffic regulations, road maintenance, attitudes to safety policies, intelligent control systems, unclear distribution of responsibilities, etc. Overall, his explanations aimed at a change of perspective. Instead of accepting the need for motorised traffic as a given and focusing on ways to reduce its health price, intelligent strategies should begin with the underlying purpose of traffic (transportation and mobility) and the biological vulnerability of the human body to external shock, and then search for ways to combine the highest benefits with the least sacrifice on that basis.
As a vastly underestimated measure, he singled out reducing the exposure of traffic participants to the risks of motorised transportation: [T]raffic exposure is increasing faster than the reduction of crash and injury risk. The fact is that presently the number of cars is increasing faster than the number of persons on this planet. 102 This approach was taken up by the WHO, which was initiating a five-year strategy. 103 A first result was a study on the 'Global road safety crisis', issued in 2003, which incorporated input from various UN bodies. The report advocated integrating RTI considerations into a broader vision of urban development and transportation planning, which also included alternative modes of transport. A one-sided concentration on a car-based system of traffic was portrayed as aggravating social inequality since it invested 'increasing resources in the building and maintenance of an infrastructure for private motorised transport, while overlooking the public transport needs of the larger part of the population'. 104 It also contributed to further health problems since in 'many high-income countries, increasing use of cars has led to a general decline in walking and an increase in sedentary lifestyles, which in turn has had adverse consequences in terms of increasing obesity and cardiovascular health problems'. 105 Effective strategies to reduce RTIs were said to require a 'systems approach', aimed at identifying and addressing all relevant factors. Successful strategies of high-income countries could serve as orientation, but policies in low-income countries would have to be adapted or even newly created according to local circumstances. While these explanations promised a new approach, the list of relevant determinants (speeding, alcohol, helmets, safety devices, trauma care, road safety standards, traffic safety regulations, vehicle safety) appeared remarkably conventional and addressed, again, to a large extent the behaviour of road users. 106 Nevertheless, the text appears to have been the first time that a high-level international report on RTIs welcomed -and by implication recommended -a cutback of the use of cars as one strategy to reduce the RTI burden. The concept of a 'systems approach' was further elaborated in a World Report on Road Traffic Injury Prevention, a WHO report, published in 2004 in collaboration with the World Bank, which presented a wealth of data and analysis. Despite a reality of more than 1.2 million RTI deaths, twenty to fifty million people injured and rising numbers, the report insisted that RTIs had to 'be considered alongside heart disease, cancer and stroke as a preventable public health problem' which responded 'well to targeted interventions'. 107 The report recommended and discussed in considerable detail a series of well-known practical measures such as speed control, the use of seat-belts and adequate child restraints as well as helmets for two-wheeler users, the enforcement of alcohol limits, good road design, improved vehicle standards and efficient post-crash medical care. These were only presented, however, as pieces of a much larger puzzle.
Other, less frequently cited but relevant strategies included reducing the deprivation of underprivileged social groups, the separation of different types of traffic in clearly marked separate roads with different speed limits, mandatory daytime running lights, intelligent seat-belt reminders, traffic calming measures such as roundabouts, road narrowings, chicanes or road humps, or crash-protective roadsides. 108 And even these points were merely tactical elements in a far more comprehensive approach, designed not primarily to make motorised traffic safer but to provide living conditions which would satisfy human needs for food, household items, work and leisure activities in intelligent ways. This task might entail measures to reduce the need for travel by '[l]and-use planning practices and "smart growth" land-use policies -development of high-density, compact buildings with easily accessible services and amenities' or the 'creation of clustered, mixed-use community services' or encouraging the use of electronic mail for communication. 109 If moving from one place to another could not be totally prevented, it could be organised in safer ways than through private cars. Calculations for the European Union regarding the risk of death in relation to distances travelled listed motorised two-wheeler users as running twenty times the risk of car occupants, who were seven to nine times safer than cyclists or pedestrians but ten times less safe than bus and train occupants. Thus, travelling by public transportation was by far the safest means of transportation, creating good reason to encourage it. 110 Reducing the need to travel in general and to travel by car in particular was also considered a very positive step, since studies from high-income countries indicated that 'under certain conditions, for each 1% reduction in motor vehicle distance travelled' there was 'a corresponding 1.4-1.8% reduction in the incidence of crashes'. 111 Besides, such measures could have tangible health benefits in addition to reducing RTIs, such as promoting healthier life-styles through more walking and cycling, and reducing noise and air pollution. 112 However, as the report acknowledged, many of the road safety measures would not be applicable in developing countries, where the need to act was greatest. Since the large majority of global RTI victims were pedestrians and cyclists in low- and middle-income countries and since in the foreseeable future most people in those countries would continue to be walking, cycling and using public transportation, their needs would have to be the priority concern. 113 In the absence of facilities for pedestrians and cyclists, many people were forced to travel with privately run services. These services, often in obsolete overloaded vehicles, driven by overworked drivers and owned by businessmen, who bribed traffic enforcement authorities, created substantial risks to occupants as well as other traffic participants. Among other measures, strategies had to be found that would simultaneously address the 'safety of road users, the labour rights of drivers and the economic interests of the vehicle owners'. 114 Such a traffic pattern with its mix of dangerous motorised vehicles sharing road space with large numbers of vulnerable road users had never been experienced by high-income countries and therefore lessons from North America and Europe were of only limited use in the rest of the world. 115
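The travel-reduction figure quoted above can be made concrete if one reads it as a constant elasticity; this is an interpretive assumption, since the report gives the relation only for 1% changes and 'under certain conditions'. On that reading,

$$
\frac{C_{1}}{C_{0}} = \left(\frac{D_{1}}{D_{0}}\right)^{\varepsilon}, \qquad \varepsilon \in [1.4,\, 1.8],
$$

where $C$ is crash incidence and $D$ is motor vehicle distance travelled. A 10% reduction in distance ($D_{1}/D_{0} = 0.9$) then implies between $0.9^{1.4} \approx 0.86$ and $0.9^{1.8} \approx 0.83$ of the baseline crash incidence, that is, a reduction of roughly 14-17%.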
In addition, the report showed a more complex connection between RTIs and development by citing estimated global costs of the RTI burden of US$518 billion per annum, of which $100 billion fell on developing countries, 'twice the annual amount of development assistance to developing countries'. 116 In this perspective, RTIs did not appear as a temporary price paid for a generally beneficial economic development but as a powerful impediment to development. This WHO report was the most detailed study on RTIs as a global health problem to that date, and was meant to become the central reference point for further research. Its wealth of information made it a publication which was impossible to ignore. Notably, its numbers regarding present and estimated future RTI victims became standard components of virtually all following studies. The systems approach included near-revolutionary elements. Questioning the connection between motorised transport and modern economic development by decentralising residential areas or by making use of online communication challenged the logic of the traditional view of development, more so than the various traffic reduction measures begun in OECD urban areas during the 1970s. While pedestrian zones restricted traffic locally, adopting a reduction of the need to travel as a goal of intelligent housing and work plans could potentially change the concept of development as people knew it. The structure of recommendations underlined the comprehensive nature of the report, listing what governments, policy, legislation and enforcement, the public, vehicle manufacturers, donors, communities, civil society groups and individuals could do. Clearly, preventing or reducing RTIs was considered everybody's responsibility. However, the very comprehensiveness of the report obscured its radical components in the midst of a multitude of more conventional ideas. Indeed, the large number of recommended measures allowed the adoption of a broad approach that nevertheless could leave out individual onerous ideas. In the summary, only two recommendations out of forty-seven called for the establishment of public transportation and none explicitly mentioned the inclusion of traffic-reduction objectives in land use plans. 117 The UN instituted the United Nations Road Safety Collaboration, designed to implement the recommendations of the World Report. The group consisted of regional and global international organisations, including the World Bank and UNICEF, and a variety of other national and international bodies (governments, non-governmental organisations, donors, research agencies and the private sector), with the WHO holding a coordinating role. This strand also led the UN General Assembly, in March 2010, to proclaim the period 2011-20 as the Decade of Action for Road Safety. 118 Thus, there has been a real effort on the part of the UN to construct RTIs as a global issue and to assume responsibility for it. Other organisations reacted with their own reports. Each referred to the 2004 World Report and provided its own interpretation in line with its own outlook. In subtle ways, they carried out a competition of concepts: one designed to safeguard motorised transport by making it safer through technical, legal and administrative modifications, another designed to modify the entire system of transport by questioning the prioritisation of its motorised form. A report by WHO Europe, which also came out in 2004, was among the latter.
It emphasised the synergistic value of various anti-RTI measures and the environmental component of overall anti-RTI policies. In the process, it portrayed finding ways to reduce RTIs as part of a strategy for sustainable development: A sustainable transport system is one that (i) provides for safe, economically viable and socially acceptable access to people, places, goods and services; (ii) meets generally accepted objectives for health and environmental quality . . . ; (iii) protects ecosystems by avoiding exceedance of critical loads and levels for ecosystems integrity . . . and (iv) does not aggravate adverse global phenomena, including climate change, stratospheric ozone depletion, and the spread of persistent organic pollutants. 119 The same objective of 'sustainable development' was echoed in the title of Make Roads Safe: A New Priority for Sustainable Development, published by the Commission for Global Road Safety in 2006. The Commission was created by the FIA Foundation for the Automobile and Society in 2005 with former NATO Secretary-General Lord Robertson as chairman. Its advisory board included prominent people connected with the automobile sector, but also members of the WHO, the World Bank and the OECD. 120 The campaign 'Make roads safe' has been supported by a number of celebrities and world leaders such as British Prime Minister Tony Blair and Archbishop Desmond Tutu. 121 The Commission also supported the GRSP created by the World Bank in 1999. Cooperation and overlap of personnel and website space with the GRSP were such that it was questionable to what extent these were distinct bodies. The GRSP website included a link from which the 2004 World Report on Road Traffic Injury Prevention could be downloaded, and it issued guidelines for the implementation of its recommendations. 122 However, below the surface of agreement on behavioural, technological and administrative changes, underlying differences about where transport policies in particular and socio-economic development in general should be heading were played out. Make Roads Safe: A New Priority for Sustainable Development focused attention on road improvements while referring to developmental challenges. 123 The report argued that low-income countries could and should learn from the experiences of high-income countries but without having to imitate every step. Instead of repeating the Kuznets curve relation between economic growth and RTIs, found in Europe and North America in the 1960s, low-income countries should use additional measures as a form of shortcut to lower RTIs. 124 Rather than aiming at a 'systems approach', like the 2004 World Report, the Commission proposed a 'safety systems approach', which recalled the Haddon Matrix of 1968. This combination of pre-crash, crash and post-crash measures for people, vehicle and environment made the approach appear comprehensive, while accepting motorised transport as a given and narrowing 'environment' to the technical qualities of a road: its design, markings, maintenance, protection, pedestrians' crossing and rescue facilities. 125 Improving the safety of vehicles held little promise in low-income countries, the report argued, since few people could afford modern cars and the variety of vehicles used prevented a quick effect. Ruling out an emphasis on people as a way of 'blaming the victim', the report focused on roads as central elements.
This strategy was not limited to low-income countries since, '[a]mongst the best performing industrialised nations, improved road infrastructure remains the major source of expected future contributions to casualty reduction targets.' 126 Nevertheless, the development of low-income countries formed a central argument, using the Millennium Development Goals (MDGs) as points of reference. The report criticised the MDGs for ignoring RTIs and insisted on the central importance of roads for achieving these goals in a long-term, 'sustainable' manner. 127 Thus, according to the report, the aim could not be development without roads but development with lots of roads of good quality. A central recommendation, therefore, was that 'at a minimum 10% of all road infrastructure projects should be committed to road safety and that this principle should be rigorously and consistently applied by all bilateral and multilateral donors'. 128 Thereby, the report reconfigured RTIs from a development issue to a development-assistance-programme issue. Three years later, the WHO issued a Global Status Report on Road Safety, which sought to provide an overview of the state of RTI-related conditions in all countries worldwide. Based mainly on information gathered in a questionnaire, the report addressed a broad range of factors: institutional settings, the quality of data, vehicle and infrastructure standards, legislation on some of the main behavioural risk factors, medical care and exposure to risk. While the behavioural factors (speeding, drink-driving, use of motorcycle helmets, use of seat-belts and child restraints) received the most attention in terms of pages, the exposure to risk, defined as 'the existence of policies to encourage nonmotorised modes of transport and public transport and strategies to achieve these, and levels of motorization', was also addressed and related questions were included in the questionnaire. 129 The results reflected years of prioritising transport by car: forty-four per cent of all countries worldwide had no policy, national or local, that encouraged public transport, and sixty-eight per cent had no policy that encouraged walking or cycling as an alternative to motorised transport. In an obvious attempt to produce an inspiring model, the report presented positive examples from Bogotá, Sweden, Delhi and Lagos, where policies of this type had tangibly reduced RTI numbers. 130 In addition, the need for such policies was addressed in the first of five central recommendations, although it was framed as an issue of 'road design and infrastructure, land use planning and transport services', understating its systemic nature. 131 The World Bank had cooperated with both the Commission for Global Road Safety and with the WHO, and in many ways its position has been the most interesting and possibly the most important, given its financial clout and its influential voice in the development debate. A 2007 report, issued by the in-house Independent Evaluation Group of the World Bank, painted the picture of an institution in search of a position regarding a rapidly evolving issue.
The report spelled out the stakes involved: an expected 'huge expansion' of the global automobile market, based on the motorisation of China and India, with the prediction that 'over the next 20 years, more cars may be built than in the 110-year history of the industry'. 132 Clearly, decisions on this issue involved lots of money, and a huge private market, in which the Bank and its investors could hardly remain disinterested, especially since those two countries had absorbed almost half of all World Bank lending in the transport sector since 2000. 133 However, the Evaluation Group also recognised that the pressure of a three per cent annual growth rate of cars on global roads increasingly created environmental and health problems -and thereby eventually economic burdens. 134 Times were changing and the Bank risked becoming outdated unless it changed, too. The report regarded the dominance of road building in the present World Bank transport portfolio as a threat to its long-term relevance. While highway construction would remain important, other 'transport modes and themes' were also becoming significant and it was 'essential to see transport opportunities with a multimodal setting of integrated urban and rural concerns'. 135 Given the importance of the 'poor in urban and rural areas of developing countries' and the bad condition of the non-motorised transport facilities open to them, which had been neglected, including by World Bank activities, the report strongly recommended that the Bank begin funding related projects despite 'the lack of or very low revenue-generation nature of such projects'. 136 After all, holistic road safety approaches were increasingly being pursued 'in all regions'. 137 Presumably, the World Bank would be well advised to recognise these tendencies, their demands and opportunities at an early stage. In other words: [P]ast Bank experience, with its relatively narrow, albeit successful, primary focus on roads, will be insufficient to provide for the Bank's future response to these emerging challenges. . . . Overall, the sector is at a crossroads, where it has a good window of opportunity to attain a higher level of relevance and offer a better level of support to its clients. 138 Conclusions In 2012, the history of RTIs as part of the development discourse is far from finished. The upcoming years and decades will clearly bring changes in the number of RTI deaths, in RTI discussions and in their perceived relation to development. All stakeholders will be able to choose from a broad range of approaches which have been endorsed by different actors at different times in the twentieth century (Tables 2 and 3). [Table 3: Matrix of events-discourse correlation. Abbreviations: HIC - high-income country; MIC - middle-income country; LIC - low-income country; NMT - non-motorised traffic.] This paper identifies eight approaches, each involving different elements, RTI reduction strategies, responsible agents and interpretive frameworks. While addressing the behaviour of traffic participants has been the preferred approach, all the others have been chosen as well at some point or another, depending on world views and interests. Implicitly and, increasingly, explicitly, development played an important role in these choices. From the beginning, RTIs have formed part of the spread of motorised traffic. First in North America and then in Europe, rising numbers of cars led to a rising number of RTIs, which caused concern.
In the 1970s, RTI numbers were brought to an acceptable -or accepted -level by a combination of regulations, affecting a broad range of traffic components, and by some measures, which upgraded non-motorised relative to motorised traffic in urban areas. After the 1970s, the number of cars rose in other parts of the world, causing an increase in RTIs in those countries as well. By 2000, the RTI situation was recognised as critical. This paper argues that current difficulties arise in part from a selective transfer of development models, derived from pre-1945 high-income countries, to post-1945 low-income societies. In order to be less lethal, the transfer of individualised motorised transport as the principal transport model would have required a physical, legal and administrative infrastructure, which low-income countries clearly did not have. On a large scale, the process was similar to the policy of the baby food companies in the 1970s, when they sold infant formula to people in low-income countries who did not have the reliable infrastructure of clean water and sterilisation technology necessary for a safe administration of formula. 139 Although the export of motorised transport is more complex than that of baby formula, the underlying logic is comparable: a mixture of profit-driven interests, a Eurocentric view of 'modernisation' and 'development', and a lack of infrastructure such as helmets, speed limit regulations, police enforcement, modern cars, road maintenance, etc. In both cases, further policies depend on the extent to which the health discourse can affect the underlying development discourse. So far, RTIs have been recognised as an important element of the larger transport/development discourse, but a number of factors prevent their prioritisation:
• The economic interests of stake-holders of motorised traffic, above all the automobile industry;
• The traditional conception of 'modernity' as motorisation, deeply ingrained in the collective consciousness in high- as well as in low-income countries;
• The path dependency of existing material infrastructures, which privileges motorised vehicles and, on the other side, the scarcity or absence of a material infrastructure for safer modes of travelling, especially in low-income countries;
• The class difference separating the car drivers and the non-drivers in low-income countries and thereby those who are most likely to survive collisions unharmed and those who are most likely to die. This difference also effectively excludes the potential victims, who would benefit most from policy changes, from political decision-making processes in their countries. 140
• The deceptive assurance of a Kuznets-curve-like concept that promises a solution to the specific problems of low-income countries once they cease to be low income.
• The sheer extent of reconceptualisation necessary to separate motorisation from economic development.
It seems that it is the idea of development as 'catching up', of imitating step by step the development taken in high-income countries, which most stands in the way of novel systemic approaches. This concept ignores the fact that it is not possible to extend the car density, the level of fossil fuel consumption and the culture of suburban spread of Europe and North America to the rest of the world. On the basis of present-day knowledge about the physical state of our planet, a global repetition of the motorisation experience in high-income countries is simply not possible and is therefore not a useful development model.
Nor is it necessary for an intelligent modernisation. The mobile phone is one example of how a modern means of communication can be extended globally without spreading its Europe-centred predecessor (in this case, networks of landline cables) first. There is no reason to assume that something similar could not happen in the field of mobility and transportation, given the political will and economic prospects. Promoting non-motorised transport would reduce RTIs in high-income countries and even more so in low-income countries. Since, by broad consensus, development in high- as well as low-income countries is meant to improve public health, the need to reduce RTIs may eventually serve as a powerful argument to search for intelligent transport solutions. For this argument to become effective, it must be made convincingly -so discourse is important -and it must be heard. In this context, the promotion of a medical doctor with a history of health promotion in low-income countries to the post of World Bank President is an interesting development.
2018-04-03T02:43:50.329Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "de4687899dcf53322fc7c1b873aaf2cb921ac0d5", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc3566732?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "ab8ff6c4842fbe6f2c1cca05bd992e11c0142d1e", "s2fieldsofstudy": [ "Sociology", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
246418676
pes2o/s2orc
v3-fos-license
Assessment of the Risk of Nodal Involvement in Rectal Neuroendocrine Neoplasms: The NOVARA Score, a Multicentre Retrospective Study Rectal neuroendocrine tumors (r-NETs) are rare tumors with an overall good prognosis after complete resection. However, there is no consensus on the extension of lymphadenectomy or regarding contraindications to extensive resection. In this study, we aim to identify predictive factors that correlate with nodal metastasis in patients affected by G1–G2 r-NETs. A retrospective analysis of G1–G2 r-NETs patients from eight tertiary Italian centers was performed. From January 1990 to January 2020, 210 patients were considered and 199 were included in the analysis. The data for nodal status were available for 159 cases. The nodal involvement rate was 9%. A receiver operating characteristic (ROC) curve analysis was performed to identify the diameter (>11.5 mm) and Ki-67 (>3.5%), respectively, as cutoff values to predict nodal involvement. In a multivariate analysis, diameter > 11.5 mm and vascular infiltration were independently correlated with nodal involvement. A risk scoring system was constructed using these two predictive factors. Tumor size and vascular invasion are predictors of nodal involvement. In addition, tumor size > 11.5 mm can be used as a driving parameter for better-tailored treatment during pre-operative assessment. Data from prospective studies are needed to validate these results and to guide decision-making in r-NETs patients in clinical practice. Introduction Rectal neuroendocrine tumors (r-NETs) represent a heterogeneous group of rare malignancies that account for up to 13.7% of all neuroendocrine tumors (NETs) [1]. According to the Surveillance, Epidemiology, and End Results registry database of the National Cancer Institute, the age-adjusted incidence of r-NETs has increased about sixfold over the last 40 years, probably due to the increased use of endoscopic procedures for colorectal cancer screening [2]. R-NETs typically appear as single smooth yellowish polypoid lesions that originate from deeper layers of the mucosa and protrude from the mucosal surface into the lumen of the rectum without surface distortion [3,4]. The 2016 European Neuroendocrine Tumour Society (ENETS) guidelines recommend different surgical approaches to r-NETs, including endoscopic mucosal resection (EMR), endoscopic submucosal dissection (ESD), transanal endoscopic microsurgery (TEMS), and low anterior resection (LAR), depending on the tumor size, endoscopic ultrasound staging (T and N), and World Health Organization (WHO) grading (G1/2 or G3) [5,6]. Nevertheless, there is no consensus on the extension of lymphadenectomy or contraindications to extensive resection. Rectal lesions less than 10 mm in size typically show an indolent course, with the nodal involvement incidence ranging from 1% to 10%, a high rate of curative resection, and 5-year survival of 98 to 100%. Conversely, in the case of r-NETs 10 mm to 20 mm and larger than 20 mm in size, the reported incidence of nodal involvement increases to 30% and 60%, respectively, with a worse prognosis [7][8][9]. Previous studies have shown that a tumor size of >10 mm or >20 mm, stage, depth of submucosal invasion, lymphovascular invasion (LVI), or tumor grade 3 (G3) are important predictors of lymph node metastases, but the risk factors for nodal involvement have not been clearly elucidated [3,[10][11][12][13][14].
This study aimed to identify potential clinical and histopathological risk factors for lymph node metastases and to construct a risk stratification score relevant for determining the proper treatment option in G1-G2 r-NETs. Study Design and Participants This was a retrospective analysis of a multicentric prospective database of 210 consecutive patients affected by r-NETs referred to 7 tertiary Italian centers from January 1990 to January 2020. The study was approved by the local Institutional Review Board (Comitato Etico Indipendente, S.Orsola-Malpighi Hospital, Bologna, Italy) and was conducted in accordance with the principles of the Declaration of Helsinki (revision of Edinburgh, 2000). The primary endpoint of this study was the identification of predictive factors related to the presence of nodal involvement in patients with r-NETs. All consecutive patients undergoing endoscopic or surgical resection of r-NETs at 7 tertiary Italian centers during the study period were included and provided informed consent at the time of surgery for anonymous review of their data for research purposes. Patients with neuroendocrine carcinoma (NEC) G3 (according to the WHO 2010 classification), mixed adenoneuroendocrine carcinoma (MANEC), or no evidence of r-NETs on pathology review were excluded from the analysis. Nodal involvement was defined on the basis of the pathology report in surgically resected patients or of unequivocal imaging findings (magnetic resonance, endoscopic ultrasonography, or PET with Ga-DOTA-peptide). Indeed, endoscopic ultrasonography, in addition to MRI and PET/CT, is an accurate tool to capture nodal metastases even if pathologic nodal status is not confirmed. Data about nodal involvement were not available if patients did not undergo surgical resection or had no proper imaging. Data Collection All data were prospectively collected at the center where surgery was performed for every patient. A single computerized data sheet was created and patient demographics, clinical presentation, surgical, and pathological characteristics were retrospectively analyzed. Data collected included: gender, age, onset of symptoms, endoscopic features (presence of ulceration, presence of depressed lesion, or multiple lesions), type of endoscopic resection, and/or surgical procedures performed. The gathering of data from 7 tertiary Italian centers provides a picture that reflects the risk profile for lymph node metastases in routine hospital care. Pathology Assessment Pathological features, such as tumor size, localization site according to the European Society for Medical Oncology (ESMO) guidelines definition for rectal carcinoma (<5 cm beginning at the anal verge as low, 5-10 cm as mid, and 10-15 cm as high rectal cancer), lymphovascular and perineural invasion, Ki-67, WHO 2010 classification (used at the time of histopathological exams), and the ENETS grading system, were listed [15][16][17]. Ki-67 values are expressed as the percentage of positively staining malignant cells using the anti-human Ki-67 monoclonal antibody MIB1. The margin clearance was not available since a review of tissue samples was not performed, owing to the retrospective study design. All specimens were examined by a NET expert pathologist at each center. Statistical Analysis Categorical variables are expressed as numbers and percentages and compared using the chi-squared test or Fisher's exact test when appropriate.
Continuous variables are expressed as medians and interquartile range (IQR, 25th to 75th percentiles) and compared using the Mann-Whitney U test. A receiver-operating characteristic (ROC) curve was built to identify the best cutoff value for the prediction of nodal involvement according to the size of the tumor and Ki67 value. Analysis of the predictive factors of nodal disease was carried out by univariate and multivariate analysis using logistic regression. Predictive factors were expressed as odds ratio (OR) and 95% confidence interval (95% CI). A value of p < 0.05 was considered statistically significant. Statistical analyses were performed using SPSS Statistics v. 22 (IBM). Study Population Of the 210 patients considered for the analysis, eleven patients were excluded, ten because they were affected by rectal NEC and one for being affected by MANEC. The remaining 199 patients met the inclusion criteria and were included in the analysis. The selection process is shown in Figure 1. ROC Curves Two ROC curves of the tumor size and Ki-67 were used to determine the best cutoff values predicting nodal involvement. The best tumor size cutoff value for nodal involvement was 11.5 mm (area under the curve standard error, 0.747 ± 0.032; Figure 2a). In the cohort, twenty-six (13.1%) patients presented with r-NETs > 11.5 mm, and, among them, 16 (61.5%) patients had nodal involvement.
On the other hand, the best point for Ki-67 predicting nodal involvement was >3.5% (area under the curve standard error, 0.843 ± 0.054) (Figure 2b). Twenty-six (13.1%) patients had Ki-67 > 3.5%, and, among them, 7 (30%) patients had nodal involvement. On this basis, we created a predictive model of nodal involvement by combining the two clinicopathological variables within the NOVARA score (assessment of the risk of nodal involvement in rectal neuroendocrine neoplasms) and by assigning weight 1 to each of the following variables: tumor size > 11.5 mm and presence of vascular invasion. Accordingly, the patients were stratified into three different risk groups as follows: low-risk group (zero predictive factors), intermediate-risk group (one predictive factor), and high-risk group (two predictive factors). Among the patients with both tumor size and LVI status available, 147 (83%) of the patients were categorized as low-risk, 20 (11%) patients as intermediate-risk, and 10 (6%) as high-risk. The data regarding tumor size and/or vascular invasion were not reported in 22 (11%) of the patients. Of the 147 low-risk patients, the data on regional lymph node status were available in 113 cases and nodal involvement was found in one case (0.9%). Of the 20 intermediate-risk patients, lymph node metastases were noted in four of the fifteen patients with known lymph node status (26.7%). Of the 10 high-risk patients, all the patients had known nodal status and all the patients presented with nodal involvement (100%; Figure 3).
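The cutoff selection and the score itself are simple enough to sketch in code. The snippet below is an illustration, not the authors' SPSS workflow: the paper does not state which criterion was used to pick the optimal ROC cutoffs, so Youden's J statistic is assumed here, and the argument names and example values are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(y_true, y_score):
    """Return the threshold maximising Youden's J = sensitivity + specificity - 1.

    y_true: binary nodal involvement (1 = involved); y_score: tumor size in mm
    or Ki-67 %. Assumed criterion; the paper reports only the resulting cutoffs.
    """
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]

def novara_score(size_mm, vascular_invasion):
    """NOVARA score: one point for tumor size > 11.5 mm, one for vascular invasion."""
    return int(size_mm > 11.5) + int(bool(vascular_invasion))

RISK_GROUP = {0: "low", 1: "intermediate", 2: "high"}

# Hypothetical patient: a 14 mm lesion with vascular invasion scores 2 (high risk).
score = novara_score(size_mm=14.0, vascular_invasion=True)
print(score, RISK_GROUP[score])
```

In the cohort reported above, these three groups corresponded to observed nodal involvement rates of 0.9%, 26.7% and 100%, respectively.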
Discussion We evaluated the clinicopathological risk factors related to nodal involvement in a large cohort of newly diagnosed patients with G1-G2 r-NETs. Furthermore, we provided initial evidence of a predictive score that takes into account tumor size and vascular invasion. The incidence of r-NETs has been increasing in recent decades and, despite the overall good prognosis, the long-term prognosis of r-NETs is comparable to that of colorectal cancer in the case of nodal involvement [13,18,19]. Thus, a risk stratification-based approach could suggest the appropriate surgical or endoscopic management in this setting. Previous studies reported a correlation between primary tumor size and the likelihood of lymph node metastases in r-NETs [3,10,11,[20][21][22][23][24][25][26][27][28]. Therefore, the National Comprehensive Cancer Network (NCCN) guidelines and the latest ENETS guidelines recognize the identification of tumor size as a major parameter to determine the patient prognosis and therapy options [5,29]. According to ENETS guidelines, any decision regarding the therapeutic approach is based on the assessment of tumor size, muscle layer invasion, grading, and presence of regional or distant metastases. Tumors that are smaller than 10 mm and well-differentiated should be completely removed endoscopically, whereas r-NETs larger than 20 mm, which are more likely to invade muscularis propria and to have malignant potential, should be considered for surgical resection [5,9]. On the other hand, there is still no consensus regarding r-NETs of intermediate size (10-19 mm), where an accurate tumor assessment by endoscopy and endoanal ultrasound should guide towards an endoscopic, transanal, or surgical approach [30,31]. We showed that tumor size greater than 11.5 mm and vascular invasion were independent risk factors for lymph node metastases. Nevertheless, despite what is reported in the current ENETS guidelines, our investigation lowered the dimensional cutoff for clinical decisions from 20 mm to 11.5 mm, in line with the results from the latest retrospective analyses regarding r-NETs [13,[32][33][34][35]. Similar to our findings, two large retrospective studies based on national registries published in 2019 and a retrospective report from the French group of endocrine tumors (GTE) confirmed that tumor size larger than 10 mm was related to nodal involvement in non-metastatic r-NETs, along with other predictive factors, such as tumor grade and presence of muscular and lymphovascular invasion [33][34][35]. Another retrospective registry-based study by Concors et al. found that the cutoff value of 11.5 mm was also able to predict the risk of distant metastases in well-differentiated and moderately differentiated r-NETs, suggesting a possible role for radical surgical resection in these cases [32]. With regard to vascular invasion, defined by the presence of tumor cells in blood vessels, our findings were in agreement with the available literature, suggesting its predictive role of nodal involvement [36,37]. The prevalence of LVI in small r-NETs was 21.8% according to a recent systematic review and meta-analysis by Kang et al., and, when separately analyzed, the vascular invasion had a stronger impact on lymph node metastasis than the lymphatic invasion [38].
Moreover, we found Ki-67 > 3.5% to be the optimal cut-point value for the risk of nodal metastases. However, Ki67 did not retain its association with the risk of nodal involvement upon multivariate analysis. Nonetheless, of the 199 consecutive patients considered for the analysis, nodal involvement was found in 18 (9%) of the cases and, among these, 10 (55.6%) patients presented with tumor size larger than 11.5 mm and vascular invasion. The combination of these two single parameters in the NOVARA risk prediction score, of which tumor size can be assessed preoperatively, has made it possible to differentiate three categories with a distinct risk of nodal involvement, which could allow discussion of better-tailored treatment and a dedicated surveillance program. Thus, the NOVARA score can identify patients with a low risk of nodal involvement, who are likely to have an excellent prognosis and benefit from endoscopic resection, and patients with intermediate to high risk, who should be considered for surgical resection and/or close monitoring. The retrospective design of our study, along with the use of a large dataset with certain missing data, are two limitations to be acknowledged. In particular, the main limitation is the lack of long-term follow-up data, which precludes the possibility to analyze the impact of lymph node metastases on survival outcomes. Moreover, since nodal pathology or imaging was not performed in all the patients as per standard clinical practice, occult metastases might have been underestimated in some patients, and this could have led to a selection bias. Nevertheless, to our knowledge, this is the first Italian multicentric study and one of the few non-registry-based studies that assessed the predictors of nodal involvement in a wide cohort of patients with G1-G2 r-NETs. Additionally, given the paucity of dedicated high-level evidence, our study developed a scoring system for risk stratification that can be incorporated in clinical practice and help guide discussions with patients regarding their risk of lymph node metastases. Conclusions This is one of the largest multicenter studies conducted on this topic so far. According to our results, tumor size and vascular invasion predicted nodal involvement and were incorporated in the NOVARA predictive score, according to which patients presenting both factors had a higher risk of nodal involvement at diagnosis and should thus be considered for radical surgical resection. In addition, our findings suggest that tumor size > 11.5 mm is a fundamental variable guiding the most appropriate surgical approach during pre-operative assessment. In our view, well-designed, prospective clinical trials are required to validate these results and to guide decision-making in r-NETs patients in everyday clinical practice worldwide.
2022-01-31T16:12:22.961Z
2022-01-28T00:00:00.000
{ "year": 2022, "sha1": "061eccd2fe78baa08e77bf25eba700927ccb2e09", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/11/3/713/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "db9c5ec92ba0bc0aa393ee4ac2b8469aea23c553", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259120943
pes2o/s2orc
v3-fos-license
Clinical pointers in Prevotella septic arthritis of the hip: a case report Background Infective arthritis is an orthopaedic surgical emergency. Staphylococcus aureus remains the commonest causative bacterium across all age groups. Prevotella spp. as a cause of infective arthritis is extremely rare. Case report We present our case of a 30-year-old African male patient who presented with mild signs of infective arthritis of the left hip. His risk factors were his background retroviral disease, intravenous drug abuse, and a previous episode of left hip arthrotomy, which healed uneventfully after intervention. The current presentation was treated with arthrotomy of the hip, fluid lavage, and skeletal traction based on our clinical findings and the rarity of the presentation. At the last follow-up, the patient was seen to be mobilising non-weight bearing with crutches, and pain-free on the left hip. Conclusion A high index of suspicion for Prevotella Septic Arthritis (PSA) should be exercised when treating infective arthritis patients with background joint arthropathies and intravenous drug abuse, especially in individuals with significant immunosuppression and/or recent tooth extraction. Fortunately, although rare an entity, good outcomes can be expected with early diagnosis and classic treatment principles of joint decompression and lavage as well as guided antibiotic therapy. Introduction Infective arthritis is one of a few orthopaedic surgical emergencies [1]. Bacterial septic arthritis is by far the commonest form of infective arthritis [1,2]. Staphylococcus aureus (SA) accounts for between 70 and 90% of cases of infective arthritis, with the remainder of the cases being caused by either other gram-positive, gram-negative, mycobacterial or anaerobic organisms [1][2][3]. The latter micro-organisms rarely affect synovial joints, especially Prevotella spp. [4], and as such we present our case report of Prevotella septic arthritis (PSA) of the hip, with particular emphasis on a clinical approach with pointers for making a diagnosis, all the way through to rehabilitation of the affected joint. Case report We present a 30-year-old African male who reported a 4-day history of worsening left hip pain, swelling, and inability to weight bear on the left lower limb. He gave a background history of being retroviral disease reactive, which was uncontrolled on treatment (CD4 = 218 cells/ml, viral load = 3320 copies/ml). Of note is that he also suffered from pulmonary tuberculosis (December 2021), which was treated successfully with no sequelae; however, he was also known to suffer from intravenous drug addiction, using his arm veins for the injections. He denied any prior dental procedures but gave a history of previous left hip septic arthritis a year prior to the current presentation; he was treated with surgical joint decompression and antibiotics, and an unremarkable recovery was reported. Clinical findings On examination, the patient was generally ill-looking, vitals (BP = 119/88 mmHg, HR = 101 b/m, RR = 20 b/m, Temp = 36.4 °C), and the left hip was held flexed, abducted, externally rotated, and irritable to examination with marked tenderness. Radiographs (Fig. 1) confirmed the clinical posture of the left hip, with destruction of the femoral head and a widening of the joint space with superolateral subluxation of the femoral head.
Laboratory infective marker workup was in keeping with an infective process, with raised septic markers (Erythrocyte Sedimentation Rate (ESR) = 113 mm/hour, C-Reactive Protein (CRP) = 50 mg/L, PLTs = 532 × 10⁹/L); however, the White Cell Count (WCC) and renal function were normal. Therapeutic intervention The patient underwent emergent hip arthrotomy, see Fig. 2, with copious yellowish pus evacuated from the hip. The hip also received extensive fluid lavage and a Portovac drain was left in situ for continuous post-operative drainage in the ward. The microscopy results surprisingly revealed Prevotella as the infective micro-organism. The patient received intravenous antibiotics (Metronidazole 500 mg iv. ter die sumendum/three times daily (TDS), in our case) for 4 weeks and trans-femoral skeletal traction with Brown's frame as shown in Fig. 3a, which aided in repositioning the femoral head within the acetabulum, as shown in a radiograph (Fig. 3b) done at 4 weeks post-traction. Discussion Staphylococcus aureus is still by far the commonest cause of septic arthritis [1,2]. A patient's age group and clinical condition may predispose them to infective bacteria other than the usual SA [1][2][3]. Rarer causes of septic arthritis include Prevotella species [4][5][6][8]. The literature reports these micro-organisms to be isolated in only a handful of cases [4][5][6][7][8][9]. As such, there is no level 1 evidence for the diagnosis, treatment, and eventual outcomes of PSA. Our case under discussion was a young male who fits the profile for PSA as per his risk factors [4]. Shalman et al. reported the condition to affect individuals in the 5th and 6th decades, but it can also be expected in younger patients suffering from medical co-morbidities and risk factors, as was the case in our patient [4][5][6][7][8]. On history, he had a prior surgical history of the same (LEFT) hip for a previous infective arthritis that was treated and had healed uneventfully. Naseir et al. also reported Prevotella septic arthritis in a joint with previous surgery. However, Shalman et al. and others reported the infection in surgically naive joints [5,9]. Recent dental surgery has also been associated with PSA following dental tooth extraction [6,7]. PSA post-dental surgery can develop as early as 48 hours post-tooth extraction, especially in elderly patients [6]. Usually, in cases that follow dental work, there is an underlying arthropathy of sorts [7]. Joint inflammatory arthritides have always been noted to be risk factors for the development of infective arthritis on the whole [8][9][10][11]. Clinically, PSA presents with the classic signs of infective arthritis with pain, swelling, warmth, and loss of function of the involved joint, however usually with an associated draining sinus [4,5]. The picture can be easily confused with that of subacute and even chronic infective arthritis like the one seen in tuberculosis of the joints. Ironically, radiological changes with PSA are similar to those of chronic infective arthritis. Our case presented with an increased joint space and an effusion. Surgical drainage usually reveals a yellowish-to-greenish collection of pus [4]. Microscopy revealed a small gram-negative rod on Haematoxylin and Eosin staining, an organism previously referred to as Bacteroides species. Fortunately, these microorganisms are usually sensitive to antibiotics [12,13].
However, the duration of treatment is not well defined in the literature, and so we adopted treatment as for the usual SA infective arthritis, with the use of intravenous antibiotics for 4 weeks and an additional 2 weeks of oral antibiotics post-discharge [4,13]. Metronidazole is the gold standard of treatment, with clindamycin being the only alternative [13]. Traction was applied for the first four weeks with the plan to have the joint heal in an acceptable arthrodesis position of hip flexion at 15 degrees. Arthrodesis was preferred in our case since there was established joint destruction at presentation and the patient was not an ideal candidate for arthroplasty replacement due to his age and co-morbidities. At the last follow-up, the patient was seen to be mobilising non-weight bearing with crutches, and pain-free on the left hip. Conclusion PSA is an uncommon cause of a common orthopaedic emergency. A high index of suspicion should exist when treating septic arthritis patients presenting with a background of general inflammatory arthritis and/or previous total joint replacement, especially in individuals with significant immunosuppression, intravenous drug abuse, and/or recent tooth extraction. Fortunately, although rare an entity, good outcomes can be expected with early diagnosis and classic treatment principles of joint decompression and lavage as well as guided antibiotic therapy.
2023-06-10T13:41:21.632Z
2023-06-10T00:00:00.000
{ "year": 2023, "sha1": "18e6da101d50ebdb7835c46ee079478a820078ba", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "18e6da101d50ebdb7835c46ee079478a820078ba", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
39361338
pes2o/s2orc
v3-fos-license
Cervical osteomyelitis caused by Burkholderia cepacia after rhinoplasty Burkholderia cepacia, previously known as Pseudomonas cepacia, has been implicated in vertebral osteomyelitis in patients who are intravenous drug abusers. We report a case of acute vertebral osteomyelitis in a non-intravenous drug user, following an elective rhinoplasty. Introduction Burkholderia cepacia is usually non-pathogenic in healthy people. However, in the past 2 decades, Burkholderia cepacia has emerged as a significant human pathogen, particularly in intravenous drug users and patients with cystic fibrosis (CF) [1,2]. One highly transmissible strain spread across North America and Britain and another between hospitalized CF and non-CF patients [3]. We report a case of acute vertebral osteomyelitis caused by Burkholderia cepacia in a patient after elective rhinoplasty. Case Report A 49-year-old female with a past medical history of hypertension, diet-controlled type 2 diabetes mellitus and hyperlipidemia was admitted for severe neck pain and stiffness. The neck pain worsened over two days, with pain radiating into the upper back and development of bilateral upper extremity numbness. A recent tuberculin skin test was reportedly negative. She had quit smoking thirteen years previously and denied any alcohol consumption or illicit drug use. Three weeks prior to admission to our hospital, the patient underwent an elective rhinoplasty in Iran with no reported post-operative complications. Physical examination demonstrated C5-6 tenderness upon palpation, with marked limitation on range of motion secondary to pain. The patient had no fever on admission. All laboratory tests were unremarkable, except for a CSF protein elevated to 234 mg/dl. Cervical CT with contrast revealed pre-vertebral soft tissue swelling from C2 to T1. MRI revealed abnormal signal involving the C5-6 vertebral bodies with an abnormal soft tissue component anteriorly as well as posteriorly in the epidural space. A cervical CT-guided fine-needle aspiration was nondiagnostic and the patient underwent open biopsy. Based on the operative report, the vertebral body was granular and broken apart. The disc material was indurated. Partial corpectomies were performed at C5-6 because the abscess had destroyed the endplates completely at the vertebral bodies. Multiple specimens including disc material and vertebral debris were sent to pathology and microbiology. Surgical specimen pathology revealed acute osteomyelitis. The culture isolate identified by Vitek (bioMérieux, Durham, North Carolina) at 48 hours yielded Burkholderia cepacia. Susceptibilities were determined by the Kirby-Bauer method and the Etest. Kirby-Bauer methodology revealed susceptibility to ceftazidime, imipenem-cilastatin, levofloxacin, meropenem and piperacillin. The meropenem MIC was 2 µg/ml, which is considered susceptible by CLSI standards [4]. The patient was treated with meropenem 1 gm every 8 hours for 6 weeks with gradual resolution of neck pain and upper extremity numbness. Discussion Burkholderia cepacia, previously known as Pseudomonas cepacia, is a motile, aerobic, catalase-positive, gram-negative organism, first described in 1949 by Walter Burkholder of Cornell University [5]. It is ubiquitous in the environment and is frequently found in association with soil, water and plants.
Vertebral osteomyelitis is primarily a disease of adults older than 50 years. The overall incidence of vertebral osteomyelitis has steadily increased in recent years secondary to the increasing age of the population, injection drug use and increasing rates of nosocomial bacteremia from intravascular devices and other forms of instrumentation. Life-threatening sepsis from intravenous flush solutions, and outbreaks of Burkholderia cepacia from contaminated ultrasound gel and from contamination of albuterol and nasal spray, have also been recently reported [6,7,8]. To our knowledge, this is the first report of a patient who was not an intravenous drug abuser, had an elective rhinoplasty and developed acute vertebral osteomyelitis caused by Burkholderia cepacia. We postulate that this patient's infection occurred around the time of rhinoplasty. The patient was likely exposed to irrigation solution or nasal packing contaminated with Burkholderia cepacia, leading to subsequent infection of her vertebral column by means of a transient bacteremia. Postoperative bleeding is increased after removal of nasal packing. Kaygusus et al. found that 16.9 percent of patients became bacteremic after packing removal [9]. Burkholderia cepacia needs to be considered a pathogen in patients who have undergone routine surgical procedures in proximity to the vertebral column and who are subsequently diagnosed with vertebral osteomyelitis. Proper infection control at local and international levels should be evaluated to ensure safe production and utilization of medical supplies.
2017-04-01T20:00:37.189Z
2008-02-01T00:00:00.000
{ "year": 2008, "sha1": "a41e6e13bd4583fe97ad13d7e3fcd538a1c9541e", "oa_license": "CCBY", "oa_url": "https://jidc.org/index.php/journal/article/download/19736393/187", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a41e6e13bd4583fe97ad13d7e3fcd538a1c9541e", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253798863
pes2o/s2orc
v3-fos-license
A promising tool to explore functional impairment in neurodegeneration: A systematic review of near-infrared spectroscopy in dementia. This systematic review aimed to evaluate previous studies which used near-infrared spectroscopy (NIRS) in dementia given its suitability as a diagnostic and investigative tool in this population. From 800 identified records which used NIRS in dementia and prodromal stages, 88 studies were evaluated which employed a range of tasks testing memory (29), word retrieval (24), motor (8) and visuo-spatial function (4), and which explored the resting state (32). Across these domains, dementia exhibited blunted haemodynamic responses, often localised to frontal regions of interest, and a lack of task-appropriate frontal lateralisation. Prodromal stages, such as mild cognitive impairment, revealed mixed results. Reduced cognitive performance accompanied by either diminished functional responses or hyperactivity was identified, the latter suggesting a compensatory response not present at the dementia stage. Despite clear evidence of alterations in brain oxygenation in dementia and prodromal stages, a consensus as to the nature of these changes is difficult to reach. This is likely partially due to the lack of standardisation in optical techniques and processing methods for the application of NIRS to dementia. Further studies are required exploring more naturalistic settings and a wider range of dementia subtypes. Introduction Dementia is a clinical syndrome, defined by symptoms including problems with memory, language, and executive function, which ultimately emerge due to neuronal loss. The most common cause of dementia is Alzheimer's Disease (AD), which is characterised by amyloid plaques, neurofibrillary tangles, memory impairment, cortical shrinking, and hippocampal atrophy (Arvanitakis et al., 2019). Other degenerative forms of dementia include Dementia with Lewy Bodies (DLB), characterised by Lewy body inclusions and motor symptoms, Fronto-Temporal Dementia (FTD), associated with fronto-temporal degeneration, and Vascular Dementia (VaD), caused by vascular injuries such as ischemia (Arvanitakis et al., 2019). Dementia is a leading cause of disability worldwide (World Health Organisation), in part due to its increasing incidence (Nichols et al., 2022) and deleterious effects on capacity for independent living and cognitive function. The development of new detection and therapeutic tools is thus a priority. As dementia is a progressive disorder, several structural changes are thought to occur in the brain prior to symptom onset (Beason-Held et al., 2013). Prodromal stages such as Mild Cognitive Impairment (MCI) are therefore a critical target for early intervention. In support of this, around 16% of individuals with MCI revert to normal cognition within a year (Koepsell and Monsell, 2012). Yet, current methods for detecting early cognitive decline, the most common of which are cognitive tests, are inadequate. This is demonstrated by the wide variation in their reported specificity and sensitivity (Mitchell, 2013). Such tests are overly reductive, introduce 'arbitrary' cut-off values, and are highly influenced by attention and motivation (Brown, 2015). The subsequent inability to effectively detect early cognitive decline (Elkana et al., 2015) prevents the identification of at-risk individuals prior to irreversible damage.
Advances in imaging and fluid biomarkers are rapidly progressing; however, there is a lack of brain-specific, low-cost, and accessible biomarkers for clinical use. Imaging techniques such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) are invasive, expensive, and limited to specialist centres. As a result, these techniques are not routinely available in care pathways, and thus only provide a snapshot of a patient's status. Conversely, fluid biomarkers do not provide regional brain information. Despite the practical limitations of techniques such as MRI and PET, considerable effort has been made to purpose them for biomarker development. For example, functional MRI (fMRI) has identified altered Default Mode Network (DMN) and medial temporal lobe activity, with inconsistent results in prodromal stages (Sperling, 2011). Of note are the contradicting reports in the literature of a compensatory response in early stages in the form of hyperactivation or reduced deactivation, followed by later protein aggregation and hypoactivation (Bakker et al., 2015; Celone et al., 2006). PET, on the other hand, has helped to identify significantly differentiable patterns of reduced glucose metabolism, using the 2-[18F]fluoro-2-deoxy-D-glucose metabolic tracer, across dementias (Young et al., 2020). Additional work has also crucially revealed a strong association between dementia and vascular dysfunction. Such dysfunction includes neurovascular decoupling, arterial stiffening, increased pulsatility, hypoperfusion, disrupted functioning of the blood brain barrier, and impaired autoregulation (Toth et al., 2017). These changes are thought to arise prior to prodromal stages and partially drive later neuronal damage, in turn resulting in domain-specific impairments and secondary neurometabolic dysfunction (Chung et al., 2017). An imaging technique which is relatively unestablished, and in its infancy compared to its peers, is near-infrared spectroscopy (NIRS). NIRS is a non-invasive neuroimaging technique which uses near-infrared light to measure brain oxygenation by exploiting the differing absorption spectra of molecules in the brain. Within an optical window in the near-infrared range (650-950 nm), oxygenated (HbO) and deoxygenated haemoglobin (HbR) are the primary absorbers of light. NIRS exploits this phenomenon by shining two (or more) wavelengths of light into the brain and using the detected light attenuation to estimate the concentrations of HbO and HbR which, due to neurovascular coupling, can quantify brain activity via capture of the haemodynamic response. In continuous-wave NIRS specifically, light attenuation due to absorption is indistinguishable from attenuation due to unknown scattering effects, making only relative concentration changes from baseline measurable. NIRS is well-suited for widespread clinical use and may thus be highly beneficial for dementia research, particularly as this is a population with a high prevalence of comorbidities (Bunn et al., 2014). NIRS has numerous practical advantages over perhaps more widely known techniques such as MRI and PET: it is non-invasive, well-tolerated, low-cost, portable, and easy to set up and use. This means that it can be used in naturalistic testing settings, such as outdoors or at the bedside, granting access to a more diverse pool of subjects, both socio-economically (as it does not require travel to a hospital) and with regards to accessibility (as it has very few physical restrictions).
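To make the continuous-wave measurement model described above concrete, the sketch below applies the modified Beer-Lambert law to convert optical density changes at two wavelengths into relative HbO/HbR concentration changes. The extinction coefficients, pathlength parameters, and attenuation values are illustrative assumptions for this sketch, not calibrated constants from any of the reviewed studies.

```python
import numpy as np

# Approximate extinction coefficients of HbO and HbR (1/(mM*cm)) at two
# example wavelengths (~760 nm and ~850 nm); values are illustrative.
E = np.array([[0.586, 1.548],   # 760 nm: [eps_HbO, eps_HbR]
              [1.058, 0.691]])  # 850 nm: [eps_HbO, eps_HbR]

def mbll(delta_od, source_detector_cm=3.0, dpf=6.0):
    """Modified Beer-Lambert law: convert changes in optical density at
    two wavelengths into relative HbO/HbR concentration changes (mM).
    `dpf` is the differential pathlength factor; with continuous-wave
    NIRS only *changes* from baseline are recoverable."""
    pathlength = source_detector_cm * dpf
    # Solve delta_od = (E * pathlength) @ delta_conc for delta_conc
    return np.linalg.solve(E * pathlength, delta_od)

# Example: attenuation rises more at 850 nm than at 760 nm, consistent
# with an HbO increase (functional activation).
d_hbo, d_hbr = mbll(np.array([0.010, 0.018]))
print(f"dHbO = {d_hbo:.4f} mM, dHbR = {d_hbr:.4f} mM")
```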
Another important advantage of NIRS is its relatively low sensitivity to movement, enabling the adoption of more ecologically valid tasks. Such a lack of restriction on test subjects and experimental paradigms has important implications for generalisability. This is particularly pertinent in dementia, where symptoms such as motor impairments are difficult to investigate using methods like MRI. With regards to how NIRS can be used to study dementia, NIRS can provide regional brain-specific time courses of oxygenation (for both HbR and HbO) and metabolism, the latter via the quantification of the redox state of cytochrome-c-oxidase using broadband NIRS (Bale et al., 2014). This technique can thus offer information as to how well the brain is supplying neurons with the resources necessary for maintaining function and how efficiently neurons are using these resources. As the NIRS signal itself encodes crucial physiological information, such as the integration of neural-glial-vascular components, applying signal processing techniques (West et al., 2019) enables the exploration of physiological processes like neurovascular coupling. For example, time-to-peak reflects the action of neurovascular mediators, vasomotor reactivity, and oxygen extraction efficiency. Additionally, whilst both fMRI and NIRS interrogate the blood-oxygen-level-dependent response and can subsequently investigate network-level activity, NIRS does so with higher temporal resolution and additional practical advantages (i.e. its portability, low cost, and usability). Consequently, NIRS may help enable the early detection of dementia as it is better suited for functional imaging and for providing biomarkers of brain oxygenation and metabolism which cannot be provided by other neuroimaging techniques. The present article thus sought to review the application of NIRS to dementia. To do so, data from the previous literature were synthesised and the clinical value of their findings was evaluated. Through these aims, future avenues for the adaptation of NIRS for dementia were delineated. Additionally, given growing interest in the use of NIRS in ageing research (Agbangla et al., 2017), a thorough investigation into its application to one of the most prevalent age-related diseases was deemed necessary. Considering the previous fMRI literature (Sperling, 2011) and the numerous practical advantages of NIRS, we hypothesise that NIRS has the potential to become a standard-of-care for the diagnosis and prognostic management of dementia by detecting differences in brain oxygenation and metabolism between dementia, prodromal stages, and controls. Specifically, we expect to observe hypoperfusion and hypoactivation (reflected by blunted haemodynamic responses) in dementia in the resting state and across activation tasks. We assume that this will be observed alongside hyperactivation in the prodrome, in line with the hypothesis of a 'break point' in early stages, as seen using MRI (Dounavi et al., 2021). If differences are not observed between groups, or a consensus cannot be reached, we anticipate that, upon further inspection, the applied NIRS methods will not have been adequately adapted for this clinical population, partially underlying such an outcome. Methods A review protocol was developed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (Page et al., 2021) (PROSPERO registration number CRD42021297315).
A systematic search of MEDLINE (from 1946), Embase (from 1947) and PsycINFO (1806-2021) was subsequently performed on the 1st of February 2023 using the following search terms: (Cognitive impairment OR Cognitive disorder OR Cognitive decline OR Vascular dementia OR Cognitive dysfunction OR Neurocognitive disorder OR Alzheimer* OR Dement* OR AD OR FTD OR DLB OR LBD) AND (Near-infrared spectroscopy OR Near infrared spectroscopy OR NIRS OR oxyhaemoglobin OR Tissue oxygenation index). The results of this search were stored using Covidence (Veritas Health Innovation Ltd.; Australia) and de-duplicated. Two authors (EB, SS) independently screened abstracts and titles for relevant articles and conflicts were resolved by a third reviewer (GB). Full texts were then evaluated for inclusion. Studies involving humans diagnosed with dementia or in prodromal stages, and both case-controlled studies and those exclusively testing clinical groups, were included. Conference abstracts, animal studies, reviews, study protocols, and non-English studies were excluded. Additional studies were identified through cross-referencing the bibliographies of the included studies. The quality of studies was assessed using the Newcastle-Ottawa scale (Wells et al., 2009) for case-controlled studies, the JADAD scale (Jadad et al., 1996) for randomised control trials, and the National Heart, Lung, and Blood Institute quality assessment tool for observational cohort studies (National Heart Lung and Blood Institute, n.d.). These quality assessment scales were chosen as they are the most widely used for the respective study types (Ma et al., 2020). The results of this assessment are provided in the appendix (Fig. A1). Data from the included studies were extracted by two reviewers (EB, SS) and stored using Excel (Microsoft Corporation). The following information was extracted: title, first author's name, publication year, publication journal, experimental paradigm, cohort characteristics, sample size, summary of results, NIRS parameters, and NIRS device. This information is summarised in the tables provided in the appendix. [Fig. 3 caption fragments: (d) figure from Arai et al. (2006); (e) schematic overview of the one-back task; (f) mild cognitive impairment was associated with a reduced and delayed rise in the haemodynamic response, whereas Alzheimer's Disease was associated with a decreased and delayed response compared to controls during memory encoding (figure from Li et al., 2018a); (g) an example of the Benton Line Orientation task (Benton et al., 1978); (h) significantly reduced average change in oxygenated haemoglobin concentration in individuals with late life depression compared to Alzheimer's Disease in a parietal channel during a visuo-spatial task (figure from Kito et al., 2014).] A meta-analysis was not possible due to the heterogeneity of the studied clinical populations, methods used, and data presented, preventing quantitative synthesis. The included studies were tabulated according to (1) cognitive domain and (2) the clinical population studied. The major outcomes of each study were then summarised within this framework. The proportion of included studies which reported a significant difference between dementia or prodromal stages, and controls was calculated. It is important to note that a study was classified as reporting differences between groups if a significant difference was reported in any single outcome in the study, as most statistical analyses performed and outcomes reported were not standardised across studies.
Further, correlations with clinical and behavioural scores, and details as to the NIRS methods used, were considered. A consensus as to the clinical value of the NIRS data could then be ascertained through a critical analysis of this information. Search results The search identified 800 records (Fig. 1). Following title and abstract screening, 138 studies were eligible for full-text screening: 24 were conference proceedings or abstracts, 22 studies were excluded for wrong patient population, 7 for wrong study design, and 1 was a book chapter. Four studies were identified through cross-referencing, yielding a total of 88 studies for final evaluation. Since 1993, when the first paper was published using NIRS in dementia (Hoshi and Tamura, 1993), there has been a steady increase in the number of papers published in the area (Fig. 2). Of note is the gap in published studies between 1998 and 2004. This may be due to a lack of commercially available NIRS systems for research, as two of the three studies published prior to 2004 used a NIRO 500 system (Hock et al., 1996, 1997) and the third used a tissue oximeter (Fallgatter et al., 1997). Nevertheless, a general deficiency in the number of published NIRS studies is observable around these years (Yan et al., 2020). Since then, there has been a significant increase and improvement in commercially available and 'user-friendly' systems, such as those from Artinis Medical Systems, which are now largely designed for use in neuroscientific research. The included studies took several approaches to characterise dementia and its prodrome using NIRS, including recording the resting state (32 studies, Table A1) and employing tasks testing word retrieval (24 studies, Table A2), memory (29 studies, Table A3), motor (8 studies, Table A4), and visuo-spatial function (4 studies, Table A5), as well as tasks such as oddball paradigms (13 studies, Table A6). NIRS is able to detect differences between dementia, prodromal populations, and controls In accordance with our initial hypothesis, we observed that the majority of the included studies successfully used NIRS to identify significant differences in brain oxygenation between dementia or prodromal stages, and controls (~86.4% of studies), supporting its use as a standard-of-care alternative to currently used methods like MRI. Conversely, none of the studies used NIRS to measure neurometabolism, so it remains to be determined whether NIRS can detect differences in neurometabolism across these populations. These studies and their analysis methods are discussed and critically evaluated according to the cognitive domain they explored below. Resting state brain oxygenation is reduced in prodromal stages A total of 32 studies explored resting-state brain oxygenation (Table A1), which used a wide range of devices, experimental paradigms, and analysis methods to do so. Of these, six measured a Tissue Oxygenation Index (TOI) (Viola et al., 2013; Marmarelis et al., 2017; Liu et al., 2014; Tarumi et al., 2014; Li et al., 2022; Viola et al., 2014). This is a commonly used metric in clinics which provides a measure of absolute tissue oxygen saturation, both arterial and venous, from a single measurement location. Several studies found reduced TOI in amnestic MCI (aMCI) (Viola et al., 2013; Tarumi et al., 2014), and cognitively impaired individuals, compared to controls.
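For illustration, the snippet below shows the simple ratio behind TOI as reported by clinical oximeters. In practice, devices derive the absolute haemoglobin values via spatially resolved spectroscopy; the concentrations used here are made-up example values, not data from any reviewed study.

```python
def tissue_oxygenation_index(hbo, hbr):
    """Tissue Oxygenation Index sketch: the percentage of total
    haemoglobin that is oxygenated, TOI = HbO / (HbO + HbR) * 100.
    Commercial devices obtain the absolute values via spatially
    resolved spectroscopy; only the final ratio is shown here."""
    return 100.0 * hbo / (hbo + hbr)

# Example with plausible absolute concentrations (uM):
print(f"TOI = {tissue_oxygenation_index(hbo=42.0, hbr=28.0):.1f}%")  # 60.0%
```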
In support of its clinical use, reduced TOI was also associated with poorer Mini-Mental State Examination (MMSE) (Viola et al., 2013) and memory scores (Tarumi et al., 2014) in aMCI. TOI has also been considered as a marker of oxygenation to investigate therapeutic efficacy; however, the value of TOI in this regard is unclear. Two studies observed negligible TOI reactivity in AD with midazolam administration (Tatsuno et al., 2021; Morimoto et al., 2022), whereas Viola et al. (2014) observed TOI increases in AD with brain reperfusion rehabilitation therapy alongside improved MMSE scores. The unclear nature of the observed alterations in TOI may be due to issues with intra-device variation (Kleiser et al., 2016). Additionally, TOI is often reported as percent tissue saturation which, whilst useful for quick clinical assessments, provides little information as to the physiological processes underlying such saturation values. Many studies also only recorded TOI from a single measurement location, neglecting any spatial variations in oxygenation. Another commonly used method to measure resting state oxygenation, or rather cerebrovascular reactivity (i.e. the HbO increase present upon rapid vasodilation), is through sit-stand manoeuvres or CO2 challenges. Studies using such paradigms yielded mixed results as to differences in response between dementia, MCI, and controls (van Beek et al., 2012; Marmarelis et al., 2021; Babiloni et al., 2014). However, oxygenation during CO2 challenges increased with acupuncture therapy and galantamine treatment in MCI (Ghafoor et al., 2019), VaD (Schwarz, Litscher and Sandner-Kiesling, 2004; Bär et al., 2007), and AD (van Beek et al., 2010). Disrupted cerebrovascular reactivity has been linked to several underlying mechanisms in AD, including the characteristic Aβ deposition proposed to cause oxidative stress and decreased production of vasodilatory factors, and reduced cholinergic tone (Bär et al., 2007). In contrast, resting state data in the absence of such challenges was not found to differentiate between AD (Zeller et al., 2010; Chiarelli et al., 2021), MCI (Soo Baik et al., 2021), and controls. A physiological process of interest in dementia is neurovascular coupling (Shabir et al., 2018), i.e. the coordination of blood flow and neurometabolic demand necessary to maintain neuronal function, which can be explored using a multi-modal approach. Whilst disrupted neurovascular coupling has long been an area of interest in ageing research (Turner et al., 2022; Hutchison et al., 2013), only two studies explored this in AD (Chiarelli et al., 2021) and aMCI (Babiloni et al., 2014) respectively. These used electroencephalography (EEG)-NIRS to identify uncoupling between HbO concentration changes and EEG power in AD (Chiarelli et al., 2021), and an association between poor vasomotor reactivity and EEG coherence in aMCI (Babiloni et al., 2014). However, these studies were limited by a lack of subject-specific anatomical information and low channel counts.
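As a minimal sketch of the reactivity paradigms just described, one illustrative cerebrovascular reactivity metric is the regression slope of HbO against end-tidal CO2 during a hypercapnic challenge. Both the metric definition and the synthetic data below are assumptions for illustration, since the reviewed studies used varied, study-specific definitions.

```python
import numpy as np

def cvr_index(hbo, etco2):
    """Cerebrovascular reactivity sketch: least-squares slope of HbO
    change (uM) against end-tidal CO2 (mmHg) during a CO2 challenge."""
    slope, _ = np.polyfit(etco2, hbo, 1)
    return slope

# Synthetic 60 s recording at 10 Hz: CO2 ramps from 38 to 48 mmHg and
# HbO follows with a gain of 0.3 uM/mmHg plus measurement noise.
rng = np.random.default_rng(0)
etco2 = np.linspace(38, 48, 600)
hbo = 0.3 * (etco2 - 38) + rng.normal(0, 0.2, 600)
print(f"CVR ~ {cvr_index(hbo, etco2):.2f} uM/mmHg")
```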
In both prodromal and dementia stages, computational methods identified resting state cortical disorganisation, however, these methods had several limitations Several computational methods aiming to use resting state data to differentiate between clinical groups (Fig. 3a) have been reported in the literature. Various studies explored network connectivity, many of which identified disturbances in dementia and prodromal stages, the nature of which was not well defined. This is partly due to the diverse methods used to quantify connectivity across studies. One such method is 'effective connectivity', i.e. the causal influence of one brain region's activity over another. Effective connectivity was found to be reduced in MCI across several regions including the bilateral prefrontal cortex (PFC), in which stronger coupling between the dorsolateral PFC and other regions of interest was associated with higher cognitive scores (Bu et al., 2019). Alternatively, correlation coefficients can be calculated between signal time courses to quantify connectivity. Using this method, both increased and decreased connectivity have been found in MCI compared to controls. Zhang et al. (2022) concluded that such decreased connectivity is in line with evidence of hypoperfusion and hypometabolism in MCI (Li et al., 2015). However, this is in direct contrast with the hypothesis of a compensatory response in prodromal stages to support declining cognitive function, which fails in dementia stages (Østergaard et al., 2013), i.e. the 'break point' (Dounavi et al., 2021). Previous studies have also calculated the 'entropy', i.e. complexity, of the NIRS signal, a metric considered to reflect cognitive ability. Reduced signal entropy was observed in AD, which, in accordance with findings from Niu et al. (2019), was localised to the DMN, frontoparietal and ventral/dorsal attention networks (Li et al., 2018b). In contrast, increased signal entropy in the very low frequency bands (0.008-0.1 Hz), also identified in AD (Ferdinando et al., 2022), is suggested to denote increased variation in vasomotor brain waves, potentially indicating greater variability in vascular diameter in AD compared to controls. With regards to regions of interest, both MCI and AD showed disturbances in dynamic functional connectivity (which accounts for the temporal variability of connectivity) within long-distance connections in prefrontal, parietal, and occipital cortices, and in the DMN and fronto-parietal networks (Niu et al., 2019) (Fig. 3b). This agrees with Keles et al. (2022), who identified dorsolateral PFC activity, part of the fronto-parietal network (Gratton et al., 2018), to be a crucial differentiator between AD and controls during the resting state. Another significant area of research which has been growing rapidly in popularity within the healthcare sector is the application of machine learning to neuroimaging data. Despite this, few of the included studies (13) applied machine learning to NIRS (Cicalese et al., 2020; Ho et al., 2022; Kim et al., 2021; Oyama et al., 2018; Yang et al., 2019, 2020; Yoo et al., 2020; Yang and Hong, 2021; Yoo and Hong, 2019). Furthermore, only one focused on the prediction of a continuous variable (Oyama et al., 2018) while the rest classified dementia stage or task performance. Most used simple models such as support vector machines and linear discriminant analysis, with recent studies demonstrating higher dementia diagnosis classification accuracies using more complex machine learning, or deep learning, models. With regards to how machine learning was applied to NIRS, four studies performed classification on resting state data, with two of these finding that classification of either AD or MCI from controls was more accurate using HbO compared to HbR (Yang and Hong, 2021).
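To illustrate the simple-model approach described above, the sketch below trains a linear support vector machine to separate a synthetic "patient" group with blunted task-evoked HbO responses from controls. The group sizes, channel count, and effect size are invented for illustration and are not drawn from any of the reviewed studies.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, channels = 30, 16
# Channel-wise mean task-evoked HbO change (uM); the patient group is
# simulated with a smaller (blunted) mean response.
controls = rng.normal(0.5, 0.2, (n, channels))
patients = rng.normal(0.3, 0.2, (n, channels))

X = np.vstack([controls, patients])
y = np.array([0] * n + [1] * n)          # 0 = control, 1 = patient

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(f"5-fold accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```

Cross-validated accuracy is reported rather than training accuracy, mirroring the small-sample caveat raised below: without held-out evaluation, models of this size overfit easily.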
The only study identified in this review which used broadband NIRS classified AD, MCI, and controls from their optical spectrum, finding a feature at 895 nm to be best at differentiating between AD and MCI (Greco et al., 2021). What this indicates is unclear as the biological substance contributing to this peak could not be identified by the authors. In addition, machine learning has also demonstrated promise in identifying regions of interest and functional connections with particularly high predictive accuracy. For example, Zhang et al. (2022) identified the long-range connection between the right PFC and left occipital lobe as a potential biomarker for aMCI. All of the studies so far, however, suffer from the fact that, bar four studies which focused on multi-class classification, they focused on binary classification between MCI/AD and controls. Of those that performed multi-class classification, Chiarelli et al. (2021) used estimates of neurovascular coupling strength and a multivariate linear regression approach to classify AD from controls. In agreement with Cicalese et al. (2020), classification accuracies were improved using combined EEG-NIRS features (Chiarelli et al., 2021). However, reported classification accuracies were also high using solely NIRS signals. Kim et al. (2022a) demonstrated > 90% prediction accuracies with the difference in left and right PFC signals recorded during olfactory stimulation as an input to a random forest classifier model. Most discussed studies were limited by small group sizes and group imbalance, which do not provide enough training examples for sufficiently robust models. Many also demonstrated low multi-class prediction capabilities, though with larger volumes of data, prediction and finer-scale classification tasks can be realised with higher accuracies. Simple signal feature sets are used in the majority of the discussed studies, though engineered features containing both spatial and temporal information have been shown to produce predictive models with higher accuracies (Zhang et al., 2023). Finally, none focused on interpretable machine learning, such that classification decision pathways, and the signal metrics used to make a particular diagnosis, cannot be easily evaluated by clinicians. Blunted haemodynamic responses during word retrieval were evident in both prodromal and dementia stages All of the included studies which assessed word retrieval (24, Table A2) used the verbal fluency task (VFT), or a modification of such (Fig. 3c). This is a frequently used paradigm in dementia studies, in which subjects must generate words within a category ('semantic') or beginning with a specific letter ('phonemic'). Overall, clinical groups generally performed worse than controls, as was the case for AD (Yap et al., 2008), MCI (Yap et al., 2017; Metzger et al., 2016; Yeung et al., 2016a; Nguyen et al., 2019), asymptomatic AD, and the behavioural variant of FTD (bvFTD) (Metzger et al., 2015).
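For context, the behavioural side of the VFT is typically scored by counting admissible unique words produced within a fixed window. The function below is a sketch of that scoring logic; the exclusion rules and the example trial are assumptions, not the scoring scripts of the studies reviewed here, which also commonly exclude proper nouns and morphological variants.

```python
def phonemic_fluency_score(responses, letter):
    """Score a phonemic verbal fluency trial: count unique admissible
    words beginning with the cue letter (illustrative scoring only)."""
    seen = set()
    for word in responses:
        w = word.strip().lower()
        if w and w.startswith(letter.lower()) and w not in seen:
            seen.add(w)
    return len(seen)

# Example 60 s trial for the letter "F":
print(phonemic_fluency_score(["fish", "fork", "Fish", "apple", "fog"], "F"))
# -> 3 (the duplicate "fish" and off-letter "apple" are not counted)
```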
Such reduced behavioural performance was accompanied by smaller haemodynamic responses in dementia (Takahashi et al., 2015), particularly in AD (Richter et al., 2007; Arai et al., 2006; Hock et al., 1996; Kato et al., 2017; Herrmann et al., 2008). These responses were characterised by longer latencies (Yap et al., 2017), reduced amplitudes, and smaller areas under the waveform (Kato et al., 2017). Such inadequate task-appropriate activation was echoed in prodromal stages such as MCI, which presented with hypoactivation, particularly in the right parietal area (Fig. 3d) (Arai et al., 2006), and reduced inter-hemispheric connectivity. However, upon classifying between MCI and controls using HbO, the VFT was not as stable an indicator of MCI as the n-back task (Yang et al., 2019). In support of this lack of diagnostic potential, Soo Baik et al. (2021) found no VFT-related differences between MCI and AD. Additionally, there appeared to be no clear association between the magnitude of the haemodynamic response and clinically relevant features such as MMSE score (Kato et al., 2017; Kito et al., 2014; Arai et al., 2006), or behavioural performance (Araki et al., 2014; Metzger et al., 2016; Richter et al., 2007; Hock et al., 1997). Although the magnitude of the haemodynamic response during the VFT may not be clinically useful (Takahashi et al., 2022), spatial patterns of activation may be able to differentiate between healthy ageing and dementia, as well as across MCI subtype (Yoon et al., 2019; Yeung et al., 2016a). For example, differences between AD and controls were localised to frontal and bilateral parietal regions (Hock et al., 1996), whereas differences between AD and MCI were limited to right parietal regions (Arai et al., 2006). Similarly, a loss of activation asymmetry was also evident in both dementia (Fallgatter et al., 1997; Richter et al., 2007) and MCI (Yeung et al., 2016a); however, one study found no significant lateralisation in either controls or MCI (Katzorke et al., 2018). The extent of lateralisation has been suggested as a possible biomarker for dementia as it is thought to reflect the recruitment of contralateral resources to support declining function (Yeung et al., 2016a), as is supported by the fMRI literature (Liu et al., 2018). There was evidence for both hypo- and hyperactivation during memory tasks for all clinical groups Overall, 29 studies explored memory function (Table A3), many of which used the n-back task (13), with mixed results. This task evaluates working memory (WM) and interrogates frontal regions, making it well suited to NIRS as it avoids monitoring through hair. In this task, subjects are presented with a sequence of letters and must indicate whether the presented letter is the same as the one just before (one-back) or the one before last (two-back) (Fig. 3e).
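The n-back logic just described reduces to a simple positional comparison, sketched below. The stimulus-generation helper is a hypothetical illustration of how such letter streams can be built, not the script of any reviewed study.

```python
import random
import string

def nback_targets(sequence, n):
    """Mark target positions in an n-back letter stream: position i is
    a target when sequence[i] == sequence[i - n], i.e. the letter
    matches the one presented n trials earlier."""
    return [i for i in range(n, len(sequence)) if sequence[i] == sequence[i - n]]

def make_sequence(length=20, target_rate=0.3, n=2, seed=0):
    """Generate a letter stream with roughly the requested target rate
    (chance repeats make the realised rate approximate)."""
    rng = random.Random(seed)
    seq = []
    for i in range(length):
        if i >= n and rng.random() < target_rate:
            seq.append(seq[i - n])                 # force an n-back match
        else:
            seq.append(rng.choice(string.ascii_uppercase))
    return seq

seq = make_sequence()
print("".join(seq), "targets at", nback_targets(seq, n=2))
```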
Using this task, two studies observed blunted haemodynamic responses in MCI (Yang et al., 2019; Yoo et al., 2020), with a gradation from controls, to MCI, to AD, whereas three found no difference in functional response (Yoon et al., 2019; Soo Baik et al., 2021) or connectivity between MCI and controls. Interestingly, one study identified hyperactivation in MCI compared to controls. Perhaps the discriminatory ability of the n-back task is more subtle: there is evidence for differential WM load modulation across disease stages. For example, certain studies observed differences between MCI and controls only with high WM loads (Yeung et al., 2016b) and others identified WM load modulation only in controls (Vermeij et al., 2017; Ung et al., 2020). With respect to the clinical value of the haemodynamic response during WM tasks, most studies reporting results of correlation analyses identified positive correlations between the magnitude of the HbO signal or functional connectivity metrics, and behavioural or clinical scores (Ni et al., 2021; Yeung et al., 2016b; Li et al., 2018a; Niu et al., 2013; Uemura et al., 2016; Liu et al., 2023) (Fig. 3f), such that greater oxygenation was associated with better scores. Encouragingly, haemodynamic responses during the n-back task were also validated as having strong diagnostic potential using convolutional neural networks (Yang et al., 2019). Additionally, responses to the n-back task may be sensitive markers of treatment responses, reflected by increases in oxygenation (Ni et al., 2021; Ghafoor et al., 2019; Khan et al., 2022). Such results lend support to the hypothesis of a compensatory hyperactive response in prodromal stages to compensate for declining function. However, perhaps surprisingly, two studies found improved memory performance to be associated with reductions in frontal activation in MCI with both photobiomodulation therapy (Chan et al., 2021) and VR-based training (Liao et al., 2020). Some evidence suggested hypoactivation in clinical groups during motor activity, however, the tasks used were simplistic Of the eight studies testing motor function (Table A4), six used dual-task walking (Doi et al., 2013; Teo et al., 2021; Nosaka et al., 2022; Wang et al., 2022; Talamonti et al., 2022; Takahashi et al., 2022) with wearable NIRS devices, five of which recorded exclusively from the frontal cortex. The dual-task walking paradigm involves performing a single task (e.g. walking) and a dual task (e.g. completing a cognitive task whilst walking). Findings from studies using this task suggest a non-linear relationship between dementia severity, brain oxygenation, and motor performance, unlike memory function or word retrieval (Yap et al., 2017). For example, people with memory complaints had higher activation during dual-task walking compared to controls, whereas those with dementia had higher activation compared to both controls and people with memory complaints in single-task walking, yet significantly reduced activation in dual-task walking (Teo et al., 2021). Concerning studies directly assessing motor function, none used naturalistic tasks, such as social interaction, but used simplistic motor tasks, such as hand-grip movements (Tak et al., 2011) and finger tapping (Yang et al., 2022), which revealed decreased oxygenation in AD compared to controls and MCI.
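Single- versus dual-task differences of the kind described above are often summarised as a "dual-task cost", i.e. the relative change in a measure when cognitive load is added. The formula and the example values below are illustrative assumptions; the reviewed dual-task walking studies did not necessarily report this exact metric.

```python
def dual_task_cost(single, dual):
    """Dual-task cost sketch: relative change in a measure (e.g. gait
    speed or mean frontal HbO) from single- to dual-task walking.
    Positive values mean the added cognitive load increased it."""
    return (dual - single) / single

# Example: frontal HbO rises from 0.20 to 0.26 uM under dual-task load.
print(f"dual-task cost = {dual_task_cost(0.20, 0.26):+.0%}")  # +30%
```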
More demanding visuo-spatial tasks may have revealed clearer deficits Four studies explored visuo-spatial processing (Zeller et al., 2010; Kito et al., 2014; Tomioka et al., 2009; Haberstumpf et al., 2022) (Table A5). Three of these used angle discrimination tasks, such as the Benton Line Orientation task (Fig. 3g), which requires participants to judge the orientation of a presented line (Benton et al., 1978). However, these yielded varied results (Zeller et al., 2010; Kito et al., 2014; Haberstumpf et al., 2022) (Fig. 3h), possibly due to a lack of standardised methodologies across studies. For example, Zeller et al. (2010) used a combined 'dementia' patient group. The absence of performance differences across groups (Zeller et al., 2010; Kito et al., 2014) may also suggest that more demanding visuo-spatial tasks are required to reveal differences in the NIRS data. A handful of studies used sensory stimuli and oddball tasks, with little consensus Four studies explored sensory responses using NIRS (Table A6) with music (Tanaka et al., 2012) and olfactory stimuli, the latter of which could discriminate healthy ageing from prodromal (Kim et al., 2022b) and dementia stages (Fladby et al., 2004; Kim et al., 2022a). Alternatively, eight studies employed oddball tasks. Three found no differences in the haemodynamic response (Soo Baik et al., 2021) and connectivity between MCI and controls, whereas three observed reduced frontal activation in MCI (Yang et al., 2019; Yoo et al., 2020) and AD, with one study finding greater overall HbO increases in MCI compared to controls (Zhang et al., 2023). As most of these studies used the same task design and patient groups, except for Ho et al. (2022) which used a four-minute task block, these mixed results are surprising. The research methods used were not adequate for use in dementia and prodromal populations Overall, across cognitive domains, most studies observed reduced magnitudes of the relative concentration changes of HbO and HbR across dementia groups (AD, VaD and FTD). This agrees with results from other modalities, including hypometabolism identified using PET (Costantini et al., 2008), an overall 'slowing' of neocortical EEG (Dringenberg, 2000), and hypoactivation observed with fMRI in dementia (Sperling, 2011). Of the 52 studies which tested those in prodromal stages, i.e. MCI, 32% identified no difference, 8% identified increases, and 60% found decreases in either or both HbO and HbR concentration changes. Such a lack of consensus is consistent with the contradictory reports of an early compensatory response observed across other imaging techniques (Bakker et al., 2015; Celone et al., 2006).
Given the variability of results across certain domains, such as in the resting state, and particularly with regards to prodromal stages, the research methods used across the studies were investigated to determine whether methodological inefficiency could at least partially underlie such variability. A pattern emerged: experimental designs and optical methods lacked consistency, standardisation, and adequacy, as discussed below. The optical methods did not account for likely differences in brain size and shape present at the dementia stage Firstly, no studies accounted for the changes in brain size and structure which are commonly observed in dementia and old age. As brain tissue is a highly scattering medium, near infrared light can only penetrate ~4 cm into tissue. Therefore, NIRS can only record from superficial cortical layers. However, in dementia, widespread cortical shrinkage and atrophy (Harper et al., 2017) result in an assumed increased distance of the cortex from the scalp. This in turn may lead to data being recorded solely from extracerebral tissues, as opposed to from brain tissue. The incorporation of subject-specific anatomical data, such as from structural MRIs, to perform source localisation and signal reconstruction is thus necessary to avoid apparent functional differences being caused by anatomical variability or structural degeneration. Doing so is particularly critical in late stages, when the scalp-to-cortex distance can be up to 1.7 cm (Lu et al., 2019). Several studies also did not age-match their control and patient populations (see Fig. A1) which, alongside the absence of correcting for baseline age-related vascular changes, such as via statistical modelling of the haemodynamic response, may lead to the misattribution of alterations in the temporal dynamics of the haemodynamic response to changes in neural activity. Similarly, many studies used sparse (low-density) NIRS arrays, i.e. arrays in which the sources and detectors are arranged in a grid-like pattern. Not only does this often mean that there are few or no short channels, but the light cannot penetrate as deep as in higher density systems. Higher density systems, which consist of overlapping, variable length channels, yield improved resolution and fewer positional errors (White and Culver, 2010), and may achieve better sensitivity in dementia (Srinivasan et al., 2023). High-density NIRS can also be combined with anatomical information to create detailed topographical maps of brain activity, termed High-Density Diffuse Optical Tomography (HD-DOT). Whilst no studies used HD-DOT, and only a few used high-density systems (e.g. Soo Baik et al., 2021; Yoo and Hong, 2019), Talamonti et al. (2022) used DOT and Li et al. (2019) performed source localisation; however, neither used subject-specific anatomical information to do so. Prodromal groups are highly heterogeneous, possibly underlying the diversity of results observed in this population The majority of studies focused on AD (36) and MCI (52), with few exploring less common dementia subtypes: only three in VaD, one in FTD, and none in DLB. Despite this, the nature of alterations in prodromal stages such as MCI was highly variable, particularly with regards to the presence of an early compensatory response in the form of hyperperfusion and hyperactivation (Merlo et al., 2019).
Whether these variable results reflect differences in methodology or indeed indicate true variation in the ability to recruit additional resources across subgroups or individuals is unclear. Firstly, this may simply be due to the relatively small number of studies (12) which directly compare MCI with AD. Additionally, a similar degree of variability has been observed across patterns of activation in fMRI (Yetkin et al., 2006) and across subjects in EEG (Trinh et al., 2021) in MCI, suggesting an inherent heterogeneity to this subpopulation. As such, a lack of subgrouping across studies may contribute to such variable results. This is not an easy issue to resolve: MCI is difficult to diagnose and classify into subtypes (Díaz-Mardomingo et al., 2017), as it can present considerably differently with regards to symptomatology (Lopez, 2006) and patterns of atrophy (Bell-McGinty et al., 2005). To further complicate matters, the fMRI literature suggests distinct manifestations of compensatory responses in early stages between resting state and task-related fMRI, with the sensitivity and reliability of task-related fMRI yet to be established (Young et al., 2020). The adoption of higher density, wider coverage NIRS systems, and improved region of interest selection, would also increase sensitivity (Srinivasan et al., 2023) in prodromal populations. An overall lack of standardisation in methods was evident across studies A common theme which became apparent across studies was a lack of standardisation in experimental methods. This includes data analysis, evidenced by the wide range of signal metrics and statistical methods used (see Table A6 as an example), experimental design, such as baseline and task duration, and data collection, such as probe placement. For example, almost half of the reviewed studies do not refer to motion burden, or the need to explicitly correct for motion to ensure that spikes, baseline shifts, and low-frequency drifts are not misinterpreted as physiologically relevant signals (e.g. van Beek et al., 2010; Viola et al., 2014). Moreover, even in studies which used systems containing short channels, several did not perform short-channel regression (e.g. Bu et al., 2019) to remove the influence of scalp haemodynamics. Such a lack of standardisation is also seen more widely across NIRS research, in which the myriad of adjustable parameters for processing NIRS data can lead to "misinterpretation and irreproducibility of results" (Pinti et al., 2020; Hocke et al., 2018). For instance, it remains unclear whether it is best to use either or both HbO and HbR to study brain activity (Pinti et al., 2019) or dementia (Zeller et al., 2019; Katzorke et al., 2018; Yang and Hong, 2021). Many studies in the present review only analysed the HbO signal and discarded HbR, citing HbO's higher signal-to-noise ratio and greater correlation with blood oxygen level dependent fMRI (Cui et al., 2011). Nevertheless, efforts are being made to standardise research methods for NIRS, such as adopting the SNIRF format for data storage.
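The short-channel regression step mentioned above admits a very compact implementation, sketched below in its simplest single-regressor form. Real pipelines often use GLM-based or multi-short-channel variants; the synthetic signals here are assumptions for illustration.

```python
import numpy as np

def short_channel_regression(long_ch, short_ch):
    """Remove scalp haemodynamics from a long NIRS channel by
    regressing out the paired short-separation channel, which samples
    mostly extracerebral tissue (minimal single-regressor form)."""
    beta = np.dot(short_ch, long_ch) / np.dot(short_ch, short_ch)
    return long_ch - beta * short_ch

# Synthetic example: the long channel = brain signal + scalp artefact;
# the short channel sees only the scalp artefact (plus noise).
rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.1)
scalp = np.sin(2 * np.pi * 0.1 * t)              # Mayer-wave-like artefact
brain = 0.5 * np.sin(2 * np.pi * 0.03 * t)
long_ch = brain + 0.8 * scalp + rng.normal(0, 0.05, t.size)
short_ch = scalp + rng.normal(0, 0.05, t.size)
clean = short_channel_regression(long_ch, short_ch)
print(f"correlation with artefact before: {np.corrcoef(long_ch, scalp)[0, 1]:.2f}, "
      f"after: {np.corrcoef(clean, scalp)[0, 1]:.2f}")
```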
Further future directions Aside from the methodological issues detailed above, there are several avenues of research which remain to be explored. For example, all included studies used continuous-wave NIRS systems bar Oyama et al. (2018), which used a time-resolved system, and Chiarelli et al. (2021), which used a frequency-domain system. In addition, perhaps surprisingly, given NIRS's ability to be relatively easily integrated with other imaging modalities, few studies did so: one PET, three EEG, and one fMRI. Only a single longitudinal study explored how brain oxygenation changes with disease progression (Talamonti et al., 2022); exploring even earlier stages, such as Apolipoprotein E ε4 carriers (Katzorke et al., 2017), is necessary for the assessment of the clinical value of NIRS. Most studies also only recorded from pre-specified regions of interest, limiting functional connectivity analyses. This is particularly the case in studies measuring task-related activation, which predominantly recorded exclusively from frontal regions, even during tasks with motor components (e.g. Takahashi et al., 2022), and despite the established posterior degeneration in AD and DLB (O'Donovan et al., 2013). Similarly, few studies used NIRS to explore motor symptoms (Table A4). This is surprising given NIRS's low sensitivity to movement and lack of physical restrictions, as well as the characteristic motor symptoms of certain dementia subtypes, such as DLB (Emre, 2003), which cannot be easily explored using techniques like MRI. The emergence of wearable NIRS is relatively recent, though, possibly explaining the lack of naturalistic task designs. Finally, using broadband NIRS to quantify intracellular neurometabolism (Bale et al., 2016) would be invaluable to investigate neurovascular decoupling in dementia. Conclusion Broadly, the previous literature identified differences between dementia, prodromal stages, and healthy ageing. This is evidenced by cortical disorganisation, involving the DMN and fronto-parietal networks (e.g. Niu et al., 2019), and hypoactivation (e.g. Niu et al., 2013; Li et al., 2018b): a generally suppressed haemodynamic response across cognitive domains at the dementia stage. In prodromal stages, several studies found hypoactivation (Yoon et al., 2019; Arai et al., 2006), whereas others identified a possible compensatory response in the form of hyperactivation (Yap et al., 2017). Alongside the blunted haemodynamic response in dementia, these findings partially agree with the hypothesis of a 'break point' in prodromal stages (Dounavi et al., 2021). This review highlights the necessity for standardised protocols for both experimental designs, e.g. ecologically valid designs, and analysis methods, e.g. subject-specific information for source localisation, for more holistic and generalisable outcomes. To conclude, NIRS has strong potential for clinical translation and integration into care pathways; however, several methodological issues must be resolved before this is possible. Declaration of Competing Interest The authors have no conflicting interests to declare. Data availability No data was used for the research described in the article.
2023-06-24T13:16:53.161Z
2023-06-23T00:00:00.000
{ "year": 2023, "sha1": "43743442c026222f4ff4b7512abf80de48984f34", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.arr.2023.101992", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "b9bd4f669b75b2ae8e04bd5c44ec3f301b34c396", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
268231677
pes2o/s2orc
v3-fos-license
Efficacy of doxycycline therapy for macrolide-resistant Mycoplasma pneumoniae pneumonia in children at different periods Background The prevalence of macrolide-resistant Mycoplasma pneumoniae has increased considerably. Treatment in children has become challenging. This study aimed to evaluate the efficacy of doxycycline therapy for macrolide-resistant Mycoplasma pneumoniae pneumonia in children at different periods. Methods We retrospectively analyzed the data of patients with macrolide-resistant Mycoplasma pneumoniae pneumonia hospitalized between May 2019 and August 2022. According to treatment, patients were divided into three groups: oral doxycycline treatment alone (DOX group), changed from intravenous azithromycin to oral doxycycline (ATD group), and intravenous azithromycin treatment alone (AZI group). ATD group cases were separated into two sub-groups: intravenous azithromycin treatment < 3 days (ATD1 group) and ≥ 3 days (ATD2 group). Clinical symptoms were compared in each group and adjusted by propensity score matching (PSM) analysis. Results A total of 106 patients were recruited in this study: 17 (16%) in the DOX group, 58 (55%) in the ATD group, and 31 (29%) in the AZI group. Compared with the ATD group and the AZI group, the DOX group showed shorter hospitalization duration and fever duration after treatment, as well as a higher rate of chest radiographic improvement. After PSM analysis, shorter hospitalization duration (P = 0.037) and fever duration after treatment (P = 0.027) were observed in the DOX + ATD1 group than in the ATD2 group. A higher number of patients in the DOX + ATD1 group achieved defervescence within 72 h (P = 0.031), and fewer children received glucocorticoid adjuvant therapy (P = 0.002). No adverse reactions associated with doxycycline were observed during treatment. Conclusions Among children with macrolide-resistant Mycoplasma pneumoniae pneumonia, those receiving early oral doxycycline had shorter durations of fever and hospitalization. Background Mycoplasma pneumoniae (M. pneumoniae) is one of the most common causes of upper and lower respiratory tract infections, particularly in children and young adults. The majority of M. pneumoniae pneumonia cases are benign and self-limiting. However, some patients may develop severe M. pneumoniae pneumonia or refractory M. pneumoniae pneumonia, causing progressive pneumonia or various extrapulmonary complications [1]. These cases may be related to the occurrence of macrolide-resistant (MR) M. pneumoniae [2,3]. This resistance is associated with point mutations in the V region of the 23S rRNA gene and leads to high-level resistance to macrolides [4]. Therefore, the efficacy of macrolide treatment was shown to be lower in patients infected with macrolide-resistant isolates than in patients infected with macrolide-sensitive isolates [5,6]. In recent years, the global proportion of MR M. pneumoniae infections has shown an increasing trend, and the proportion of MR M. pneumoniae infections is highest in China [3,7]. Treatment has become challenging with the increase in MR M. pneumoniae. Because this resistance may lead to more extrapulmonary complications and severe clinical features [2], alternative antibiotic treatment can be required, including tetracyclines or fluoroquinolones. To date, no tetracycline resistance has been reported in M. pneumoniae clinical isolates. In vitro antimicrobial susceptibility testing showed that M. pneumoniae in all cases was sensitive to tetracyclines, including doxycycline and minocycline [8].
MR M. pneumoniae pneumonia is characterized by an excessive immune response against the pathogen as well as direct injury caused by an increasing M. pneumoniae load [9]. One study indicates that children with higher M. pneumoniae abundance in the bronchoalveolar lavage fluid tend to have a longer hospital stay and a higher fever peak [10]. This suggests that the M. pneumoniae load is associated with clinical severity [11]. Doxycycline, as an alternative drug for treating MR M. pneumoniae, can inhibit the replication of M. pneumoniae DNA and reduce the pulmonary pathogen load [12]. However, the exact timing of doxycycline treatment has not been established. In this study, we aimed to evaluate the efficacy of doxycycline therapy for macrolide-resistant Mycoplasma pneumoniae pneumonia in children at different periods. Study subjects We retrospectively reviewed the medical records of children without prior underlying diseases with community-acquired pneumonia hospitalized at the Ningbo Medical Center Lihuili Hospital between May 2019 and August 2022. All the evaluated patients had signs and symptoms indicative of pneumonia, such as fever, cough, and abnormal chest radiographic findings compatible with pneumonia [13]. M. pneumoniae infection was determined by polymerase chain reaction testing of nasopharyngeal aspirates obtained from the patients on admission. Samples positive for M. pneumoniae were subjected to direct DNA sequencing of domain V of the 23S rRNA gene to identify the A2063G or A2064G mutation sites using a real-time fluorescent PCR assay kit (Jiangsu Mole Bioscience Co., Ltd, China) in accordance with the manufacturer's instructions. It took about two days to determine whether a resistance mutation was present. Exclusion criteria were as follows: (1) other pathogens were found before treatment, including bacteria, respiratory syncytial virus, influenza viruses, parainfluenza viruses, coronaviruses, human rhinoviruses, adenoviruses, human metapneumovirus and Chlamydia pneumoniae; (2) patients who had been started on macrolide or doxycycline treatment prior to admission; (3) children with chronic diseases (such as tuberculosis, asthma and immunodeficiency) predisposing them to recurrent lung infections; (4) discharge within 48 h after enrollment and insufficient data; (5) children younger than 8 years, because they could not be treated with doxycycline. Methods In our study, all patients were treated with intravenous azithromycin or oral doxycycline. The dosage of intravenous azithromycin was 10 mg/kg once daily, and oral doxycycline was administered once every 12 h at a dose of 2.2 mg/kg, in accordance with the package insert accompanying each drug [13]. The primary antibiotic selection was made by the attending pediatrician. Oral doxycycline was used if the patient had a history of exposure to MR M. pneumoniae. The preferred treatment for M. pneumoniae pneumonia is macrolide antibiotics [13]. Therefore, before the results of pathogen and M. pneumoniae mutation-site testing were available, intravenous azithromycin was chosen as the primary antibiotic for patients with suspected M. pneumoniae pneumonia.
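For illustration, the weight-based regimens stated above reduce to a simple per-dose calculation, sketched below. This is a hypothetical helper for arithmetic only, not clinical guidance, and rounding conventions are an assumption.

```python
def weight_based_dose(weight_kg):
    """Per-dose amounts from the regimens described above (illustrative
    arithmetic only): azithromycin 10 mg/kg IV once daily, doxycycline
    2.2 mg/kg orally every 12 h."""
    return {
        "azithromycin_mg_per_day": round(10 * weight_kg, 1),
        "doxycycline_mg_per_dose_q12h": round(2.2 * weight_kg, 1),
    }

# Example for a 30 kg child (children under 8 years were excluded,
# so weights in this cohort are school-age and above).
print(weight_based_dose(30.0))
```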
After obtaining evidence of MR M. pneumoniae infection, some patients were changed from intravenous azithromycin to oral doxycycline, while others continued treatment with intravenous azithromycin because they could not tolerate swallowing capsules. According to treatment, these patients were divided into three groups: oral doxycycline treatment alone (DOX group), changed from intravenous azithromycin to oral doxycycline (ATD group) and intravenous azithromycin treatment alone (AZI group). ATD group cases were separated into two sub-groups: intravenous azithromycin treatment < 3 days (ATD1 group) and intravenous azithromycin treatment ≥ 3 days (ATD2 group).

Data collection

Clinical information was retrospectively collected from the medical records of the patients. The collected data included demographics, hospitalization period, duration of fever (febrile days before macrolide or doxycycline treatment, febrile days after treatment, time to defervescence), laboratory results upon admission, chest radiographic findings and adverse reactions during treatment. The duration of fever was defined as the number of days for which the patient had a body temperature of ≥ 38℃ with an interval of < 24 h between each episode of fever. Defervescence was defined as a decline in body temperature to < 37.5℃ for > 48 h. All patients underwent chest radiographic examination before admission, and a second chest radiographic examination was performed 7 to 10 days after treatment. The chest radiographic findings were taken from the records read by two radiologists and classified according to the presence of lobar consolidation, patchy infiltration and effusion. If the patient's X-rays showed a reduction of more than 30% in the consolidation and infiltration area compared to before treatment, we considered the patient a consolidation and/or infiltration absorption case.

Statistical analysis

SPSS 25.0 statistical software was used for propensity score matching (PSM) and analysis. The data were expressed as median (IQR) for continuous variables or as the number of cases (percentage) of a specific group for categorical variables. The Kruskal-Wallis test was used for continuous variables. If the variables were statistically significant when compared among more than two groups, they were further analyzed by the Mann-Whitney U test for comparing two groups. Pearson's Chi-squared or Fisher's exact test were used for categorical variables. To reduce the effect of possible selection bias, patients in the DOX + ATD1 group were matched 1:1 with those in the ATD2 group by PSM with a caliper value of 0.02. Matching factors included age, gender, fever duration prior to treatment, and chest radiographic findings before admission. A two-sided p-value < 0.05 was considered statistically significant.

Demographic and clinical characteristics

A total of 5589 hospitalized patients were tested for M. pneumoniae by PCR of nasopharyngeal aspirates between May 2019 and August 2022 due to clinically suspected M. pneumoniae infection. Among the cases tested, 10% (533/5589) were M. pneumoniae PCR positive, and 72% (384/533) showed point mutations in domain V of 23S rRNA. The prevalence of MR M. pneumoniae infection peaked in the summer and autumn of 2019 during the study period.
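As a rough illustration of the 1:1 propensity score matching described above, the sketch below estimates propensity scores with a logistic regression on the stated matching factors and performs greedy nearest-neighbour matching with a 0.02 caliper. Column names are assumed for illustration; the study itself used SPSS 25.0, so this is a re-expression of the idea, not the authors' procedure.

```python
# Minimal sketch of 1:1 propensity score matching with a 0.02 caliper.
# Column names are assumed; categorical covariates should be encoded numerically.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_1to1(df: pd.DataFrame, treat_col: str, covariates: list, caliper: float = 0.02):
    # Estimate propensity scores: P(treated | covariates).
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[treat_col])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])

    treated = df[df[treat_col] == 1].sort_values("ps")
    control = df[df[treat_col] == 0].copy()

    pairs = []
    for idx, row in treated.iterrows():
        # Greedy nearest neighbour within the caliper, without replacement.
        dist = (control["ps"] - row["ps"]).abs()
        if len(dist) and dist.min() <= caliper:
            j = dist.idxmin()
            pairs.append((idx, j))
            control = control.drop(j)
    return pairs

# Hypothetical usage with assumed columns:
# df["treat"] = (df["group"] == "DOX+ATD1").astype(int)
# pairs = match_1to1(df, "treat", ["age", "sex", "fever_days_pre", "cxr_abnormal"])
```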
Of the patients with MR M. pneumoniae pneumonia, we excluded 278 patients following the exclusion criteria. Consequently, a total of 106 patients were recruited in this study: 17 (16%) in the DOX group, 58 (55%) in the ATD group, and 31 (29%) in the AZI group (Fig. 1).

Comparisons of clinical courses after therapy

The efficacy of treatment in each group was compared (Table 2). Hospitalization duration in the DOX group was shorter than that in the ATD group (6 days vs. 8 days, P = 0.003) and the AZI group (6 days vs. 8 days, P < 0.001) (Fig. 2). The median fever duration after treatment in the DOX group was shorter than that in the other two groups (2 days vs. 4 days, P = 0.022 and 2 days vs. 3 days, P = 0.044, respectively) (Fig. 2). Hospitalization duration and fever duration after treatment were not significantly different between the ATD group and the AZI group (P = 0.867 and P = 0.990, respectively) (Fig. 2). The numbers of patients who achieved defervescence within 48 h and chest radiographic improvement after one week of treatment were higher in the DOX group than in the ATD group (P = 0.045 and P = 0.021, respectively) and the AZI group (P < 0.001 and P = 0.003, respectively). The number of patients using glucocorticoid adjuvant therapy in the DOX group was lower than that in the AZI group (P = 0.015). These indicators showed no statistically significant differences between the ATD group and the AZI group (P > 0.05). Three patients in the ATD group and one patient in the AZI group received oxygen by nasal cannula; none in the DOX group did. No patients were transferred to the intensive care unit or received mechanical ventilation during hospitalization. During intravenous azithromycin treatment, 10 patients had abdominal pain, 3 had vomiting, and 3 had rash. All patients treated with doxycycline responded well. No adverse reactions associated with oral doxycycline were observed during treatment.

Comparison of doxycycline treatment in different periods

As shown in Table 3, the response to azithromycin and doxycycline among children with MR M. pneumoniae pneumonia at different time points was analyzed. The median hospitalization duration and fever duration after treatment were shorter in the DOX group than in the ATD2 group (P = 0.001 and P = 0.012, respectively) (Fig. 3). There was no difference in hospitalization duration and fever duration after treatment in the ATD1 group compared with the DOX group (P = 0.088 and P = 0.860, respectively) and the ATD2 group (P = 0.990 and P = 0.314, respectively). The number of patients who achieved defervescence within 72 h was higher in the DOX group than in the ATD2 group (P = 0.039), and was not significantly different between the ATD1 and ATD2 groups (P = 0.741). The number of patients who achieved defervescence within 96 h was higher in the DOX and ATD1 groups than in the ATD2 group (P = 0.024 and P = 0.006, respectively), and was not significantly different between the DOX and ATD1 groups (P = 0.990). After one week of treatment, the number of patients who achieved chest radiographic improvement in the DOX and ATD1 groups was higher than in the ATD2 group (P = 0.003 and P = 0.045, respectively). The ATD2 group had the highest number of patients using glucocorticoid adjuvant therapy, followed by the other two groups (P = 0.007).
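The group comparisons reported above follow the analysis plan from the Methods: a Kruskal-Wallis test across the three groups, pairwise Mann-Whitney U follow-ups, and Fisher's exact test for count data. A small sketch with fabricated values illustrates that pipeline; none of the numbers below are the study's data.

```python
# Sketch of the comparison pipeline: Kruskal-Wallis across three groups,
# pairwise Mann-Whitney U follow-ups, Fisher's exact test for a 2x2 table.
# All example values are fabricated purely to show the calls.
from scipy import stats

dox = [5, 6, 6, 7, 6]    # e.g., hospitalization days, DOX group (made up)
atd = [7, 8, 8, 9, 10]   # ATD group (made up)
azi = [8, 8, 9, 7, 10]   # AZI group (made up)

h, p_overall = stats.kruskal(dox, atd, azi)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_overall:.3f}")

if p_overall < 0.05:
    for name, grp in [("ATD", atd), ("AZI", azi)]:
        u, p = stats.mannwhitneyu(dox, grp, alternative="two-sided")
        print(f"DOX vs {name}: U = {u:.1f}, p = {p:.3f}")

# 2x2 example: defervescence within 48 h (yes/no) by group (made-up counts).
odds, p_fisher = stats.fisher_exact([[12, 5], [20, 38]])
print(f"Fisher's exact: p = {p_fisher:.3f}")
```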
The efficacy of early oral doxycycline using PS matched analysis

The efficacy of early oral doxycycline was compared in the matched analysis between the DOX + ATD1 group and the ATD2 group (Table 4). There was no significant difference in baseline characteristics between the two groups. The hospitalization duration and fever duration after treatment were shorter in the DOX + ATD1 group than in the ATD2 group (P = 0.037 and P = 0.027, respectively). The number of patients achieving defervescence within 72 h was higher in the DOX + ATD1 group (P = 0.031). More children in the ATD2 group were treated with glucocorticoid adjuvant therapy (P = 0.002).

Discussion

The incidence of MR M. pneumoniae has recently increased and has been related to life-threatening or refractory M. pneumoniae pneumonia in children [14]. The emergence of macrolide resistance has been reported mainly in Asia [7,15], and the prevalence of MR M. pneumoniae isolated from pediatric patients has increased annually in China [16]: 88.19% in 2016, 90.93% in 2017, 90.56% in 2018 and 92.90% in 2019. In this study, the total number of M. pneumoniae pneumonia cases was 533 between May 2019 and August 2022, of which 72% (384/533) were MR M. pneumoniae pneumonia. The prevalence of MR M. pneumoniae was similar to the reported data, and peaked in 2019 during the study period. There was an uneven distribution of cases across 2019-2022, which was related to the COVID-19 pandemic. Since January 2020, in response to various public health policies to control the spread of COVID-19, there was a substantial decrease in respiratory infections in China, and many resources were focused on the diagnosis and management of COVID-19. During the COVID-19 pandemic, our study also showed an increase in MR M. pneumoniae infection rates in 2022. The increasing prevalence of MR M. pneumoniae has become a significant clinical issue in pediatric patients, and treatment of MR M. pneumoniae pneumonia in children has become challenging.

M. pneumoniae lacks a cell wall and is consequently resistant to beta-lactams and to all antimicrobials targeting the cell wall [17]. This mycoplasma is intrinsically susceptible to antibiotics that act on the bacterial ribosome and inhibit protein synthesis, such as macrolides or tetracyclines, or agents that inhibit DNA replication, such as fluoroquinolones [18,19]. MR M. pneumoniae is caused by mutations in domain V of the 23S rRNA gene that interfere with the binding of macrolides to rRNA [15]. The A-to-G transition mutation at position 2063 of the 23S rRNA gene is the most prevalent in MR M. pneumoniae isolates, closely followed by the A2064G mutation [2,3]. Both mutations can cause high-level resistance to erythromycin and azithromycin in M. pneumoniae [20]. This suggests that macrolides may have limited effects on MR M. pneumoniae infection. Therefore, in cases of MR M. pneumoniae strains, alternative antibiotic treatment can be required, including tetracyclines such as doxycycline and minocycline [21,22]. To date, no tetracycline resistance has been reported in M. pneumoniae clinical isolates. Doxycycline has good activity against both macrolide-susceptible and macrolide-resistant strains [22,23].
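Since the resistance logic above reduces to checking for A-to-G substitutions at positions 2063/2064 of the 23S rRNA gene, a toy sketch of that check follows. The sequence handling is a placeholder: real detection, as in the fluorescent PCR assay used in this study, relies on probes rather than plain string lookups.

```python
# Toy illustration of the resistance logic: A2063G / A2064G in domain V of 23S rRNA.
# Positions are 1-based on a hypothetical reference-aligned sequence; real assays
# use probe-based PCR, not string indexing.

MACROLIDE_RESISTANCE_SITES = (2063, 2064)  # A -> G confers high-level resistance

def macrolide_resistant(aligned_seq: str) -> bool:
    """Return True if either key position carries a G instead of the wild-type A."""
    return any(aligned_seq[pos - 1] == "G" for pos in MACROLIDE_RESISTANCE_SITES)

# Hypothetical usage with a reference-aligned sequence string `seq`:
# if macrolide_resistant(seq):
#     print("MR M. pneumoniae: consider a tetracycline in children >= 8 years")
```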
As expected, our study found doxycycline regimens to be more effective than macrolide regimens in patients infected by MR M. pneumoniae: the duration of fever and hospitalization was significantly longer in patients on macrolide regimens. Compared to intravenous azithromycin treatment, oral doxycycline is also more acceptable to children. Therefore, oral doxycycline is likely to be a better treatment than macrolides for MR M. pneumoniae infections in children above the age of 8 years.

The occurrence of MR M. pneumoniae infections was likely to lead to treatment failure, which translates into a longer duration of therapy, persistent cough and increased time to resolution of fever compared with treatment-susceptible infection, both in children and in adults [6,24]. For the treatment of MR M. pneumoniae pneumonia presenting with clinical and radiological deterioration, adjunctive systemic corticosteroids are sometimes used [25]. However, using large doses of corticosteroids too early could suppress the phagocytic function of alveolar macrophages and neutrophils and decrease lymphocyte mobilization [26]. In addition, corticosteroids did not significantly decrease the DNA load of M. pneumoniae in bronchoalveolar lavage fluid [27]. Therefore, untimely additional corticosteroid therapy may increase the risk of mixed infection and may contribute to aggravation of the condition [26]. Tetracyclines can inhibit peptide chain elongation in protein synthesis by acting on the 30S subunit of M. pneumoniae ribosomes. Estimated M. pneumoniae amounts clearly decreased after 3 days, from 10^6 copies/mL to 5 × 10^2 copies/mL, in those receiving doxycycline [12]. Tetracyclines are generally well tolerated; common adverse reactions observed in patients receiving these agents include anorexia, nausea, vomiting, diarrhea, rash, photosensitivity and tooth discoloration [18,28]. The most concerning side effect is permanent tooth discoloration: the affinity of tetracyclines for mineralizing tissue leads to their incorporation into calcifying tissues [29]. However, due to the low affinity of doxycycline for calcium [30], there is no or only negligible tooth staining, even in young children aged 2-8 years [31,32]. Factors related to tooth discoloration are dosage, duration of treatment, stage of tooth mineralization, and activity of the mineralization process [33]. In our study, oral doxycycline treatment lengths usually ranged between 7 and 10 days. No adverse reactions associated with doxycycline were observed during treatment. Further studies are needed to evaluate the adverse effects of doxycycline.

This study has some limitations. The first limitation was its retrospective design, which had the potential to introduce recall bias and lead to missing data, most notably for assessment of disease severity. Secondly, due to the limited duration of this study, we enrolled all eligible children rather than calculating a sample size for the study period, which might lead to selection bias. Finally, all research subjects came from one center with a limited sample size. Although the PSM method can deal with the issue of selection bias, the small sample size after matching may lead to a less objective and complete display of data features. In the future, it is necessary to carry out prospective randomized studies, or studies involving more subjects through multicenter designs.

Conclusions

Compared with intravenous azithromycin, cases of MR M. pneumoniae pneumonia showed a significantly shorter duration of fever and hospitalization with oral doxycycline, and more rapid improvement of radiologic findings.
Most MR M. pneumoniae pneumonia patients achieved rapid defervescence with oral doxycycline or after a treatment change to oral doxycycline. Pediatricians should improve the early recognition of MR M. pneumoniae pneumonia, which is important for early conversion to doxycycline therapy. Furthermore, a large-scale prospective study is needed to guide appropriate treatment in children with MR M. pneumoniae.

Fig. 1 Flow chart for the inclusion and classification of the study subjects.

Table 1 Characteristics of patients with macrolide-resistant MP pneumonia in each treatment group. *Comparison between three groups.

Table 2 Comparisons of clinical courses after therapy in macrolide-resistant Mycoplasma pneumoniae pneumonia. *Comparison between three groups. That indicates doxycycline could decrease the DNA load of M. pneumoniae.
Comparative study of phonon spectrum and thermal expansion of graphene, silicene, germanene and blue phosphorene

Based on first-principles calculations using density functional theory, we study the vibrational properties and thermal expansion of mono-atomic two-dimensional honeycomb lattices: graphene, silicene, germanene and blue phosphorene. We focus on the similarities and differences of their properties, and try to understand them from their lattice structures. We illustrate that, from graphene to blue phosphorene, a phonon bandgap develops due to large buckling-induced mixing of the in-plane and out-of-plane phonon modes. This mixing also influences their thermal properties. Using the quasi-harmonic approximation, we find that all of them show negative thermal expansion at room temperature.

We perform Density Functional Theory (DFT) based calculations of the phonon spectrum at different lattice constants. Based on these calculations, using the quasi-harmonic approximation (QHA), we obtain the Grüneisen parameters, thermal expansion coefficients and other thermodynamic properties of these 2D materials. We find that at room temperature, the thermal expansion coefficients of all these 2D materials are negative. It has already been experimentally demonstrated that the interaction between graphene and the substrate can be tuned by utilizing their different thermal expansion coefficients. We anticipate that a similar effect is possible for other 2D materials.

II. DFT CALCULATION

Our DFT calculations are performed using the Vienna ab initio Simulation Package (VASP) 34,35. It is based on the projector augmented wave (PAW) method and a plane wave basis set. The Perdew-Burke-Ernzerhof (PBE) version of the generalized gradient approximation (GGA) is used 36. We note that the main conclusions of this work do not depend on the exchange-correlation functionals used. The energy cutoffs for graphene, silicene, germanene, and blue phosphorene are 750, 500, 400, and 400 eV, respectively. For the structural relaxations, the Brillouin zone is sampled using the Γ-centered scheme with at least 11×11×1 k points. For the vibrational and thermal properties, we need a large unit cell to treat the long-range interaction, which is important for the long wavelength, low frequency phonons near Γ, and a dense k-point sampling for the high frequency optical phonons. In this work, we have used a supercell of at least 7 × 7 and a k-point sampling of 4 × 4 × 1. The mechanical and thermal properties are obtained using the Phonopy-QHA script 37,38. Firstly, a series of phonon spectra using different lattice constants are calculated. For each lattice constant $a$, the free energy is obtained from

$$F(a;T) = E(a) + \frac{1}{2}\sum_{\mathbf{q},j}\hbar\omega_{a;\mathbf{q},j} + \frac{1}{\beta}\sum_{\mathbf{q},j}\ln\left(1 - e^{-\beta\hbar\omega_{a;\mathbf{q},j}}\right). \tag{1}$$

Here, $E(a)$ is the ground state energy, $\omega_{a;\mathbf{q},j}$ is the vibrational frequency corresponding to wavevector $\mathbf{q}$ and mode $j$, $\hbar$ is the reduced Planck constant, and $\beta = (k_B T)^{-1}$, with $k_B$ the Boltzmann constant and $T$ the temperature. A third-order Birch-Murnaghan equation of state is then used to fit the data points, and the equilibrium lattice constants at different $T$ are obtained. The thermal expansion coefficient is defined as

$$\alpha(T) = \frac{1}{a_0}\frac{da(T)}{dT}. \tag{2}$$

$\alpha(T)$ can also be obtained from the mode-dependent Grüneisen parameters

$$\gamma_{\mathbf{q},j} = -\frac{a_0}{\omega_{0;\mathbf{q},j}}\left.\frac{\partial\omega_{\mathbf{q},j}}{\partial a}\right|_{a_0} \tag{3}$$

as

$$\alpha(T) = \frac{1}{4\,B\,V_0}\sum_{\mathbf{q},j}\gamma_{\mathbf{q},j}\,c_v(\mathbf{q},j;T). \tag{4}$$

Here, $c_v$ is the mode heat capacity at constant volume, $B = -V\,\partial P/\partial V$ is the bulk modulus, $a_0$ is the equilibrium lattice constant, $V_0$ is the equilibrium unit cell volume, and $\omega_{0;\mathbf{q},j}$ is the corresponding vibrational frequency.
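To make the workflow of Eqs. (1)-(2) concrete, here is a minimal numerical sketch: given mode energies for a few lattice constants, it builds F(a;T), locates the minimizing a(T), and differentiates to get α(T). All inputs are toy placeholders, not our DFT data, and a quadratic fit stands in for the third-order Birch-Murnaghan fit used in the actual calculation.

```python
# Minimal QHA sketch following Eqs. (1)-(2): build F(a;T) from mode energies,
# locate the equilibrium lattice constant at each T, then alpha = (1/a0) da/dT.
import numpy as np

kB = 8.617333e-5  # Boltzmann constant in eV/K

def free_energy(E0, mode_energies_eV, T):
    """Eq. (1): static energy + zero-point energy + phonon free energy."""
    zpe = 0.5 * mode_energies_eV.sum()
    x = mode_energies_eV / (kB * T)
    return E0 + zpe + kB * T * np.log1p(-np.exp(-x)).sum()

a_grid = np.array([0.995, 1.000, 1.005])  # lattice constants in units of a0 (+-0.5%)
E_static = np.array([2e-3, 0.0, 2e-3])    # eV, toy static energies
base = np.full(300, 0.030)                # eV, toy mode energies on a coarse q-grid
omega = [base * f for f in (0.98, 1.00, 1.02)]  # toy: modes harden upon expansion (ZA-like)

temps = np.linspace(10.0, 600.0, 60)
a_eq = []
for T in temps:
    F = [free_energy(E_static[i], omega[i], T) for i in range(len(a_grid))]
    c2, c1, _ = np.polyfit(a_grid, F, 2)  # parabola through the three points
    a_eq.append(-c1 / (2.0 * c2))         # minimum of the parabola
a_eq = np.array(a_eq)

alpha = np.gradient(a_eq, temps) / a_eq[0]  # Eq. (2); negative here by construction
```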
Note that, in our calculation, we fixed the length of the unit cell in the direction perpendicular to the 2D plane; thus, we have 4 instead of 9 in Eq. (4). For 2D materials, the ZA mode is very soft, and a slight reduction of the lattice constant may result in negative phonon frequencies near the Γ point. This means the applied strain should be small enough; otherwise, the QHA is no longer valid. One important difference between our calculations and previous ones is that we have used a smaller strain of ±0.5%. Indeed, due to this difference, our results for blue phosphorene are quite different from those of Ref. 25. We have compared results using different strains to show how sensitively the thermal expansion depends on the applied strain. To validate our results, we also calculated the thermal expansion coefficient using the Grüneisen theory from the data points at a strain of ±0.2%, with 300 × 300 k-point sampling. This means that we ignore the contribution of phonon modes with wavelengths larger than ∼ 0.07 µm. This cutoff is reasonable since in 2D materials ripples of similar size form and break the long-range order.

III. RESULTS

The calculated phonon dispersion relations along high symmetry lines within the Brillouin zone are shown in Fig. 2, together with the phonon density of states (DOS). The dispersion lines are similar due to the similar honeycomb lattice structures. Graphene has a mirror symmetry about the atomic plane, such that the atomic motions along the Z direction are decoupled from those in the X-Y plane in the harmonic approximation. The acoustic and optical modes along the Z direction (ZA (red) and ZO (purple)) do not couple with other phonon modes, resulting in crossings of dispersion lines in graphene. For silicene, germanene and blue phosphorene, the slight buckling of the atoms in the Z direction breaks the mirror symmetry, leading to hybridization of ZA and ZO with other modes. The crossings turn into avoided crossings. The hybridization becomes stronger for larger buckling. This results in (1) the development of phonon bandgaps and (2) the reduction of phonon group velocities. Both effectively reduce the phonon thermal conductivity. Interestingly, the large buckling in blue phosphorene results in a larger Γ-point ZO frequency than that of the degenerate TO and LO modes. This does not happen in silicene and germanene. The buckling of the atomic structure does not change the 3-fold rotational symmetry of the lattice. Due to this rotational symmetry, two degenerate points show up at the K point in the dispersion relations of all the materials considered. The quadratic dispersion of the ZA mode near the Γ point in graphene is protected by the mirror and rotational symmetries around Z. The quadratic dispersion leads to a non-zero DOS at ω = 0. But for the other materials, the slight breaking of mirror symmetry due to buckling introduces a small linear component into the quadratic dispersion.

To study the thermodynamic properties within the QHA, we performed a series of calculations changing the lattice constant within the range of ±0.5%. The energy-lattice-constant relationship is plotted in Fig. 3. We can see that the stiffness goes down from graphene to silicene, germanene and blue phosphorene; correspondingly, the calculated bulk modulus follows the same trend. Using the phonon dispersions at a = 0.998a0, a0 and 1.002a0, we calculated the mode Grüneisen parameters shown in Fig. 4. In Table I, we compare our results with previous works, especially with those of Ref. 24. They show reasonable agreement.
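The Grüneisen route used above for validation can also be sketched numerically: mode Grüneisen parameters from a central finite difference at ±0.2% strain as in Eq. (3), per-mode heat capacities from the standard Einstein expression c_v = k_B x² eˣ/(eˣ−1)² with x = ħω/k_BT, and α(T) from Eq. (4). The frequencies, bulk modulus and cell volume below are placeholders, not our calculated values.

```python
# Sketch of the Grueneisen-theory route: central-difference mode Grueneisen
# parameters from spectra at +-0.2% strain and alpha(T) from Eq. (4). A real
# calculation sums over a dense q-grid (here 300 x 300) with proper weights.
import numpy as np

kB = 1.380649e-23        # J/K
hbar = 1.054571817e-34   # J*s

def mode_cv(omega, T):
    """Per-mode heat capacity c_v = kB x^2 e^x / (e^x - 1)^2, x = hbar*omega/(kB*T)."""
    x = hbar * omega / (kB * T)
    ex = np.exp(np.minimum(x, 50.0))   # cap the exponent to avoid overflow
    return kB * x**2 * ex / (ex - 1.0)**2

def alpha_grueneisen(omega0, omega_minus, omega_plus, strain, B, V0, T):
    """Eq. (4): alpha = (1/(4 B V0)) sum_j gamma_j c_v_j, with
    gamma_j = -(a0/omega0_j) d(omega_j)/da from a central finite difference."""
    gamma = -(omega_plus - omega_minus) / (2.0 * strain * omega0)
    return (gamma * mode_cv(omega0, T)).sum() / (4.0 * B * V0)

# Toy spectrum: 1000 modes in rad/s; bulk modulus in Pa; cell volume in m^3.
omega0 = np.linspace(1e12, 3e14, 1000)
omega_p = omega0 * (1 + 2e-4)   # at a = 1.002 a0 (toy: uniform hardening, ZA-like)
omega_m = omega0 * (1 - 2e-4)   # at a = 0.998 a0
print(alpha_grueneisen(omega0, omega_m, omega_p, 2e-3, B=2e11, V0=1e-29, T=300.0))
```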
This comparison validates the calculation procedure we used here for the other materials. We can try to understand the results starting from graphene. As has already been shown by many previous works 24,25,27-29, the graphene ZA and ZO modes have negative Grüneisen parameters, as explained by Lifshitz 39. This abnormal hardening of phonon modes upon expansion is a general feature of the 2D out-of-plane modes, and the reason why graphene shows negative thermal expansion. All other modes, with in-plane motion, have normal, positive Grüneisen parameters. As we have mentioned, there is no coupling between the ZA and ZO modes and the modes in the X-Y plane; there is a clear distinction between these modes in the calculated Grüneisen parameters. For silicene, germanene, and blue phosphorene, due to the buckling, atomic motions in the Z and X-Y directions mix. Away from the Γ point, there are more modes with negative Grüneisen parameters. But independent of the element, the TO and LO modes have Grüneisen parameters around 2. Finally, one notices that due to the large buckling in blue phosphorene, the ZO mode at the Γ point has a larger frequency than the LO and TO modes, and a positive Grüneisen parameter, contrary to the other three materials. This shows a gradual loss of the 2D character of the ZO mode.

From the series of calculations, we can obtain the thermal expansion coefficients as a function of temperature using two methods. The left panel of Fig. 6 shows results from fitting the third-order Birch-Murnaghan equation of state, while the right panel shows those from the Grüneisen theory. The details of the fitting to the equation of state at representative temperatures are shown in Fig. 5. The general trends for all four materials are the same: α starts from zero, goes down and reaches a minimum value; afterwards, it goes up monotonically. This can be understood as follows. At low temperatures, the ZA mode is populated much more than all other modes, and it has a large DOS. Thus, it dominates over other modes and contributes to negative thermal expansion due to its negative Grüneisen parameter. The ZA mode keeps dominating until a certain temperature. After that, the modes with positive Grüneisen parameters become populated and important, and consequently α goes up. The temperature at which α reaches its minimum is related to the temperature at which the heat capacity of the ZA modes saturates to its classical value (Eq. 4): the heavier the element, the lower this temperature. We note that the mode Grüneisen parameter and the thermal expansion coefficient reflect, respectively, the small change of mode frequency as a function of lattice constant and of lattice constant as a function of temperature; both are very sensitive to the calculation parameters and approximations used. Although our results from the two methods follow similar trends, they differ quantitatively. Actually, the results from fitting the equation of state depend sensitively on the range of strain applied to the material. For 2D materials, the ZA mode is soft near the Γ point. A slight compression of the lattice constant results in a decrease of the phonon frequency. In practical calculations, modes near the Γ point go negative, indicating that the structure is not stable (Fig. 7 inset), or that the QHA used here is no longer valid. To minimize this technical problem, one should keep the strain as small as possible. This is why a small strain of ±0.5% was chosen in this work.
But, on the other hand, to fit the results to an equation of state, we need data points spanning a reasonably large range of energy. Due to this difficulty, we argue that it is more appropriate to use the Grüneisen theory to predict the thermal expansion of 2D materials, as shown in the right panel of Fig. 6. We have also compared the thermal expansion coefficients obtained here with previous DFT results in Tables I and III. For graphene, we get results similar to Ref. 24. For silicene and germanene, due to the different long-wavelength cutoff used and the different k-point sampling, our results are similar to, but quantitatively different from, those of Ref. 27. This discrepancy is acceptable. For blue phosphorene, we get a negative thermal expansion coefficient of −1.0 × 10⁻⁶ K⁻¹ by fitting the equation of state, in reasonable agreement with the −0.5 × 10⁻⁶ K⁻¹ of Ref. 9. However, in Ref. 25 the authors obtain a positive value of 7.8 × 10⁻⁶ K⁻¹, much larger than ours. We believe that this large discrepancy comes from the different range of strain applied; we argue that too large a strain drives the system out of the range of validity of the QHA (Fig. 7).

IV. CONCLUSIONS AND REMARKS

We have studied the vibrational and thermal properties of graphene, silicene, germanene and blue phosphorene using first-principles calculations based on the QHA. We have shown that the similarities and differences of their vibrational and thermal properties can be traced back to their structures. We find that all the materials considered show negative thermal expansion at room temperature. Our findings are useful in the design of van der Waals (VDW) heterostructures, where different 2D materials are vertically stacked together. Finally, from the numerical point of view, we argue that the calculated thermal expansion coefficients depend sensitively on the strain applied to the material, due to the soft ZA mode of 2D materials. Thus, it is more appropriate to use the Grüneisen theory to study thermal expansion in 2D materials. Meanwhile, a more advanced method going beyond the QHA is needed for a more accurate prediction of the thermal expansion coefficient in these 2D materials. Molecular dynamics simulation can in principle take the full anharmonic interactions into account, and serves as a possible solution to the problem; but the computational cost of obtaining an accurate thermal expansion coefficient is huge. We are aware of only one work using this approach 33.

It is worth mentioning that all the calculations here are done for a single layer, without including the substrate. For a supported monolayer, the interaction between the layer and the substrate removes the translational invariance of the monolayer: the Γ-point frequencies of all modes become nonzero, and the absolute values of the negative Grüneisen parameters become smaller. As a result, the thermal expansion at room temperature becomes less negative or even positive. The effect of the substrate on the thermal expansion of graphene nanoribbons has been studied in Refs. 33 and 50. One should keep this fact in mind when comparing theoretical to experimental results. For bulk materials, using the Klemens model 51-53, the phonon thermal conductivity can be estimated from the dispersion and Grüneisen parameters obtained here. For example, the phonon group velocity and density of states can be readily deduced from the dispersion relation, and the anharmonic interaction between different modes can be estimated from the Grüneisen parameters.
But for 2D materials there are subtleties which make this estimation inaccurate. Currently, there is still ongoing debate on the size dependence of the phonon thermal conductivity of 2D materials 54,55. Interestingly, hydrodynamic phonon transport has been predicted in 2D materials 56,57. All of this makes it difficult to estimate the thermal conductivity from the quantities calculated in this work.
Can dog-assisted and relaxation interventions boost spatial ability in children with and without special educational needs? A longitudinal, randomized controlled trial

Children's spatial cognition abilities are a vital part of their learning and cognitive development, and important for their problem-solving capabilities, the development of mathematical skills and progress in Science, Technology, Engineering and Maths (STEM) topics. As many children have difficulties with STEM topic areas, and as these topics have suffered a decline in uptake in students, it is worthwhile to find out how learning and performance can be enhanced at an early age. The current study is the first to investigate if dog-assisted and relaxation interventions can improve spatial abilities in school children. It makes a novel contribution to empirical research by measuring longitudinally if an Animal-Assisted Intervention (AAI) or relaxation intervention can boost children's development of spatial abilities. Randomized controlled trials were employed over time, including dog intervention, relaxation intervention and no treatment control groups. Interventions were carried out over 4 weeks, twice a week for 20 min. Children were tested in mainstream schools (N = 105) and in special educational needs (SEN) schools (N = 64) before and after interventions, after 6 weeks, 6 months and 1 year. To assess intervention type and to provide advice for subsequent best practice recommendations, dog-assisted interventions were run as individual or small group interventions. Overall, children's spatial abilities improved over the year, with the highest increases in the first 4 months. In Study 1, typically developing children showed higher scores and more continuous learning overall compared to children with special educational needs. Children in the dog intervention group showed higher spatial ability scores immediately after interventions and after a further 6 weeks (short-term). Children in the relaxation group also showed improved scores short-term after relaxation intervention. In contrast, the no treatment control group did not improve significantly. No long-term effects were observed. Interestingly, no gender differences could be observed in mainstream school children's spatial skills. In Study 2, children in SEN schools saw immediate improvements in spatial abilities after relaxation intervention sessions. No changes were seen after dog interventions or in the no treatment control group. Participants' pet ownership status did not have an effect in either cohort. These are the first findings showing that AAI and relaxation interventions benefit children's spatial abilities in varied educational settings. This research represents an original contribution to Developmental Psychology and to the field of Human-Animal Interaction (HAI) and is an important step towards further in-depth investigation of how AAI and relaxation interventions can help children achieve their learning potential, both in mainstream schools and in schools for SEN.
Introduction

Children's visuospatial abilities are important in early development, and processing information about space is involved in infants' object location and locomotion (1). Spatial abilities develop gradually with age, and spatial reasoning encompasses the processing of space, shape, distance, direction, and angles, in addition to understanding these with reference to the self and the wider environment (1,2). Children's egocentric representation (explaining the reference of objects relative to the self) gradually matures to include an allocentric representation (describing locations using external frames of reference, such as objects relative to each other) (3)(4)(5). Accordingly, limits in performance on visuospatial tasks may be due to the immaturity of neural networks involved in such functions (2). Spatial cognition is intricately linked with problem-solving capabilities and high-level processing in the cognitive system (6). For example, spatial ability is associated with the development of mathematical skills in children (6)(7)(8)(9) and plays a critical role in achievement in STEM topics (science, technology, engineering and mathematics) (10-12). Additionally, as spatial reasoning is part of humans' integrated neuro-cognitive system, wider functioning such as children's inhibitory control and attentional functioning is also likely to affect processing capabilities. For example, Beattie, Schutte, and Cortesa (13) found that children with better inhibitory and attentional ability had greater spatial working memory. These related abilities are integral to the learning process overall and affect academic performance. It is noteworthy that spatial abilities may be influenced differently by the differing cognitive abilities of typically developing children and those with special educational needs (SEN).
For instance, children with Down Syndrome typically have a cognitive profile with impaired verbal processing abilities, but less impaired visuospatial processing abilities (14-18). Certain visuospatial abilities can also differ between those with Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD) and typically developing children (19,20), for example in spatial perspective-taking (21) and other spatial tasks. Studies have reported that those with ASD can show superior ability in visuospatial processing tasks, with particular strength in tasks focussed on detail and local feature processing, and poorer ability to attend to the spatial configuration as a whole employing more global processing, as detailed in Central Coherence Theory (22). However, these findings are not replicated in all circumstances: mixed evidence exists, and more complex explanations, taking other cognitive processing functions into account, have been offered (23)(24)(25)(26). As with typically developing children, children with the same diagnosis vary in terms of their cognitive profile (23). Additionally, the picture is more complex when taking gender differences in SEN populations into account, as the overrepresentation of males makes the generalization of findings problematic (27).

Gender differences in spatial processing have been found in typical populations (28-30), with males often outperforming females (29,(31)(32)(33)(34). Indeed, Reilly, Neumann, and Andrews (35) suggest that of all cognitive functions, spatial processing shows the largest difference between genders. With its integral role in the development of the quantitative reasoning skills important for mathematics and science subjects, this difference could contribute to gender differences in STEM subjects and the underrepresentation of women in STEM fields. Theories have been offered to explain such differences on the basis of biological/hormonal factors (35)(36)(37)(38)(39), gender orientation and gender stereotypes [(35, 38), for wider discussion see (40)], socialization and play experience (35) and evolutionary pressures (41, 42). However, others argue that differences in spatial ability are often not present or are small (30,40,43,44). Furthermore, evidence suggests that spatial skills are malleable and can be improved through training (45)(46)(47), and that environmental factors and experience also play a large part in these observed differences (35,48,49).

As spatial processing and problem-solving are crucial to educational outcomes, for example in mathematics and in STEM subjects, it is important that such abilities are adequately supported in the school environment. This is especially pertinent as in recent years pupils' interest in STEM subjects, and uptake of STEM subjects by students, has dropped alarmingly (50). One intervention which may enhance spatial abilities in children is an animal-assisted intervention (AAI). AAIs are becoming increasingly popular in educational settings, as pets in the classroom may be beneficial to children's learning success (51), classroom behavior (52) and their emotional and cognitive development [see (53)(54)(55)(56)(57) for recent reviews and overview], as well as contribute to lower stress levels (58,59).
While animal-assisted interventions show some beneficial effects on human health and emotional well-being, learning and memory (55)(56)(57), it has been demonstrated that research in the field is growing, but also that the knowledge base still needs to be strengthened, with many areas still under-investigated (60). In the past, research in this area has often suffered from small sample sizes, a lack of control groups and overall insufficient scientific rigor (56,61,62). However, in recent years steady progress has been made in more thorough investigation of the effects of AAI on human health, well-being and cognition, with improvements also found in executive function [e.g. (60,(63)(64)(65)].

Next to AAI, relaxation, meditation and yoga interventions have become increasingly popular in schools. They can help to improve mental and physical well-being, regulate stress, and enhance performance on selective attention, concentration and mental flexibility tasks and psychomotor speed (66-74). Broderick and Metz (75) found that girls demonstrated increased feelings of calmness, relaxation and self-acceptance after mindfulness interventions. However, overall, the field suffers from similar methodological problems as the earlier field of AAI.

Currently, only very few studies have been carried out on the effects of AAI on children's specific cognitive abilities. Previous studies have highlighted the beneficial effects of a dog's presence during a task on young children's cognitive functions such as memory (76), object recognition performance (77) and object categorisation tasks (78,79). In addition, studies such as those of Hergovich, Monshi, Semmler and Zieglmayer (80) and Kotrschal and Ortbauer (52) reported increased classroom cohesion and improved behavior of children with a dog present, which is an important factor in ensuring that conditions are optimal for learning. There are currently no studies investigating effects of dog-assisted interventions on children's cognitive development, and more specifically, there are no studies focusing on spatial abilities.

Explanations as to why dogs can have beneficial effects on humans are proposed by adapted and dynamic biopsychosocial models which integrate biological, physiological, psychological and social support (81-87), while others provide historical and social explanations [e.g., (88,89)]. Physiological indices of arousal and affiliative behaviors have been identified as biological mechanisms underlying the human-animal bond (e.g., lower stress levels as indicated by lower cortisol, higher oxytocin levels, lowered blood pressure, reduced skin conductance, lower heart rate; (59,(90)(91)(92)(93)). Improved concentration, attention and motivation have been observed, with the dog's presence creating a positive social atmosphere [for overview (90)]. Thus, an overarching, integrative approach combining neurobiological processes, attachment, biophilia and caregiving to pets may be best suited to explain the resulting human-animal relationships, their development and their physiological and endocrine basis (83). Gee, Griffin and McCardle (94) proposed a theoretical framework organizing the results of research and predicting unexplored pathways of indirect effects on learning through social-emotional development. This framework includes direct effects of classroom activities involving animals (mostly dogs) on children's motivation, engagement, self-regulation, and social interaction, as well as indirect effects on socio-emotional development and learning.
This framework, though broadly useful, was not intended to serve as the basis for specific predictions within individual areas of cognitive development. Despite spatial cognition being a crucial part of cognitive development and highly important for mathematics and STEM subjects, studies have so far not been carried out on the effects of animal-assisted interventions (AAI) or relaxation interventions on children's spatial cognition. The current study closes this knowledge gap and makes a novel and unique contribution to the field of animal-assisted and relaxation interventions within Developmental Psychology. We tested if dog-assisted interventions lead to enhanced spatial ability in children compared to relaxation interventions and compared to a no treatment control group. Effects of AAI and relaxation interventions on children's spatial ability were investigated employing randomized controlled trials longitudinally, thus guaranteeing high scientific rigor. We tested typically developing children and children with special educational needs (SEN) to maximize knowledge gain in the field. Additionally, to provide practical advice for best practice in schools, intervention type was also assessed as to which works best [as the evidence is ambiguous (94)], and interventions were carried out as individual or small group interventions. This adds to the knowledge base as, depending on results, it may be possible to reduce direct contact time for therapy dogs per setting, adding to animal welfare (95), and it may help to introduce the most cost-efficient intervention provision in educational settings (94).

In line with the above research, we predicted spatial ability improvements in the dog-assisted interventions compared to the control group when comparing pre- and post-intervention periods. We expected intermediate effects for relaxation interventions and no or only maturational change in spatial abilities in the control condition. Concerning longer lasting effects, our longitudinal design allowed for exploration of such effects. The current study was part of a larger, longitudinal, randomized controlled trial systematically examining the effects of dog and relaxation interventions on school children's academic performance, social and emotional well-being, and measuring physiological changes (Lincoln Education Assistance with Dogs; https://lead.blogs.lincoln.ac.uk/) (95). The longitudinal studies described here investigated specifically the effects of AAI and relaxation interventions on spatial cognition in typically developing children (Study 1) and children with SEN (Study 2).

Participants

This research was approved by the University of Lincoln Research Ethics Committee (SOPREC) and is in line with British Psychological Society ethics guidelines. In addition, the WALTHAM Animal Welfare and Ethical Review Board also approved the research. Children were recruited through mainstream and special educational needs schools in Lincolnshire and Gloucestershire, UK. In Study 1, 105 children took part in Lincolnshire, UK (N = 54 males, 51 females, mean age = 8.91 yrs, SD = 0.39, min = 8.21, max = 10.07; 4 mainstream schools). In Study 2, 64 children (N = 54 males, 10 females, mean age = 9.27 yrs, SD = 0.79, min = 8.0, max = 11.5) from 7 SEN schools in Lincolnshire and Gloucestershire, UK, participated.
Diagnoses for the latter included 15 children with ASD, 16 with ADHD, 12 with ASD and ADHD, 12 with learning disorder not otherwise specified (LD NOS), and 9 with unknown diagnoses, as parents did not provide this information. Please see Table 1 below for the numbers of children taking part at each assessment point per condition and school type, and Table 2 for retention rates and reasons for attrition. All children attended school full-time. Researchers and dog handlers were in possession of enhanced police checks, and researchers were highly experienced in research with school children.

Dogs and handlers

Twenty-two different dogs and their handlers (N = 21) took part in the interventions on a volunteer basis. All handlers had insurance: N = 19 through their registration with Pets as Therapy, N = 1 obtained separate insurance, and N = 1 was insured via their registration with Therapy Dogs Nationwide. All dog handlers were required to attend safety training on understanding dog stress signaling behaviors before the study started. Dog breeds included: 1 Greek Hare-Hound, 2 Cavalier King Charles Spaniel and Miniature Poodle crossbreeds, 1 Labrador and Miniature Poodle crossbreed, 2 German Short-Haired Pointers, 2 Miniature Schnauzers, 3 Labradors and 1 Labrador crossbreed, 2 Tibetan Mastiffs, 1 Border Terrier, 1 Scottish Terrier, 1 Lurcher, 1 Clumber Spaniel, 1 Yorkshire Terrier, 1 Pekingese, 1 Smooth Collie, 1 Cocker Spaniel and 1 Golden Retriever. All dogs were healthy and had been assessed by independent canine behavioral experts to ensure their suitability to work with children.

Materials

The British Ability Scales (BAS-3) (96) were used to measure children's spatial ability (SA). The BAS-3 is a standardized cognitive scale normed for use from 5:00 to 17:11 (years:months) and designed to measure mental abilities significant for learning and educational performance (see https://www.gl-assessment.co.uk/assessments/products/bas3/ for more details). Two assessments within the BAS-3 were administered, Recognition of Designs and Pattern Construction, to provide a Spatial Ability cluster score (SA). Children's performance in the Recognition of Designs task reflects their visual-spatial processing, short-term visual memory, perception of spatial orientation and visualization abilities. Performance in the Pattern Construction task reflects the following: a child's visual-motor skills; spatial visualization (including matching abilities, perception of relative orientation, the ability to reproduce designs with objects, and to perceive and analyze visual information); non-verbal reasoning abilities (including skills in decomposing and reconstructing a design, and the use of systematic strategies, for example sequential assembly, hypothesis testing and trial and error); and the ability to follow verbal instructions and use verbal mediation strategies. After an extensive assessment tool search and piloting, we chose the BAS-3 tool kit for the following reasons: it contained a range of assessments of the specific areas our research aimed to investigate, it was feasible to carry these out in realistic time slots with suitable duration for children of the chosen age group, it was usable and normed for both cohorts, and it was normed for British-English speaking children.

Procedure

Informed consent

Parents gave consent for all children to take part in the study and provided details of any allergies and phobias to dogs.
Children's assent was acquired prior to all test and intervention sessions, and parents and children were informed of their (and their children's) right to withdraw from the study at any time without having to give a reason. Dog handlers consented to taking part with their dogs and were free to withdraw at any time. Dogs were monitored continuously throughout the study for potential signs of wanting to withdraw; they, too, were free to retreat at any time.

Safety training and familiarization

All children took part in detailed safety training with dog body language training (95,97) and further safety information before the study began, in order to set clear expectations for children's behavior around the dogs. This reduced the potential risk of incidents and was designed to foster respect and uphold high standards of animal welfare. Children were familiarized with the dogs prior to the start of the study to eliminate potential novelty effects (95).

Testing

Children's performance was assessed before interventions began (baseline) and immediately after interventions, and testing was repeated after 6 weeks, 6 months and 1 year to assess whether interventions provided immediate, short-term, mid-term or long-term improvements to children's spatial ability. See Figure 1 below for an overview of procedure details.

Interventions

Stratified randomization was used to place children in the different intervention groups. This method ensured that we did not confound dog ownership, socio-economic status or children's academic ability with intervention condition. Testing was carried out in schools in waves, with 1/3 of participants in the dog group, 1/3 in the relaxation control group and 1/3 in the no treatment control group, to avoid potential effects of seasonal affective disorder (SAD). For example, if the dog intervention had taken place in summer, and the control groups in autumn or winter, we would have confounded the study and not been able to say if effects were due to SAD or to our intervention conditions. Hence, to avoid confounding the study, all testing with all groups took place over the whole year as described above.

Individual and group interventions

Children were randomly assigned to take part either in individual or in small group interventions.

Dog-assisted intervention

Interventions took place in a separate room in schools during the normal school day. During the interventions, the researcher and the dog handler were present, as were the dog and the children. Having completed all safety training, children were taken to the room, with the dog handler and dog waiting outside the room to greet the children (the dog had been familiarized with the room and with the children beforehand, see above). Children were asked to sit down and remain seated unless the activities taking place required them to do otherwise. Intervention sessions were 20 min long and structured, with approximately 5 min for initial dos and don'ts (e.g., "don't hug/kiss/crowd the dog," etc.) and greeting the dog and handler. Then approximately 10 min were spent on learning facts about the dogs via the handler, and talking about and interacting with the dog as deemed suitable by the handler and researcher, who were constantly observing the dog's signaling and body language in order to safeguard the dog's welfare. As all sessions were child-led, they varied somewhat in content. The last 5 min were spent saying goodbye to the dog and handler and petting the dog as appropriate (again decided by the dog handler and researcher).
Relaxation intervention

Relaxation sessions took place in a separate room and involved child-age-appropriate meditation consisting of "Jellyfish" and "Butterfly" recordings from Enchanted Meditations for Kids (98), presented alternately across the sessions. Children were asked to lie down on a yoga mat and close their eyes; children who did not feel comfortable doing this, or who were unable to due to mobility issues (mainly in SEN schools), were allowed to sit and relax with their eyes open or closed as they preferred. Again, the duration was 20 min, with approximately 5 min of active relaxation (body scanning with children moving toes, legs, fingers, etc.), followed by 10 min of meditation, and 5 min of active relaxation, to match the profile of the dog sessions as closely as possible.

Control group

Children assigned to the no treatment control group condition took part in their usual class lessons.

Animal welfare considerations

A robust risk assessment was carried out for all settings taking part in the study (95). This incorporated strict protocols for animal welfare, which were followed at all times. Care plans were completed for all dogs. Dogs were not required to work more than 2 h per day and had short breaks every 20 min as children moved between classrooms. Typical working times for most sessions were 1 h and 20 min in total. Dogs always had access to their own bedding for "time out," and water was freely available. Interventions would have been stopped if dogs had shown any signs of discomfort or tiredness, and handlers were free to take their dog outside for a break as they felt appropriate. However, this did not occur.

Power calculation

Before the study started, a priori power calculations were undertaken to determine the sample size for the main repeated measures ANOVA with 3 conditions (dog intervention, relaxation intervention, and control group) and 5 measurement points.

FIGURE Welfare, safety, familiarization and consent and assent procedures carried out before and at study start for dogs and handlers, children, parents and schools taking part in the longitudinal randomized controlled trial.

Statistical analysis

Repeated measures ANOVAs were carried out overall, and for Study 1 (mainstream school children) on Condition (dog intervention, relaxation intervention, no treatment control) and Time (before and after intervention, 6 weeks, 6 months, 1 year), also including Gender and Dog Ownership for children. The analysis was then split into group and individual testing conditions. As we predicted a complex interaction pattern of improvements in spatial ability in children in the dog intervention group over the relaxation group, with no improvements expected in the no treatment control group, planned comparisons with Bonferroni corrections were calculated to investigate these specific predicted effects. A similar pattern was followed for children in SEN schools (Study 2). However, due to the sample consisting mainly of boys, and due to missing information on dog ownership, we did not include Gender and Dog Ownership as factors in this analysis (see footnote, p. 12). It is important to note that for all intervention conditions, specific predictions, calculated with planned comparisons, were of core interest, as it was predicted specifically that children in the dog intervention would show clear improvement after the intervention compared to the no treatment control group. Some improvement was expected in the relaxation group between pre- and post-intervention test times, and no significant improvement in the control group. Hence, planned comparisons were crucial to our analysis. Significance testing follows the usual p-value criterion of smaller than 0.05 for significant results, and for planned comparisons smaller significance levels were used, employing Bonferroni corrections. Statistical analysis was carried out using Statistica 12 as well as IBM SPSS, version 26. No data were excluded or replaced.
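The analysis plan above can be re-expressed in open-source tools: a mixed ANOVA (between-subjects Condition, within-subjects Time), Bonferroni-corrected planned paired comparisons, and an a priori power step. The sketch below assumes long-format data with hypothetical columns (id, condition, time, spatial_score) and an assumed effect size; the study itself used Statistica 12 and SPSS 26, so this is an illustrative translation, not the authors' code.

```python
# Sketch of the analysis plan: mixed ANOVA (between: Condition; within: Time),
# Bonferroni-corrected planned paired comparisons, and an a priori power step.
# Column names and the effect size are assumed, not taken from the study.
import pingouin as pg
from scipy import stats
from statsmodels.stats.power import FTestAnovaPower

# A priori sample-size estimate for a 3-group design (effect size f assumed):
n_total = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                        power=0.80, k_groups=3)
print(f"approx. total N for the between-groups factor: {n_total:.0f}")

# df: one row per child per test time, columns: id, condition, time, spatial_score
# aov = pg.mixed_anova(data=df, dv="spatial_score", within="time",
#                      subject="id", between="condition")

def planned_comparison(before, after, n_comparisons):
    """Paired t-test with Bonferroni correction for a planned pre/post contrast."""
    t, p = stats.ttest_rel(before, after)
    return t, min(p * n_comparisons, 1.0)
```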
Results

Inspection of pre-intervention data for Study 1 and Study 2

Initial comparison of cohorts: assessment of baseline spatial ability

A one-way analysis of variance revealed that scores for spatial ability were significantly lower for children who attended SEN schools (M = 82.67) compared to those in mainstream schools.

Children who had taken part in the relaxation interventions showed no immediate, but significant short-term improvements in spatial ability scores from post-intervention to 6-week test times.

Group intervention session

To assess the effects of AAI in group interventions, the same repeated measures ANOVA of Time (pre-intervention baseline, post-intervention, 6 weeks, 6 months, 1 year) x Condition (dog, relaxation, control) x Gender (male, female) x Dog Ownership (dog, no dog) was conducted. The significant overall effect of Time [F(4,184) = 20.726, p < 0.001, η²p = 0.311] was analyzed further and showed that children taking part in group interventions made significant improvements in spatial ability from baseline to post-intervention [t(61)]. As above, we predicted specific improvements per condition, and planned comparisons revealed significant improvements in spatial ability for children in the group dog interventions. These occurred only from post-intervention to the 6-week test time [t(17)].

Group intervention session

To assess results for children who took part in group interventions, the repeated measures ANOVA of Time (pre-intervention baseline, post-intervention, 6 weeks, 6 months, 1 year) x Condition (dog, relaxation, control) revealed a significant main effect of Time [F(4,84)].

Discussion

Children's spatial abilities are a crucial part of their learning and cognitive development, and important for children's problem-solving capabilities, the development of mathematical skills and progress in STEM topic areas. As many children, with and without SEN, struggle with maths and STEM topics, and as these topics have suffered a significant loss of interest by school children and a decline in uptake in students (50), it is worthwhile to study how learning and performance can be enhanced at an early age. This study is the first to investigate if dog-assisted and relaxation interventions can improve spatial abilities in school children. The study employed high scientific rigor by using randomized controlled trials and a longitudinal design. We also broadened the scope of the research by including both children attending mainstream and special educational needs (SEN) schools. As it has been hitherto unknown whether individual or group interventions work better, the study also assessed the effects of individual and group interventions to make recommendations for best and most efficient practice. The results outline how typically developing children and children with SEN developed over the year, during which time all children's spatial ability scores increased significantly from baseline over the 1-year study duration, thus showing the expected general learning and maturation effects.
Immediate and short-term improvements were also revealed after 4-week interventions. Study 1 results indicate that typically developing children benefitted from the dog intervention. Improvements in spatial ability scores occurred immediately after the intervention and lasted up to 6 weeks, with effect sizes ranging from medium to large. Interestingly, individual dog interventions showed more immediate effects, while group interventions had somewhat delayed effects, with children showing better scores from intervention end to 6 weeks. Likewise, children in mainstream schools who took part in the relaxation intervention also benefitted overall, albeit with relaxation interventions showing no immediate, but significant short-term improvements in spatial ability scores from post-intervention to 6-week test times. In contrast, it is noteworthy that no significant improvements in spatial ability scores were seen in the no treatment control group. Overall, the results show that dog and relaxation interventions enhance mainstream school children's spatial abilities, and it is noteworthy that the dog intervention shows significant results throughout, with individual sessions having a more immediate effect and group sessions a delayed effect. It could be argued that individual sessions involved more intensive interaction between children and dogs, and therefore stronger calming effects, in line with Beetz and colleagues (90). This may have had a beneficial effect on children's processing of spatial tasks shortly after the interventions. Children in the group sessions had less intensive contact time with the dogs, but they instead had other group members to share the experience with, which could contribute to a delayed effect. Future research will need to establish whether the less intense animal experience, combined with peer contact and potential later discussions, may have led to a delayed beneficial effect.

FIGURE: Results of longitudinal assessments in the dog intervention: means for British Ability Scale spatial ability scores (y-axis) over time (x-axis) in the dog intervention group for children with and without special educational needs (SEN). Higher scores imply higher ability.

FIGURE: Results of longitudinal assessments in the relaxation intervention: means for British Ability Scale spatial ability scores (y-axis) over time (x-axis) in the relaxation intervention group for children with and without special educational needs (SEN). Higher scores imply higher ability.

FIGURE: Results of longitudinal assessments in the no treatment control group: means for British Ability Scale spatial ability scores (y-axis) over time (x-axis) in the no treatment control group for children with and without special educational needs (SEN). Higher scores imply higher ability.

Study 2 revealed, in contrast to the typically developing cohort, that children with special educational needs (SEN) showed a significant increase in spatial ability in the relaxation condition only. They showed significant improvements from baseline to post-intervention assessments with medium effect sizes. While children in the dog condition also showed improved scores, these differences did not reach significance. Likewise, children in the no treatment control condition did not show a significant improvement in scores. In the SEN cohort, no clear advantage for either individual or group interventions became evident from the data. Thus, this cohort benefitted from relaxation interventions instead of dog-assisted interventions.

The integrative dynamic biopsychosocial model (82,83) is best suited to explain the result patterns for both cohorts, based on the stress-reducing and calming effects of both interventions, including the creation of a positive atmosphere, beneficial to learning, in dog-assisted and relaxation interventions (82,83,85,86,89,90,92). Concerning spatial ability tasks specifically, it should be highlighted that these involve working memory, which incorporates integrated systems of the central executive, phonological loop and visual-spatial sketchpad (100). These flexible, integrated cognitive systems can be affected by individual factors and wider influences such as learning, emotion and stress (101). Previous AAI research has shown positive effects on memory during cognitive tasks (76)(77)(78)(79)(102). Positive emotions can also have a beneficial effect on spatial working memory (103)(104)(105)(106), and affective states can influence working memory (101). Relaxation and stress reduction, as shown in other research on cortisol level buffering in AAI, is likely involved during both dog and relaxation interventions (58,59), and may have benefitted the spatial ability tasks. Likewise, improvements in executive functioning, which have been linked to the presence of a dog in college students (63) and school children (64), may be driving improvements in spatial ability scores. As one is able to inhibit irrelevant thoughts, relax and focus on the task at hand, general cognitive abilities, such as spatial ability, also improve.

Regarding the developmental pattern over the year, it is noteworthy that children's scores did not rise as steeply (or significantly) after the 6-week follow-up point. This may be because children's cognitive scores can fluctuate as the school term progresses, as learning and development do not always represent a linear process (105,106). Potentially, the repeated use of the BAS tool kit could present a limitation of the current study, if the closer test intervals at the beginning of the study (baseline, after the 4-week intervention, 6 weeks later) resulted in practice effects enhancing the test results up to the 6-week time point, which may dissipate after a longer break of 6 months. However, it is unclear how likely this scenario is, given the complexity of the tasks and the differences in results between the experimental groups and the no treatment control group. Further studies may also include a different cognitive instrument, or a combination of instruments. In the current study we were limited to one cognitive assessment tool because of other measures taking place within the overall larger-scale project, as mentioned above. The lack of further significant improvements suggests that dog interventions may not show longevity past the post-intervention test time or the later assessment 6 weeks after post-intervention testing (in week 12).

Concerning cohort differences, children's scores of spatial reasoning were significantly higher for those attending mainstream schools than for those with special educational needs (2,19,20). This is in line with previous research showing differing performance in children with neurotypical and non-neurotypical developmental profiles. Within the SEN cohort, processing of spatial ability was not significantly different based on the diagnoses of the children.
This is a noteworthy finding, given that different diagnoses have diverse aetiologies and so differ in terms of their neural systems, memory, attention and executive function, which are integral to efficient visuospatial processing. The current results are therefore consistent with those studies that did not find superior ability in visuospatial processing tasks in participants with ASD (24)(25)(26). As spatial reasoning is important in many other areas of learning such as the STEM topics, and is malleable (46), it would be interesting and worthwhile to assess whether AAI paired with specific skills training can foster long-term benefits for children's spatial ability.

As pet ownership may have additional beneficial effects on the health and well-being of children (107), and as it is unknown how this may interact with the effects of AAI and relaxation interventions, dog ownership was included in this longitudinal study. However, no effects of dog ownership were found, nor were there any interactions. This finding suggests that for populations (typical and SEN) of 8-10-year-old children, dog ownership is not necessary for the accrual of benefits from interacting with a therapy dog in interventions. It may be useful in future to investigate attachment to pets and attitudes to pet dogs, to find out if these potentially influence interaction outcomes. With interventions taking place twice a week over 4 weeks, and the cognitive assessments carried out without the presence of a dog in the room, the current study adds to previous research showing beneficial effects on cognitive tasks with a dog present during testing [see (76)(77)(78)(79)]. As there are no comparable studies into the effects of AAI on children's spatial cognition over time (56), the current research pioneers longitudinal investigation of AAI and relaxation interventions.

Interestingly, despite previous research and theories reporting gender differences in spatial skills, this study found no significant differences between girls and boys on the standardized tests. These results are in line with research showing small or no differences (30, 42-44, 108) and potentially highlight the influence of teaching [e.g., (35,47)].

While individual intervention sessions require more working hours from dogs and handlers, group sessions could mean cost efficiencies for schools and reduced working time for therapy dogs. However, the results of the current study indicate that the individual dog interventions may be more effective. To our knowledge, there are no systematic studies on how the dosage of interventions may relate to intervention type (individual or group); future research should be carried out to enable effective interventions.

Concerning potential feasibility, organizational, ethical and safety challenges in school settings, the following should be highlighted: for this longitudinal, randomized controlled trial in schools with two child cohorts to be feasible, it required early and meticulous planning. Next to the usual complex planning involved in longitudinal studies, further protocols concerning ethics and safety had to be established and implemented, including, for example, school, parent and child consent/assent. We operated with a timetable that was agreed in advance with schools and dog handlers, and we managed to maintain schools' and children's continued interest and cooperation.
Concerning human and animal safety and welfare, we successfully employed the Lincoln Education Assistance with Dogs (LEAD) risk assessment tool (95) for this study. The tool not only ensured a thorough risk assessment and provided a structure with clear areas of responsibility, but also enabled consistent, safe and welfare-guided practice for all involved. We would therefore recommend the following steps as vital for successful AAI and AAI research in schools:

(1) Timing and commitment: Following appropriate ethics approval, ensure significant advance recruitment of schools with clear information as to what the requirements are concerning time and space (e.g., a separate room for a specific duration). It is useful to be clear about the amount of commitment needed from schools and teachers, so all involved can agree to researchers spending a substantial amount of time in schools with the children.

(2) Clarity of information: Transparency concerning the study, to inform teachers, parents and children of all that is involved, is essential to obtain consent/assent as well as to maintain ongoing interest.

(3) Safety and welfare: Human and animal safety and welfare need to be ensured at all times. The LEAD tool (95) for AAIs, as well as safety training for all involved as described above [e.g., on dog body language (97)], is efficient and helps to raise awareness of potential risk and ensure the safety and welfare of all involved.

In conclusion, this longitudinal RCT study is the first to demonstrate how children's spatial abilities can benefit from AAI with dogs and from relaxation interventions. In Study 1, typically developing children showed improvements in spatial abilities especially over the first 12 weeks, but also beyond, and those in the dog group showed significant improvement immediately after the intervention and also short-term (a further 6 weeks after intervention end). They also showed significantly enhanced performance short-term after relaxation interventions. In contrast, no significant improvements in spatial abilities were found in the no treatment control group. In Study 2, the cohort of children with SEN showed lower scores overall, showed most learning only in the first 6 weeks, and benefitted only from relaxation interventions. Intervention effects did not extend to the second testing point after the end of the intervention. As immediate and short-term effects, but not long-term effects, were evident, and as spatial abilities are important for wider academic skills such as maths and STEM topics in both cohorts, it is recommended that further research assesses how AAI and relaxation interventions may be incorporated into training applications to enhance such skills.

Furthermore, we need to understand better why a dog intervention may improve these skills in typical children, but not in SEN children. It is possible that with SEN children a longer or more intensive period of intervention (higher dosage) may be required to accrue benefits, if any, of an AAI. This study provides information about the time course of effects of one type of AAI on spatial ability, but many variables need to be examined in the future, such as dosage of intervention (number of days of AAI per week, number of weeks of AAI), details of the intervention (do the children need to touch the dog), and delivery of the intervention (free form vs. planned pedagogy).
The underlying mechanisms of action, and the potential for interaction among these mechanisms, need to be investigated in further depth so that we may make effective recommendations for the use of AAI in typical and SEN children in future.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving human participants were reviewed and approved by the University of Lincoln Psychology Research Ethics Committee (SOPREC) and also the WALTHAM Animal Welfare and Ethical Review Board. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

Author contributions

KM conceived the study and obtained the research funding. VB contributed to conception of the project. KM, NG, and ER advised on data collection and analysis. VB and MD collected the data. ER oversaw the database. VB, MD, KM, and ER collated and/or analyzed the data. All authors contributed to the final research design of the current study. All authors contributed to advice on analysis and to writing the manuscript, and read and approved the final manuscript.

Funding

This research was funded with a research grant from the WALTHAM Petcare Science Institute (formerly Waltham Centre for Pet Nutrition), Mars Petcare, and a grant from the Waltham Foundation.
The uncommon delayed neurological deficit in posterior fossa chronic epidural hematoma: A case report

Abstract

Background

Chronic epidural hematoma (CEDH) is uncommon and therefore less well characterized. The incidence of CEDH ranges from 3.9 % to 30 % of all epidural hematomas. Posterior fossa epidural hematomas represent a rare clinical entity, reported in only 4-7 % of all extradural hematomas. This rare condition may present with rapid clinical deterioration due to a quick increase in size that can cause brain stem compression. This study aims to present a case of chronic epidural hematoma with the uncommon sign of delayed neurological deficit, specifically in the posterior fossa region.

Case presentation

We report a case of a 34-year-old male with left upper and lower extremity weakness for 3 days before admission. The patient had a history of falling from a height of approximately 3 m about 3 weeks earlier. Craniotomy epidural hematoma evacuation was performed on the patient.

Conclusion

Chronic epidural hematoma is uncommon and therefore less well characterized. The results of surgical care of symptomatic chronic posterior fossa EDH are often excellent. Early diagnosis and emergent evacuation provide a better outcome.

Introduction

An epidural hematoma (EDH) is an extra-axial collection of blood within the potential space between the outer layer of the dura mater and the inner table of the skull. An epidural hematoma occurs in 2 % of all head injuries and up to 15 % of all fatal head traumas [1,2]. Males are more often affected than females. Furthermore, the incidence is higher among adolescents and young adults. Acute epidural hematoma was first reported by Jacobson in 1886 and is a well-known clinical entity [3]. An epidural hematoma diagnosed more than 14 days after head injury is classified as a chronic epidural hematoma (CEDH). Chronic epidural hematoma is uncommon and therefore less well characterized. The incidence of CEDH, which may range from 3.9 % to 30 % of all epidural hematomas, is also unknown from the literature. In the post-computed tomography (CT) era, CEDH is often considered a rare entity [4][5][6].

Posterior fossa hematoma has been reported in 3 % of all operated extradural hematomas and 0.3 % of all intracranial hematomas. A history of occipital trauma was present in nearly all cases. Interestingly, most cases of epidural hematoma were not related to motor vehicle accidents [7,8]. Occipital bone fractures were found in nearly 80 % of cases. In more than 50 % of EDH, no source of bleeding was found. Diffuse venous oozing from the edges of the fracture line or the surface of the stripped dura, torn dural branches of the vertebral artery, and laceration of a dural sinus and emissary veins have all been described as sources of bleeding in EDH of the posterior fossa [9][10][11]. We report a rare case of posterior fossa chronic epidural hematoma with delayed neurological deficit after trauma. This case may provide further information on chronic epidural hematoma with the uncommon sign of delayed neurological deficit, specifically in the posterior fossa region.

Case presentation

A 34-year-old man came to the emergency department with left upper and lower extremity weakness in the last 3 days before admission. He had complaints of headache and dysarthria. The patient had a history of falling from a height of approximately 3 m about 3 weeks earlier, with loss of consciousness approximately 2 h after the incident.
After the accident, the patient was brought to a nearby hospital and was reported to have regained consciousness with good orientation. The patient had a history of vomiting at the hospital and was hospitalized for 5 days. During hospitalization, the patient was fully conscious with no neurological deficit, and no head CT scan examination was performed. The patient was discharged and, after 3 weeks, presented to the emergency department with left upper and lower extremity weakness.

In the emergency department, the patient was fully conscious with a full Glasgow Coma Scale (GCS) score of E4V5M6. Neurological examination revealed left hemiparesis with a motor score of 4. There was left hypoglossal cranial nerve paresis. A head CT scan was performed, which showed a hyperdense, biconvex, inhomogeneous lesion on the right side, with a hematoma volume of 22 ml and a thickness of 2.2 cm. There was also compression of the right quadrigeminal cistern, causing non-communicating hydrocephalus (Fig. 1).

Craniotomy epidural hematoma evacuation was performed on the patient. The aim of this surgery was to release the compression of the cerebellum and the brainstem caused by the hematoma. During surgery, the hematoma was found to be encapsulated, with fluid and clot components (Fig. 2). There was no bone fracture above the lesion. The source of bleeding, whether from a bone fracture or sinus laceration, was unidentifiable. After surgery, the patient was fully conscious with no neurological deficit. The hemiparesis and hypoglossal cranial nerve paresis recovered. The patient was later able to carry on normal activity and return to work, without special care needed. This case report is presented based on the Surgical Case Report (SCARE) Guidelines [12].

Discussion

Traumatic posterior fossa epidural hematomas represent a rare clinical entity. Skull fractures are involved in the majority of cases, although even without a fracture an epidural hematoma can still form. Following trauma, an epidural hematoma often forms when the periosteal dura mater is separated from the calvarium and the intervening veins rupture [13,14]. The size of the hematoma may grow quickly as a result of the vascular rupture. The development of late and chronic clinical pictures is nonetheless possible if venous structures are affected. Posterior fossa EDH has a venous origin in 85 % of cases and is caused by damage to the transverse or sigmoid sinuses secondary to occipital bone fracture [8,15,16].

There is no agreement on a specific time-based definition of CEDH, unlike subdural hematoma, which is referred to as chronic subdural hematoma if discovered more than 21 days following injury. In the literature, chronic EDH is defined in a variety of ways. Sparacio et al. [4] used the term chronic EDH for epidural hematomas operated on more than 48 h after the initial injury, and Clavel et al. [4] classified CEDH as EDH discovered more than 72 h after the initial injury. Bradley [4] recently defined chronic EDH as epidural hematomas recognised more than 14 days after a head injury, based on hemoglobin breakdown products on magnetic resonance imaging [4,6,7]. This definition appears to be more recent, evidence-based, and scientific. In our case, the patient presented 21 days after the initial accident. Based on the literature and the clinical timeline, it is categorised as a chronic EDH.
The possible pathogenetic mechanisms that may explain chronicity in extra-axial hematomas include the existence of associated skull fractures, hematomas that are frontally located, age-related diffuse cerebral atrophy, venous sources of bleeding, and traumatic arteriovenous fistulae of meningeal vessels [1,2,5]. The CT scan of a CEDH frequently reveals a low-density center encircled by a high-density border. In addition to the natural enhancement of the displaced dura itself, the granulation tissue producing a fibrovascular neomembrane on the exterior of the dura mater is thought to be the mechanism of this rim enhancement. The displaced dura mater may also become calcified [4][5][6]. Our CT scan findings demonstrated an inhomogeneous lesion in the posterior fossa with both hyperdense and hypodense components: a hypodense area surrounded by a hyperdense rim.

Some CEDHs are discovered by chance, while others are found when persistent and/or progressive symptoms such as headache, dizziness, nausea, vomiting, memory loss, limb weakness, and alteration of consciousness are investigated [5,8,15]. The earlier surgical evacuation of symptomatic CEDH is performed, the better the results. However, spontaneous remission may be anticipated in patients with few or no symptoms, normal neurological function, and a modest CEDH without any mass effect. In these situations, careful waiting may be appropriate, but this entails expensive serial scans and protracted hospital stays. Even if the condition of the patient is satisfactory, surgical evacuation should be considered if the CEDH is shown not to be naturally absorbed on serial scans, due to the possibility of calcification [3][4][5]8,17]. Our case presented with a delayed neurological deficit 21 days after the accident. We performed surgical evacuation of the hematoma based on the clinical condition and the CT scan examination. Early after surgery, the neurological symptoms of the patient improved, and the patient was able to perform daily activities without any difficulties.

Conclusion

The results of surgical care of symptomatic chronic posterior fossa EDH are often excellent, while minor ones can be treated conservatively. Watchful waiting may be warranted in minor chronic posterior fossa EDH patients, but it requires expensive serial scans and a protracted hospital stay. To rule out chronic posterior fossa EDH, a CT scan should be performed on any patient who has had a head injury and is alert but exhibiting mild, persistent symptoms and/or signs. Although often mentioned, chronic posterior fossa EDH remains a rare entity in the current period of comprehensive neuroimaging.

Submission statement

This manuscript is original and has not been submitted elsewhere in part or in whole.

Consent

Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
Atomistic insights into the inhomogeneous nature of solute segregation to grain boundaries in magnesium

In magnesium alloys with multiple substitutional elements, solute segregation at grain boundaries (GBs) has a strong impact on many important material characteristics, such as GB energy and mobility, and therefore texture. Although it is well established that GB segregation is inhomogeneous, the variation of GB solute composition for random boundaries is still not understood. In the current study, atomic-scale experimental and simulation techniques were used to investigate the compositional inhomogeneity of six different GBs. Three-dimensional atom probe tomography results revealed that the GB solute concentration of Nd in Mg varies between 2 and 5 at.%. This variation was seen not only for different GB orientations but also within the GB plane. Correlated atomistic simulations suggest that the inhomogeneous segregation behavior observed experimentally stems from local atomic rearrangements within the GBs, and introduce the notion of potential excess free volume in the context of improving the prediction of per-site segregation energies.

Solutes in magnesium tend to co-segregate and form local clusters in order to minimize the lattice misfit with the matrix [13][14][15]. This raises interest in the characteristics of solute interactions and the resulting impact on the deformation and recrystallization behavior, in terms of active deformation modes and grain boundary migration. Although concrete conclusions regarding the mechanisms of texture modification are still elusive, there is a consensus in the literature that co-segregation of combined solute species inhibits growth of grains with a basal texture by decreasing the grain boundary (GB) energy and mobility [13,14,16]. This effect will vary with the type of GB and segregating solute, giving rise to a growth preference of certain orientations (e.g. ones with the basal pole split in the sheet transverse direction) [7,14,[16][17][18][19][20][21][22][23][24].

Given the complexity and experimental limitations of studying atomistic behaviors, it is prudent to utilize recent advancements in high-performance computing to investigate computationally intensive problems. Atomistic simulations are a powerful complement to high-resolution experimental techniques targeting the atomic-scale behavior of GBs. For example, molecular dynamics (MD) and molecular statics (MS) have been used to compute the distribution of GB segregation energy in face-centered cubic polycrystals [25,26]. For hexagonal close-packed metals, density functional theory calculations have also been used to study solute segregation at twin boundaries and the coincident site lattice (CSL) Σ7 GB in magnesium [27,28]. Another novel application of atomistic simulations is their combination with machine learning to investigate the segregation energetics of aluminum at <0001> symmetric tilt GBs as a function of GB structure and local atomic environment [29].

Despite previous extensive research on the effect of different solutes on recrystallization texture development, a formal understanding of inhomogeneous GB segregation and the resulting selective growth during recrystallization remains pending. In line with this issue, the present work combines advanced modeling and high-resolution characterization at the atomic scale to shed light on the effect of GB structure on solute segregation. The studied material was an extruded Mg-1.0 wt.% Mn-1.0 wt.% Nd alloy (hereafter, MN11) [12,30].
3D atom probe tomography (APT) in a Local Electrode Atom Probe 4000X HR from Cameca was employed to quantify the chemical composition of six general GBs, using laser-pulsing mode at a temperature of 30 K. Reconstruction of the evaporated tips was performed using the software package IVAS 3.8.2. The APT sample preparation was carried out by transmission Kikuchi diffraction (TKD)-assisted focused ion beam (FIB) milling in a FEI Helios 600i dual-beam electron microscope (Fig. S1). Correlative atomistic simulations to investigate the relationship between per-site segregation energy and the local site environment were performed using the open-source MD software package LAMMPS [31] in conjunction with the modified embedded atom method (MEAM) potential for Mg-Nd [32]. The atomistic configurations of general GBs were constructed using the open-source tool Atomsk [33], based on experimentally determined crystallographic orientations of the GB plane and related grains, obtained from TKD mapping and reconstructed APT tips (cf. supplementary material). The two GBs captured in the first tip had 86.5° and 59.3° misorientations, and (707̄10) and (2̄021) boundary planes, respectively. For simplification, the GBs are denoted hereafter by their misorientation angles.

Given that Nd has a larger atomic radius (206 pm) than Mg (173 pm), Nd atoms in the solid solution matrix induce compressive elastic strains and therefore tend to segregate at GBs that are rich in microstructural defects. This is depicted in Fig. 1(a) by the obvious Nd enrichment at the two boundaries. The top part of the tip revealed a pure Mn precipitate smaller than 100 nm in diameter, demonstrating evident segregation of Nd atoms at the interface with the matrix. In Fig. 1(b) the obtained concentration profiles along a direction normal to the GB plane (regions of interest, ROIs 1 and 2) reveal that the segregation level depends on the GB type. The measured Nd peak concentrations at the two boundaries were 2.9 ± 0.2 at.% (86.5° GB) and 3.7 ± 0.2 at.% (59.3° GB). A similar trend was also seen in another reconstructed tip containing a different random GB with 11.3° [101̄0] misorientation and (11̄474̄) boundary plane (Fig. S2). As seen in the atom distribution maps (Fig. S2(a)), Nd segregation at the GB was more evident than Mn segregation. The corresponding mass spectrum is shown in Fig. S2.

The segregation behavior of Nd atoms is influenced not only by the macroscopic features of GBs but also by the local structural arrangement of atoms in the GB plane. Fig. 2(a) shows a magnified view of the Nd atom map in the triple junction region (ROI 3 in Fig. 1(a)). The pronounced segregation behavior of Nd atoms is displayed using a 1 at.% Nd iso-concentration surface (Fig. 2(b)). As evidenced by the 2D concentration contour map of the same region shown in Fig. 2(c), the local segregation densities of Nd in the x-z plane of the measured tip exhibit strong variation along both GBs. The highest Nd concentration densities were observed at the triple junction and within a distance of 30 nm along the 59.3° GB. As for the 59.3° and 86.5° GBs (Fig. 1(c)), the x-z in-plane segregation at the 11.3° GB was similarly inhomogeneous, with concentration density variations between 1.5 at.% and 3.5 at.% (Fig. S2(c)). This can also be seen from the 2D Nd concentration density maps of the GB planes (x-y plane) in the three selected GBs, as shown in Fig. 2(d-f).
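For illustration only, the short Python sketch below shows one way such a one-dimensional composition profile across a GB could be computed from a reconstructed APT point cloud: atoms are binned along the GB normal and the Nd atomic fraction is taken per bin. This is a minimal stand-in under our own assumptions (the array names and the 0.5 nm bin width are ours), not the IVAS 3.8.2 workflow used in the study.

```python
import numpy as np

def concentration_profile(z, species, target="Nd", bin_width=0.5):
    """Atomic fraction (at.%) of `target` in bins along the GB normal.

    z       : (N,) array of atom positions along the GB normal, in nm
    species : (N,) array of element labels, e.g. "Mg", "Nd", "Mn"
    """
    edges = np.arange(z.min(), z.max() + bin_width, bin_width)
    idx = np.digitize(z, edges)  # bin index per atom
    fractions = []
    for b in range(1, len(edges)):
        in_bin = idx == b
        n_total = in_bin.sum()
        n_target = np.logical_and(in_bin, species == target).sum()
        fractions.append(n_target / n_total if n_total else np.nan)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, 100.0 * np.array(fractions)
```

A peak in the returned profile at the GB position would correspond to the Nd enrichment reported for ROIs 1 and 2.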
The atomistic configurations were relaxed using the conjugate gradient (with box relaxation in the z-direction) and FIRE algorithms [34,35] with a force tolerance of 10⁻⁸ eV/Å. A substitution region (80 Å × 80 Å × 20 Å) for Nd substitutions was considered in the center of the cylindrical setup, across the GB, to neglect the effect of the boundary conditions (Fig. S4). The local site environments within the substitution region represent the possible environments of the GB, as indicated in the hydrostatic stress maps in Fig. S5. By swapping one Mg atom with one Nd atom near the GBs in the substitution region, the per-site segregation energies were calculated according to

$E_{\mathrm{seg}} = \left(E_{\mathrm{bulk}}^{X} - E_{\mathrm{bulk}}\right) - \left(E_{\mathrm{GB}}^{X} - E_{\mathrm{GB}}\right)$,

where $E_{\mathrm{bulk}}$ is the energy of the Mg bulk, $E_{\mathrm{bulk}}^{X}$ the energy of the Mg bulk where one host atom is replaced by a Nd solute, $E_{\mathrm{GB}}$ the energy of the Mg system with a GB, and $E_{\mathrm{GB}}^{X}$ the energy of the Mg system with a Nd solute occupying a GB site. After each swap, an energy minimization using the FIRE algorithm was performed. The Open Visualization Tool OVITO [36] was used to visualize the atomistic configurations and calculate the atomic displacement. The Dislocation Extraction Algorithm [37] was used to characterize the misfit dislocation networks.

Fig. 3 shows the statistics of per-site segregation energy, binned according to the distance to the GB plane. Both GBs exhibit approximately symmetric distributions of segregation energy on either side of the GBs. For most GB sites at a minimum distance of 8 Å from the GB plane center, the segregation energy is close to zero. The 11.3° LAGB in Fig. 3(a) shows distinct segregation behavior compared to the 59.3° HAGB (Fig. 3(b)). The maximum value of the per-site segregation energies of each bin is higher for the 11.3° LAGB than for the 59.3° HAGB. For the 11.3° LAGB, the distributions of mean, median and third quartile segregation energies sharply increase when approaching the GB plane, and the deviation between mean and median in each bin is more significant than for the 59.3° HAGB. In contrast, the mean, median and third quartile segregation energies within 2.5 Å of the GB plane of the 59.3° HAGB stay at similar levels (cf. Fig. 3(b)).

The 11.3° LAGB in Fig. 4(a) shows more hot spots than the 59.3° HAGB (Fig. 4(b)). The concentrations of GB sites with high segregation energy in our simulations, as shown in Fig. 4(a, b), agree well with the experimental Nd-solute concentrations at GBs in the current work (Fig. 2(c, d)) and with previous experimental observations of solute clusters at HAGBs in Mg-RE solid solutions [24]. The atomistic origin of the observed inhomogeneous GB segregation is explained by correlating the segregation energy to the local structural features of the GB. As shown in Fig. 4(c, d), the distribution of the hot spots of the mean squared displacement (MSD) closely follows that of the per-site segregation energy. Previous studies have shown that the excess free volume is directly related to the GB energy [40] and GB segregation [41,42], and it was often treated as a macroscopic feature of the GB [43][44][45]. However, the MSD computed in this work is not related to the excess free volumes of these previous studies. During diffusion, the atoms of the GB reorganize themselves into an energetically favorable configuration that has sufficient free volume to host the solute. The local structural reorganization measured by the MSD acts as a generator of excess free volume.
Thus, the MSD is a measure of the potential excess free volume of a given stable GB configuration, in contrast to the effective excess free volume classically considered. In this work, the strong association between the distributions of per-site segregation energy and MSD demonstrates the impact of the potential excess free volume on per-site segregation energy. The microscopic structural features of a GB, such as GB dislocations, GB disconnections and GB triple junctions, which affect the local site environment, could thus have significant effects on the local segregation behavior. As a result, these features could favor inhomogeneous segregation within the GB, as seen in the present experiments. Such local structural rearrangement also indicates that the widely used linear elasticity model for the atomistic modelling of highly symmetric GBs, based on effective excess free volume [19,20,28], may not be applicable in the study of general GBs. As shown in Fig. S7, the per-site segregation energy deviates from the segregation energy predicted using the linear elasticity model, especially for sites with high segregation energies. In addition, there is almost no correlation of hot spots between the heat maps of the simulated and predicted segregation energy density of the general GBs (see Fig. S6). To improve the prediction of per-site segregation energies in general GBs, existing models should account for the potential excess free volume of GB sites [46,47]. In summary, the inhomogeneous segregation behavior of Nd solute atoms at several random grain boundaries was demonstrated in a deformed and subsequently annealed MN11 alloy.

TKD-EBSD assisted preparation of APT tips

Before the milling process, the orientation of grains at the sampled surface was characterized by electron backscatter diffraction (EBSD) performed in a FEI Helios 600i dual-beam scanning electron microscope/focused ion beam (SEM/FIB) with an operating voltage of 20 kV, as shown in Fig. S1(a). The specimens for EBSD measurements were prepared by conventional mechanical grinding and polishing, followed by electro-polishing in Struers AC-2 reagent at 20 V for 90 s. The targeted grain boundaries were selected based on the EBSD orientation measurements and marked by Pt deposition before they were lifted out (Fig. S1(b)). To guide the site-specific preparation process and ensure a proper position of the targeted grain boundary within the APT tips, transmission Kikuchi diffraction EBSD (TKD-EBSD) in conjunction with FIB milling at 30 kV with a current of 5.5 nA was employed (Fig. S1(c)) for thinning steps between 750 and 300 nm inner diameters. After the final low-energy milling step at 2 kV, the targeted GB was ~200 nm away from the top of the tip.

Atomistic simulations

The atomistic simulations were performed using the open-source MD software package LAMMPS [1]. The interatomic interactions were modeled by the modified embedded atom method (MEAM) potential for Mg-Nd by Kim et al. [2]. The material properties of the MEAM potential were benchmarked, particularly the per-site segregation energies at the selected grain boundaries. The results are in good agreement with the experimental and ab-initio data (as presented in Table S1) [3][4][5][6]. In addition, we calculated the per-site segregation energies.
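To make the energy bookkeeping behind the per-site segregation calculation explicit, the following minimal Python sketch mirrors the equation given earlier. It assumes the four total energies come from separate LAMMPS minimizations (pure bulk, bulk with one Nd, pure GB cell, GB cell with Nd at site i); the function names and the schematic driver loop are our own illustration, not the authors' scripts.

```python
def per_site_segregation_energy(e_bulk, e_bulk_x, e_gb, e_gb_x):
    """E_seg = (E_bulk^X - E_bulk) - (E_GB^X - E_GB).

    Positive values indicate that the Nd solute is more stable at the
    GB site than in the bulk (favorable segregation).
    """
    return (e_bulk_x - e_bulk) - (e_gb_x - e_gb)

# Schematic driver: in practice each e_gb_x would come from swapping one
# Mg atom for Nd at site i inside the substitution region and then
# re-minimizing the configuration (e.g., with FIRE) before reading the
# total energy back out of LAMMPS.
def site_energies(sites, e_bulk, e_bulk_x, e_gb, minimize_with_solute_at):
    return {
        i: per_site_segregation_energy(
            e_bulk, e_bulk_x, e_gb, minimize_with_solute_at(i)
        )
        for i in sites
    }
```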
Since we focus on per-site segregation energies and local GB structures instead of global GB properties, only one deletion distance was chosen in this work. A reasonable distribution of local site environments, which samples the space of possible environments similar to the minimum energy GB is expected [9]. A schematic of the cylindrical setup is shown in Fig. S4. The top and bottom layers of the cylinder with a thickness of 1.2 nm (2 times interatomic potential cutoff) were fixed in the z-direction. The outermost layers of the cylindrical surface with a thickness of 1.2 nm were fixed in x and y directions. Periodic boundary conditions were applied in the z-direction and a vacuum layer with a thickness of 4.8 nm was imposed between the periodic images. where ( ) ( ) is the coordinate of th atom in a Mg system with a GB, ( ) ( ( )) is the coordinate of th atom in the Mg system with a Nd solute at the GB. The predicted segregation energy was calculated using the linear elastic model [6]: where ∆ is the difference between Voronoi volume of the th GB site and the bulk ( bulk / ), is the host bulk modulus, bulk X is the volume of the bulk with the solute.
A bespoke data linkage of an IVF clinical quality registry to population health datasets; methods and performance

Abstract

Introduction

Assisted reproductive technologies (ART), such as in-vitro fertilisation (IVF), have revolutionised the treatment of infertility, with an estimated 8 million babies born worldwide. However, the long-term health outcomes for women and their offspring remain an area of concern. Linking IVF treatment data to long-term health data is the most efficient method for assessing such outcomes.

Objectives

To describe the creation and performance of a bespoke population-based data linkage of an ART clinical quality registry to state-based and national administrative datasets.

Methods

The linked dataset was created by deterministically and probabilistically linking the Australia and New Zealand Assisted Reproduction Database (ANZARD) to New South Wales (NSW) and Australian Capital Territory (ACT) administrative datasets (performed by the NSW Centre for Health Record Linkage (CHeReL)) and to national claims datasets (performed by the Australian Institute of Health and Welfare (AIHW)). The CHeReL's Master Linkage Key (MLK) was used as a bridge between ANZARD's partially identifiable patient data (statistical linkage key) and NSW and ACT administrative datasets. The CHeReL then provided personal identifiers to the AIHW to obtain national content data. The results of the linkage were reported, and concordance between births recorded in ANZARD and perinatal data collections (PDCs) was evaluated.

Results

Of the 62,833 women who had ART treatment in NSW or ACT, 60,419 could be linked to the CHeReL MLK (linkage rate: 96.2%). A reconciliation of ANZARD-recorded births among NSW residents found that 94.2% (95% CI: 93.9-94.4%) of births were also recorded in state/territory-based PDCs. A high concordance was found in plurality status and birth outcome (≥99% agreement rate; Cohen's kappa range: 0.78-0.98) between ANZARD and the PDCs.

Conclusion

The data linkage resource demonstrates that high linkage rates can be achieved with partially identifiable data and that a population spine, such as the CHeReL's MLK, can be successfully used as a bridge between clinical registries and administrative datasets.

Introduction

Infertility affects one in six couples [1], resulting in significant personal suffering and representing an important and increasingly prevalent public health problem [2,3]. Fortunately, there are a number of Medically Assisted Reproduction (MAR) treatments that allow many infertile individuals to achieve parenthood. The most advanced of these are assisted reproductive technologies (ART), such as in vitro fertilisation (IVF), which involve the fertilisation of human eggs outside of the body before transferring the resulting embryos into the uterus in the hope of achieving a pregnancy. ART represents one of the most significant medical and social achievements of the past century, leading to the birth of an estimated 8 million babies over the last four decades [4]. Non-ART treatment using ovulation induction (OI), with or without intrauterine insemination (IUI), is a more traditional form of MAR treatment in which fertilisation occurs within the woman's reproductive tract, but it is still widely used as part of evidence-based management [5]. The increasing demand for MAR treatment, both ART and non-ART, reflects the social trend to delayed childbearing, changes in family structures, rising levels of sexually transmitted disease, obesity, and declining sperm quality [6][7][8][9][10].
Australia has one of the highest rates of ART utilisation per capita in the world [11]. Over the last two decades, Australia has experienced a 192% increase in ART utilisation, and in 2018, around 4.9% of Australian children were conceived using ART [2,[11][12][13]. However, little is known about the number of children conceived through OI/IUI (non-ART) treatment or by spontaneous conception among women with a history of subfertility.

While evidence regarding the health outcomes of ART-conceived children is generally reassuring, several studies have suggested a higher risk of poorer perinatal outcomes and longer-term metabolic risks [14][15][16][17][18]. This is primarily because they are at a greater risk of being born as part of multiple-gestation pregnancies (e.g. twins and triplets), but even singletons are at a marginally higher risk of low birth weight, small for gestational age, congenital anomalies and perinatal death (stillbirth and neonatal death), as well as maternal morbidity, compared to spontaneously conceived children [14][15][16]18]. The reasons for these increased risks to mothers and babies are not well understood. Interestingly, it appears that couples who experience subfertility but achieve a spontaneous conception also have similar adverse risk profiles [14,19,20]. Furthermore, there is a lack of evidence on the health outcomes of children conceived using OI/IUI (non-ART) treatment [21]. The lack of clarity on the potential risks of MAR treatments (ART and non-ART) and the possible confounding role of subfertility is an enduring evidence gap when advising patients, clinicians and policymakers on the use of MAR treatments.

To address this gap, the National Perinatal Epidemiology and Statistics Unit (NPESU) of the University of New South Wales created a MAR data linkage by linking a regional ART treatment registry (the Australian and New Zealand Assisted Reproduction Database, ANZARD) to a number of other jurisdiction-based and national administrative databases. The resulting dataset contains longitudinal health records for women who have either undergone MAR (ART and non-ART) or conceived spontaneously, and for their resulting children. The overarching objective of establishing the MAR data linkage resource was to quantify the risk of adverse health outcomes in children conceived from ART and non-ART treatments after accounting for confounders, in particular underlying subfertility, and to assess whether specific forms of ART contribute differently to these outcomes.

Central to the MAR data linkage resource is ANZARD, which is the oldest national ART registry in the world, incorporating all accredited fertility clinics operating in Australia and New Zealand (currently over 90 clinics) and providing demographic, treatment, laboratory and outcome data on all ART cycles and donor insemination (DI) cycles (currently over 80,000 cycles per year) [2]. The submission of data to ANZARD is a requirement of a clinic's accreditation to practice, and thus complete ascertainment of ART cycles is assumed [22]. ANZARD does not currently collect data from non-ART treatments such as OI and IUI. This paper describes the data linkage methodology and results for linking ANZARD to the state/territory and national data sources, and describes the concordance between the births recorded in ANZARD and those in the state perinatal data collections (PDCs).
Methods

The MAR data linkage

New South Wales (NSW) and the Australian Capital Territory (ACT) are two of the eight Australian states and territories; their combined population of approximately 8 million residents accounts for one-third of the total Australian population [23]. ANZARD was linked to NSW and ACT perinatal, births, deaths, hospital admissions and congenital anomaly routinely collected databases, as well as to national medical and pharmaceutical claims databases. The linkage was possible because, since 2009, ANZARD has collected the first two letters of female patients' first and last names. These personal identifiers were combined with the female patients' date of birth (DOB), residential postcode, and their partners' DOB to form a Statistical Linkage Key (SLK). Combinations of the components of the SLK were the foundation for linkage with the administrative datasets [2]. The NSW Ministry of Health's Centre for Health Record Linkage (CHeReL) and the Australian Institute of Health and Welfare (AIHW) undertook the required linkages before transferring the data into a secure research environment for cleaning and analysis by the researchers. Figure 1 summarises the data linkage process of ANZARD to five NSW and ACT administrative datasets and two Commonwealth datasets.

Notes to Figure 1: COD URF = Cause of Death Unit Record File. (1) ANZARD's patient IDs were used to link back to the ANZARD content data in stage 3 by the ANZARD manager. Note that ANZARD includes all ART and DI cycles performed by all fertility clinics operating in Australia and New Zealand (currently over 90 clinics). (2) The Project Person Number (PPN) is a unique person ID for each individual in the linked data. It varies from project to project to prevent linking individual-level records across different projects, as required to ensure privacy and confidentiality in Australia. (3) The 606,658 mothers' identifiers included duplicates for mothers who gave birth in both NSW and ACT; after removing these duplicates from the NSW and ACT PDC data, 606,549 mothers remain in Figure 2.

A key strategy to enable the linkage of ANZARD's partially identifiable data (components of the SLK) to the administrative datasets was the ability of the CHeReL to use its Master Linkage Key (MLK) as a bridge between ANZARD and the NSW and ACT Perinatal Data Collections (PDCs), to identify births to women who had conceived using ART and those who had conceived spontaneously (without ART). The MLK is constructed by the CHeReL using probabilistic record linkage methods and ChoiceMaker software, following a best-practice approach to privacy-preserving record linkage. The MLK comprises over 188 million records containing personal and demographic information, but no health information, on over 15 million people in NSW and ACT from a range of population-based health and health-related data collections [24]. The CHeReL uses the following personal information to link records for the same person to create the MLK: full name, address, sex, DOB, and country of birth; it also uses relevant event information such as hospital code, medical record number, event dates (e.g., hospital dates of admission and discharge), hospital transferred to, hospital transferred from, and date of death. The entire linked NSW and ACT administrative data has less than 5/1000 missed links and 3/1000 false positive links [25]. In addition to person links, the MLK contains a family structure, by virtue of data sources that contain details of a child and up to two parents.
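Before detailing the linkage stages, the following Python sketch illustrates the kind of SLK described above, built from the first two letters of the first and last names, the patient's and partner's DOBs, and the residential postcode. The field order and delimiter are illustrative assumptions, not the exact ANZARD specification.

```python
from datetime import date

def make_slk(first_name, last_name, dob, partner_dob, postcode):
    """Assemble a simple statistical linkage key from partial identifiers."""
    return "|".join([
        first_name[:2].upper(),                              # first 2 letters of first name
        last_name[:2].upper(),                               # first 2 letters of last name
        dob.strftime("%Y%m%d"),                              # patient's DOB
        partner_dob.strftime("%Y%m%d") if partner_dob else "",  # partner's DOB, if known
        str(postcode),                                       # residential postcode
    ])

# Invented example record:
print(make_slk("Jane", "Citizen", date(1985, 3, 14), date(1983, 7, 2), 2031))
# -> JA|CI|19850314|19830702|2031
```

Because such a key carries no full name or address, matching it against a population spine typically requires the kind of progressively relaxed, multi-pass matching described in the stages below.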
The data linkage between ANZARD and the NSW/ACT administrative data and the Commonwealth data involved three stages, which enabled the construction of the MAR data linkage while abiding by the principles of data separation to protect patient privacy.

Stage 1: Data linkage between ANZARD and NSW and ACT administrative data

The ANZARD Data Manager (who is independent from the research team) transferred to the CHeReL a cycle ID (an anonymous unique cycle identifier) and a patient ID (an anonymous unique patient identifier), together with the SLK components associated with the 638,036 ART and DI cycles (with 195,490 SLKs) performed in Australia between 1 January 2009 and 31 December 2016. The CHeReL then deterministically linked ANZARD personal SLK identifiers (SLK person IDs) to the MLK identifiers (MLK person IDs) for females.

Several strategies were adopted to improve the data linkage rate between the ANZARD identifiers and the MLK. Because linkage using the SLK may have low sensitivity depending on data quality, the deterministic linkage on person characteristics was combined with event-based linkage using ANZARD and hospital procedure dates for selected procedure codes relating to oocyte retrieval. An ART cycle mostly involves outpatient services; however, almost all egg retrieval procedures are undertaken under sedation as part of an inpatient admission, which is recorded in the hospital admission data collections. Thus, procedure codes related to egg retrieval procedures were used for the event-based linkage to supplement the linkage based on personal identifiers. Matches were initially performed on all personal identifiers and event information; then, restrictions were progressively relaxed to allow a higher rate of matching to the SLK. Where multiple MLK person IDs matched to a single ANZARD SLK person ID, clerical review was performed. Details of the results of each pass in the linkage process between the ANZARD SLK and MLK person IDs, and the corresponding linkage rates, are shown in the results section below.

The MAR data linkage is a birth cohort for all births in NSW/ACT, comprising births that were conceived through ART treatment, non-ART treatment, or spontaneous conception. All MLK person IDs were included where linked PDC records indicated that a woman had given birth in NSW between 1 January 2009 and 31 December 2017, or in the ACT between 1 January 2009 and 31 December 2016. These PDC records (both those with a link to an ANZARD SLK and those without) were then linked to individual-level content data from the various NSW and ACT administrative databases by the CHeReL and ACT Health (see more details of the administrative databases included in Table S1, Supplementary Appendix). Once the linkage rate and accuracy were considered to be maximised based on available identifiers, the CHeReL created a Project Person Number (PPN) for each woman. The PPNs were later used by the research team to merge all datasets. The CHeReL loaded the PPNs and the content data from the NSW and ACT administrative databases into the Sax Institute's Secure Unified Research Environment (SURE) [26]. SURE is a central, secure, online remote-access computing environment for analysing sensitive human research data.

Stage 2: Data linkage between the MLK and Commonwealth data

The AIHW Data Integration Unit undertook a probabilistic linkage between the MLK and the personal identifiers from the Medicare Enrolment File (MEF) of 32,378,696 individuals registered to Australia's national health care scheme, Medicare.
The MEF linkage procedure involved creating record pairs between MLK records and the MEF's personal identifiers based on a combination of seven personal identifiers: surname; given name; sex; day, month, and year of birth; day, month, and year of death (when applicable); residential postcode; and the upper case of the first six characters of the address after removing punctuation and words such as unit, flat, PO box, etc. A total of 18 passes were undertaken to create the final linked dataset. Following the completion of the probabilistic linkage, a sample-based clerical review, comprising 32 batches each containing between 78,437 and 1,250,122 records, was performed to determine the linkage status for record pairs with similar linkage weights. Once all linkages were maximised, the AIHW retrieved the requested individual-level content data from the Medicare Benefits Schedule (fertility-related services) and the Pharmaceutical Benefits Scheme (fertility medicines and other medicines), and uploaded the PPNs and the content data from the Medicare Benefits Schedule and Pharmaceutical Benefits Scheme into SURE.

Stage 3: Retrieval of ANZARD treatment and outcome data

The CHeReL sent the PPNs and both the linked and unlinked ANZARD unique patient IDs back to the study-independent ANZARD Data Manager. The ANZARD Data Manager removed all personal identifiers from the ANZARD content data and attached the PPNs and ANZARD's unique patient IDs to the ANZARD content data. The ANZARD Data Manager loaded the ANZARD content data (with PPNs and ANZARD's unique patient IDs) for all ANZARD treatments performed during the study period into SURE, ready for the researchers to merge all data collections using the common PPN. Of the women identified as giving birth in NSW or the ACT, 37,443 (6.2%) had at least one ANZARD ART treatment cycle record. ANZARD was only linked to state and Commonwealth administrative datasets where a woman was identified as giving birth in NSW or ACT. Unlinked ANZARD records containing treatment information for women who had undergone ART and DI treatment and who had not given birth were also uploaded to SURE.

Table 1. Birth windows used to match ANZARD births to PDC births, extrapolated from embryo transfer or DI dates.

Cleavage (3-day-old embryo):
  Lower limit: embryo transfer date - 3 days - 14 days + 7 x gestational weeks - grace period (2)
  Upper limit: embryo transfer date - 3 days - 14 days + 7 x gestational weeks + grace period (2)
Blastocyst (5-day-old embryo):
  Lower limit: embryo transfer date - 5 days - 14 days + 7 x gestational weeks - grace period (2)
  Upper limit: embryo transfer date - 5 days - 14 days + 7 x gestational weeks + grace period (2)
Both cleavage and blastocysts:
  Lower limit: embryo transfer date - 5 days - 14 days + 7 x gestational weeks - grace period (2)
  Upper limit: embryo transfer date - 3 days - 14 days + 7 x gestational weeks + grace period (2)
Donor insemination (DI):
  Lower limit: DI date - 14 days + 7 x gestational weeks - grace period (2)
  Upper limit: DI date - 14 days + 7 x gestational weeks + grace period (2)

PDC = Perinatal Data Collection; ANZARD = Australia and New Zealand Assisted Reproduction Database; DI = donor insemination; DOB = date of birth. (1) The grace period was assumed to be a 16-day period. (2) The grace period was assumed to be a 10-day period.
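The birth windows in Table 1 amount to subtracting the embryo age plus 14 days from the transfer (or DI) date, adding the gestational age, and padding by the grace period. A minimal Python sketch of this arithmetic is shown below; it uses the 10-day grace period from the table notes and an invented example date, and it is our illustration rather than the study's matching code.

```python
from datetime import date, timedelta

def birth_window(start_date, embryo_age_days, gestational_weeks, grace_days=10):
    """Expected-DOB window per Table 1.

    start_date      : embryo transfer date (or DI date with embryo_age_days=0)
    embryo_age_days : 3 for cleavage, 5 for blastocyst, 0 for DI
    """
    expected = (start_date
                - timedelta(days=embryo_age_days + 14)
                + timedelta(weeks=gestational_weeks))
    pad = timedelta(days=grace_days)
    return expected - pad, expected + pad

# Invented example: cleavage-stage transfer on 1 June 2015, 39 weeks gestation.
lo, hi = birth_window(date(2015, 6, 1), embryo_age_days=3, gestational_weeks=39)
print(lo, hi)  # a PDC month/year of birth falling in this window is a match
```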
Final MAR data linkage

The MAR data linkage resource includes a wide range of data sources. For mothers, information is available on their use of fertility medicines and other medicines, their use of fertility-related services, hospital admissions, history of health conditions, socio-demographics, Aboriginal and/or Torres Strait Islander status, and pregnancy-related risk factors such as hypertension and gestational diabetes.

Agreement between births in ANZARD and the PDC

A primary research question to be addressed by the MAR data linkage is whether health outcomes differ between ART-conceived and non-ART-conceived children (from other fertility treatment or spontaneous conception). Therefore, an assessment was undertaken of the concordance between ANZARD-recorded births and those recorded in the PDCs, to distinguish births to women who had ART treatment from those who may have conceived using non-ART treatment or spontaneously. This agreement analysis was only conducted for births resulting from ART or DI treatment to NSW residents who birthed in NSW or the ACT, because the ACT PDC data only covers births delivered in ACT public hospitals. Births to women residing in the ACT who gave birth in a private hospital are therefore missing from the ACT PDC, estimated to be about 20-25% of ACT births [12]. The PDCs encompass all live births and stillbirths of at least 20 weeks gestation or ≥400 grams birth weight. The recording of birth and pregnancy information in ANZARD relies on ART clinic staff following up with women after their ART treatment, while the PDC relies on the attending midwife or medical practitioner completing a record of birth and pregnancy information. We relied on the baby's DOB, gestational age, embryo transfer or DI date, and the age of the embryo at transfer to match each ANZARD birth to a PDC birth of the corresponding mother. Where the baby's DOB in ANZARD did not exactly match a PDC-recorded month and year of birth, we progressively relaxed the matching requirements, allowing a grace period of 16 days (see criterion 1 in Table 1). For the remaining unmatched ANZARD births, we then used the embryo transfer or DI date (from ANZARD data) to extrapolate a birth window around the expected DOB to match to the PDC's DOB (see criterion 2 in Table 1). We progressively relaxed the birth window to account for uncertainty in the embryo transfer or DI date and the ANZARD-recorded DOB, using a grace period of 10 days (Table 1). This was necessary because only the babies' month and year of birth were provided by the PDC. The data concordance rate was calculated as the total number of births in agreement between ANZARD and PDC data, divided by the total number of ANZARD treatment records from NSW residents with an ANZARD birth recorded or without an ANZARD birth recorded due to loss of follow-up. Fact of birth plus key birth outcomes (live birth or stillbirth) and plurality (singleton or multiple births) were chosen to evaluate concordance because these are recorded in both ANZARD and the PDCs. To examine the impact of these grace periods (criterion 1: ANZARD-recorded DOB; criterion 2: embryo transfer or DI date) on the concordance rate, we performed several sensitivity analyses, varying the grace period of criterion 1 from 16 days to 5 or 31 days, and the grace period of criterion 2 from 10 days to 5 or 15 days.
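A compact sketch of the two-pass matching and the concordance rate described above is shown below; record layouts and the month-level comparison are illustrative assumptions (the PDC supplies only the month and year of birth).

from datetime import date, timedelta

def matches_criterion1(anzard_dob, pdc_year, pdc_month, grace_days=16):
    # Compare the exact ANZARD DOB against the PDC's month of birth,
    # widened by the criterion 1 grace period
    month_start = date(pdc_year, pdc_month, 1)
    month_end = (date(pdc_year + pdc_month // 12, pdc_month % 12 + 1, 1)
                 - timedelta(days=1))
    return (month_start - timedelta(days=grace_days)
            <= anzard_dob
            <= month_end + timedelta(days=grace_days))

def concordance_rate(n_matched, n_births_recorded, n_lost_to_followup):
    # Matched births over all ANZARD treatment records with a recorded birth
    # or an unknown outcome due to loss to follow-up
    return n_matched / (n_births_recorded + n_lost_to_followup)

Records failing criterion 1 would then be retried against the Table 1 birth window (criterion 2), with the sensitivity analyses simply re-running both passes under different grace periods.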
Concordance of plurality status and birth outcomes between ANZARD and the PDCs

For births in agreement between the PDC and ANZARD birth records for NSW residents, we also examined agreement in the plurality (i.e., singleton, twins, and triplets) and birth outcome (live birth or perinatal death) fields. We relied on the birth registry's plurality status, and the death registry's perinatal death information, where the information in the PDC differed from that of the birth or death registries. We used both the PDC and the Registry of Births, Deaths and Marriages (RBDM) (the birth and death registries), a statutory registry in NSW and the ACT, as a gold standard reference when estimating sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Agreement rate, sensitivity, specificity, PPV, NPV, area under the receiver operating characteristic curve (AUC), and Cohen's kappa statistics [27,28] were reported (see also Figure S1, Supplementary Appendix).
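All of the reported agreement statistics derive from simple 2x2 cross-tabulations against the reference; a minimal sketch follows, using standard textbook formulas with cell labels as illustrative assumptions.

def agreement_stats(tp, fp, fn, tn):
    # tp/fp/fn/tn: cells of the ANZARD vs PDC/RBDM reference cross-tabulation
    n = tp + fp + fn + tn
    observed = (tp + tn) / n
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "agreement": observed,
        "kappa": (observed - expected) / (1 - expected),  # Cohen's kappa
    }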
Results

Our sensitivity analysis, in which we used different combinations of grace periods, showed consistent results, with a concordance rate between ART/DI treatment cycles and NSW and ACT PDC records of 94.2% over the study period (Table S2, Supplementary Appendix).

Concordance of plurality status and birth outcomes between ANZARD and PDC data for the final MAR data linkage

Of the 27,248 ANZARD births (to NSW and ACT residents) linked to a NSW/ACT PDC birth, there was 99.7% (27,170/27,248; 95% CI: 99.6-99.8%) agreement in plurality recording between ANZARD and PDC data, with a Cohen's kappa of 0.977 (95% CI: 0.971-0.982) (Table S3, Supplementary Appendix). There was also a high degree of agreement (≥99%; Cohen's kappa range: 0.78-0.90) for live birth and perinatal death status between ANZARD and PDC records: for the 25,758 singleton births, PPV ranged from 87.0% to 99.9% and NPV from 87.0% to 99.9%; for the 1,412 plural births, PPV ranged from 87.0% to 99.3% and NPV from 88.9% to 99.2% (Table 3).

Discussion

This paper describes the creation of a bespoke linked dataset (the MAR data linkage) of a clinical quality registry (ANZARD) with state/territory and national administrative datasets. Despite the limited personal identifiers present in the registry, of the 62,833 women who had ART treatment in NSW or the ACT, 60,419 could be linked to the CHeReL MLK population spine, representing a linkage rate of 96.2%. This means that only 3.8% of women who had ART treatment in NSW/ACT could not be linked to the NSW/ACT MLK created by the CHeReL. This linkage rate is similar to that of other studies that have linked clinical registries using the limited identifiers available in an SLK [29][30][31]. A reconciliation of the ART/DI cycles performed for women who resided in NSW and who were recorded as having a birth in ANZARD found that 94.2% of the births were recorded in the NSW and ACT PDCs. Possible reasons for the 5.8% of ANZARD births missing from the PDC include women who resided in NSW for ART/DI treatment but birthed in a private ACT hospital, in another Australian state/territory, or overseas; missing links between the SLK and MLK (3.8% linkage error); or births being erroneously recorded in ANZARD. An evaluation of the small percentage of NSW or ACT women recorded in ANZARD who could not be linked to the MLK population spine was not conducted as part of this study and could be a small source of linkage bias. However, because of Australia's universal health system, it is unlikely that there would be a systematic bias in the linkage between ANZARD and the MLK based on demographics, because the MLK contains 210 million records from 17 data collections, with an average of 15 links per person [32]. A high concordance was found in plurality status (>99% agreement rate; Cohen's kappa: 0.977, 95% CI: 0.971-0.982) and birth outcome (≥99% agreement rate; Cohen's kappa range: 0.78-0.90) between ANZARD and PDC birth records, confirming the validity of the linkage. The high degree of concordance between births recorded in ANZARD and those recorded in jurisdictional perinatal data collections provides reassurance that fertility clinics in Australia are accurately recording the outcomes of ART treatment undertaken in their clinics, and that clinics are not artificially inflating their success rates in NSW/ACT. The accreditation of each fertility clinic in Australia is managed by the industry's Fertility Society of Australia and New Zealand under its Reproductive Technology Accreditation Committee's voluntary Code of Practice, under which clinics must submit their treatment and outcomes data to ANZARD [33]. The results of this concordance study and the high linkage rate reflect positively on the Fertility Society of Australia and New Zealand model of industry regulation being connected with clinical registry management. Our linkage rate (96.2% at the woman level, 94.2% at the birth level) was higher than that achieved by the States Monitoring Assisted Reproductive Technology collaboration's linkage of U.S. state-based and national ART data to vital birth registrations (80-90.2%) [34][35][36] and the linkage to pregnancy data (89.7%) in the Massachusetts Outcomes Study of Assisted Reproductive Technology [37]. The Committee of Nordic Assisted Reproductive Technology and Safety was able to achieve a very high linkage between IVF registries and birth registries because of the existence of national personal identifiers [38]. Most of the earlier U.S. linkages conducted by the States Monitoring Assisted Reproductive Technology collaboration were cycle-based and adopted a deterministic or probabilistic linkage strategy to link treatment cycles to vital records based on less specific maternal or infant variables (e.g., DOB of mothers and infants, mother's postcode, or plurality) due to a lack of mothers' or infants' identifiers [34][35][36]. The results of the latest U.S. linkage by the Massachusetts Outcomes Study of Assisted Reproductive Technology, which included maternal or parental identifiers (i.e., mother's first and last name, and father's last name) in the linkage strategy, are the most comparable to those of the MAR data linkage (89.7%).

Notes to the accompanying table: (1) We identified NSW residents based on residential postcode in ANZARD data. (2) We included the ACT PDC data when matching the ANZARD births to PDC births, to account for cross-state deliveries between NSW and the ACT. (3) There were 2,323 mothers who gave birth in both NSW and the ACT. The total number of unique mothers who gave birth in NSW or the ACT was 606,549, which differs from the 606,658 PPNs that the CHeReL sent to the AIHW (Figure 1). The numbers of births and babies include raw records (pre-cleaning) received from the CHeReL and ACT Health.

The MAR linkage contains up to 10.25 years of follow-up, allowing the assessment of the short-term and long-term health risks for women and ART-conceived children.
Additionally, the prognostic value of the type of ART treatment performed (e.g., use of fresh or frozen embryos, sperm injection, extended embryo culture) can be assessed. Furthermore, the health of children born from non-ART treatments will be evaluated using national medicines and medical services claims data to identify children conceived using ovulation induction and ovarian stimulation. Moreover, because of the longitudinal nature of the datasets, women with a history of subfertility but who subsequently conceived naturally can be identified, allowing the role of subfertility in health outcomes to be assessed, a confounder that is often elusive in studies of ART-conceived children. Finally, sibship studies will also be possible because children born to the same mothers (including siblings from plural births or from singleton births since 1994) can be identified.

Notes to Table 3: CI = confidence intervals; ANZARD = Australian and New Zealand Assisted Reproduction Database; PDC = Perinatal Data Collection; PPV = positive predictive value; NPV = negative predictive value; AUC = area under the receiver operating characteristic curve. (1) NSW residents who have undergone ART or DI treatment with a known birth outcome, or an unknown birth outcome due to loss to follow-up, and an agreement of birth recorded in the PDC. (2) For births with an agreement in plurality status between ANZARD and PDC data. (3) This agreement analysis was only conducted for births resulting from ART treatment to NSW residents who birthed in NSW or the ACT; the ACT PDC data only covers births delivered in ACT public hospitals, so births to women residing in the ACT who gave birth in a private hospital are missing from the ACT PDC, estimated to be about 20-25% of ACT births (Australian Institute of Health and Welfare, 2018). (4) The 95% confidence intervals were constructed by the bias-corrected bootstrap method with 2,000 replicates (Efron, 1987).

Conclusions

The MAR data linkage demonstrates that very high linkage rates can be achieved with partially identifiable data, and that a population spine such as the CHeReL's MLK can be successfully used as a bridge between clinical registries and administrative datasets. The high concordance between births recorded in ANZARD and the perinatal data collections provides reassurance about the accuracy of ART treatment outcomes recorded in ANZARD. The MAR data linkage will provide invaluable information on the safety and effectiveness of ART and non-ART treatment, and on the possible effect of subfertility, when advising patients, clinicians, and policymakers on fertility treatments in Australia and beyond.

Supplementary table descriptions: The PDC encompasses all live births and stillbirths of at least 20 weeks gestation or at least 400 grams birth weight. For each birth, the attending midwife or medical practitioner completes a form (or its electronic equivalent), giving socio-demographic, pregnancy-related risk, medical and obstetric information on the mother, and information on the labour, delivery, condition, and birth outcome of the infant (NSW + ACT; mothers and babies). The MBS encompasses all clinically relevant medical services subsidised by the Australian Government: item codes for fertility-related procedures, service dates, fees charged, benefits paid, etc. NSW = New South Wales; ACT = Australian Capital Territory.
Notes to the supplementary tables: NSW = New South Wales; ACT = Australian Capital Territory. (1) All ACT linked data only cover births delivered in ACT public hospitals. (2) We are currently requesting data on siblings born before 1st January 2009 to add to the MAR data linkage. (3) NSW residents who have undergone ART treatment with a known birth outcome, or an unknown birth outcome due to loss to follow-up, and for whom we successfully matched the ANZARD birth to a birth in the PDC data; this analysis is only feasible for NSW residents.
2021-09-22T05:16:21.886Z
2021-09-13T00:00:00.000
{ "year": 2021, "sha1": "82581c1077ce54b71a8a2206ec0b10975831cb86", "oa_license": "CCBYNCND", "oa_url": "https://ijpds.org/article/download/1679/3241", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "82581c1077ce54b71a8a2206ec0b10975831cb86", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
243895396
pes2o/s2orc
v3-fos-license
A Comparative Study on the Differences Between Chinese and American Business Etiquette from a Cross-Cultural Perspective

Good manners not only embellish the company image but also play a major role in generating profit. Nowadays it is necessary to apply etiquette to business activities. International trade exchanges have become frequent, and people pay more and more attention to good business etiquette. As an important link in international business communication, business etiquette plays a crucial role in the smooth development of international business activities. This essay adopts a cross-cultural perspective to study greeting etiquette, dress etiquette, the concept of time, business card etiquette, and related topics. By comparing and analysing business etiquette in China and the United States, the business cultures of the two countries can be fully understood, thereby reducing unnecessary friction caused by cultural differences and allowing cooperation and exchange to go smoothly.

INTRODUCTION

Succeeding in business requires not only mastery of one's job but also mastery of consideration for others. International exchanges have become frequent, and business etiquette plays a crucial role in the smooth progress of international business activities. Good manners are cost-effective: they increase the quality of life in the workplace, contribute to optimum employee morale, and embellish the company image. Since China is a large trading country, international trade is growing, and international business communication inevitably runs through the whole process of business activities. Business etiquette is a critical area in which nonverbal communication is applied.

As a representative country of the West, the United States has a great deal of etiquette to study and learn from. At the same time, as major powers, China and the United States have frequent exchanges in both politics and economics. In contacts with the United States and other countries, learning more about customs, culture, and etiquette makes communication smoother, and the economy can also grow rapidly: understanding the culture helps promote the economy.

There are similarities and differences in business etiquette between China and the United States. Chinese business etiquette can be analysed from the following aspects. First of all, the Chinese mindset and way of thinking are inseparable from a unique culture spanning thousands of years. The traditional Chinese mindset is friendly and humble, and Chinese people are very particular about language expression, which tends to be indirect; this is fully reflected in business contacts. Second, time arrangements are very important: arriving at the appointed place in advance, for instance, shows respect for the other party. Most Americans, compared with the Chinese, arrive on time when invited for dinner, and no more than 10 minutes late when invited to a small gathering; for a large party, arriving up to 30 minutes late is acceptable. The handshake is the common American greeting: handshakes are firm, brief, and confident, and eye contact is maintained during the greeting. In formal circumstances, titles and surnames are used as a courtesy until one is invited to move to a first-name basis. Between men and women, the woman reaches out her hand first; if the woman does not want to shake hands, the man should only nod and bow.
Cross-culture refers to culture that crosses the boundaries between different countries and ethnic groups. Consequently, it is necessary to study cross-cultural business etiquette. Business etiquette plays an important role in business activities, and more and more business people have realized the importance of understanding different business etiquettes. However, many business people remain unclear about business etiquette. Therefore, the study of the differences in business etiquette between China and the United States is of practical significance.

DEFINITION OF CROSS-CULTURAL AND BUSINESS ETIQUETTE

Different regions have different cultures and different business etiquette. What is cross-cultural? What is business etiquette? Here are the definitions of cross-culture and business etiquette.

Definition of Cross-culture

Nowadays, culture is often seen as a noun with multiple meanings: the sum of the material and spiritual wealth created by mankind. In the West, the father of anthropology, the British anthropologist E. B. Tylor, wrote in the "science of culture" chapter of his Primitive Culture: "Culture or civilization, taken in its wide ethnographic sense, is that complex whole which includes knowledge, belief, art, morals, law, custom, and any other capabilities and habits acquired by man as a member of society." Culture is both national and universal. Cross-culture, in fact, is culture that crosses the boundaries between different countries and ethnic groups; it encompasses the cultural differences among different ethnic groups, collectives, and countries. Individuals have particular cultural identities. Cultural identity refers to the belonging and connection of human groups or individuals to a unique culture, and it carries a particular orientation toward cultural values. The so-called "cross-cultural" thus refers to culture that crosses the boundaries between different countries and different nationalities.

For example, a school leader introduced a new American teacher to the others: "Ladies and gentlemen, I'm expected to introduce to you a very pretty gal, Miss Brown. She is a very good teacher, and she came from the USA." But the teachers looked uneasy. Chinese people like to introduce their guests with complimentary words, but Americans believe that when you meet someone for the first time, you do not have to evaluate them; any subjective comment, even a favourable one, will give people a brusque and imposing impression. For the remarks above, on that occasion, the introduction should have highlighted identity, education, position, and so on, rather than physical appearance and abstract comments. By contrast, replacing "pretty" and "good" with a tangible educational background and experience would have been a fairly objective statement. Therefore, it is necessary to communicate on the basis of understanding each other's culture, to develop empathy, and to eliminate cultural centrism. The reason why people with different cultural affiliations find it hard to communicate often lies in different understandings of specific cultural phenomena. Only a real, fair, and comprehensive understanding of a heterogeneous culture can eliminate the various cultural barriers in the process of cross-cultural communication. On the basis of mastering the culture of one's own nation, one should learn as much as possible about the cultures of other nations. When Chinese people interact with other nations, they can maintain their own national characteristics while respecting the national cultures of the world.
Definition of Business Etiquette

Etiquette is a very important means of social intercourse. Paying attention to etiquette can improve personal cultivation and promote the success of business activities. Business etiquette is the code of conduct that business workers should observe in their posts. The interpretation of this definition can be understood from two aspects. The first is the scope of business etiquette: as the name implies, business etiquette refers to normative behavior in business activities. Therefore, business etiquette applies only to trade activities, and it is not suitable to use business etiquette to regulate individual behaviour in daily life and other activities. The second is the object of business etiquette: since business etiquette only occurs in trade activities, its object is the business personnel involved in business activities, which is also determined by the attributes of business etiquette. According to this interpretation, there are four elements of business etiquette in commercial activities: the subject and object of business etiquette, the media, and the environment in which business etiquette occurs.

The effects of business etiquette are as follows.

It coordinates employee relations. Experts are aware of the many advantages of proper business etiquette. In professional situations, extending proper courtesies can help you make a good impression on colleagues. Moreover, it makes the office environment much more pleasant and makes for better quality work when employees treat each other well. Proper etiquette also makes it more likely that a team of workers will come together to complete a project, which further means that deadlines will be met and employees will feel less burned out.

It creates a good image of employees and companies. Business etiquette is a kind of accumulated human civilization; it is also a standard of behavior observed by employees. It can regulate employees' behavior, and employees represent the companies they work for. The best intrinsic qualities of each employee come from the continuous cultivation of proper etiquette. As is well known, good manners make a positive impression. Etiquette therefore preserves employees' goodwill as well as maintaining the company's image and reputation. Learning cross-cultural business etiquette is thus beneficial when associating with others: it gives us clues as to how to act and what to do in any given situation. Far from stifling your personality in a straitjacket, etiquette, by giving you the confidence to handle a wide range of situations with ease, actually lets you focus on being your own, relaxed self [5].

Therefore, the next step is to analyze the characteristics of business etiquette in China and the United States, and to make a comparison.

BUSINESS ETIQUETTE IN CHINA

Chinese business etiquette has Chinese characteristics and fully demonstrates Chinese culture, as the following aspects show.
Giving Gifts

China is a nation with a great deal of etiquette. With the increasing openness of society and growing international exchange, business communication with other countries often involves gift giving. In international exchanges, people often give gifts to express gratitude and congratulations and so promote friendship. Chinese people tend to pay attention to the essence of the gift, that is, its practical value, and do not like things that are not useful. This is mainly because for thousands of years China has been adversely affected by population pressure, scarce resources, and low levels of social productivity; therefore, more attention is paid to material life and practicality. Chinese people pay attention not only to the practical value of a gift but also to its price tag. When people buy gifts abroad, the owner or clerk in a foreign shop often takes the trouble to tear off the price tag, which is the opposite of what the Chinese giver wants: keeping the price visible shows the real value of the gift, so that the recipient feels the giver's sincerity in giving such a valuable present. In order to show restraint, people in China often accept gifts without a visible expression of joy and do not open them in front of the giver; opening a gift on the spot is considered very impolite and gives the impression of being greedy or overly concerned with the gift. The gift is usually opened quietly after the business event, after the VIP leaves, or after returning home. When accepting a gift, Chinese people often refuse it until the other party insists again, which shows that even accepting a gift is done reluctantly; the gift is then set aside with apparent indifference. However, the act of quietly opening the gift after the guest has left is proof enough that the Chinese care about receiving the gift; they just do not show it in person.

Concept of Time

Einstein said that time is the most precious gift. An old Chinese saying emphasizes the value of time: "An inch of time is worth an inch of gold, yet an inch of gold cannot buy an inch of time." The concept of time arises from how human observers perceive natural movement and impose a cultural order on it. The Chinese are good at observing the cycles of the seasons and the cycle of Yin and Yang [1], and they believe that time is cyclical and that the cycle repeats. The traditional Chinese idea of time still has an impact in the contemporary era. In business activity, the concept of time also fully reflects the Chinese people's deep-rooted attitudes. In business etiquette, when Chinese people make an agreement with their clients to meet and discuss business, no matter whether they have a big chance to win or the other party is dominant in the negotiation, they should not handle it lightly. Arriving early at the appointed time is not only a sign of respect for others, but also a sign of personal accomplishment and of the importance attached to the job. China, though, has long been accused of being "unpunctual". These days, with the improvement of personal quality and the influence of the West, Chinese people arrive on time or enter the appointed place in advance to wait for the other party in business activities.
Greeting

In China, greeting is an indispensable form of social communication: no matter how familiar you are with the other party, you should say "hello". When you meet someone for the first time, smile, look them in the eye, lean forward slightly, and shake hands. When people greet you, you should be polite in return; a sincere tone and a generous attitude are more likely to win the goodwill of others. If the two people are of the opposite sex, the woman should hold out her hand first and the man should then shake it. Chinese greetings in business contacts are very reserved. Like the Chinese character itself, they are reserved and considerate, and always begin with some daily pleasantries, for example, "How are you?" or "Did you have breakfast?" This is not only conducive to closing the distance with customers and promoting the success of business negotiations, but also reflects one's own quality and moral character. Business contacts are also very particular. First of all, different greetings can be used at different times. Besides the common "hello", greetings can be varied according to time, person, and place: before 10 a.m. you can say "good morning", between noon and 2 p.m. "good afternoon", and between 6 and 9 p.m. "good evening". After 9 p.m., unless there is an urgent matter, do not call the other party, so as not to disturb their rest. Second, greet by announcing your name. Your name should be announced as soon as you say "hello" on the phone; if you go straight on to business without identifying yourself, the other person cannot respond immediately, which causes trouble, as they may be embarrassed to ask, "Who are you?" Failing to recognise the voice of someone one knows well can also make the person who answers the phone uncomfortable. In particular, when a superior calls a subordinate, the superior must announce their name first. Finally, pay attention to tone and intonation when greeting. On the phone, neither party can see the other's expression, and the only channel of communication is hearing [4]. Therefore, when greeting, the tone should be appropriate, the pace moderate, and the articulation clear, without slipping into dialect. Greetings that are too slow, too loud, or too weak can make the other person feel bad, which can affect the outcome of a phone conversation.

BUSINESS ETIQUETTE IN AMERICA

American business etiquette fully reflects the native culture of the United States; the following three aspects are analysed in detail.
Greeting

When Americans meet their guests, they usually shake hands. They tend to hold hands firmly, look each other in the eye, and lean forward slightly; that is considered good manners. Americans have an aversion to those who do not look them in the eye when shaking hands, as avoiding eye contact is considered arrogant and impolite [3]. When shaking hands with a guest in a social setting, Americans also have some other customs and rules. If the parties are of the same sex, the older person should usually extend his hand to the younger person first, the senior person to the junior person, and the host to the guest. Another of their courtesies is kissing, a greeting which shows that the parties are very familiar with each other. The kissing ceremony is often accompanied by a degree of embracing, and for different relationships and different identities, the parts kissed are not the same. In public and social occasions, close women may kiss each other on the face, men hug each other, and men and women usually kiss each other on the cheek. Juniors kiss their elders on the forehead, and a man may kiss the fingers or the back of the hand of a distinguished female guest. Americans also have three major taboos. The first is asking someone's age. The second is asking the price of someone's belongings. The third is saying, on meeting, "You have put on weight!" Age and price are personal matters, and Americans do not like interference in them. As for "You have put on weight!", this is a Chinese form of "praise", but in the United States such a remark is derogatory. Americans address each other by their first names; they usually do not use "Mr", "Mrs", "Miss", or other titles. They try to get to the point quickly.

Concept of Time

In general, Americans do not evaluate their visitors through long chats in a casual atmosphere, nor will they invite guests to a restaurant to establish a sense of trust and friendship before the business is settled. For most, friendly relationships are not as important as actual performance. They focus on past track records rather than social etiquette to evaluate a peer. They typically evaluate and discuss things from a professional perspective rather than a social one, so they quickly get down to business. Most Americans fill their schedules with appointments and divide their time into sections [6]; these schedules can be divided into periods as short as 15 minutes, and a person may be given two or three (or more) such time slots. In business, however, it is almost always appointment after appointment, no matter what they are doing; as a result, the ticking of the clock is always in their ears. Americans believe that time can be arranged, saved, wasted, encroached upon, killed, and so on. They also charge for their time, because they believe it is a valuable resource and a precious commodity. In fact, Americans do not spend much time chatting with visitors or entertaining guests in restaurants to establish friendly and cooperative relations; after a brief, polite greeting with the guest, they quickly get down to business. Because Americans regard time as part of life itself, they hate people who unwisely waste other people's time. If they feel that time is slipping away and they have nothing to gain, they start fidgeting and getting emotional. Americans like to work to a plan, so they do everything according to a strict schedule. If they find themselves behind schedule, they squirm and try to speed up [2].
When Americans plan an event, they usually set the time days or weeks in advance. Once the time is determined, it is not easily changed unless the situation is urgent. All meetings, appointments, social events, and so on require prior notice so that the other party can make arrangements early; it is considered impolite to inform the other party of an event at short notice.

Characteristics of Clothing

Americans pay attention to dress on formal social occasions, and there is a dress code at banquets. There are about five types of dress code in the United States: formal (semi-formal), informal (business attire), business casual, casual, and sportswear. If you are invited to a party, the first thing to do is not to rush to find the right dress, but to ask about the party's dress code. When attending an important meeting, you should pay attention to the dress code on the invitation; if you are not sure about the clothing requirements, you can ask other participants first to avoid embarrassment. The bottom button of a vest is usually left unbuttoned. On formal occasions or at work, women should wear skirts, while men should wear ties and dark suits. An evening dress should have an ankle-length hemline and be worn with high heels. At present, the tuxedo and the lounge suit are widely used in business activities, with the tuxedo mainly worn at banquets. The tuxedo is characterized by cleanliness: the collar is trimmed with smooth, shiny satin, the waist carries a wide, smooth ribbon or silk belt and a black bow tie, and there is only one button at the waist. The lounge suit, on the other hand, is essentially a dark suit, which requires the jacket and trousers to be of the same fabric and style. This not only reflects respect for the occasion but also reflects one's own taste. Moreover, it is considered impolite to go out in pajamas or slippers. Americans believe that applying make-up in public, or in front of a large crowd, is not only ill-bred but can also arouse suspicion. Generally speaking, Americans are not particular about what they wear in ordinary daily life: they advocate naturalness, prefer looseness, and like dress to reflect personality, which is one of their basic characteristics.

THE DIFFERENCES IN BUSINESS ETIQUETTE BETWEEN CHINA AND AMERICA

Based on the analysis of American and Chinese business etiquette in the sections above, this section makes a comparative analysis of Chinese and American business etiquette in different aspects.
Talking

In business contacts, there is a high demand for business eloquence; talking is an art. Americans may not be able to appreciate Chinese people's humor, and American jokes are not always understood by the Chinese. In business activities, if you know the other side and know yourself, you can fight a hundred battles with no danger of defeat. First, Americans treat conversation as more private. In China, however, the boundaries of personal privacy are far less pronounced, and people do not mind others having a general understanding of their lives. Second, Americans seldom use modest words. Americans are direct: they value logic and linear thinking and expect people to speak clearly and in a straightforward manner. To them, if you do not say it directly, you simply waste time, and time is money. The Chinese, however, often use modest words; for example, when receiving a compliment, a Chinese person will euphemistically say "It's no big deal" instead of accepting it cheerfully. Americans also make little use of conventional exchanges of greetings: they pay little attention to pleasantries in business meetings and much attention to efficiency. Chinese people are more accustomed to warm language, and when meeting business partners they are very warm and hospitable. Talking, in short, is an art.

Table Manners

Shakespeare said that the most appetizing thing at a banquet is the host's courtesy. Although business dinners take different forms, what they all have in common is a measure of sociability over and above that of an office-bound appointment. Good manners at business parties help you cultivate friendship with your clients. In fact, Chinese people eat more casually than Westerners, although there are many rules concerning the other guests. Tables in China are usually round; the seat facing the door is for the host, and the main guests sit beside the host, which makes it cosy to talk. Americans, by contrast, like a quiet and natural environment when they have dinner, and they believe that one must pay attention to appearance at the table and not lose one's manners. For example, dishes should not clatter against each other when eating, and no noise should be made while chewing. Another difference is that in the United States the right-hand side is the position of honour, gentlemen and ladies are seated separately, and couples do not sit together. The seats of female guests take slight precedence over those of male guests, and a man should pull out the chair for the female guest placed on his right, to show respect to the woman. Besides that, when eating, Americans sit up straight, considering it impolite to bend over or lower the mouth to the food. The host also does not encourage heavy drinking during the meal. Drinks, by contrast, play an important role in Chinese food culture: usually both alcoholic drinks and other beverages are served throughout the meal, and it is customary for the host to insist that guests drink to show friendship. If a guest does not want a drink, the guest may say, "I'm unable to drink, but thank you." The host's insistence is meant to show generosity, so refusal by the guest should be made with the utmost politeness.
Business Cards

Nowadays, the use of business cards has become indispensable in social and business occasions; business cards are given at the first meeting. A business card is the easiest way to introduce someone in a social situation, and there are differences in the use of business cards between China and the United States. In America, business cards are exchanged without formal ritual and are given at the beginning of a meeting. It is quite common for the recipient to put the card in their wallet, which may be in the back pocket of their trousers, and business cards are always carried (Wang Ping, 2012) [6]. Husband and wife may use the same card. The home or work address usually appears in the right corner, and the position is printed below the name in the centre; a man may add "Mr." to his name, and a married woman may add "Mrs." In China, business cards are likewise exchanged at the initial meeting. One side of the card is translated into Chinese and printed in gold ink, as gold is an auspicious colour, and the company, rank, and any qualifications should be mentioned. When handing a card to the other party, you should smile and look at them, holding the two corners of the card with the thumb and index finger of both hands. If you are sitting, you should rise when passing the card. It is impolite to leave a card lying around after you receive it: when meeting with a customer, it is best to read the important information on the card to show your respect. Whether in the East or the West, knowing each other's business card etiquette in business communication is not only a formality but also reflects the culture and meaning behind it. Passing a business card is not just a small detail; it can reflect a person's quality and manners. A good impression can be left by the exchange; at the same time, friction and embarrassment can be reduced in business activities, and cooperation can go smoothly.
CONCLUSION

In China, the value of life is often reflected in its social value, and the individual or self is always considered in the context of social relations. The Chinese pursue group harmony and a stable ethical order. American culture is in essence an individualist culture; Americans pay attention to science and tend to be more rational in their approach. China is called "the country of etiquette", and advocating etiquette is a traditional virtue of the Chinese people. From ancient times to the present, the etiquette standards of China have stood for a unique Chinese civilization and embodied Chinese virtues. Etiquette, as a traditional virtue, has historical heritage and eternal vitality. It can show the extent of an enterprise's civilization, its management style, and its moral standards, and it shapes the corporate image; good manners will undoubtedly bring intuitive and direct benefits to the enterprise. There are many kinds of etiquette in business work, and in cross-cultural jobs people should be able to handle the business etiquette of different cultures. Cross-cultural business etiquette is a tool: it can teach people the ins and outs of global business. People often talk about how the world is getting "smaller" thanks to travel and technology. In reality, even though people in one country interact with different cultures more than ever, there are still major differences: everyone has different ideas, different ways of working, and different expectations, and this is what makes life in a diverse world so interesting. Fundamentally speaking, business etiquette is the art of contact between business people [4].

Focusing on the United States and China, this paper analyzes the differences in business etiquette between the two countries. The concept of time, gift giving, greeting etiquette, and dress etiquette in the United States and China are discussed and analyzed in depth, with the purpose of studying Chinese and American business etiquette and comparing the two countries' practices in the same fields. Both sides have their own features and advantages. For instance, in Chinese dining, a lively, pleasant atmosphere is usually preferred, but it is necessary to know how to choose one's seat at the table; Americans, by contrast, do not like a noisy atmosphere. Therefore, the characteristics of the two countries should be examined when conducting cross-cultural business activities [6].

The author of this thesis therefore offers some suggestions for cross-cultural business activities. Above all, it is necessary to respect each other's unique customs. Cultural diversity is a fundamental feature of human society and an important driving force for the progress of human civilization; respect for cultural diversity is essential for the development of national cultures and global cultural prosperity. Only by maintaining cultural diversity will the world become more colourful and full of vitality. In commercial activities, respect the cultural characteristics of China and of other parts of the world. Respect for national characteristics is both a sign of personal character and a way to promote cooperation, and it creates a good impression.
Secondly, deal with cultural differences flexibly. Through a certain amount of cross-cultural study, business people become more interested in different cultures and improve their practical ability; a thorough understanding of different cultures is of great help in business work. Business people should understand and learn the features of various business etiquettes, and on this basis formulate rational plans, select appropriate strategies, and develop measures to prevent risks. Different countries have different business etiquette, which can cause conflicts, and this requires business people to handle problems flexibly (Wang Ping, 2012). In fact, it is difficult to truly master all the differences in business culture; therefore, business personnel should apply their knowledge flexibly in business activities, which can help them solve the problems caused by cultural differences. Business personnel should therefore be flexible and show initiative in business activities.
2021-11-10T16:16:40.341Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "017fa649aae5fd83b49719cbc5cf9253532635a7", "oa_license": "CCBYNC", "oa_url": "https://www.atlantis-press.com/article/125962042.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9b564a90e5a8b8da0513d72f25e6bf7bf099d394", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
225130445
pes2o/s2orc
v3-fos-license
Well-to-tank carbon emissions from crude oil maritime transportation

International seaborne transport of crude oil takes place mainly on tankers, with annual seaborne crude flows totaling an estimated 12 billion barrels. To account for the carbon footprint of crude oil from its international distribution segment, we utilize a micro-level dataset of more than 28,000 individual shipment samples to estimate each journey's carbon emissions. The unique detailed dataset enables us to aggregate carbon emissions at the country level for importers and exporters, by trade lane, and by vessel size category. Our methodology provides a framework for crude oil consumers to dynamically account for the carbon footprint of the commodity, which is transported via different trade routes and by different vessels (in size and age). So far, this dynamic emissions accounting has been largely neglected by oil consumers, who typically apply a single emission factor regardless of the supply chain. Our results highlight the importance for importers of considering the origin and point of use of crude oil in order to have a comprehensive view of its carbon footprint. The quantitative analysis in this study can feed into well-to-tank fuel emissions factors for oil and oil products, enabling companies to adopt dynamic emissions factors in their carbon accounting. Finally, our research is important for the design of new environmental policies for corporate Environmental, Social, and Governance (ESG) reporting to include downstream logistics in the overall emission accounting of oil companies.

Introduction

The oil and gas sector is responsible for significant greenhouse gas (GHG) emissions. Emissions arise from the extraction, processing, transportation, and distribution of fuel (often referred to as the well-to-tank phase of the fuel life cycle) as well as the eventual combustion of the fuel in various applications like energy, heat, and transportation (the tank-to-wheel phase) (El-Houjeiri et al., 2013; Greene and Lewis, 2019). Together, these stages form the well-to-wheel fuel life cycle. While tank-to-wheel emissions are the primary climate impact from oil and gas, the other elements of the fuel life cycle are important to account for and monitor (Rahman et al., 2015; Di Lullo et al., 2016). This paper focuses on the carbon dioxide (CO2) emissions from one of the most important elements of the well-to-tank stage: international maritime transport of crude oil. The primary mode of transport for intercontinental oil movement is the oil tanker, largely powered by marine diesel and heavy fuel oil (Jia, 2018). The maritime sector is an important element of the low carbon transition strategy and has been identified by the Energy Transitions Commission (ETC) as one of the most difficult sectors to decarbonize (ETC, 2018). The International Transport Forum (ITF) estimated that maritime transport made up 3% of global emissions and 27% of freight transport emissions in 2015, amounting to roughly 873 million tonnes of CO2 per year (ITF, 2019). Oil tankers make up 13% of maritime emissions, or approximately 114 million tonnes of CO2 (Olmer et al., 2017). The International Council on Clean Transportation found that between 2013 and 2015, oil tankers as a class became more efficient, mainly due to the continuous improvement of technical standards (Olmer et al., 2017). Speed reduction has a high potential to increase fuel efficiency (Corbett et al., 2009; Faber et al., 2012), but only if charterparty contractual terms allow (Jia et al.
2017), or if mandatory slow steaming measures are put in place (Rehmatulla and Smith, 2015). While reduction in carbon emissions has become a focus for the shipping industry and its many customers, sustainability efforts in oil transport have focused on safety issues, namely avoiding oil spills (Poulsen et al., 2016; Smith et al., 2015). The importance of well-to-tank emissions is becoming more prominent with the growing adoption of alternative fuels like biodiesel or hydrogen (Bouman et al., 2017; Di Lullo et al., 2016; Ozawa et al., 2017; Winebrake et al., 2007), where emissions often lie primarily in the well-to-tank phase. Companies that are seeking to understand the true impacts of their activities and align with mandates from carbon accounting and the Science Based Targets initiative must include values for well-to-tank emissions (Greene and Lewis, 2019; SBT, 2018). Companies do this by using standard emissions factors to convert fuel use into greenhouse gas emissions (Greene and Lewis, 2019). Typically provided by government bodies or academic studies, emissions factors are generally presented as a static value for the well-to-tank and tank-to-wheel emissions, or combined as well-to-wheel emissions, for diesel, gasoline, and other fuels (Edwards et al., 2014; DEFRA, 2019; EPA, 2014). These factors are rarely provided based on the fuel's origin or place of use. In order to better understand variations within the carbon emissions from oil, this paper provides new insights on the well-to-tank phase of the oil life cycle through an in-depth analysis of emissions from the maritime transportation of oil. The remainder of the paper is structured as follows. The relevant literature is reviewed in Section 2. Data and methodology are presented in Section 3, followed by results and discussion in Section 4. Section 5 concludes the paper.

Literature review

The international maritime sector has been under scrutiny because large international ocean-going vessels have, until very recently (1 January 2020), been mainly fueled by marine diesel oil and residual heavy fuel oil. The overarching governing body of the shipping industry, the International Maritime Organization (IMO), has implemented a series of regulations and operational practice guidance to reduce emissions from the industry. For instance, the recent IMO 2020 low-sulphur cap regulation aims to tackle sulphur oxide emissions from ocean-going vessels either through burning lower-sulphur marine gas oil or by equipping vessels with abatement facilities. CO2 emission reduction is achievable through improvements in operational practices such as slow steaming (Psaraftis and Kontovas, 2013); vessel designs (see Motley et al., 2012; Doulgeris et al., 2012); or the use of alternative fuels (Bengtsson et al., 2011; Balcombe et al., 2019), with zero emissions as the ultimate goal. Cariou et al. (2019) also show that liner shipping companies can achieve CO2 emission reduction through network design by reducing vessel-cargo travel distances. The Energy Efficiency Design Index (EEDI) was introduced by the IMO in 2011 to set minimum technical standards for the energy efficiency of vessels built in and after 2013, with emission reduction as the ultimate aim (Devanney, 2011). However, improvements in environmental performance are mainly driven by power relationships in the market (see, for instance, Jeppesen and Hansen, 2004; Ivarsson and Alvstam, 2010; De Marchi et al., 2012; Goger, 2013).
Maritime transportation is a demand derived from international trade. Cargo owners, for instance the oil companies, are the other important party in this system, and emphasis should be given to the whole crude oil supply chain in order to consider its power dynamics. Previous work on crude oil life cycles has provided insights on the variable emissions along certain oil value chains. For example, El-Houjeiri et al. (2013) found that emissions from crude oil production can range from 3 to 30 g CO2/MJ depending on processing techniques and rates of gas flaring at a particular well field. The California Air Resources Board (CARB) assessed the well-to-refinery emissions of crude oil processed by California refineries, observing differences ranging from 2 to 48 g CO2/MJ depending on the oil field of origin (CARB, 2019). In a study of China's oil supply, Masnadi et al. (2018) found that the well-to-refinery emissions varied by oil field, with values ranging between 1.5 and 47 g CO2e/MJ. These studies showcase the variability within oil production processes, but, while they include oil transportation, they do not specify the share of these emissions related to the transportation of crude oil. Further, these studies do not use efficiency data for specific oil tankers, relying instead on industry-average data. The accuracy of well-to-tank emissions for all fuels can be improved by providing emissions factors based on the oil's origin and ultimate destination, as well as the specific equipment used to carry it. This research attempts to fill this gap by investigating the potential for refining the transportation component of well-to-tank values based on a unique dataset of oil shipments. Through this analysis, this paper aims to build on the work of Clean Cargo, a group that offers trade-lane emissions values for container ships, by providing a similar set of information for oil trade lanes that can be used in carbon footprinting initiatives (Clean Cargo, 2019). This research also echoes the efforts of the IMO member states to reduce carbon emissions by 50% by 2050 (IMO, 2018), but emphasizes awareness among a wider community.

Data and data processing

The basis for this study was a unique raw dataset of 70,000 oil shipments that took place between 2013 and 2016, provided by Clipper Data Ltd. and primarily derived from the Automatic Identification System (AIS) for vessel tracking and from port agents for cargo information. Note that the recent IMO 2020 regulation switching the industry from burning high sulphur fuel oil (HSFO) to low sulphur fuel oil (LSFO) does not improve CO2 emissions; in fact, there have been suggestions that very low sulphur fuel oil (VLSFO) has an even worse impact on black carbon emissions (Lloyds List, 2020). The vessel identification (name and IMO number) was then matched with the Clarksons Fleet Registry database to obtain the vessel specifications, including the Energy Efficiency Design Index (EEDI). The dataset included information on the shipment origin and destination, shipment size, buyers and sellers of the cargo, and vessel information. In order to analyze the emissions from the shipments, various efforts were made to filter and categorize the data using Python, R, and Tableau, as summarized below. In order to ensure the accuracy of results, the raw dataset was cleaned by excluding duplicate, incomplete, or nonsensical shipments.
For instance, a number of shipments that had a duration of more than 50 days or less than two days were removed. Shipments with the same load and offtake country, as well as those with a travel distance of less than 100 km, were removed in order to keep the focus on international maritime journeys. Finally, shipments on the same vessel with multiple discharge ports along the same trade lane were identified and aggregated, so that the voyages with the largest cargo volume were kept. The EEDI standard, adopted by the IMO in 2011, sets minimum technical energy-efficiency requirements for vessels built after 2013 (IMO, 2012). For older vessels built before 2013, the commercial company RightShip back-calculated EEDI values for the whole existing world fleet. The resulting Existing Vessel Design Index (EVDI) is a means to evaluate the carbon intensity (grams of CO2 per tonne-nautical mile) of individual vessels based on a ship's design, manufacturer specifications, data from shipyards, industry publications, etc. Though we recognize that actual emissions will vary with operating conditions, e.g., speed and weather, we chose the "design" index, assuming vessels were operated at design levels (i.e., design speed and fair weather conditions), to provide a generalized picture of the carbon footprint of crude oil seaborne transportation. Interested researchers can adjust the results based on specific information, for instance, average vessel speed by trade lane per time period. Define trade lanes Once the duplicative, conflicting, and misrepresentative shipments, as well as domestic shipments (i.e., the same country for port calls in consecutive voyages), were removed and the vessels were matched with their EVDI scores, where available, the number of shipments fell from 73,313 to 28,043. The shipments were organized into common trade lanes based on the most important flows between origin and destination regions, as shown in Table 1. In general, the trade lanes were categorized as major international trade lanes, intraregional lanes, and a catch-all category of other international lanes, which includes all low-volume trade lanes not represented elsewhere. Shipments along these trade lanes include direct port-to-port shipments as well as ships that make multiple stops to discharge along the trade lane. Carbon emission density EVDI measures the CO2 emissions per tonne-nautical mile for vessel i (Psarros, 2017; Jia, 2018):

EVDI_i = (P_i × C_F × SFC) / (DWT_i × V_ref,i),   (1)

where P_i is the energy consumption level of the main and auxiliary engines (kW) for vessel i; C_F denotes the conversion factor between fuel consumption and CO2 emission; SFC denotes the certified Specific Fuel Consumption (g/kWh); and, because EVDI is expressed per tonne-nautical mile, the denominator contains the vessel's capacity DWT_i (tonnes) and reference speed V_ref,i, as implied by the units. Total CO2 emissions for voyage j by vessel i are calculated as the total amount (tonnes) of CO2 emitted during the voyage:

CE_{i,j} = EVDI_i × S_{i,j} × D_j,   (2)

where CE_{i,j} is the total CO2 emission for vessel i during voyage j; S_{i,j} is the cargo size on board vessel i during voyage j (tonnes); and D_j is the distance for voyage j (km). To align with the scope 3 method in the GLEC Framework (Greene and Lewis, 2019), which applies to the whole supply chain, nautical miles of ocean distance are converted to kilometers. The resulting value from Eq. (2) is then scaled from CO2 to CO2e using the 2% conversion factor recommended by the GLEC Framework.
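As a minimal illustration of Eq. (2) and the unit conventions above, the sketch below computes voyage emissions from an EVDI value, a cargo size, and a distance, converting kilometers back to nautical miles for the EVDI units and applying the 2% CO2-to-CO2e uplift. The example figures are hypothetical, not values taken from the dataset.

```python
# Minimal sketch of the voyage-emissions calculation in Eq. (2),
# using hypothetical inputs (not values from the Clipper Data set).

NM_TO_KM = 1.852       # kilometers per nautical mile
CO2_TO_CO2E = 1.02     # 2% uplift from CO2 to CO2e (GLEC Framework)

def voyage_emissions_tonnes_co2e(evdi_g_per_t_nm: float,
                                 cargo_tonnes: float,
                                 distance_km: float) -> float:
    """Total CO2e (tonnes) emitted moving `cargo_tonnes` over `distance_km`.

    EVDI is expressed in grams of CO2 per tonne-nautical-mile, so the
    distance is converted back to nautical miles before multiplying.
    """
    distance_nm = distance_km / NM_TO_KM
    grams_co2 = evdi_g_per_t_nm * cargo_tonnes * distance_nm
    return grams_co2 * CO2_TO_CO2E / 1e6   # grams -> tonnes

# Hypothetical VLCC voyage: EVDI 3.0 g CO2/t-nm, 280,000 t cargo, 12,000 km
print(voyage_emissions_tonnes_co2e(3.0, 280_000, 12_000))  # ~5,552 t CO2e
```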
Table 2 summarizes the characteristics of the shipments along each trade lane. The average distance oil was transported in this study was 4160 km. The trade lanes of Arabian Gulf to East Asia, Latin America to South East Asia, and West Africa to South East Asia have the highest average distances; it will be shown later that this has an important effect on emissions. Dead Weight Tonnage (DWT) measures the capacity and size of a vessel. The average for oil tankers in this study was 179,000 DWT, falling into the Very Large Crude Carrier category on the Average Freight Rate Assessment (AFRA) scale. It is noticeable that, in general, larger vessels are utilized for long-distance shipments, whereas vessels with lower DWT are more frequent on intraregional or short-distance trade lanes. For example, Latin America to North America operated the smallest ships, and Arabian Gulf to East Asia the largest. Oil maritime transport emissions by trade lanes The average load factor of the ships, the ratio of shipment volume to ship capacity, represents the efficiency at which a ship is operating; higher load factors indicate more efficient shipments. The load factor varied by trade lane: the Arabian Gulf to North America trade lane had the lowest average load factor, 57%, and Russia to East Asia the highest, at 87%. The average across all trade lanes, 73%, was similar to the average 70% load factor identified by Clean Cargo (2018) for container ships. The average emissions intensity for oil transport along each trade lane is provided in several formats. Firstly, for each trade lane, we show the average EVDI of oil tankers, representing the efficiency of the fleet operating along that trade lane; lower values indicate more efficient vessels. EVDI ranges from 2.6 g CO2 per tonne-nautical mile for Arabian Gulf to East Asia to 4.15 for Latin America to North America. We point out that these values correspond only to the design efficiency of the ships; actual operational performance, such as ship speed, is not considered in these results. When the emissions are allocated to the shipment weight and distance, calculated based on the tonne-kilometers traveled by each shipment during the study period, the results are presented in CO2e/tonne-nm and CO2e/tonne-km (for comparability with other transport modes). The relative ranking across the trade lanes by CO2e/t-nm (CO2e/t-km) did not change materially compared with the EVDI measures. Namely, the average carbon intensity of the shipments, in CO2e/tonne-km along each trade lane, was lowest for Arabian Gulf to East Asia and highest for Latin America to North America. These values are useful for buyers of oil to estimate the carbon emissions of the maritime transportation of their oil purchases, for example for CDP reporting or a product carbon footprint. The remaining value, CO2e/liter, relates not to the GHG emitted to power the ship, but to the oil transported within the vessels as cargo. These values could be considered part of a product carbon footprint for crude oil.
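To show how a CO2e/liter cargo-allocation value of this kind can be derived from a voyage total, a hedged sketch follows. The crude density used (0.87 kg per liter) is an illustrative assumption; the paper does not state the density it applied.

```python
# Sketch of allocating a voyage's emissions to the crude oil carried as
# cargo, in g CO2e per liter. The density is an assumed illustrative
# value; the paper does not specify the figure used.

CRUDE_DENSITY_KG_PER_L = 0.87   # assumption; typical crude is ~0.8-0.9 kg/L

def g_co2e_per_liter(voyage_tonnes_co2e: float, cargo_tonnes: float) -> float:
    """Allocate total voyage CO2e (tonnes) to each liter of cargo."""
    cargo_liters = cargo_tonnes * 1000.0 / CRUDE_DENSITY_KG_PER_L
    return voyage_tonnes_co2e * 1e6 / cargo_liters   # tonnes -> grams

# Using the hypothetical voyage from the previous sketch:
print(g_co2e_per_liter(5552.0, 280_000))  # ~17.2 g CO2e per liter
```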
Here again we see wide variability by trade lane that does not necessarily correspond to the EVDI or oil tanker carbon intensity. In fact, a negative correlation was observed between CO2e/liter and EVDI; however, the correlation coefficient is very low (−0.203), which indicates that the EVDI of a vessel does not have a strong effect on emissions per barrel of oil transported. A weak correlation also exists between these variables and load factor, which implies that the degree of utilization of the vessel's capacity does not have a large influence on total emissions either. On the other hand, a very strong correlation exists between these variables and distance (correlation coefficient = 0.93), showing that CO2e/liter is linearly related to distance, with R² = 0.948. This suggests that minimizing distance is a key lever for decreasing crude oil maritime transportation emissions. Oil tanker emissions compared with well-to-wheel emissions Companies tracking their GHG emissions use fuel emissions factors (g CO2e/liter) to convert the amount of fuel burned into CO2e. For most companies, a single emissions factor is adopted that represents the average emissions for an entire class of fuel, regardless of its supply chain. While it is impossible to know the percentage of each fuel emissions factor that can be attributed to maritime transportation, we can still make several observations about the variance in well-to-tank emissions based on the origin and destination of fuel. As Fig. 1 shows, emissions per liter of fuel were considerably lower for short trips, such as from Latin America to North America or from Russia to East Asia. Conversely, longer trips, such as from the Arabian Gulf to North America, had higher emissions. Also high were emissions for oil shipped from Northern and Western Africa to Europe, likely due to the higher EVDI of oil tankers running on these lanes (see Fig. 2). Comparing oil tanker emissions to the GLEC well-to-tank fuel emissions factor (250 g CO2e/liter for heavy fuel oil), our results indicate that the proportion of these factors that maritime emissions would comprise varies by trade lane. Considering the case of heavy fuel oil, the least refined type of oil, shown in Fig. 3: if crude oil traveled from the Arabian Gulf to North America, maritime transport would make up 11% of well-to-tank emissions, whereas if the oil was transported from the Arabian Gulf to Southeast Asia, this number drops to 3%. Depending on the type of fuel and its value chain, it is possible that companies are over- or under-estimating their well-to-tank emissions by using generic industry-average values. Conclusion Properly accounting for carbon emissions is becoming increasingly important to companies, governments, and international governing bodies as part of efforts to keep global temperatures within 2 °C of pre-industrial levels. These efforts need various stakeholders to join forces to achieve environmental improvements (see, for instance, Schleifer, 2013; Hale and Roger, 2014; Abbott et al., 2015; Graham and Thompson, 2015; Raza, 2020). Consequently, companies in many industries are increasingly calculating, disclosing, and seeking to reduce carbon emissions along their value chains, including the production and distribution of the fuels they consume. This is reflected in the growing trend toward the inclusion of well-to-tank emissions in carbon accounting standards and climate goal-setting. In this study, we demonstrated the difference in oil tanker efficiency along key trade lanes based on micro-level oil seaborne shipment data. We also demonstrated how this affects the carbon footprint of the oil cargo being transported by these oil tankers, adding new dimensions to the work of El-Houjeiri et al. (2013) and Masnadi et al. (2018), who consider oil life cycles using global industry-average emissions intensity values for oil tankers. The results show that the main driver of emissions is distance, despite optimized loads and more energy-efficient oil tankers.
This suggests that efforts to reduce these emissions should first be directed towards increasing local shipments, rather than improving EVDI or load factors. New maritime routes may reduce oil transport emissions by decreasing the distance shipments need to travel, subject to naval and commercial feasibility (ITF, 2019). Most significantly, the Kra Canal across the Malayan peninsula would shorten the Arabian Gulf to East Asia trade lane by 1,200 km. The Nicaraguan Canal and newly ice-free Arctic shipping routes may have the same effect. There is potential to use this study's CO2e/liter emissions factors for oil maritime transportation to refine estimates of well-to-tank emissions on a trade-lane basis, to more closely fit the emissions from oil consumed by companies. This work would contribute to a growing need to understand well-to-tank emissions as alternative transportation fuels, whose emissions lie primarily in the well-to-tank phase, begin to be used more widely. In addition, the CO2e/tonne-km values build on the work of Clean Cargo, which offers trade-lane factors for container shipping, by creating a similar trade-lane dataset for oil tankers. There is also potential to leverage this information to inform fuel sourcing decisions by governments or companies, or as a factual basis for influencing oil transporters to reduce the emissions of their ships. Further work could be done to characterize the other transportation emissions that are also part of well-to-tank emissions, such as trucking, pipelines, or other shipping activities that invariably occur as the crude oil is further processed and distributed. As companies and governments look towards their net-zero and Paris Agreement goals, it is clear that including transportation in the fuel life cycle based on origin and destination is an important consideration that can help refine emissions estimates and inform procurement strategies. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments Haiying Jia would like to acknowledge the financial support from the Research Council of Norway as part of the project 'Smart digital contracts and commercial management', project no. 280684. The authors would like to acknowledge the valuable input from Roar Os Ådland and Yinjin Lee in the development of this paper.
2020-10-28T19:08:49.929Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "bcbc11f0f41b7bd747c99c37aea13ff5474c731d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.trd.2020.102587", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "6287416f7c9558300aa1643417bc3fc4cd1e9bf9", "s2fieldsofstudy": [ "Environmental Science", "Economics" ], "extfieldsofstudy": [ "Environmental Science" ] }
123178890
pes2o/s2orc
v3-fos-license
Confidence Limit for the Existence of a Solution to the One-Dimensional Magnetotelluric Inverse Problem The necessary and sufficient conditions for the existence of a solution to the one-dimensional magnetotelluric inverse problem are extended to data with errors. A confidence criterion is defined and used for testing the existence of a one-dimensional solution when data errors are considered. Synthetic data sets and two very popular real data sets are tested using this new confidence criterion. Introduction Weidelt (1972, 1986), Parker (1980), and Yee and Paulson (1988a) stipulated the necessary and sufficient conditions for the existence of a solution to the one-dimensional (1D) magnetotelluric inverse problem. The existence was based on an all-inclusive set of continuous and discontinuous one-dimensional conductivity profiles, called S+ (Yee and Paulson, 1988b), and is guaranteed if two simple constraints on the c-response data set {c(ν_j)}, j = 1, ..., N, are satisfied. Here, c(ν_j) is the c-response at the real, positive, discrete frequency ν_j. From this data set, a pair of N × N Hermitian test matrices, Q and Q̄, are constructed, with typical elements

q_{jk} = [c(ν_j) − c*(ν_k)] / [i(ν_j + ν_k)]   (1)

and

q̄_{jk} = [ν_j c(ν_j) + ν_k c*(ν_k)] / (ν_j + ν_k)  for j, k = 1, 2, ..., N,   (2)

where * denotes complex conjugation. If both of these test matrices are positive definite, the given data set is consistent with some c(ν), ν_1 ≤ ν ≤ ν_N, corresponding to a one-dimensional conductivity σ ∈ S+. If only certain submatrices are found to be positive definite, consistency with some c(ν) would be restricted to the frequency ranges involved in the definition of those submatrices. The derivation of Weidelt (1972, 1986) and Yee and Paulson (1988a) is applicable only to precise data and consequently is of little practical use. While Parker (1980) and Weidelt (1990) considered data imprecision, the former did so only through a chi-square goodness of fit to a 1D conductivity distribution in D+ (i.e., a sequence of positive delta functions), which requires that the residuals have a Gaussian distribution, and the latter failed on the generalization to N frequencies. In this paper it will be shown that compensation for the effects of random noise is possible, allowing one to test for the one-dimensionality of the conductivity structure associated with any measured response data set {c(ν_j)} that contains random noise in its structure. To this end, it will be assumed that the errors in both the real and imaginary parts of each c(ν_j) are known. These errors can be either in the form of standard deviations or maximum errors, and should always be part of any experimental magnetotelluric data set. This allows the definition of two error matrices, Δ and Δ̄, associated with Q and Q̄, respectively. Then a pair of Hermitian perturbation matrices, P and P̄, are defined which are just sufficient to force the sums Q + P and Q̄ + P̄ to become positive definite. Finally, a criterion is chosen which relates these minimum perturbation matrices to their corresponding error matrices. Obviously, if the perturbation elements are large in comparison with the error elements, factors other than imprecision are causing the non-positive-definiteness of the test matrix. To this end, non-negative scalars μ and μ̄ are introduced, together with the ad hoc inequalities μ ≤ 1 and μ̄ ≤ 1, as criteria for concluding that the measured response data set is compatible with some 1D conductivity structure.
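To make the test concrete, the following minimal Python sketch builds the two test matrices from a c-response data set, finds the minimum spectral perturbation, and forms a perturbation-to-error ratio. The element formulas follow Eqs. (1)-(2) as reconstructed above; since the exact elementwise definition of the ratio in Eq. (6) is not reproduced in this text, the maximum elementwise real/imaginary ratio used below is an assumption, not the authors' formula.

```python
import numpy as np

def test_matrices(nu, c):
    """Hermitian test matrices Q and Q-bar (Eqs. (1)-(2) as reconstructed).

    nu : array of real, positive frequencies, shape (N,)
    c  : array of complex c-responses at those frequencies, shape (N,)
    """
    nu = np.asarray(nu, dtype=float)
    c = np.asarray(c, dtype=complex)
    denom = nu[:, None] + nu[None, :]                    # nu_j + nu_k
    Q = (c[:, None] - np.conj(c)[None, :]) / (1j * denom)
    Qbar = (nu[:, None] * c[:, None] + nu[None, :] * np.conj(c)[None, :]) / denom
    return Q, Qbar

def min_perturbation(Q, margin=0.0):
    """Minimum Hermitian P (in the spectral sense) such that Q + P has all
    eigenvalues >= margin; margin > 0 enforces strict positive definiteness."""
    lam, U = np.linalg.eigh(Q)                           # ascending eigenvalues
    shift = np.where(lam < margin, margin - lam, 0.0)
    return (U * shift) @ U.conj().T                      # U diag(shift) U^H

def perturbation_ratio(P, Delta):
    """Assumed stand-in for the scalar ratio of Eq. (6): the largest
    elementwise ratio of |Re P|/|Re Delta| and |Im P|/|Im Delta|.
    Delta should have no zero real/imaginary parts where P is nonzero."""
    with np.errstate(divide="ignore", invalid="ignore"):
        r = np.abs(P.real) / np.abs(Delta.real)
        s = np.abs(P.imag) / np.abs(Delta.imag)
    return float(max(np.nanmax(r), np.nanmax(s)))
```

Under this reading, a data set is judged consistent with a 1D structure when both ratios, for Q with Δ and for Q̄ with Δ̄, are at most unity.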
The method has been thoroughly tested on both synthetic and measured data sets, some of which are reproduced in the paper. Theoretical Considerations By the spectral theorem (Strang, 1980), every Hermitian matrix Q may be diagonalized by some unitary matrix U and decomposed into the sum

Q = U Λ U^H = Σ_j λ_j u_j u_j^H,

where the elements λ_j are the real eigenvalues of Q, the column vectors u_j are the corresponding eigenvectors, and H denotes the operation of complex-conjugate transpose. In the spectral decomposition of Q, the unitary matrix U may be chosen so that the N elements of the diagonal matrix Λ are in decreasing order: λ_1 ≥ λ_2 ≥ ... ≥ λ_N. If all of these eigenvalues are positive, Q is positive definite and the one-dimensional requirement on the test matrix in Eq. (1) is satisfied. Suppose, however, that some of the eigenvalues are negative, where rank N is assumed. Then, with the same ordering as before, a matrix P is chosen to be the minimum perturbation needed to transform Q into the positive definite matrix Q + P. The perturbation P may arise from random noise or from the departure of the conductivity structure from one-dimensionality. It may also arise from other factors such as source effects and near-surface conductivity anomalies. In order to weight the contribution of random noise to the perturbation matrix, the scalar ratio μ of Eq. (6) is evaluated, where ℜ{·} and ℑ{·} denote the real and imaginary parts, respectively, of matrix elements from the perturbation matrix P and the error matrix Δ; the error matrix is derived from the errors in c_j, j = 1, ..., N. In a similar manner, associated with the test matrix Q̄ (Eq. (2)), there are an error matrix Δ̄, a perturbation matrix P̄, and a scalar ratio μ̄. If both μ ≤ 1 and μ̄ ≤ 1, it will be concluded that the perturbation matrices needed to change Q and Q̄ into positive definite forms are within the limits of the error matrices Δ and Δ̄, and that the underlying conductivity structure is one-dimensional. If these two conditions are not satisfied, the structure may not be one-dimensional. The word "may" is used because factors other than noise and higher-order dimensionality can cause the tests to fail. Synthetic example I: Consider a single plane layer of thickness 10 km and resistivity 100 Ωm over a half-space of resistivity 10 Ωm. The values of the c-response at different frequencies are readily calculable. Four of these values are shown in Table 1, and they have been used to construct the 4 × 4 matrices Q and Q̄ defined by Eqs. (1) and (2). From the error and perturbation matrices, the two scalar ratios may be evaluated from Eq. (6), giving μ = 0.0222 (Eq. (14)) and μ̄ = 0.0357 (Eq. (15)). Since both ratios are less than unity, it may be concluded that, within the limits of the random noise, the c-response data set resulted from some one-dimensional resistivity structure, which, of course, is the correct conclusion. Synthetic example II: Following Yee et al. (1988), a single horizontal component of a noise-free, time-dependent magnetic-field signal was generated by an autoregressive moving average, and the corresponding orthogonal, noise-free electric-field signal was generated based on the previously employed model of a single plane layer over a half-space. Two independent realizations of random noise with identical standard deviations were then added to each of these signals to create magnetic- and electric-field message sequences, each containing 16,384 elements with 1-s sampling intervals.
Each of these synthetic data components was then divided into 16 disjoint intervals, each containing 1024 elements, and an impedance estimate was computed from each interval. These estimates are realizations of random variables because they are quotients of physical realizations of fields which have random elements in their structures. Because the standard deviations of the random-noise realizations are known, the standard deviation of each impedance estimate may be determined. Since the model is one-dimensional, the impedance estimates and their standard deviations may be converted into c-response estimates. The sixteen c-response estimates at a frequency of 75 mHz are plotted in Fig. 1 as small solid circles. The two largest circles in the figure are centered on two of these estimates and possess radii equal to the standard deviations of each estimate. Note that the circles have slightly different radii; this is so because the deterministic signal components of the fields vary over the 16 disjoint time intervals. The arithmetic (sample) mean of the 16 impedance estimates, together with its standard deviation, may be computed to yield an improved estimate of the impedance. These two values, converted to c-responses, are displayed in Fig. 1 as a triangle and its associated standard-deviation circle. Theoretically, as the sample size is increased from 16 to infinity, the estimate provided by the arithmetic mean approaches the true value; the true value of the c-response is also shown in Fig. 1. The test matrices constructed from these data each have negative eigenvalues and, therefore, the test of Yee and Paulson (1988a) would result in the conclusion that the underlying resistivity structure is not one-dimensional. However, computation of the minimum perturbation matrices allows evaluation of the two scalar ratios, of which μ̄ = 0.0567 (Eq. (19)); the companion ratio (Eq. (18)) is likewise below unity. Once again, to within the limits of the inherent noise, it is concluded that the resistivity structure is one-dimensional. Application to Experimental Data The theory developed in this paper will now be applied to magnetotelluric data sets obtained by Larsen (1975) and by Jones and Hutton (1979). Both data sets consist of a single pair of apparent resistivity and phase sequences, together with estimates of their standard deviations. The Larsen data set was converted to c-response values by Parker and Whaler (1981). We, in turn, have converted the Jones and Hutton data to c-response values using

c(ν) = sqrt[ρ_a(ν) / (2πν μ_0)] exp{i[φ_a(ν) − π/2]},   (20)

where ρ_a(ν) and φ_a(ν) are, respectively, the apparent resistivity and phase measured at frequency ν, μ_0 is the magnetic permeability of free space, and i = √−1. The error associated with c(ν) may be calculated from Eq. (21), where ψ_c(ν) represents the phase of c(ν); there is a π/2 difference between φ_a(ν) and ψ_c(ν), i.e., ψ_c = φ_a − π/2. Eq. (21) describes an error ellipse whose two radii are |c(ν)| Δρ_a(ν) / (2ρ_a(ν)) and |c(ν)| Δφ_a(ν), respectively, with one axis lying along ψ_c(ν). Strictly speaking, of course, the conversion in Eq. (20) assumes that the underlying structure is one-dimensional, but one can do little else in these cases where the full impedance tensor was not measured in the first place. If the structure, indeed, is not one-dimensional, the conversion will only worsen the departure from a one-dimensional structure as reflected in our test matrices. In any case, the problem could be cleared up entirely by making a small change in the definition of the c-response function, namely

c_{ij}(ν) = Z_{ij}(ν) / (iωμ_0),   (22)

where i, j is some combination of x and y, ω = 2πν, and the other symbols on the right-hand side are conventional ones used in magnetotelluric theory.
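A short sketch of the conversion in Eq. (20), with the two error-ellipse radii quoted after Eq. (21), follows; the phases are in radians, the function names are illustrative, and the example values are hypothetical.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # magnetic permeability of free space (H/m)

def c_response(rho_a, phi_a, nu):
    """c-response from apparent resistivity rho_a (ohm-m) and phase phi_a
    (radians) at frequency nu (Hz), per Eq. (20): the modulus is
    sqrt(rho_a / (2 pi nu mu0)) and the phase is phi_a - pi/2."""
    modulus = np.sqrt(rho_a / (2.0 * np.pi * nu * MU0))
    return modulus * np.exp(1j * (phi_a - np.pi / 2.0))

def error_radii(c, rho_a, d_rho_a, d_phi_a):
    """The two radii of the error ellipse described after Eq. (21)."""
    return abs(c) * d_rho_a / (2.0 * rho_a), abs(c) * d_phi_a

# Hypothetical example: rho_a = 100 ohm-m, phi_a = 45 degrees, nu = 10 mHz
c = c_response(100.0, np.deg2rad(45.0), 0.01)
print(c, error_radii(c, 100.0, 5.0, np.deg2rad(2.0)))
```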
Both Larsen (1975) and Parker and Whaler (1981) concluded that the seven lowest frequencies of the Larsen data set could be adequately interpreted by a 1D conductivity model. However, Yee and Paulson (1988a) concluded that only certain combinations of 3-tuples of frequency values were interpretable as being consistent with some 1D distribution, and that there were no combinations of 4-tuples that would satisfy this criterion. We have calculated the scalar indices for the seven lowest frequencies and find that μ = 0.09664 and μ̄ = 0.06446. Since the two indices are much smaller than unity, we conclude that the seven lowest frequency responses are consistent with some 1D conductivity distribution to within the limits of the inherent random noise. Extending the analysis to include the lowest 8, 9 and 12 frequency values gives the scalar indices listed in Table 2. From this analysis, we conclude that 1D consistency may be extended to the 8th lowest frequency value, but not beyond. For the Jones and Hutton data set, different combinations of c-responses at different frequencies have been analysed. The largest consistent frequency region occurred for the highest frequencies, between 4 and 35 mHz, a span of 8 frequency values, for which μ = 0.2404 and μ̄ = 0.02858. When the next lowest frequency was added, the scalar indices for N = 9 became μ = 3.217 and μ̄ = 0.06182. The conclusion is that the data set is consistent with some 1D distribution over a frequency range of 3.13 octaves (4 to 35 mHz) and that the data associated with the lower frequencies of the data set are not. Conclusions The testing for the one-dimensionality of a given magnetotelluric c-response data set may be heavily influenced by imprecision in the data set. We have presented a theory whereby the noise inherent in the data, as expressed by either the maximum error or the standard deviation, may be used to compensate for the imprecision. The theory is an extension of the deterministic analysis of Weidelt (1986) and Yee and Paulson (1988a). The theory has been applied to both synthetic and experimental data sets with satisfactory results. Since the scalar-index criterion adopted to determine whether a data set is compatible with some 1D conductivity distribution to within the limitations of the noise is purely an ad hoc one, the entire subject should be re-examined. However, the approach described here is a substantial improvement over what has been done in the past.
2019-04-21T13:03:33.429Z
1997-09-20T00:00:00.000
{ "year": 1997, "sha1": "2b47c63d1dd380bb363519a005e36d95808c6f74", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/jgg1949/49/9/49_9_1145/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "8bc28d81eacb2d118455b6172e75d643879abd88", "s2fieldsofstudy": [ "Mathematics", "Geology" ], "extfieldsofstudy": [ "Mathematics" ] }
215410530
pes2o/s2orc
v3-fos-license
Effects of Temperature on Fluidity and Early Expansion Characteristics of Cement Asphalt Mortar. In order to solve the problems of the sudden loss of fluidity and low expansion ratio of CAM I (cement asphalt mortar type I) on construction sites with high environmental temperatures, this paper studies the effect of temperature on the fluidity, expansion ratio and pH value of CAM I. The mechanism of action was analyzed by IR (infrared spectrometry), SEM (scanning electron microscopy) and other test methods. The results showed that high temperature accelerates aluminate formation in cement paste. Aluminate adsorbs emulsifiers, leading to demulsification of the emulsified asphalt, which wraps the surface of the cement particles; this causes CAM I to lose its fluidity rapidly. The aluminum powder gasification reaction is inhibited, resulting in an abnormal change in the expansion ratio. Based on these findings, the application of an appropriate amount of superplasticizer can effectively improve the workability and expansion characteristics of CAM I at high temperature. Introduction The ballastless slab track structure is an advanced track structure used widely in high-speed railways in countries such as Germany, Japan and China, due to its advantages of high stability, high durability and low maintenance compared to the ballasted track structure. Nowadays, two main forms of ballastless slab track (CRTS I and CRTS II) are used in China, and although there are some differences in slab structure, a cement asphalt mortar (CAM) layer is present in both. CAM is an interlayer injected between the track slab and the bottom plate of the high-speed railway ballastless slab track. Its main components are emulsified asphalt, Portland cement, fine aggregates, water, expansion agent, aluminum powder and other admixtures. The CAM used in CRTS I and CRTS II slab tracks is classified as CAM I and CAM II, respectively. CAM I provides important functions in the ballastless slab structure, including support and leveling, which helps to adjust track precision; it also absorbs vibration and helps in vibration isolation. The thickness of the CAM layer varies with the slab track structure and the type of CAM used [1-5]. Studies showed that the environmental temperature has a serious impact on the quality of CAM I perfusion on a construction site. At high temperatures, the CAM I paste loses fluidity instantaneously during mixing, showing demulsification and flocculation (Figure 1). After completing the perfusion, insufficient expansion of CAM I results in gaps between the filling layer and the track slab (depth > 40 mm) (Figure 2). These problems have seriously delayed construction progress and threatened the safety of high-speed trains, and so need to be resolved. Some studies have been conducted on the effects of temperature on the performance of CAM. The temperature sensitivity and mechanical properties of CAM are closely related to the thermal dependence of emulsified asphalt, and the compressive strength of CAM decreases with increasing curing temperature [6,7]. Wang et al. put forward a temperature stability coefficient (TSC) that effectively characterizes the temperature dependence of the strength of CAM [8]. Wang et al. [9,10] and Kong et al. [11] studied the effect of test temperature (0-40 °C) on the compressive strength of CAM and, based on the experimental observations, obtained empirical equations relating the compressive strength of CAM to temperature. Hu et al. used a concrete pressure bleeding instrument to study the compressive strength of CAM in water at different temperatures (20, 40, 60 °C) and pressures (0-0.5 MPa), and found that the compressive strength of CAM decreases with increasing water temperature [12]. Temperature also has an important influence on the stress relaxation process of CAM: as temperature decreases, the stress relaxation rate and stress relaxation modulus of CAM increase gradually [13]. Zhang et al. [14] studied the effect of temperature (0, 20, 40 °C) on the rheological properties of fresh cement asphalt paste using a Brookfield DV-III + ULTRA rheometer; it was found that the yield stress of cement asphalt increased with increasing temperature, and that cement asphalt mixed with cationic emulsified asphalt is more sensitive to temperature than that mixed with anionic emulsified asphalt. They believed that the adsorption of emulsified asphalt by cement particles may be the cause of these phenomena, but the reason for this adsorption was not explained clearly. At higher temperatures, the adsorption behavior of cement particles and emulsified asphalt in fresh cement asphalt paste is more prominent, resulting in increased viscosity and reduced workable time, but the microscopic mechanism is not clear [15]. Therefore, it is necessary to explore the temperature-sensitive micro-mechanisms underlying the working time and expansion ratio of fresh CAM paste. To that end, this paper tests the change of CAM I fluidity, expansion ratio and pH value with time at different temperatures by simulating on-site construction conditions. The stability and evolution mechanism of the CAM I system at different temperatures were studied, and an effective solution to the above construction problems was explored. The purpose was to optimize the mix proportion of CAM I and provide guidance for construction quality control of CAM I. Materials In order to save mixing time and ensure the stability of CAM I, cement, sand, aluminum powder and other solid admixtures were premixed into a dry material, while liquid admixtures were added to the emulsified asphalt. Field construction products included the dry material and the emulsified asphalt.
Properties of the CAM-I dry materials (Table 1) and emulsified asphalt (Table 2) meet the requirements of the Chinese code [16] ("Provisional technical conditions for cement asphalt mortar and resin pours for CRTS I slab ballastless track of Passenger Dedicated Railway"). Table 3 shows the mix proportions of CAM I. Pouring bag: The pouring bag is made of polyester non-woven fabric with a mass per unit area of 105 g/m². Other performance indicators meet the requirements of the Chinese code [16]. Water: tap water is used throughout the experiments. Test of Fluidity According to the Chinese code, the fluidity of CAM I was measured using a "J" type funnel (Figure 3). The testing device includes a funnel (made of brass), a bracket (made of iron) and a stopwatch (accuracy 0.1 s). The test steps are as follows: (1) The funnel is vertically erected on the bracket. (2) The sample is injected into the funnel; the right amount of sample is discharged from the outlet; and the outlet is pressed by a finger so that the sample fills the funnel and the surface is leveled. (3) The finger is released and the mortar flows out. A stopwatch measures the time for which the mortar flows continuously from beginning to end, which is the fluidity of the sample, t (in seconds). (4) A fluidity test is carried out on the same sample every 10 min and a fluidity curve is drawn (Figure 6), that is, the correspondence between the fluidity and the accumulated time. (5) Each group of samples is tested three times for fluidity and working time, and the mean value is taken. Test of the Early Expansion Ratio We measured the early expansion ratio of CAM I with a self-designed device (Figure 4); this device continuously monitors the expansion ratio of pastes through a displacement sensor, and the temperature of the sample is adjusted by heating in a water bath. The test steps are as follows: (1) The fresh CAM I paste at the specified temperature is injected into the φ10 × 20 cm pouring bag, which is tied and put into the φ10 cm PVC tube. (2) The sample is pressed with a light metal sheet; the PVC tube is then placed in a water bath at the specified temperature, and a displacement sensor is placed on the metal sheet. (3) The device is set to record a displacement value every 5 min; each group of samples is tested three times, and the mean value is taken. Test of pH Value The pH value of a sample is measured by a pH meter (Delta320); each sample is tested three times, and the mean value is taken. Microstructural Analysis (1) Infrared spectrometry (IR): hydration of the paste is stopped with ethanol after 5 min of hydration, and IR analysis is then carried out (the sample is processed as shown in Figure 5). (2) Scanning electron microscopy (SEM): samples of CAM I (cured at 20 °C and 55 °C) are analyzed with scanning electron microscopy (SEM) and an X-ray energy dispersive spectrometer (EDS) at the age of 10 days. The experimental steps are shown in Figure 5. As summarized in Table 4 (note: Y refers to a tested combination), the experiments of this study were divided into three parts: the first examined the influence of temperature on the fluidity and expansion ratio of CAM I paste (numbers 1 and 2); the second explored the micro-mechanism (numbers 3, 4, 5, 6, 9 and 10); and the last sought a solution to the problem (numbers 7 and 8). Thus, the experiments were not all performed at the same temperatures. Since CAM I paste completely loses its fluidity at 55 °C, the expansion ratio could not be tested at 55 °C. When testing the variation of the pH value of CAM I paste with time at different temperatures, it was found that the pH value changed most abnormally at 55 °C compared to 20 °C. Therefore, the changes of pH value for each subsystem of CAM I paste with time at both 20 °C and 55 °C were tested to explore the direct cause of the instability of the pH value of CAM I paste at high temperature. When performing microstructural analysis, considering that an IR test has higher precision than SEM in the analysis of phase composition, only two samples, at 20 and 55 °C, were analyzed by SEM, while samples at all four temperatures were analyzed by IR. Because the daily maximum temperature during construction is below 40 °C, the effects of superplasticizer on the fluidity and expansion ratio of CAM I paste were studied only at 45 °C.
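To illustrate how the logged displacements from the expansion-test device translate into an expansion-ratio curve, a minimal sketch follows. Defining the ratio as the height change divided by the initial paste height is our assumption; the paper does not state the exact formula, and the readings below are hypothetical.

```python
# Sketch: turn the 5-min displacement log from the expansion-test device
# into an expansion-ratio time series (percent). Using
# (height change) / (initial sample height) is an assumption; the 200 mm
# initial height matches the phi 10 x 20 cm pouring bag described above.

def expansion_ratio_series(displacements_mm, initial_height_mm=200.0):
    h0 = displacements_mm[0]
    return [100.0 * (d - h0) / initial_height_mm for d in displacements_mm]

# Hypothetical readings (mm), one every 5 min: dormant, expanding, stopped
readings = [0.0, 0.0, 0.4, 1.6, 2.8, 3.0, 3.0]
print(expansion_ratio_series(readings))  # [0.0, 0.0, 0.2, 0.8, 1.4, 1.5, 1.5]
```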
Analysis of Fluidity and Expansion Ratio of CAM I The effects of temperature on the fluidity of CAM I paste are shown in Figure 6. During mixing at 55 °C, the paste lost its fluidity and became flocculent immediately. At 45 °C, the paste also thickened and lost its fluidity about 10 min after mixing. The fluidity of the paste at 20 °C and 35 °C increases slowly with time: the fluidity at 20 °C remains in the range of 26 s after 1 h, and the fluidity at 35 °C was found to stay at 18-26 s within 30 min. As shown in Figure 7, with the increase in temperature, the beginning and end times of expansion of CAM I paste are advanced. At 20 °C, the paste began expanding at about 150 min and stopped at about 350 min. At 35 °C, it began expanding at about 45 min and stopped at 170 min. At 45 °C, it started and stopped early, at approximately 50 min and 130 min. This may be related to the fact that a rise in temperature accelerates cement hydration.
When the temperature rises from 20 to 35 °C, the expansion ratio of CAM I paste increases significantly, and the final expansion ratio more than doubles. When the temperature rises from 35 to 45 °C, the final expansion ratio of CAM I paste decreases by nearly 70%, and the volume even shrinks in the end. Under the alkaline conditions induced by cement hydration, the aluminum powder reacts with OH− to produce H2 in the solution, resulting in a decreased pH value of the solution. This indicates that the expansion ratio is closely related to cement hydration. The chemical reaction is as follows [17]:

2Al + 2OH− + 2H2O → 2AlO2− + 3H2↑   (1)
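As a rough, illustrative order-of-magnitude check of this gas-generating reaction (the 1 g aluminum dosage is hypothetical, and complete reaction with ideal-gas behavior at 25 °C and 1 atm is assumed):

\[
n_{\mathrm{H_2}} = \frac{3}{2}\cdot\frac{1\ \mathrm{g}}{26.98\ \mathrm{g\ mol^{-1}}} \approx 0.056\ \mathrm{mol},
\qquad
V = n_{\mathrm{H_2}} \times 24.45\ \mathrm{L\ mol^{-1}} \approx 1.4\ \mathrm{L},
\]

which indicates why even a small aluminum dosage can drive a measurable expansion of the paste.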
Analysis of pH Value on Different Systems of CAM-I Paste The pH value is an important parameter of cement hydration. In order to find the reason for the abnormal fluidity and expansion ratio of CAM-I at high temperature, the changes of pH for each subsystem of CAM I paste at different temperatures were tested. Change of pH Value of Emulsified Asphalt-Sand System As shown in Figure 8, in the emulsified asphalt-sand system the pH value of the slurry gradually increases during the stationary time and increases very slowly after 15 min. The emulsified asphalt used in this study is cationic. The quaternary ammonium N+ cation formed by dissociation is adsorbed on the surface of the sand, so that the electrical double layer on the surface of the emulsified asphalt particles is compressed and the concentration of OH− in the solution increases. Throughout the whole process, the pH value remained smaller than 7.0, meaning the slurry stayed acidic. With the increase of temperature, the degree of ionization of the electrolytes in the system increases and the concentration of H+ in the solution increases, so the pH value decreases. Since emulsified asphalt is an acid- and alkali-resistant material, the slurry can still remain stable, which can also be seen from the fact that the consistency of the slurry does not change significantly over time. Change of pH Value of Emulsified Asphalt-Water System As shown in Figure 9, the pH of the slurry hardly changed with time in the emulsified asphalt-water system (except for the pH value at 0.25 min, which was too high due to the initial contact), which indicates that the emulsified asphalt-water system is relatively stable. The pH value of the slurry decreases from 2.0 to 1.0 when the temperature increases from 20 to 55 °C. This indicates that the concentration of H+ increases by a factor of 10; this is because the high temperature accelerates the ionization of the electrolytes.
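The ten-fold statement follows directly from the definition of pH:

\[
[\mathrm{H^+}] = 10^{-\mathrm{pH}}, \qquad \frac{[\mathrm{H^+}]_{\mathrm{pH}=1.0}}{[\mathrm{H^+}]_{\mathrm{pH}=2.0}} = \frac{10^{-1.0}}{10^{-2.0}} = 10 .
\]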
Change of pH Value of Emulsified Asphalt-Cement System Figure 10 shows the change of pH value for the emulsified asphalt-cement system over time. Obviously, the pH value of the emulsified asphalt-cement system changes with both time and temperature. At 20 °C, the pH value of the slurry increases steadily with time; at 55 °C, the pH value of the slurry first increases, then decreases, and finally increases with time. The initial pH value was above 12.0 at 55 °C. At the same time, the consistency of the slurry increased significantly with time at 55 °C during the mixing process. These results indicate that the rapid loss of fluidity and the abnormal change of expansion ratio of CAM I paste at high temperature may be attributed to the instability of the emulsified asphalt-cement system at high temperature.
At high temperature, one possible reason for the instability of the emulsified asphalt-cement system is the high pH value. Because the emulsified asphalt used in CAM-I is cationic, the pH of the slurry must be adjusted below 7.0 to increase its stability. However, when emulsified asphalt and cement are mixed by a mixer, the initial pH value of the slurry instantly reaches above 12.0. Therefore, the cement particles may be wrapped by emulsified asphalt that has been demulsified, which delays the hydration of cement. At 55 °C, we adjusted the pH value of the emulsified asphalt to 13.0 with NaOH solution, but the emulsified asphalt was still brown, retained high fluidity, and showed no obvious demulsification. We can conclude that, on one hand, the emulsified asphalt has great resistance to high temperature, acid and alkali; on the other hand, demulsification is not caused by a high pH value alone.

Change of pH Value of CAM-I Paste

The pH value of CAM-I paste at different temperatures is shown in Figure 11. When CAM-I is mixed with water, the pH value of the paste reaches a high level in a moment and increases with time, which may be attributed to the large amount of Ca(OH)2 produced by the early hydration of cement. High temperature can promote the ionization of various ions and accelerate cement hydration. The pH value of the paste also rises when the temperature increases from 20 to 35 °C. Since the reaction of aluminum powder requires a certain concentration of OH−, the rise in temperature accelerates the expansion caused by the aluminum powder gas-generating reaction. However, when the temperature was 45 or 55 °C, the pH value of the slurry behaved abnormally with increasing hydration time.
The pH value of the paste decreases first and then increases with time; moreover, the durations of the rising and falling periods become shorter as the temperature increases. When the temperature was 45 °C, the rising period was about 5 min and the falling period was about 10 min. When the temperature rose to 55 °C, the rising period lasted only 1 min and the falling period only 4 min. All these phenomena indicate that a mutation occurs in the system at high temperature; this mutation causes an abnormal change in the pH value of the paste over time, which in turn causes an abnormal expansion due to the aluminum powder gas-generating reaction. There are mainly three reasons for the reduction of the pH value at high temperature.

One reason is that the crystallization of Ca(OH)2 consumes some OH−. High temperature greatly accelerates the dissolution rate of the cement minerals, so that the concentration of Ca(OH)2 in the solution increases rapidly. However, since the solubility of Ca(OH)2 decreases as the temperature increases, the Ca(OH)2 in the solution becomes saturated promptly; it then crystallizes and precipitates. Consequently, the induction stage of cement hydration is shortened, and the acceleration stage of cement hydration quickly begins.

Another reason is that the rapid precipitation of aluminate hydrates such as AFt results in a decrease of the pH value. As shown in Equations (2) and (3), 4 mol of OH− are consumed for every 1 mol of AFt or aluminate hydrate produced; as a result, the pH value of the paste becomes lower. The reaction of Equation (4) generally takes place at a high concentration of Ca(OH)2. C4AH13 is a hexagonal flake hydrate, which usually turns into AFt promptly; however, a high concentration of Ca(OH)2 will stabilize it [18], and a high temperature obviously makes this much more likely. The reaction of Equation (4) usually results in the flash setting of cement.
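In standard cement-chemistry notation, reactions of the type referred to in Equations (2)-(4) can be written as follows (a plausible reconstruction consistent with the stated stoichiometry, not necessarily the authors' exact equations):

6\,\mathrm{Ca^{2+}} + 2\,\mathrm{Al(OH)_4^-} + 4\,\mathrm{OH^-} + 3\,\mathrm{SO_4^{2-}} + 26\,\mathrm{H_2O} \rightarrow \mathrm{Ca_6[Al(OH)_6]_2(SO_4)_3\cdot 26H_2O}\ (\mathrm{AFt})

4\,\mathrm{Ca^{2+}} + 2\,\mathrm{Al(OH)_4^-} + 4\,\mathrm{OH^-} + \mathrm{SO_4^{2-}} + 6\,\mathrm{H_2O} \rightarrow \mathrm{Ca_4[Al(OH)_6]_2(SO_4)\cdot 6H_2O}\ (\mathrm{AFm})

4\,\mathrm{Ca^{2+}} + 2\,\mathrm{Al(OH)_4^-} + 6\,\mathrm{OH^-} + 6\,\mathrm{H_2O} \rightarrow 4\mathrm{CaO}\cdot\mathrm{Al_2O_3}\cdot 13\mathrm{H_2O}\ (\mathrm{C_4AH_{13}})

The first two reactions each consume 4 mol of OH− per mole of product, and the third requires a high Ca(OH)2 (hence OH−) concentration, in line with the discussion above.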
The third reason is that high temperature accelerates cement hydration, and a series of physical and chemical effects lead the emulsified asphalt to flocculate, demulsify and form floccules (visible to the naked eye). The floccules adsorb a large amount of water and ions (a massive amount of water exudes from a floccule when squeezed by hand), and the cement particles are wrapped by asphalt, which prevents further dissolution of the cement minerals, thereby causing a decrease in the pH value of the paste.

Analysis of Hydration Phase in CAM-I

It can be seen from the above analysis that some mutation may occur in the system at high temperature, which causes the flocculation and demulsification of the emulsified asphalt and the abnormal changes of the pH value and expansion ratio of CAM-I paste. In order to clarify the micro-mechanism, CAM-I samples (20, 35, 45 and 55 °C) hydrated for 5 min were analyzed by infrared spectroscopy (IR), as shown in Figure 12. Clearly, 2853.1 cm−1 is the asymmetric stretching vibration peak of −CH3; 2924.0 cm−1 is the symmetric stretching vibration peak of −CH2; 2360.4 cm−1 is the stretching vibration peak of the quaternary ammonium salt N+; both the alkyl and quaternary ammonium groups are associated with the emulsifiers of the emulsified asphalt. The peak around 3400 cm−1 is related to the stretching vibration of −OH (cement hydrates mostly contain −OH groups; among them, tobermorite is at 3440-3460 cm−1, hard xonotlite at 3420-3460 cm−1, AFt at 3420 cm−1, AFm at 3480 cm−1 and hydrated calcium sulphate at 3400 cm−1). The peak at 3600-3700 cm−1 is mainly related to aluminate hydrates [19].
Figure 12 shows that both the asymmetric stretching vibration peak of −CH3 and the symmetric stretching vibration peak of −CH2 increase significantly as the temperature increases; that is, the content of emulsifier-related substances in the sample increased. This indicates that the emulsifier adsorbed on the surface of the asphalt particles is dispersed in the paste (i.e., the emulsified asphalt may demulsify). In addition, a peak associated with aluminate gel appeared in the 45 and 55 °C samples, which indicates that high temperature promotes the reactions associated with the formation of aluminate hydrates [20]. The CAM-I samples (curing temperatures of 20 and 55 °C) were analyzed by scanning electron microscopy (SEM) and energy-dispersive spectrometry (EDS) at a curing age of 10 days. In the SEM and EDS photographs of the sample cured at 20 °C (Figure 13a), many loose needle-shaped hydration products were observed, and the EDS analysis of the spicules indicated that their main elements were Ca, S and Al, so the needle-shaped hydration products were ettringite (AFt) [19]. In the SEM and EDS analysis of the CAM-I samples cured at 55 °C (Figure 13b), the cement hydration products could not be seen clearly except for Ca(OH)2, and the asphalt film was clearly observed (Fu [13], Tian [21] and Tyler [22] observed this microstructure in their studies). Therefore, high temperature accelerated the demulsification of the emulsified asphalt in the paste.
The emulsifier used in the emulsified asphalt is similar to traditional calcium lignosulfonate ("wood calcium") and naphthalene-based water reducers, which are ionic surfactants adsorbed on the surface of the asphalt particles, giving them positive charges. The stability of the emulsified asphalt is mainly related to the adsorption strength of the emulsifier on the surface of the asphalt particles and to the concentration of the emulsifier in the solution. The hydrolysis reaction of C3A after adding water at high temperature is as follows [20] (the standard dissolution form):

C3A + 6H2O → 3Ca2+ + 2Al(OH)4− + 4OH−

Al(OH)4− can be regarded as aluminum hydroxide gel with OH− adsorbed onto it. Corstanje et al. [23] proposed that amorphous Al(OH)3 forms on the surface of C3A. Skalny [24] found that an aluminum-rich layer exists on the surface of the C3A particles and retards the hydration of C3A. Barnes [20] considered the aluminum-rich layer to be a coprecipitate of Ca(OH)2 and Al(OH)3, or Al(OH)3 alone. Zhang et al. [14] found that the adsorption of emulsified asphalt by cement particles increases at high temperatures. Additionally, the IR analysis above shows that the emulsifier content in the paste increased at high temperatures. Therefore, a high temperature accelerated the hydration of C3A and produced negatively charged aluminate hydrates that adsorb emulsifiers and emulsified asphalt particles. The emulsifier adsorption layer on the surface of the asphalt particles became loose, the electric double layer on the particle surface was changed [25] and the electrostatic repulsion was weakened, so the asphalt particles gradually aggregated, flocculated and then demulsified. (The macroscopic manifestations are the sudden loss of fluidity of the CAM-I paste and the abnormal expansion ratio.) The demulsified asphalt formed a film and adsorbed a large number of ions, wrapping the surfaces of the cement particles and preventing further dissolution of the cement clinker and diffusion of water [13,21]. On one hand, this reduced the concentration of OH− in the cement paste, thereby lowering the pH value. On the other hand, at high temperature the absorption peaks of the emulsifier and of the aluminates appeared simultaneously in the IR spectrum of the sample hydrated for 5 min. The cement hydration rate was also slowed, so that no cement hydration products other than Ca(OH)2 could be seen in the 55 °C sample at the 10-day mark; moreover, when EDS analysis was performed on the surface, the peaks of elements other than Ca were very weak. Figure 16a presents a schematic illustration of the demulsification of emulsified asphalt at high temperature.

Effect of Superplasticizer on the Fluidity and Expansion Rate of CAM-I

From the above analysis, it can be seen that high temperature accelerates the hydration of C3A and generates more aluminate hydrates, which adsorb emulsifiers and lead to the demulsification of the emulsified asphalt, so that the paste loses its fluidity. Therefore, the best way to solve the problem of sudden fluidity loss and abnormal expansion rate is to weaken the adsorption capacity of the aluminate hydrates toward emulsifiers and asphalt particles. In this study, superplasticizer (as a percentage of the cementitious material mass) was added to the CAM-I paste to study its effect on the working time and expansion ratio of the paste, based on the competitive adsorption principle [26-28].
As shown in Figure 14, at 45 °C the flow time of the paste without superplasticizer increases from 23.2 s to 62.5 s within 10 min, after which the paste loses fluidity. However, after adding 0.1% superplasticizer the paste maintains high fluidity within 60 min and can be used for construction. When the superplasticizer dosage is increased to 0.3%, the flow time of the paste stays within 18-26 s for 60 min. During the first 0-10 min, the fluidity of the paste with 0.5% superplasticizer was lower than that with 0.3%; this phenomenon was also found in previous studies [29] and has been interpreted as an "aftereffect of superplasticizer". As shown in Figure 15, the volume of the paste without superplasticizer eventually shrank, whereas the expansion ratio of the paste improved greatly, by 0.55%, when the superplasticizer dosage was 0.3%.

At a high temperature (about 45 °C), the emulsifier is adsorbed by the aluminate hydrates formed in the early stage of cement hydration, resulting in system instability. The emulsified asphalt demulsifies and forms a film on the surface of the cement particles, which prevents further cement hydration (Figure 13b). The superplasticizer molecules are adsorbed on the surfaces of the aluminate hydrates [30]. Many long molecular chains form an "isolation zone" between the emulsifiers and the cement minerals that hinders the adsorption of the emulsifiers (Figure 16b). Therefore, the emulsified asphalt remains stable and the cement particles can hydrate further. The fluidity and expansion ratio of the CAM-I paste are thereby improved.

Conclusions

Based on the results of this experiment and the discussions above, the following conclusions can be drawn.

(1) When the temperature was 20-35 °C, the flow time of the CAM-I paste increased only slowly with time, but the paste thickened and flocculated rapidly when the temperature was above 45 °C.
(2) When the temperature increased from 20 °C to 35 °C, the expansion ratio of the CAM-I paste increased with temperature, but above 45 °C the expansion ratio decreased with increasing temperature.

(3) High temperature accelerates the formation of aluminate hydrates in the cement paste; these adsorb the emulsifier, leading to demulsification of the emulsified asphalt. The asphalt film then wraps the surfaces of the cement particles, which causes the CAM-I paste to lose fluidity quickly; the aluminum powder gas-generating reaction is inhibited, resulting in an abnormal change of pH value and a decrease of the expansion ratio of the paste.

(4) The superplasticizer can stabilize the emulsified asphalt by forming an "isolation zone" between the emulsifier and the cement particles, which improves the working time and expansion ratio of the CAM-I paste.

Author Contributions: Data curation, H.A.U.; funding acquisition, G.L.; investigation, C.M.; resources, H.L. and Y.X.; writing-original draft, H.Z.; writing-review and editing, X.Z. and X.L. All authors have read and agreed to the published version of the manuscript.
2020-04-08T19:10:41.056Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "efa9a56f8fe9fb3125daeae1a3ca79617e276cea", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/13/7/1655/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a5a8705ddf591c3192c982107418cba02469a20f", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
237447978
pes2o/s2orc
v3-fos-license
Determinants of prognosis in geriatric patients followed in a respiratory ICU: infection or malnutrition?

Abstract

Severity of illness, age, malnutrition, and infection are important factors determining intensive care unit (ICU) survival. The aim of this study was to determine the relations between the Geriatric Nutritional Risk Index (GNRI), the C-reactive protein/albumin ratio (CAR), and the prognosis and mortality of geriatric patients (age ≥65 years) admitted to the intensive care unit. The study was retrospectively registered (approval date 10/15/2020, approval number 697). Between January 1, 2018 and December 31, 2019, 413 geriatric patients were admitted to the ICU. The patients were divided into three groups according to their age. Age group, gender, Charlson comorbidity index, intensive care scores (Acute Physiology And Chronic Health Evaluation II and Sequential Organ Failure Assessment), infection markers (white blood cell count, procalcitonin, and CAR levels), and malnutrition tools (body mass index, Nutrition Risk in Critically ill score, and GNRI score) were analyzed retrospectively for each patient. Length of stay (LOS) in the ICU, length of stay in hospital, and 30-day mortality were also recorded. A total of 403 geriatric patients were included in the study. Forty-nine (12.3%) patients had a history of malignancy, and 272 (67.5%) patients had Chronic Obstructive Pulmonary Disease (COPD) as a comorbidity. There was no difference in mortality between age groups. In patients with mortality, body mass index, the rate of COPD history, GNRI, hospital length of stay, and albumin were significantly lower; the malignancy comorbidity rate, inotrope use, modified Nutrition Risk in Critically ill score, mechanical ventilation duration, LOS ICU, Sequential Organ Failure Assessment, Acute Physiology And Chronic Health Evaluation II, Charlson comorbidity index, C-reactive protein, procalcitonin, and CAR were significantly higher. Both malnutrition and infection affect mortality in geriatric patients in intensive care. The GNRI is better than CAR at predicting mortality.

Introduction

Nowadays, with the increase in average life expectancy and the decrease in birth rates, the elderly population in society is growing, and the proportion of geriatric patients in intensive care units is increasing day by day. The number of persons aged 60 or over is expected to more than double by 2050 compared with 2017. [1] In our country, the proportion of elderly people was 5.3% in 2000. [2] Severity of illness, age, malnutrition, and infection are important factors determining ICU survival. There are many comorbidities that increase the mortality rate in the geriatric population. [3] Additionally, the incidence of sepsis increases with age, and age over 80 is associated with extremely high mortality rates. [4] C-reactive protein (CRP) is an acute phase reactant and indicates inflammation due to infection. Albumin is an indicator of malnutrition, and the CRP/albumin ratio (CAR) has recently been evaluated as a prognostic marker for mortality in sepsis. [5,6] Elderly patients are vulnerable to infection, and nutritional condition is a very important predictive factor. While elderly patients hospitalized in intensive care are treated for their primary disease, malnutrition may be overlooked. In evaluating the nutritional status of patients, all clinical findings should be taken into consideration; both anthropometric methods and screening tools should be used.
The Geriatric Nutritional Risk Index (GNRI) is a tool to determine the nutritional status of elderly people based on their albumin level, current weight, and ideal weight. [7] Although it is known that malnutrition and infection adversely affect the prognosis of geriatric patients, [8] it has not been reported which one has the greater effect. In this study, we aimed to determine the possible relations between GNRI, CAR, and the prognosis and mortality of geriatric patients admitted to the intensive care unit.

Materials and methods

The study was designed retrospectively and initiated after approval from the Medical Specialization Training Board of Ataturk Chest Diseases and Thoracic Surgery Training and Research Hospital (approval date and number: 15/10/2020, 697). Between January 1, 2018 and December 31, 2019, 561 consecutive patients were admitted to the respiratory ICU; 413 of them were geriatric patients (age ≥65 years). Patients with hyponatremia (<135 mmol/L), hypernatremia (>145 mmol/L), severe liver disease, or severe kidney failure (creatinine clearance < 15 mL/min) were excluded from the study to rule out albumin changes not related to malnutrition (n = 7). Patients with missing data (n = 3) were also excluded (Fig. 1). Thus, 403 patients were included in the study. The patients were divided into three groups according to their age: 65 to 74 years (early elderly), 75 to 84 years (advanced elderly), and 85 years and over (very advanced elderly). The age group, gender, Charlson comorbidity index (CCI), intensive care scores (APACHE II and SOFA), infection markers (white blood cell count, procalcitonin, and CAR), and malnutrition tools (body mass index, mNUTRIC score, and GNRI) were analyzed for each patient. Length of stay in the ICU (LOS ICU), length of stay in hospital (LOS H), and 30-day mortality were also recorded. For the GNRI calculation, if the actual body weight was higher than the ideal body weight, the ratio (actual body weight/ideal body weight) was taken as 1.

Statistical analysis

Data analyses were performed using SPSS for Windows, version 22.0 (SPSS Inc., Chicago, IL). Whether the distribution of continuous variables was normal was determined by the Kolmogorov-Smirnov test, and the Levene test was used to evaluate the homogeneity of variances. Unless specified otherwise, continuous data are described as mean ± standard deviation or median (minimum-maximum), and categorical data as number of cases (%). Categorical variables were compared using Pearson's chi-square test or Fisher's exact test. Differences in non-normally distributed variables between two independent groups were compared with the Mann-Whitney U test. Degrees of relation between variables were evaluated with point-biserial and Spearman correlation analyses. First, univariate logistic regression was performed with risk factors thought to be related to mortality. Variables with a P value below .25 in the univariate logistic regression analysis were included in the multivariate logistic regression analysis. The significance of each independent variable in the model was analyzed with the Wald statistic, and Nagelkerke R2 was used to evaluate how much of the dependent variable the independent variables explained. Model fit was evaluated with the Hosmer-Lemeshow goodness-of-fit test. ROC curve analysis was used to determine the cut-off points for mortality. A P value < .05 was accepted as significant in all statistical analyses.
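As a worked illustration of how the two indices are obtained from the collected variables, a minimal sketch follows. It assumes the standard formula of the index introduced in [7] together with a Lorentz ideal-weight estimate; neither formula is restated in the text above, so both are assumptions here.

    def ideal_weight_lorentz(height_cm, male):
        # Lorentz formula (assumed): divisor 4 for men, 2.5 for women
        return (height_cm - 100) - (height_cm - 150) / (4.0 if male else 2.5)

    def gnri(albumin_g_per_l, weight_kg, ideal_weight_kg):
        # Standard GNRI; the weight ratio is capped at 1, as stated in the Methods
        ratio = min(1.0, weight_kg / ideal_weight_kg)
        return 1.489 * albumin_g_per_l + 41.7 * ratio

    def car(crp_mg_per_l, albumin_g_per_dl):
        # C-reactive protein/albumin ratio; unit conventions vary between studies
        return crp_mg_per_l / albumin_g_per_dl

    # Example: a 165 cm woman weighing 52 kg, albumin 3.0 g/dL (= 30 g/L), CRP 90 mg/L
    iw = ideal_weight_lorentz(165, male=False)
    print(round(gnri(30.0, 52.0, iw), 1), round(car(90.0, 3.0), 1))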
Results

A total of 403 geriatric patients were included in the study; 229 (56.8%) were male and 174 (43.2%) were female. Forty-nine (12.3%) patients had a history of malignancy, and 272 (67.5%) had COPD as a comorbidity. The mean GNRI score was 84.45 ± 9.48 and the mean modified Nutrition Risk in Critically ill (mNUTRIC) score was 5.9 ± 1.54. The intensive care severity scores of these patients were a mean APACHE II score of 22.81 ± 7.58 and a mean SOFA score of 6.05 ± 2.5. The mean CAR of all patients was 20.57 ± 36.12. There was no difference in mortality between age groups, and there was no statistically significant difference in age, gender, or white blood cell count between geriatric ICU patients with and without mortality. In patients with mortality, BMI, the rate of COPD history, GNRI, LOS H, and albumin were significantly lower than in those without mortality. The malignancy comorbidity rate, inotrope use, mNUTRIC score, mechanical ventilation (MV) duration, LOS ICU, SOFA, APACHE II, CCI, CRP, procalcitonin, and CAR were significantly higher in those with mortality than in those without (Table 1 and Fig. 2).

According to the correlation results, there is a weak but statistically significant negative correlation between GNRI and inotrope use, MV duration, LOS H, and LOS ICU: as GNRI decreases, inotrope use, MV duration, LOS H, and LOS ICU increase. There is a weak but statistically significant positive correlation between the mNUTRIC score and inotrope use, MV duration, and LOS ICU: as the mNUTRIC score increases, inotrope use, MV duration, and LOS ICU increase. There is no statistically significant relation between the mNUTRIC score and LOS H (Table 2). There is a weak but statistically significant positive correlation between CAR and inotrope use and MV duration: as CAR increases, the durations of inotrope support and MV increase. There is no statistically significant relation between CAR and LOS ICU or LOS H. There is a weak but statistically significant positive correlation between procalcitonin and inotrope use, MV duration, and LOS ICU: as procalcitonin increases, inotrope use, MV duration, and LOS ICU increase. There is no statistically significant relation between white blood cell count and the prognostic parameters (Table 2). There is a statistically significant positive correlation between the ICU severity scores (APACHE II, SOFA) and inotrope use, MV duration, and LOS H (Table 2).

To determine the factors affecting mortality, univariate logistic regression analysis was applied first. Variables with P < .25 in the univariate analysis were included in the multivariate logistic regression analysis, using the backward LR method. The results of Step 11, the last step of the analysis, are given in Table 3. According to the results, CAR, SOFA, CCI, procalcitonin, length of hospital stay, MV, and COPD affect mortality: increases in CAR, SOFA, CCI, and procalcitonin, decreases in hospital stay and MV duration, and the absence of COPD increase mortality (Table 3).

ROC curve analysis was applied in order to provide cut-off values for the success of GNRI and CAR in predicting mortality. To determine which value should be taken as the cut-off, each sensitivity and specificity value given by the analysis was examined and the optimum point was chosen. With a sensitivity of 71% and a specificity of 61.3%, the cut-off value was found to be 85.79. As a result, the risk of mortality was higher in cases with a GNRI of 85.79 and below (Fig. 3).
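The optimum-point selection described above (examining each sensitivity/specificity pair) is commonly formalized with Youden's J index. The following sketch, using hypothetical data rather than the study's, illustrates the procedure:

    import numpy as np
    from sklearn.metrics import roc_curve

    died = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0])           # hypothetical 30-day mortality
    gnri = np.array([74, 92, 80, 95, 88, 71, 90, 83, 87, 93])  # hypothetical GNRI values
    fpr, tpr, thr = roc_curve(died, -gnri)  # negate: lower GNRI means higher risk
    j = tpr - fpr                           # Youden's J = sensitivity + specificity - 1
    cutoff = -thr[np.argmax(j)]             # risk is higher at or below this GNRI
    print(cutoff)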
The analysis shows that CAR can discriminate mortality risk; that is, it can classify patients correctly at a rate of 61.6% (moderate). To determine which value should be taken as the cut-off, each sensitivity and specificity value given by the analysis was again examined and the optimum point was chosen. With a sensitivity of 69.9% and a specificity of 49.8%, the cut-off value was found to be 3.13. As a result, the risk of mortality was higher in cases with a CAR of 3.13 and above (Fig. 3). Among geriatric patients hospitalized in the intensive care unit, sepsis was most common in patients with malignancy comorbidity, followed by those with pneumonia, and least common in those with both COPD and malignancy comorbidity; this difference was statistically significant (Table 4).

Discussion

This study showed that while age and gender did not affect mortality in geriatric patients treated in a respiratory intensive care unit, both malnutrition and infection did. However, low GNRI significantly increases inotrope use, MV duration, LOS H, and LOS ICU, while high CAR levels significantly increase only inotrope use and MV duration; GNRI is also better than CAR at predicting mortality. Some studies have found that chronological age has an impact on morbidity and mortality, while others have suggested that biological age and other factors are more relevant. [10,11] Brunner-Zeigler et al [12] determined that mortality increases with age but that the physiological condition of the patients has a greater effect on mortality. In contrast, we found that age did not affect mortality in geriatric ICU patients. The value of CAR as a predictor of mortality is controversial. It has been stated that CAR is more effective in predicting mortality than albumin or CRP alone. [13] In a study by Cirik et al [14] evaluating the clinical benefit of CAR in predicting 30-day mortality in critically ill patients, CAR was independently associated with 30-day mortality, but APACHE II and CCI predicted mortality better than CAR. Although one study showed that increased CAR was associated with increased mortality in intensive care patients, it concluded that its sensitivity and specificity were not sufficient to predict mortality. [15] Oh et al [6] found that CAR at admission was an important predictor of mortality in geriatric patients. Similarly, in our study, procalcitonin, CRP, and CAR, which are predictors of sepsis, were significantly higher in patients with mortality. CAR affected inotrope use and MV duration but did not affect LOS ICU or LOS H, while GNRI affected both. Malnutrition is common in geriatric patients, with significant effects on morbidity and mortality. [16] Sepsis creates life-threatening organ dysfunction secondary to infection. [17] In a study conducted with geriatric patients, mortality increased significantly in patients with GNRI < 92, and CRP levels were related to low GNRI. [18] Some studies have reported that low BMI and albumin levels are poor prognostic factors for mortality, [3,19] and consistently, in our study, BMI, GNRI, and albumin levels were significantly lower in patients with mortality. Other studies support our findings, suggesting that low GNRI levels due to malnutrition reflect suppressed protein synthesis and the onset of a catabolic process. [20,21] The nutritional status of geriatric patients is evaluated with scores such as GNRI and mNUTRIC.
In this study, we determined that low GNRI and high mNUTRIC scores have a significant effect on mortality. We also found that a history of malignancy (rather than COPD) and high CCI, APACHE II, and SOFA scores significantly increased mortality, similar to Kao et al, [22] who suggested that CCI and SOFA scores were correlated with mortality. This study has several limitations to be considered: it has a retrospective design, it was conducted at a single center, and the sample size was relatively small.

Conclusion

Both malnutrition and infection affect mortality in geriatric patients in intensive care. However, low GNRI increases inotrope use, MV duration, LOS H, and LOS ICU, while high CAR levels significantly increase only inotrope use and MV duration; moreover, GNRI is better than CAR at predicting mortality.
2021-09-09T13:16:22.537Z
2021-09-10T00:00:00.000
{ "year": 2021, "sha1": "0f296f903b12cc06118f0c9b0f489fa48dafccaa", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1097/md.0000000000027159", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0f296f903b12cc06118f0c9b0f489fa48dafccaa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
40011408
pes2o/s2orc
v3-fos-license
The Viscous Lengths in Hydrodynamic Turbulence are Anomalous Scaling Functions

It is shown that the idea that scaling behavior in turbulence is limited by one outer length L and one inner length η is untenable. Every n'th order correlation function of velocity differences F_n(R_1, R_2, ...) exhibits its own cross-over length η_n to dissipative behavior as a function of, say, R_1. This length depends on n and on the remaining separations R_2, R_3, .... One result of this Letter is that when all these separations are of the same order R this length scales like η_n(R) ∼ η(R/L)^{x_n} with x_n = (ζ_n − ζ_{n+1} + ζ_3 − ζ_2)/(2 − ζ_2), with ζ_n being the scaling exponent of the n'th order structure function. We derive a class of scaling relations including the "bridge relation" for the scaling exponent of dissipation fluctuations, μ = 2 − ζ_6.

The aim of this Letter is to expose the fact that the notion of the dissipative length in hydrodynamic turbulence is a rich and interesting concept whose complexity exceeds the expectations of established models and standard theories [1]. Indeed, for a few decades the thinking about the universal small-scale structure of turbulence was dominated by Kolmogorov's picture of an energy cascade through an "inertial interval" which is limited on one side by the integral scale of turbulence L and on the other side by the Kolmogorov viscous scale η = (ν³/ǭ)^{1/4}, where ν and ǭ are the fluid's kinematic viscosity and the mean energy flux in the turbulent flow, respectively. During the last decade there has been a growing concern about the inability of Kolmogorov's theory to cope with the increasing experimental evidence for multiscaling (or multifractal) behaviour of higher-order structure functions. Together with the concern about the statistical theory there arose a realization that the uniqueness of the viscous length is suspicious. Paladin and Vulpiani [2], and also Frisch and Vergassola [3], used the multifractal model of turbulence to assess the characteristic viscous lengths associated with the higher-order structure functions of velocity differences

S_{2n+1}(R_1) = ⟨ [R̂_1 · w(r_1|r'_1)] |w(r_1|r'_1)|^{2n} ⟩ ,   (1)

where w(r_1|r'_1, t) ≡ u(r'_1, t) − u(r_1, t), u(r, t) is the velocity field of the fluid, R_1 ≡ r'_1 − r_1, and R̂_1 ≡ R_1/R_1. In homogeneous and locally isotropic turbulence S_n(R) is a function of the magnitude of R, and the viscous length is that value of R at which the functional dependence of S_n(R) changes from a non-trivial power law S_n(R) ∼ R^{ζ_n} to a trivial power law that stems from a Taylor expansion of the velocity differences, S_n(R) ∼ R^n. The multifractal model leads to the prediction that this length depends on the order n. In this Letter we argue that a proper discussion of cross-overs to dissipative behaviour requires the analysis of functions richer than structure functions. Firstly, we state that the fundamental object to analyze is the n-point correlation of velocity differences, which is a rank-n tensor,

F_n(r_1, r'_1; ...; r_n, r'_n) = ⟨ w(r_1|r'_1) w(r_2|r'_2) ... w(r_n|r'_n) ⟩ .   (3)

All the separations R_i ≡ |r'_i − r_i| and r_ij ≡ |r_i − r_j| are within the "inertial range". It is generally accepted that this correlation function is a homogeneous function of its arguments, i.e.

F_n(λr_1, λr'_1, ...) = λ^{ζ_n} F_n(r_1, r'_1, ...) .   (4)

It should be understood that quantities like (1) are obtained from (3) by fusing some coordinates together. (In this case all r_ij → 0 and all R_i → R.)
In this process of fusion one crosses the viscous scale, and it is important to understand how to do this. Our discussion will not call for any ad-hoc model of turbulence. It will be based on two solid building blocks, one being the Navier-Stokes equations, and the other the fusion rules that were derived recently. The fusion rules appear naturally in the analytic theory of Navier-Stokes turbulence [4-7] and of passive-scalar turbulent advection [7-9], and they determine the analytic structure of the n-order correlation functions (3) when a group of coordinates tends towards each other. The fusion rules were derived in [7] for systems in which Eq. (4) holds with universal scaling exponents (i.e., the scaling exponents do not depend on the detailed form of the driving of the turbulent flows). The fusion rules address the asymptotic properties of F_n when a group (or groups) of coordinates tend towards a common coordinate within each group, while all the other coordinates remain separated by a large distance R. There are two particular examples of fusion rules that we will employ in this Letter. The first pertains to the fusion of one pair of points. When the distance between one pair is small, R_1 ∼ ρ, and the separations between all the other coordinates are much larger, R_i ∼ R for i ≠ 1, then to leading order in ρ/R

F_n ∝ S_n(R) S_2(ρ)/S_2(R) .   (5)

The second situation pertains to the case in which we have two groups of fusing coordinates separated by a large distance R. When there is a group of p points separated by a typical distance ρ_1, and a group of n − p points separated by a typical distance ρ_2, with a large distance R between the groups, then in scaling form

F_n ∝ S_n(R) (ρ_1/R)^{ζ_p} (ρ_2/R)^{ζ_{n−p}} .   (6)

These forms hold as long as ρ, ρ_1 and ρ_2 are in the inertial range. The Navier-Stokes equations for an incompressible velocity field u(r, t) may be written in the form

∂u(r, t)/∂t = −P̂ (u·∇) u + ν∇²u .   (7)

Here ν is the kinematic viscosity and P̂ is the transverse projector. Given the equation of motion we can take the time derivative of Eq. (3). We find ∂F_n/∂t = D_n + J_n and, considering the stationary state in which ∂F_n/∂t = 0, the balance equations

D_n + J_n = 0 .   (8)

The term J_n originates from the viscosity term in (7),

J_n = ν Σ_j (∇²_{r_j} + ∇²_{r'_j}) F_n .   (9)

The term D_n stems from the nonlinear term, and it needs a bit of algebra to bring it to the exact form (10), an integral over an intermediate coordinate r whose kernel contains the projection operator. We are going to argue now that when all the separations R_j are of the same order of magnitude R, the interaction term has a very simple evaluation,

D_n ∼ S_{n+1}(R)/R .   (12)

To this aim we need to prove that the integral is local in the sense that it converges in the ultraviolet and in the infrared. As the coordinate r is being integrated over, the most dangerous ultraviolet contribution comes from the region of small r. In this region the projection operator can be evaluated as 1/r³. Other coalescence events of r with other coordinates contribute less divergent integrands, since the projection operator does not become singular there. When r becomes small, there are two possibilities: (i) r_j ≠ r_k and (ii) r_j = r_k. In the first case the correlation function itself is analytic in the region r → 0, and we can expand it in a Taylor series, Const + B·r + ..., where B is an r-independent vector. The constant term is annihilated by the projection operator. The term linear in r vanishes under the dr integration due to r → −r symmetry. The next term, which is proportional to r², is convergent in the ultraviolet. In the second case we have a velocity difference across the length r.
Accordingly we need to use the fusion rule (6), and we learn that the leading contribution is proportional to r^{ζ_2}. This is sufficient for convergence in the ultraviolet. We note that the derivative with respect to r_j cannot be evaluated as 1/r when r_j = r_k; rather, it is evaluated as the inverse of the distance between r_j and the nearest coordinate in the correlation function. To understand the convergence of D_n when the integration variable r becomes very large, we consider the relevant geometry shown in Fig. 1. There is one velocity difference across the coordinates r_j − r and r'_j − r (shown on the right of the figure), (n − 1) velocity differences across coordinates that are all within a ball of radius R (on the left of the figure), and one velocity difference across the large distance r, which is much larger than R. In the notation of this figure the leading-order contribution for large r is obtained from the fusion rule (6) for the situation on the right and (5) for the geometry on the left. The resulting evaluation of the leading term is r^{ζ_{n+1}} (R_j/r)^{ζ_2} (R/r)^{ζ_{n−1}}. On the face of it, this term is nearly dangerous. For any anomalous scaling the integral converges, since ζ_{n+1} ≤ ζ_{n−1} + ζ_2 by the Hölder inequalities, but this convergence seems slow. However, the situation is in fact much safer. If we take into account the precise form of the second-order structure function in the fusion rules, we find that the divergence with respect to r_j translates in fact into ∂S^{βγ}_2(R_j)/∂R_{jγ}, which is zero due to incompressibility. The next-order term is convergent even for simple (K41) scaling. This completes the proof of locality of (10). The conclusion is that the main contribution to the integral in (10) comes from the region r ∼ R. Therefore the integral can be evaluated by straightforward power counting, leading to (12). It should be stressed that a more detailed analysis demonstrates that the evaluation does not change when the separations between coordinates that do not involve velocity differences (i.e., separations like r_jk but not R_j) go to zero. The evaluation of the quantity J_n is more straightforward. When all the separations R_j and r_ij are of the same order R, the correlator in (9) is evaluated simply as S_n(R), and the Laplacian is then of the order of 1/R². We note that when ν → 0 (which is the limit of infinite Reynolds number Re), this term becomes negligible compared to D_n: the ratio J_n/D_n is evaluated as νS_n(R)/[R S_{n+1}(R)], which for fixed R vanishes in the limit ν → 0. Thus the "balance equation" becomes a homogeneous integro-differential equation, D_n = 0, which may have scale-invariant solutions with anomalous scaling exponents ζ_{n+1} ≠ (n + 1)/3. It should be stressed that the evaluation (12) remains correct for every term in D_n, but the various terms cancel to give zero in the homogeneous equation, provided that the scaling exponent ζ_n is chosen correctly. To make this important point clear we exemplify it with the simple case n = 2, for which D_n can be greatly simplified. Consider the scalar object F_2(r_1, r'_1, r_2, r'_2) = ⟨w(r_1|r'_1) · w(r_2|r'_2)⟩. The viscous term in the scalar balance equation is exactly J_2 = ν(∇²_{r_1} + ∇²_{r'_1} + ∇²_{r_2} + ∇²_{r'_2})F_2, and D_2 is the corresponding sum of projected nonlinear terms. When all the separations are of the order of R we can see explicitly that J_2 ∼ νS_2(R)/R², which is much smaller than each term in D_2.
Considering the scale-invariant solution S_3(R) = A R^{ζ_3}, where A is a dimensional coefficient, one finds that the solution of D_2 = 0 requires the unique choice ζ_3 = 1, which is the known exponent for S_3 [1]. The coefficient A is then determined by ǭ, the mean energy dissipation per unit mass and unit time. There is a cross-over from the scale-invariant solution of the homogeneous equation to dissipative solutions when J_2 becomes comparable to any of the terms in D_2. This happens when at least one of the separations appearing in D_2 becomes small enough. Denoting the smallest separation as r_m, we evaluate J_2 ∼ νS_2(r_m)/r_m². From this we can estimate, using the balance equation, S_2(r_m) ∼ [S_3(R)/(νR)] r_m² ∼ ǭ r_m²/ν. In the inertial range we have S_2(r) ∼ (ǭr)^{2/3} (r/L)^{ζ_2 − 2/3}. The viscous scale η_2 for the second-order structure function is then determined by finding where these two expressions are of the same order of magnitude, i.e.

ǭ η_2²/ν ∼ (ǭ η_2)^{2/3} (η_2/L)^{ζ_2 − 2/3} .

Using the outer velocity scale U_L we estimate ǭ ∼ U_L³/L and end up with

η_2 ∼ L Re^{−1/(2 − ζ_2)} ,   Re ≡ U_L L/ν .

Note that this result is not in agreement with the ad-hoc application of the multifractal model [1-3], which predicts η_2 ∼ L Re^{−2/(2+ζ_2)}. A similar mechanism operates in the general case n ≠ 2. As long as all the separations are in the inertial interval, J_n is negligible. When one separation, e.g. r_12, diminishes towards zero and all the other separations are of the order of R, the internal cancellations leading to the homogeneous equation D_n = 0 disappear, and D_n is evaluated as in (12). The term J_n is now dominated by one contribution that can be written in short-hand notation as ν∇²_1 F_n(r_12, {R}). We can solve for F_n(r_12, {R}) in this limit:

F_n(r_12, {R}) ∼ [S_{n+1}(R)/(νR)] r_12² .   (17)

On the other hand we have, from the fusion rule (6), the form of the same quantity when r_12 is still in the inertial range, i.e. F_n(r_12, {R}) ≈ S_2(r_12) S_n(R)/S_2(R). To estimate the viscous scale η_n we find where these two evaluations are of the same order. The answer is

η_n(R) ∼ η (R/L)^{x_n} ,   x_n = (ζ_n − ζ_{n+1} + ζ_3 − ζ_2)/(2 − ζ_2) .

We note that the Hölder inequalities guarantee that x_n > 0 and increases with n. We see that the viscous "length" is actually an anomalous scaling function. Next we show that in the same spirit we can derive important (and exact) scaling relations between the exponents ζ_n of the structure functions and exponents involving correlations of the dissipation field. We consider correlations of the type

K^{(p)}_n(R) ≡ ⟨ ε(r̃_1)⋯ε(r̃_p) w(r_1|r'_1)⋯w(r_n|r'_n) ⟩ ∝ R^{−μ^{(p)}_n} ,

where R is a typical separation between any pair and ε(r) ≡ ν|∇u(r)|², and we are interested in the scaling relations between the exponents μ^{(p)}_n and the exponents ζ_n. Note that μ^{(2)}_0 in this notation is the well-studied [10,11] exponent of dissipation fluctuations, which is denoted as μ. The scaling relation is almost at hand for μ^{(1)}_n. We see this by writing ⟨ε(r_1) ∏ w⟩ = ν ∇_1·∇_1' F_{n+2}(r_1, r_1'; {R}) in the limit r_1' → r_1. Using the dissipative-range result (17) for F_{n+2} we find immediately

K^{(1)}_n ∼ S_{n+3}(R)/R ,   i.e.   μ^{(1)}_n = 1 − ζ_{n+3} .   (22)

The scaling relations satisfied by μ^{(2)}_n require consideration of the second time derivative of the correlation (3). Using the fusion rules and following steps similar to those described above, we can prove that the integrals over r and r' converge. Accordingly, when all the separations are of the order of R, every term in D^{(2)}_n is evaluated as S_{n+2}(R)/R². The term J^{(2)}_n takes the form of ν² multiplied by two Laplacian operators acting on F_n. As before, when all the separations in this quantity are of the order of R, the Laplacian operators introduce factors of 1/R², and the evaluation of this quantity is J^{(2)}_n ∼ ν² S_n(R)/R⁴. Clearly this is negligible compared to typical terms in D^{(2)}_n.
The quantity B^{(2)}_n contains a cross contribution with one Laplacian operator and one nonlinear term with a projection operator. The integral is again local, and one can show that the evaluation is B^{(2)}_n ∼ νS_{n+1}(R)/R³, which is also negligible compared to typical terms in D^{(2)}_n. Now we consider the fusion of two pairs of coordinates, e.g. r_12 → 0 and r_34 → 0. As before, the cancellations in D^{(2)}_n are eliminated, and a typical term is evaluated as S_{n+2}(R)/R². The other two terms in the balance equation also become of the same order, because the Laplacian operators ∇²_1 and ∇²_3 are evaluated as r_12^{−2} and r_34^{−2} respectively. As before, we can consider the resulting balance equation as a differential equation for F_n(r_12, r_34, {R}). The leading term in this equation is 4ν²∇²_1∇²_3 F_n(r_12, r_34, {R}) ≈ B^{(2)}_n + D^{(2)}_n ∼ S_{n+2}(R)/R², which is solved by

F_n(r_12, r_34, {R}) ∼ r_12² r_34² S_{n+2}(R)/(ν²R²) .   (26)

Finally, we can write the quantities K^{(2)}_n in terms of the correlation function as ν² ∇_1·∇_2 ∇_3·∇_4 F_{n+4}(r_12, r_34, {R}) in the fused limit. Using (26) here (with n → n + 4) we end up with the evaluation

K^{(2)}_n ∼ S_{n+6}(R)/R² ,   i.e.   μ^{(2)}_n = 2 − ζ_{n+6} .   (28)

For the standard exponent μ = μ^{(2)}_0 we choose n = 0 and obtain the phenomenologically proposed "bridge relation" μ = 2 − ζ_6. To our best knowledge this is the first solid derivation of this scaling relation. In general, if we have p dissipation fields correlated with n velocity differences, the scaling exponent can be found by considering p time derivatives of (3), with the final result

μ^{(p)}_n = p − ζ_{n+3p} .   (29)

We see that Eqs. (22), (28) and (29) can be guessed if we assert that, for the sake of scaling purposes, the dissipation field ε(r) can be swapped in the correlation function with w³(r_1|r'_1)/R_1, where R_1 is the characteristic scale. This reminds one of the Kolmogorov refined similarity hypothesis. We should stress that (i) our result does not depend on any uncontrolled hypothesis, and (ii) it does not imply the correctness of the hypothesis. Our result is implied by the refined similarity hypothesis, but not vice versa.
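To make the n-dependence of the cross-over concrete, the exponents x_n can be estimated by inserting measured-like values of ζ_n. The sketch below (not part of the original Letter) uses the She-Leveque formula ζ_n = n/9 + 2[1 − (2/3)^{n/3}] as a stand-in for experimental exponents:

    # Evaluate the cross-over exponents x_n of the Letter using the
    # She-Leveque fit for zeta_n as a stand-in for measured values.
    def zeta(n):
        return n / 9.0 + 2.0 * (1.0 - (2.0 / 3.0) ** (n / 3.0))

    def x(n):
        return (zeta(n) - zeta(n + 1) + zeta(3) - zeta(2)) / (2.0 - zeta(2))

    for n in range(2, 9):
        # eta_n(R)/eta = (R/L)^{x_n}; note x_2 = 0 identically, so eta_2
        # itself serves as the reference scale eta, and x_n grows with n.
        print(n, round(zeta(n), 3), round(x(n), 4))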
2018-04-03T04:39:34.765Z
1996-07-03T00:00:00.000
{ "year": 1996, "sha1": "003c93916f967fc407293db7c522706f32d02e8b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/chao-dyn/9606018", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "2d892f49108e404b54e619c24246798a9bd1c040", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
248631893
pes2o/s2orc
v3-fos-license
The Short-Term Changes of the Sagittal Spinal Alignments After Acute Vertebral Compression Fracture Receiving Vertebroplasty and Their Relationship With the Change of Barthel Index in the Elderly

Introduction
Fragility vertebral compression fractures (VCFs) are of major concern due to aging populations worldwide; they may occur after a fall from standing height or due to severe osteoporosis, and they greatly impact the quality of life of the elderly. This study therefore determined the factors independently associated with poor functional recovery from a new VCF and with changes in sagittal spinal alignment after vertebroplasty in elderly patients with osteoporosis.

Materials and Methods
Data were collected from patients older than 70 years diagnosed with a new VCF. Logistic regression analysis was performed to determine factors independently associated with functional and radiographic status.

Results
We enrolled 8 male and 34 female patients with a mean age of 80.74 ± 8.31 years between January and July 2020. Compared with preoperative data, post-vertebroplasty lumbar sagittal alignments and functional scores improved significantly, and function recovered gradually over 12 weeks. Climbing stairs was the most affected performance indicator at the beginning of the recovery process. At each postoperative follow-up, changes in the C7-sacrum sagittal vertical axis exhibited an influence on functional recovery. Male patients were better able to move from a chair to a bed at the 2-week postoperative follow-up, and positive changes in the spino-sacral angle led to improved function in terms of stair climbing at the 6-week postoperative follow-up.

Conclusions
Vertebroplasty appeared to be effective for the functional recovery, related to improvement of sagittal spinal alignment, of elderly patients with VCFs during the first 12 postoperative weeks, which may be a critical stage of recovery for their activities of daily living. The recovery rate for stair climbing after vertebroplasty was slower than for the other functional performance indicators in our study. In addition, patients unable to demonstrate a marked improvement in sagittal alignment were likely to have ongoing impaired function and a poor prognosis after surgery.

Introduction

Osteoporosis is one of the most common disorders among elderly adults, especially elderly women. A decrease in bone mineral density (BMD) and reduced bone quality result in structural and microarchitectural destruction of the vertebral body. According to the Nutrition and Health Survey in Taiwan (2004-2008), the prevalence of osteoporosis rises to 22.57% in men and 41.17% in women after the age of 50 years. Among the complications of osteoporosis, osteoporotic vertebral compression fracture (OVCF) is a major health issue because it may severely impact the quality of life and survival of patients. 1,2 The incidence of OVCF in Europe is 12.1/1000 person-years in women and 6.8/1000 person-years in men after the age of 50. 3 Ballane et al reported higher rates of OVCF in Taiwanese women than in women in other Asian countries, 4 and Burge et al estimated that OVCF would cost insurers, the state, and healthcare providers at least US$1 billion in the United States by 2025, 5 resulting in a tremendous burden on the health care system and social security due to the increased need for long-term care facilities, hospitalization, and vertebral augmentation procedures.
Symptoms of OVCF include severe back pain, radiculopathy, and severe neurologic deficit, which may induce psychological impairment and poor social function. 6 Adequately controlling pain, correcting deformities, delaying deterioration, and treating underlying osteoporosis are key means of treating symptoms. Conservative treatments, including analgesics, bracing, and bed rest, have achieved only relatively poor outcomes in patients with OVCF, 7 and ongoing mobility and pain control problems after trauma are of great concern. 8 Conventional minimally invasive treatments, such as percutaneous vertebroplasty (VP) and kyphoplasty (KP), have become the main procedures for treating OVCF in recent years. 9,10 Both of these procedures result in effective pain relief, functional recovery, increased mobility, and a decreased need for painkillers. 11,12 Sagittal balance of the spine ensures that minimal muscle effort is required to maintain a stable standing position and is essential for maintaining normal spinal biomechanics. 13 The thoracolumbar region is the most common fracture site because of its specific biomechanical characteristics. 14 KP can correct sagittal alignment, including the T1 pelvic angle (TPA) and spino-sacral angle (SSA), because of the balloon placement, 15 but the change of sagittal alignment after VP and its correlation with functional recovery have rarely been addressed in the literature. In this study, we investigated the change of sagittal alignment after percutaneous VP, evaluating the outcomes with the Barthel index (BI). The Oswestry Disability Index is one of the most commonly used tools for quantifying disability related to lower back pain. 16 However, this questionnaire focuses on spondylosis and radiculopathy. By contrast, the BI is used to measure performance in the activities of daily living, and it is the most common tool used by the Taiwanese government and social welfare and medical institutions to evaluate an elderly patient's function. Method This study was performed under the approval of the Research Ethics Committee of our hospital. We recruited the patients diagnosed with OVCF at our hospital between January and July 2020, excluding patients who had undergone previous spinal instrumentation surgery; had a history of cancer, multiple fractures, or cardiopulmonary disorders; or exhibited evidence of spinal infection, pathologic fracture, or difficulty standing straight. After admitting these patients to the ward, one physician and one nurse practitioner assisted the patients and their family in completing a questionnaire based on the BI and visual analog scale (VAS). In addition to the preoperative BI, we also collected baseline BI scores from before the patient experienced an OVCF. The BI includes three performance indicators, namely transferring from chair to bed (transfer), walking capability (mobility), and stair climbing (stairs), and data on these indicators were extracted from the completed questionnaires to evaluate ambulatory function. Basic data, including body mass index and underlying diseases, were collected by the nurse. BMD measurements, magnetic resonance imaging (MRI), and plain film and whole spine radiography were performed in the outpatient or emergency department. Percutaneous VP was performed by two orthopedic attending physicians and two senior residents; procedures were performed under either local or general anesthesia. After adequate pain control and brace assistance, most of the patients were discharged within 24 hours.
Subsequent outpatient appointments were scheduled at postoperative weeks 2, 6, and 12. During the outpatient appointments, BI and VAS data were collected by one physician with a nurse practitioner and the patients' family. Whole spine radiography was arranged at postoperative weeks 2, 6, and 12. Radiographic Analysis Whole spine radiography was conducted preoperatively, postoperatively, and then at the 2-, 6-, and 12-week follow-up appointments. We quantified the fractured vertebral body changes before and after VP by measuring the angle between the upper and lower end plates of the fractured column on the lateral view of the plain film. The other radiological parameters of the spine, including the sagittal vertical axis (SVA), SSA, TPA, pelvic incidence, pelvic tilt (PT), sacral slope (SS), lumbar lordosis, thoracic kyphosis (TK), and thoracolumbar kyphosis, were recorded by three orthopedic residents. The spino-pelvic sagittal parameters were also described. We divided the patients into three groups according to the location of the fractured vertebra: the thoracic group (T1-T9), thoracolumbar group (T10-L2), and lumbar group (L3-L5). Statistical Analysis An independent t test was used to compare the demographic characteristics between males and females. A stepwise multivariable linear regression analysis estimated the associations between function scores and sagittal spinal parameters. All reported P values <.05 were considered statistically significant. The statistical software SPSS for Windows, version 21.0 (IBM Corp, Armonk, NY, USA), was used for the analyses. Results We enrolled 42 patients according to the inclusion criteria between January and July 2020. Table 1 presents the patient demographics, BI scores, and details of any underlying diseases (data are presented as n or mean ± standard deviation; *P < .05 was considered statistically significant). Eight male and 34 female patients were investigated, with a mean age of 80.74 ± 8.31 years. Age was significantly different between the two groups, with the women being older than the men. The baseline and preoperative BI scores of the male patients were significantly higher than those of the female patients. Comorbidities in all the patients were diverse without any significant difference, although essential hypertension and psychiatric disorders were predominant. The mean value of the angle of the fractured column improved from 10.42° ± 6.31° of lordosis to 5.73° ± 4.61° of lordosis after VP (Table 2). The majority of preoperative and postoperative sagittal alignments were significantly different, including the SVA, TPA, and TK (Table 2). VAS scores at each follow-up exhibited a significant improvement. The preoperative VAS was 8.31 ± .84, whereas the VAS at postoperative week 2 was 4.24 ± .98, which indicated a substantial recovery. At each of the postoperative follow-ups, the BI scores and the BI performance indicators demonstrated gradual improvement (Table 3). Subsequent follow-ups indicated progressive improvement (Table 4). The recovery ratio was determined by comparing the baseline BI scores with the record at follow-up. Initially, stair climbing was the most affected performance indicator (recovery ratio 34.29% ± 35.92%), although its recovery ratio exceeded 80% at the 12-week follow-up. The majority of patients had recovered fully (96.83% ± 11.78%) by the week 12 follow-up. In addition, we analyzed the relationship between sagittal alignment parameters and recovery ratios.
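The stepwise multivariable regression described above was run in SPSS; purely as an illustration, a comparable analysis could be sketched in open-source tooling as below. The file and column names are hypothetical, and statsmodels has no built-in stepwise selection, so this sketch fits the full model rather than a stepwise one.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per patient with the change in each sagittal
# parameter between baseline and follow-up, plus the BI recovery ratio.
df = pd.read_csv("followup_week12.csv")  # hypothetical file name

predictors = ["dSVA", "dSSA", "dTPA", "dPT", "dSS", "dTK"]  # hypothetical columns
X = sm.add_constant(df[predictors])
model = sm.OLS(df["bi_recovery_ratio"], X).fit()

# Coefficients are reported as beta with 95% CI; P < .05 is taken as
# significant, mirroring the convention used in the tables of this paper.
print(model.summary())
```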
SVA influenced all patients' BI total scores at all postoperative follow-ups (Tables 5, 6, and 7; data are presented as β with 95% CI). Male patients exhibited an improved early transfer function from chair to bed at the 2-week follow-up. An improved stair-climbing function was evident at the 6-week follow-up after changes in the SSA. Discussion When conservative treatment fails, percutaneous VP is considered the most effective treatment for symptomatic OVCF. Studies have demonstrated excellent pain relief, functional recovery, and increased mobility following this treatment. 11,12 Percutaneous VP is more effective for short-term pain relief, and KP has demonstrated increased effectiveness for medium-term functional improvement. 10 However, no significant difference has been demonstrated between VP and KP in terms of long-term functional recovery and pain relief. 10,17 Although KP appears to result in improved radiologic outcomes, this procedure has not been statistically associated with improved clinical outcomes. VP improves the kyphosis angle in cadavers, depending on the amount of cement, and Ates et al preferred VP to KP because it was an easier and cheaper method to perform. 12 In our study, we found that VP could effectively improve the function of elderly patients with VCFs, and their sagittal spinal alignment also improved during the same initial 12-week stage. The change in sagittal spinal alignment may be a cause of the improvement of back pain symptoms after VP, which seemed to be an additional positive effect for the restoration of daily function in the elderly. Sagittal balance plays a key role in maintaining the spine's straight posture, thus preventing progressive spinal deformity; this is achieved by contributing to the minimal effort required of the core muscles, minimal tension in the ligamentous structure, and minimal energy consumption. 18 The pelvis and spine must be in a harmonious relationship with the lower extremities and trunk to maintain an ergonomic and stable posture. If the torso's centroid cannot remain at a certain distance from the pelvis, the cone of economy, which refers to the stable region in a standing posture, cannot be maintained, causing an increasing expenditure of energy. 19 The SVA, SSA, and TPA were measured to assess global alignment. A patient's position, pelvic rotation, and deviation in the X-ray projection distance can influence the final SVA result 20 ; thus, we measured the SSA and TPA as angular parameters not requiring proportional calibration. The SSA can reflect global alignment and kyphosis, 21 and the TPA indicates not only global and local spino-pelvic sagittal alignment but also the compensatory mechanism of the spine and pelvis. 22 Local deformity of the spine, such as OVCF or degenerative spondylosis, may result in an abnormal sagittal alignment and gravity line. To adapt to this phenomenon, pelvic posterior rotation, including decreased SS, increased PT, and decreased TK, occurs to maintain the torso in an upright position. Failure of the compensatory mechanism may lead to both hip and knee flexion, which represents positive sagittal alignment at this stage. 23 In the later stages of OVCF, after the failure of the compensatory mechanism and core muscle fatigue, horizontal gaze fails. A combination of core muscle fatigue, failure of the cone of economy, decreasing global alignment, and horizontal gaze failure may result in a traumatic fall.
Falling is a worldwide problem for the elderly population, causing unexpected hospitalizations or visits to the emergency department. 24 Elderly patients who have fallen face a decline in physical function, a deterioration in mental status, and a risk of death 25 ; falls also have a negative impact on financial and social support systems. Furthermore, if a patient is hospitalized, they can be exposed to nosocomial infection with highly virulent microorganisms, resulting in diseases such as pneumonia or urinary tract infection. Hospital-acquired infections lead to increased morbidity, complications, and mortality in older patients compared with younger patients. 26 Furthermore, older patients with lower BI scores at the time of hospital admission have a relatively high mortality rate, especially female patients. 27 In terms of the results of this study, the SVA, TPA, TK, and VAS scores were improved at the 2-week follow-up. Improved global alignment and pain relief make rehabilitation easier for elderly patients. At the 2-week follow-up, the male patients had an improved transfer function compared with the female patients, which may be related to their higher muscle mass; the BI score and the mobility and stair functions were not significantly related to sex. In addition, the more the SVA improved, the more the BI score and stair function improved. This phenomenon was also observed at the 6- and 12-week follow-ups. Improvements in global alignment can help patients achieve a straighter posture and improved horizontal gaze, preventing further falls when climbing stairs. The limitations of our study are its relatively small sample size and short-term follow-up, even though spinal alignment may change over time. In addition, we could not ensure that patients kept a relatively upright posture in the acute stage of compression fracture, but the Cobb angle and body height of the fractured vertebral body were corrected. This may be related to postural reduction on the spine table, which can make the thoracolumbar spine more lordotic and may mimic the vertebral shape improvement and kyphosis correction of kyphoplasty. Despite these limitations, the results of our study still indicate that vertebroplasty for the elderly with acute VCFs in the subacute phase was effective for functional restoration and spinal alignment improvement, and that this stage is critical for the recovery of their quality of life. In conclusion, vertebroplasty was effective for elderly patients with acute VCFs in improving function and sagittal spinal alignment within 12 weeks, which was a critical stage for their future recovery. Falling is traumatic for the elderly population, especially when it occurs on stairs. The recovery rate for stair climbing after VP was slower than for the other functional performance indicators in our study. In addition, poorer functional recovery seemed to be related to smaller changes in sagittal spinal alignment. In these circumstances, the family can be provided with guidance on postoperative care. Patients must be careful when climbing stairs, and their family or caregiver must ensure that the patient's accommodation is obstacle free and pay attention to the patient when they are on the move.
Optimizing FPGA implementation of high-precision chaotic systems for improved performance Developing chaotic systems-on-a-chip is gaining much attention due to its great potential in securing communication, encrypting data, generating random numbers, and more. The digital implementation of chaotic systems strives to achieve high performance in terms of time, speed, complexity, and precision. In this paper, the focus is on developing high-speed Field Programmable Gate Array (FPGA) cores for chaotic systems, exemplified by the Lorenz system. The developed cores correspond to numerical integration techniques that can extend to equations of the sixth order and at high precision. The investigation comprises a thorough analysis and evaluation of the developed cores according to the algorithm complexity and the achieved precision, hardware area, throughput, power consumption, and maximum operational frequency. Validations are done through simulations and careful comparisons with outstanding closely related work from the recent literature. The results affirm the successful creation of highly efficient sixth-order Lorenz discretizations, achieving a high throughput of 3.39 Gbps with a precision of 16 bits. Additionally, an outstanding throughput of 21.17 Gbps was achieved for the first-order implementation coupled with a high precision of 64 bits. These outcomes set our work as a benchmark for high-performance characteristics, surpassing similar investigations reported in the literature. Introduction Many chaotic systems, along with their applications in Chaos-Based Secure Communication (CBSC), data encryption, and True Random Number Generation (TRNG), are implemented using a wide variety of embedded systems, such as Arduino, Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), and Field Programmable Gate Arrays (FPGAs) [1][2][3]. Until recently, and before the rapid advances of digital technology, analogue implementations of continuous-time chaotic, or hyperchaotic, systems were the default. A combination of Op-Amps, resistors, capacitors, and analogue multipliers was used to construct such implementations. The Ordinary Differential Equations (ODEs), which are used to describe the dynamics of chaos, for both autonomous and non-autonomous systems, were directly mapped to active RC circuits to generate the states of the system. A typical example, describing the Lorenz system, is illustrated in Fig 1 (see Eq 9 in Section 1). Other examples of analogue implementations can be found in [4][5][6][7][8]. The analogue multiplier AD633 was used to implement the nonlinear part of the Lorenz equation (see Eq 1 in Section 1), along with other chaotic systems of similar structure. As shown in Fig 2, grounding terminals 2, 4, and 6 can effectively produce the product function with high accuracy.
The Lorenz system was explored in [4,5], where both the LF353 Op-Amp and the AD633 analogue multiplier were used to perform the required algebraic/calculus-based mathematical operations to implement its dynamics. Adjusting the values of the resistors and the capacitors was used to arrive at the required dominant time constants of the circuit, which could be made as small as a few microseconds without any noticeable degradation in performance. Other autonomous chaotic systems, such as the Rössler and Chua circuits, were also considered in [4][5][6], which covered applications in chaos control, state observers, parameter identification, and synchronization of chaotic systems. Similar analogue implementations of other chaotic systems that include infinitely many equilibria and fractional-order dynamics without equilibrium were also covered in [7,8]. Challenges to digital implementations of chaotic systems, including Lorenz, include the performance aspects of time, speed, complexity, precision, and dealing with the intrinsic sequential behaviour of the model. As related to chaotic systems, the following research opportunities are highlighted:
• The attraction of reconfigurability of FPGAs in implementing chaotic systems with effective applications in synchronization, control, and communication.
• The development of hardware implementations of chaotic algorithms under FPGAs with appealing performance characteristics that outperform similar implementations reported in the literature.
• The embedding of Lorenz hardware cores to assist or replace traditional computing systems, such as central processing units, in applications.
• The emergence of hybrid analogue and digital chaotic system implementations.
• The exploration of implementations with various accuracy levels, speeds, and complexities.
• The creation of development and analysis patterns that are applicable in the wider area of chaotic systems, such as autonomous and non-autonomous systems, to cover both chaotic and hyperchaotic systems.
In this paper, we present high-speed hardware implementations of chaotic systems, namely the Lorenz system. The presented implementations target traditional and high precisions, including 8-, 16-, 32-, and 64-bit floating-point number representations. The proposed hardware cores implement different numerical integration (discretization) techniques that extend to equations of the sixth order. Furthermore, the implementation challenge is extended to include experimenting with different floating-point data types to arrive at the best compromise among complexity, precision, area, and speed. The rest of this paper is organized as follows: Section 2 presents related work. Section 3 presents the motivation and research objectives. In Section 4, the adopted hardware development methodology and the created cores are presented. Section 5 presents the achieved results and a thorough evaluation that includes comparisons with closely-related work. In addition, Section 5 presents the design and implementation limitations of the proposed cores and sets the ground for future work. In Section 6, the investigation is concluded by highlighting important achievements and presenting work in progress.
Background When dealing with chaotic systems, several benchmark models exist that can be used for verifying newly proposed techniques, whether for control, synchronization, synthesis, or implementation [9]. The Lorenz system is the most famous example of the autonomous category of chaotic systems; it has many different forms, including a hyperchaotic model. It was originally discovered when analyzing weather patterns that exhibit very strong dependence on initial conditions [10]; however, other applications in engineering and physics were found to exhibit quite similar behaviour. These include permanent magnet synchronous machines (PMSMs) [11], single-mode optical lasers [12], and thermal convection [13]. The mathematical model of the 3D chaotic Lorenz system is given by Eq 1:

ẋ = σ(y − x), ẏ = ρx − y − xz, ż = xy − βz, (1)

where x, y, and z are the three dynamic states of the system, and σ, ρ, and β are three positive constants. Along with the origin, this system has the two additional equilibrium points of Eq 2:

[x_eq, y_eq, z_eq] = [±√(β(ρ − 1)), ±√(β(ρ − 1)), ρ − 1], (2)

which might be stable or unstable, depending on the values of the parameters, as can be deduced by evaluating, at the equilibrium points, the eigenvalues of the Jacobian matrix in Eq 3:

J = [[−σ, σ, 0], [ρ − z, −1, −x], [y, x, −β]]. (3)

For generating chaos, the parameters σ, ρ, and β might take the values 10, 28, and 8/3, respectively [9]. The most important characteristics of the Lorenz system are that each dynamic equation contains a single parameter and that chaos is generated by only two quadratic terms, namely xy and xz. In addition, it is invariant under the transformation (x, y) → (−x, −y). Eq 1 is known to have (0.90563, 0, −14.57219) as Lyapunov exponents and a Kaplan-Yorke dimension D_KY of 2.06215 [14]. Moreover, the Lorenz system is dissipative, as illustrated in Eq 4:

∂ẋ/∂x + ∂ẏ/∂y + ∂ż/∂z = −(σ + 1 + β) < 0. (4)

When investigating the time evolution of Eq 1, starting from x(0) = 1.0, y(0) = z(0) = 0, the response illustrated in Fig 3 is observed, which is shown to be bounded for the given values of the parameters. The following ranges for the states were observed for 50 ≤ t ≤ 100:

−17.9032 ≤ x(t) ≤ +18.4669, −24.0120 ≤ y(t) ≤ +25.0480, +4.3772 ≤ z(t) ≤ +45.6160. (5)

Examining the phase space of the states, shown in Fig 4, illustrates the chaotic behavior of the system, where the famous butterfly effect is observed. The simulation was conducted employing the Simulink model depicted in Fig 4(e). A fixed integration step of 0.01 seconds was maintained throughout the process. Furthermore, the fourth-order Runge-Kutta (RK-4) method was utilized to solve the ODEs presented in Eq 1. Usually, the choice of the integration step for numerical simulations is based on the actual dominant time constant of the system, in addition to the stiffness ratio of the ODEs [15]. However, for chaotic systems, this is difficult to extract from the power spectrum of the states or the eigenvalues of the Jacobian matrix. Changing the value of ρ in Eq 1, while maintaining both σ and β at their nominal values, can lead to different oscillatory non-chaotic patterns that will be stable, provided that the following condition is satisfied [9]:

ρ < σ(σ + β + 3)/(σ − β − 1), (6)

which is directly derived from the eigenvalues (λᵢ, i ∈ {1, 2, 3}) of the characteristic equation that corresponds to Eq 3.
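For readers who want to reproduce the bounded trajectory of Figs 3 and 4 without MATLAB/Simulink, a minimal sketch follows. It uses the parameter values, initial condition, and fixed 0.01 s RK-4 step quoted above; NumPy and the function names are our own choices, not taken from the paper.

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # nominal chaotic parameters

def lorenz(s):
    # Right-hand side of Eq 1 for the state s = (x, y, z).
    x, y, z = s
    return np.array([SIGMA * (y - x), RHO * x - y - x * z, x * y - BETA * z])

def rk4_step(f, s, h):
    # One step of the classical fourth-order Runge-Kutta method.
    k1 = f(s)
    k2 = f(s + 0.5 * h * k1)
    k3 = f(s + 0.5 * h * k2)
    k4 = f(s + h * k3)
    return s + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

h, t_end = 0.01, 100.0
s = np.array([1.0, 0.0, 0.0])  # x(0) = 1.0, y(0) = z(0) = 0
states = np.empty((int(t_end / h) + 1, 3))
states[0] = s
for i in range(1, states.shape[0]):
    s = rk4_step(lorenz, s, h)
    states[i] = s

# The per-state ranges over 50 <= t <= 100 should approximate Eq 5.
print(states[5000:].min(axis=0), states[5000:].max(axis=0))
```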
In addition, the eigenvalues of Eq 3, evaluated at the nominal values of σ, ρ, β and at the equilibrium points of Eq 2, can be used to calculate the stiffness ratio (SR), as depicted in Eq 7. The SR, which is the ratio of the largest to the smallest eigenvalue of the Jacobian matrix of the ODE system depicted in Eq 3, has a large value, reflecting more restrictive stability conditions for the Lorenz system. This signifies that the solution, despite varying slowly, is affected by other nearby solutions that vary rapidly, so the chosen numerical method must take small integration steps to obtain satisfactory results. This should be taken into consideration when designing the FPGA-based numerical algorithm, in terms of the maximum operating frequency and the solver structure, which is thoroughly analyzed in the coming sections. Fig 5 shows the signal x(t) for ρ = 24, while the remaining parameters are kept the same, along with its power spectrum, in (a) and (b), respectively. The periodic time of the dominant cycle can be used as a guide for the best choice of the integration step of the numerical solver. When using the Lorenz system for practical implementations, e.g. CBSC [4], it might be required to scale the generated signals to meet the constraints imposed by the actual hardware. For example, when using standard TTL hardware, signals are required to be within 5 Volt limits. In addition, many AD/DA cards require the analogue signals to be within ±10 Volts. Nowadays, much low-power hardware, e.g. modern FPGAs, requires dealing with signals that are limited to 3.3 Volts. More restrictions could be imposed on the level of the signals generated from the Lorenz system for specific applications that require handling binary-based multimedia signals corresponding to text, audio, images, and video streams [16]. Consequently, scaling the values of x(t), y(t), and z(t) to meet the required range should be provided in a systematic way that does not distort the chaotic behaviour of the Lorenz system. Along with magnitude scaling, adjusting the time scale of the Lorenz system might be required to meet the bandwidth requirements of the application. This is crucial, especially for real-time applications that require synchronizing the speed of the Lorenz system with some clock. A simple way to achieve scaling, in both magnitude and time, is to modify the Simulink block diagram, as illustrated in Fig 6. The system is made 10 times faster while forcing all states to fall between 0 and 1. This was easily adjusted by adding the gain blocks just before the integrators (shown in green), while using soft functions to scale all the variables (shown in yellow). Here, T_SF is the time scale factor that is used to shrink or stretch the time when set to more than or less than one, respectively. In addition, the old signal, S_old, corresponding to x, y, or z, can easily be scaled to S_new, for any given range, according to the mathematical expression in Eq 8.
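Since the extracted text omits the formula of Eq 8, the following sketch shows one linear range mapping that is consistent with the description (forcing each state into a target range such as [0, 1]), together with the time-scaling gain. The function names and the exact form of the mapping are our assumptions, not the paper's.

```python
def scale_signal(s_old, old_min, old_max, new_min=0.0, new_max=1.0):
    # Linear mapping of a signal value onto [new_min, new_max]; an assumed
    # reading of Eq 8, which the extracted text does not reproduce.
    return new_min + (s_old - old_min) * (new_max - new_min) / (old_max - old_min)

def time_scaled(rhs, t_sf):
    # Multiplying the right-hand side of Eq 1 by T_SF (the gain blocks
    # placed before the integrators in Fig 6) makes the system T_SF times
    # faster for T_SF > 1 and slower for T_SF < 1.
    return lambda s: t_sf * rhs(s)

# Example: map the lower end of x(t)'s observed range onto 0.0.
x_scaled = scale_signal(-17.9032, -17.9032, 18.4669)  # -> 0.0
```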
When depending on numerical simulations to generate the chaotic signals, the choice of the integration algorithm and its corresponding time-step is crucial. Numerical solvers implicitly convert the analogue model into an equivalent discrete model whose accuracy depends on its order. Stability, convergence, and tolerance are three important factors that must be taken into consideration when choosing the numerical solver and adjusting its settings. Many software packages exist that can do this automatically, e.g. MATLAB. The accuracy of the simulation is directly proportional to the order of the integration algorithm. The first-order Euler, second-order Heun, and RK-4 methods are the most famous numerical solvers to choose from. Low-order numerical solvers are simpler, faster, and require less mathematical effort when implemented in real-time embedded hardware. On the other hand, higher-order numerical solvers are more complicated, require access to many intermediate variables, and can be dramatically slow, which makes them less appealing for real-time applications. Thus, an optimal compromise should be sought between the required level of detail of the discrete-equivalent model and its operating speed. Usually, there is a conflict between accuracy and speed, and satisfying both requires very sophisticated hardware with high-performance computational power. As a rule of thumb, the approximation error between the numerical solution and the exact solution is a function of h^n, where h is the integration step and n is the order of the numerical solver. This implies that, for better accuracy, smaller integration steps and higher-order solvers should be used. For many applications, the Lorenz system needs to be implemented in analogue form, especially in electronic and optical hardware. In such cases, proper connections should be set up in the laboratory, with a controlled environment to minimize the effects of noise and external disturbances. Analog components are inherently susceptible to degradation over time, influenced by factors such as aging, temperature variations, and additional anomalies that may arise during the circuit assembly process. Therefore, their accuracy might be questioned, and they will need continuous calibration and conditioning. Fig 1 illustrates a typical electronic layout for an analogue implementation of the Lorenz system that has a scaling factor of 1000, with all signals scaled to fit the standard TTL level of ±5 Volts [17]. Analog Op-Amps and a collection of resistors and capacitors are used to represent the three first-order nonlinear dynamics of x(t), y(t), and z(t). Two analog multipliers, AD633AN, were used to generate the quadratic terms xy and xz, while using LF353 Op-Amps with ±15 Volt power supplies. The values and types of the analog components are shown in Fig 1. The modified ODEs representing the electronic circuit are given by Eq 9; they have a time scaling factor of 1000, a 20% scaling factor for both x and y, and a 10% scaling factor for z. Fig 7 illustrates the response of such a system. The Lorenz system was first observed in an application in fluid convection, where x(t) represents the rate of the fluid convection, while y(t) and z(t) represent the temperature variation in the horizontal and the vertical directions, respectively. The parameters σ, ρ, β represent the Prandtl number, the Rayleigh number, and the horizontal wave number of the fluid convection, respectively [10]. However, many optical systems have similar dynamics; this suggests the possibility of implementing the Lorenz system using optical devices, in
contrast to the previous electronic analogue implementation. Eq 10 exemplifies the dynamics of semiconductor lasers, where σ represents the decay rate of the electric field, δ is the atomic detuning, ρ is the pump parameter, and β is the decay rate of the population inversion. Here, x(t), y(t), and z(t) are normalized variables that represent the electric field, the polarization, and the population inversion, respectively. With optical implementations, higher-speed applications could easily be addressed. However, laser-based analogue implementations are much more expensive than electronic ones and require special labs to be set up. With the rapid advancement of digital technology and the current availability of high-performance computing power, digital implementations of chaotic systems are becoming more feasible and are replacing their analogue counterparts in many applications, especially in CBSC systems that rely on cryptography. This paper addresses the optimization of FPGA-based implementations of chaotic systems. Without loss of generality, only the Lorenz system will be discussed; however, it is argued that extending the suggested techniques to other chaotic systems with different structures is straightforward and very systematic. Literature survey Due to the inherent problems in analogue circuits, specifically component tolerance, ageing, noise sensitivity, and limited operating bandwidth, digital implementations using discrete-equivalent models are much preferred, especially after the incredibly fast drop in the cost of digital circuitry. Many numerical methods have been used to convert the differential equations corresponding to continuous-time chaotic systems into closely equivalent difference equations [18]. This has the effect of converting the complex calculus-based calculations, which do not have closed-form analytical solutions, into much easier algebraic-based recursive calculations that are well suited to numerical techniques, using different programming languages and different digital platforms. The one-step numerical algorithms, such as the Euler, Heun, and Runge-Kutta (RK) methods, in addition to the multi-step algorithms, such as the Adams-Bashforth and Adams-Moulton methods, are among the most famous choices, depending on the nature of the system, its stiffness, and whether it is of integer or fractional order [19]. Microcontrollers, as a low-cost choice for implementing the discretized chaotic Lorenz system, were explored in [20], where the Euler algorithm with an integration step of 4.0 ms was used. An 8-bit PIC18F452 microcontroller was used, with a clock frequency of 10 MHz, while coding the algorithm using a CCS-C compiler. It was argued that the adopted implementation is much cheaper than using an FPGA approach; each run needed 350 μs, while 6% and 9% of the allocated RAM and ROM were used, respectively. Another choice for digital implementation of chaotic systems was adopted in [21], using a 32-bit TMS320F28335 DSP board running at 150 MHz, with floating-point arithmetic operations, along with the 16-bit DAC8552, connected through a serial peripheral interface. This DSP-based system used the RK-4 numerical solver, with an integration step of 1.0 ms, to analyze the behavior of the Chua system with a hidden attractor. It was found that the experimental results are in good agreement with the MATLAB-based simulation results. Other approaches to digitally design and implement discretized chaotic systems were explored in [22][23][24] to address software techniques that work with
and without the MATLAB/Simulink engine, the use of ASICs versus FPGAs, and LabVIEW-based FPGAs, respectively. In addition to the analogue implementations of different chaotic systems that were explored in [4-8, 19, 24], more examples were presented in [25][26][27][28] to compare their performance to that of an equivalent FPGA-based implementation. In [25], a comparison was made between analogue simulation-based and FPGA-based implementations, for a new chaotic system with a single equilibrium point. The analogue circuit was constructed using Pspice, while a Xilinx Virtex-6 family xc6vlx75t-3ff784 FPGA was used for the digital implementation. Adopting both the Heun and the RK-4 algorithms resulted in a maximum frequency (Fmax) of 390.067 MHz, using a 32-bit IEEE 754-1985 floating-point numerical format for the VHDL code. Based on the reported results, the generated data were consistent, with a convergence of 34.456E-5 precision using absolute error analysis. In [26], a similar study was conducted, but for a chaotic TRNG based on the Sundarapandian-Pehlivan system. Signals were generated from an actual analogue circuit implementation that was initially modelled and tested using Pspice, and then compared to a digital implementation using a Xilinx Virtex-6 XC6VLX240T-1-FF1156 chip that adopted RK-4 as the discretization method. The digital implementation used the high-precision 32-bit IEEE 754-1985 standard and managed to achieve an Fmax of 293.815 MHz. Moreover, the superiority of the FPGA-based implementation was verified by passing the two popular statistical standards, FIPS-140-1 and NIST-800-22, which proves their suitability for cryptographic applications. Another study digitally implementing a TRNG that depends on the generalized Sprott C chaotic system was developed in [27]. Although the system under study could exhibit multi-butterfly chaotic attractors, a comparison was made only for the case of generating a two-butterfly chaotic attractor. A discretized Euler-based method was used, with an integration step of 1 ms, and the FPGA-based hardware was designed with the Xilinx DSP System Generator. The throughput of the digital implementation was analyzed, and power consumption was reported. Again, both the analogue and the digital results were consistent, and the designed system was able to pass 16 runs of the NIST-800-22 standard test. Another comparison between a Multisim-based simulation model and an FPGA-based model was conducted in [28], for a 3-D multi-stable system with a peanut-shaped equilibrium curve that was used for an image encryption application. The FPGA used was a Cyclone IV, with a 50 MHz clock and the Quartus II synthesizer. Three different discretization methods were used (Euler, Trapezoidal, and RK-4), with an integration step of 0.1 ms. All of them were found to be in perfect agreement with the results obtained from the Multisim model. These different FPGA-based examples, applied to many different chaotic systems and spanning many applications, were found to be very effective. The choice of the discretization algorithm, deciding on the integration step, and achieving the highest frequency for real-time operation, along with other important factors related to the throughput of the FPGA-based digital implementation, need to be carefully analyzed in order to ensure the integrity of the obtained results and their consistency with their analogue counterparts.
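The h^n rule of thumb from the Background section can be checked numerically before committing to a hardware solver. Below is a hedged sketch (our own test harness, not code from any of the surveyed papers) that treats a very fine RK-4 run as the reference over a short horizon; over long horizons, chaos amplifies any discretization error regardless of the method, so the comparison is only meaningful for small t.

```python
import numpy as np

def lorenz(s):
    x, y, z = s
    return np.array([10.0 * (y - x), 28.0 * x - y - x * z, x * y - (8.0 / 3.0) * z])

def euler_step(f, s, h):
    return s + h * f(s)

def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f(s + 0.5 * h * k1)
    k3 = f(s + 0.5 * h * k2)
    k4 = f(s + h * k3)
    return s + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def integrate(step, h, t_end=1.0):
    s = np.array([1.0, 0.0, 0.0])
    for _ in range(int(round(t_end / h))):
        s = step(lorenz, s, h)
    return s

ref = integrate(rk4_step, 1e-5)  # fine-step reference solution
for h in (1e-2, 1e-3):
    err_euler = np.linalg.norm(integrate(euler_step, h) - ref)  # shrinks ~ O(h)
    err_rk4 = np.linalg.norm(integrate(rk4_step, h) - ref)      # shrinks ~ O(h^4)
    print(h, err_euler, err_rk4)
```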
Research objectives The proposed investigation aims at achieving several research objectives. The investigation focuses on developing high-speed FPGA cores for chaotic systems, as exemplified by the famous Lorenz system. The proposed developments are set to challenge state-of-the-art FPGAs by targeting numerical integration techniques that can extend to equations of the sixth order. Furthermore, the implementation challenge is extended to include experimenting with different floating-point data types to arrive at the best compromise among complexity, precision, area, and speed. The proposed implementations include high-order equations and high-precision floating-point representations that are rarely addressed in the literature. Indeed, the proposed investigation presents a development pattern that can be adopted in the wider area of chaotic systems. As the developments comprise challenging implementations, the investigation presents an analysis pattern that can be adopted for other chaotic systems. The investigation presents a thorough discussion and comparison among analogue, software, and hardware implementations under FPGAs. The proposed developments enable discussing the extendibility of the investigation to applications, such as CBSC. The research objectives of this paper are summarized as follows:
1. Develop high-speed FPGA cores for Lorenz chaotic systems with discretizations of the first, fourth, and sixth order.
3. Perform a thorough analysis of the developed cores per complexity, power consumption, precision, area, throughput, and maximum operational frequency.
4. Validate the findings through careful comparisons with outstanding closely-related work from the recent literature.
5. Discuss the limitations of the proposed work and set the ground for future work.
In relation to the similar work presented in Section 2, the proposed development enables the following comparisons for all the implemented cores. The pattern of comparisons includes reasoning about the development methodology and the target performance goals. In addition, the comparison presents a focus on the achieved precision per algorithm with and without scaling:
• Evaluation of the attained maximum frequency.
• Evaluation of the attained throughput.
• Evaluation of the attained hardware area in terms of logic elements and registers.
• Evaluation of the attained power consumption.
The investigation confirmed the successful achievement of high-speed and accurate FPGA cores that outperform similar work reported in the literature in several aspects. Hardware design An informal and systematic approach is adopted to develop hardware cores for the targeted Lorenz system [29,30]. The methodology is unified in the sense that it uses common software engineering techniques to model the algorithm; accordingly, HW and SW designs are derived and implemented. The steps of the HW and SW developments are as follows:
1. Depict the algorithm using flowcharts.
2. Develop the software version.
3. Capture the parallelism in the algorithm using concurrent process models.
4. Design the processor Datapath by identifying, allocating, and binding resources.
5. Develop the Finite State Machine (FSM) of the control unit based on the flowchart.
6. Describe the developed hardware using a description language and synthesize the implementation for FPGAs.
Fig 8 lays out the conceptual behaviour of the first-order Lorenz system, capturing the flow of the algorithm along with the states that the system evolves through to attain the desired output. The aim of our proposed hardware core is to compute the values of ẋ, ẏ, and ż that vary over time by solving the set of differential equations expressed in Eq 1. These computations are carried out repeatedly until a target number of iterations is reached. Inspired by the electronic analogue implementation of the Lorenz system presented in Fig 1, several computational hardware resources are allocated to develop the datapath of Lorenz's digital model, as shown in Fig 9. Our main focus in the proposed algorithm is to utilize floating-point (FP) functional units to execute the required arithmetic operations for solving the aforementioned set of differential equations. To facilitate the process of hardware development in VHDL, FP computational units are imported from off-the-shelf IEEE libraries. Those units include adders, subtractors, multipliers, and dividers. In addition, multiplexers and registers are employed to load and store various sets of data at different intervals of time. It is important to note that the datapath presented in Fig 9 is for the Euler discretization algorithm. However, by following the same design methodology, and by utilizing additional hardware resources, this model can easily be upgraded to solve higher-order differential equations, including the RK-4 and RK-6 algorithms. Hardware designers are usually confronted with a multitude of challenges when it comes to designing effective hardware cores that comply with the requirements of a real-time system. Among these challenges are maximizing the processor's frequency, diminishing the period of each cycle, activating concurrent utilization of different hardware resources, and many more. To this end, our proposed algorithm, described in Listing 1, is partitioned into six states, S0 to S5, as presented in Fig 8. Those states depict the behaviour of the control unit at different intervals of time when certain conditions are met. State S0 is responsible for initializing the values of all the registers simultaneously. States S1 through S5 are each responsible for the parallel execution of independent arithmetic computations through the concurrent utilization of FP functional units, after which the resulting values are stored in temporary registers to be used in the coming states (a minimal software model of this state partitioning is sketched below). This design approach provides an efficient hardware utilization scheme and attains phenomenal results when operating in real-time. Listing 1. Sample VHDL code segment for the Euler discretization algorithm entity showing the main computational resources. Analysis and evaluation In this section, a thorough analysis and evaluation of the variety of developed cores are presented. Firstly, the results are presented, highlighting some appealing achieved performance characteristics. Secondly, the results are evaluated with a focus on the practical implications and achievements in both general application and specific technical aspects. At that point, comparisons with multiple closely-related investigations are presented. The section ends by identifying limitations and proposing future research directions.
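Before turning to the results, the state-partitioned Euler update of Fig 8 can be mimicked in software. The grouping of operations into states below is our reading of the description (the actual core is VHDL built from IEEE floating-point units, and Listing 1 is not reproduced here), so treat it as a behavioural sketch rather than the authors' code.

```python
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
H = 0.01  # integration step; the value used in the earlier simulations

def euler_point(x, y, z):
    # S0 initializes the registers (done once, outside this function).
    # S1: independent products issued concurrently to the FP multipliers.
    xy, xz = x * y, x * z
    # S2: derivative terms assembled on the FP adders/subtractors.
    dx = SIGMA * (y - x)
    dy = RHO * x - y - xz
    dz = xy - BETA * z
    # S3-S5: scale by the step and write back to the state registers.
    return x + H * dx, y + H * dy, z + H * dz

state = (1.0, 0.0, 0.0)
for _ in range(10_000):
    state = euler_point(*state)
```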
Results In this paper, scaled and non-scaled implementations of the Euler, RK-4, and RK-6 discretization algorithms are presented. The results confirm that, in most cases, complex implementations that are scaled and have higher precision and discretization algorithm order utilize more DSPs and LUTs than simpler non-scaled implementations, as shown in Figs 10 and 11. However, this does not hold true in some special cases. For instance, the number of utilized LUTs in the 8-bit Euler algorithm is 2,748, which is more than that of the 8-bit RK-4 system, which is 5 LUTs. The reason behind this variation is that, at compilation, the synthesizer may detect an optimization opportunity that can only be carried out on the higher-order system, which bears more hardware units than the lower-order system. As for Logic Registers (LRs), the Euler, RK-4, and RK-6 algorithms utilize 7, 12, and 16 LRs, respectively, regardless of the adopted floating-point precision in each implementation. Fig 12 shows how power consumption follows the trend of hardware utilization, expressing how different designs consume more power upon utilizing additional hardware resources. It is important to note that, while experimenting with implementations of different configurations, some of them failed to compile. Such failures occur when the device under test does not possess the minimum number of hardware resources that a certain hardware design demands. To better understand the effect of FP precision and discretization algorithm on the system's performance, Fmax, throughput in Gbps, and throughput in Mpt/s are recorded for each implementation. Among the different implementations of the Euler and RK-6 algorithms, the 16-bit non-scaled version achieves the highest operating frequency and highest throughput in Mpt/s, as shown in Figs 13 and 14, respectively. Those results are not fully maintained by the Euler algorithm when it comes to throughput in Gbps, where the 16-bit non-scaled version attains a throughput of 12.77 Gbps, which is topped by the 64-bit scaled version that attains a throughput of 21.17 Gbps. However, in the RK-4 algorithm, the 32-bit scaled version achieves the highest operating frequency of 555.86 MHz and throughput of 55.59 Mpt/s, while the 16-bit non-scaled version has the worst performance, as shown in Fig 16, which presents the best-achieved results per discretization algorithm order. Evaluation Discretization of continuous-time systems is a numerical approximation that needs to faithfully replicate the original behaviour of the system. The discretization algorithm and the used integration step are the most important factors in arriving at the required accuracy. For simple one-step discretization using the Euler method, there is a strong need to use a very small integration step to avoid the accumulation of residual errors. When adopting a higher number of intermediate steps in the discretization method (e.g. RK-4 and RK-6), the choice of the minimum integration step can be relaxed. Indeed, this comes at the expense of mathematical complexity and the required computing power. To avoid the accumulation of roundoff errors, especially for real-time applications that require continuous operation, a higher precision for data representation should be used. With the rapid advances in digital technology and the current availability of configurable hardware, this has become readily available. In this paper, we addressed the RK-6 algorithm, with outstanding increased accuracy, which is indeed a major contribution, as the work reported in the literature relies mainly on RK-4. The RK-6 algorithm will prove more stable, robust, and rigorous for real-time applications,
especially those that require hyperchaotic systems. Moreover, the discretization algorithms designed in this paper were able to accommodate different operating conditions by providing easy scaling of the operating frequency and the range of the output values to suit different digital hardware requirements, e.g. the new 3.3 V FPGAs. This added flexibility required little overhead in the implementation, which makes them suitable candidates for different applications in the field of CBSC and TRNG. Developing digital hardware implementations of chaotic systems is driven by several solid motivations. The widespread analogue implementations, the nature of the utilized computations, and the appealing pipeline-like structure are among the important attractions for hardware developments. Chaotic system discretization methods are constructed using fine-grained computational building blocks that can no doubt promise high performance if mapped onto FPGAs. Indeed, the target chaos algorithms comprise code segments that can be unrolled into pipelines or partly executed in parallel. Although FPGAs are becoming attractive in real-time applications, investigations outside real-time applications may be less sensitive to power consumption. This enables FPGAs to be used in practical implementations in addition to traditional testing, verification, and validation. In engineering applications, chaotic systems can be employed in areas such as security. To that end, the reconfigurability of FPGAs, which enables algorithm upload and modification as well as architectural modifications [31], is yet another attraction for targeting them when implementing chaotic systems. One of the most important benchmarks for evaluating the performance of the discretization process is the Fmax that can be achieved by the target hardware. Fmax is expected to be much higher using digital circuitry, compared to its analogue counterparts. A combination of high Fmax and high accuracy is always desirable for real-time applications; however, this should also be correlated with the throughput results of the digital FPGA-based implementation. In the presented cores, the highest Fmax was found to be 1329.79 MHz, which produced the highest throughput in the non-scaled Euler-based algorithm with an accuracy of 16-bit float; this is a logical result, as it corresponds to the implementation that requires the minimum resources. It is important to note that the reported Fmax in our work is the theoretical Fmax that the designed circuitry can attain, independent of the device's frequency limitations. However, the actual Fmax value is usually constrained by the speed of the slowest interface or clock networks in the utilized FPGA device, which is 800 MHz in our case [32]. The second-best Fmax was found to be 988.14 MHz, corresponding to the RK-6 algorithm, emphasizing a dramatic improvement in robustifying the discretization algorithm while achieving 74.3% of the highest possible Fmax. Comparing Fig 13 to both Figs 14 and 15, a perfect correlation is noticed, highlighting the consistency of the obtained results. The overall accuracy of the implementation depends on both the number of bits and the complexity of the discretization algorithm.
It is widely recognized that the employment of a smaller integration step size can significantly improve the precision of the discretization technique. As such, the reported results offer a high degree of flexibility in deciding on both the number of bits and the structure of the algorithm. Choosing RK-4 with a 32-bit float offers high-frequency operation at 555.86 MHz, even when using the scaled version of the Lorenz system, which requires additional overhead to satisfy the required mathematical constraints on the values of the outputs. Traditional applications from the literature, with the common use of ADCs, usually target precisions of less than 16 bits. In our proposed implementations with precisions of 8 and 16 bits, changing the algorithm shows different patterns of increasing or decreasing Fmax, indicating a very high dependence on the physical utilization of the FPGA resources and how they are optimized. When examining the effect of choosing either the scaled or the non-scaled version of the Lorenz system, it is clear that it follows the same argument, while exhibiting a strong correlation with the throughput indicators in Figs 14 and 15. Developing aggregated performance indices that accurately assess the trade-offs of each implementation is highly desirable. This approach, weighting the indicators according to their relative importance, would significantly enhance the research reported in this paper [33][34][35]. As explained in the previous section, the throughput presented in Figs 14 and 15 was strongly correlated with Fmax. Increasing the accuracy, via increasing the number of bits, did not greatly impact the throughput values. Limiting the evaluation to only the 8-bit float and the 16-bit float cases, as some of the cores of other precisions failed to synthesize due to the physical limitations of the used FPGA, it is clear that no single pattern holds across the different implementations. The absence of a pattern applies to both the discretization algorithm and whether the scaled or the non-scaled structure of the Lorenz system was used. The best throughput was achieved for the 16-bit float non-scaled Lorenz system, showing 265.96 Mpt/s and 12.77 Gbps. Increasing the accuracy, via adopting a more rigorous discretization algorithm, showed a consistent degradation in the overall throughput. For the scaled Lorenz system, the throughput ranged over [32.72, 69.06] Mpt/s and [0.86, 3.31] Gbps, while for the non-scaled version, the indicators correspond to [25.99, 265.96] Mpt/s and [1.1, 12.77] Gbps.
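The reported figures are internally consistent under a simple relation that we infer from them (it is not stated explicitly in the text): each output point appears to cost 5 clock cycles for the Euler core and 10 for RK-4, and each point carries the three states at the chosen precision. A sketch of the arithmetic:

```python
def throughput(fmax_mhz, cycles_per_point, bits_per_state, states=3):
    # Mega-points per second and gigabits per second for one core.
    mpts = fmax_mhz / cycles_per_point
    gbps = mpts * states * bits_per_state / 1000.0
    return mpts, gbps

# 16-bit non-scaled Euler core, Fmax = 1329.79 MHz (5 cycles/point assumed):
print(throughput(1329.79, 5, 16))   # ~ (265.96 Mpt/s, 12.77 Gbps)

# 32-bit scaled RK-4 core, Fmax = 555.86 MHz (10 cycles/point assumed):
print(throughput(555.86, 10, 32))   # ~ (55.59 Mpt/s, 5.34 Gbps)
```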
Different appealing hardware area characteristics are attained for the developed Lorenz cores. The most economically occupied area, in LUTs, that produced the highest throughput is the non-scaled Euler algorithm with an accuracy of 16-bit float; the achieved area is 8092 LUTs. In addition, the number of DSP blocks and LRs exhibited less variation among implementations, within [0, 810] for DSP blocks and [7, 16] for LRs. As per the target accuracies, implementations with higher accuracy consistently occupied larger areas. Common application areas, such as CBSC, which commonly require less than 16-bit accuracy in modern systems due to ADCs, can benefit from economical area utilization as achieved by the Euler and RK-4 algorithms, with areas around 8000+ and 40000+ LUTs. However, higher accuracies can also benefit from the developed cores, with areas that mainly fit high-end FPGA systems (see Figs 10 and 11). Long-standing low-end FPGAs, such as the Cyclone III (2007) with its different device models, are still recommended by their manufacturer [36]. Cyclone III FPGAs are produced with capacities that range between 5,136 and 198,464 LEs, each with a single LUT. Cyclone III can accommodate most of the developed Lorenz cores for different orders and accuracies. In all, scaled implementations are consistently larger than their non-scaled counterparts, with an average increase in combinational LUTs of 11% ± 6.91%. Moreover, the best performance vector is holistically achieved by the Euler algorithm for the non-scaled version at an accuracy of 16 bits. The total power consumption analysis presented in Fig 12 reflects that, within each order of the discretization algorithm, the total power consumption increases with the increase in accuracy. Here, no outliers are found. The different cores, corresponding to the different discretization algorithms, consumed different total power but with comparable values that fall in the range [884.2, 1037.1] mW. As for the performance indicator vector that attained the highest throughput, the least power consumption was attained by the Euler algorithm, 16-bit float, non-scaled, at 896.36 mW. The non-scaled RK-4 algorithm, 32-bit float, and the RK-6 algorithm, 16-bit float, attained comparable power consumption of around 997 mW. The best-reported throughputs in the literature were 80 Mbps [37] and 159 Mbps [26], as compared with our 5.34 Gbps achieved with the same specifications; the attained speedup is 33.6 times the throughput reported in [26].
Limitations and future work Some limitations are identified for the proposed investigation at the application and implementation levels. From the chaotic systems perspective, without loss of generality, the developed cores in this paper focused on the discretization process, using different algorithms, integration steps, and data precisions. For applications in the field of secure communication, when using cryptography or other chaos-based shift keying techniques, it is well known that most of the computational effort is spent in the synchronization process between the transmitter and the receiver. A similar argument applies to other chaos-based applications, such as TRNG. Consequently, more investigations will be required to explore the expected overhead in computational effort when adding more lines of HDL code to the FPGAs or addressing the latency of expected networking operations. In addition, dealing with other structures of chaotic systems that involve non-autonomous structures and/or hyperchaotic multi-dimensions will surely add more complexity to the analysis and design proposed in this paper. However, we hope that the work presented in this paper sets an example and provides implementation patterns that would enable such future developments. In terms of the implementation, the proposed cores are limited by the available logic area in the target FPGA, namely the Stratix IV. To that end, some high-order implementations were over-mapped and results were not possible to obtain, specifically for the RK-6 algorithm (see Figs 13 through 15). On the processing level, pipelining the proposed cores is possible and may lead to significant performance characteristics. Work in progress includes mapping the developed FPGA cores to the communication interfaces of the DE4 Board with its Stratix IV FPGA to enable communication applications. To this end, pipelining chaotic systems can benefit from the sequential nature of some real-time communication options. A variety of improvement opportunities are identified for a set of promising lines of future research work. The work presented in this paper addressed the Euler discretization method, which is considered simple enough but requires a relatively small integration step to ensure stability and accuracy, followed by the RK-4 method, which is the most widely used algorithm, as it is considered the best compromise between simplicity and accuracy. Both algorithms were almost always successful for all data precisions and operating frequencies. For future work, it is suggested to try other discretization methods, such as the Heun algorithm, which needs only two integration stages and could be a better upgrade of the Euler method (a sketch follows below). In addition, other higher-order methods, similar to RK-6, could be explored in an attempt to avoid the scenarios where the implementation failed, given the constraints of the used hardware. These can include the Bogacki-Shampine and Dormand-Prince algorithms. Moreover, the effect of the discretization methods on the accuracy of reconstructing transmitted messages in CBSCs and on the integrity of the standards for TRNG are possible extensions to this paper. Comparing the computational overhead of discretization against directly using discrete chaotic systems, such as the Logistic or Hénon maps, would be an interesting exploration for future work as well.
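As an illustration of the suggested Heun upgrade (a standard predictor-corrector pair, not code from the paper), one integration step could look like this:

```python
import numpy as np

def lorenz(s):
    x, y, z = s
    return np.array([10.0 * (y - x), 28.0 * x - y - x * z, x * y - (8.0 / 3.0) * z])

def heun_step(f, s, h):
    # Predictor: plain Euler estimate of the next state.
    k1 = f(s)
    # Corrector: average the slopes at the current and predicted states.
    k2 = f(s + h * k1)
    return s + 0.5 * h * (k1 + k2)

s = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    s = heun_step(lorenz, s, 0.01)
```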
In hardware, implementing chaotic systems for heterogeneous computing systems, such as Graphical Processing Units (GPUs), Digital Signal Processors (DSPs), and their partitioned combinations, promises appealing implementation and performance characteristics. Furthermore, investigating real-time embedded-systems aspects and intercommunication scenarios can lead to a better understanding of application details. Indeed, the variety of performance-analysis indicators utilized in the evaluation process can enable the development of classification frameworks that rank implementations according to their effectiveness [33,34,39].

Conclusion In this paper, the problem of implementing continuous-time chaotic systems using reconfigurable digital hardware was investigated. Different implementations were explored using three discretization algorithms that correspond to simple (Euler), high (RK-4), and very high (RK-6) accuracies. A variety of precisions were attempted, ranging from 8 to 64 bits, while evaluating the maximum operating frequency that could be obtained. Correlations between the different implementations and their corresponding throughputs, power consumptions, and area utilizations were analyzed for a given Stratix IV FPGA, together with a comprehensive comparison with similar work reported in the literature. The advantages, limitations, and possible extensions of the work presented in this paper were stated, with illustrative comparisons in the form of tables and charts. In addition, future work targeting relevant applications such as CBSCs and TRNG was suggested. The unique investigation of the RK-6 discretization algorithm was highlighted using different scenarios, including the additional computational overhead of implementing scaled-magnitude outputs for the chaotic Lorenz system used. This significant contribution can pave the way for implementing highly accurate and fast real-time CBSCs with encryption.

Appendix Table 2 presents the acronyms used throughout the manuscript and their definitions.

Fig 3. Time series of the states of the simulated Lorenz system, which are shown to be bounded for the given parameter values. The following ranges were observed for 50 ≤ t ≤ 100: −17.9032 ≤ x(t) ≤ +18.4669, −24.0120 ≤ y(t) ≤ +25.0480, and +4.3772 ≤ z(t) ≤ +45.6160 (5). Examining the phase space of the states, shown in Fig 4, illustrates the chaotic behavior of the system, where the famous butterfly effect is observed. The simulation was conducted employing the Simulink model, as depicted in Fig 4(e).

Fig 1. The modified ODEs representing the electronic circuit of Fig 6. Design-flow steps: 4. Design the processor Datapath by identifying, allocating, and binding resources. 5. Develop the Finite State Machine (FSM) of the control unit based on the flowchart.

Fig 11. Number of utilized DSP blocks classification. https://doi.org/10.1371/journal.pone.0299021.g011

Fig 16 presents the best-achieved results per discretization algorithm order. The 16-bit non-scaled version proved superior to the other implementations for the Euler and RK-6 algorithms. However, in the RK-4 algorithm, that implementation had the worst performance, as opposed to the 32-bit scaled version, which stood out among the other implementations, achieving significant performance results, as shown in Fig 16.
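Following up on the classification frameworks mentioned above (and the per-algorithm comparison of Fig 16), the toy ranking below scores implementations by a weighted sum of normalized indicators. Apart from the 5.34 Gbps, 896.4 mW, and 8092 LUT figures quoted in the text for the 16-bit non-scaled Euler core, all entries and weights are hypothetical placeholders.

```python
# Sketch: ranking implementations by normalized performance indicators.
# Candidate entries (except the Euler 16-bit figures) and weights are
# hypothetical placeholders, not measurements from this study.

candidates = {
    # name: (throughput_gbps, power_mw, area_luts)
    "euler_16b_nonscaled": (5.34, 896.4, 8092),
    "rk4_32b_nonscaled":   (3.10, 997.0, 41000),
    "rk6_16b_nonscaled":   (2.80, 997.0, 52000),
}
weights = {"throughput": 0.5, "power": 0.25, "area": 0.25}

def score(tp, pw, ar):
    """Higher is better: throughput up, power and area down."""
    tp_max = max(c[0] for c in candidates.values())
    pw_min = min(c[1] for c in candidates.values())
    ar_min = min(c[2] for c in candidates.values())
    return (weights["throughput"] * tp / tp_max
            + weights["power"] * pw_min / pw
            + weights["area"] * ar_min / ar)

ranking = sorted(candidates, key=lambda k: score(*candidates[k]), reverse=True)
print(ranking)
```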
Long-Term Stability of Bacterial Associations in a Microcosm of Ostreococcus tauri (Chlorophyta, Mamiellophyceae) Phytoplankton–bacteria interactions rule over carbon fixation in the sunlit ocean, yet only a handful of phytoplankton–bacteria interactions have been experimentally characterized. In this study, we investigated the effect of three bacterial strains isolated from a long-term microcosm experiment with one Ostreococcus strain (Chlorophyta, Mamiellophyceae). We provided evidence that two Roseovarius strains (Alphaproteobacteria) had a beneficial effect on the long-term survival of the microalgae, whereas one Winogradskyella strain (Flavobacteriia) led to the collapse of the microalga culture. Co-cultivation of the beneficial and the antagonistic strains also led to the loss of the microalga cells. Metagenomic analysis of the microcosm is consistent with vitamin B12 synthesis by the Roseovarius strains and unveiled two additional species affiliated to Balneola (Balneolia) and Muricauda (Flavobacteriia), which represent less than 4% of the reads, whereas Roseovarius and Winogradskyella recruit 57 and 39% of the reads, respectively. These results suggest that the low-frequency bacterial species may antagonize the algicidal effect of Winogradskyella in the microbiome of Ostreococcus tauri and thus stabilize the microalga persistence in the microcosm. Altogether, these results open novel perspectives into the long-term stability of phytoplankton cultures. INTRODUCTION Bacterial-phytoplankton interactions in the sunlit ocean fuel the biological carbon pump (Field et al., 1998) and are fundamental for our understanding of the base of the food web in marine ecosystems (Azam and Malfatti, 2007). The interactions between bacteria and phytoplankton are multifarious and may span the spectrum of relationships from mutualistic (Amin et al., 2015; Choix et al., 2018; Cooper et al., 2019) or opportunistic (Pinto et al., 2021) to antagonistic (Fukami et al., 1997; Mitsutani et al., 2001; Sohn et al., 2004; Wang et al., 2010). Mutualistic interactions are generally driven by the reciprocal needs of the specific bacterial and phytoplankton partners (Mönnich et al., 2020). These requirements encompass essential trace elements, nutrients (Amin et al., 2015), and vitamins, such as in the production and acquisition of the B vitamins (Cooper et al., 2019), given that many phytoplanktonic microalgae are confirmed auxotrophs for vitamin B 12 (Croft et al., 2005). In turn, phytoplankton cell wall products and other exudates can be utilized as carbon sources by heterotrophic bacteria (Myklestad, 1995; Christie-Oleza et al., 2017). Consequently, the phytoplankton dynamics and biomass production (Suminto and Hirayama, 1997) in the ocean (Buchan et al., 2014) are altogether affected by this range of interdomain interactions, which still remain enigmatic and poorly studied. Following isolation from environmental sampling, photosynthetic eukaryotes maintained in culture collections usually sustain a diverse microcosm of heterotrophic bacteria, which are expected to benefit from the extracellular products of the microalgae (Bell and Mitchell, 1972). The relative frequency of bacteria to microalgae is highly variable, from as low as 1:100 (Bacteria:Microalgae) to 4:10 in healthy cultures (Abby et al., 2014a), and is likely to depend on several different factors.
Among these factors, there is the identity of the microalga, since the phylogenetic spread of phytoplanktonic microalgae spans the entire eukaryotic tree of life (Not et al., 2012), the composition of the culture media, the physiological state of the microalgae, the physiological state of the bacteria, and the diversity of the bacterial community present. For example, the bacteria-to-microalgae ratio has been reported to vary with the age of the culture in the microalga Ostreococcus tauri (Mamiellophyceae, Chlorophyta), a photosynthetic picoeukaryote which has previously been isolated from a Mediterranean lagoon (Courties et al., 1994) and the NW Mediterranean Sea (Grimsley et al., 2010). During the exponential growth phase, the microalgae outnumber the bacteria, whereas the bacteria may outnumber the microalgae at a 50:1 ratio during the stationary phase, and even more markedly so during the decay phase (Lupette et al., 2016). The advances in genome sequencing of phytoplanktonic eukaryotes have unraveled an unexpected genomic diversity of associated bacteria (Abby et al., 2014a; Rosana et al., 2016; Rambo et al., 2020). However, precise knowledge about the mutualistic, opportunistic, or antagonistic nature of the interaction, and the estimation of the effect on microalgal growth or long-term stability, requires co-cultivation of the microalgae and the bacterial partners (Amin et al., 2015; Behringer et al., 2018; Johansson et al., 2019; Lian et al., 2021; Pinto et al., 2021). In our study, we took advantage of a microcosm containing O. tauri and a bacterial microbiome without external input, so to speak "in lockdown," which had maintained the microalga for more than 1 year, to characterize the pairwise interactions between the microalga and the three bacteria isolated from this microcosm. Like many phytoplanktonic microalgae, O. tauri is auxotrophic for vitamin B 12 , as it requires vitamin B 12 for growth and its genome does not encode the B12-independent form of methionine synthase (METE) (Helliwell et al., 2011). We first performed co-culture experiments to identify the nature of the short-term and long-term dynamics (up to 231 days) between the microalga and each individual bacterial strain, as well as the dynamics between the microalga and the three combinations of bacterial strains. Second, we sequenced and analyzed the microcosm to investigate the total bacterial diversity and the relative frequency of the different bacterial species present. We also investigated the genetic complementarity of the bacterial metagenome-assembled genomes (MAGs) for genes that may inform about the nature of the interaction with the microalgae: the genes involved in vitamin B12 synthesis and in bacterial secretion systems. Phytoplankton and Bacterial Strain Isolation From the Microcosm A microcosm experiment was started in triplicate with O. tauri RCC4221 100-ml cultures in L1 media in 200-ml closed flasks (Sarstedt T75 ref 83.3911), opened weekly for sampling. The microcosm, culture, and co-culture experiments were performed at 15 µmol m −2 s −1 with shaking (135 rpm) in 12:12 light-dark conditions at 15 °C. After an initial discoloration of the culture, as previously observed when O. tauri cultures are not reinoculated with fresh media (Lupette et al., 2016), the culture regained the typical green color of O. tauri cultures after 1 month.
Following 1 year of sustained green coloration, the identity of the microalgae was checked with strain-specific primers (Grimsley et al., 2010), and the long-term stability of O. tauri RCC4221 was confirmed. The bacteria were isolated from the microcosm by streaking an aliquot of the culture on marine agar (MA) Petri dishes (Difco 2216) and incubating at 20 °C in the dark. Three different single colonies among the most dominant morphotypes were picked and subcultured two times on MA plates until pure cultures were obtained. Then, each selected strain was transferred into marine broth (MB) tubes at 20 °C, 100 rpm, in the dark. After 72 h of growth, 3 ml of these cultures was used for cryopreservation in 5% dimethylsulfoxide or 35% glycerol, put into a −80 °C freezer, and added to the Banyuls Bacterial Culture Collection (as BBCC2900, BBCC2901, and BBCC2902, hereafter B2900, B2901, and B2902). About 1 ml of the remaining liquid culture was pelleted for DNA extraction and 16S rDNA sequencing. Axenic O. tauri cultures were obtained by adding 1% antibiotics to cultures at a 10 6 cells ml −1 cell concentration in L1 ASW media, as previously described (Sanchez et al., 2019). To investigate the effect of the co-culture of O. tauri on bacterial growth, we compared the temporal dynamics of bacteria in co-cultures and in a medium without O. tauri cells, hereafter coined exudate media. The exudate media were prepared as follows: exponentially growing cultures of O. tauri in L1 medium were filtered through 0.02 µm, so as to keep the O. tauri exudates while excluding particulate organic matter larger than 20 nm as well as microalgal and bacterial cells. The co-culture experiments were performed in 10-ml glass tubes as follows: 0.6 ml of bacterial cultures (at a cell concentration between 10 8 and 10 9 cells ml −1 ) was added to 6 ml of axenic microalga culture (4 × 10 7 cells ml −1 ) grown in L1 ASW media. Cytometry Measurements For flow cytometry counts of microalgae and free-living bacteria, 0.05 ml of culture was sampled, diluted at 1:10-1:10,000, and fixed for 15 min in the dark with final concentrations of electron-microscopy-grade glutaraldehyde of 0.25% and Pluronic F-68 of 0.01% (Marie et al., 2014), flash-frozen in liquid nitrogen, and stored at −80 °C until the analysis. Cell counts were performed with a BD FACSCanto II Flow Cytometry System [3-laser, 8-color (4-2-2), BD Biosciences] equipped with a 20-mW, 488-nm Coherent Sapphire solid-state blue laser. The analyzed volumes, and the subsequent estimations of cell concentrations, were accurately calculated using Becton-Dickinson Trucount beads. Phytoplankton and bacterial cells were discriminated and enumerated according to their side scatter (SSC) properties, together with red fluorescence (>670 nm) due to chlorophyll pigments or green fluorescence due to SYBR Green I staining of the bacterial DNA [1:10,000 final concentration (Marie et al., 1997)], respectively. Data were acquired using the DIVA software provided by BD Biosciences. Metagenomics of Microcosm and 16S rDNA Sequencing From Bacterial Isolates DNA extraction and purification for 16S rDNA sequencing of B2900, B2901, and B2902 were carried out with the Wizard Genomic DNA Purification Kit (Promega) according to the manufacturer's instructions. PCR and 16S rRNA gene sequencing were done as previously described (Fagervold et al., 2013) using the BIO2MAR platform facilities. Universal bacterial primers 27F and 1492R were used for PCR amplification. PCR products were cleaned up with the AmpliClean Magnetic Bead PCR Clean-up Kit (NimaGen).
Cleaned amplicons were sequenced with the internal 907R primer using the BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems) and cleaned up with the D-Pure Dye Terminator Removal kit (NimaGen). The cycle sequencing products were loaded into an AB3130xl genetic analyzer (Life Technologies). Partial 16S rDNA sequences of these three strains were completed with metagenomic contigs, and full-length 16S rDNA sequences were submitted to GenBank under accession numbers OK396682, OK396683, OK396702, and OK396703. About 10 ml of the microcosm was sampled in February 2019 (between day 148 and 163 in Figure 1) and used for DNA extraction with a modified CTAB protocol (Winnepenninckx et al., 1993), concentrated to 0.043 ml (final concentration 0.03 mg ml −1 ), and sequenced with the MiSeq Illumina technology (2 × 300 bp PE) on the Bio-Environnement sequencing platform of the University of Perpignan (France). The 19.3 × 10 6 PE reads were trimmed with Trim Galore 1 with options --length 100 --paired, and the resulting 10.6 Gbp of DNA sequence was assembled with metaSPAdes (Nurk et al., 2017) with parameters -k 55,77,99,127 --meta. Scaffolds with 95% nucleotide identity over 1 kb BLASTN alignments with the nuclear (Blanc-Mathieu et al., 2014) and chloroplastic or mitochondrial genomes (Blanc-Mathieu et al., 2013) of O. tauri were discarded from further analyses. Two anchor datasets were built to screen the assembly. First, the reference dataset SILVA_138.1_SSURef_NR99 (Quast et al., 2013) was used to identify 16S rDNA-containing contigs, and the complete 16S rDNA sequences were annotated with RNAmmer (Lagesen et al., 2007). Second, the reference genes and corresponding amino acid sequences involved in the adenosylcobalamin (vitamin B12), biotin, and niacin pathways were compiled from Warren et al. (2002), Helliwell et al. (2016), and Cooper et al. (2019) and the UniProt Knowledge Database (Boutet et al., 2007), and are listed in Supplementary Table 1. The presence or absence of a gene was inferred from the best BLASTN (16S rDNA) or TBLASTN (protein-coding genes) hit from the reference gene set onto the assembly with an e-value threshold < 10 −5 . The complete assemblies (available on 01/10/2021) of bacterial genomes belonging to the genera identified from the 16S rDNA were downloaded from GenBank: 86 Roseovarius, 75 Winogradskyella, 30 Balneola, and 68 Muricauda. Each contig from the metagenome was affiliated to the genus of the best blast hit (BBH) against these bacterial assemblies by BLASTN (e-value threshold < 10 −5 ) (Altschul et al., 1990). The coverage of each contig was estimated by aligning the trimmed PE reads onto the assembly with BWA (bwa-mem2 v2.0) (Li and Durbin, 2010) and SAMtools (Li et al., 2009). MAGs were obtained by binning contigs with BBH against assemblies of the same genus with similar coverage and GC content. Each MAG was subsequently annotated with Prokka (Seemann, 2014).

1 https://www.bioinformatics.babraham.ac.uk/projects/trim_galore/

FIGURE 1 | Concentrations of Ostreococcus tauri and bacteria during 50 weeks in the initial microcosm. Dots represent observed concentrations. Solid lines represent the temporal dynamics of the concentrations predicted by fitting local regression curves. Shaded areas represent the 95% confidence intervals (CIs). Note that a log 10 scale is used in the y-axis.
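The genus-level binning just described can be illustrated with a small sketch that groups contigs by their best-hit genus and filters on coverage and GC content; the record layout, the tolerance thresholds, and the example contigs are illustrative assumptions rather than the study's actual parameters.

```python
# Sketch: naive MAG binning by best-hit genus, coverage, and GC content.
# Thresholds, record layout, and example contigs are illustrative assumptions.

from collections import defaultdict
from statistics import median

# Each contig: (name, best_hit_genus, mean_read_coverage, gc_fraction)
contigs = [
    ("c1", "Roseovarius",     210.0, 0.60),
    ("c2", "Roseovarius",     195.0, 0.59),
    ("c3", "Winogradskyella", 180.0, 0.35),
    ("c4", "Muricauda",         9.0, 0.41),
]

def bin_contigs(contigs, cov_tol=0.5, gc_tol=0.05):
    """Group contigs by genus, keeping those near the genus-median
    coverage and GC content; outliers are left unbinned."""
    by_genus = defaultdict(list)
    for c in contigs:
        by_genus[c[1]].append(c)
    mags, unbinned = {}, []
    for genus, group in by_genus.items():
        cov_med = median(c[2] for c in group)
        gc_med = median(c[3] for c in group)
        keep = [c for c in group
                if abs(c[2] - cov_med) <= cov_tol * cov_med
                and abs(c[3] - gc_med) <= gc_tol]
        mags[genus] = keep
        unbinned += [c for c in group if c not in keep]
    return mags, unbinned

mags, leftovers = bin_contigs(contigs)
print({g: [c[0] for c in cs] for g, cs in mags.items()}, leftovers)
```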
The predicted protein sequences were searched for secretion system components using the Macromolecular System Finder approach (Abby et al., 2014b), adapted for the detection of flagella and bacterial secretion system components in the TXSScan tool (Abby and Rocha, 2017), implemented on the Pasteur Institute Galaxy browser with default parameters. 2 Data Analysis The dynamics of the microalgae and bacteria in the cultures were summarized by calculating the mean ± standard deviation (SD) of the minimum and maximum concentrations of cells and their day of occurrence from the values obtained for each replicate. Moreover, we calculated the reproductive rate and the daily change in the concentration of microalgae (mean ± SD) between the maximum and the minimum concentration of cells, and throughout the entire experiment for bacteria. In the case of the subculture of the initial microcosm, we calculated the initial local maximum concentration of microalgae instead of the global maximum. We compared the average values in each co-culture with those in the axenic culture of O. tauri with a t-test. In addition, to better appreciate the temporal dynamics of the microalgae and bacteria and to facilitate the visual comparison among treatments, we fitted local regression curves to the observations of concentration against time. To this end, we used the function geom_smooth of the R library ggplot2 (Wickham, 2011). To analyze the effect of the bacteria on the temporal dynamics of the microalgae, we fitted segmented regression models within each culture type separately, using the segmented R library (Muggeo, 2008). We focused on the time interval comprised between the maximum and the minimum O. tauri concentrations. We considered the natural logarithm of the concentration of microalgae as the response variable and time as the predictor. In this way, (1) we were able to identify different temporal trends within the time interval analyzed, and (2) the estimates of the slope had a biological meaning, as they corresponded to the intrinsic growth rate (r): r = (ln N f − ln N i ) / (t f − t i ), where N i and N f are the cell concentrations at the initial (t i ) and final (t f ) times, respectively (for example, a decline from 10 7 to 10 6 cells ml −1 over 24 days corresponds to r ≈ −0.096 day −1 ). Then, we compared the breakpoint, i.e., the time at which the trend changed, and the slopes estimated for the axenic culture of the microalgae (control treatment) with those obtained for each co-culture of microalgae and bacteria (or combination of bacterial strains) by looking at the overlap of the 95% confidence intervals (CIs). We removed five observations of O. tauri concentrations because they corresponded to either (1) samples with zero flow cytometry counts that were followed by non-zero abundances or (2) samples with less than 10 counts preceded and followed by samples with zero counts. In the former case, concentrations were likely different from zero but no counts were detected, whereas in the latter case, cell counts very likely corresponded to flow cytometry noise. In any case, the exclusion of these observations does not affect the interpretation of the results. All graphs and statistical analyses were performed in R version 4.1.0 (R Core Team, 2021). Ostreococcus tauri Cultures Thrive in the Company of the Microbiome in the Microcosm The O. tauri cultures inoculated in 200 ml L1 media and left without any external input maintained the typical light green coloration for 1 year.
Subsequent sampling of this microcosm during 50 weeks (Figure 1) revealed a stable concentration of microalgae (C M = 10.40 × 10 6 cells ml −1 ) and a slightly increasing concentration of bacteria (C B ), up to C B = 41.00 × 10 7 cells ml −1 , which corresponded to a 40:1 bacteria-to-microalgae ratio (Figure 1). To preserve the initial microcosm and proceed to long-term monitoring, we decided to replicate the microcosm by subculturing 1 ml into tubes containing 3 ml of sterile L1 ASW media. This resulted in a change of the microalgae-bacteria dynamics and equilibrium (Figure 2). The concentration of the microalgae reached C M = 1.04 × 10 7 cells ml −1 within 2 weeks, whereas the bacteria reached the value observed in flasks after 22 weeks. However, and contrary to what had been observed in the original microcosm, there was a slight increase in the concentration of the microalgae after day 79 (reproductive rate = 1.01 ± 0.00, corresponding to 0.05 ± 0.02 × 10 6 cells ml −1 day −1 ) and of the bacteria during the entire experiment (reproductive rate = 1.02 ± 0.00, corresponding to 1.94 ± 0.24 × 10 6 cells ml −1 day −1 ). As a result, the bacteria-to-microalgae ratio ranged from 7:10 to 58:1 along this experiment. In conclusion, while the stability of the microalgae concentration observed in the original microcosm could not be strictly reproduced in the subcultures, as the bacteria/microalgae ratio increased from 40:1 to 58:1, both the microalga and the bacteria could be maintained at high concentrations over the complete 231 days of the experiment (Figure 2). Some Bacteria Have Beneficial Effects Whereas Others Have Deleterious Effects on the Persistence of the Microalgae To assess the role of individual bacterial strains of the bacterial community of the microcosm, we isolated three strains and proceeded to co-culture experiments with O. tauri cultures treated with antibiotics. Long-term removal of 100% of the bacteria in an Ostreococcus culture below 10 4 cells ml −1 is delicate to achieve, and this is likely to be due to bacterial

FIGURE 2 | Concentrations of O. tauri and bacteria during 33 weeks in a subculture of the initial microcosm. Dots represent observed concentrations. Solid lines represent the temporal dynamics of the concentrations predicted by fitting local regression curves. Shaded areas represent the 95% CIs. Note that a log 10 scale is used in the y-axis.

In sharp contrast to the co-culture with Roseovarius, the co-culture of O. tauri and Winogradskyella strain B2901 leads to the loss of the microalgae population after 36 days (Figure 3D and Table 1). The decrease of O. tauri in the co-culture with Winogradskyella was even faster than the decrease observed in O. tauri axenic cultures before day 29, as the slope coefficient for the relationship between cell concentration and time is 24% lower and the 95% CIs of the slopes do not overlap (Table 3 and Supplementary Figure 1). Effect of the Microalga on the Bacteria We further investigated the effect of the microalga on the bacteria by comparing the dynamics of the bacteria with and without (exudate media) the microalgae. For the two Roseovarius strains, both co-culture and culture in exudate media led to initial growth (Figures 4A,B). For Winogradskyella, as opposed to culture in exudate media, co-culture led to a decay in bacterial concentrations until day 20 (Figure 4C), at which point the microalga had decayed below 10 6 cells ml −1 (Figure 4C).
After day 20, the concentration of Winogradskyella increased to reach a plateau once the microalgae had died. In conclusion, the microalgae and their exudates promoted the growth of Roseovarius, whereas the microalgae had a negative effect on the growth of Winogradskyella. Altogether, these observations suggest that the Roseovarius-O. tauri interactions are mutualistic and that the Winogradskyella-O. tauri interactions are antagonistic. Combining Antagonistic and Mutualistic Bacteria Does Not Reestablish Long-Term Survival of the Microalga We further investigated whether the antagonistic effect of the Winogradskyella strain could be compensated by the addition of the beneficial Roseovarius strains. This was not the case since, whenever the Winogradskyella strain was added into a co-culture experiment, the concentration of the microalgae would reach null values within 36 days (Table 1 and Figure 5). As a conclusion, the long-term stability of the microalgae in the microcosm experiment could not be reproduced with the three isolated strains together, but only with either one, or the combination, of the two Roseovarius strains B2900 and B2902. Therefore, it is likely that additional bacteria are tempering the antagonistic effect of the Winogradskyella present in the microcosm. Metagenomic Insights Into the Total Bacterial Diversity Within the Microcosm The assembly of the microcosm led to 1324 contigs (total 58.8 Mbp). Following the removal of the contigs aligning to the O. tauri nuclear or organellar genomes (refer to section "Materials and Methods"), the bacterial diversity of the microbiome was inferred from 678 contigs (total 16.5 Mbp, average contig length: 24.3 kbp, 240 contigs with length >1 kbp adding up to 16.3 Mbp). Screening this assembly for 16S rDNA confirmed the presence of Roseovarius and Winogradskyella sequences, which were 100% identical to the partial 16S rDNA Sanger sequences obtained from the bacterial isolates B2900, B2901, and B2902. The complete 16S rDNA sequences of Roseovarius and Winogradskyella were extracted from the metagenome assembly, along with the 16S rDNA sequences of two additional lineages: Muricauda and Balneola. Interestingly, and without surprise, the BBHs of these 16S rDNA sequences against GenBank have all been sampled from the marine environment, which includes a strain isolated from the culture of a diatom microalga (Table 4).

Table legends: Min C M : minimum concentration of microalgae (10 6 cells ml −1 ). Max C M : maximum concentration of microalgae (10 6 cells ml −1 ). Day min C M : day of the minimum concentration of microalgae. Day max C M : day of the maximum concentration of microalgae. R Mmax−min : reproductive rate between the initial maximum and the minimum concentration of microalgae. C Mmax−min : average daily change in the concentration of microalgae between its maximum and minimum concentration (10 6 cells ml −1 day −1 ). Significant differences with the axenic microalgal culture are indicated with one (p-value < 0.05) or two (p-value < 0.01) asterisks. Min C B : minimum concentration of bacteria (10 6 cells ml −1 ). Max C B : maximum concentration of bacteria (10 6 cells ml −1 ). Day min C B : day of the minimum concentration of bacteria. Day max C B : day of the maximum concentration of bacteria. R B : reproductive rate of bacteria throughout the entire experiment. C B : average daily change in the concentration of bacteria throughout the entire experiment (10 6 cells ml −1 day −1 ). SD is not provided as there was only one replicate.
Taxonomic affiliation of the metagenome onto available assemblies assigned to these four bacterial genera led to MAG assemblies of 3.1 (Winogradskyella) to 4.8 Mb (Muricauda) (Table 5). The MAG coverage and GC content statistics clearly separated the Roseovarius (60% GC) and Winogradskyella (35% GC) affiliated contigs from the Muricauda + Balneola cluster (Figure 6). The Roseovarius MAG assembly shared a very high sequence identity (>99.9% nucleotide identity over >500 kbp) with a genome assembly affiliated to R. mucosus strain 85A, which had been isolated from the culture of a diatom microalga, whereas the MAGs affiliated to Winogradskyella, Balneola, and Muricauda shared up to 86% nucleotide identity with sequences available from GenBank (Table 5). The percentage of reads affiliated to each genus is thus 57% for Roseovarius, 39% for Winogradskyella, 1% for Balneola, and 2% for Muricauda (Table 5). The relative coverage of each MAG can, in turn, be used to estimate the relative frequency of each strain, that is, 49% Winogradskyella, 47% Roseovarius, 2% Muricauda, and 1% Balneola. Metagenomic Insights Into the Identity of the Vitamin B12 Producer and the Presence of Secretion Systems The search for genes encoding the niacin, biotin, and adenosylcobalamin pathways suggests the presence of a complete adenosylcobalamin (vitamin B12) pathway in the Roseovarius MAG, with 18 genes detected (cobA, cobI, cobJ, cobM, cobF, cobK, cobL, cobH, cobB, cobO, cobQ, cobU, cobP, cobD, cobS, cobV, cobC, and cobT; Supplementary Table 2). As for the niacin and biotin pathways, which have been demonstrated to be incomplete in a Dinoroseobacter strain depending on O. tauri for niacin and biotin synthesis (Cooper et al., 2019), none of the MAGs seems to contain the complete gene complement for both pathways. The complete gene pathway for biotin has been identified in the Muricauda MAG, whereas it is incomplete in the Roseovarius, Balneola, and Winogradskyella MAGs (Supplementary Table 2). However, MAGs may not correspond to complete genome assemblies, so the absence of a gene is not as informative as its absence from a complete genome assembly. Interestingly, available genome data from other strains suggest that the biotin pathway is complete in some Roseovarius and Balneola strains, that the niacin pathway is complete for some Muricauda strains, and that the vitamin B12 pathway is complete in some Roseovarius strains (Supplementary Table 1). As a conclusion, the gene content analysis of the MAGs suggests that the Roseovarius strains present in the microcosm provide the microalga O. tauri with vitamin B12. Protein secretion systems are complex molecular machineries that translocate proteins through the outer bacterial membrane and sometimes through the membrane of a eukaryotic cell (Denise et al., 2020). The screening of the four MAGs for secretion systems (Abby and Rocha, 2017) did not allow the identification of the T4SS candidate gene complement within the MAGs. However, we identified the candidate genes for T1SS in all four MAGs, for T9SS in the Winogradskyella and Muricauda MAGs, and the candidate genes involved in the flagella within the Roseovarius MAG (Supplementary Table 3).

Table 4 notes: (Agogué et al., 2005). BBH, best blast hit against GenBank. *BBH from uncultured isolates have been excluded. Relative concentration of these four bacteria in the microbiome.
We thus conclude that the Roseovarius strain may be motile, as observed in many Rhodobacteraceae (Bartling et al., 2018), though additional gene expression analyses would be required to check whether these genes are indeed expressed within the microcosm. DISCUSSION Of the Importance of Long-Term Co-culture Experiments We have isolated novel bacterial strains from a stable microcosm experiment started with a non-axenic O. tauri culture and provided evidence of the individual effects of these isolates on microalgal growth and long-term stability. The two Roseovarius isolates can be considered to belong to the same species, as they share an identical 16S rDNA sequence, and the co-culture experiments demonstrated that they have a beneficial effect on the long-term survival of the microalga. Analysis of the gene content of the Roseovarius MAG from the microcosm suggests that the Roseovarius strains are the unique producers of vitamin B12 in the microcosm, whereas O. tauri may provide niacin. However, there is no evidence of a type four secretion system (T4SS), whereas T4SSs have recently been demonstrated to be required for establishing the beneficial effect of another Rhodobacterales, Dinoroseobacter, on the growth rate of a dinoflagellate (Mansky et al., 2022). Unlike Roseovarius, Winogradskyella has a deleterious effect on microalgal growth and long-term survival, accelerating the decrease in the concentration of microalgae by 24% during the first 29 days of the co-culture (R = 0.65 vs. 0.86, for co-culture vs. axenic conditions, respectively; Table 3 and Supplementary Figure 1). The analyses of the gene content of the Winogradskyella MAG suggested that it encodes a T9SS, which provides either a means of movement called gliding motility or a weapon for pathogenic bacteria (Lasica et al., 2017). This complex has so far only been identified within the Bacteroidetes phylum (Abby and Rocha, 2017), to which Balneola, Muricauda, and Winogradskyella belong. To our knowledge, phytoplankton-bacteria co-culture experiments are only exceptionally monitored for more than 30 consecutive days, with the notable exception of a 200-day Synechococcus-Roseobacter co-culture experiment (Christie-Oleza et al., 2017). Our study demonstrates the importance of long-term experiments, as the first 15 days of co-culture may wrongly suggest stable concentrations of microalgae. Indeed, the collapse of the microalgae populations in co-culture with both Roseovarius and Winogradskyella could only be observed after 15 days (Figure 5). Obviously, the microalgal and bacterial cells will accumulate mutations and evolve over the course of a long-term experiment (Krasovec et al., 2017). Interestingly, we observed that the number of bacterial cells tended to increase (slightly) over the course of the experiment (Figures 1, 2), whereas the number of microalgae only increased in the subcultured microcosm (Figure 2). Given that there is no external nutrient input into the system, this tendency suggests ongoing adaptation to the available resources in the microcosm. The ratio of heterotrophic bacteria to microalgae at the end of both the initial microcosm (40:1) and the subculture of the microcosm (58:1) may be compared with the ratio between heterotrophic bacteria and photosynthetic picoeukaryotes in the natural environment.
This ratio can be estimated by cytometry and has been estimated to vary between 9:1 and 216:1 at the Station d'Observation Laboratoire Arago (SOLA, 42°29′ N, 03°08′ E) throughout the sampling performed every 2 weeks during 2019 (David Pecqueur, personal communication). Nevertheless, the absolute concentrations in our experiments were markedly higher than at SOLA (bacteria range = 0.08 × 10 6 -0.22 × 10 6 cells ml −1 ; picoeukaryotes range = 0.49-13.90 × 10 3 cells ml −1 ), and this is likely the consequence of the initially higher availability of nutrients in the L1 culture media when the microcosm experiment was started. Alonso-Sáez et al. (2007) also reported concentrations of heterotrophic bacteria 1-2 orders of magnitude higher than those of picocyanobacteria and autotrophic picoeukaryotes during a monthly sampling carried out in 2003-2004 in the North-Western Mediterranean Sea. In terms of carbon biomass, heterotrophic bacteria are usually less abundant than phytoplankton in coastal waters, although the proportion of bacteria increases with the oligotrophy of the system, and their biomass is frequently higher than that of phytoplankton in open oceans (Gasol et al., 1997). From the Laboratory to the Environment: Is the Ostreococcus-Roseovarius Coexistence Prevalent in the Environment? Roseovarius strains have previously been reported to be present in algal cultures, which include O. tauri cultures (Abby et al., 2014a). The Roseovarius sp. MS2 strain commonly grows in cultures of the macroalga Ulva mutabilis, where it takes advantage of the dimethylsulfoniopropionate (DMSP) released by the macroalga and, in turn, releases compounds that promote the proper development of the macroalga (Kessler et al., 2018). A previous 4-day co-culture of Roseovarius mucosus strain SMR3 and Skeletonema marinoi, a centric diatom, demonstrated that this bacterial strain stimulated the growth rate of the microalga (Johansson et al., 2019). Roseobacter, a group belonging to the same order as Roseovarius (i.e., Rhodobacterales), is common in coastal waters, and their abundances are correlated with Chl a concentrations at a global scale, which could suggest an association with phytoplankton communities (Alonso-Sáez et al., 2007; Wietz et al., 2010; D'Ambrosio et al., 2014). In this regard, it was recently reported that Rhodobacterales usually represented 5-10% of the total prokaryotic abundance in surface waters in the Western Mediterranean Sea during mid-spring, when the phytoplankton bloom occurs (Sebastián et al., 2021). The global analysis of 313 TARA Ocean metagenomes from 68 stations for taxon co-occurrence, based on barcodes from the 18S rDNA and 16S rDNA sequences, identified 36 robust associations involving Ostreococcus (Lima-Mendez et al., 2015). Ostreococcus concentration was positively associated with other eukaryotic taxa 35 times, whereas the only robust co-occurrence with a bacterial taxon was with the genus Rhodopirellula. It is important to note that the TARA Ocean sampling sites included mostly open ocean waters and that the corresponding sequenced communities did not contain sequence data affiliated to O. tauri but to two divergent sister lineages, O. lucimarinus and O. spp. RCC809 (Leconte et al., 2020). So, while the Roseovarius-Ostreococcus association has not been detected in the metagenomes analyzed in the Lima-Mendez et al.
(2015) study, this association may be revealed in future metagenomic studies that include coastal sites, where Mamiellophyceae, which include Ostreococcus, have been found to be more prevalent (Tragin and Vaulot, 2018). Alternatively, there may be no need for a taxonomic constraint on mutualistic Ostreococcus-bacteria associations, but rather a metabolic constraint. Indeed, a recent closed microbial community experiment (de Jesús Astacio et al., 2021) provided evidence of metabolic, but not taxonomic, constraints on the long-term persistence of different heterotrophic bacterial communities with the freshwater green alga Chlamydomonas reinhardtii. This metabolic redundancy between taxonomically diverse bacterial lineages may be invoked more generally to explain previous reports of a lack of overlap between bacteria-diatom associations observed in culture collections as opposed to bacteria-diatom associations observed in the natural environment (Crenn et al., 2018). Possible Applications of Bacteria for the Long-Term Stability of Microalgae Cultures Co-cultivation of microalgae and bacteria may have applications for the biomass production of microalgae. Indeed, specific bacterial strains may be used to (1) increase algal biomass, (2) limit productivity loss due to contamination by an antagonistic bacterium, or (3) lyse the microalgae as part of the harvesting process through the addition of an antagonistic bacterium at the end of the growth phase (Lian et al., 2018). Obviously, these developments require precise knowledge of the interactions between specific microalgae-bacteria pairs (Lian et al., 2018). As Ostreococcus cultures left without subculturing are lost within 4-5 weeks, Ostreococcus cultures are maintained by subculturing 200 µl into 10 ml of fresh sterile L1 culture media in transparent tubes every 3 weeks. The experimental evidence of the beneficial effect of Roseovarius on O. tauri RCC4221 opens promising avenues in microalga husbandry, as it could decrease the frequency of subculturing and, thus, the risk of contamination by antagonistic bacteria or of cross-contamination between strains during the subculturing process. DATA AVAILABILITY STATEMENT The original contributions presented in the study are publicly available. These data can be found here: partial 16S rDNA sequences of these three strains were completed with metagenomic contigs, and full-length 16S rDNA sequences were submitted to GenBank under accession numbers OK396682, OK396683, OK396702, and OK396703. Metagenome-assembled genomes of the microbiome and raw data are available from PRJNA797933. AUTHOR CONTRIBUTIONS GP planned the experiments. SV performed the co-culture experiments and drafted the first version of the manuscript. SV, MN, AC, and CS performed the cytometry monitoring. MN, AC, FS, and SV were responsible for cultures. FS was responsible for DNA extraction. LI isolated the bacteria from the microcosm, performed the 16S rDNA sequencing, and provided the cultures. LFB and GP performed the bioinformatic analyses of metagenomes. SV and CC performed the statistical analyses. LFB, CC, and GP wrote the final version. All authors contributed to manuscript editing. ACKNOWLEDGMENTS We are grateful to all Genophy group members for stimulating discussions, especially Nigel Grimsley and Hervé Moreau, and to Mathieu Chynel for starting the initial microcosm experiment in triplicate.
We would like to thank the GenoToul bioinformatic platform for access to the computing facilities and the BIOPIC and BIO2MAR platforms for access and support to the cytometry and molecular biology facilities. We would also like to acknowledge the long-term work of many people involved in the SOMLIT (https://www.somlit.fr/) national monitoring network.
A Sustainable Gel Polymer Electrolyte for Solid-State Electrochemical Devices Nowadays, solid polymer electrolytes have attracted increasing attention for their wide electrochemical stability window, low cost, excellent processability, flexibility and low interfacial impedance. Specifically, gel polymer electrolytes (GPEs) are attractive substitutes for liquid ones due to their high ionic conductivity (10−3–10−2 S cm−1) at room temperature and solid-like dimensional stability with excellent flexibility. These characteristics make GPEs promising materials for electrochemical device applications, i.e., high-energy-density rechargeable batteries, supercapacitors, electrochromic displays, sensors, and actuators. The aim of this study is to demonstrate the viability of a sustainable GPE, prepared without using organic solvents or ionic liquids and with a simplified preparation route, that can substitute aqueous electrolytes in electrochemical devices operating at low voltages (up to 2 V). A polyvinyl alcohol (PVA)-based GPE has been cast from an aqueous solution and characterized with physicochemical and electrochemical methods. Its electrochemical stability has been assessed with capacitive electrodes in a supercapacitor configuration, and its good ionic conductivity and stability in the atmosphere in terms of water loss have been demonstrated. The feasibility of the GPE in an electrochemical sensor configuration with a mediator embedded in an insulating polymer matrix (ferrocene/polyvinylidene difluoride system) has also been reported. Introduction Solid electrolytes have attracted much attention thanks to their wide application in batteries, supercapacitors, sensors, solar cells, and fuel cells [1][2][3][4][5][6]. The most important electrolyte requirements are: (i) high ionic conductivity, (ii) inertness towards the various species that may be present at the stage of assembly and/or resulting from the electrochemical or side reactions, (iii) reasonably low cost, (iv) stability in a relatively wide temperature range, and (v) chemical and electrochemical stability. The two main families of solid electrolytes are represented by inorganic-based and polymer-based electrolytes, with the presence of hybrid inorganic-polymer systems. Solid polymer electrolytes (SPEs), where the polymer can be synthetic or natural, offer advantages over liquid electrolytes by being environmentally safe, flexible, and easy to handle. Organic liquid electrolytes have played an essential role in electrochemical energy storage for several decades due to their high ionic conductivities (10 −3 -10 −2 S cm −1 ), wider electrochemical window compared to their aqueous analogues, and good interfacial contacts with electrodes [7]. In SPEs, there is no trace of solvent, and the ionic conductivity is due to the ion transport promoted by chain flexibility. Consequently, the specific conductivity at room temperature is usually in the range 10 −8 -10 −6 S cm −1 and can increase significantly when the amorphization temperature of the polymer is reached. While solid polymer electrolytes are advantageous because they provide a promising opportunity to tackle the safety issue, SPEs mostly display poor cycling performance due to their low ionic conductivity. For the GPE preparation, polyvinyl alcohol (PVA), NaCl, glycerol, and distilled water were used. The GPE was prepared with PVA:NaCl:glycerol:water weight ratios of 1:0.5:1:5, by dissolving NaCl and PVA in distilled water at 90 °C for 3.5 h under stirring.
After having eliminated the bubbles by an ultrasonic bath treatment, glycerol was added and left under stirring at 90 °C for 30 min. The solution was poured into a Teflon mold and stored at room temperature for 16 h. The hydrogel thus obtained is homogeneous, transparent and very flexible. We obtained samples with a thickness ranging from 0.5 to 1.6 mm, depending on the amount of solution poured into the Teflon mold for casting (Appendix A, Figure A1). Thinner layers of GPE were yielded by direct casting on the electrode. Several methodologies were used to assess the water retention of the GPE by varying the time, temperature and humidity. Thermogravimetric analysis (TGA) was carried out with a Q50 TA Instrument (Waters S.p.A., Milan, Italy), and electrochemical impedance spectroscopy (EIS) with a VSP potentiostat/galvanostat (BioLogic SAS, Seyssinet-Pariset, France). FTIR-ATR (Bruker ALPHA FTIR spectrometer, Milan, Italy) was used to evaluate the GPE and its single components between 400 and 4000 cm −1 , 64 scans, at RT. Differential scanning calorimetry (DSC) was carried out using a Q2000 DSC apparatus (TA Instruments, Waters S.p.A., Milan, Italy) equipped with a refrigerated cooling system (RCS90). About 8-10 mg of sample was placed in hermetic aluminum pans and subjected to a heating scan at 20 °C min −1 from −40 °C to +90 °C, quenched to −40 °C, and then heated up to 90 °C at 20 °C min −1 , under a nitrogen atmosphere. From the acquired data, the glass transition temperature (T g ) could not be determined. The humidity of the atmosphere was measured with a Trotec BC21 hygrometer (Trotec GmbH & Co. KG, Heinsberg, Germany). Electrochemical tests were carried out using different setups with a VSP potentiostat/galvanostat (BioLogic SAS, Seyssinet-Pariset, France): T-shaped Teflon cells (Bola, Bohlender GmbH, Grünsfeld, Germany) with stainless steel plugs as current collectors were used for measuring the ionic conductivity by impedance spectroscopy. Cells with titanium discs or grids as collectors were used for cyclic voltammetry and electrochemical stability tests in supercapacitor configuration. The specific currents of the galvanostatic charge/discharge cycles refer to the mass of both electrodes (m d ). The cell for tests in sensor configuration is described in Section 3.3. The thermogravimetric analysis of the GPE was performed in Ar from RT to 700 °C with a heating rate of 10 °C min −1 , as shown in Figure 1a-c. The GPE displays a first degradation step at 79 °C, due to the loss of free water [27]. A second degradation step is visible at 131 °C, due to the loss of coordination water; then, between 230 and 270 °C, there are different degradations ascribed to the decompositions of PVA and glycerol, which are also affected by the interactions between them. This first degradation step of PVA is then followed by the second-to-last degradation step at 420 °C.
DSC analyses were carried out on the GPE and on a PVA + NaCl gel (without glycerol) prepared with the same procedure described in Section 2. The calorimetric curves show an endothermic peak around −20 °C that can be attributed to the presence of water in the hosting structure [25]. The supramolecular crosslinking with glycerol increases the enthalpy of the process (from 12 to 59 J g −1 ), suggesting more interactions between the water molecules and the polymer structure of the GPE with respect to PVA alone (Figure 1d). To evaluate the solvent loss from the GPE over time, an isothermal thermogravimetric analysis was performed over 9 h at 30 °C in a mixture of argon and oxygen (80 mL min −1 and 20 mL min −1 , respectively) (Figure 1e), in order to investigate changes in the sample weight in conditions mimicking the atmosphere. Two GPE samples at different times from the preparation were analyzed: one as prepared and one aged for 24 h in air. The curve of the aged GPE was then shifted and combined with the curve of the GPE as prepared. The rate of weight loss over time was calculated during the first 40 min, and the GPE water loss is significant (ca. 30%), which can be attributed to the free water present in the system.
Furthermore, the rate of weight loss stabilizes between 0.5 and 1% h −1 , with an additional 15% weight loss in the remaining time, indicating the slower evaporation rate of the coordinated water. To assess whether this water loss impacts the conductivity, EIS spectra were carried out over time in the Teflon T-shaped cell. Impedance spectra of cells with the GPE placed between blocking stainless steel electrodes were collected over 96 or 120 h with a 5 mV (AC) amplitude in the 200 kHz-1 Hz frequency range at RT (Appendix A, Figure A3). Given that water loss likely affects the GPE thickness, the cell was disassembled after each EIS test, and the GPE thickness and diameter were measured, the former with a digital micrometer and the latter with ImageJ software. The GPE was maintained in the sealed cell for the resting time between two measurements. A similar experiment was carried out by maintaining the GPE in air, covered by a plastic box to protect it from dust, and placing it in the cell only for the EIS test. The ionic conductivity was calculated with the formula σ = l/(RA), with l the thickness, R the resistance, and A the area of the GPE. Figure 2a shows the EIS spectra performed at different times at RT, while Figure 2b displays the ionic conductivity and the thickness of the GPE over time. The resistance, thickness, and ionic conductivity values of the GPE are reported in Appendix A, Table A1. The conductivity trend obtained in the sealed cell shows that the GPE is stable in these conditions, with a conductivity that gradually decreases from 35 mS cm −1 to 5 mS cm −1 over 120 h (Figure 2a). It must be considered that the sample manipulation needed to perform the ex situ measurements of the thickness and area could accelerate the water loss, with a corresponding decrease in the ionic conductivity. On the other hand, in the open atmosphere, the GPE conductivity shows a steep drop over time due to faster water evaporation (Figure 2b). After 24 h, the ionic conductivity stabilizes around 10 −1 mS cm −1 . The ionic conductivity was also measured as a function of the air humidity (Table 1). In devices operating in the open atmosphere, the humidity could influence the water loss of the GPE and, thus, the resulting conductivity variation. Different samples were left in a closed glass container under an atmosphere with controlled humidity (10%, 50%, 90%) for 4 h, and their conductivity was measured by EIS in the SS/GPE/SS cell. All the samples showed an initial conductivity between 34 and 36 mS cm −1 .
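For reference, the conductivity formula σ = l/(RA) reduces to a one-line computation once the cell geometry is known; the sketch below uses hypothetical disc dimensions and a hypothetical resistance, chosen only so the result lands in the tens of mS cm−1 range reported above.

```python
# Sketch: ionic conductivity from EIS resistance, sigma = l / (R * A).
# Dimensions and resistance below are hypothetical placeholders, not the
# exact cell used in the study.

import math

def conductivity_mS_cm(thickness_cm: float, resistance_ohm: float,
                       area_cm2: float) -> float:
    """Ionic conductivity in mS/cm from sample geometry and resistance."""
    return thickness_cm / (resistance_ohm * area_cm2) * 1e3

# Hypothetical GPE disc: 1.0 mm thick, 8 mm diameter, 5.7 ohm resistance.
area = math.pi * (0.8 / 2) ** 2            # disc area in cm^2
print(f"{conductivity_mS_cm(0.10, 5.7, area):.1f} mS/cm")   # ~35 mS/cm
```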
On the other hand, in the open atmosphere, the GPE conductivity shows a steep drop over time due to faster water evaporation ( Figure 2b). After 24 h, the ionic conductivity stabilizes around 10 −1 mS cm −1 . The ionic conductivity was also measured as a function of the air humidity (Table 1). In devices operating in open atmosphere, the humidity could influence the water loss of the GPE and, thus, the eventual conductivity variation. Different samples were left in a closed glass container under atmosphere with controlled humidity (10%, 50%, 90%) for 4 h and their conductivity measured by EIS in SS/GPE/SS cell. All the samples showed an initial conductivity between 34 and 36 mS cm −1 . From EIS spectra of the cell with AC electrodes, reported in Appendix A, Figure A4, the conductivity values of the GPE have been evaluated at different temperatures, after 1 h resting at each temperature, and are reported in Figure 3. The electrolyte resistance has been evaluated by the intercept at high frequency of the semicircle. The conductivity increases, as expected, with temperature even if there is no significant variation between 25 • C and Polymers 2023, 15, 3087 6 of 14 80 • C, where the conductivity is less than three times higher than at 25 • C. Figure 3 also reports the value at 25 • C, recorded after 48 h when the ramp up to 80 • C was concluded, and the cell naturally cooled (red circled point). The higher conductivity can be associated with rearrangements of the chains and ions in the GPE. From the Arrhenius plot, a low activation energy of 0.23 kJ mol −1 (2.3 meV) can be evaluated, indicating good mobility of ions in the GPE. From EIS spectra of the cell with AC electrodes, reported in Appendix A, Figure A4, the conductivity values of the GPE have been evaluated at different temperatures, after 1 h resting at each temperature, and are reported in Figure 3. The electrolyte resistance has been evaluated by the intercept at high frequency of the semicircle. The conductivity increases, as expected, with temperature even if there is no significant variation between 25 °C and 80 °C, where the conductivity is less than three times higher than at 25 °C. Figure 3 also reports the value at 25 °C, recorded after 48 h when the ramp up to 80 °C was concluded, and the cell naturally cooled (red circled point). The higher conductivity can be associated with rearrangements of the chains and ions in the GPE. From the Arrhenius plot, a low activation energy of 0.23 kJ mol −1 (2.3 meV) can be evaluated, indicating good mobility of ions in the GPE. Gel polymer electrolyte has also been characterized by FTIR-ATR spectroscopy. Figure 4 shows the spectra of PVA, glycerol and GPE. In the pure PVA spectrum, the O-H stretching is present at 3200 cm −1 . A less intense vibration at 2940 cm −1 is due to the asymmetric stretching of the C-H bond [28]. The pure glycerol O-H stretching is present at 3281 cm −1 , which is more intense than that present in the PVA spectrum, given it is a triol. Another peak is around 2950 cm −1 due to the asymmetric stretching of the C-H bond. Finally, in the spectra of GPE, the vibrations due to the stretching of the O-H bond are centered at 3317 cm −1 , and the C-H bond slightly shifted at a higher wave number with respect to those of glycerol. This is mainly ascribed to the coordination water in the GPE, which cannot be removed by a mild drying procedure (as shown in the TGA of Figure 1). 
The gel polymer electrolyte has also been characterized by FTIR-ATR spectroscopy. Figure 4 shows the spectra of PVA, glycerol and GPE. In the pure PVA spectrum, the O-H stretching is present at 3200 cm−1. A less intense vibration at 2940 cm−1 is due to the asymmetric stretching of the C-H bond [28]. The pure glycerol O-H stretching is present at 3281 cm−1, and is more intense than that in the PVA spectrum, given that glycerol is a triol. Another peak, around 2950 cm−1, is due to the asymmetric stretching of the C-H bond. Finally, in the spectrum of the GPE, the vibrations due to the stretching of the O-H bond are centered at 3317 cm−1, and that of the C-H bond is slightly shifted to a higher wavenumber with respect to those of glycerol. This is mainly ascribed to the coordination water in the GPE, which cannot be removed by a mild drying procedure (as shown in the TGA of Figure 1). The presence of water is also confirmed by the broad band at 2105 cm−1 and the vibration at 1642 cm−1 [29].

Electrochemical Tests with AC Electrodes
Electrochemical measurements were performed in a cell with titanium current collectors to avoid unwanted reactions, applying a pressure of ca. 4.7 kg cm−2, i.e., 4.6 × 10^5 Pa, to improve the contact between the electrode and the electrolyte. Two-electrode symmetric cells were assembled using two activated carbon electrodes and the GPEs as electrolytes. Cyclic voltammetries (CVs) were first performed at 20 mV s−1, varying the potential window up to ±2.1 V (Figure 5a). Then, subsequent CVs were carried out at different scan rates: 5, 20, 50, and 100 mV s−1 (Figure 5b). The CVs display a typical capacitive behavior with well-defined box-shaped cycles. The electrochemical performance of the GPE was also evaluated by galvanostatic charge and discharge cycles at different currents and in different voltage ranges. Figure 6a displays the profiles at 0.1 A g−1, and Figure 6b shows the EIS spectra carried out before and after the galvanostatic cycles. The voltage profiles of Figure 6a are typical of a supercapacitor, and it is worth noting that the voltage window is higher compared to water-based electrolytes with conventional concentrations of salts.
The EIS spectra of Figure 6b were fitted with the circuit Re(RQ)WQL, because it was possible to identify the electrolyte resistance as the intercept at high frequency (Re) of the semicircle. The semicircle originates from the charge transfer resistance (R) in parallel to the related double layer capacitance (Q). At low frequency, a Warburg element (W) and the capacitance of the device (QL) are visible. The constant phase element Q was used for the fitting, instead of the capacitance, accounting for the non-ideality of the system. The electrolyte resistance of the freshly assembled cells was 3.6 ± 0.1 Ω, and 3.8 ± 0.4 Ω after the galvanostatic cycles. The corresponding electrolyte ionic conductivity is 24 mS cm−1. The equivalent series resistance, which can be evaluated both from the intercept of the semicircle at low frequency and from the ohmic drop of the galvanostatic curves, is in the order of 9.0 ± 0.5 Ω, i.e., 14.9 Ω cm2. Repeated charge and discharge tests at 0.5 A g−1 demonstrated that the GPE is stable over cycling, as reported in Figure 7, where the capacitance retention and the coulombic efficiency are plotted vs. the cycle number. The capacitance retention was evaluated as the ratio between the discharge capacitance at a certain cycle and the discharge capacitance of the first cycle. The coulombic efficiency is the percentage ratio between the discharge capacity and the charge capacity. The capacity (Qe), the capacitance (Ce) and the coulombic efficiency percentage (η%) of the single electrode were evaluated by Qe = ∫I dt/me (1), Ce = Qe/V (2), and η% = 100 Qe,discharge/Qe,charge (3), where I (A) is the discharge current, t (s) is the discharge time, me and md (g) are the mass of the active material of one electrode or of both electrodes of the device, respectively, and V (V) is the discharge voltage [30].
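A minimal numeric sketch of these relations follows; the current, time, mass and voltage are illustrative values only, chosen to land near the ~90 F g−1 reported below, and are not data from Figure 7.

```python
# Minimal sketch (illustrative values): specific capacity, capacitance, and
# coulombic efficiency from a galvanostatic cycle, following Eqs. (1)-(3).
def specific_capacity(I_A: float, t_s: float, m_g: float) -> float:
    return I_A * t_s / m_g                 # Qe = I*t/me, in C/g

def capacitance(q_C_per_g: float, V: float) -> float:
    return q_C_per_g / V                   # Ce = Qe/V, in F/g

def coulombic_eff_pct(q_dis: float, q_ch: float) -> float:
    return 100.0 * q_dis / q_ch            # eta% = 100 * Q_dis / Q_ch

q_d = specific_capacity(I_A=0.001, t_s=1800.0, m_g=0.010)  # 1 mA, 30 min, 10 mg
print(capacitance(q_d, V=2.0))                             # 90.0 F/g
print(coulombic_eff_pct(q_d, q_ch=1.02 * q_d))             # ~98%
```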
Electrochemical Tests with PVdF + 20% wt. Fc Electrode
Electrochemical tests on electrodes where the electroactive species is entrapped into an insulating polymer matrix were carried out to assess the applicability of the GPE in sensors, as well as in electrochemical soft actuators. A thin layer of PVdF + 20% wt. Fc solution in DMF was cast on an ITO glass (area = 0.72 cm2) and dried at 80 °C for 15 min. The film thickness was 0.084 mm. On another ITO glass, a thin layer of GPE solution was cast (area = 2.2 cm2), with a thickness of ca. 0.013 mm. The two ITO glasses were then assembled as in Appendix A, Figure A5a. CVs were performed on the assembled ITO/GPE/PVdF-Fc device and on ITO/GPE/ITO, both displayed in Figure A5b. To better evidence the redox process of Fc, the capacitive response of the ITO/GPE/ITO system was subtracted from the CVs of the ITO/GPE/PVdF-Fc cell; Figure 8 shows the resulting CVs of the device.

Discussion
The PVA-based GPE, prepared from a simplified water casting route, displayed good mechanical, thermal and electrochemical properties. The TGA measurements reported in Figure 1a show the degradation steps of PVA, glycerol, GPE, and the water losses. In the TGA plot related to the GPE, a first peak is observed at 79 °C, which is attributed to an initial water loss of 8% wt., due to free water still present in the polymer matrix. A second weight loss step of 32% wt. is observed at around 131 °C, attributed to the loss of water coordinated in the GPE with PVA and glycerol, forming H-bonds. Two peaks in the 230-267 °C range are attributed to the degradation of the glycerol and PVA, respectively. For PVA, the degradation occurs due to the elimination of hydroxyl groups as water, chain scission, and the formation of double bonds in the structure. Glycerol degradation leads to decomposition with the formation of volatile components, which are carried away by the argon flow [31,32].
It must be noted that in the GPE, the decomposition of these components is shifted to lower temperatures because of the weaker H-bond hetero-interactions between PVA and glycerol with respect to the homo-interactions present in the pure compounds. At 432 °C, one last degradation step occurs, which is attributed to the residue of the initial PVA. These residues contribute to the thermal degradation and possess similar structures, such as conjugated structures and carbonyl groups. Isothermal thermogravimetric analysis evidenced two different water evaporation rates, suggesting that after the fast free-water evaporation, the coordinated water remains more constrained in the GPE, contributing to maintaining the ionic conductivity. The conductivity values of the as-prepared GPEs are around 33-36 mS cm−1 at room temperature, comparable with those of samples obtained using a reflux procedure. Peng et al. reported a conductivity of 46.8 mS cm−1 for a GPE composition of 1:0.6:1:7 ratios, and they obtained higher conductivity values for the 1:1.4:1:7 formulation (92.5 mS cm−1) [23]. However, it was difficult to reach such a high concentration of salt by simply mixing the solution, as in our case. Nevertheless, the ionic conductivity of the GPE is satisfactorily high even at this low salt concentration and shows an Arrhenius-type behavior with a low activation energy (2.3 meV), indicating a small energy barrier for ion transport. As expected, in open systems, the ionic conductivity is dependent on the air humidity due to the different water evaporation rates, ranging from 0.1 to 35 mS cm−1 after 4 h of air exposure at 10 and 90% humidity, respectively. However, this problem is minimized in sealed devices or in systems that have only brief contact with the atmosphere (up to a few hours). From an electrochemical point of view, the GPE properties have been evaluated in a sealed system. The configuration was the same as a solid-state supercapacitor with activated carbon electrodes. Despite the high thickness of the GPE, the device works in a relatively wide voltage range of up to 2 V, with box-shaped CVs and linear charge and discharge voltage profiles. The electrode capacitance was ca. 90 F g−1 at the lowest current density, 1 mA cm−2, i.e., 0.1 A g−1, and decreased to ca. 70 F g−1 at 5 mA cm−2, i.e., 0.5 A g−1. The ohmic drop values are aligned with those reported for other solid-state systems with GPEs [10]. The electrochemical stability, evaluated in terms of capacitance retention over 1000 cycles at 0.5 A g−1, is good (85%) and evidences that the electrolyte is electrochemically stable. The coulombic efficiency, very near 100%, also indicates the good electrochemical stability of the system. For all these reasons, the GPE is an interesting solid electrolyte for electrochemical devices like supercapacitors, or Li-ion and Na-ion batteries that can operate in an aqueous medium, provided a suitable selection of the salt. Another field of application that can take advantage of this kind of electrolyte is that of sensors [33]. The experiment was designed to determine whether the electroactive molecule, embedded in an insulating matrix (here, Fc in PVdF), can be electrochemically stimulated in a solid-state configuration with the GPE as a solid electrolyte. In this case, we used a thin layer (<100 µm) of electrolyte directly cast on the PVdF-Fc film. In this configuration, the redox behavior of ferrocene was observed in the selected electrochemical window, from 0 V to 0.6 V.
This is a promising result for the GPE, which could pave the way for application in other systems, like electrochemical soft actuators that need solid-state configurations but, at the same time, can operate in aqueous environments.

Conclusions
A sustainable GPE has been prepared with low-cost and abundant components and by easy processing. It exhibits good mechanical, thermal and electrochemical properties, suitable for several electrochemical devices. The ionic conductivity was related to the water retention of the GPE in different conditions (resting time, temperature, and humidity), ranging at RT from 35 mS cm−1 for the as-prepared sample to 0.1 mS cm−1 for the sample stored in air for 4 h at 10% humidity. The first, closed device allowed us to demonstrate the good performance of the GPE. After 1000 cycles at a high specific current (0.5 A g−1), the capacitance retention is 85%, with a coulombic efficiency near 100%. The second device, a model sensor, indicates that a thin layer of GPE allows the closing of a circuit in which the working electrode is covered by PVdF embedding ferrocene, the electroactive species that mimics a redox mediator. We activated the redox mediator dispersed into a polymeric insulating matrix using the water-based gel polymer electrolyte. In this system, the electrochemical stimulus is transferred to ferrocene, which can be reversibly switched from the oxidized to the reduced state, and a current flows in the device. With this approach, we can also use the GPE for electrochemically stimulated soft actuators.

Institutional Review Board Statement: Not applicable.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Conflicts of Interest: The authors declare no conflicts of interest.

Appendix A
Figure A1. The as-prepared GPE (1.16 mm thick) with PVA:NaCl:glycerol:water = 1:0.5:1:5 weight ratios.
Table A1. Resistance (from Figure A3), measured thickness and diameter, and calculated ionic conductivity of GPE (reported in Figure 2) stored in a sealed cell and in open atmosphere at RT.
2023-07-21T15:18:43.965Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "4a4ee0f018922ba61c6433e8772aa43844b5a973", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/15/14/3087/pdf?version=1689736808", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "68013201c2934eb53520381d83344ef2acc58dfa", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
232055664
pes2o/s2orc
v3-fos-license
Immunoinformatic based identification of cytotoxic T lymphocyte epitopes from the Indian isolate of SARS-CoV-2 The Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has turned into a pandemic, with about thirty million confirmed cases worldwide as of September 2020. Being an airborne infection, it can be catastrophic to populous countries like India. This study sets out to identify potential cytotoxic T lymphocyte (CTL) epitopes in the SARS-CoV-2 Indian isolate which can act as effective vaccine epitope candidates for the majority of the Indian population. The immunogenicity and the foreignness of the epitopes towards the human body have to be studied to further confirm their candidacy. The top-scoring epitopes were subjected to molecular docking studies to study their interactions with the corresponding human leukocyte antigen (HLA) system. The CTL epitopes were observed to bind at the peptide-binding groove of the corresponding HLA system, indicating their potency as epitope candidates. The candidacy was further analyzed using sequence conservation studies and molecular dynamics simulation. The identified epitopes can be subjected to further studies for the development of a SARS-CoV-2 vaccine.

Prediction of cytotoxic T cell epitopes for the Indian population. NetCTLpan version 1.1 [3] was used to predict the CTL epitopes across the proteins coded by the SARS-CoV-2 Indian isolate. NetCTLpan uses a neural network to predict TAP-transporter binding and C-terminal cleavage in addition to HLA binding. Considering the HLA supertype variation across populations, we predicted the epitopes only for those HLA supertypes which constitute the majority of the human leukocyte antigen (HLA) distribution in the Indian population, keeping the cutoffs and parameters of NetCTLpan as default. The study on the evolution of HLA-A and HLA-B polymorphisms reveals that HLA A3, B7, and B44 are the major HLAs present in the Indian population [4], with HLA A3 constituting the HLA-A type in 47 percent of the Indian population, and HLA B7 and B44 constituting the HLA-B type in 30 percent and 28 percent of the Indian population, respectively.

Prediction of epitope immunogenicity. Although the binding affinities of the peptides towards HLA help in predicting the epitopes, immunogenicity plays an important role in the immune response. All the predicted epitopes were subjected to the Immune Epitope Database (IEDB) immunogenicity tool [5,6] to predict their immunogenicity score. The IEDB immunogenicity tool relies on physicochemical properties, such as side chain composition and amino acid position, to predict the immunogenicity of a peptide sequence.

Identification of unique epitopes. As the healthy human body mounts an immune response mainly towards foreign antigens, except under certain conditions such as autoimmune disorders, it is of great importance to consider only those epitopes which are foreign to the human body as potential vaccine epitope candidates. To identify the vaccine epitope candidates that are foreign to the human body, all the epitopes that show positive immunogenicity were subjected to the Multiple Peptide Match tool [7] against the human reference proteome with Proteome ID UP000005640. The Peptide Match tool is a search engine based on Apache Lucene and is designed to quickly retrieve all occurrences of the given query peptides from a reference or specified proteome.
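Taken together, the selection logic described so far is a filter-and-rank pipeline; the following is a minimal pandas sketch of it, in which the column and file names are hypothetical, not taken from the study's actual output files.

```python
# Minimal sketch of the epitope selection pipeline; column/file names assumed.
import pandas as pd

df = pd.read_csv("epitopes.csv")  # NetCTLpan output merged with IEDB scores

selected = (
    df[df["immunogenicity"] > 0]                        # positive IEDB score
      .loc[lambda d: ~d["matches_human_proteome"]]      # foreign to human proteome
      .loc[lambda d: d["netctlpan_score"] > 0]          # likely CTL epitope
      .nlargest(max(1, int(0.02 * len(df))),            # top 2% by immunogenicity
                "immunogenicity")
)
print(selected[["peptide", "hla_supertype", "immunogenicity"]])
```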
Docking studies. The top two percent of the foreign epitopes based on immunogenicity scores were selected. To further confirm the candidacy of the foreign epitopes for vaccine development, these epitopes were subjected to molecular docking studies to confirm their interactions with the specified HLA at the peptide-binding groove, considering PDB IDs 6O9C, 6AT5, and 3KPS for the structures of HLA-A3, HLA-B7, and HLA-B44, respectively. This molecular docking study was performed using the HPEPDOCK Server [8]. The docking was performed without specifying the binding site residues, to investigate whether the studied epitopes would bind at the peptide-binding groove without any lead. The interaction diagrams were generated using LigPlot+ [9].

Conserved nature of the selected epitopes. To check the conserved nature of the selected epitopes from Indian isolates of the virus, the SARS-CoV-2 genomic sequences isolated in the Indian region were obtained from GISAID [10] and aligned using the MAFFT sequence alignment server [11]. All the 2084 genomic sequences available as of 15 September 2020 were downloaded from the EpiCoV repository of GISAID. Later, the alignment was translated using the standard genetic code, and the locations containing the epitopes were extracted from the alignment. The extracted alignment was subjected to WebLogo [12] to generate sequence logos for visualizing their conserved nature in the Indian isolates.

Molecular dynamics studies. To study the stability of the HLA-epitope interactions, the structures from the docking studies were subjected to all-atom molecular dynamics simulations to explore their stability and conformational flexibility, using the CHARMM36 all-atom force field [13] and GROMACS (version 2018.2) [14]. The complex was solvated using TIP3P explicit water molecules, and in-house ad hoc scripts were used to neutralize, minimize, and equilibrate the system using GROMACS. The neutralization was performed using Cl− and Na+ ions as needed, while the minimization was performed using the steepest descent algorithm until the maximum force was less than 10.0 kJ mol−1 nm−1. The system was equilibrated using the NPT ensemble under a constant pressure and temperature of 1 bar and 300 K, respectively. Further, molecular dynamics simulations of 50 ns were performed using the leapfrog algorithm with an integration time step of 2 fs. The generated trajectories were analyzed using GROMACS analysis utilities to derive the results. The graphs showing the root mean square deviation (RMSD) between the initial and the simulated structure, the change in coulombic interaction energies between HLA and epitope over the simulation time, and the change in the number of hydrogen bonds formed between HLA and epitope were plotted to determine the stability of the HLA-epitope complex.
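The study used the GROMACS utilities for this trajectory analysis; purely as an illustration, an equivalent backbone-RMSD calculation can be sketched in Python with MDAnalysis, with hypothetical file names standing in for the actual topology and trajectory.

```python
# Minimal sketch (hypothetical file names): backbone RMSD of an HLA-epitope
# complex over a trajectory, analogous to the gmx rms analysis used here.
import MDAnalysis as mda
from MDAnalysis.analysis import rms

mobile = mda.Universe("hla_epitope.gro", "md_50ns.xtc")
reference = mda.Universe("hla_epitope.gro")  # first frame as reference

analysis = rms.RMSD(mobile, reference, select="backbone")
analysis.run()
for frame, time_ps, rmsd_A in analysis.results.rmsd:
    print(f"t = {time_ps / 1000:5.1f} ns  RMSD = {rmsd_A / 10:.2f} nm")  # A -> nm
```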
MHC class I molecules are known to be ubiquitous, while MHC class II molecules are known to be present only on select antigen-presenting cells [17]. There are also instances of viruses inhibiting MHC class II antigen presentation [18]. The IEDB immunogenicity tool was used to calculate the immunogenicity of the epitopes, as it plays an important role in examining the immune response. The IEDB immunogenicity tool returned 139 epitopes with positive scores. The data showing the immunogenicity scores of the epitopes are given in Supplementary Table S2 online. All the epitopes with positive immunogenicity scores were subjected to the Multiple Peptide Match tool to identify the epitopes that are foreign to the human body. It was observed that all the epitopes that showed positive scores for immunogenicity were foreign to the human proteome. The peptide matching step was performed on the sequence dataset 'UniProtKB release 2020_01 plus isoforms |SwissProt| Isoform', with the target organism set as 'Homo sapiens [9606]'. The output log stated that 0 out of 139 unique peptides had matches in 0 protein(s) found in 0 organism(s), confirming their foreignness to the human body. Ultimately, to select the promising epitopes, the NetCTLpan score and the immunogenicity score were employed. The NetCTLpan score was used to filter the highly likely CTL epitopes by eliminating the epitopes with a negative NetCTLpan score. The remaining epitopes were sorted based on their immunogenicity score to identify the top epitope candidates. Although there are many peptide-based vaccines against various infections in different phases of development, none of these has either been concluded or had its sequence data directly available. So, we have considered an epitope from the protein superoxide dismutase of Mycobacterium tuberculosis, which showed promising results in human subjects, as a control epitope in this study [19]. The control epitope and the top five vaccine epitope candidates, amounting to the top two percent of the total 253 epitopes predicted by the above steps, along with their epitope prediction score by NetCTLpan and their immunogenicity score, are given in Table 1. The top three vaccine epitope candidates based on their immunogenicity score were subjected to molecular docking studies using the HPEPDOCK Server. Although the top two percent of the predicted epitopes were selected for docking studies, coincidentally the top three selected epitopes were against different HLAs, covering all the HLAs from the study. So, the docking studies were performed only for the best epitope candidate for each HLA. The interaction diagrams revealed that the peptide epitopes bind to the peptide-binding groove even though they were subjected to blind docking, thus confirming their vaccine candidacy. To visualize the equivalent interactions of the reference and the epitope towards the corresponding HLA, the interaction plots of the docked epitopes and the references are shown in Figs. 1, 2, and 3, superposing the reference peptide-HLA interaction plot (foreground) onto the epitope-HLA interaction plot (background), where the peptide molecule already bound to the HLA in the PDB structure file is considered as the reference. The equivalent interactions between the reference peptide available in the PDB file and the studied epitope are circled in red to facilitate identification. The hydrogen bonds are represented by green dotted lines along with their distances, while the non-bonded interactions (salt bridges and hydrophobic interactions) are represented by red dotted lines. The residues from different loops of the heavy chain are represented by different shades of green. Chains A and B are the chains corresponding to the protein, while chain C is the epitope. The two-dimensional interaction diagrams tracing the whole backbone of the studied epitopes are shown in Fig. 4. The three-dimensional representation of the interactions between the identified epitope and the corresponding HLA is also shown in Fig. 4, with hydrogen bonds in cyan and non-bonded interactions as dot surfaces. The epitope residues are represented in magenta, either by balls or intra-residue bonds.
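The conservation reported next (Fig. 5) is essentially a per-position residue frequency over the translated alignment; a minimal Biopython sketch follows, with a hypothetical file name and hypothetical epitope coordinates.

```python
# Minimal sketch (hypothetical inputs): per-position residue frequencies at an
# epitope locus of the translated MAFFT alignment, the quantity WebLogo draws.
from collections import Counter
from Bio import AlignIO

alignment = AlignIO.read("india_isolates_translated.fasta", "fasta")
start, end = 100, 109  # assumed 9-mer epitope coordinates in the alignment

for pos in range(start, end):
    counts = Counter(c for c in alignment[:, pos] if c != "-")
    total = sum(counts.values())
    residue, n = counts.most_common(1)[0]
    print(f"position {pos}: {residue} {100 * n / total:.1f}%")
```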
As the interactions of the identified epitopes are similar to those of the reference peptides, the conserved nature of the selected epitopes was studied, and the sequence logos generated by WebLogo are shown in Fig. 5, showing the probability of occurrence of each residue. From the sequence logos, we conclude that the identified epitopes are highly conserved in the Indian isolates of the virus. To check the HLA-epitope stability of the conserved epitopes, a molecular dynamics simulation of 50 ns was performed. The RMSD plot shown in Fig. 6 suggests that the docked HLA-epitope complexes are highly stable, with an average RMSD of 0.2 nm, 0.19 nm, and 0.16 nm for the HLA-A*03-epitope, HLA-B*07-epitope, and HLA-B*44-epitope complexes, respectively. The RMSD plots of the reference complexes were studied as controls, and these complexes were also found to be stable throughout the simulation time, as shown in Supplementary Fig. S1 online. Further, the short-range coulombic interactions and the hydrogen bonding between the epitope and the HLA were analyzed to reveal the interaction stability of the epitope towards the corresponding HLA. The interaction plots shown in Fig. 7 further confirm that the epitopes form a stable complex with strong bonding to the corresponding HLA. While identifying the potential vaccine epitopes from SARS-CoV-2, this study considers all the epitopes that are not an identical subsequence of the human proteome as foreign sequences. This could lead to the epitopes showing immune camouflaging, which is generally seen in pathogens as an immune evasion strategy. This immune evasion is achieved by adopting amino acid configurations recognized by autologous regulatory T cells of the hosts, which are known for their immune-modulating capabilities [20]. Apart from immune camouflaging, cross-reactivity of the T cells is also a major concern, where the TCR recognizes more than one peptide-MHC complex. Although cross-reactivity is an essential feature of the immune system for biological robustness, in this case, where only completely identical peptide sequences are filtered out to identify the epitope candidates, the autologous peptides could also be identified for an immune response, especially in the case of an engineered TCR [21,22]. The identified epitopes could be further subjected to in-vitro and in-vivo analysis after considering these limitations. Although epitopes play an important role in the immune response, epitopes alone are not sufficient to develop vaccines, as they cannot stimulate the immune system sufficiently. So, adjuvants such as biopolymers and nanoparticles are used besides an epitope in developing a peptide-based vaccine prototype for a high immune response. These peptide-based vaccine prototypes should be further tested on a cell line for their physiological, biological, and chemical effects leading to cytotoxicity, using in-vitro cytotoxicity assays such as in-vitro titration of live organisms, enzyme-linked immunosorbent assay (ELISA), and in-vitro antigen-quantification tests. The promising prototypes could be further subjected to in-vivo studies using animal testing techniques to identify the immunogenic prototype, through the immunization of laboratory animals and the titration of immune sera to measure their antibody response.
In-vivo serology analysis methods can be used to measure antibodies in blood samples, employing techniques such as enzyme-linked immunosorbent assay (ELISA) and multiplex assays that can analyze multiple antigens simultaneously, before proceeding to further clinical phase studies [23]. Over the last few months, there have been various studies on the identification of immunogenic epitopes from SARS-CoV-2 using computational techniques. However, these studies focus on a particular viral protein [24-26], identify epitopes from SARS-CoV, which shows high sequence similarity along with sequence conservation to SARS-CoV-2 [27], or consider viral sequences from diverse geographical regions [28]. Unlike the other studies, our analysis primarily targets the Indian population, where an infectious disease could escalate very rapidly. We have also analyzed the conserved nature of the identified epitopes along with their interaction stability towards the HLAs at the peptide-binding groove, considering a known bound peptide as a reference. This could significantly increase the predictability in an in-vivo environment.

Conclusion
Designing a vaccine is of top priority in the time of a pandemic. In this study, we attempted to identify potential CTL epitopes from the SARS-CoV-2 Indian isolate for the Indian population using a bioinformatics approach. The list of CTL epitopes was predicted using the NetCTLpan server, which considers HLA binding affinity, TAP transport efficiency, and C-terminal cleavage to identify the epitopes. Further, immunogenicity scores were calculated using the IEDB immunogenicity tool to identify the potential vaccine epitope candidates. The epitopes with positive immunogenicity scores were subjected to peptide matching against the human proteome to check their foreignness to the human body. These unique immunogenic epitopes were further docked with their respective HLA molecules to study their interactions with the HLA molecule. The docking studies revealed that all the studied epitopes bind at the peptide-binding site of the HLA, confirming their epitope candidacy. The identified epitopes were checked for their sequence conservation in the viral isolates from India, and the HLA-epitope complexes were subjected to molecular dynamics simulation studies to analyze their interaction stability. The epitopes were observed to be highly conserved, and the interactions between the HLA and the epitopes were seen to be very stable, further confirming their potency in vaccine development. The epitopes identified in this study can be further subjected to in-vitro and in-vivo studies to design a vaccine against the dreadful SARS-CoV-2.
2021-02-27T05:07:50.915Z
2020-04-27T00:00:00.000
{ "year": 2021, "sha1": "5c2156da2c8165aac39675db8534fb3a98c93ea7", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-021-83949-9.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5c2156da2c8165aac39675db8534fb3a98c93ea7", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
250229789
pes2o/s2orc
v3-fos-license
Vitamin D Status in Children with Autism Spectrum Disorders: Determinants and Effects of the Response to Probiotic Supplementation A relationship between the presence of clinical symptoms and gastrointestinal (GI) disturbances associated with nutritional deficiencies, including vitamin D (25(OH)D) deficiency, has been observed in autism spectrum disorder (ASD). The aim was to evaluate 25(OH)D levels according to the annual rhythm cycle, gender, the severity of autism, nutritional or clinical status, inflammatory and metabolic biomarkers, GI symptoms, and the clinical response to probiotic/placebo supplementation in preschool children with ASD. Eighty-one ASD preschoolers (67 males) were assessed with standardized tools for ASD severity (ADOS score) and GI symptoms (by the GI-Index at six items and at nine items, the latter defined as the Total GI-Index). The 25(OH)D levels were compared among different ASD subgroups according to metabolic and inflammatory biomarkers (leptin, insulin, resistin, PAI-1, MCP-1, TNF-alpha, and IL-6), gender, and the presence or absence of: (i) GI symptoms, (ii) the response to probiotic supplementation (the improvement of GI symptomatology), (iii) the response to probiotic supplementation (improvement of ASD severity). Only 25% of the ASD children presented an adequate 25(OH)D status (≥30 ng/mL according to the Endocrine Society guidelines). All the 25(OH)D levels falling in the severe deficiency range (<10 ng/mL) were observed in the male subgroup. A significant inverse correlation between 25(OH)D and leptin was observed (R = −0.24, p = 0.037). An inverse correlation was found between 25(OH)D levels and the GI Index 6-Items and the Total GI-Index (R = −0.25, p = 0.026; R = −0.27, p = 0.009), and a direct relationship with the probiotic response (R = 0.4, p = 0.05). The monitoring of 25(OH)D levels and the co-administration of 25(OH)D and probiotic supplementation could be considered in ASD from early ages.

Introduction
Autism spectrum disorders (ASD) are neurodevelopmental disorders characterized by persistent social communication difficulties with concurrent restricted interests, repetitive activities, and sensory abnormalities [1]. The pathogenesis of ASD is complex and not yet fully clarified, but it is widely recognized that genetic liability and environmental factors interact in producing the early alteration of structural and functional brain development responsible for ASD symptoms [2]. Emerging evidence indicates that gestational or developmental vitamin D (25(OH)D) deficiency may be associated with an increased ASD risk, likely due to its known pleiotropic effects, including those on the central nervous system, where 25(OH)D plays a role in brain development and numerous neuronal functions.

Characteristics of the Population
General and clinical characteristics of the 81 participants are reported in Table 1. A higher prevalence of males was observed. We did not find statistically significant differences between the GI and No-GI groups, or between females and males, as far as age, BMI, ADOS CSS, and the other studied blood parameters are concerned.

Annual Rhythm Cycle and Anthropometric Characteristics
Levels of 25(OH)D did not differ when considered according to daylight saving time (DST) (26.6 ± 8.9 vs. 22.8 ± 10.7 ng/mL, in DST vs. no-DST, p = NS). However, when seasonality was considered, 25(OH)D levels were slightly higher in summer/autumn (28.5 ± 8.3 ng/mL) as compared to spring/winter (21.1 ± 7.0 ng/mL), although the values did not reach a statistically significant difference.
There was no seasonal difference in levels based on gender (p = 0.6), and no significant difference in mean 25(OH)D between females and males (23.6 ± 7.2 ng/mL vs. 25.1 ± 10.5 ng/mL). In Figure 1, the distribution of 25(OH)D ranges according to the Endocrine Society's guidelines is reported (adequate levels ≥30, insufficient 21-29, deficient <20 ng/mL, with severe deficiency for values <10 ng/mL) [20] in the overall population and in the two sexes. Of note, all the 25(OH)D levels falling in the severe deficiency range (<10 ng/mL) were observed in the male children subgroup (one taken in autumn, one in spring, two in winter).

25(OH)D According to Blood Parameters and BMI
A significant inverse relationship (R = −0.24, p = 0.037) was shown between 25(OH)D and leptin. We also found a linear regression between leptin levels and BMI (R = 0.34, p = 0.002), but not between leptin and the 6GI-Index or Total GI-Index, nor between 25(OH)D and BMI.

25(OH)D According to GI and ADOS
The regression analysis between 25(OH)D and all the variables reported in Table 1 was performed. Significant relationships between 25(OH)D and the GI Index 6-Items (R = −0.25, p = 0.026) and the Total GI-Index severity score (R = −0.27, p = 0.009) were found. Accordingly, the levels of the 6GI-Index and the Total GI-Index severity score significantly increased according to the 25(OH)D reduction (Figure 2). Conversely, no significant correlation with the ADOS parameters was found.
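For illustration only, correlations of the kind reported above can be computed as in the sketch below; the arrays are simulated stand-ins, not the study data.

```python
# Minimal sketch (simulated data): Pearson correlation of the kind reported
# above, e.g., 25(OH)D vs. leptin (R = -0.24, p = 0.037) in n = 81 children.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
vitd = rng.normal(25.0, 9.0, 81)                    # 25(OH)D, ng/mL
leptin = 10.0 - 0.1 * vitd + rng.normal(0, 2, 81)   # toy inverse relationship

r, p = stats.pearsonr(vitd, leptin)
print(f"R = {r:.2f}, p = {p:.3f}")
```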
Multivariate Regression Analysis for 25(OH)D
A multiple regression analysis was also applied to verify the effect of the significant variables (leptin and the Total GI severity score) in determining the 25(OH)D concentration. The multiple regression analysis showed that leptin (T-value −2.1, p = 0.048) and the Total GI severity score (T-value −2.7, p = 0.007) remained independent determinants affecting the 25(OH)D levels in our population.

25(OH)D According to ADOS Total Score Improvement Due to Probiotics
In the placebo group, there were 9 children in the "ADOS Total Score Improved" group, 12 in the "ADOS Total Score Unchanged" group, and 11 in the "ADOS Total Score Worsened" group. These groups did not show any difference in mean baseline 25(OH)D levels (24.2 ± 12.6, 28.1 ± 12.2, and 23.7 ± 7.7 ng/mL in the "Improved", "Unchanged", and "Worsened" ADOS Total Score groups, respectively, p = NS). Instead, a significant relationship was found between 25(OH)D and the response to probiotics treatment, measured by the decreased ADOS Total score in the probiotic group (n = 31) (R = 0.4, p = 0.05). Moreover, when the group treated with the probiotic was stratified depending on the different responses measured as the delta ADOS Total Score, the children in the "ADOS Total Score Improved" group (n = 14) showed the highest 25(OH)D status (Figure 3) (29.9 ± 9.9 versus 21.2 ± 6.3 ng/mL in the 11 children of the "Unchanged" group and 20.7 ± 8.8 ng/mL in the 6 children belonging to the "Worsened" group, respectively). Notably, all the children with markedly reduced 25(OH)D (<10 ng/mL) were in the group of worsened ADOS Total Score (negative predictive power of 100%). None of the anthropometric and biochemical variables influenced the ADOS Total Score improvement at the univariate analysis, except for 25(OH)D. Having 25(OH)D below 30 ng/mL carries a 5.6-fold higher risk of a lack of improvement in ADOS after 6 months of probiotic supplementation (confidence interval 1-35, p ≤ 0.05).

25(OH)D According to GI Improvement Due to Probiotics
When the children, stratified by treatment response, were evaluated for GI symptoms, no significant differences in 25(OH)D values were observed among the three groups treated with the probiotic (GI-Index worsened, GI-Index unchanged, and GI-Index improved).
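A minimal sketch of how a risk estimate of this kind and its confidence interval can be derived from a 2×2 table follows; the counts are invented for illustration and do not reproduce the reported 5.6 (CI 1-35).

```python
# Minimal sketch (toy 2x2 counts): odds ratio with a Wald 95% CI, the kind of
# estimate behind the reported risk of non-improvement with 25(OH)D < 30 ng/mL.
import math

# rows: 25(OH)D < 30 vs >= 30 ng/mL; cols: not improved / improved (toy counts)
a, b = 14, 9   # low 25(OH)D: not improved, improved
c, d = 3, 5    # adequate 25(OH)D: not improved, improved

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f}, 95% CI = [{lo:.1f}, {hi:.1f}]")
```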
Population Characteristics
Our study population consisted mainly of males, with a ratio between ASD males and ASD females similar to that reported in the literature [21]. The children were all of preschool age, as the intent of the original study was to verify the effect of the probiotic on autistic symptoms, hypothesizing that children in their first years of age retain greater neuronal plasticity, with neurodevelopmental processes still in progress, and as such could benefit more from this supplementation [22]. No significant abnormalities in the inflammatory and metabolic biomarkers analyzed were observed in the studied population. We previously reported a lack of reference values for non-routine biomarkers such as cytokines, especially in the pediatric range [17]. The results for these biomarkers may vary according to the tests and instrumentation used, as well as to specimen sampling and storage. Nonetheless, our values are comparable to those reported in the literature for children of comparable age [23,24].

25(OH)D, Anthropometric Data, and Annual Rhythm Cycle
In recent years, many studies have examined the link between 25(OH)D and ASD by comparing the 25(OH)D levels of ASD children with controls. The review by Alzghoul reported lower levels of 25(OH)D in ASD as compared to the control samples, with a significant percentage of ASD patients with insufficiency/deficiency [8]. In particular, the percentages of ASD patients with deficient or insufficient vitamin D levels were 86% [25], 87% [26], and 100% [27] compared to typically developing children. In a similar vein, 75% of the subjects we examined showed 25(OH)D deficiency (under 20 ng/mL) or insufficiency (between 21 and 29 ng/mL). Our report did not detect a significant difference in 25(OH)D levels between female and male children, as also reported in a previous study [28]. Nonetheless, the fact that in our sample all subjects with severe 25(OH)D deficiency were males merits further investigation. In fact, low 25(OH)D levels in pediatric age may further worsen during adolescence, a critical period when the restructuring process of bone development occurs, thus potentially interfering with proper growth in this age stage. In addition, beyond gender-related differences in 25(OH)D levels, the capacity to utilize 25(OH)D may differ between the sexes.
Accordingly, Cannell [29] argues that the higher prevalence of ASD in males could be partly related to the fact that 25(OH)D metabolism may markedly differ under the effects of the sex hormones, in particular estrogen, which can enhance the beneficial effects of 25(OH)D on brain development. This consideration is supported by studies showing that the developing brain of a female fetus could more efficiently use the available 25(OH)D due to its higher estrogen levels, as opposed to the brain of a male fetus, with its higher testosterone levels [30]. In a situation where the levels of 25(OH)D are more than sufficient, the differences due to the distinct actions of the sex hormones could be overcome [29]. On the other hand, a condition of 25(OH)D deficiency, both maternal and in early childhood, could contribute to abnormal brain development, favoring ASD onset with a higher incidence in males [11,31]. Accordingly, recent experimental data confirmed that 25(OH)D deficiency increases testosterone levels in maternal blood and male embryonic rat brains [32]. Therefore, a 25(OH)D-deficient status could represent a predisposing factor for ASD onset, increasing foetal exposure to testosterone. The active steroid 25(OH)D is obtained by dietary uptake or mainly synthesized in human skin after exposure to sunlight, and it is known to vary according to seasonality. In our population, no significant differences were found as far as the 25(OH)D annual rhythm cycle is concerned, although the levels in summer/autumn were higher compared to those taken during spring/winter, suggesting the important contribution of sun exposure to achieving higher 25(OH)D levels, and the importance of outdoor activities in these children. Of note, all over the year the average values remained suboptimal as compared to the recommended level (according to the Endocrine Society's guidelines), probably due to poor sun exposure. In fact, although ASD children should benefit from sunlight, either they tend to refuse communal play outdoors or their parents are likely to keep them indoors, since they cannot be left alone to play outdoors like typically developing children [28]. In support of the critical role of exposure to sunlight, a significant positive association between latitude and the prevalence of autism has been reported [13], and Grant and colleagues [33] found that children who live in areas with low UVB light have almost three times the prevalence of ASD compared to children who live in sunny areas. Moreover, ASD children may show particular dietary habits, often having food selectivity and restricted diets, which expose them to an increased risk of micronutrient deficiencies [34]; thus, 25(OH)D synthesis or intake may be reduced in these children. Based on these assumptions, the monitoring of vitamin D levels could be considered in autistic children, especially in males, to take protective measures and treat this condition as early as possible.

25(OH)D, Blood Biomarkers, and BMI
Some studies have described that leptin in ASD subjects is higher than in typically developing controls [19,35]. This hormone has an important role in the regulation of food intake and body weight [36], and its expression by adipose tissue is also influenced by feeding behavior [37]. In Castro's study [38], ASD participants showed higher levels of leptin in comparison with typically developing children, and a positive correlation between leptin and fat mass was demonstrated, bringing out the role of leptin as a marker of adiposity in ASD children.
Initially, the adipokines, hormones synthesized mainly by the adipocytes, were associated with eating disorders and obesity, but later studies showed their important role in the regulation of immune responses and inflammation; for this reason, their involvement in the pathophysiology of autism was hypothesized [39]. Beyond ASD, an inverse association between leptin levels and 25(OH)D concentration was found in observational studies [40]. A recent review indicates that leptin plays roles in immunity and in the regulation of insulin secretion and sex hormone release, promotes lipolysis in adipocytes, and modulates plasticity in learning- and memory-based behavioral tasks [41]. The presence of leptin receptors in specific regions of the brain implies the potential effect of this hormone in multiple mechanisms related to the function and structure of the brain [42]. In fact, leptin shares structural and functional similarities with several cytokines, many of which are involved in neurodevelopment, including IL-6 and IL-12 [43]. The inverse relationship between leptin and 25(OH)D levels found in our study could be related to the fact that leptin levels are regulated by 25(OH)D. In particular, 25(OH)D may directly affect the expression of leptin, reducing its release from adipose tissue and consequently decreasing tissue inflammation through the inhibition of NF-kB signaling [38]. It has also recently been demonstrated that 25(OH)D affects brain serotonin concentrations and may control leptin levels [44]. These interactions could be relevant to neuropsychiatric disorders, such as autism, with a possible impact also on eating behavior [44].

25(OH)D, ADOS, and GI
In our sample, no baseline correlation between 25(OH)D and ADOS was observed. In the literature, it is widely debated whether 25(OH)D levels correlate with the severity of ASD, with some evidence reporting an inverse relationship between the average serum 25(OH)D level and the severity of ASD (p < 0.001) [26], and others reporting a lack of correlation [27]. Interestingly, in a recent study [8], no significant correlation was found between vitamin D levels and calcium levels or EEG abnormalities in children with ASD. Therefore, the link between 25(OH)D values and ASD severity remains a topic to be further investigated. The deficiency of 25(OH)D, which affects approximately 80% of the general population, has been linked with gut dysbiosis and inflammation [45]. In our study, the regression analysis between 25(OH)D and the GI Index 6-Items and the Total GI severity score showed a significantly negative relationship. In fact, the levels of the 6GI-Index and the Total GI-Index severity score significantly increased according to the 25(OH)D reduction (Figure 2). This result is in agreement with a previous study detecting that children with ASD and 25(OH)D deficiency experienced a significantly higher number of GI complaints compared to 25(OH)D-non-deficient children with ASD [14]. Indeed, the authors found an association between low 25(OH)D levels (≤30 ng/mL) and various GI problems, including diarrhea, constipation, pain, and bloating. Interestingly, to corroborate this result, 25(OH)D supplementation was demonstrated to improve the symptoms of GI problems in ASD patients [46].
25(OH)D and the Effects of Probiotic Supplementation
One significant result that emerged from this study is that the ASD children who showed significant improvements in ADOS scores after probiotic supplementation [47] had higher 25(OH)D levels at baseline, while all the children with severe 25(OH)D deficiency belonged to the groups with no changes or worsening in ADOS scores. Therefore, 25(OH)D seems to be positively related to the response to probiotic treatment in improving ASD severity. Instead, having sufficient 25(OH)D levels did not affect the ADOS improvement in the placebo group, where all three groups had similar mean 25(OH)D levels, reinforcing the hypothesis of a synergistic effect between 25(OH)D and probiotics in subjects having adequate baseline 25(OH)D levels. When the children were evaluated for GI symptoms, stratifying by treatment response (GI-Index worsened, GI-Index unchanged, and GI-Index improved), no significant differences in 25(OH)D values were observed among the three groups in our population. So, the negative correlation between 25(OH)D and the GI-Index could confirm that 25(OH)D deficiency or insufficiency could represent a pathological determinant for GI symptomatology, but not a crucial factor in determining the responsiveness to treatment with the probiotics. Altogether, these data may suggest that the evaluation of the 25(OH)D status before probiotic supplementation may be useful for predicting the response to treatment. In fact, in the case of inadequate levels, a combined supplementation of 25(OH)D (targeting a blood concentration of at least 30 ng/mL) and probiotics could be considered to assist the probiotic response. Notably, all the children with markedly reduced 25(OH)D (<10 ng/mL) were in the group of worsened ADOS. Conversely, the percentage of children with 25(OH)D higher than 20 ng/mL was 93% in the ADOS improvement group and 56% in the ADOS unchanged/worsened group. Indeed, evidence of synergistic health effects of co-supplementation with 25(OH)D and probiotics is emerging in other clinical settings. In this framework, a recent study has suggested that the combined administration of L. paracasei DG with an oil-based cholecalciferol supplement could contribute to the maintenance of adequate 25(OH)D serum levels in mice [48]. In addition to preclinical results, randomized controlled trials were recently conducted [49]. Abboud and coauthors, in their systematic review of randomized controlled trials (six studies were double-blind, and one single-blind), supported the synergic effects of 25(OH)D and probiotics: the conditions explored included schizophrenia, gestational diabetes, type 2 diabetes, coronary heart disease, polycystic ovarian syndrome, osteopenia, irritable bowel syndrome, and infantile colic. To the best of our knowledge, our study is the first exploring the relationship between the 25(OH)D status and the effects of probiotic supplementation in ASD. At present, no studies have been carried out in subjects with ASD utilizing the combined administration of a probiotic with 25(OH)D [50]. Nonetheless, Ghaderi and colleagues recently determined the effects of a novel combination of 25(OH)D and probiotics on metabolic and clinical symptoms in chronic schizophrenia, demonstrating beneficial effects not only on metabolic profiles, but also on the severity of psychiatric symptoms [51].
It has been shown that 25(OH)D is a factor that modifies the composition of the gut microbiota [45], indicating a potential reciprocal interaction between the gut microbiome and 25(OH)D. The synergistic effect of probiotics with 25(OH)D could be due to the effects of 25(OH)D at the gut level, involving immune cell differentiation, gut microbiota modulation, gene transcription, and gut barrier integrity [52,53]. Moreover, 25(OH)D and probiotic administration trigger a series of biochemical pathways that in turn reduce oxidative stress and inflammation and improve the antioxidant defenses implicated in brain function.

Strengths and Limitations

Although significant, R values of 0.25 or 0.34 are not high; thus, these associations need confirmation in future studies. Pharmacokinetic studies on the absorption and bioavailability of the supplements given to ASD children are also needed, to make the calculation of dosing regimens more precise (and even personalized) in the future. Moreover, although beyond the focus of the present study, in view of the paucity of data it would be interesting to compare the 25(OH)D levels of ASD children with those of typically developing (Italian) children of comparable age and gender and/or siblings. The strengths of the study include a relatively large sample size, the two-arm design with a placebo, which allows valid treatment group comparisons, the use of a battery of validated scores to assess ASD severity and GI symptoms, and the fact that patients act as their own controls, reducing the error deriving from variance between individuals.

Materials and Methods

This study was carried out according to the standards for good ethical practice and the guidelines of the Declaration of Helsinki. The study protocol was approved by the Pediatric Ethics Committee of the Tuscany Region (Approval Number: 126/2014) with a substantial amendment (Approval Number 2-13/08/2019). Written informed consent was obtained from a parent/guardian of each participant.

Participants

Eighty-five ASD preschoolers were included in a double-blind, randomized controlled trial, funded by the Italian Ministry of Health and by the Tuscany Region (grant GR-2011-02348280), on the efficacy of probiotic supplementation on GI, sensory, and core symptoms in ASD children [22]. Children were enrolled from November 2015 to February 2018 at the ASD Unit of the IRCCS Stella Maris Foundation (Pisa, Italy), a tertiary care university hospital. ASD diagnosis was made by a senior child psychiatrist with specific expertise in the clinical evaluation of ASD according to DSM-5 [1]. Exclusion criteria were brain anomalies; neurological syndromes/focal neurological signs; anamnesis of birth asphyxia, severe premature birth, or perinatal injuries; epilepsy; significant sensory impairment; diagnosis of organic GI disorder or coeliac disease; and special diets. The probiotic supplement was the De Simone Formulation, a patented mixture already approved for use in children (marketed as Vivomixx® in the EU and Visbiome® in the USA). The effects of probiotic supplementation vs. placebo on GI and ASD core symptoms have been previously published [47]. In 4 children, 25(OH)D blood levels were not assessed; these children were excluded from the analysis.
Thus, the baseline evaluation was conducted in 81 ASD children, and the response to probiotic or placebo supplementation was studied in the 63 children who completed the six-month trial (placebo: n = 32; probiotic: n = 31), as measured by the change in the ADOS score for ASD severity and the GI-Index for GI symptoms [47].

ASD Severity

To assess ASD severity, we used the Total ADOS Calibrated Severity Score (ADOS-CSS) introduced in the Autism Diagnostic Observation Schedule-Second Edition (ADOS-2). The ADOS-2 [54] is a semi-structured assessment considered the gold standard for the diagnosis of ASD, with demonstrated inter-rater reliability, test-retest reliability, and internal validity. The ADOS-CSS was created to standardize and compare ADOS-2 raw scores across different modules and ages. Calibrated scores are less influenced by the developmental functioning and demographics of the participant than raw totals and are therefore considered the best measure of the core features of ASD in preschool children [55]. The ADOS-CSS is useful for comparing assessments across time and identifying trajectories of autism severity for clinical research [56]. The ADOS-CSS ranges from 1 to 10, while raw scores range from 0 to 28, with higher scores indicating greater severity.

GI Symptoms

The presence of GI symptoms was evaluated using a modified version of the GI Severity Index (GSI) [57], splitting the subjects into two groups (GI vs. No-GI). The GSI is a 9-item score used to identify signs and symptoms of GI distress commonly reported by parents of children with ASD. The first six items (6GI-Index) evaluate specific GI symptoms (constipation, diarrhea, stool consistency, stool smell, flatulence, abdominal pain), and the additional three explore unexplained daytime irritability, nighttime awakening, and abdominal tenderness (Total GI-Index). A total score of 4 or above, with at least 3 score points from the first six items, is considered clinically significant for classifying a subject within the GI group (this rule is illustrated in the short sketch below).

Blood Sample Collection and Analysis

A fasting blood sample (3 mL) was collected in an ethylenediamine tetraacetic acid (EDTA) tube to perform the quantitative biomarker analysis. Each tube was centrifuged for 10 min at 3500 rpm, and all plasma samples were stored at −80 °C until the biohumoral analysis was performed. Cytokines were measured directly in plasma through specific immunometric tests (MILLIPLEX MAP, human magnetic bead panel, Millipore Corporation, Billerica, MA, USA) using an integrated multi-analyte detection platform (high-throughput Magpix system, Luminex xMAP technology, Luminex, Austin, TX, USA). This method allows the identification of specific biomarkers (leptin, insulin, resistin, PAI-1, MCP-1, TNF-alpha, and IL-6) with a high level of automation and throughput: magnetic beads simplify automation and high-throughput screening, providing speed and sensitivity and allowing the simultaneous quantitative detection of multiple analytes. Each sample was analyzed in duplicate, and in each experiment one sample was analyzed as a quality control. Inter-assay variability was <10%. Quantitative determination of 25(OH)D was performed by the DiaSorin "LIAISON 25-OH Vitamin D TOTAL" CLIA, a direct competitive immunochemiluminescent assay, as previously described in detail [58].
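Returning to the GI Symptoms subsection above, the classification rule can be summarized in a short sketch (a hypothetical helper written for illustration, not part of the study protocol):

```python
# Modified GSI rule: total score >= 4, with at least 3 points from the first six items.
def classify_gi(item_scores):
    """item_scores: 9 integers; items 1-6 are the specific GI symptoms (6GI-Index),
    items 7-9 are irritability, night awakening, and abdominal tenderness."""
    assert len(item_scores) == 9
    six_gi = sum(item_scores[:6])      # 6GI-Index
    total_gi = sum(item_scores)        # Total GI-Index
    return "GI" if total_gi >= 4 and six_gi >= 3 else "No-GI"

print(classify_gi([1, 1, 1, 0, 0, 0, 1, 0, 0]))  # total 4, first-six 3 -> GI
print(classify_gi([1, 0, 0, 0, 0, 0, 2, 1, 0]))  # total 4, first-six 1 -> No-GI
```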
In brief, the DiaSorin assay does not require any pretreatment of samples (minimum sample requirement: 250 µL; measuring interval: 4-150 ng/mL; turn-around time: 40 min; assay throughput: 80 tests/h). During the first incubation phase, 25(OH)D is separated from its binding protein and interacts with binding sites on the solid phase. After the second incubation with the tracer, unbound material is washed off, and a flash chemiluminescent signal is generated by adding the starter reagents and measured by a photomultiplier.

Statistical Analysis

Data are expressed as mean ± SD. Since some biomarkers (insulin, TNF-alpha, IL-6, and resistin) were not normally distributed, we applied a log-transformation before using parametric statistical tests; data were back-transformed for the visualization of results (a minimal sketch of this workflow is given at the end of this article). Statistical analysis included Student's t-test (to determine the significance of the difference between the means of two data sets), χ² tests (to determine whether there is a significant difference between expected and observed frequencies in one or more categories of a contingency table), and linear regression. Moreover, unpaired analysis of variance (ANOVA) was used to evaluate whether an overall difference among the groups exists. In addition, a multivariate analysis was carried out to measure relationships in which more than one independent variable (predictor) is related to the dependent variable. Findings with a p value < 0.05 were considered significant. StatView software (version 5.0.1; SAS Institute, Abacus Concept Inc., Berkeley, CA, USA) was used for the data analyses.

Conclusions

ASD male children may be at a higher risk of severe 25(OH)D deficiency. The 25(OH)D status is inversely correlated with GI symptomatology. Moreover, the inverse correlation between 25(OH)D and leptin suggests that the maintenance of adequate 25(OH)D levels may exert beneficial effects on the hormones regulating appetite, contributing to regular growth in ASD children. The most important preliminary finding that emerged from our study is that the beneficial response in the ADOS Total Score to 6 months of probiotic administration is related to 25(OH)D status. Therefore, it may be of value to assess 25(OH)D levels in the laboratory before starting treatment with probiotics in ASD children, and to provide vitamin D supplementation when needed, in order to reach a serum 25(OH)D target level of at least 30 ng/mL. Thus, the co-administration of 25(OH)D and probiotics, in view of their possible synergistic effect, could be considered an effective supplementation in ASD children and, as such, merits further investigation in future studies. Moreover, in addition to administration together with probiotics, evaluation and supplementation of 25(OH)D could be considered in ASD from an early age, in view of its positive role on adverse GI symptoms and leptin levels.

Data Availability Statement: The datasets generated and/or analyzed during the current study are not publicly available due to the privacy policy (they contain information that could compromise research participant privacy/consent) but are available from the corresponding author on reasonable request and with the permission of the parents of the involved children.
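As referenced in the Statistical Analysis subsection, the following is a minimal sketch (an assumed workflow, not the authors' StatView procedure; all values are hypothetical) of log-transforming a skewed biomarker, applying a parametric test, and back-transforming for reporting:

```python
# Log-transform a skewed biomarker, run Student's t-test, back-transform the means.
import numpy as np
from scipy import stats

group_a = np.array([1.2, 0.8, 3.5, 0.9, 5.1, 1.7])  # hypothetical IL-6 values, pg/mL
group_b = np.array([2.4, 6.3, 1.9, 8.8, 3.2, 4.5])

log_a, log_b = np.log(group_a), np.log(group_b)
t, p = stats.ttest_ind(log_a, log_b)  # parametric test on the log scale

# Back-transformation yields geometric means on the original scale
print(f"t={t:.2f}, p={p:.3f}, "
      f"geometric means: {np.exp(log_a.mean()):.2f} vs {np.exp(log_b.mean()):.2f}")
```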