id
stringlengths 3
9
| source
stringclasses 1
value | version
stringclasses 1
value | text
stringlengths 1.54k
298k
| added
stringdate 1993-11-25 05:05:38
2024-09-20 15:30:25
| created
stringdate 1-01-01 00:00:00
2024-07-31 00:00:00
| metadata
dict |
|---|---|---|---|---|---|---|
189692658
|
pes2o/s2orc
|
v3-fos-license
|
Analysis of Quality from Users Perspective for Develop Website
Quality of website service becomes part of branding. Through the evaluation can be found the root of the problem and then find a solution. In this research, 92.3 MQ FM Jogja radio website was evaluated for website development. The method used in this study is a WebQual Improvement, taken from previous researchers then variables adjusted to the type of website. Through focus group decision with radio managers and radio website developers, stated that the results of the study are in accordance with the conditions that occur. Where complementary relationship and usability variables affect the interest of users in accessing the website. From the results of questionnaires data distributed to loyal radio listeners and the general public, drawn into the Importance Performance Analysis (IPA) diagram, with the aim of obtaining priority services that must be addressed immediately. In quadrant I shows 2 problems, ie on hope of off-air and download pages, and radio program list page. User expectations are constrained to be realized because the website system used is not supported to add these features.
Introduction
Information becomes a daily necessity for the society that is currently present in various media both digital visual, audio, and print media. Information in the form of audio still survive and continues to grow until now, radio is an electronic device that becomes broadcast media through a certain frequency. Radio evolves like a more minimalist shape change and rechargeable batteries, besides the USB support feature that works just like an audio player with Mp3 extension. In addition, to reach radio frequencies, now can be found through smartphones and live streaming on radio websites.
Service from the website is a necessity for the user, so the quality becomes the usability parameter of a website. Website evaluation is necessary to maintain the quality of website from time to time. There are several methods to evaluate the quality of the website, but the variables of each method must be adjusted to the type of website being evaluated. In previous research conducted website analysis using WebQual for the preparation of instruments to be distributed to respondents and data obtained from respondents will be processed using Quality Function Deployment [1]. Methods to understand the effectiveness of AliexPress website localization across the country, ie using WebQual and IPA methods to analyze user perceptions and expectations [2]. The quality evaluation of the WebQual model presents the measurement process of twelve constructs, ie information that fits the needs, customized information, on-line completeness, relative advantages, easy to understand, intuitive operation, trust, response time, visual appeal, innovation, power consistent emotional appeal and imagery [3].
In this study the radio website evaluation was conducted using WebQual 4.0 method combined with WebQual 4.0 Improvement and priority analyses using IPA. Where the variables used are adjusted to the current condition of the website. Object used in this research is Radio 92.3 MQ FM Jogja website.
Related Work
In research to know the influence of WebQual free variable that is usefulness, quality of information and quality of service interaction to user satisfaction associated with website quality. The results showed that the variable interaction of usability and service have a positive and significant impact on user satisfaction. However, the variable of information quality has no significant effect on user satisfaction. Coefficient of determination shows 57.5% contribution of three independent variables can explain the variation of user satisfaction. This means that there are 42.5% more variables or factors outside of this research model including psychological factors [4]. While in research evaluating website of e-government of Yogyakarta city government using WebQual modification become E-GovQual consist of six variables, ie ease of use, trust, functionality of the interaction environment, reliability, contents and appearance of information, and citizen support. The research aimed to get feedback on the success of e-Government implementation to improve user satisfaction in accordance with the rules in the use of government website [5].
While in research about analyzing citizen Satisfaction is an important and decisive factor for the continued use of e-Government services because it can substantially impact the failure or success of e-Government projects. The main obstacle of e-government planners and practitioners in Pakistan is to know the determinants of the public satisfaction. After a review of the relevant literature, researchers colleagues formulated seven hypotheses and distinguished seven different determinants of trust, accessibility, awareness of e-services, electronic service quality, computer anxiety, customer expectations and security or privacy [6].
In a research at a Chinese hotel to find out the influence of the quality of the hotel website and eTrust against online ordering intentions using WebQual approach with variables usability, entertainment, ease to use, and complementary relationship. The results showed that entertainment and complementary relationship variables positively influence to ask customers to order online [7]. While in the research to know the interest of customers to online banking services in Jakarta, using indicators from WebQual and TAM. The results of the research indicate that ease of use, trust, anxiety computing and service quality variables influence the willingness of customers in using electronic banking [8].
In a research on organic food sales website in Malaysia, conducted quality testing website using WebQual 4.0. The results show that the quality of website sites have an indirect impact in influencing the intention of customers to buy through the website [9]. While in research conducted on e-learning using three variables of WebQual method that is, usability, quality of 3 information and quality of interaction. The study aims to determine the quality of the website, and it turns out the variable of aging has a positive effect on user satisfaction [10]. The research conducted using WebQual approach with variable interactivity, online completeness, easy of use, entertainment and trust in ecommerce website in Indonesia. The results showed that the quality of the website affect the quality of information in a stimulant [11].
Methodology
The first version of WebQual was developed as part of the results of the workshops organized by involving the students who were asked to consider the quality of the school website. The WebQual instrument is filtered through an iterative repair process using a trial questionnaire prior to deploying for a larger population. Twenty-four questions within the WebQual instrument are tested with applications within the scope of business school websites in the UK. Analysis of collected data encourages deletion of one question item. Based on reliability analysis, the remaining 23 questions are then grouped into four main dimensions, namely ease of use, experience, information, communication and integration. The qualities identified in WebQual 1.0 form the starting point for assessing the quality of information from a website in WebQual 2.0. But in the WebQual application, on B2C (Business to Consumer) websites it is clear that quality interaction perspectives are not well represented in WebQual 1.0. Related to the quality of service, especially SERVQUAL, is used to improve the quality aspects of information from WebQual with the quality of interaction. Service quality is generally defined in the best way that the service is delivered whether it is in accordance with customer expectations [12].
WebQual 2.0 development requires several significant changes to the WebQual 1.0 instrument. In order to extend the model for interaction quality, Barnes and Vidgen (2001) analyzed the WebQual SERVQUAL and made detailed comparisons between SERVQUAL and WebQual 1.0. This review succeeded in identifying redundant questions and then overlapping areas were removed, the result most of the key questions in the SERVQUAL did not match WebQual 2.0, the number of instruments with 24 questions retained ( Barnes and Vidgen, 2001). WebQual 1.0 may be powerful in terms of information quality, but less robust in terms of service interaction. Likewise for WebQual 2.0 that emphasizes the quality of interaction eliminates some quality information from WebQual 1.0. Both versions contain various qualities associated with the website as software artifacts. In a review conducted by Barnes and Vidgen (2001) found that all qualities can be categorized into three different areas, ie website quality, information quality, and quality of service interaction. The new version of WebQual 3.0 has been tested in the online auction domain [13].
In this research will be an improvement WebQual method on the questions and take the variable from the discussion of hypothesis development of WebQual variables. Variables that will be further developed, ie the variable Usability and Service Interaction Quality [13].
It aims to explore the root of the problem adapted to the object under study. Differences in the development of variables based on needs as shown in Table 1
Discussion
In determining the sample number of respondents, in this study is not determined from how many populations. This is due to unlimited or unrecorded number of radio listeners MQ 92.3 FM. Therefore, the questionnaire respondents are divided into 2 levels that each level has certain criteria, among others: 1. Active radio listener MQ 92.3 FM, as evidenced by its membership in the WhatsApp group 2. The general public in the Amikom University neighborhood area of Yogyakarta The questionnaire was distributed from December 4, 2017 to March 16, 2018 to respondents via email. Active radio listeners MQ 92.3 FM there are 157 people, and who responded to the questionnaire in this study as many as 45 people. While the general public in the area of Amikom University who were randomly asked to fill out this questionnaire netted as many as 34 people. So the total number of respondents to the questionnaire of this study were 79 people.
From the results of collecting raw data of the questionnaire, the next step is to measure the level of suitability to find out how big customers / consumers feel satisfied with the performance of the company, and how the service providers understand what customers want the services they provide. The degree of conformity is the result of comparison of the perception score with the expected score. This level of conformity will determine the order of priority services provided by the company from a sequence that fits perfectly with the non-compliance. The level of conformity <100% means that the quality of service provided is less / does not meet what is considered important by the customer. Service has not been satisfactory. In the level of compliance <100% can be explained as follows:
Importance Performance Analysis
The data obtained in Table 2 are all the importance and performance figures of each indicator included in SPSS View data, then select Graphs menu -Legacy Dialog -Scatter / Dot -Simple Scatter and select define. In column Y Axis input Importance parameter, and in column X Axis input Performance parameter, and in Casey Label column input name variable, then select OK. Then the point or indicator appears on the screen IPA diagram. To define the centerline or mean of the double-click diagram select the option on the menu, select X Axis Reference Line, the Properties screen appears, on the Properties screen of the Set to column and select Mean in the Set to and Apply fields. Then automatically the mean line of X Axis appears in the IPA diagram. So it is with Y Axis in the same way. Thus Quadrant I, II, II, and IV are formed with the corresponding arrangement of the indicator value. Based on Figure 1, this is a description each quadrant.
Focus Group Decision
Focus group decision is done with the radio website developer section to discuss the results of the research for further action. In this session found the problem of the emergence of priority variables that must be fixed.
In the E3 statement that states that users expect the download and off-air service on the website. With the current web system that is CodeIgniter, it is quite support for added download and offair feature, with the provision should enlarge space hosting. In addition there are other solutions that can be used for free and free that is using social media YouTube. Where all the videos will be stored on the YouTube server so the data becomes less privacy, but has the advantage in promoting radio. As for the download service is directed at links that have been converted from YouTube to various audio and video extensions. There are several plugins or online converters from YouTube to .Mp3 that can be used like Tampermonkey.net, youtubeinmp3s.com, ummy videodownloader. Therefore the user interface in figure 2 only becomes catalog.
Figure 2. User interface download and off-air
In the I4 statement that states that the user hopes the program list view is easy to understand, which means that during the event there is an event title update on the live streaming feature. The solution recommended by the researchers that the right of access to the live streamer feature for the broadcaster to input the name of the broadcaster, the event title every time the broadcast will take place.
Conclusion
In this research improvement method WebQual used can indicate the problem of service website based on user point of view. Where with the average level of 88.5% compliance, 93.2% complementary relationship effect on usability of 91.9%. Level percentage of the influence means significantly affect the interest of users in accessing the website. Improvement method in this research need to be tested to similar website with radio website to prove the accuracy of variable used as indicator.
|
2019-06-13T13:20:23.934Z
|
2018-12-01T00:00:00.000
|
{
"year": 2018,
"sha1": "bafbcc86852100dec76212b9749d73c73dc406e8",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1140/1/012051",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "c258d7c51d0cb33d12968b40f63492c4130cf290",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
73596129
|
pes2o/s2orc
|
v3-fos-license
|
Application of Numerical Methods in Design of Hydraulic Structures
This study was aimed to evaluate the use of numerical methods in the design of hydraulic structures and use a numerical model to validate the simulation of flow over three types of spillway which are: smooth spillway, various of step spillway, labyrinth spillway and side spillway. Numerical methods can assist in the design of hydraulic structures in the evaluation of the energy loss, calculated discharge coefficient and cavitation phenomena investigated by examining the pressure field and flow field. Designers are challenged to use numerical methods for the design of hydraulic structures due to the complexity of the specific flow field of hydraulic structures, such as the free surface flow, two-phase and multi-phase flows, turbulent flow and turbulence. In this study provide the best and efficient numerical methods for the numerical modeling of hydraulic structures, which are obtained by various researchers who have conducted research studies on numerical methods. Using the guidelines presented in this study can help designers in numerical modeling of hydraulic structures that are under all the facts and reliable models apply the results of the numerical method.Computational Fluid Dynamics (CFD) is a type of numerical model that can be used to solve problems involving fluid flow. CFD can provide a significant amount of computation time and more economical solution than a physical model. The fundamental principles for all numerical models are similar. Problems can be described, physically, by a set of partial differential equations. Then, a numerical method is used to formulate a set of algebraic equations that represent the partial differential equations.
Introduction
Dam and reservoirs are being used in many countries in various sizes.They can be constructed by many kinds of materials e.g.; earth fill, rock fill, concrete masonry, roller compacted concrete (RCC), etc. depending on availability of materials, cost, and mass stability.They can be overflowed and cause the overtopping problem if their capacities are less than the difference between inflow and outflow.The most sensitive structures are earth-fill dams, which can be destroyed by a small overtopping (Khatsuria, 2005) [26].Even the dam that made of other kinds of materials can withstand the overtopping; overflow jet would be more concern about the immediate flow downstream, indirectly.It can cause other damages to the nearby structures and cause failure to them.All dams, then, should be constructed with the high safety device to prevent the overtopping.As a result of dam safety policies, the spillway design flood are among the most conservative dam safety policies.The spillway should be designed to pass probable maximum flood (PMF) (Dubler and Grigg, 1996) [19].It is also designed to surplus the excess water or flood.Takasu and Yamaguchi (1988) [39] discussed about seven more functions on the spillway.They are; (i) maintain water in river, (ii) discharge water for utilization, (iii) maintain water level for flood control system, (iv) control floods, (v) control additional flood from upstream, (vi) release surplus water, and (vii) lower water level.Spillways have been classified, according to the most prominent feature by Khatsuria (2005) [26], as shown in This study was aimed to evaluate the use of numerical methods in the design of hydraulic structures and use a numerical model to validate the simulation of flow over three types of spillway which are: smooth spillway,various of step spillway, labyrinth spillway and side spillway.To develop the equations and charts for the preliminary design of spillways for any cases of possible discharges.In this study provide the best and efficient numerical methods for the numerical modeling of hydraulic structures, which are obtained by various researchers who have conducted research studies on numerical methods.Using the guidelines presented in this study can help designers in numerical modeling of hydraulic structures that are under all the facts and reliable models apply the results of the numerical method.Computational Fluid Dynamics (CFD) is a type of numerical model that can be used to solve problems involving fluid flow.The present study is to model the complex flow pattern of two-phase turbulence flow in spillways by using the numerical model.The appropriate turbulence model and multiphase flow model for the simulation of flow over spillways and prediction of the flow velocity with smallest deviation are established.The appropriate grid size for the simulation is also suggested based on Grid Convergence Index (GCI) in order to minimize the discretization error.The numerical model is suggested to expand the simulation results from numerical models to large scale physical models to ensure proper simulation of flow in complex multiphase flows.The analytical solution and also the results from the numerical models are used to develop the equations and charts for the preliminary design of spillways for extreme events.One of the most important advantages of stepped spillway is the energy dissipation.Then, the equations of energy dissipation from using of spillways are proposed.This present the basic theory of the study.It consists of the equations of conservation; Mass conservation equation, Momentum conservation equation, and the Finite Volume Method (FVM) for numerical study.Generally, a CFD model study consists of the following steps: i) obtaining the data of the physical model for grid development; ii) selecting or developing appropriate model method; iii) define the boundary conditions based on available field information; iv) develop the computational grids; v) calibrate and verify the model; and vi) analysis various parameters or scenarios.
Equations of conservation
In the time of development of the computational fluid dynamics, CFD, it has been one of the best tools for the prediction of flow.However, complex flow through spillway still needs more studies and researches to be understood.The description of the physical processes of flow involves a variety of computational methodologies to predict the quantities of the components and its behavior (Franz and Melching, 1997) [22].The related conservation principles are; (i) the mass conservation, and (ii) the momentum conservation.
Mass conservation equation
Mass conservation equation or continuity equation states that the mass of a closed system of substances will remain constant, regardless of the processes acting inside the system.Matter cannot be created nor destroyed, although it may be changed in form.Consider the flow model shown in Figure 2, an infinitesimally small fluid element moving with the flow.(3.3)
Momentum conservation equation
The momentum equation is a statement of Newton's Second Law and relates the sum of the forces acting on an element of fluid to its acceleration or rate of change of momentum.Newton's Second Law can be written as The Rate of change of momentum of a body is equal to the resultant force acting on the body, and takes place in the direction of the force.We can write it as equation (3.4).The moving fluid element model is sketched in more detail in Figure 3.The Finite Volume Method, FVM, is a method for representing and evaluating partial differential equations as algebraic equations.It is one of the most versatile discretization techniques used in CFD.The advantage of the finite volume method is that it is easily formulated to allow for unstructured meshes.Finite volume refers to the small volume surrounding each node point on a mesh.In this method, volume integrals in a partial differential equation that contain a divergence term are converted to surface integrals, using the divergence theorem.These terms are then evaluated as fluxes at the surfaces of each finite volume.Based on the control volume formulation of analytical fluid dynamics, the first step in the FVM is to divide the domain into a number of control volumes where the variable of interest is located at the centroid of the control volume as shown in Figure 4.The next step is to integrate the differential form of the governing equations over each control volume.Interpolation profiles are then assumed in order to describe the variation of the concerned variable between cell centroids.For each computational loop, volume is adjusted from the last value of previous grid cell as shown in Figure 5 and equation (4.17).For Finite volume method, there are many schemes for appropriate solving, they are; -First-Order upwind scheme: this method assign that properties at skin of object are equal to at center of object.It is appropriate used for grid parallel flow.-Second-Order Upwind Scheme: this method assign that properties at skin of object is averaged from two-side cells.It is appropriate used with triangle and hexagon grid cells that flow is not parallel with grid.
-QUICK Scheme: this method assign that properties of cells depend on weight of important.It is more accurate than two above-mentioned methods if it is used for cubic or hexagon meshes eddy computation.-Power-Law Scheme: this method uses the interpolation from other grids and its accuracy is equal to First-Order Upwind Scheme-A modified HRIC Scheme: this method is appropriate for VOF method.It is available for implicit and explicit computations.
Multiphase flow model
A multiphase flow can be defined as a mixture of flow which consists of more than two phases.For the flow over stepped spillway, free surface flow with high turbulence of air is of interests.Both air and water cannot be ignored from the model because of their influence on the fluid dynamic behavior.Then, in the numerical model, the multiphase flow model should be used in simulation.Two types of the multiphase flow model are used in the present study; (i) Volume of Fluid model (VOF), and (ii) Mixture multiphase flow model (MMF).
Volume of fluid model (VOF)
Volume of fluid model, VOF, which was completely reported in Hirt and Nichols (1981) [24], is based on a concept of a fractional volume and the fact that the phases are not interpenetrating.It is an interface capturing scheme for the free surface flow with the interface of each fluid is the point of focus (Nikseresht et al., 2008) [34].Each control volume can be filled with either a single fluid phase or combination phases.The volume fractions of all phases in each control volume sum to unity.In this study, there were 2 phases; air and water flow along the spillway.Due to the volume fraction of each phase in each control volume, the fields for velocity, pressure, and temperature are shared to be the same.The variables for pure water, air, or even mixture can be represented.If and are assumed to be the volume fraction of water and air, respectively, the cell density can be computed by http://www.ispacs.com/journals/cacsa/2016/cacsa-00050/International Scientific Publications and Consulting Services (5.18)Where is water density, and is air density.The other variables can be computed, instead of density, by the same volume fraction of equation (5.18).Two schemes of interface tracking are used; the standard interpolation and the geometric reconstruction schemes.The standard interpolation is used to interpolate the properties of a cell when it is completely filled with one phase.The geometric reconstruction scheme is used near the interface between phases to represent the interface between air and water.The equation for tracking the surface between two phase is The VOF and MMF models were used to deal with the multiphase fluids.The same main limitations for using both VOF and MMF are; both models cannot be used with the density-based solvers.Only the pressure based solver is allowed.Only one phase can be defined as a compressible gas.Stream wise periodic flow with specified mass flow rate cannot be modeled.However, there are 2 main differences between these models as the manner in which they handle phase interpenetration and the phase velocities.Phase velocities For VOF, which the volume fraction of each phase is known, the variables and properties are shared and represent volume-averaged values.Therefore, depending on the volume fraction, the variables and properties in a control volume can either represent only one of the phase or a mixture of all phases.The VOF solves a single set of continuity and momentum equations and tracking the volume fraction of each phase by tracking equation.In MMF, under the concept of slip velocities, each phase in a control volume can be allowed to move in different velocities.Also, other variables and properties can be different in each control volume.However, if any control volume is assumed to move at the same velocity, then MMF can be reduced to be a homogeneous multiphase model.The MMF solves the continuity and momentum equations for the mixture, and the volume fraction equation for the secondary phases, as well as algebraic expressions for the velocities if the phases are moving at different velocities.With these two differences, the initial boundary condition was set differently.The air velocity in MMF can be set at zero and then reduced to homogeneous multiphase model while the air velocity in VOF is the same as water velocity.http://www.ispacs.com/journals/cacsa/2016/cacsa-00050/International Scientific Publications and Consulting Services
Turbulence model
Among the linear turbulence models, the widely used two-equation model is based on: (1) the turbulent kinetic energy equation k and (2) the turbulent eddy dissipation ϵ, or the turbulent frequency .Five different turbulence models were chosen in the present study to simulate the flow over stepped spillways: the Standard k-ϵ, the Realizable k-ϵ, the Renormalization group k-ϵ, the Standard k- and the Shear stress transport k- model.
Numerical method
Approximate solution of a set of algebraic equations is obtained through some form of either an iterative or matrix solution.The solutions from the numerical model are mostly calibrated verified through comparisons to field observations or physical model experiments (Chanel, 2008 [10]) he two well established and widely used numerical methods are the Finite Difference and Finite Element Methods, FDM and FEM, respectively.Tabbara et al. ( 2005) [38] used the FEM to predict stepped flows for small scale experiments.In the upper part of the flume as well as in the bottom part steps were introduced along the chute such that the envelope of their tips followed the smooth spillway chute profile.Although their results were encouraging, physical or laboratory measurements are still crucial for providing reference data.Benmamar et al. (2003) [4] developed a numerical model based on the implicit FDM for the development of a twodimensional boundary layer over a steep stepped spillway.The finite volume method, which has been extensively used to model a wide range of fluid-flow problems, was originally developed as a special FDM.One of the CFD model studies in the spillway is from Kim et al. ( 2010) [27].The FLOW-3D model was used with the initial design plan of the Karian dam in Indonesia.The results showed that the flow in the approach channel was unstable.A revised plan was formulated and the appropriate amended design was examined using numerical modelling.Carvalho and Amador (2009) [7] also simulated the flow in the nonaerated region by using the FLOW-3D with the FDM.Their numerical results were compared with the physical data and found a good agreement in the non-aerated region.Another numerical method that is widely used in the simulation of flow in different forms is the Finite Volume Method, FVM.It is similar to both FDM and FEM in which their values are calculated at discrete places on mesh geometry.Its name refers the structure of its geometry which means the small volume surrounding each node point on a mesh.The surface integrals in a partial differential equation that contain a divergence term are converted to volume integrals, using the divergence theorem.These terms are then evaluated as fluxes at the surfaces of each finite volume.The FEM and FVM are Tadayon and Ramamurthy (2009) [40] investigated three different turbulence models which are; Reynolds Stress Model, RSM, Renormalized k-ϵ model, RNG k-ϵ, and standard k-ϵ model, to analysis the flow over circular spillways.The FVM was also used for the numerical simulation.Dastgheib et al. (2012) [2] carried out, with uses the FVM to simulate the free surface location and predict flow features such as velocity, pressures and complex free surface in different regimes including nappe, transition and skimming flow.Although some literature shows successful comparisons between CFD and physical model as a cost-effective and reliable tool, it still cannot be considered as a complete replacement of physical model for all hydraulic engineering projects (Li et al., 2011) [28].It still have some limitations in accurate simulation of the formation of free surface and vortices.Hence, more study would provide the confidence to use numerical model for different design purposes.Also, with the use of the finite volume method to simulate the complexity of flow, different kinds of multiphase flow and turbulence algorithms in the numerical method can be used to simulate the flow over stepped spillways.A multiphase flow can be defined as a mixture of flow which consists of more than two phases.For the flow over stepped spillway, free surface flow with high turbulence of air is of interests.2002) [11] and Cheng et al. (2006) [15] for the as multiphase flow simulation of stepped spillways.The simulated pressure profiles on the horizontal step surface were quite.similar to the physical model measurements.However, the pressure profiles on the vertical faces of each step were slightly different between the numerical and physical models.Dong and Lee (2006) [18] studied the numerical simulation of skimming flow overall mild stepped channel.All air boundaries were defined as pressure boundaries with zero pressure specified.Smooth channel flow was also simulated to compare the hydraulic characteristics with the stepped spillway overflow.[15] used a physical spillway model for validation of the MMF numerical model with a Renormalisation Group Theory k-ϵ algorithm (RNG).The Reynolds numbers for some cases in their study were relatively out of proportion with the need to meet prototype conditions.Some limitations with Froude similitude at low Reynolds number therefore emphasized that numerical results should be validated with large scale models.The turbulence model is also used in order to simulate the turbulence flow between two phases of fluid.Among the linear turbulence models, the widely used two-equation model is based on: [16].They modified the non-linear model to simulate the stepped spillways from Chanson and Toombes (2002) [9] and Boes and Hager (2003b) [5].The Rl k-ϵ showed the most satisfactory results amongst the linear turbulence models.The modified non-linear model also showed higher accuracy than other non-linear models.Naderi Rad (2007) [29], compared energy dissipation of stepped spillways and ogee spillway by using volume of fluid method.They show that ratio of energy dissipation to initial energy in stepped spillway is increased 9.80% more than ogee spillway.Bahrami (2008) [3] investigates effective factors in aeration and the role of aerators in preventing cavitation in dam spillways.In this study, to simulate flow from shoot and determining air concentration parameter in bottom of chute, Flow-3D software was used.The main objective of this research is to find ways to prevent cavitation by aeration.Esmaeily (2010) [21] modeled experimentally and numerically flow pattern of cylindrical spillway and simulates it by using Fluent Software.The results of his study show a good coordination between experimental and Fluent results in flow pattern of spillway.Dehdar (2011) [20] studies cavitation of flip bucket in Bala-Rood dam spillway.In this research, the spillway of Bala-Rood was modeled by Flow-3D and hydraulic flow was simulated.Daneshfaraz et al (2013) studies stepped fast water formula of Siah-Bishe spillway is stimulated via Flow-3d software and compared with physical model.This software is an accurate tool in analyzing unsteady 3D flow problems with free surface and complex geometry.It solves problems by solving conservation of mass rmulas, momentum and energy viafinite volume method.In this study, pressure parameter at the beginning, end and along the spillway is studied and negative pressure is observed in some parts.This pressure can make cavitation.The study shows the results of correspondence between physical model and finite volume method modeled by Flow-3d.http://www.ispacs.com/journals/cacsa/2016/cacsa-00050/International Scientific Publications and Consulting Services Naderi Rad et al (2004Rad et al ( -2014) ) studies evaluate energy dissipation in various types of stepped spillways; inclined steps, simple steps, cup steps and steps with end sills by taking into accounts parameters such as; characteristic height of step, flow discharge per unit width and overall slope of steps stepped spillway by numerical method.In this research the governing equations are solved by finite volume discretization method and the standard k-ϵ model is used for estimating the turbulence flow.In this research the structured grid is used to accommodate the well-defined boundaries and the volume of fluid (VOF) method is introduced to solve the complex free-surface problem.Results of the numerical method compare well with the experimental results of other researchers.Tabbare et.al.In 2005 conducted studies on stepped spillway by using finite element analysis.They applied k-ϵ model to account for turbulence.Tabbara et.al. in (2005) [38] took velocity boundary condition to study energy dissipation over stepped spillway and ogee spillway for initial boundary condition they applied a flow profile assuming that the profile reduced the solution time.
To achieve their studies they used ADINA software which used finite element to solve the problems.Crookston at al (2010) [17] studies Physical and numerical modelling were used to investigate the hydraulic performance of labyrinth weirs operating under high headwater ratios.Physical modelling was conducted in a rectangular flume in a laboratory setting.Numerical modelling was conducted with commercially available computational flow dynamic (CFD) software (Flow-3D).Preliminary results indicate that the CFD model can accurately predict the labyrinth weir head-discharge relationship obtained from the physical model, including upstream heads (relative to the crest) that exceed the weir height.Extensive physical modelling of labyrinth spillways, primarily flume studies, has been performed, resulting in the development of several design methods.Two of the more common methods used in the U.S.A., referred to herein as the Lux and Tullis methods, are compared for a given labyrinth geometry.Results of a RANS Computational Fluid Dynamics (CFD) model, using commercially available software (Flow-3D), for the same configuration is shown to give results comparable to those obtained using these design methods.Non-standard approach conditions and geometries are modelled using physical and numerical methods and the applicability of the Lux and Tullis methods for these conditions is evaluated.Paxson et al (2006) [36].Eghbalzadeh et al(2012) studies for investigation of Mixture and VOF method's ability, Air-Entrainment in skimming flow over the stepped spillways is simulated by using fluent software.The numerical simulation results of free surface, velocity components and air concentration in water and circumstance of air entry into the water have been compared with the experimental results.It was found, downstream of the inception point of free-surface aeration where rapid free-surface aeration is observed, free surface is simulated better by Mixture method.Nikseresht et al (2012) [35] studies the two-phase flow over two types of step-pool spillway was investigated using two-phase schemes (Volume Of Fluid (VOF) and mixture) and various turbulence modeling.Numerical simulation of two-phase flow was carried out on two types of step-pool spillway with various slopes.Comparison of the energy dissipation rates and flow field variables of the present simulation with those of experimental models is presented.Results show that the mixture model with the Reynolds Stress turbulence Model (RSM) is suitable for simulation of two-phase flow over spillways.Naderi Rad et al (2008) studies of the probability of cavitation in different types of stepped spillway i.e. simple, inclined steps and steps with end sills parameters such as the number of steps (N), height of steps (S), discharge per unit width of spillway (q), increment of step's height (m), slope of the spillway (α) and even flow characteristics into account and using numerical method.In this research the impacts of the above parameters on cavitation in different stepped spillway were mainly studied.Negative pressures were considered as the primary cause of cavitation.In order to achieve the numerical model study of stepped spillway, finite volume method was employed to account for free surface, the volume of fluid (VOF) technique was used the standard k-ϵ turbulent model was chosen.To get the parameters, power law method was used.http://www.ispacs.com/journals/cacsa/2016/cacsa-00050/International Scientific Publications and Consulting Services
Conclusions
Numerical modelling using Flow-3D and Fluent yields results similar to the physical model for a Hydraulic structures.It can be concluded, from the numerical results, that the numerical model can be used to model the complex flow pattern of two-phase turbulence flow in spillways.The Finite Volume Method, FVM, is also found to be the numerical method that can be used to simulate the complex flow.The Volume of Fluid, VOF, and the Realizable k-ϵ models are chosen as the multiphase flow model and turbulence model, respectively, that can simulate the flow from the physical model better than the other models.The flow initiation at the inlet is one of the locations that VOF shows the better simulation than MMF.Due to the employment of VOF model to determine free-surface it takes relatively large amount of time to reach a stable flow situation over steps.The existence of eddy and turbidity on steps cause the run-time to increase.VOF model can be used for all flow type modeling and, hence, it obviates the determine flow type.The discrepancy between numerical and experimental approaches are very close.
Figure 2 :
Figure 2: Definition sketch for flow model
Figure 3 :
Figure 3: Moving fluid element model for the x component
(3. 7 )
The component of acceleration in the x direction, equation (3.3) in the x direction can be written as equation (3.8).(3.8) Equation (3.9) is the rearranged form of the equality of equations (3.7) and (3.8) (3.9)From the substantial derivative as mentioned above, we get equation (3.10) (3.10) (3.11)The terms in the bracket are the form of mass conservation equation that equal to zero Thus equation (3.11) is reduced to equation (3.12).(3.12) Substitute equation (3.8) into equation (3.9), then it becomes equation (3.13) (3.13)Similarly, the y and z components can be obtained as http://www.ispacs.com/journals/cacsa/2016/cacsa-00050/International Scientific Publications and Consulting Services
5 . 2 .
(5.19) Firstly, the position of the linear interface relative to the center of each partially-filled cell is calculated.Then, the adverting amount of fluid through each face is calculated Using the computed linear interface representation and information about the normal and tangential velocity distribution on the face.Finally, the balances of fluxes, from the calculation during the previous step, are used to calculate the volume fraction.The continuity equation for water, as shown in equation (5.20), is used.(5.20)The momentum equation, equation (5.20), in the xi-direction is dependent on the volume fractions of all phases through the density, , and molecular viscosity,.Mixture multiphase flow model (MMF) Mixture multiphase flow model, MMF, used in this study, was proposed by Johansen et al. (1990) [25].It is a simplified multiphase model that can be used where the phases move at different velocities.It can model n phases by the continuity and momentum equations for the mixture, and the volume fraction equation for the secondary phases.The continuity equation for the mixture is (5.21) Both air and water cannot be ignored from the model because of their influence on the fluid dynamic behavior.Then, in the numerical model, the multiphase flow model should be used in simulation.Two types of the multiphase flow model are used in the present study; (i) Volume of http://www.ispacs.com/journals/cacsa/2016/cacsa-00050/International Scientific Publications and Consulting Services Fluid model (VOF), and (ii) Mixture multiphase flow model (MMF).The Volume of Fluid model, VOF, with an unstructured grid was used by Chen et al. ( Qian et al. (2009) [37] used a mixture multiphase flow model, MMF, to simulate flows overall stepped spillway with various kinds of turbulence models: (1) the realizable k-ϵ model; (2) the shear stress transport SST k-ϵ model; (3) the VOF model; and (4) the large eddy simulation LES model.The realizable k-ϵ model showed good performance for the simulation of flows involving rotation, boundary layer and recirculation.Cheng et al. (2006) (1) the turbulent kinetic energy equation k and (2) the turbulent eddy dissipation ϵ, or the turbulent frequency.Five different turbulence models were chosen in the present study to simulate the flow over stepped spillways: the Standard k-ϵ, the Realisable k-ϵ, the Renormalisation group k-ϵ, the Standard k-ϵ and the Shear stress transport k-ϵ model.Chen et al. (2002) [11] used a standard k-ϵ model to simulate the flow.Cheng et al. (2006) [15] used a mixture model to reproduce the flow over a stepped spillway, including also the interaction between entrained air and cavity recirculation in the flow, velocity distribution and the pressure profiles on the step surface.The Renormalisation group k-ϵ model (RNG k-ϵ) was chosen and their numerical results successfully reproduced the flow over the stepped spillway of the physical model.The results were helpful for understanding the rates of energy dissipation.Tongkratoke et al. (2009) used other turbulence models: a linear, the LES and the non-linear model of Craft et al. (1996)
|
2018-12-20T23:43:56.385Z
|
2016-01-01T00:00:00.000
|
{
"year": 2016,
"sha1": "48af3980d358fe26ff4084f7364c8e33e658256d",
"oa_license": null,
"oa_url": "https://doi.org/10.5899/2016/cacsa-00050",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "48af3980d358fe26ff4084f7364c8e33e658256d",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
117514270
|
pes2o/s2orc
|
v3-fos-license
|
c-axis Josephson Tunneling in Twinned YBCO Crystals
Josephson tunneling between YBCO and Pb with the current flowing along the c-axis of the YBCO is persumed to come from an s-wave component of the superconductivity of the YBCO. Experiments on multi-twin samples are not entirely consistent with this hypothesis. The sign change of the s-wave order parameter across the N_T twin boundaries should give cancelations, resulting in a small $(\sqrt{N})$ tunneling current. The actual current is larger than this. We present a theory of this unexpectedly large current based upon a surface effect: disorder-induced supression of the d-wave component at the (001) surface leads to s-wave coherence across the twin boundaries and a non-random tunneling current. We solve the case of an ordered array of d+s and d-s twins, and estimate that the twin size at which s-wave surface coherence occurs is consistent with typical sizes observed in experiments. In this picture, there is a phase difference of $\pi/2$ between different surfaces of the material. We propose a corner junction experiment to test this picture.
I. INTRODUCTION
Understanding the nature of the order parameter is one of the main challenges in the theory of high-T c superconductivity. One of the most fundamental issues is the competition between s-wave and d-wave. d-wave is definitely the rule for high-T c systems, yet there is strong evidence that the electron-doped NdCeCuO material is s-wave. At a microscopic level, this suggests that the mean-field pairing interaction has two eigenvalues which vie for dominance. Understanding this competition would provide insight into the pairing mechanism.
In this regard YBa 2 Cu 3 O 7−δ (YBCO) is of particular interest. In other systems, the square symmetry forces the ordering to be pure d-wave or pure s-wave. The presence of one most likely prevents the emergence of the other because of repulsive terms in the free energy, and the competition has a clear winner. In contrast, YBCO has orthorhombic symmetry. This makes it inevitable that d-wave and s-wave are always mixed [1]. Josephson tunneling experiments with current flowing mainly in the a-b plane [2] have made it clear that the dominant component is d-wave, but a substantial body of work has also demonstrated the existence of Josephson tunneling along the c-axis from YBCO to a Pb electrode, indicative of an s-wave component [3] [4] [5] [6]. It is to be hoped that these latter experiments, if carefully analyzed, can tell us about the strength of the s-wave admixture in the order parameter.
We shall first review the experimental and theoretical situation concerning c-axis tunneling, concluding that there are major theoretical puzzles still to be resolved. Then we shall present a new model of the phenomena which we argue is in agreement with the data as they stand, and how to test the model more thoroughly. c-axis tunneling from YBCO to Pb was first observed in twinned crystals etched wth Br [3], with an I c R n product of as much as 10 % of the known gap of about 30 meV. This strongly suggested the presence of an swave component of the superconductivity of YBCO, as a pure d-wave current would average to zero over the Fermi surface. However, another possibility was that the current was due to second order tunneling of the d-wave component [7]. This hypothesis predicts the presence of Shapiro steps in the conductivity in units of hf /4e, where f is the frequency of the incident radiation. This was ruled out in subsequent microwave experiments [6] Finally, the question of tunneling through step walls at the surface arises, particularly if it is deeply etched. This would be a process in which the current actually flows in the a − b direction. However, the fabrication of a − b junctions [4] and the observation of tunneling in situ without etching [8] appears to have laid this possibility to rest. The presence of a nonzero s-wave component in YBCO must now be accepted.
Is it reasonable to accept the 10% estimate of s to d which comes from the I c R n product at face value? Clearly not, for the following reason. A twinned sample should have a relative population of twins of the two possible orientations given by statistical considerations. The d-wave component must remain coherent through the sample, as is shown by the corner junction experiments [2]. Because the change in orientation reverses the relative sign of s and d we should have roughly equal areas of d + s and d − s superconductivity in the sample, in which case the net current should be zero.
More precisely, the net current should be proportional to to I c √ N T , where I c is the critical current of a single twin and N T is the total number of twins. If we accept this argument, then the actual proportion of s to d would be higher than 10%. This would move the nodes in the gap well away from the diagonal in the Brillouin zone. This would be inconsistent with tricrystal experiments [9] Furthermore, comparison of Josephson currents in single crystals to twinned crystals show similar R n values and I c values which range from 0.5 to 1.6 mA for single crystals and from 0.1 to 0.9 mA for twinned samples [4]. These numbers are subject to the objection that one cannot be sure that the tunneling matrix elements are not extremely sensitive to the sample preparation method. Nevertheless, in view of the fact that R n does not vary wildly from sample to sample, they suggest that a purely statistical analysis of twin populations with a resulting small imbalance of d + s and d − s is not a viable explanation of the data.
The dilemma was deepened by experiments on crystals with much larger twins, large enough so that junctions could be formed which straddled either one or even zero twin boundaries [5]. These showed that the direction of current definitely did change sign across the twin boundary, a fact which can be established unambiguously by investigating the current as a function of field in the plane of the junction. This observation was consistent in all eight samples studied. Also, no such sign changes were observed in the absence of a boundary. These experiments therefore clearly confirm the mechanism of an s-wave component controlled by the orthorhombic distortion, without offering any explanation of the large current in heavily twinned samples. One further observation in these experiments may offer a clue, however. The current at zero appplied field in single-boundary samples was consistently higher than calculated by looking at the relative sizes of the two twins. We will return to this point below.
Summarizing the experiments, we may say that an swave component which changes sign across boundaries is clearly present. If it always changes sign, then we cannot explain the data on twinned samples using purely statistical arguments. One possibility is that the twin populations are not equally likely. For example, if the twinning takes place under uniaxial stress, then one orientation would be favored. Experiments which correlate microstructure with Josephson current are needed to rule this out [4]. However, given the size of the Josephson effect in twinned samples, it seems to us that this explanation is somewhat implausible.
The most detailed theory of c-axis tunneling proposed to date is that offered by Sigrist et al. [10]. Their picture involves no net tunneling from the twins themselves. A time-reversal-breaking state at the twin boundary is predicted which results in a net Josephson current coming from the twin boundaries. This would give a Josephson current which is proportional to the number of boundaries for a fixed surface area. This is not observed, though again one must keep in mind that different samples must be compared to make any such statement, and variations in important microscopic parameters canot be controlled in such comparisons. In addition, however, the theory predicts a current which has maximum asym-metry (as a function of in-plane angle) when the applied magnetic field is parallel to the boundary. This is an experiment in which the unknown matrix elements are held fixed. This prediction is in conflict with the experimental observations, which are symmetric at parallel orientation [6].
A quite different proposal was made by Xu et al. [11]. These authors postulate a bulk d + is state. In this theory, however, the s-component does not change sign across the boundary, which does not agree with the measurements on single-boundary samples in a parallel field.
We present an alternative explanation in which the nonzero tunneling current is the result of a surface effect. YBCO is notorious in photoemission experiments for not showing a gap. This proposal is inspired by the fact that photoemission experiments (with resolutions of order 10meV or less) have also never succeeded in seeing a gap at the (001) surface in this material (in contrast to Bi 2 Sr 2 CaCu 2 O 8+x ). This shows that the magnitude of the gap at this particular surface is much reduced. Furthermore, if this reduction is due to disorder, such as surface scattering, one would expect that the d-wave component is relatively much more suppressed than any s-wave admixture. A similar suppression could result from an oxygen vacancy concentration gradient. This suppression of the d-wave component of the order parameter as we approach the (001) surface of the YBCO in the context of c-axis tunneling is of two central hypotheses of our model and was first suggested by Bahcall [12]. The second crucial ingredient is new: the d-wave surface suppression results in a coherent s-wave surface layer and hence an enhanced Josephson tunneling current in highly twinned samples without a very large admixture of s-wave in the bulk.
II. DOUBLE TWIN MODEL
Twinned samples are disordered on the µm scale, the twin boundaries running predominantly along the diagonal of the a-b plane. It is reasonable then to approximate the disordered sample by an array of straight twin boundaries running across the entire sample, which is considered to be semi-infinite. We concern ourselves in this paper with the ordered case in which all twins are of the same width and alternate between d+s and d-s. In real samples the twins have varying widths, but we have verified numerically that the basic results are unaffected by neglecting the disorder in the widths. The solution of the ordered model should be periodic with a period of two twins. Therefore, we solve the case of two twins with periodic boundary conditions. The twin boundary occupies the half plane defined by x = 0 and z ≤ 0. The plane z = 0 is the (001) surface of the YBCO sample. The model is illustrated schematically in Fig. 1. We shall write the bulk free energy density in terms of the two order parameters Ψ d and Ψ s . Spatial variation is allowed only along the x-direction, (normal to the twin boundary), and along the z-direction (normal to the surface). The free energy density is given by: The β sd term is the s-d repulsion mentioned in the introduction. We will neglect it in the calculations and have included it here only in order to stress that a large positive β sd suppresses all s-d mixing in the absence of the bilinear α sd term. If this term is present, as it is here because of the orthorhombic distortion, then the size of the s admixture is controlled by α sd /α d .
We must also include the free energy of the twin boundaries. Any x axis variation of Ψ s and Ψ d will take place within a distance of the order of a coherence length about the twin boundary. Since ξ ab ≈ 20A is very small compared to the average twin width (0.1 to 10µm) we conclude that the detailed structure of the twin boundary is not very important. We will assume a very thin boundary and thus take α sd to be piecewise constant. This is in direct contrast to the Sigrist et al. model in which the current comes from the twin boundaries. In our model the current comes from the twins. We therefore approximate the free energy of the twin boundary by a Josephson-type coupling: where J s and J d > 0. We can also now drop the x axis gradient terms in the bulk free energy. Any x axis variation in the order parameters takes place near the twin boundary and has been included in the boundary energy. The problem has been reduced to 2 one-dimensional twins which are Josephson coupled. However, only one twin is actually required. As the surface of the YBCO is approached, the magnitude of Ψ d and Ψ s should vary in exactly the same way in both the d + s and d − s twins. Only the phases φ s and φ d are different. But while the phases differ between twins, they are not entirely independent. We set φ d = φ s = 0 in the bulk of the d + s twin, and φ d = 0, φ s = π in the bulk of the d−s twin. As the (001) surface is approached, variation in φ + s and φ − s should be symmetric about π/2, while φ + d and φ − d will be symmetric about 0. This allows the boundary energy to be rewritten entirely in terms of the phases in the d + s twin.
φ s and φ d in the d − s twin can be deduced immediately, and the problem is now entirely one-dimensional.
III. SOLUTION
We will solve for the order parameters in the d+s twin. The solution for the d − s twin follows immediately. Our one-dimensional free energy density is: where w is the width of a single twin. Performing the usual minimizations, we get: and The analogous s-wave equations are: and In general, these equations must be solved numerically, but it is instructive to first consider the limit K dz = K ds = 0, which may be obtained analytically. If we consider Eqns. 6 and 8 we see that: φ s and φ d are between 0 and π/2. There are only the two obvious solutions: φ d = φ s = 0 or π/2. The particular solution which minimizes the free energy is dependent upon the relative strengths of the s and d intertwin Josephson couplings, i.e. , the ratio If R > 0 then φ d = −π/2 and φ s = π/2. The magnitudes are obtained from the coupled set of equations: where we have assumed that the intertwin coupling has little effect on the magnitude of the order parameters, that is J s << w|α s | where w is the twin width. The exact form of |Ψ s | and |Ψ d | will depend on the α d (z) chosen.
The main effect of finite gradient terms K s,d |∂ z Ψ s,d | 2 in the free energy is to smooth out the variation in the order parameters as the surface is approached. We expect the order parameter magnitudes to be only slightly affected by the introduction of the gradient terms. Variation in |Ψ d | and |Ψ s | should depend predominantly on α d (z), since K d << |α d | etc. The effect on the phases is more dramatic. For relatively narrow twins, φ s and φ d now undergo a smooth transition from φ s = φ d = 0 in the bulk of the d + s twin to φ s = φ d = π/2 at the surface. In the d − s twin, φ s changes from π to π/2 at the surface and φ d from 0 to −π/2. The order parameter magnitudes and phases for a model α d (z) are shown in Fig. 2. The degree of smoothing depends upon the strength of the gradient term versus that of the coupling across the twin boundary. The dominant factor in this competition between the gradient and the intertwin coupling energy is the twin width. For very wide twins, the change in surface phase is diminished and may be eliminated altogether.
The maximum c-axis Josephson current is given by: where A is the junction area and φ P b has been chosen to yield the maximum Josephson current. For very large twins it is not energetically favorable for the phase change to occur. The s-wave phase at the surface alternates between 0 and π across twin boundaries and no net Josephson current flows. As the twins become narrower, a threshold is reached where the s-wave phases start to shift towards π/2 at the surface. The s-wave surface phase alternates between φ d+s s and φ d−s s = π − φ d+s s . Some Josephson coupling is now possible. For very narrow twins φ s is coherent across the entire surface of the crystal, and the maximum Josephson current flows. The current saturates, and further reduction of twin size has no effect on the current. This is illustrated in Fig. 3. The saturation is one important phenomenological difference between our model and that of Sigrist et al..
We want an estimate of the average twin width at which s wave surface coherence begins in terms of experimentally measurable quantities. Roughly speaking, this threshold will occur when the strength of the s-wave coupling between twins is equal to the gradient energy involved in rotating the s-wave phase by π/2 at the surface. We will assume an very simple model with a surface layer of depth s in which |Ψ d | = |Ψ 0 d | for z < −s and |Ψ d | = 0 for −s < z < 0. |Ψ s | is assumed constant for all z.
The first task is to get some idea of the strength of s-wave coupling across the twin boundary. If we consider the situation far from the surface we may take |Ψ d (z = −∞)| to be large and fixed. An effective free energy may then be written down for Ψ s and an Euler-Lagrange equation for the variation in Ψ s with respect to x derived.
If we assume a step function boundary where α sd (x) = −sgn(x)α 0 sd then we have the solution where Ψ 0 s is the bulk value. The result for the free energy per unit area of the twin boundary is: The c-axis gradient energy is also required and is roughly where w is the twin width. Noting that K s /α s = ξ 2 c and setting F g = F b we obtain: For a surface layer with a depth of 100Å (about 8 unit cells) then we obtain a twin width of approximately 1µm. We emphasize that this is merely an order of magnitude estimate. In addition, it is not clear exactly how deep such a surface layer should be. However, the resulting twin width is not unreasonable. A highly twinned sample of 0.5mm may have as more than 10 3 twins resulting in an average twin width of a few tenths of microns. Thus while we expect no net Josephson current in a lightly twinned sample, our model predicts the net Josphson current observed in more heavily twinned crystals.
IV. PROPOSED EXPERIMENT
Our model predicts a nonzero Josephson current resulting from a surface effect. For samples with relatively large twins, we expect a d+s d-s alternation between twins at the surface and no net Josephson current. This explains why experiments on two twin crystals show a sign change in the Josephson coupling to Pb across the twin boundary [5]. In a sample with many smaller twins, however, the coupling between twins wins out and a coherent s-wave surface layer results. We expect this to take place in samples where the average twin width is less than a few micrometers. The s-wave surface layer is π/2 out of phase with the bulk d-wave phase.
We emphasize this fact because the π/2 phase shift is experimentally verifiable. A YBCO-Pb corner junction type experiment with one junction on the (100) surface and the other on the (001) surface of a highly twinned YBCO sample should be able to detect this π/2 phase shift, as was previously suggested by Sigrist et al. [10]. We give a schematic diagram of the proposed experimental configuration in Fig. 4. The current maximum as a function of field will be shifted by a quarter of a flux quantum. The Josephson coupling to the Pb at the (100) junction is predominantly due to the YBCO d-wave component since d-wave suppression is not expected at this surface. Since the c-axis Josephson coupling results from the smaller s-wave component, it is much weaker than the a-axis coupling. The (001) junction should therefore have a much larger area than the (100) junction in order to minimize any DC offset of the interference pattern.
V. CONCLUSION
The present theory can reconcile the puzzles mentioned at the outset. The suprprisingly large value of I c R n is ascribed to the partial, coherence of the s-wave component at the surface. The fact this coherence is only partial gives a reasonable account of the overall differences between single crystal and twinned samples. The fact that single-boundary junctions always show a change in sign of the s component is also consistent: in this case the twins are larger. These experiments also show excess current at zero applied magnetic field. This would be consistent with some partial coherence of the s component across the boundary, as the larger (stronger) of the two twins appears to control the weaker one. Hence we believe that the theory can account for all observations. The experiment of the previous section would be a critical test of the theory. Experiments in which the relative twin populations are precisely controlled would serve to rule out the alternative explanation in which the current is due to accidental anisotropy introduced in the growth process.
One important qualitative conclusion about the underlying physics of the bulk can be drawn from this picture: s-wave competes with d-wave in YBCO. If our model is correct, then the naive estimate of 10% admixture of s-wave as a proportion of d-wave remains roughly correct. Expressed in the language of Eq. 1, we have that |Ψ s |/|Ψ d | ∽ α sd /(α s − α d ) ∽ 0.1 at low temperatures. If s-wave were very strongly suppressed by a large positive α s , it would not be so easily induced by the lattice distortion.
The present theory would predict that only those materials with orthorhombic distortion should show c-axis tunneling. Recently, c-axis Josephson tunnneling between Ba 2 Sr 2 CaCu 2 O 8+x and P b [13] has been observed in spite of the absence of an orthorhombic distortion in this material. However, due to the fact that I c R n ∼ 1µeV , orders of magnitude less than the gap value, we believe that this interesting effect is physically different from that seen in the YBCO experiment.
We would like to thank J. Betouras for many helpful discussions. This work was supported by the NSF under the Materials Theory Program (DMR-9704972) and under the Materials Research Science and Engineering Center Program, (DMR-96-32527).
|
2019-04-14T02:19:39.942Z
|
1999-07-02T00:00:00.000
|
{
"year": 1999,
"sha1": "26469221b001b9249d32dee9811c6c7025349927",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9907021",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "26469221b001b9249d32dee9811c6c7025349927",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Chemistry",
"Physics"
]
}
|
247420106
|
pes2o/s2orc
|
v3-fos-license
|
The Role of Women in the Church of Pentecost: A Case Study of the Kwadaso Area-Kumasi, Ghana
This article is an investigation of the role of women in the Kwadaso Area, Kumasi in the Ashanti Region of the Church of Pentecost (CoP) in Ghana. It is stimulated by the realisation that while many of the Mainline Churches, as well as the African Independent Churches and Charismatic Churches have created avenues for women to play key roles and ordaining some as ministers at various levels, the CoP does not ordain female ministers.This entrenched position of the CoP has made its leadership (pastoral responsibilities) a male-dominated one in the context of not allowing females to become either District Pastors or Area Heads or to be called into the full-time. This action of the church is similar to the patriarchal structures of the Jewish community. This study used both primary and secondary sources in the collection of data. The data collection consisted of interviews, questionnaires and observations to assess whether the CoP has cultural challenges when it comes to the appointment of women into leadership positions. It was observed that, the highest leadership position a woman would be allowed to occupy in the CoP was a Deaconess. Also, the CoP has not officially opened up to discussions on the need to create gender equality when it comes to the appointment of members into leadership positions. The study recommended that the CoP should open up to discussions on the issue since the status of women. Again, the CoP leadership should re-look its Constitution with regards to enrolling only males into the ordained ministry. This study adds to the existing literature on women’s empowerment in the African Church.
INTRODUCTION
Over the years it can be observed that leadership positions in the Church of Pentecost (CoP) have been maledominated and this could be attributed to the cultural perception concerning women in Ghana. 1 It is assumed that women should not take some leadership positions in Ghana. This cultural understanding has also been imbibed by the CoP and as a result, it has relegated women to the background when it comes to appointments to avoid being ostracised. Such a notion should be discouraged outright since it kills creativity and keeps women's ideas in the dark. In the light of the above, women should be allowed to express their views both in society and the church respectively. 13 From Oduyoye and Boachie's perspective, the full potential of women has not been allowed to be expressed due to cultural limitations and this has extended into the Church.
This article aims to assess why the CoP is reluctant to admit women into leadership positions. The study looks at the constitution of the CoP with emphasis on the role of women and what it says about women in leadership. It also highlights the history of women in the CoP and the challenges they face in the quest to take up leadership positions. An investigation of the role of women in the CoP, Kwadaso Area 14 , Kumasi, Ghana would serve as the basis for this research.
THE ROLE OF WOMEN IN THE CHURCH OF PENTECOST
The constitution of the CoP states that there shall be a Women's Ministry which shall consist of all the women in the Local Assemblies. ''It shall hold meetings at least once a week and has various assigned functions which include praying for church growth, organizing seminars, workshops, lectures and symposia in all aspects of life including: • Marriage enrichment, wives' responsibilities at home, child welfare, care and education, • Business management and techniques, • The teaching of basic principles of law relating to the family e.g. succession, marriage, etc, • To promote the welfare of widows, orphans the needy, • To perform such other functions as the General Council or the Executive Council may assign it amongst others.'' 15
Historical Developments of the Women's Ministry in the CoP
From the inception of the CoP, it was felt that the women in the church should be organized for prayers while assisting their husbands and helping the sisterhood to be useful citizens in both church and public life. 16 This later led to the formation of the Women's Movement in 1945 under the auspices of T. Adam Mckeown-the brother of James Mckeown. 17 The movement absorbed all other women in the church. The women became active in prayers and Bible studies, Evangelistic outreaches. They were also given lessons in various vocations like sewing, cooking and housekeeping. 18 Emmanuel Larbi explains that these activities were initiated by the women themselves. 19 It is not clear how long the training in the vocational training programme was available to the women because it soon gave way to their participation in various spiritual exercises of the church. The women of the Church are therefore currently well known for their organization of prayer meetings, evangelistic campaigns and conventions.
Women's Movement
In 1993, the Women's Movement was created and it was ironically led by a man and his deputies who happened to be women. In that same year, the Women's Directorate was established by the church and it finally had women been appointed as Directors. This change was actualized in 1994 at an extraordinary meeting held at Kwame Nkrumah University of Science and Technology (KNUST), Kumasi between 24 th -27 th March, 1994. 20 The Constitution of the CoP, Article 20, sub-section 20:3.1 provides the members of the Women's Movement/Executive committee, with a man as the leader/patron. 21 The author shares Quist's submission that the patron being male suggests that women were thought of as not having the capacity to make meaningful submissions and decisions. 22 This proves the fact the CoP is a patriarchal church in outlook and it is evidenced in the fact that men and women did not even sit together let alone dance together in the past. 23 Wiredu however, states that currently the situation has improved and as a result, men and women can sit together. 24 Despite these seeming 'challenges' women still enjoy full participation in the life of the church. 25 The Women's Movement is one of the major strengths of the CoP. It has been in existence since the establishment of the Church. Sophia Mckeown led the local women and made provisions for them. Women of repute such as Christiana Obo, Eunice Addison, Esme Siriboe, Perpetual Owusu and Beatrice Evelyn Kwaffo can be mentioned as pioneers of the Women's Movement of the Church. 26 On the matter of the operation of spiritual gifts such as healing and miracles, in the church, any group of people can lead and participate. It is noteworthy women have been given the opportunity to exercise their spiritual gifts without any limitation. Maame Dede, a deaconess is an example. She is reported to have been active in the 1960s in the areas of healing, word of knowledge, prophecy and discernment of spirits. Also, Grace Mensah was once the prayer centre leader at Edumfa. 27 Women can be found in all sectors of the church and often interpret when preachers are not familiar with the mother tongue of a particular locality. At certain points, they prophesy as well. 28
Gender Department of PENTSOS
The Constitution of the Church also established a unit to promote the social mission of the Church, which led to the creation of Pentecost Social Services hereinafter called "PENTSOS". 29 The Gender Department was tasked to implement this unit. 30 It has since been registered as a Non-Governmental Organization (NGO) with the focus of initiating, promoting, developing and managing social services and projects of the Church. 31 The department collaborates closely with the Women's Movement to address issues that affect the women and the needy in the church. 32
Widow's Ministry
The Widow's Ministry was initiated by Betty Ayagiba, a widow. Considering the ordeal she had to endure as a widow, she set out to empower widows to enable them to fend for themselves. Membership is also open to the public and by so doing, many widows became members of the COP. They finally became agents of church growth through their testimonies. Later on, the church commenced payment for Ayagiba's services by putting her on the payroll. She then went on to collaborate with the two other women's movements to work effectively. She currently works with 35 widow groups in the northern part of Ghana.
Religio-Cultural Challenges of Women in the CoP in Ghana
Wiredu explains that, at the 2021 executive zoom meeting held in May, it was decided that, the women's director and her deputy be co-opted in the International Executive Committee; that is, these two directors automatically become part of the National Executives through to the District Committees of CoP. 34 The non-inclusion of women in leadership positions of the Church is based on the four (4) main factors/reasons, namely, biblical, church doctrine/practice, culture and the general nature of women. 35 These reasons are explained in the subsequent sections:
Biblical
The CoP believes that the leaders in the Bible (prophets and apostles) were all male. Jesus also worked with the 12 disciples (Matt. 10: 2-4), all being male. 36 Paul in Ephesians 5:22-24 also says that the man is the head of the woman. 37 African Bible Commentary states that God has made the husband (male) the head of the family, just as Jesus Christ is the Head of the church (Eph. 5:23). Both wives and husbands have been assigned roles in the home and in the church. 38 The Constitution of CoP 39 with specific reference to 1 Corinthians 12: 28 and Ephesians 4: 11-13 indicates that, approved men are called to these offices by Revelation, Prophecy or Recommendation by local Presbytery, Area Presbytery and National Executive Council and ratification by the General Council. The leadership of the CoP seems to interpret this five-fold ministry 40 as exclusively for men. The requirements for admission into full-time ministry as stated in the CoP's Constitution 41 have as its main text: 1Timothy 3:1-7 and Titus 1:6-8. Briefly, the CoP looks particularly for 'MEN' to take up the position of a Minister. 42 Women are to be a helpmeet (helper) to assist in the discharge of their earthly duties. Other duties of women include the welfare of widows, orphans, the needy, evangelism, witnessing, soul-winning and training women to be responsible wives as indicated in the CoP Constitution. 43 This entrenched position set up by the leadership of the CoP has made it impossible to admit women into the ordained ministry/pastoral ministry.
Church Doctrine/Policies
The policies/structures of the CoP do not allow women to serve as pastors in the church except as directors in the Women's ministry. A woman can work as an elder in 'extreme' conditions. This situation may happen as a result of the absence of males. The woman who may be appointed to occupy such an office shall be accorded the needed recognition as would have been given to a man. It must be stressed that immediately a male arrives or becomes available, the eldership mantle would be handed over to the man. According to Quist, there however may be several thousands of women available who are capable of being in leadership positions but their gender disqualifies them; thus, when they want to become pastors. 44 46 Both performed very well in their capacities as leaders.
Culture
Culture and religion are bedfellows, hence the CoP is conscious about Ghanaian culture wherever a branch is planted. Like the spiritual churches, the CoP has adopted the Akan cultural practices which are relevant for the growth of the church. 47 Most cultural practices/ideas in Ghana, especially, the Akan culture do not allow women to take part in decision-making processes. The Ascension gifts or ministerial gifts have been reserved for men. Again, the position of women in Ghanaian culture is similar to that of the Jews where leadership is concerned. 48 This, to some extent, may have had some influence on the practice of the church. The relegated position assigned to women (women's status was clearly defined by Jewish law and custom in ancient Israel) 49 in Ghana might have had great influence in the church of Pentecost in respect to appointments of women to become a pastor. The role of women in the CoP has become stereotyped. 50
Nature of Women
The responsibilities of women cannot be quantified either in the home or in society (Gen. 2 : 18). Paul and Peter viewed women as the weaker sex both physically and emotionally (Rom. 9:21 & 1Pet. 3:7). This should rather be considered a strength than a weakness. The nature of women makes them more responsive to issues and this can be seen in the ministry of Jesus. By their nature, women are more caring and they respond quickly to issues just like Jesus in His ministry. The emotional nature makes women more caring and responsible to issues of human concern just like Jesus Christ. It would even make them better pastors. Nevertheless, the challenges that a woman goes through during pregnancy and the duties associated with childbirth and upbringing make it very difficult for women to be called into the full-time ministry of the church. Quist further argues that in an era where women play various roles across the economy, equal opportunities are been made available and both sexes are capable of playing their roles effectively. Some male pastors argue that their counterparts in the deprived areas have to travel long distances and they, therefore, claim that women may not be strong and bold enough to go through such hazards. The author agrees with the view of Quist that this opinion is not factual because women in deprived areas have to endure various hardships in their daily chores and are able to perform creditably. The same can be said of women in urban areas. 51
The Position of the CoP Regarding Women Leadership
It has been observed that the CoP's position is in agreement with the Egalitarian and Complementary schools of thought. A section of Egalitarians, see no difference between the sexes and do not agree with the interpretations of scripture given by their counterparts who believe that women and men are different when it comes to church leadership. 52 A faction that is more liberal posits that the differentiation between the sexes is just an opinion of patriarchy and has no basis in scripture. Most evangelicals on the other hand believe in authority placed in the male gender. They are however of the view that the distinct roles provided by the apostles were to address a particular issue at the time. 53 Complementarians on the other hand believe that the roles assigned to both genders by the apostles had a bearing on church leadership. 54 However, there is a challenge in applying this to the issue of both sexes holding leadership positions in the church. Hence, it is commonly understood that women are prevented from holding the highest office of pastors but are permitted to hold leadership positions in other aspects of the ministry. Another view is that a woman can hold any position so long as a man leads her. Therefore, a woman can be appointed pastor so long as she ministers under the authority of men. 55 Deducing from the two views as indicated by scholars, egalitarian's position is the reflection of the CoP's leadership structure which is also in line with the complementary as indicated above. The author agrees with Kuwornu Adjaoattor's position that the admonition Paul gave in 1 Timothy 3:2a is not about the gender of an individual but whether the person is living beyond reproach and possessing a good reputation. 56
METHODOLOGY
The study used both primary and secondary sources in the collection of data. Both qualitative and quantitative methods were used. The data collection consisted of interviews, questionnaires and observations. Openedended questions and questionnaires were used in surveying the Kwadaso Area of the CoP which has nineteen
ANALYSIS OF FINDINGS
From the data collated a majority of the respondents maintained that the CoP should follow the current Pentecostal pattern/tradition and stop promoting the modernization agenda since the Bible expects women to be submissive (Eph. 5:22). Comparatively, according to their estimation, women cannot perform well in some positions in the CoP like men. They were of the view that pastoral work in the CoP should be in the preserve of men. Should there be a woman in any of the councils, for instance, from the Local to the National levels, then, she is there to represent children and women's ministries.
Fifty-nine percent (59%) of those talked with affirmed that God works in the culture and the cultural setbacks of Akan do not encourage women to hold leadership positions in the CoP. The church policies/ structures do not permit that, and for that reason, women should be content with their current calling as deaconesses. They argued that women cannot be pastors since the highest office for a woman to be called into is a deaconess in the CoP. The selection of these church officers according to the respondents has to do with the Constitutional structures of the church. It was realized that the CoP is concerned about the full-time ministry; which the leadership thinks female pastors may not be able to contain intermittent transfers. In their opinion, the 'higher callings' to the District, Area and the National appointments are preserved for men as indicated by Apostles Ekow Badu-Wood-Kwadaso Area Head (2014) and J. S. Gyimah-Asokwa Area Head/ former Ashanti Regional Chairman of CoP (2014). 59 The respondents further indicated that the responsibilities of the Area/National executives are so demanding and stressful that women cannot bear them. Currently, the National and Area Women leader and assistants are part of the executives. Therefore women should support their husbands (Elders, Pastors and Apostles) to function well. They further mentioned that the administration and decision-making may be difficult for women for such a higher position. From the Biblical perspective, the respondents gave two reasons why women should not be allowed to serve as either an Apostle or a National Chairman: In the early church, there was no woman apostle (Matt. 10:1-4). Males are the head of the family according to the Bible (Eph. 5:24). This argument has been challenged by Evangelical feminists who argue that God created man and woman as equals in a sense that excludes male headship. Male headship/domination (feminism acknowledges no distinction) was imposed on Eve as a penalty for her part in the fall. It follows, in this view that a woman's redemption in Christ releases her from the punishment of male headship. 60 The respondents answered that the issue is not about gender competition but it should be complementary roles for women's ministry. In the light of the complementary nature of the women's roles; the respondents emphasized that women have their functions as the District Leader, Area Leader and the National Women's Leader/President respectively.
Some respondents also emphasized that the constitution of the church in no way debars women from holding leadership positions because the Women's Ministry once had an Apostle as a Patron. 61 The respondents indicated that women are not given the pastoral function because; ''in the typical Ghanaian society, a man is expected to lead in all aspects of life whiles a woman follows''. One of the respondents interpreted Luke 8:1-3 to mean that ''Jesus made women servants to the Elders, Pastors and Area Heads among others'' Fifteen respondents were however of the view that the issue of women's inclusion arose after the Fourth World Conference on Women in Beijing in 1995, (popularly referred to as the Beijing Conference) where a consensus was reached that women like men can equally play roles in every aspect of life. They also argued that the world is now a global village and the churches in Ghana cannot remain in isolation. Currently, many women are knowledgeable and have ideas to make meaningful contributions in the church. These views were however in the minority.
Also, the fifteen maintained that Paul had few arguments he wanted to deal with or resolve. They pointed out that the CoP has never had the challenge that Paul encountered as indicated in the Corinthian church. The CoP leadership is trying to operate in 'Paul's controversy' since Paul's statement about the role of women is not clear. The respondents further indicated that the Greek culture in a way had some sort of negative influence on the role of women in the Corinthian church.
Other varied views were given with respect to why gender equality should become important to other churches; Forty-five percent (45%) held the view that, issues relating to gender are not all that important since the church does not discriminate against women and that Christian leaders should rather be concerned about spiritual things but not issues relating to gender equality. Others argued that Paul entreats women to submit to their husbands and describes men as head of the family. There were other arguments that the socioeconomic trend in Ghana calls for gender equality in both society and in the church. Hence, there is the need to embrace both genders in God's ministry based on their ability to perform.
It is a known fact that women are already holding leadership positions in the Districts 62 , Area and National 63 levels, especially in the women's ministry, except pastoral positions. Forty-one (41%) of the respondents agreed that as long as the person is filled with the Holy Spirit, dedicated, hardworking, respectful and has the needed leadership qualities, such a person should be made a leader; regardless of gender. Some respondents again mentioned that as female prophets they were able to serve as leaders in the Old Testament and so women can equally hold positions as Area Apostles or National Executive Members. According to Ntumy as indicated by Asamoah-Gyadu, those whose ministries are singled out as unique in the history and development of the CoP are all women. Among these, Ntumy mentions Maame Dede, a deaconess who was said to have been active in the 1960s in areas of healing, word of knowledge, prophecy and discernment of spirits. Ntumy also mentions Grace Mensah, who is the founder and leader of the popular Edumfa Revival Centre. What Ntumy has said is in accordance with what Quist has stated that women are seen playing leadership roles in other churches in Ghana. In the Methodist and Presbyterian Churches, for example, women are being admitted into the clergy. However, in the Catholic Church, women are not permitted to be priests but are admitted as nuns who play a crucial role in the church. In the African Independent Churches (AIC's) women are found in leadership positions and some are even founders of ministries. In the view of Asamoah-Gyadu, women are to be largely credited with the survival of the church due to their active engagement. He added that the male hegemony in African churches was broken by the women who entered the prophetic ministry and established their own ministries. 64 The respondents maintained that there is a possibility that, a time will come when a female pastor would become a District Pastor, an Area Apostle or National Executive Member.
From the analysis, it is evident that the CoP has no immediate plans of letting women occupy key leadership positions in the Church due to the various reasons above. It is however the hope of some of the respondents and the author that there would come a time when women would lead the church. 65
RECOMMENDATIONS
Based on the above findings from the field, this article recommends the following for CoP Leadership: • There is the need to have a common theological education for Classical Pentecostal Fraternity in Ghana, such as the School of Theology, Mission and Leadership (STML). This will enable CoP members who go through theological education to know the importance of ordaining females as Pastors into the ordained ministry. • The CoP should officially open up discussion on the need to create gender equality when it comes to the appointment of members into full-time ministry of the church (leadership positions). Bible Study material could be used to sensitise their members on the issue under consideration. Again, this would enable women's ministry to become more independent as a ministry in CoP. • CoP leadership should do well to liberate women from socio-cultural setbacks; because, women are more vocal, can articulate their views and can equally handle any position a man can hold in the church in this 21 st century. This undertaking would call for overhauling of the church's Constitution to create an equal platform for both genders to reflect on what is stated in Galatians 3: 28 (There is neither Jew nor Greek, there is neither slave nor free, there is neither male nor female; for you are all one in Christ).
• CoP leadership should re-look its Constitution with regard to enrolling only males into the ordained ministry; that is, pastoral and eldership responsibilities. This Constitutional provision should be critically looked at by the National Executive Council; in reference to how Jesus Christ selected Mary Magdalene to convey the resurrection message to the disciples. • The Constitutional provision in CoP which allows the Pastor, apostle or an elder to identify a ''called person'' into the ordained ministry must be reviewed since the ''David's in the wilderness taking care of sheep'' might not be noticed (1 Sam. 16:11-14). The leadership should emulate the selection processes of Assemblies of God (AG) and the other mainline churches; where the candidate who feels he/she has been called by God informs the Pastor about his/her intention to be in the ordained ministry. The selection of only males to serve as Trustees of CoP should be critically cross-examined in the context of contemporary ideas about gender equality in across the globe and in Ghana especially (within the church and society). • CoP leadership should encourage Pastor's spouses to pursue theological education, particularly those ''who have the call of God.'' Some of the spouses who would avail themselves for training will be allowed to play the role of Associate Pastor (s) as is done in Assemblies of God. This will reduce the burden on the male District Pastors; who usually have many stations or congregants to man within their jurisdiction. This decision when implemented would boost the confidence/morale of other females who want to aspire to the leadership positions in the CoP.
CONCLUSION
This paper has examined women's role in the CoP. It has been observed that in the CoP, leadership positions are male dominated. In view of that, the highest leadership position/role for a woman in CoP is a deaconess. This is the result of the cultural perception concerning women in Ghana which has influenced the CoP's constitution. However, leadership roles/status in Ghanaian Pentecostal Fraternity should not be seen as gender-specific since God does not intend any gender distinction between men and women in the Christian ministry. God is more interested in the ability and availability to perform and not necessary about one's gender. Additionally, times have changed and the CoP cannot remain in isolation; when it comes to issues relating to gender equality since many women are knowledgeable and have ideas to make meaningful contributions in the church. Therefore, the calling of God into the leadership positions is for all persons regardless of gender since Jesus' death has broken down all gender barriers in the church.
|
2022-03-14T15:27:31.893Z
|
2022-03-11T00:00:00.000
|
{
"year": 2022,
"sha1": "a0449b775991ce74f92ec94ae2a23cc94c8cf743",
"oa_license": "CCBY",
"oa_url": "https://noyam.org/?download_id=7523&smd_process_download=1",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "95cba3c9796be3ded811716b2d670d3b007c7b71",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
}
|
22528138
|
pes2o/s2orc
|
v3-fos-license
|
Mechanistic Insights into the Retaining Glucosyl-3-phosphoglycerate Synthase from Mycobacteria*
Background: Knowledge of conformational changes occurring in glycosyltransferases is limited. Results: The active site of GpgS is essentially preformed as the protein proceeds along the catalytic cycle with the nucleotide sugar β-phosphate playing a central role in substrate binding. Conclusion: Conformational dynamics is a major determinant of GpgS activity. Significance: This model of action might be operational in other GT-A glycosyltransferases. Considerable progress has been made in recent years in our understanding of the structural basis of glycosyl transfer. Yet the nature and relevance of the conformational changes associated with substrate recognition and catalysis remain poorly understood. We have focused on the glucosyl-3-phosphoglycerate synthase (GpgS), a “retaining” enzyme, that initiates the biosynthetic pathway of methylglucose lipopolysaccharides in mycobacteria. Evidence is provided that GpgS displays an unusually broad metal ion specificity for a GT-A enzyme, with Mg2+, Mn2+, Ca2+, Co2+, and Fe2+ assisting catalysis. In the crystal structure of the apo-form of GpgS, we have observed that a flexible loop adopts a double conformation LA and LI in the active site of both monomers of the protein dimer. Notably, the LA loop geometry corresponds to an active conformation and is conserved in two other relevant states of the enzyme, namely the GpgS·metal·nucleotide sugar donor and the GpgS·metal·nucleotide·acceptor-bound complexes, indicating that GpgS is intrinsically in a catalytically active conformation. The crystal structure of GpgS in the presence of Mn2+·UDP·phosphoglyceric acid revealed an alternate conformation for the nucleotide sugar β-phosphate, which likely occurs upon sugar transfer. Structural, biochemical, and biophysical data point to a crucial role of the β-phosphate in donor and acceptor substrate binding and catalysis. Altogether, our experimental data suggest a model wherein the catalytic site is essentially preformed, with a few conformational changes of lateral chain residues as the protein proceeds along the catalytic cycle. This model of action may be applicable to a broad range of GT-A glycosyltransferases.
tion for the nucleotide sugar -phosphate, which likely occurs upon sugar transfer. Structural, biochemical, and biophysical data point to a crucial role of the -phosphate in donor and acceptor substrate binding and catalysis. Altogether, our experimental data suggest a model wherein the catalytic site is essentially preformed, with a few conformational changes of lateral chain residues as the protein proceeds along the catalytic cycle. This model of action may be applicable to a broad range of GT-A glycosyltransferases.
Glycosyltransferases (GTs) 4 play a central role in nature due to their exceptional capacity to synthesize a broad range of glycans. They transfer a sugar moiety from nucleotide sugar and lipid-phosphosugar donors to acceptor substrates, including mono-, oligo-, and polysaccharides, proteins, lipids, small organic molecules, and deoxyribonucleic acids (1). GTs can be classified as either "retaining" or "inverting" enzymes according to the anomeric configuration of substrates and products (2). Inverting GTs follow a direct displacement S N 2-like mechanism via a single oxocarbenium ion-like state. In contrast, the catalytic mechanism for retaining GTs remains less clear. By analogy with glycosylhydrolases, a double displacement mechanism involving a covalently bound glycosyl-enzyme intermediate was first suggested. However, in the absence of both, a clear catalytic nucleophile and structural/kinetic evidence of a viable covalent intermediate, an alternative mechanism has been proposed. In this mechanism, known as "internal return," leaving group departure and nucleophilic attack occur on the same face of the sugar (3-7) involving either a short lived oxocarbenium ion intermediate (SNi-like) (8) or an oxocarbenium ion transition state (SNi) (9). Two major structural folds have been described for the nucleotide sugar-dependent enzymes among the first 35 GT sequence-based families (CAZy, carbohydrate-active enzymes database (10)) for which three-dimensional structures have been reported. These topologies are variations of "Rossmann-like" domains and have been identified as GT-A and GT-B (11,12). Moreover, bioinformatics analysis revealed that many of the structurally uncharacterized nucleotide sugar-dependent GT families are also predicted to adopt one of these two folds. Interestingly, both inverting and retaining enzymes were found in GT-A and GT-B folds indicating that there is no correlation between the overall fold of GTs and their catalytic mechanism. The GT-A fold was first described for the 256-amino acid protein SpsA from family GT2, a putative inverting GT from Bacillus subtilis (13). It consists of two tightly associated /␣/ Rossmann-like domains, where the N-terminal domain recognizes the nucleotide sugar donor and the C-terminal domain of the protein contains the acceptorbinding site. Most GT-A enzymes exhibit an Asp-Xaa-Asp (also known as DXD) signature in which the carboxylate groups coordinate a divalent cation and/or a ribose ring (2). Kinetic and structural studies have revealed that most GT-A enzymes follow an ordered mechanism in which the divalent cation and nucleotide sugar donor bind first, prior to binding of the acceptor (14 -16). The glycosylated acceptor is then released, followed by the nucleotide group. The divalent cation may react with the free enzyme and does not dissociate after each catalytic cycle (17)(18)(19)(20)(21)(22)(23). Often the interaction of GT enzymes with their natural substrates leads to substantial changes in the structural conformation of the proteins, compared with their free forms, with direct implications for their function (12,24,25). Specific loops, adjacent to the active site, for instance, often adopt different conformations in the presence or absence of substrates. These loops have been suggested to restrict water access to the active site and appear to play a crucial role during substrate binding and catalysis in GT-A enzymes, including inverting GnT-I (26), 4Gal-T1 (21), GlcAT-I (27), CstII (28), and MgS (29) and retaining ␣-(1,3)GalT (20,30) and GTA/GTB (31).
The glucosyl-3-phosphoglycerate synthase (GpgS) is a retaining ␣-glucosyltransferase that initiates the biosynthetic pathway of the 6-O-methylglucose lipopolysaccharides (MGLPs) in mycobacteria. The enzyme transfers a Glcp moiety from UDP-Glc to the 2-position of the 3-phosphoglycerate (PGA) to form glucosyl 3-phosphoglycerate ( Fig. 1A) (31,33). MGLPs are cytoplasmic lipopolysaccharides of intermediate size containing up to 20 Glcp units, many of which are 6-O-methylated. Moreover, MGLPs can be acylated with additional acetyl, propionyl, isobutyryl, succinyl, and octanoyl groups (Fig. 1B) (34). A remarkable property associated with MGLPs is their ability to form stable 1:1 complexes with long-chain fatty acids and acylcoenzyme A derivatives in vitro (35,36). The fact that MGLPs are composed of Glcp units predominantly in ␣-(134)-linkage confers on these molecules a proclivity to assume an helical conformation. Within the complexes, the fatty acyl chain is included in the nonpolar cavity of the coiled polysaccharide chain (37). Interestingly, with an intracellular concentration of long-chain acyl-CoAs in Mycobacterium smegmatis of ϳ0.3 mM, the concentration of polymethylated polysaccharides approaching 1 mM, and the dissociation constant of the polysaccharide⅐lipid complex estimated to 0.1 M, all of the long-chain fatty acids of the cytosol may form complexes with polymethylated polysaccharides leading to the suggestion that the physiological function of these polymethylated polysaccharides may serve to as general carriers for long-chain fatty acids synthesized in the cytosol (38).
This study describes a detailed investigation of the conformational properties of MtGpgS in solution. Using a combination of x-ray crystallography, limited proteolysis, isothermal titration calorimetry (ITC), and analytical ultracentrifugation (AUC), we propose a plausible model for donor and acceptor substrate recognition and binding. The implications of this model for the understanding of the early steps of MGLPs biosynthesis and the catalytic mechanism of other members of the GT-A family are discussed.
GpgS Crystallization and Data Collection-The apo-forms of MtGpgS and MtGpgS in complex with Mg 2ϩ and uridine 5Ј-diphosphate (UDP, Fluka; MtGpgS⅐Mg 2ϩ ⅐UDP) were crystallized as described previously (39). Crystals of MtGpgS in complex with Mn 2ϩ , UDP, and PGA (MtGpgS⅐Mn 2ϩ ⅐UDP⅐PGA) were obtained by mixing 0.5 l of the protein (10 mg ml Ϫ1 ) in the presence of 5 mM MnCl 2 , 5 mM UDP, and 5 mM PGA with 0.5 l of a mother liquor of 0.4 M NH 4 dihydrogen phosphate using the sitting drop vapor diffusion method. Crystals appeared after 1-2 days and grew as rhombuses reaching 0.37 ϫ 0.30 ϫ 0.15 mm. Prior to data collection, the crystals were cryo-cooled in liquid nitrogen by using 0.5 M NH 4 dihydrogen phosphate and 30% (v/v) glycerol as cryo-protectant solution. X-ray diffraction data from single crystals of MtGpgS⅐Mn 2ϩ ⅐UDP⅐PGA were collected using synchrotron radiation in the ID-23-2 microfocus beamline ( ϭ 0.873 Å) at the European Synchrotron Radiation Facility (ESRF, Grenoble, France), and processed with the XDS program. MtGpgS⅐Mn 2ϩ ⅐UDP⅐PGA crystals belong to the I4 1 space group and diffracted to 1.98 Å and have two monomers per asymmetric unit, corresponding to a Matthews coefficient of 2.32 Å 3 and a solvent content of 47.04%. The complete data collection statistics are shown in Table 1.
Structure Determination and Refinement-The structures of the apo-form of MtGpgS and that of the MtGpgS⅐Mg 2ϩ ⅐UDP and MtGpgS⅐Mn 2ϩ ⅐UDP⅐PGA complexes were solved by molecular replacement with the program Phaser Version 2.1.2 (40), using the atomic coordinates of MAP2569c from Mycobacterium avium subsp. paratuberculosis as the search model (Protein Data Bank code 1CKJ, see Ref. 41). The final MtGpgS apo-form, MtGpgS⅐Mg 2ϩ ⅐UDP, and MtGpgS⅐Mn 2ϩ ⅐UDP⅐PGA models were obtained after alternate cycles of model building using the program COOT (42) and restrained refinement using the program phenix.refine (43).
N-terminal Sequence Analyses-Samples were run in a NuPAGE 4 -12% gel. The gel was then washed with NuPAGE transfer buffer (Invitrogen) during 15 min at room temperature. Proteolytic fragments were electrotransferred to a PVDF membrane using the iBlot dry blotting system (Invitrogen) during 7 min. The PVDF membrane was then washed for 10 min with Milli-Q purified water. Bands were stained with a (66). Position 3 of the second and that of fourth ␣-D-Glcp residues (closest to the reducing end) are substituted by single ␣-D-Glcp residues. R1, R2, and R3 are acyl groups: R1, acetate, propionate, or isobutyrate; R2, octanoate; and R3, succinate. MGLPs occur as a mixture of four main components that differ in their content of esterified succinate. The names of the genes thought to be involved in the different steps of their elongation and modifications are shown (32,67). Acylation and methylation are thought to occur concurrently; the precise stage at which the two -(133)-linked Glc residues are attached is not known, but the definition of early MGLP precursors suggests that they are added early during the elongation process. solution containing 0.1% Coomassie Brilliant Blue R-250, 40% methanol, and 1% acetic acid and subjected to N-terminal sequence analysis using Applied Biosystems 494 procise high throughput protein sequencer at the Biomolecular Resource Facility of the University of Texas Medical Branch.
Isothermal Titration Calorimetry-Ligand binding to MtGpgS was assayed using the VP-ITC system (MicroCal Inc.) as described previously (25,44), with the following modifications. The ITC cell (1.4 ml) contained 40 M MtGpgS in 50 mM HEPES, pH 7.5, 2 mM MnCl 2 , 150 mM NaCl, and the syringe (300 l) contained 500 M of URI, UMP, UDP, UDP-Glc, or PGA in the same buffer. Binding of PGA to MtGpgS-UDP and uridine complexes was assayed as follows. MtGpgS was first titrated with the nucleotide analogs, and the resulting solutions of the protein nucleotide complexes were then titrated with a 500 M PGA solution. Sample solutions were thoroughly degassed under vacuum, and each titration was performed at the indicated temperature by one injection of 2 l followed by 37 injections of 8 l, with 210 s between injections using a 416 rpm rotating syringe. Raw heat signal collected with a 16-s filter was corrected for the dilution heat of the ligand in the MtGpgS buffer and normalized to the concentration of ligand injected. UMP, uridine, and PGA binding isotherms were fitted to a single site bi-molecular model (45) using the Origin TM software provided by the manufacturer. Fitting the UDP-Glc and UDP binding isotherms required to develop a specific binding algorithm as described below.
Analysis of UDP and UDP-Glc Binding Isotherms-ITC binding isotherms observed upon UDP and UDP-Glc binding to GpgS were fitted to a model considering two equilibrium reactions, the equilibrium between two protein conformations, P 0 and a P 1 (Reaction 1), with equilibrium constant K e and thermodynamic parameters ⌬G e , ⌬H e , ⌬S e , and ⌬C p,e , and the binding equilibrium with 1:1 stoichiometry between one free protein conformation only, P 1 , and its bound complex (Reaction 2), The GpgS protein population shifts upon ligand binding from free P 0 to bound P 1 conformations with the overall equilibrium constant K ϭ K e ⅐K b . The heat absorbed or evolved upon ligand addition to the GpgS solution is the sum of the heats absorbed or evolved during equilibration of the two protein equilibriums, dQ ϭ dQ e ϩ dQ b . The experimental parameter determined in the titration calorimeter is the differential heat dQ/dL tot (actually ⌬Q/⌬L tot ), where L tot is the total ligand concentration, free plus bound (see Equations 1-3 and supplemental material) L r and r are two unitless parameters that depend on the total ligand and protein concentrations, r ϭ 1/P tot K b , L r ϭ L tot /P tot (45), and ␣ is a unitless parameter that depends on the equilibrium constant between the two protein conformations, ␣ ϭ 1 ϩ 1/K e . V 0 is the reaction cell volume. Nonlinear regression analysis of dQ/dL tot (Equation 1) allows estimation of the thermodynamic parameters of the two equilibrium reactions. Analytical Ultracentrifugation-AUC experiments were performed with a Beckman XL-1 analytical ultracentrifuge using absorbance optics. Velocity measurements utilized twosector charcoal-filled Epon centerpieces, quartz windows, 400-l sample and 420-l reference volumes in 50 mM Tris-HCl, pH 8.0, 150 mM NaCl. All samples were centrifuged in a Beckman 8-hole An50Ti rotor at 22°C at 40,000 rpm, and the data were collected at 280 nm, with a radial increment of 0.003 cm for ϳ7 h. Velocity data were edited and analyzed using the boundary analysis method of Demeler and van Holde as implemented in Ultrascan version 7.3 for Windows (46). Sedimentation coefficients (s) are reported in Svedberg units (S), where 1 S ϭ 1 ϫ 10 Ϫ13 s and were corrected to that of water at 20°C (s 20,w ). The partial specific volume of full-length MtGpgS was calculated from the amino acid sequence within Ultrascan. Modeling of hydrodynamic parameters was performed using Ultrascan. The frictional ratio (f/f 0 ) was calculated from the known molecular mass and measured sedimentation coefficient using Ultrascan.
Structural Alignment-Structural alignment of MtGpgS (Protein Data Bank codes 4DDZ, 4DE7, and 4DEC are from this study; 3E25 and 3E26 are from Ref. 47), MaGpgS (3CKJ, 3CKN, 3CKO, 3CKQ, and 3CKV are from Ref. 41), and other GT-A glycosyltransferases was performed by the distance alignment matrix method using DALI Lite. Molecular graphics and analyses were performed with the UCSF Chimera package (48).
RESULTS AND DISCUSSION
Metal Ion Promiscuity in MtGpgS-In most GT-A glycosyltransferases, metal ions play a central role in substrate recognition and catalysis (2,23). Specifically, in MtGpgS, the conserved His 258 residue and the carboxylate groups of the Asp 134 -Ser 135 -Asp 136 signature are involved in divalent metal cation coordination, which also coordinates to the ␣and -phosphate moieties of the donor substrate UDP-Glc. The divalent metal cation has been proposed to play the role of the Lewis acid during catalysis in retaining GT-A enzymes and is an essential co-factor for enzymatic activity in MtGpgS (Fig. 2) (2). Therefore, we decided to investigate the influence of several metal ions on the activity of the enzyme. In contrast to previous reports (49,50), we found that MtGpgS was not only active in the presence of Mg 2ϩ but also when other metal cations were used as co-factors. As depicted in Fig. 2, the enzyme requires Mg 2ϩ for maximal activity. However, MtGpgS was enzymatically active when another group II metal ion (Ca 2ϩ ) and transition metal ions (Mn 2ϩ , Co 2ϩ , or Fe 2ϩ ) were present in the reaction mixture. MtGpgS was not active in the presence of Zn 2ϩ or Cu 2ϩ . Our results thus point to a relatively broad specificity of GpgS for metal cations.
Dynamic Loop as Key Factor in MtGpgS-mediated Catalysis-The first described crystal structure of a mycobacterial GpgS (GT81 family) was that of M. avium subsp. paratuberculosis (MaGpgS; apo-form and Mn 2ϩ ⅐UDP, Mn 2ϩ ⅐UDP-Glc, UDP-Glc and UDP-GlcNAc complexed forms; Ref. 41) followed by that of M. tuberculosis (MtGpgS; apo-form and Mg 2ϩ ⅐UDP⅐ PGA complexed form (47), and apo-form, Mg 2ϩ ⅐UDP and Mn 2ϩ ⅐UDP⅐PGA (Table 1)). MaGpgS and MtGpgS are structurally closely related enzymes (root mean square deviation value of 1.4 Å) displaying an 82% sequence identity and 95% sequence similarity. Both MaGpgS and MtGpgS are homodimers and display the characteristic two tightly bound domain organization of GT-A glycosyltransferases (Fig. 3A) (2).
Flexibility and conformational heterogeneity of a loop connecting the acceptor binding domain and the C-terminal extension (residues 253-262, linking 8 and ␣9 in MtGpgS; the numbering system for MtGpgS is used for both MaGpgS and MtGpgS throughout this paper, unless stated otherwise) appears to be critical during substrate binding and catalysis in GpgS enzymes (Fig. 3, A and B). In the apo-form of MtGpgS, we observed that this flexible loop adopts a double conformation in both monomers of the protein dimer (r.m.s.d. value of 1.88 for six residues; Fig. 3, C and D), a property that was not seen in the previous reported apo crystal structures of GpgS. These conformations correspond to a catalytically active (L A , relative occupancy of 60%) and inactive (L I , relative occupancy of 40%) states of the loop in the active site. L A and L I most likely represent distinct energy states of the L loop rather than crystallization artifacts because they do not participate in crystal packing interactions. A detailed analysis of intermolecular interactions showed four residues, Arg 256 , Ala 257 , His 258 , and Arg 261 in the L loop, to be of particular importance. In the inactive L I conformation, the Arg 256 makes hydrogen bonds with the side chain OH of Asp 136 , which is part of the Asp-Xaa-Asp motif, and Glu 212 . In addition, Arg 261 is hydrogen bonded with the lateral chain of Tyr 229 . Although in the active L A conformation the two arginine residues conserved the described binding motif, a new hydrogen bond was formed between the main chains of Ala 257 and Ile 138 . Importantly, also found within this loop was His 258 , which plays a fundamental role in metal coordination in GpgS and other GT-A enzymes, including the UDP-GalNAc:polypeptide ␣-N-acetylgalactosaminyltransferase T1 (GT27 family) and the mannosylglycerate synthase (GT78 family) (41,51,52). The side chain of His 258 in the L A conformation is in an optimal position to readily coordinate with metal ions, although it is far away from the nucleotide-binding site in the L I conformation making van der Waals interaction with the side chain of Tyr 165 .
Notably, the active L A conformation of the L loop observed in the apo-form of MtGpgS is conserved in two other relevant structural states of the enzyme, the metal⅐nucleotide or metal⅐nucleotide sugar donor-bound complexes (Mg 2ϩ ⅐UDP, Mn 2ϩ ⅐UDP, and Mn 2ϩ ⅐UDP-Glc), and the metal⅐nucleotide acceptor-bound complex (Mn 2ϩ ⅐UDP⅐PGA) (Tables 2 and 3 for r.m.s.d. and // torsion angle values respectively; Fig. 3D). In all protein⅐substrate complexes, Arg 256 and Arg 261 slightly change their side chain traces to allow for metal ion coordination and substrate binding. The side chain of Arg 256 participates in Mg 2ϩ or Mn 2ϩ coordination and makes hydrogen bonding with Glu 212 , whereas Arg 261 makes electrostatic interaction with the ␣-phosphate of UDP-Glc. Importantly, the orientation of the key His 258 residue found in the L A conformation is preserved in the complexes and coordinates Mg 2ϩ or Mn 2ϩ ions. In addition, the hydrogen bonding between Ala 257 and Ile 138 is also conserved. Altogether, the structural data indicated that the conformation of the L loop detected during catalysis is already present in the free enzyme, suggesting that the protein conformation necessary for catalysis is an intrinsic property of GpgS.
Donor Recognition Site, Two Conformations for the -Phosphate-The uridine moiety binds to a pocket in the N-terminal domain mainly defined by the connecting loops 2-␣2 (residues 80 -87), 3-␣3 (residues 50 -56), 5-␣6 (residues 133-143), ␣8-␣9 (residues 221-230), and L A , where it makes a number of hydrophobic and hydrophilic contacts with the protein. Of particular relevance, Ser 81 makes hydrogen bond with the uridyl O 2 providing the basis for the nucleoside specificity. Moreover, the side chain of Tyr 229 makes an important hydrogen bond with the O 5 of the ␣-phosphate of UDP. Its side chain conformation is also conserved in the apo-form and other ligand-bound forms of the enzyme. Interestingly, the -phosphate of UDP binds to the enzyme into two different conformations (Fig. 4A). In the first conformation, the -phosphate is oriented toward the ␣-face of the ribose ring and in close proximity to the catalytic center. Consequently, the sugar moiety is favorably positioned for its transfer to the acceptor substrate PGA. This conformation has been observed in MaGpgS⅐Mn 2ϩ ⅐UDP, MaGpgS⅐Mn 2ϩ ⅐UDP-Glc, MtGpgS⅐Mg 2ϩ ⅐UDP⅐PGA, and other GT-A homologous enzymes, including MgS from Rhodothermus marinus, which catalyzes the synthesis of ␣-mannosyl-D-glycerate using GDP-Man as donor sugar ( Fig. 4B; family 78) (29,41,47,52). In contrast, we found that in the second conformation, the -phosphate, is on the contrary oriented toward the -face of the ribose ring, solvent-exposed, and away from the catalytic site. Specifically, the -phosphate rotates 240°, and its O 2 makes a new hydrogen bond with the side chain of Glu 54 . This conformer has been observed in MtGpgS⅐Mn 2ϩ ⅐UDP⅐PGA (this study) and in the homologous enzyme MgS from R. marinus, and it likely corresponds to the nucleoside diphosphate moiety of UDP-Glc leaving the catalytic site after sugar moiety transfer (52).
Acceptor Recognition Site-The acceptor-binding pocket in MtGpgS is located on the top of ␣-helix 7, which is also involved in protein dimerization. The carboxyl group of PGA makes a hydrogen bond with the side chain of Thr 187 , whereas Arg 185 positions its guanidinium group in close contact with the phosphate moiety (2.88 Å ArgN⑀ and the PO4 O3; Fig. 4). A conformational change was observed in the connecting loop 6-␣7, which presents an intrinsic flexibility provided by three consecutive glycine residues. In the MaGpgS⅐Mn 2ϩ ⅐UPD-Glc complex, which does not contain the acceptor substrate, Gly 183 -Gly 184 occupy the acceptor-binding pocket, whereas in MtGpgS⅐Mn 2ϩ ⅐UDP⅐PGA, the equivalent residues are oriented in opposite direction and away from the binding site, allowing for PGA binding. This places the accepting OH2 group of PGA at ϳ2.4 Å from the anomeric C1 of the modeled glucose moiety of UDP-Glc. This result is in contrast with a previous report in which the predicted distance between OH2 atom of PGA and C1 of the glucose moiety was ϳ5.48 Å, clearly not compatible for the glucose transfer to take place (47). Interestingly, the PGA and D-glycerate groups lie in equivalent positions in MtGpgS and MgS glycosyltransferases (MgS⅐D-glycerate complex; see Ref. 29), where the accepting hydroxyl group is located at ϳ2.3 Å from the anomeric C1 of the sugar ring. Furthermore, mutation of Thr 139 in MgS (which is equivalent to Thr 187 in MtGpgS) resulted in a 1500-fold increase in the K m for D-glycerate (52), highlighting its role in acceptor binding and suggesting a common acceptor recognition mechanism for GpgS and MgS.
Thermodynamics of MtGpgS-Substrate Interactions-To investigate further the molecular mechanism of donor and acceptor substrates binding to MtGpgS, binding reactions were studied in solution by isothermal titration calorimetry. First, binding of the donor substrate was studied in the presence of manganese ions. It is worth noting that in the absence of metal ions binding isotherms showed weak affinity confirming the requirement of divalent cations for UDP-Glc binding (data not shown) as observed previously with bovine ␣-1,3-galactosyltransferase and more recently with MgS (20,52). UDP-Glc bound to MtGpgS with an apparent stoichiometry of one ligand molecule per protein monomer (Fig. 5). However, the binding isotherm was atypical revealing two clearly detectable reactions, one with decreasing heats of reaction at low ligand to protein molar ratios the other with increasing binding heats at high molar ratios, revealing a complex binding process (Fig. 5A). This binding isotherm could not be fitted to a bimolecular association model. This peculiar binding process was even better observed with UDP as the heat contribution of the reaction at low UDP to protein molar ratios was greater than with UDP-Glc binding (Fig. 5A). The observation that MtGpgS bound both UDP-Glc and UDP with a similar binding process and overall stoichiometries of one nucleotide per protein monomer together with the observation that the heat of the low molar ratio reaction decreased with increasing temperatures from 15 to 35°C (supplemental Fig. S1) brought us to consider MtGpgS protein as being in equilibrium between two conformations, P 0 and P 1 , with only one conformation, P 1 , binding the nucleotide diphosphate ligand ("Experimental Procedure" and Table 4) as shown in Reaction 3, Within this model, nucleotide diphosphate (L) binding to P 1 conformation shifts the P 0 P 1 conformational equilibrium of
onformational Dynamics in GT-A Glycosyltransferases
the protein toward the bound P 1 conformation. This model allowed precise predictions of the observed binding isotherms ( Fig. 5 and supplemental Fig. S1). UDP-Glc bound MtGpgS with high affinity and a largely exothermic and enthalpy-driven reaction with a large heat capacity change on binding (K d ϭ 4 M, ⌬H/⌬G ϭ 172%, ⌬C p ϭ Ϫ368 cal⅐(mol⅐K) Ϫ1 ; Fig. 5A and Table 4), a set of binding parameters in agreement with the MaGpgS/Mn 2ϩ ⅐UDP-Glc crystal structure and the involvement of hydrophilic interactions in sugar donor association (41). With respect to UDP-Glc binding, UDP bound to MtGpgS with a 4-fold lower affinity, a 4 kcal/mol smaller enthalpy, and a 45 cal⅐(mol⅐K) Ϫ1 binding heat capacity reduction, revealing the contribution of the glucose moiety to the binding process (Table 4). Based on best fits of UDP binding isotherms at various temperatures, the apparent amounts of P 0 and free P 1 protein conformations varied from 31 and 69% at 15°C to 10 and 90% at 35°C, respectively. At 15°C, the P 0 P 1 transition was largely endothermic (⌬H e ϭ 7.1 kcal/mol) and entropy-driven (supplemental Fig. S1 and Table 4). The nucleotide binding process was further investigated by testing UMP and uridine binding. Both UMP and uridine bound to MtGpgS, however, with one main difference, i.e. binding isotherms exhibited one binding transition only that could be precisely fitted to a simple bimolecular association model with a binding stoichiometry of one ligand per protein monomer. Binding was enthalpy-driven with binding parameters JULY 13, 2012 • VOLUME 287 • NUMBER 29 (45)) was found in an inactive conformation or partially unmodeled, respectively. similar to those with UDP binding (UDP and uridine binding affinities were 4-and 6-fold lower than UDP-Glc binding affinity, respectively; Fig. 5A and Table 4). An observation clearly indicated that UMP and uridine bound to the two MtGpgS P 0 and P 1 protein conformations with equal affinities, respectively. Taken together, these results emphasize the important role of the -PO 4 in stabilizing the donor substrate or product MtGpgS complexes in the P 1 conformation, in agreement with the crystal data.
Conformational Dynamics in GT-A Glycosyltransferases
Second, binding of the acceptor substrate was studied by ITC. PGA bound to the UDP MtGpgS complex with a 1:1 stoichiometry with respect to protein monomer, a high affinity with an exothermic and enthalpy-driven reaction (K d ϭ 4 M, ⌬H/⌬G ϭ 85%) with a large heat capacity change on binding (⌬C p ϭ Ϫ475 kcal/mol) ( Fig. 5B and Table 4). PGA also bound to the UMP MtGpgS complex with a 1:1 stoichiometry, however, with a completely different binding process. Binding was endothermic and largely entropy-driven with a 2-fold lower binding affinity (K d ϭ 9 M, ⌬H/⌬G ϭ Ϫ23%; Fig. 1B and Table 1). Although tested at different temperatures, in the presence or absence of metal cations, PGA binding to the uridine MtGpgS complex or to free MtGpgS could not be detected (Fig. 5B).
Results of the ITC study clearly demonstrated that binding of the donor and acceptor substrates was sequential. Furthermore, UDP and UMP binding to MtGpgS leads to the formation of different PGA protein complexes. Altogether, the results of the ITC and x-ray studies make it tempting to hypothesize that P 0 and P 1 MtGpgS protein conformations could correspond to the catalytically inactive (L I ) and active (L A ) states of the protein, respectively, as observed in the protein crystals.
Overall Conformational Flexibility of MtGpgS-To further characterize the effect of substrate binding on the conformation of MtGpgS, we performed limited proteolysis experiments. When incubated with trypsin, the enzyme was rapidly degraded (Fig. 6A). Similar profiles to that observed with the unliganded enzyme were obtained when UDP-Glc, UDP, UMP, URI, or PGA alone were present in the reaction mixture. As shown in Fig. 6A, the presence of both UDP and PGA substrates slightly Ⅺ). B, MtGpgS was first titrated with a nucleotide as described in A, and the protein⅐nucleotide complex formed was then titrated with PGA. Thermodynamic data are reported in Table 4. protected MtGpgS from degradation by the protease. Sedimentation velocity AUC studies of pure MtGpgS were in agreement with the proteolysis experiments (Fig. 6B). The nearly vertical distribution s indicates that MtGpgS sedimented as a single homogeneous species with an average sedimentation coefficient of 4.42 s, which is consistent with a dimeric protein (71,311 Da). Upon addition of equimolar UDP and PGA, the sedimentation coefficients increased slightly to 4.50 s, whereas the presence of UDP-Glc or its derivatives did not significantly affect the s values of MtGpgS. Taking into account the apparent 1:1 stoichiometry of binding and the relatively minor increase in the molecular weight of MtGpgS upon ligand binding, this change in the sedimentation coefficient indicates the formation of a slightly less compact structure. Altogether, our results suggest that although the conformation of the catalytic loop L plays a central role during the catalytic cycle, the overall structure of the enzyme remains unchanged.
Structural Comparison with GT-A GTs-To date, the threedimensional crystal structures of GT-A enzymes in all three relevant functional states of their catalytic cycles (i.e. the ligandfree form, the binary complex with bound nucleotide (NDP), or nucleotide sugar (NDP-sugar) donor, and the ternary complex with bound nucleotide (NDP) and acceptor substrates/derivatives) have been reported for families GT6, GT7, GT8, GT13, GT14, GT15, GT29, GT43, and GT64 (2,11). Interestingly, a careful inspection of all available structures revealed examples wherein very few differences are found between the conformations of the apo-and complexed forms suggesting that, similar to GpgS, the catalytic site of some GT-A enzymes might be preformed before donor and acceptor binding. In the ST3Gal-I sialyltransferase (GT29), there are no significant structural changes of the protein main chain in the active site upon Gal1,3-GalNAc-␣-PhNO 2 or CMP/Gal1,3-GalNAc-␣-PhNO 2 binding, consistent with the random order mechanism determined for this enzyme (53,54). The ␣1,2-mannosyltransferase Kre2p/Mnt1p from Saccharomyces cerevisiae (GT15), involved in both N-linked outer chain and O-linked oligosaccharide biosynthesis, displays an r.m.s.d. between the ligand-free form and its binary and ternary complexes of 0.17 and 0.19 Å, respectively (55). Only a limited number of side chain conformational changes occur in the enzyme upon binding of the donor and acceptor substrates. The fact that an acceptor substrate⅐enzyme binary complex could not be obtained in this case suggests that the acceptor-binding site may only be available after binding of the donor substrate, which is consistent with a sequential ordered mechanism determined for other retaining GT-A GTs. Similarly, the overall structure of the apo-form of the UDP-GlcA:galactosylgalactosylxylosylprotein 3--glucuronosyltransferase (GT43) is almost similar to the ternary complex with Mn 2ϩ ⅐UDP⅐N-acetyllactosamine (r.m.s.d. of 0.38 Å) (56). Interestingly, only the side chains of three basic residues (Lys 153 , Arg 165 , and Arg 313 ) undergo conformational changes upon UDP binding. In contrast, a large conformational change induced after donor and/or acceptor binding has been observed in several GT-A enzymes, including the UDP-Gal:-galactoside ␣-1,3-galactosyltransferase (GT6) (20), the UDP-Gal: -GlcNAc -1,4-galactosyltransferase T1 (GT7) (23), glycogenin (GT8) (57), and the ␣1,4-N-acetylhexosaminyltransferase EXTL2 (GT64) (58).
Concluding Remarks-As highlighted by the structural and biophysical evidence presented herein, the intrinsic flexibility of an accurate region of the active site of GpgS plays a central role during the donor and acceptor substrates recognition that seems to be of significant relevance during the glycosyl transfer. The crystal structure of the apo-form of MtGpgS revealed two distinct conformational states of the protein characterized by a highly dynamic nature of the L loop. The conformation of the L loop in the binary Mn 2ϩ ⅐UDP-Glc and the ternary Mn 2ϩ ⅐UDP⅐PGA complexes displays minor structural rearrangements when compared with the L A state in the free enzyme. Essentially the very few differences involve the lateral chains of two basic residues, Arg 256 and Arg 261 , suggesting that the conformation necessary for catalysis is an intrinsic property of GpgS. The crystallographic snapshots of GpgS during its reaction cycle and calorimetric data strongly support a prominent influence of the nucleotide ␣and -phosphates in substrate binding and catalysis. Whereas the ␣-phosphate is stabilized by a stacking interaction with the conserved Tyr 229 , the -phosphate seems to alternate between two conformations, which likely correspond to the pre-and post-sugar transfer states. Intriguingly, GpgS shows uncommon metal ion preferences for a GT-A enzyme with a broad range of metal cations capable of assisting catalysis.
Recent reports show a remarkable role for protein conformational dynamics in substrate recognition and product release and enzyme catalysis (59 -64). These conformational dynamics seem to act locally and allosterically to modulate the affinity and selectivity of enzymes, signaling proteins, and receptors (65). The current scenario shows the conformational dynamics of the L loop of GpgS as a major determinant in metal/substrate association and catalysis and opens the debate of whether a "conformational selection rather than an "induced-fit" mechanism might govern substrate recognition. Nevertheless, further studies would be required to confirm this hypothesis and the occurrence of a similar model in other GT-A glycosyltransferases.
|
2018-04-03T04:46:33.252Z
|
2012-05-25T00:00:00.000
|
{
"year": 2012,
"sha1": "d25a932098071395a57159d7303da8aa8acca527",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/287/29/24649.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "e086a461b873bdf6b4027aa1a19878585061347b",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
2113529
|
pes2o/s2orc
|
v3-fos-license
|
Comparison of the performance of screening test for gestational diabetes in singleton versus twin pregnancies
Objective We compared the performance of the 50-g glucose challenge test (GCT) in singleton versus twin pregnancies and investigated the need for adjusting GCT cutoff values for gestational diabetes mellitus (GDM) in twin pregnancies among Korean women. Methods A retrospective chart review was performed in women who underwent GCT at 24 to 28 weeks' gestation and delivered in our department between January 2000 and April 2008. GCT performance was compared between singleton and twin pregnancies for an ideal cutoff value of the GCT for GDM screening. Results GCT results were available in 3,578 pregnancies (3,435 singleton and 143 twin pregnancies). The mean GCT value was higher in the twin group than in the singleton group. Women in the twin group had a higher mean GCT value (P=0.043) and a higher incidence of GCT ≥130, ≥135, and ≥140 mg/dL (P=0.014, 0.005, and 0.015, respectively). The false positive rate for GCT ≥140 mg/dL was significantly higher in the twin than in the singleton group (P=0.042). The optimal GCT screening cutoff value appears to be ≥145 mg/dL in twin pregnancies. Conclusion Our study demonstrates that the GCT is associated with a higher false positive rate in twin rather than singleton pregnancies. This study suggests we should consider adjusting the GCT cutoff value for GDM in Korean twin pregnancies.
Introduction
Gestational diabetes mellitus (GDM) frequently affects women during pregnancy [1,2]. According to recommendations by the American College of Obstetrics and Gynecology (ACOG), a universal screening for GDM using a 50-g glucose challenge test (GCT) is advocated. Women with abnormal GCT (serum glucose levels above a threshold of 130 to 140 mg/dL) should undergo the definitive diagnostic 3-hour 100-g oral glucosetolerance test (OGTT) [1].
Overall, physiologic changes are amplified in multiple gestation pregnancies compared with singleton pregnancies [3,4]. Multiple gestations, such as twin pregnancies, have larger placentas, which result in higher hormone levels. Higher levels of estrogen, placental lactogen, and progesterone affect insulin sensitivity [4][5][6]. Another important contributor to insulin resistance during pregnancy is weight gain, which has also been shown to be higher in twin pregnancies [7,8]. Of these physiologic changes, those that affect glucose control are significantly influenced by differences in gestation number.
Considering these differences between twin and singleton pregnancies, it has been hypothesized that the accuracy and characteristics of the GCT may differ between these two groups [9,10]. Although the incidence of twin pregnancies has increased in recent years, studies related to twin pregnancy and GDM are clearly lacking.
Thus, this study aimed to compare the performance of the GCT in twin versus singleton pregnancies and to evaluate the ideal GCT cutoff value in twin pregnancies among Korean women.
Materials and methods
A retrospective chart review was performed in pregnant women who delivered between January 2000 and April 2008 at the obstetrics and gynecology department in Severance Hospital, Yonsei University College of Medicine. The study group included all women who underwent a GCT at 24 to 28 weeks' gestation. Maternal age, parity, body mass index, weight gain during pregnancy, gestational age at delivery, family history of diabetes mellitus, and neonatal outcomes were reviewed using medical records. Maternal age was categorized into <35 and ≥35 years. Pregnancies complicated by any of the following conditions were excluded from the study: pregestational diabetes mellitus, delivery at <24 weeks gestational age, and birth weight <500 g. Pregestational diabetes mellitus was defined as glucose intolerance that occurred before pregnancy.
Diagnosis of GDM was based on a two-step strategy. Patients underwent the GCT at 24 to 28 weeks' gestation. During the study period, all patients with a GCT result ≥140 mg/dL underwent OGTT. GDM was diagnosed if two of four values on the OGTT were abnormal, based on Carpenter-Coustan cutoffs [11].
Data analysis was performed using IBM SPSS ver. 20.0 (IBM Corp., Armonk, NY, USA). The Student t-test was used to compare continuous variables between the groups, and the chi-square test was used for categorical variables. A P-value <0.05 was considered statistically significant.
Results
The results of the GCT were available in 3,578 pregnancies, of which 3,435 were singleton and 143 were twin. The characteristics of the singleton and twin groups are presented in Table 1. The twin group gained more weight during pregnancy, had a higher rate of nulliparity, and delivered earlier in comparison with the singleton group. Although the rate of advanced maternal age in the twin group was close to 30%, it was not significantly different from that of the singleton group.
The characteristics of the GCT results in women with singleton and twin pregnancies are presented in Table 2. Twin pregnancies were associated with a significantly higher mean GCT result (123.5±27.7 vs. 117.0±38.1 mg/dL, P=0.043). The overall rate of GDM, diagnosed according to the standard set by ACOG, was similar between the twin and singleton pregnancies (7.7% vs. 8.3%, P=0.79). However, the false positive rate for GCT was considerably higher in the twin pregnancy group compared with the singleton pregnancy group, when the cutoff value was defined as 140 mg/dL.
In addition to the overall result averages, GCT results were compared according to three different GCT cutoffs (≥130, ≥135, and ≥140 mg/dL) between the twin and singleton groups. Using a GCT cutoff of ≥130 mg/dL, 52 of 143 (37.1%) pregnancies were screened to be positive. However, using a GCT cutoff of ≥135 and ≥140 mg/dL, 46 (32.9%) and 37 (26.4%) pregnancies were screened to be positive, respectively.
The diagnostic characteristics of the GCT based on the different cutoffs are shown in Table 3. All three GCT cutoffs in twin pregnancies had a high sensitivity. A GCT cutoff of ≥140 mg/dL had a higher specificity and had lower false positive rate for GCT. The receiver operating characteristic (ROC) curve considering singleton pregnancies is shown in Fig. 1. The area under ROC curve of GCT was 0.920 (95% confidence interval, 0.898 to 0.943; P<0.001). It was observed that a GCT cutoff value ≥139 mg/dL had a sensitivity of 87.1% and specificity of 86.3% in diagnosing GDM. The ROC curve reflecting twin pregnancies is shown in Fig. 2. The area under ROC curve of GCT in twin pregnancies was 0.958 (95% confidence interval, 0.917 to 0.999; P<0.001). It was observed that a GCT cutoff value ≥145 mg/dL had a sensitivity of 88.9% and specificity of 86.3% in diagnosing GDM.
Neonatal outcomes for the twin group were compared according to the three different GCT cutoffs (≥130, ≥135, and ≥140 mg/dL). As shown in Table 4, there were no differences between the different GCT values with regard to the rate of low Apgar score at 5 minutes, respiratory morbidity, neonatal hypoglycemia, admission to the neonatal intensive-care unit, or perinatal death. Women with a GCT ≥140 mg/dL but normal oral glucose-tolerance test results (no abnormal values). Women with a GCT ≥140 mg/dL but normal oral glucose-tolerance test results (no abnormal values).
Discussion
Although correlations of twin pregnancy with GDM risk have been investigated in a number of studies, the results have been considered narrow and controversial. There have been reports concluding that GDM incidence in twin pregnancies does not differ from that in singleton pregnancies [9,12,13]. However, some studies demonstrated that multiple gestations have a higher incidence of GDM [10,14]. This discrepancy between studies could be explained by differences in the study design and the cutoff values for diagnosing GDM. Therefore, the Carpenter-Coustan criterion, which has high sensitivity in the diagnosis of GDM, was selected for our study [11]. Screening thresholds for GCT have varied from 130 to 140 mg/dL, with varying sensitivities and specificities reported. There are no randomized trials to support a clear benefit to one cutoff compared with others. In a recent review, sensitivity for a threshold of 140 mg/dL ranged from 75% to 83 %. Sensitivity estimates for a GCT threshold of 135 mg/dL improved only slightly to 78 to 85 %. And, specificity dropped from a range of 72% to 85% for 140 mg/dL to 65% to 81% for a threshold of 135 mg/dL [15]. In other analysis, sensitivities were only marginally improved when using lower thresholds (130 and 135 mg/dL) [16]. ACOG recommends using either 135 or 140 mg/dL as the GCT threshold [1]. In the absence of clear evidence supporting a cutoff of 135 versus 140 mg/dL for the GCT, it is suggested that health care providers select one of these as a single consistent cutoff for their practice, with factors such as community prevalence rates of GDM considered in that decision. So, we compared the results for three different GCT threshold (130, 135, and 140 mg/dL) in singleton and twin pregnancies.
Women with twin pregnancy tend to gain more weight during pregnancy and tend to be older compared to women with singleton pregnancy. These two variables have been assessed as high risk factors for GDM in some studies [17][18][19][20]. Additionally, as mentioned previously, large placental mass is associated with high levels of hormone that influence insulin sensitivity [5,6]. This supports the hypothesis that twin pregnancy is associated with higher GDM risk. In our study, women with twin pregnancy had significantly greater gestational weight gain, but the proportion of women aged >35 years in twin pregnancies was not significantly different from that of women with singleton pregnancy. More women with multiple gestations delivered earlier than did women with singletons; however, this might be owing to a higher incidence of preterm labor in multiple gestations rather than owing to GDM [21]. In this study, twin pregnancies had relatively higher average GCT results than did singleton pregnancies. In addition, three different GCT cutoffs (≥130, ≥135, and ≥140 mg/dL) demonstrated greater GCT results exceeding the cutoffs in twin than in singleton pregnancies. However, the actual diagnosis of GDM with OGTT had no difference in the incidence of GDM between the two groups, with more false positives occurring in the twin group than in the singleton pregnancy group.
Yogev et al. [22] recently studied the characteristics of the GCT in twin versus singleton pregnancies. Although they found that the GCT results were significantly higher in twin pregnancies than in singleton pregnancies, their population of twin pregnancies had lower GCT values than did ours. For example, their mean GCT value was 104.7±28.5 mg/dL compared with 123.5±27.7 mg/dL in our study. In addition, the rates of GCT >130 (20.2% vs. 37.1%) and >140 mg/dL (13.8% vs. 26.4%) were lower in their cohort than in those of the present study. Moreover, these findings were independently associated with twin gestations even after adjusting for potential confounders . Their findings suggest that these physiological differences between singleton and twin pregnancies may lead to an only mild form of glucose intolerance that does not translate to a difference in the rate of GDM as defined by an abnormal OGTT. Another possible reason for the lack of difference in the rate of GDM in twin pregnancies may be that the diabetogenic effect by a twin gestationrelated protective effect from GDM, which may be attributed to the increased demand of glucose due to the presence of multiple fetuses and to the higher basal metabolic rate in twin pregnancies [23].
As higher false positive GCT results were reported in singleton and twin pregnancies, investigations into the ideal cutoff value for GCT were performed. In our study, ROC curve analysis showed that the area under the ROC curve for GCT in twin pregnancy was 0.958 (P<0.001) with 145 mg/dL as the cutoff value. This resulted in 88.9% sensitivity and 86.3% specificity. Rebarber et al. [24] reported 100% sensitivity but only 28.6% test positive rate when they set the optimal GCT cutoff at ≥135 mg/dL. High GCT cutoff values in our studies, compared with other studies, are thought to be most likely due to our sample group consisting only of Koreans. While GDM is in- Values are presented as mean±standard deviation or number (%).
Vol. 58, No. 6, 2015 creasingly common worldwide, largely owing to the obesity epidemic, its frequency is relatively low in Korean women. These differences may be attributed to both genetic and environmental factors [25][26][27]. When a woman is diagnosed with GDM, maternal complications and neonatal outcomes should be monitored closely. Some reviews point out that the cases with false positive GCT results had poorer neonatal outcomes, such as glucose intolerance, than did the cases without false positive GCT results [28,29]. Comparative analysis of neonatal outcomes based on different GCT cutoff levels of 130, 135, and 140 mg/dL was done in the present study. The two groups below and above a GCT cutoff level of 130 mg/dL showed significant differences in birth weight; however, other complications were not noted. Comparing neonatal outcomes in twin pregnancies by GCT levels did not show any statistically significant results. This could be because we only included neonatal outcomes that are considered highly critical. This is the first study comparing the performance of the GCT in twin versus singleton pregnancies and evaluating the ideal GCT cutoff value among Korean women. Our results could be utilized as references for comparisons with the results of other studies performed abroad. The study by Yogev et al. [22], which was performed in Israel, had only a 15% advanced maternal age rate, while another study by Rebarber et al. [24], which was performed in the US and mostly targeted Caucasians, reported an advanced maternal age rate at nearly 50%. This difference could be due to geographic discrepancies, because GDM is also associated with environmental factors, such as lifestyle and diet, and genetic factors, such as race.
In this study, the GCT cutoff value was above 139 mg/dL in singleton pregnancies which showed no difference compared to previous studies. But the GCT cutoff value was above 145 mg/dL in twin pregnancies which is higher than the currently accepted diagnosis value. As GDM is often asymptomatic, screening is necessary to identify women with GDM. High sensitivity is often warranted in screening tests, as a falsenegative test result (in which disease remains undiscovered) is considered to be more harmful than a false-positive test result (in which a reference test is unnecessarily performed). Our study has a small sample size, with only 143 cases of twin pregnancy, which results in a low estimated accuracy. If the GCT cutoff value extrapolated from this study is used for diagnosis in twin pregnancies, there would be a higher false negative rate so diagnosis rates would fall. A study with a larger sample size is required in the future to establish a more accurate GCT cutoff value in Koreans. In addition, long-term neonatal outcomes should be investigated to obtain a more appropriate cutoff value.
In conclusion, our study suggests that the GCT is associated with a higher false positive rate in twin pregnancies than in singleton pregnancies. More research is needed in twin pregnancies to establish the optimal GDM screening and treatment paradigm in twin pregnancies.
|
2016-05-12T22:15:10.714Z
|
2015-11-01T00:00:00.000
|
{
"year": 2015,
"sha1": "0da2561b3470a8a467a913a55259830362b21a2b",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5468/ogs.2015.58.6.439",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "17877b2c0a77034676d8f82777576ead74dd5607",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
226850718
|
pes2o/s2orc
|
v3-fos-license
|
Protein complexes and neighborhoods driving autophagy
ABSTRACT Autophagy summarizes evolutionarily conserved, intracellular degradation processes targeting cytoplasmic material for lysosomal degradation. These encompass constitutive processes as well as stress responses, which are often found dysregulated in diseases. Autophagy pathways help in the clearance of damaged organelles, protein aggregates and macromolecules, mediating their recycling and maintaining cellular homeostasis. Protein-protein interaction networks contribute to autophagosome biogenesis, substrate loading, vesicular trafficking and fusion, protein translocations across membranes and degradation in lysosomes. Hypothesis-free proteomic approaches tremendously helped in the functional characterization of protein-protein interactions to uncover molecular mechanisms regulating autophagy. In this review, we elaborate on the importance of understanding protein-protein-interactions of varying affinities and on the strengths of mass spectrometry-based proteomic approaches to study these, generating new mechanistic insights into autophagy regulation. We discuss in detail affinity purification approaches and recent developments in proximity labeling coupled to mass spectrometry, which uncovered molecular principles of autophagy mechanisms. Abbreviations: AMPK: AMP-activated protein kinase; AP-MS: affinity purification-mass spectrometry; APEX2: ascorbate peroxidase-2; ATG: autophagy related; BioID: proximity-dependent biotin identification; ER: endoplasmic reticulum; GFP: green fluorescent protein; iTRAQ: isobaric tag for relative and absolute quantification; MS: mass spectrometry; PCA: protein-fragment complementation assay; PL-MS: proximity labeling-mass spectrometry; PtdIns3P: phosphatidylinositol-3-phosphate; PTM: posttranslational modification; PUP-IT: pupylation-based interaction tagging; RFP: red fluorescent protein; SILAC: stable isotope labeling by amino acids in cell culture; TAP: tandem affinity purification; TMT: tandem mass tag.
Introduction
The majority of proteins are degraded via two pathways: the ubiquitin proteasome system and autophagy. In comparison to the specific degradation of the ubiquitin proteasome system, autophagy is thought to degrade substrates nonselectively and selectively due to cargo receptors [1]. Thus, autophagy summarizes constitutive lysosomal degradation pathways and stimulus-dependent stress responses to preserve cellular homeostasis. Macromolecules, protein aggregates and organelles are degraded and recycled to accomplish the energy demands of the cell and to restock basic building blocks for anabolic processes. Autophagy acts in general in a cytoprotective manner and its dysregulation has been linked to various diseases such as neurodegeneration, cancer and metabolic syndrome [2]. Autophagy has been broadly categorized into three different subtypes: macroautophagy, microautophagy and chaperone-mediated autophagy, which differ in the mode of transport and delivery of substrates for lysosomal degradation. Microautophagy describes an invagination of endosomal or lysosomal membranes that engulf cytoplasmic substrates for degradation [3]. Chaperone-mediated autophagy is probably the most selective subtype of autophagy. Recognition and unfolding of substrates carrying KFERQ-like motifs by cytosolic chaperone HSPA8/HSC70 (heat shock protein family A [Hsp70] member 8) determines this selectivity. Unfolded proteins are translocated across lysosomal membranes for degradation by LAMP2A, a lysosomal membrane protein [4,5]. In a process termed endosomal microautophagy, substrates carrying the KFERQ motif are selectively recognized by cytosolic HSPA8 chaperones and targeted for degradation to late endosomes instead of the lysosomes via binding to phosphatidylserine on endosomal membranes [6]. Macroautophagy, hereafter referred to as autophagy, is an intracellular degradation pathway that starts with the formation of a double-membraned vesicle called autophagosome, which enwraps cytoplasm targeted for lysosomal degradation. Autophagosome biogenesis consists of distinct hierarchical phases starting with the formation of a cup-shaped membrane called phagophore, followed by elongation of the phagophore, maturation, closure, fusion with endosomes, and finally with lysosomes leading to the formation of autolysosomes [7,8]. Autophagy has been shown to play a critical role in selectively clearing damaged organelles (mitochondria, peroxisomes etc.), infectious agents and protein aggregates [2].
Over the last few decades, more than 40 autophagy-related (ATG) genes/proteins have been reported in yeast. Most of these are conserved between yeast and mammals and have crucial roles in the progress of autophagy [9]. The canonical core pathway is regulated by six conserved protein complexes [10]: (i) the ULK1 (unc-51 like autophagy activating kinase 1)-ULK2 complex, which is critical for autophagy initiation; (ii) the ATG9 system, which provides membranes for autophagosome generation; (iii) the class III phosphatidylinositol 3-kinase complex (PtdIns3K), which phosphorylates the lipid phosphatidylinositol (PtdIns) generating PtdIns-3-phosphate (PtdIns3P) that serves as binding site for protein recruitment; (iv) the ATG2-WIPI complex, which is important for membrane expansion, (v) the ATG12 and (vi) Atg8-family proteins ubiquitin-like conjugation systems. The latter being important for phagophore expansion and cargo recruitment [11]. We will discuss the roles of these complexes in more detail in the following paragraphs ( Figure 1).
Protein complexes regulating autophagy
Initiation of autophagy is executed by two kinase complexes, the ULK1 complex, the mammalian homolog of yeast Atg1 complex, and secondly, the PtdIns3K complex [12]. ULK1 (or its homolog ULK2), a Ser/Thr kinase, phosphorylates itself, and its complex members ATG13, ATG101 and RB1CC1/ FIP200. Together, they form the active tetrameric autophagy initiation complex, e.g., in response to starvation [13,14]. Endoplasmic reticulum (ER) recruitment of this complex happens through direct interaction of ULK1 and RB1CC1 via their FFAT motifs with ER membrane proteins VAPA (VAMP associated protein A)-VAPB, forming active phagophore initiation sites on the ER membrane. Additionally, VAPs interact with WD repeat-containing protein WIPI2; a membrane-associated protein forming a tethering complex and strengthening ER-phagophore contact, making ER an essential organelle for autophagosome biogenesis [15]. ATG9, a multi-membrane spanning protein, predominantly localizes at the trans-Golgi network and endosomes. It is postulated to organize a lipid source for phagophore generation and expansion due to its colocalization to the phagophore membrane [16]. ATG9-containing vesicles fuse with ATG16L1-containing vesicles to progressively promote autophagosome biogenesis [17] (Figure 1).
There are two distinct PtdIns3K complexes. Complex I promotes autophagy and consists of the lipid kinase PIK3C3/VPS34, the adaptor protein PIK3R4/VPS15, ATG14, which helps in ER membrane tethering, the stabilizing subunit BECN1/Beclin-1, and the accessory subunit NRBF2. In complex II, UVRAG replaces ATG14 and Figure 1. Schematic representation of autophagosome biogenesis and maturation. MTORC1 inhibits autophagy via its inhibitory phosphorylations on ULK1 and ATG13 under nutrient-rich conditions. Under stress, autophagy is activated via the formation of an active tetrameric ULK1 initiation complex to promote autophagosome nucleation. PtdIns3K complex I catalyzes the production of PtdIns3Ps, which contributes to phagophore nucleation and omegasome formation. PtdIns3P-binding proteins like ZFYVE1/DFCP1 and WIPIs decorate the omegasomes. WIPI2 interaction with ATG16L1 mediates the recruitment of the ATG12-ATG5-ATG16L1 complex for the conjugation of LC3-I to PE and phagophore expansion and maturation. Additionally, lipid sources from ATG9 vesicles, ATG2A/B recruited by WIPIs and from cell membranes collectively help in expanding the phagophore membrane. Double-membraned autophagosomes fuse with lysosomes to form autolysosomes and their content is degraded by lysosomal hydrolases.
NRBF2. Complex II takes over functions in later stages of autophagy [12]. ULK1 helps recruiting the PtdIns3K complex I to active initiation sites by phosphorylating the following proteins: (a) AMBRA1 on amino acid residues Ser465 and Ser635 to release the tethered PIK3C3-BECN1 complex toward the ER from AMBRA1, which is bound to microtubules via DYNLL1 and DYNLL2 (dynein light chain LC8type 1/2) [18], (b) PIK3C3 on Ser249 [18,19], (c) BECN1 on Ser15 (human)/Ser14 (murine) stabilizing the complex [20], and (d) ATG14 on Ser29 activating autophagy by increasing PIK3C3 activity [21]. The PtdIns3K complex increases production of PtdIns3P at phagophore formation sites, where PtdIns3P binding protein ZFYVE1/DFCP1 accumulates, leading to crescent-shaped growing phagophores called omegasomes [22]. These structures provide strong platforms for the binding of WIPI1 and WIPI2 involved in nascent autophagosome biogenesis [23]. WDR45B/WIPI3 and WDR45/ WIPI4 positively affect signaling events up-and downstream of PtdIns3P production, controlling the size of autophagosomes [24]. In addition, human homologs of yeast lipid transfer protein Atg2 (ATG2A and ATG2B) decorate phagophores via their interaction with WIPIs [24]. ATG2s are involved in maintaining a membrane tether or contact site between ER and phagophores leading to expansion of the growing phagophores by direct lipid transfer [25]. Later, they help in autophagosome membrane closure [26] ( Figure 1). Ubiquitin-like modifiers (UBLs) play important roles in autophagosomal cargo recruitment and maturation. ATG12 and the family of Atg8 homologs are characterized by ubiquitin folds as UBLs. As ubiquitin, respective proteins are activated and transferred to target proteins by sets of sequential reactions [27]. ATG7, a ubiquitin E1-like enzyme, activates the C-terminal glycine of ATG12 and hands it over to ATG10, an E2-like enzyme. ATG10 transfers ATG12 to its target ATG5, which in turn binds to ATG16L1 to form the ATG12-ATG5-ATG16L1 trimeric complex [27][28][29]. Atg8 homologs are UBLs classified into two subfamilies: the LC3 family consisting of MAP1LC3A, LC3B, LC3B2 and LC3C and the GABARAP family consisting of GABARAP, GABARAPL1, and GABARAPL2 (hereafter collectively referred to as LC3s) [30]. LC3s are anchored in membranes by conjugation to phosphatidylethanolamine (PE). First, cleavage at the C terminus by ATG4B proteases exposes a terminal glycine residue and activates LC3 (LC3-I) to get bound by ATG7 [31]. The E2-like enzyme ATG3 and the ATG12-ATG5-ATG16L1 complex, which has E3 ligase activity, transfer LC3-I (cytosolic) to PE (LC3-II). This happens at active sites of autophagosome biogenesis due to recruitment of the ATG12-ATG5-ATG16L1 complex via the interaction between ATG16L1 and WIPI2 [29]. LC3s are predominantly involved in the selective capture of substrates for degradation. Autophagy receptors bind to LC3s via so-called LC3interacting regions (LIRs) [32]. However, LC3s also help in the maturation and closure of nascent autophagosomes [33]. In general, autophagosomes fuse with endosomes to form amphisomes before they finally fuse with lysosomes to form autolysosomes for degradation of their contents [34]. Fusion of mature autophagosomes with lysosomes is carried out in a concerted manner by the action of RAB and SNARE proteins and membrane tethering complexes, most importantly by RAB7A, STX17, SNAP29, VAMP8 and the HOPS complex [35].
Delicate balance between anabolism and catabolism is essential for the survival of cells. Anabolism is positively regulated by the master regulator of cell growth MTORC1, a conserved Ser/Thr kinase complex consisting of the kinase MTOR (mechanistic target of rapamycin kinase), and the regulatory subunits MLST8 and RPTOR. MTORC2, composed of MTOR, RICTOR and MLST8, is thought to positively modulate cell proliferation [36]. MTORC1 is well known to inhibit catabolic pathways, including autophagy where inhibitory phosphorylations of Ser637 and Ser757 on ULK1 and Ser258 on ATG13 prevent them from forming the active autophagy initiation complex. Nutrient deprivation or rapamycin treatment inhibit MTORC1 activity, thus enabling the activation of autophagy via dephosphorylation of MTORC1 sites and subsequent activating phosphorylations of, for example, ATG13 by ULK1. Under starvation conditions, dephosphorylation of MTORC1 target sites on ULK1 is carried out by the heterotrimeric PP2A protein phosphatase, consisting of the catalytic subunit PPP2CA, the scaffolding subunit PPP2R1B/PRL65, and the regulatory subunit PPP2R2A/B55alpha [37]. Additionally, the energy demand activates AMP-activated protein kinase (AMPK), an AMP: ATP ratio sensor. AMPK activates the ULK1 complex by phosphorylating Ser317, Ser659, and Ser777 on ULK1 itself and Ser224 on ATG13. Active ULK1 catalyzes autophosphorylations on different sites, among others Thr180, Ser1042, and Thr1046 [14].
The dynamic organization of sequential events in autophagy is highly regulated by protein-protein interactions (PPIs) and post-translational modifications (PTMs) happening at the right spatial and temporal resolution. To study PPIs in vitro and in vivo, different techniques have been used [38]. The strength of the interactions, either strong/permanent or weak/ transient, determine the efficient usage of a specific method. In order to obtain a global, unbiased picture of PPIs, mass spectrometry (MS)-based proteomic approaches studying PPIs and neighborhoods have gained much attention lately [39]. New developments allow the study of strong and weak interactions, supporting the construction of hierarchical networks. Co-immunoprecipitation methods like tandem affinity purification (TAP) enrich strong interactions. To capture weak interactions, crosslinking coupled to affinity purification (AP), proximity labeling (PL) and bio-orthogonal chemistries coupled to MS are now widely used. In the following chapters, we summarize the different MS-based methods to identify PPIs of different strengths and highlight their usage to study the regulation of autophagy.
Protein-protein interactions and their regulation
Dynamic PPIs regulate virtually all biological processes, e.g., metabolism, DNA replication, protein synthesis, as well as autophagy. Interactions can range from simple binary interactions to complex multimeric interactomes and are essential to maintain efficient functioning of cells thereby balancing cell physiology. Hence, understanding these interactions is crucial for revealing molecular functions and diseaseassociated mechanisms in order to design potent therapeutics [40]. Based on stability, PPIs can be classified into obligate, when binding partners are not stable by themselves, and nonobligate interactions of otherwise stable protomers. Based on binding affinity, i.e. the temporal profile of interactions, PPIs can be classified into permanent and transient interactions [41]. However, often PPIs do not fall into a static classification and a continuum exists between quasi-permanent and transient interactions. As a given protein can also interact with different proteins and form several distinct complexes in vivo, the discrimination between obligate and nonobligate interactions may not be straight forward and depend on a given (patho)physiological condition [42,43]. In addition, permanent interactions often involve proteins that are unstable as monomers and that function in complexes. Thus, due to experimental limitations and in order to simplify a classification based on in vivo observations, only obligate PPIs might be regarded as quasi-permanent ( Figure 2). In contrast, structurally stable proteins that interact with different proteins by undergoing association and dissociation reactions form transient/non-obligate PPIs, i.e. time-limited interactions. Based on affinity and temporal quality, transient interactions can be further classified as strong and weak transient interactions. Moreover, protein complexes are classified based on composition as homo-oligomeric (having identical protein members) and hetero-oligomeric complexes (having non-identical protein members).
Homo-oligomers generally form highly stable, permanent structures and are symmetric in nature. They also form scaffolds for stable interactions with other macromolecules. For example, yeast Atg7 is an E1-like enzyme that forms a functionally-active homo-dimeric complex (K d = 1 nM). As described above, ATG7 plays a major role in the two ubiquitin-like conjugation pathways involved in autophagy: ATG12-ATG5-ATG16L1 and LC3-PE generation [44]. TUBA (tubulin alpha) and TUBB (tubulin beta) form a quasipermanent heterodimer (K d ≈ 84 nM [54-123 nM]), which polymerizes to form dynamic microtubules. These help in autophagosome formation, motility and fusion with lysosomes [43,45,46]. Another example of a quasi-permanent, homo-oligomeric complex is CASTOR1 (cytosolic arginine sensor of MTORC1), a protein that forms a stable homodimer and acts as an amino acid sensor by binding to arginine [47].
In contrast, transient complexes associate and dissociate based on physiological conditions. Proteins can function either independently or by forming a complex [42,48,49]. For instance, ATG12 is activated by ATG3 forming a strong, transient hetero-oligomeric complex (K d = 50 nM) [50]. BECN1, a protein that forms a strong transient homodimer, remains inactive in nutrient-rich conditions. However, upon a physiological stress stimulus leading to activation of autophagy, a BECN1 monomer can interact strongly with ATG14 (K d = 3.2 µM) to form an active hetero-oligomeric PtdIns3K class-III complex [51]. Weak transient interactions are continuously broken and formed and difficult to capture by Figure 2. Classification of protein-protein interactions based on stability and binding affinity. Affinity is inversely proportional to the dissociation constant K d . Based on stability and binding affinity, PPIs can be classified into quasi-permanent/obligate and transient/non-obligate interactions. In contrast to permanent complexes, transient complexes are dynamic with proteins associating and dissociating. Transient complexes can be subclassified as strong and weak based on affinity and temporal profile of interactions. Moreover, PPIs can be classified based on composition as homo-oligomers with identical proteins interacting and hetero-oligomers with non-identical chains interacting. Often, interactions change due to physiological conditions rather reflecting a continuous than a "static" classification.
biochemical/analytical approaches. For example, cadherins are adhesion proteins, which mediate cell-cell communication.
Expression of different cadherins in distinct cell types helps in spatial positioning of respective cells during development and maintaining their 3-D architecture, crucial for tissue function. In order to maintain specific cell-cell communication, CDH1/E-cadherin forms weak homodimers (K d = 160 µM). CDH1 forms also hetero-oligomers with CDH2/ N-cadherin, which interact stronger, mediating contacts of different cell types. Autophagy ensures optimal cell growth via regulating the abundance of CDH1, a protein involved in CTNNB1 (catenin beta 1)-WNT signaling [52][53][54] (Figure 2). SNXs (sorting nexins), a class of peripheral membrane proteins important for endosomal sorting, can oligomerize via their Bin/Amphiphysin/Rvs (BAR) domains to perform vesicle to tubular membrane remodeling. SNX4 forms weak homodimers to perform tubule formation of membranes.
Under certain physiological conditions such as autophagy, SNX4 can form a hetero-oligomeric complex with SNX7 to mediate autophagosome assembly and maturation [55][56][57]. Overall, the above examples highlight the variety of affinities of protein-protein interactions important for autophagy. Structural features of proteins, such as motifs and domains, modulate an array of interactions. In addition, differential interactions can be modulated by factors such as pH, compartmentalization, local concentration of ions, and covalent modifications like phosphorylation and ubiquitination. The above features collectively impart specificity to PPIs and mediate oligomerization of proteins meant to interact in a crowded environment [42,58].
Mass spectrometry-based proteomic approaches to study protein-protein interactions
Due to the importance of PPIs, numerous cell biological and biochemical methods exist to study them in vitro and in vivo. This review cannot comprehensively summarize all approaches in detail (e.g., see [59] for more technical details), but we would like to list important, well-established examples of in vitro and in vivo methods, focusing on MS-based approaches. In vitro methods are techniques carried out in a controlled environment after cell lysis and allow the analysis of binary as well as oligomeric interactions. Examples are affinity purifications, size-exclusion chromatography, protein arrays, enzyme-linked immunosorbent assay, and biophysical methods such as isothermal calorimetry, microscale thermophoresis, and surface plasmon resonance. Powerful MS-based in vitro methods are: AP-MS, chemical cross-linking MS (XL-MS), co-fractionation coupled to MS (CoFrac-MS) and PL-MS methods. In vivo methods are techniques carried out in a living organism and commonly address binary interactions [60]. Examples are yeast two-hybrid analyses or image-based analyses, such as bimolecular fluorescence complementation, Förster resonance energy transfer, bioluminescence resonance energy transfer, and proximity ligation assays [61].
MS-based proteomic studies are widely used to understand mechanisms in biological pathways by analyzing PPIs. Technological advancements in terms of MSinstrumentation, liquid chromatography (LC), sample preparation methods, techniques used to enrich interactomes, high throughput analyses, and bioinformatics data interpretation have greatly increased the usage of MS-based approaches. The ultimate goal of proteomic studies is to comprehensively characterize the proteome, protein expression levels, PTMs, and build functionally relevant protein networks. Based on the nature of protein identifications and characterizations, two approaches are discriminated: top-down and bottom-up proteomics [62].
In top-down proteomics, intact proteins are ionized, followed by fragmentation and measurement in a highresolution mass spectrometer. This approach provides a complete characterization of proteoforms including PTMs. The bottom-up proteomic approach involves chemical or enzymatic digestion of proteins into peptides prior to ionization and MS measurement. This method is easily automatable and generally helps in the identification of large numbers of proteins compared to top-down approaches, which require a more sophisticated front-end biochemistry to generate samples for MS analyses [63].
Different strategies in MS sample acquisition exist: datadependent acquisition (DDA), data-independent acquisition (DIA), both unbiased discovery methods, and directed/targeted proteomics approaches. These strategies differ in the depth or coverage of protein identifications. DDA is based on a selection and identification of the most abundant ions in the sample. In DIA, a relatively new method, ions are not selected. Entire groups of ions are measured, generating a "digital map" of the entire sample [39,[64][65][66][67][68][69][70]. Whereas DIA approaches lead to less missing values across experiments, data interpretation is more challenging. Targeted proteomics approaches deal with the specific isolation and measurement of predefined ions, making it more reproducible compared to discovery proteomics experiments.
The aforementioned strategies have advantages and disadvantages, leading to trade-offs in sensitivity, specificity, reproducibility, accuracy, and dynamic range (for more information on these strategies, see [39,[64][65][66][67][68][69][70]). In general, different tools or methods are available for every step in the experiment starting from sample isolation and preparation, MS data acquisition, data analysis, and statistical interpretation. Given the applicability of MS in a wide range of biological questions, choosing the right combination of tools is essential for answering the complex biochemical questions underlying autophagy regulation.
Quantitative proteomics
As signal intensities recorded by MS depend on ionization properties of respective biomolecules, MS is not a truly quantitative analytical approach. Therefore, quantitative proteomics strategies have been developed that allow a systematic quantification of samples revealing changes between measured proteomes. Quantitative information is key to characterize molecular pathways both at protein and PTM level. Quantification can be performed in two ways, either absolute or relative. Absolute quantification is possible by comparing signal intensities of biomolecules with signals of known amounts of respective synthetic/purified standard substances.
Relative quantification is performed by comparing ion intensities between different samples. Quantification of proteins or peptides can be performed label-free or via stable isotope labeling strategies, the latter being more accurate as samples can be analyzed in single MS experiments. Labeling approaches add tags that differ in mass but do not interfere with ionization properties. Tags can be incorporated at protein or peptide level, enzymatically, chemically, or metabolically [71].
Metabolic labeling summarizes strategies that commonly lead to the in vivo incorporation of 13 C and 15 N isotopelabeled metabolites such as amino acids. In one of the most common approaches, stable isotope labeling by amino acids in cell culture (SILAC), labeled lysine (K) and arginine (R) variants are used. Commonly three different SILAC labels are used to perform relative quantifications between three biological conditions [72,73]. SILAC is majorly used in mammalian cell culture studies. Metabolic labeling is advantageous as it allows mixing of cells prior to biochemical perturbations, thus improving quantification accuracy. Enzymatic labeling is being carried out during MS sample preparation, most commonly used in bottom-up proteomics experiments. For example, the use of 18 O-labeled water allows the incorporation of 18 O to neo-C-termini generated by proteolytic digestion [74,75]. Chemical methods utilize the availability of reactive N-termini and amino acid side chains to incorporate chemical groups. Isobaric mass tags are linked to reactive N-terminal and epsilon amino groups of lysine residues via N-hydroxysuccinimide (NHS) chemistry. These tags are made up of a peptide reactive group, a stable isotope-labeled reporter ion, and a balancer group. Prominent examples are isobaric tags for relative and absolute quantification (iTRAQ) and tandem mass tag (TMT) labeling. These tags have similar chemical structures, differing in positions, numbers, and combinations of 13 C and 15 N isotopes, and can be highly multiplexed [76,77]. Currently, up to 16 samples can be quantified in a single experiment on a routine basis [78,79]. Due to new bioinformatics approaches, label-free quantification based on peptide precursor (DDA) or fragment ion (DIA) intensities is a widely used and cheap method for robust relative quantification [68,80]. The aforementioned methods have advantages and disadvantages pertaining to the nature of the quantification method, sample type, multiplexing capacity, costeffectiveness, quantification accuracy, sensitivity, and proteome coverage [68,81,82].
Affinity purification-mass spectrometry to study protein-protein interactions
AP-MS is one of the widely used methods to identify and characterize PPIs. The principle of this method is to isolate a protein of interest (referred to as bait protein) and to identify/quantify its interacting proteins by MS. For AP, either an antibody recognizing the endogenous protein or the expression of a tagged-protein variant in combination with a tag-recognizing structure are commonly used. Affinity tags or antibodies are captured by other macromolecules covalently attached to a solid matrix such as agarose beads. Macromolecules can be oligonucleotides (DNA/RNA), proteins, such as proteins A and G binding the F c parts of antibodies, peptides and lipids [83][84][85][86] are coupled to beads and are used to capture tagged or endogenous bait proteins (Figure 3). We will briefly explain the two most common AP approaches: (i) antibody-based immunoprecipitation of endogenous proteins and (ii) affinity tags, with critical examples related to mammalian autophagy. In antibody-based purifications, the endogenous bait along with its binding partners can be enriched in their native environment. For instance, autophagy induction by the tumor suppressor protein CDKN2A/ ARF, which is a growth suppressor localizing to the nucleolus under growth conditions and to mitochondria under autophagy-inducing conditions, was discovered by an AP-MS approach. Endogenous CDKN2A and its binding partners were enriched using an anti-CDKN2A antibody prior to MS measurements. Under autophagy conditions, CDKN2A interacted with BCL2L1/Bcl-XL at mitochondria, interfering with the BCL2L1-BECN1 interaction and freeing BECN1 to bind to the PIK3C3 complex to promote autophagosome biogenesis [87]. In general, immunoprecipitations of endogenous proteins coupled to MS generate information on native interactions of baits, though there are shortcomings to this method. Firstly, antibodies can bind to different bait isoforms or splice variants, reducing the resolution of interactions. Secondly, protein interactions can be disrupted due to competition between an antibody and interacting proteins, thereby losing information on such interactions. Thirdly, testing and choosing the right high-quality antibody for the immunoprecipitation can be cost-intensive. Finally, a control antibody might not represent the same background binders even though the antibody is species and isotype-matched [88].
Years of research and tremendous improvements in protein engineering helped in developing an array of affinity tags differing in size, length, and affinity to study interactomes. These affinity tags can be classified as protein-and peptidebased tags. There are plenty of single-peptide affinity tags: HA (hemagglutinin), FLAG, MYC/c-myc, 6x/8x His, 2x SBP (streptavidin-binding peptide), and CBP (calmodulinbinding peptide), to name a few. In addition, there are protein tags e.g., GFP (green fluorescent protein), MBP (maltose binding protein), and GST (glutathione-S-transferase) [89]. Next to single tags dual tags exist, consisting of a protein domain/peptide, a cleavage site, and a second peptide tag collectively used for TAP [90]. These are widely used twostep protein purification methods to isolate rather clean protein complexes in yeast. In its initial variant, the tag consists of two IgG-binding units or Z-domains from protein A followed by a tobacco etch virus (TEV) protease cleavage site and a CBP peptide. This purification involves the usage of IgG-coupled beads, which capture the protein A domain, followed by cleavage and release of the complex by TEV protease. Using calmodulin-coupled beads, the CBP-tagged protein complex is captured and purified in a second step. Though this strategy works well in yeast cells and significantly reduces nonspecific background proteins, it was rather inefficient in mammalian cells. Another TAP tag called GS-TAP was developed to purify proteins from mammalian cells [91]. The tag consists of a protein G domain, a TEV protease cleavage site, and an SBP peptide. One advantage of using this tag is the 10-fold increased efficiency of bait purification; moreover, the two-step purification can be skipped using streptavidin beads and eluting the bait with biotin [91].
Aforementioned tags were used in many studies to uncover PPIs modulating sequential events in autophagosome biogenesis. Here, we focus on studies performed in mammalian cells and refer the reader to some excellent articles explaining in detail the studies that have so far been carried out in yeast to understand autophagy [92][93][94][95]. A list of crucial AP-MS studies that helped in uncovering the functional network of interactions modulating different events in autophagy using ATG proteins expressed in mammalian cell lines are listed in Table 1. A seminal AP-MS study on autophagy-related proteins and proteins involved in autophagosome biogenesis utilized the expression of 65 HA-tagged bait proteins, of which 32 were primary and 33 secondary baits, in HEK293T cells. Primary baits were chosen based on their functional links to autophagy and vesicle trafficking. To validate the interaction network of primary baits, secondary baits were chosen based on high interconnectivity with primary baits and functional domains or gene ontology (GO) terms linked to autophagy. Bait proteins and interaction partners were purified using anti-HA beads prior to MS measurements. The analysis of this large-scale proteomics dataset identified 2,553 potential interactors, of which 409 high-confidence candidate interaction proteins with 759 interactions were shortlisted, which revealed a global interaction network involved in mammalian autophagy. These interactions revealed the involvement of proteins with various functionalities, among others protein kinases, PtdIns3P-binding proteins, lipid transport proteins, lipid kinases, and protein ubiquitination machinery. New functional links between various proteins were revealed, as for example, association of ULK1 and ULK2 with AMPK, thus indicating a crosstalk in energy sensing. The authors performed extensive AP-MS analyses of Atg8 homologs comparing nutrient-rich and autophagy-inducing (Torin-1 treatment which inhibits MTORC1) conditions. Interestingly, the extensive overlap of interactomes from ATG8 homologs showed a functional redundancy between these proteins. This overlap of interactions is likely due to the presence of a conserved LIR docking site in ATG8 homologs interacting with cargo receptors/substrates having a conserved hydrophobic LIR [96]. GST-tagged LC3 variants were also used to identify interacting partners by LC-tandem mass spectrometry (MS/MS): e.g., GST-tagged LC3B bound to glutathione-sepharose led to the identification of FYCO1 as novel LIR-dependent LC3B interactor. FYCO1 was shown to also bind RAB7A and PtdIns3P and to promote microtubule-dependent transport of autophagic vesicles [97]. GST-tagged GABARAPL1 was employed to characterize its interaction with HSP90 family members, HSP90AA1 activity being critical for GABARAPL1 stability [98].
Moreover, AP-MS approaches were used to analyze organellar compositions, representing a valuable alternative approach for the characterization of cellular stress-response pathways. SILAC-labeled MCF-7 cells expressing eGFP-LC3 were utilized to compare autophagosome proteome changes upon different stress conditions. Vesicle fractionations in combination with anti-GFP-based enrichments, were coupled to MS analyses to reveal stimulus-dependent autophagosome proteomes [99]. Affinity-purified lysosomes were used to profile proteome changes under nutrient-rich and autophagy conditions. The authors expressed 3x HA-tagged TMEM192, a lysosomal membrane protein, in HEK293T cells and performed anti-HA AP to enrich lysosomes prior MS-based identification and quantification. They identified a change in the localization of a protein called NUFIP1 (nuclear FMR1 interacting protein 1), which shifted from the nucleus toward lysosomes/autophagosomes during starvation-induced autophagy and suggested that NUFIP1 might act as a potential receptor for ribophagy [100]. However, in a recent unbiased approach that studied the contributions of protein translation and degradation to ribosomal protein abundance, the role of NUFIP1 in ribosomal protein degradation could not be confirmed, questioning its function in ribophagy [101].
Post-translational modifications such as phosphorylation, ubiquitination, and sumoylation modulate interactions between autophagy proteins at a particular spatial and temporal resolution [102]. For example, AP-MS studies were performed to identify phosphorylation sites on ULK1. N-terminal TAP-tagged mouse ULK1 was expressed and purified from HEK293T cells prior to LC-MS/MS measurements. A total of 16 novel phosphorylation sites on ULK1 were identified, which also included autophosphorylation sites. The authors suggested that phosphorylation at Ser867 and Ser913 of ULK1 might promote its association with ATG13 and RB1CC1 to form an active autophagy initiation complex [103].
AP methods coupled to either western blot or MS allow analyses of protein complexes in all organelles and compartments without the requirement of prefractionation. The aforementioned examples highlight the robustness of these approaches in identifying PPIs and PPI-modulating PTMs to shed light on the molecular mechanisms in autophagy pathways. However, these approaches also have shortcomings. In general, co-purifying contaminants or nonspecific proteins, either binding to the solid matrix or the antibody, are also enriched, making it sometimes hard to distinguish false and true interactors [104,105]. Usage of quantitative proteomic approaches and analyzing data against contaminant repositories, such as Contaminant Repository for Affinity Purification MS Data (CRAPome), can help in identifying contaminants [106]. Due to overexpression, bait proteins are prone to problems like altered localization, protein misfolding, and aggregation. However, bait levels can be controlled via inducible promoters. In addition, CRISPR approaches are now being used to perform genome editing, enabling the expression of tagged proteins at endogenous levels [107]. Usage of either CRISPR knockout or RNAi approaches that remove the endogenous protein can be efficiently used as negative controls [86,88,108]. Also, the localization of the affinity tag, either at the N-or C-terminus of the bait, has to be critically evaluated. In rare cases, internal tags have been used to address specific biochemical questions such as linear ubiquitination [109]. For the analysis of endogenous proteins, choosing the right mammalian cell line is essential. Commonly, cells expressing high amounts of bait proteins simplify subsequent mechanistic studies.
Critically, AP-MS data are commonly binary and do, therefore, neither yield insights into the structure (and stoichiometry) of complexes nor differentiate between direct and indirect interaction partners. Importantly, AP can isolate only stable or strong bait interactors with nanomolar or higher affinities; thus, dynamic, weak, and transient interactions are often missed [110].
Proximity labeling-mass spectrometry to study weak protein-protein interactions and protein neighborhoods
PL-MS-based quantitative proteomics enables the identification and characterization of weak transient PPIs and neighborhoods. To this end, different classes of enzymes, such as biotin ligases, PTM ligases, and peroxidases, are used to label protein neighborhoods (Figure 4) [111][112][113]. These enzymes are fused to bait proteins and catalyze the covalent transfer of a chemical group to proximal proteins. Biotin ligases are enzymes that catalyze, in an ATP-dependent reaction, the conversion of biotin to a reactive biotinoyl-5ʹAMP intermediate. Biotinoyl-5ʹAMP reacts with exposed lysine residues of proximal proteins, leading to the covalent attachment of biotin within an estimated radius of 10 nm around the enzyme [114,115]. A classical proximity-labeling technique, called proximity-dependent biotin identification (BioID), utilizes the biotin ligase BirA, a monomeric 35.4-kDa protein, from Escherichia coli that has been further engineered to BirA R118G to improve its catalytic activity (also referred to as BirA*, with "*" indicating its promiscuous activity). Cells expressing BioID-fused bait proteins are incubated with biotin for 16-24 h to reach maximum labeling efficiency. This relatively long incubation time is required to generate sufficient proximally biotinylated proteins, which can then be subsequently enriched using classical high-affinity streptavidin or neutravidin beads prior to LC-MS/MS sample preparation and measurements ( Figure 5A) [116].
Introducing further mutations to BioID improved its labeling efficiency several-fold and led to two other commonly used biotin ligases called TurboID and miniTurbo [117]. Compared to wild-type BirA, the improved 35 kDa TurboID variant contains a total of 15 mutations, amongst these a substitution of Arg118 to Ser. By deleting the N-terminal DNA-binding domain, a slightly smaller version of TurboID of 28 kDa, called miniTurbo, has also been generated. MiniTurbo harbors 13 out of the 15 mutations of TurboID. These two tags help in identifying snapshots of protein interactions using labeling times of only 5-15 min ( Table 2). A drawback of TurboID is its high affinity toward biotin, which can lead to the usage of endogenous biotin, thereby disrupting biotin-dependent metabolic pathways and leading to toxicity in certain model organisms [117]. A smaller version of BirA of 28 kDa, BioID2, which naturally lacks the N-terminal DNA-binding domain, was identified in the thermophile Aquifex aeolicus. The interesting feature of this enzyme is the reduced background of DNA/chromatin interacting proteins due to absence of the DNA-binding domain [118]. Overall, labeling time and activity of BioID2 are comparable to the one of BioID. Another biotin ligase variant, termed BirA from Bacillus subtilis (BASU), was engineered without DNA-binding domain. This ligase was used to identify novel RNA-binding and proximal proteins; the respective approach was termed RNA-protein interaction detection (RaPID) [119]. For this, BASU was fused to the λN peptide, which binds bacteriophage lambda BoxB stem-loops. BoxB stem-loops are synthesized to flank the respective RNA-ofinterest. Binding of the ligase to the stem-loop structure led to biotinylation of proteins binding to the specific RNA-ofinterest within a 10 nm radius. At labeling times of 1 min, BASU showed more than >30-fold increased signal-to-noise ratios compared to BioID and BioID2 [119]. One of the most recent technical developments is the generation of ancestral BirA for proximity-dependent biotin identification (AirID). Metagenome data and ancestral sequence reconstruction coupled to site-directed mutagenesis gave rise to this BirA variant with 82% sequence similarity to BioID, which labels proteins in vitro or in vivo in 3 h using lower concentrations of biotin than in the original approach [120].
PTM ligases are enzymes that commonly add tags onto lysine residues of bait-proximal proteins. Pupylation-based interaction tagging (PUP-IT) is a method that was developed to identify PPIs at membranes. This approach involves the expression of a bait protein fused to the bacterial Pup ligase PafA and co-expression of a Pup (prokaryotic ubiquitin-like protein) variant with a C-terminal Gly-Gly-Glu sequence. The Pup ligase catalyzes the phosphorylation of the C-terminal Glu residue, leading to its activation and conjugation to lysine residues of proteins that are in proximity to the bait. Due to the low diffusible nature of the activated Pup tag a smaller labeling radius is achieved, thus reducing background. The major shortcoming of this method is the large size of ~54 kDa of the ligase and its low catalytic activity, which render it inadequate to capture snapshots of PPIs [121]. Neddylation by the 76 amino acids large protein NEDD8 is a ubiquitin-like modification, which has been exploited to identify PPIs. For this, a modified version of the NEDD8 E2-conjugating enzyme UBE2M/UBC12 called NEDDylator is fused to a protein or a small molecule. The specific tagging of NEDD8 to proximal proteins happens via the nucleophilic attack of prey lysine epsilon amino groups to thioester-bound NEDD8 on the bait-NEDDylator. This confirms the direct contact between bait and prey proteins, which ensures neddylation of potential binding partners. This tool has been successfully coupled to SILAC-based quantitative MS to identify small molecule-protein and protein-protein interactions in mammalian cells [122,123]. Peroxidases are oxidoreductases that catalyze redox reactions in the presence of hydrogen peroxide. This includes APEX (ascorbate peroxidase) and HRP (horseradish peroxidase) enzymes, which catalyze the conversion of biotinphenol to biotin-phenoxyl radical in the presence of hydrogen peroxide (H 2 O 2 ) ( Figure 5A). This membrane-impermeable reactive radical covalently labels electron-rich amino acids, majorly Tyr and to some extent Trp, Cys, and Phe, of proteins proximal to the bait within a 10 nm radius. APEX was initially identified in pea and engineered to reduce its dimerization property and improve its catalytic activity. This enzyme contributes to polymerization and deposition of diaminobenzidine for electron microscopy (EM) studies. APEX2 from soybean, a 28-kDa enzyme, was engineered to improve its catalytic activity allowing labeling times of 1 min or less in mammalian cells [124][125][126]. HRP is another peroxidase of 44 kDa used for both EM and PL studies [127]. Due to its inactivity in cytosol, it is majorly used to study cell surface molecules and secretory pathways targeting the enzyme via either ligands or antibodies to cell surfaces, using techniques like selective proteomic proximity labeling assay using tyramide (SPPLAT) and enzyme-mediated activation of radical sources (EMARS) [128]. Alternatively, antibody-conjugated HRP was used intracellularly in fixed tissues and cells to perform PL of bait proteins in techniques such as biotinylation by antibody recognition (BAR) [129]. Overall, due to its usage in EM studies and its fast labeling time, APEX2 is the most widely used technique revealing snapshots of PPIs and enabling analyses of stimulus-dependent interactomes. However, APEX2 is less active compared to HRP and sensitive to H 2 O 2 -mediated inhibition.
Notably, tremendous improvements have been made in protein fragment complementation assays (PCA) coupled to PL approaches. Split variants of labeling enzymes were generated using inactive N-and C-terminal fragments of BioID or APEX2 ( Figure 5B). The respective fragments are fused to two different baits known to interact with each other. Interaction between the baits enables the reconstitution of the holo-enzyme allowing biotinylation of proximal proteins. This technique enables identification of interactions depending on interaction of two bait proteins at a specific spatial and temporal resolution. Split-BioID was the first variant of this approach [130]; however, longer labeling time and lower activity of the reconstituted complex made this split version non-suitable to study dynamic interactions [131]. These hurdles were resolved with the recent discovery of split versions based on HRP, APEX2, and TurboID. Split HRP was used, for example, to study cell-cell interactions [132]. The initially made split-APEX2 was less active compared to its full-length variant and led to a second split version with additional nine mutations, which improved activity and specificity [133,134]. Also recently, a version of split-TurboID was developed for the analyses of PPI of complexes, organelle and cell contact sites [135]. For more technical information, we would like to refer readers to recent reviews [111][112][113]. In the remainder of this manuscript, we focus on studies using PL approaches coupled to MS to understand autophagy mechanisms.
Up to date, only a few studies utilized PL coupled to MSbased quantitative proteomics to uncover the roles of proteins involved in autophagy (Table 3). TBC1D14, a TBC domain containing protein, has a strong effect on the structure and function of recycling endosomes and negatively regulates the formation of autophagosome upon overexpression [136]. BioID coupled to MS analysis of TBC1D14 identified its interaction with TRAPPC8, a subunit of trafficking protein particle III (TRAPP-III), a multimeric protein complex with guanine exchange factor (GEF)-activity toward RAB1B, promoting its GTP-loaded state. TRAPP-III regulates the cycling of ATG9 from early endosomes and the Golgi apparatus to the ATG9 compartment. Overexpression of TBC1D14 led to mislocalization of TRAPPC8 onto recycling endosome tubules, to a fragmented Golgi apparatus, and to disruption of the Golgi ATG9 pool leading to inhibition of autophagosome formation [137].
APEX2 labeling of human ATG8 homologs coupled to MS-based quantitative proteomics was used to analyze autophagosome content. APEX2-LC3C labeling identified a reproducible interaction of LC3C with a protein called MTX1 (metaxin 1). MTX1 binds to SAMM50 located on the outer mitochondrial membrane. Together with MTX2 bound to the cytosolic face of MTX1, these proteins form the sorting and assembly machinery (SAM) complex. This complex, together with the mitochondrial contact site and cristae junction organizing system (MICOS), maintain cristae structure, mitochondrial morphology, and homeostasis. Colocalization studies and functional biochemical analyses of MTX1 revealed its role in autophagic clearance of parts of damaged mitochondria in a piecemeal fashion via LC3C and SQSTM1 [107]. Recently, new roles of LC3s in protein secretion were identified using PL-MS. RNA-binding proteins and small non-coding RNAs were shown to be packed into extracellular vesicles in a MAP1LC3B and LC3conjugation-machinery-dependent manner [138]. Also, GABARAP was identified in extracellular vesicles using cells expressing APEX2-GABARAP. Interestingly, these extracellular vesicles also contained proteins with which GABARAP was shown to interact inside autophagosomes, further supporting a crosstalk between autophagy and protein secretion [139].
APEX2 labeling of the mitophagy receptors OPTN, OPTN D474N a ubiquitin-binding defective mutant, and TAX1BP1 in HeLa cells coupled to TMT (8/9 plex)-based quantitative MS analysis was performed comparing antimycin A/oligomycin A (inducing depolarization of mitochondria) treatments for 1 and 3 h with non-treated cells to identify proteins proximal to the receptors during mitophagy. In combination with a CRISPR-based genetic screen coupled to mitophagy flux assays, HK2 (hexokinase-2) was characterized as a scaffold, forming a 700-kDa complex consisting of PINK1-PRKN and other ubiquitinated proteins, essential for the clearance of damaged mitochondria [140]. It is used as an EM tag. It gives a snapshot of PPIs due to fast labeling time. [126] (Continued ) The role of galectins in maintenance, repair, removal, and biogenesis of lysosomes upon injury has been extensively characterized using APEX2 labeling coupled to quantitative MS. Galectins are beta-galactoside binding proteins with an intrinsic carbohydrate recognition domain (CRD). This property helps in sensing membrane damage due to exposure of membrane glycans. LGALS8 (galectin 8) was shown to induce autophagy upon endomembrane damage by regulating MTORC1 activity via changes in the activation state of RRAG GTPases [141]. APEX2-LGALS9 labeling revealed its role in lysosomal damage sensing via activation of AMPK. Upon lysosomal damage, LGALS9 displaces the deubiquitinase USP9X from MAP3K7/TAK1 kinase, thereby promoting K63-linked polyubiquitination and activation of the kinase. MAP3K7 in turn activates AMPK and thereby autophagy by phosphorylation of Thr172 [142]. Previously, LGALS3 was known to promote TRIM16 based autophagic removal of damaged lysosomes and activate TFEB, a transcription factor for lysosome biogenesis [143]. In addition, APEX2-LGALS3 labeling was performed to understand its role in the repair and clearance of endomembrane damage via autophagy.
LGALS3 helped in recruiting the ESCRT component PDCD6IP/ALIX upon lysosomal membrane damage.
LGALS3 promoted the interaction of PDCD6IP with CHMP4B, which collectively mediated scission and closure of lysosomal membranes [144]. LGALS3 helps in recruiting ESCRT complex and PDCD6IP to promote repair of damaged lysosomal membranes.
LGALS9 APEX2 SILAC Lysosomal damage is sensed by LGALS9, and along with ubiquitin, it signals binding of autophagy receptors to promote lysosome degradation.
LGALS9 and ubiquitin cooperatively activates AMPK for autophagy induction.
[ TMT Identification of essential factors involved in the formation of mitochondria-autophagosome synapse and for selective degradation of mitochondria. [140] 7. STK38 APEX2 SILAC STK38 a Ser/Thr kinase, phosphorylate XPO1 (exportin) to mediate its export from the nucleus along with BECN1and YAP1. Cytosolic localization of XPO1 is crucial in starvation-induced autophagy. [157] 8. TBC1D14 MYC-BioID Label free TBC1D14 interacts and traps one of the subunits of TRAPP, TRAPPC8 which inhibits starvation induced autophagosome formation. [137] 9. TEX264 APEX2 TMT TEX264 interacts with autophagy receptors: SQSTM1, CALCOCO2 and TAX1BP1, ER membrane proteins: CANX, CISD2, and autophagy regulators: ATG14, and WIPI2 during starvation. TEX264 get degrades in a LIR-dependent manner showing its role as a potential receptor in reticulophagy. [158] Thus, the aforementioned examples show the impact of PL-based MS in understanding autophagy-relevant mechanisms. PL provides information about the proximal proteome change around the bait of interest at a given spatial and temporal resolution. However, PL methods reveal many proximal neighbors and potentially interacting proteins leading to an inherently high background. Therefore, the controls used to distinguish true versus false-positive interactors are critical. Commonly, five different types of controls are employed: (i) cells without any PL enzyme treated with biotin/biotin-phenol, (ii) the enzyme fused to an unrelated protein such as GFP or RFP, (iii) the enzyme fused to an inactive or mutant version of the respective bait, (iv) the enzyme fused to a compartment-specific, unrelated protein, which localizes similarly to the bait, and (v) cells with the free PL enzyme [111,145]. Given the variety of controls with their intrinsic pros and cons, choosing the right control for the desired PL experiment is still a debate in the field. We see large differences in enrichments of baits and their binding partners depending on the used control conditions (unpublished data). Whereas control (i) leads to high enrichment rates, the number of false positives appears high. In contrast, due to the high activity of free enzymes in control (v), enrichment rates are low, and the number of false-negatives appears high, i.e., weak transient interactions seem to be lost. Thus, we favor controls (ii)-(iv) in which PL enzyme-tagged bait proteins are compared to PL enzyme-tagged control/ unrelated proteins. By using inducible expression systems, PL enzyme-bait protein amounts can be titrated to avoid high expression levels, which again increases background signals of nonspecifically enriched proteins. Additionally, one should consider that BioID-based approaches rely on the availability of accessible lysine residues and APEX2 methods on electron-rich amino acid residues, i.e., tyrosine, in proximal proteins. Importantly, problems could arise from toxicity issues related to overexpressed PL enzymes e.g., via protein aggregation, mislocalization, and functional inactivation of baits due to the fused PL enzyme. Hence, experimental characterizations of PL enzymes and controls are essential to design meaningful experiments for studying complex biological questions.
Conclusions and outlook
Over two decades, there has been a significant increase in the understanding of mechanisms regulating autophagy. Technological improvements contributed to the expanding list of autophagy-related and -associated proteins. MS-based proteomics studies helped tremendously in identifying and characterizing relevant proteins. Overall, various factors influence the identification of PPIs, like the range of affinities and composition of complexes, next to intrinsic methodological shortcomings of the used analytical approach. Thus, in-depth knowledge of potential methods and understanding of influential experimental factors that might modulate PPIs is essential for choosing appropriate approaches to characterize any PPI. Here, we introduced various MS-based methods available to study PPIs, explained in detail principles and applications of AP-MS-and PL-MS-based approaches, and listed examples of autophagy relevant studies. There are still various unanswered, mechanistic questions like: which membrane sources are employed under which conditions for autophagosome biogenesis? Which factors influence the localization of autophagy initiation complexes at ER sites? Which triggering factors regulate the balance between selective and nonselective autophagy, and how do proteins modulate autophagosome size and shape under these conditions? Extensive characterizations of PTMs modulating PPIs are essential for understanding the above questions. In the future, it will be essential to combine the analyses of PTMs and PPIs. The increasing scanning speed and sensitivity of mass spectrometers will help in generating more detailed views of the regulation of PTMs. Both DIA and DDA methods will likely contribute to generating truly comprehensive datasets that will also be useful for systems biology-based approaches. However, the limited dynamic range of mass spectrometers still poses a challenge to be addressed. Modified peptides still have to be enriched, but enrichment approaches often differ depending on which PTMs are analyzed, impeding the study of PTM crosstalk. Top-down proteomics approaches might partially address this issue. Finally, the growing field of MS-based lipidomics will be essential to fully understand the membrane dynamics underlying autophagy. Thus, we believe that cutting-edge MS approaches will continue to help to address autophagy-related questions and lead to a comprehensive understanding of autophagy-related processes.
|
2020-11-14T14:07:02.144Z
|
2020-11-13T00:00:00.000
|
{
"year": 2020,
"sha1": "ab1ae0d094969f60f903a8a320672724a753e1ec",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/15548627.2020.1847461?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "3642f46a8284e3b553193e32224835a75843b21f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
119144323
|
pes2o/s2orc
|
v3-fos-license
|
On the average exponent of CM Elliptic Curves Modulo p
Let $E$ be an elliptic curve defined over $\Q$ and with complex multiplication by $\mO_K$, the ring of integers in an imaginary quadratic field $K$. It is known that $E(\F_p)$ has a structure E(\F_p)\simeq \Z/d_p\Z \oplus \Z/e_p\Z. with $d_p|e_p$. We give an asymptotic formula for the average order of $e_p$, with improved error term, and upper bound estimate for the average of $d_p$.
Introduction
Let E be an elliptic curve over Q, and p be a prime of good reduction. Denote E(F p ) the group of F p -rational points of E. It is known that E(F p ) has a structure (2) E(F p ) ≃ Z/d p Z ⊕ Z/e p Z.
with d p |e p . By Weil's bound, we have (3) |E(F p )| = p + 1 − a p with |a p | < 2 √ p. We fix some notations before stating results. Let E[k] be the k-torsion points of the group E(Q). Denote Q(E[k]) the k-th division field, which is obtained by adjoining coordinates of E [k]. Denote n k the field extension degree [Q(E[k]) : Q]. Recently, T.Freiberg and P.Kurlberg [TP] started investigating the average order of e p (In the summation, we take 0 in place of e p when E has a bad reduction at p). They obtained that there exists a constant c E ∈ (0, 1) such that unconditionally when E has CM. More recently, J.Wu [JW] improved their error terms in both cases unconditionally when E has CM. In this paper we improve the unconditional error term in CM case by using a number field analogue of Bombieri-Vinogradov theorem due to [H,Theorem 1].
We are also interested in the average behavior of d p . For the average of d p , we have an upper bound result. We apply the number field analogue of Brun-Titchmarsh inequality due to [HL,Theorem 4].
Theorem 1.2. Let E be a CM elliptic curve defined over Q and with complex multiplication by O K , the ring of integers in an imaginary quadratic field K. Let N be the conductor of E. Let A > 0, and N ≤ (log x) A . Then we have where the implied constant is absolute.
Note that the upper bound is sharper than the trivial bound ≪ x log x.
Preliminaries
Lemma 2.1. Let E be a CM elliptic curve defined over Q and with complex multiplication by O K . Then for k > 2, where φ is the Euler function.
Lemma 2.2. Let E be an elliptic curve over Q, and p be a prime of good reduction. Then Proof. See [M, page 159].
Let N be the conductor of E, and denote where the implied constant is absolute.
Proof. See A.Cojocaru [AC,Lemma 2.6], and note that there are only nine possibilities of K.
We state some class field theory background. For the proofs, see [AM,Lemma 2.6,2.7].
Here c is an absolute constant and φ(f) is the number field analogue of the Euler function.
Let π K (x; q, a) = #{p : prime ideal; N (p) ≤ x, and p ∼ a mod q}. The following is a number field analogue of the Bombieri-Vinogradov theorem due to Huxley [H,Theorem 1].
where Q = x 1/2 (log x) −C . The implied constant depends only on B and on the field K.
There is a number field analogue of Brun-Titchmarsh inequality due to J. Hinz and M. Lodemann [HL,Theorem 4].
Lemma 2.7. Let H denote any of the h(q) elements of the group of idealclasses mod q in the narrow sense. If 1 ≤ N q < X, then We are now ready to prove Theorem 1.1. From now on, E is an elliptic curve over Q that has CM by O K , where K is one of the nine imaginary quadratic field with class number 1. Let N be the conductor of E.
3. Proof of the theorem 1.1 By Weil's bound, we have As shown in both [TP] and [JW], we use the following elementary identity Thus we obtain p≤x,p∤N Then we split the sum into two parts as in [JW].
Here a variable y is to be chosen later within 3 ≤ y ≤ 2 √ x. We treat S 2 using trivial estimate and Lemma 2.3, then we obtain Let π E (x; k) = Li(x) n k + E k (x). Our goal for treating S 1 is making use of Lemma 2.6. First, we take care of the inner sum by partial summation p≤x,p∤N,k|dp Then we deal with S 1 using the trivial estimate (10) and Lemma 2.1, we have (12) Let π E (x; k) = #{p : N (p) ≤ x, p ∤ kf, p splits completely in K(E[k])}. By Lemma 2.4, we have For the detailed explanation, we refer to [AM,page 9]. By Lemma 2.5, we have Again using Lemma 2.5 to bound t(m) and applying Lemma 2.6 as in [AM, page 10], where C = C (A, B) is the corresponding positive constant in Lemma 2.6 for the positive constant A + B + 1. Note that T (q) ≤ 6. Writing E k (x) = π E (x; k) − Theorem 1.1 now follows.
Proof of Theorem 1.2
Let N be the conductor of a CM elliptic curve E satisfying N ≤ (log x) A . We use the following elementary identity We unfold the sum similarly as in the proof of Theorem 1.1.
p≤x,p∤N
We introduce a variable y and split the sum as shown in the proof of Theorem 1.1. The inequality in the last line is due to the primes p in K which have degree 2 over Q and split completely in K(E[k]). p≤x,p∤N Let S 1 , S 2 denote the second sum and the third sum respectively.
Now, we use Lemma 2.5, and 2.7 to give an upper bound for each π E (x; k).
Then we treat S 1 by (19), and S 2 by the trivial bound (π E (x; k) ≪ x k 2 ) in Lemma 2.3. As a result, we obtain where the implied constants are absolute. Applying partial summation to S 1 with φ(k) 2 ≪ n k , and k≤t 1 φ(k) = A 1 log t + O(1), we obtain (20) provided that 3 ≤ x y 2 N (f) . Choosing y = x 3N (f) , it follows that (21) S 1 + S 2 ≪ A x log log x Therefore, Theorem 1.2 now follows. Note that the trivial bound in Theorem 1.2 given by Lemma 2.3 is ≪ x log x. The number field analogue of Brun-Titchmarsh Inequality(Lemma 2.7) contributed to the saving.
|
2012-11-02T00:24:14.000Z
|
2012-07-27T00:00:00.000
|
{
"year": 2012,
"sha1": "34d58f43b143c4c223b352b42bda5cae25efb9ea",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.ffa.2014.07.003",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "34d58f43b143c4c223b352b42bda5cae25efb9ea",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
155546317
|
pes2o/s2orc
|
v3-fos-license
|
competition over decentralisation: the influence of ideology and electoral incentives on issue emphasis.
Under what conditions is decentralisation a salient issue for state-wide political parties? It is argued in this article that the extent to which state-wide parties emphasise decentralisation depends on their strategic considerations: on their overall ideology, on the electoral incentives created by the context in which they compete, and on the interaction between the two. The results of the analysis of party manifestos in 31 countries since 1945 are as follows. First, parties that pay greater attention to cultural matters relative to economic matters tend to talk more about decentralisation. Second, the systemic salience of decentralisation also encourages parties to talk more about decentralisation. Third, the larger the regionally based ethnic groups within a country, the more salience all state-wide political parties will attach to decentralisation. Finally, only parties that put greater relative emphasis on cultural matters tend to respond to the electoral threat of regionalist parties. The influence of territorial diversity on the salience of decentralisation thus works through two channels and is partly conditioned by political parties’ ideological profile.
have either focused on the influence of party organisation in shaping the incentives for regional actors to challenge the territorial allocation of authority (Riker 1964;Garman et al. 2001;Filipov et al. 2004) or studied how the preferences of actors shape constitutional reforms (Banting and Simeon 1985;Behnke and Benz 2009;Benz and Colino 2011). But we know little about how decentralization becomes an issue in the first place.
To answer this question, we argue that it is necessary to examine the conditions under which state-wide political parties emphasise the issue of decentralisation. To identify those conditions, we follow the encouragement of Marks and Hooghe (2000: 811) to 'bring politics into the study of institutional change'. The premise of this article is that, much like European integration, the territorial structure of the state is a politicised question because it touches policy areas that are close to the hearts and minds of citizens, such as health and education, and because it furnishes regions with a political mantle that potentially challenges citizens' sense of national identity. As a result, we expect that a party's decision to address the state's territorial structure will be sensitive to its ideology. But parties are also responsive to their environment and adjust to the imperatives of party competition. So, we also expect that the emphasis that state-wide parties put on the issue of decentralisation will be shaped by the presence of a territorial cleavage and its politicisation by regionalist parties. Thus, the salience of decentralisation as an issue for state-wide political parties is explained by strategic considerations: political parties talk more about decentralisation if this is consistent with their ideology and with their electoral incentives.
Using manifesto data from the CMP/MARPOR (Budge et al. 2001;Klingemann et al. 2007;Volkens et al. 2012), we first show that state-wide parties that focus on cultural rather than economic issues are more likely to address decentralisation, because this topic is associated with other 'New Politics' issues such as minority rights. But, the context of political competition matters as well. Related to this, we demonstrate that systemic saliencethe extent to which other parties in the party system talk about decentralisationinfluences the party under consideration. We also reveal that a country's degree of territorial diversity matters: where there are large regionally-based ethnic groups, the issue of territorial autonomy is generally more likely to be addressed by state-wide parties. Finally, we show that party ideology and electoral incentives reinforce one another: state-wide parties that focus on cultural issues talk more about decentralisation when faced with the electoral threat of regionalist parties. Thus, the effects of the two main strategic driversideology and electoral incentivesare conditional on one another.
Our approach adds to existing analyses of the salience of decentralisation among statewide parties undertaken by Mazzoleni (2009) and Alonso (2012). 1 These authors seek to understand the 'contagion' of support for decentralisation across national party system, using the CMP as evidence: Mazzoleni's (2009) account indicates that electoral defeats and the electoral threat of regionalist parties play an important role in determining the salience of decentralisation, while Alonso (2012) identifies a conditioning effect of ideology in shaping the ability of mainstream parties to adopt a credible pro-devolution strategy. We extend these findings in two ways. First, we consider how the importance of decentralisation is conditioned by the emphasis placed on other issue dimensions (economic, cultural) which constitute the 'package' of party policies. Second, we look at how state-wide parties respond to strategic incentives across a wide universe of cases. Mazzoleni (2009) and Alonso (2012) apply their arguments to a select number of West European countries (Belgium, France, Italy, Spain and the UK), where the territorial cleavage and decentralisation processes have been prominent. 1 While there have been efforts to study state-wide parties' positioning on decentralisation (e.g. Toubeau and Wagner 2013), we note that studying party positioning on the issue needs to be distinct from the analysis of salience: the position a party takes on an issue is not necessarily related to the emphasis it places on it, even if there is evidence that some parties emphasize their more extreme positions (Wagner 2012).
But this means that the broader applicability of their explanation is limited and that we know little about why decentralisation becomes salient in different kinds of settings, i.e. homogeneous or heterogeneous countries, and, in the latter case, whether it is driven by the threat of regionalist parties or by territorial diversity tout court.
The next section develops the theoretical reasoning underlying our claims. In the third section we describe the data, measures and statistical models that we employ to assess these claims. The fourth section describes the results. The conclusion summarises the findings and discusses their relevance to the study of issue competition and multi-level governance.
When do parties emphasise decentralisation?
Recent decades have seen a rise in issue voting and competition (Franklin et al. 1992;Green-Pedersen 2007), in which topics such as the environment, immigration and the EU have become increasingly influential in shaping voter choice and party strategy. From the perspective of political parties, decentralisation is one among this set of policy issues over which they compete for electoral support. We argue that, as with other issues, the emphasis that political parties put on decentralisation is determined by strategic considerations: specifically, by their ideology and by their electoral incentives.
Our first contention is that state-wide parties' policy stances on decentralisation are part of their overall ideology, i.e. the set of values, goals and beliefs about societal institutions that define their identity and guide their actions (Freeden 1998). We posit that the ideology of parties encourages them to prioritize certain issues above others and that, during electoral contests, they place a selective emphasis on those higher priority issues because it confers to them a strategic advantage (Budge and Farlie 1983). Therefore, the salience of decentralisation is likely to be conditioned by the emphasis placed on the other issues that constitute their programmatic 'package', which can be organised along the economic and cultural dimensions (e.g., Kriesi et al. 2008).
Specifically, parties that put greater emphasis on the cultural dimension relative to the economic dimension should be more likely to talk about decentralisation. We expect this because decentralisation has come to be associated with the post-materialistic values that grew among Western electorates in the late 1960s and early 1970s, and with the subsequent rise of Green parties and 'New Politics' (Inglehart 1990). Decentralisation was seen as a method for fostering greater participation in decision-making and granting collective rights to autonomy for particular territorial groups, and thus became prominent alongside other 'new' issues related to the cultural dimension, such as the environment and the collective rights of non-economic groups like women, ethnic minorities or immigrants (Marks et al. 2010). In response to this evolution, there occurred a parallel rise of Radical Right parties during the 1980s that emphasised the flip-side of these post-materialist issues (Mudde 2007). These culturally conservative parties adopted radically opposite stances, favouring traditional morality, nationalism and central state authority.
State-wide parties from mainstream families such as Social Democrats and Liberals also articulated the cultural dimension as a result of its prominence in public opinion and the electoral threat of the new 'niche' parties (Meguid 2008). Parties that respond in this way should also be likely to talk more about decentralisation. In contrast, mainstream parties which maintain their focus on the economic dimension should not emphasise decentralisation as much. Of course, all political parties will devote some attention to both the cultural and economic dimension. However, the relative emphasis on the two dimensions will vary, depending for instance on the extent to which mainstream parties have incorporated 'New Politics' issues. This should have a direct bearing on the importance they assign to decentralisation. So, our first hypothesis is: H1: The more a state-wide party emphasises the cultural dimension relative to the economic dimension, the more it will emphasise decentralisation.
Our second contention is that parties adjust the salience of decentralisation in their programmes in accordance with the electoral incentives shaped by their environment.
Specifically, we argue that parties will alter the emphasis they put on decentralisation in response to the territorial diversity of a society and its politicisation by regionalist parties.
First, state-wide parties should place greater emphasis on decentralisation in countries with regionally-based ethnic groups, such as the Scots in the UK or the Flemish in Belgium.
We define a regionally-based ethnic group as a group of people living in a territorially delimited space that shares a sense of commonality based on a belief in a shared ancestry and a common culture, that is politically relevant insofar as it is represented in national politics by a least one political organisation (Cederman and Girardin 2007;Cederman et al. 2010). It is in those contexts that decentralisation is politicised because it has ambiguous consequences for the mobilisation of ethnic grievances. 2 The size of the regionally-based ethnic group is a key determinant underlying both their predisposition to advance secessionist claims (Sorens 2005(Sorens , 2008 and the willingness of states to accommodate their demands (Walter 2006). This generates the expectation that political parties will pay greater attention to decentralisation if regionally-based ethnic groups are large. This is because it is a potential threat to the integrity of the state and thus an unavoidable issue of debate, and because there may be a strategic incentive to check the future actual threat of regionalist parties.
Once this actual threat is present, so when there are relevant regionalist parties, statewide parties may have an even stronger incentive to talk about decentralisation. By regionalist parties, we denote parties that represent territorially-bounded ethnic, linguistic, cultural groups, that seek electoral support on a limited territorial basis and whose main goal is selfdetermination, i.e. the right to exert direct control over their ruler and policies, whether in the form of territorial autonomy or independence. In virtue of their presence and demands, regionalist parties can introduce decentralisation as a separate issue dimension of competition over which they can claim ownership as niche political parties (Meguid 2008;Wagner 2011) and through which they can exert pressure on state-wide parties in the electoral, parliamentary and governmental arenas (Toubeau 2011;Amat and Falco-Gimeno 2013). Following Meguid (2008), we expect that if the size of regionalist parties is small and the actual threat therefore negligible, on the whole state-wide parties will likely dismiss the issue as unimportant.
Conversely, if the size of regionalist parties is large and the actual threat therefore significant, we expect state-wide parties in general to increase the priority they assign to decentralisation. They do this either by deploying an accommodative strategy in an attempt to challenge regionalist parties' ownership of the issue or by deploying an adversarial strategy in order to under-cut the accommodative efforts of their mainstream rivals (Meguid 2008). Our two hypotheses relating to potential and actual threats created by territorial diversity are thus: H2a: The larger the size of the regionally-based ethnic groups, the more state-wide parties will emphasise decentralisation.
H2b: The larger the size of regionalist parties, the more state-wide parties will emphasise decentralisation.
Our third claim is that the components underlying a party's strategic considerations (ideology and electoral incentives) reinforce one another. Thus, how parties react to territorial diversity will depend on the configuration of their ideology. The first expectation is that parties that emphasise the cultural dimension are more likely to respond to territorial diversity by emphasising decentralisation than parties that emphasise the economic dimension. Given the importance they assign to cultural pluralism and localised decision-making or, conversely, to centralism and state authority, these parties have the requisite ideological background to address the issue of decentralisation in their strategic response to the threats presented by regionally-based ethnic groups and by regionalist parties. That is, they can expect that putting selective emphasis on decentralisation and thereby raising its prominence in the political space will be favourable to them. The related expectation is that the issue of decentralisation will be more closely associated with the cultural dimension in countries that feature greater territorial diversity. That is because in such contexts, parties that assign importance to the cultural dimension will include decentralisation in their 'package' of cultural concerns. The consequence is that the relative emphasis on the cultural dimension is more closely tied to the salience of decentralisation in countries where the potential or actual threat of territorial diversity is higher. This generates a third set of hypotheses: H3a: The larger the size of regionally-based ethnic groups, the more state-wide parties that emphasise the cultural dimension will emphasise decentralisation.
H3b: The larger the size of regionalist parties, the more state-wide parties that emphasise the cultural dimension will emphasise decentralisation.
Data and Model
To measure the salience of decentralization for political parties, we make use of the party manifestos coded by the CMP/MARPOR (Budge et al. 2001;Klingemann et al. 2007;Volkens et al. 2012). This project summarizes party manifestos quantitatively by assigning each quasi-sentence to one of 56 categories. This approach is useful for our purposes as it explicitly measures the emphasis each party places on the different issues. 3 Following Alonso (2012), we code decentralisation emphasis using six issue categories: decentralisation, centralisation, national way of life (positive/negative) and multiculturalism (positive/negative). These additional categories are not exclusively related to decentralisation, as they may refer to topics related to immigration or integration. But, adding them nevertheless provides us with a more valid measure of the total salience of decentralisation, as it offers a measure of a party's stance towards the institutional and cultural component of decentralisation. 4 Moreover, parties do not always phrase support for the national state and the central government as support for 'centralisation', so such quasi-sentences hardly exist at all.
While robustness checks reveal that our results are not sensitive to the precise categories we include, we follow Alonso's coding as this appears to best reflect actual party salience. 5 Figure 1 presents descriptive graphs depicting the average salience of decentralisation across time, party families and countries. The growing importance of decentralisation is evident from the trend line in the left panel: the attention that parties have paid to this issue was mostly constant from the 1950s and then increased in the 1970s, a period that corresponds to the surge of political nationalism among stateless nations and to the increasing appeal of decentralisation. Yet, there is also a large amount of variation in the salience of 3 We exclude programmes classed as estimates by the CMP itself. 4 Certain scholars have sought to improve the validity of decentralisation scores in the CMP dataset by coding these two different dimensions, but these efforts remain limited to specific countries, like Spain (Libbrecht and Maddens 2009) and Italy (Basile 2012). 5 We also ran our models with just the two decentralization codes as well as adding just the items related to 'national way of life'. The results remain largely consistent (see Appendix 5, Tables 2 and 3), with the partial exception of H3b (Model 4).
decentralisation. This fact is evident in the considerable spread of salience scores around the trend line in the left panel of Figure 1. Moreover, while countries and families differ on average in the importance that parties assign to decentralisation, there are also notable differences within all party families (central panel) as well as within countries (right panel).
The overall picture is thus one of considerable variation in the salience of decentralisation within party families and within countries, and it is this variation we aim to explain.
[ Figure 1] The countries and elections included in the analysis are listed in Appendix 2. We use an OLS model to predict the salience of decentralisation. Since errors may be correlated within parties, we cluster standard errors by party. To address the autocorrelation of errors from one election to another, we run our models using a Prais-Winsten transformation (as recommended by Plumper et al. 2005). We choose this method over the use of a lagged dependent variable as the latter approach arguably uses lagged values to explain much of the variance of interest. 6 To address the possibility that errors within one country-election may be correlated, we use a series of country-and election-level covariates, detailed below. We exclude all parties coded as regionalist (see Appendix 3) from our analysis. In our regression models, we use the natural logarithm of the salience score as is recommended for skewed data that is zero-censored (Gelman and Hill 2007). 7
Predictor variables
To measure our first predictor variable, we create a relative salience indicator for the cultural dimension relative to the economic dimension. We use the coding approach suggested by and Hobolt (2012) (see Appendix 1). 8 The cultural dimension includes topics such as freedom and human rights; traditional morality; law and order; and environmental protection. The economic dimension contains familiar issues such as protectionism, regulation and free markets. We measure the relative salience of the cultural dimension as the share of cultural statements among all economic and cultural statements: salienceculture/ (salienceeconomy+salienceculture). This variable ranges from 0 to 1.
We measure the size of regionally-based ethnic groups using information provided in the Ethnic Power Relations (EPR) dataset (Wimmer et al. 2009, Wucherpfennig et al. 2011 retrieve the proportion of the population belonging to a regionally-based ethnic group. We code ethno-politically relevant groups as regionally concentrated if they are recorded as either only or partly regionally-based in the EPR dataset. We exclude the largest ethno-politically relevant group, which is generally the dominant group (e.g. the English in the United Kingdom). The resulting variable ranges from 0 to 1; the maximum value in the dataset is for Belgium (0.4). Finally, we also code the electoral strength of regionalist parties. To do so, we first created a list of all regionalist parties (see Appendix 2) and then coded their electoral success (using Massetti and Schakel 2013). We use the values from the current (and not the previous) election as we believe that this best captures the electoral threat.
Controls
At the party level, we include two variables that may affect the emphasis that parties place on decentralization. Party size is measured as the share of the vote at the election after the manifesto was written; this information is included in the CMP dataset. Smaller parties should emphasise decentralisation more as they may wish to occupy a niche area in the policy space (Wagner 2011). Government participation is 1 if the party was in government for any amount of time (excluding caretaker cabinets) between the previous and the current election; this information is taken from the Parlgov dataset (Döring and Manow 2012), with missing cases added manually by the authors. Parties in opposition should emphasize decentralisation more than parties in government, who will seek to preserve the institutional status quo.
At the country level, we include eight variables that may shape the strategic incentives faced by political parties to emphasize decentralisation. We control for how other mainstream parties campaign on decentralisation matters by including lagged measures of the systemic salience and party polarisation on decentralisation at the previous election. 9 Polarisation is measured as the standard deviation in party positions at the previous election. 10 We also 9 Appendix 5, Table 4 presents the results if we use concurrent levels of systemic salience and polarization. The results remain substantively very similar; however, H2a receives somewhat less support in this specification, which is not surprising as the effect of regionally-based ethnic groups will affect all parties and thus be partly contained within the systemic salience measure if it is measured concurrently. include the level of disproportionality and the effective number of electoral parties; both variables are taken from Gallagher (2012). A permissive electoral system is likely to facilitate the articulation of the territorial cleavage, the multiplication of issue-dimensions (Anorim Neto and Cox 1997), and to generate a centrifugal dynamic of party competition in which parties seek marginal votes by focusing on non-economic issues like decentralisation (Cox 1990;Dow 2001). Finally, we control for the existing territorial distribution of authority, measured by the 'self-rule' value assigned by Marks et al. (2010). Self-rule is the extent to which sub-national units can run their own affairs independently of the central government.
We use the value from the previous election. The more powers territorial entities exercise, the more likely it should be that decentralisation will be an issue of contestation. The process of reforming the territorial distribution of authority can, however, also be contentious. So, we code whether there was a territorial reform between the previous and the current election: this variable is 1 if there was a change, 0 if not. Alonso (2012) suggests that the salience of centreperiphery matters increases immediately after (and not before) a territorial reform. Lastly, we also control for the population size and geographic area of the country. These are taken from
Results
To test our first hypothesis (H1) that mainstream parties that focus less on the economic dimension and more on the cultural dimension also put greater emphasis on the decentralisation dimension, we run a basic model ( Labour party did during its electoral campaigns in the late 1990s. In contrast, parties that remain concerned with questions of equity, state intervention and economic groups and thus compete mostly on the economic dimension will downplay the issue of decentralisation. Turning to the control variables, it appears to be difficult to explain the salience of decentralisation matters using country-or party-level factors. However, the result for systemic salience is consistently strong and statistically significant: for every 1 percent increase in systemic salience, the emphasis on decentralisation increases by .21 per cent. This means that a 25 percent increase in systemic salience (e.g., from 0.4 to 0.5) would lead to a 5 per cent increase in the emphasis on decentralisation. This reveals that parties do not autonomously determine the importance they attach to decentralisation according to their ideological profile, but rather are responsive to their political environment and strategically adjust their emphasis in function of how much other political parties talk about this issue. The influence of the 'party system agenda' may encourage increased attention to decentralisation even on the behalf of those individual parties that do not 'own' the issue and do not stand to reap electoral benefit from addressing it (Green-Pedersen 2007). This finding is comparable to studies that examine the determinants of the salience of issues like European integration (Steenbergen and Scott 2004). and the environment (Spoon, Hobolt et al. 2014) that show that parties are constrained by their strategic context in deciding the emphasis they place on the issue.
The effect of electoral incentives is also evident when we assess the influence a Why do state-wide parties respond to the potential threat of territorial diversity, but not to the actual threat posed by regionalist parties? The first thing to note is that, in countries with greater territorial diversity, the issue of decentralisation is more politicised at the systemic level, that is, across state-wide parties in general. 13 This is because it is in those settings where decentralisation is deployed for the contentious purpose of managing a territorial cleavage, which has bearing for the entire polity and thus demands to be addressed by all state-wide parties. Moreover, even if this territorial diversity does not find political articulation through regionalist parties, all state-wide parties face the electoral incentive to check the possibility that a potential threat becomes an actual threat. This incentive will naturally rise with the size of the group. This does not mean that the claims of smaller groups, such as the German-speaking minority of South Tyrol, will not be recognized, but rather that it will not feature as saliently as an issue in national elections. This result corroborates existing studies (Sorens 2005, Walter 2006).
But why is there no such effect for the influence of regionalist parties? The answer is that different state-wide parties will face different constraints in their ability to respond to this actual threat, depending on their ideological profile: some parties will respond by talking more about decentralisation, while others will not, thus reducing the effect that regionalist parties may have on the salience of decentralisation, when all parties are examined.
We show this when testing Hypotheses 3a and 3b, in which we argued that some parties may react more than others to the potential and actual threat of territorial diversity, depending on their ideological profile. Models 3 and 4 test whether the effects of the size of regionally-based ethnic groups (potential threat) and the electoral strength of regionalist parties (actual threat) are conditioned by the relative emphasis that a party places on the cultural dimension.
Model 3 tests the effect of the interaction between the relative salience of the cultural dimension and the size of the regionally-based ethnic group on the salience of decentralisation. The interaction effect is not statistically significant. 14 Thus, there is no evidence supporting the claim that the effect of regionally-based ethnic groups on the salience of decentralisation is stronger among parties that put greater emphasis on the cultural dimension, or that the salience of decentralisation is more closely associated with the salience of the cultural dimension in countries with larger regionally-based ethnic groups. The reason for this echoes what we argued above: all state-wide parties face the incentive to address the issue of decentralisation when responding to the presence and size of regionally-based ethnic groups, in order to check the potential emergence of an actual threat, so we do not observe any cross-party variation by ideological type.
In contrast, we find that the effect of the threat of regionalist parties is stronger among parties that put greater emphasis on the cultural dimension (Model 4). To verify the hypothesis, we restrict our sample to countries where there is a regionally-based ethnic group and we interact the electoral strength of regionalist parties with the relative salience of the cultural dimension. The interaction effect is statistically significant, providing support for H3b. In Figure 2, we see that the marginal effect of the electoral strength of regionalist parties on the salience of decentralisation increases as the relative emphasis on cultural matters increases. 15 This means that the effect of the strength of regionalist parties on the salience of decentralisation is conditional upon the fact that a party emphasises cultural matters; in contrast, the influence of regionalist parties is rendered neutral for parties that compete exclusively on the economic dimension. So, the main mechanism or 'transmission belt' from the actual threat of territorial diversity to the heightened salience of decentralisation is the relative emphasis that state-wide parties place on cultural issues.
[ Figure 2 about here] For an illustration of how these mechanisms worked in reality, consider the response of state-wide political parties to regional nationalism in the UK and Spain. During the 1980s, Scottish and Catalan regionalist parties put forth claims for independence and territorial autonomy, but at the time, the British Labour party and the Spanish Socialist Party were primarily concerned with economic matters like trade unions, state ownership of the economy, employment and welfare, and therefore did not much discuss devolution or the empowerment of Autonomous Communities. However, when these state-wide parties relaxed this focus on economic issues in the early 1990s and began to be more concerned with issue institutional reform, group rights and gender issues, then they responded to the claims of nationalist parties and decentralisation became a more prominent topic in their programmes. We first demonstrated that parties that assign greater relative importance to the cultural dimension also tend to give decentralisation a greater role in their party programme. So, parties that talk about safeguarding traditional morals, combatting crime or protecting minorities, and that stay clear of issues relating to equality, state intervention and redistribution, also tend to pay more attention to decentralisation. Second, decentralisation is also more salient among parties that compete in a context characterised by territorial diversity.
More specifically, we found that the two components of territorial diversity work in two distinct ways. The potential threat represented by regionally-based ethnic groups tends to raise the salience of decentralisation among all state-wide parties, irrespective of their ideological profile. In contrast, the actual threat of regionalist parties raises the salience of decentralisation only among political parties that put greater relative emphasis on cultural matters. Thus, the presence of regionally-based ethnic groupthe main characteristic distinguishing homogeneous from heterogeneous societieswill produce an increase in the salience of decentralisation at the level of the party system. Parties are clearly sensitive to their strategic context when they decide the issues on which to focus. This finding is strengthened further when we consider this result in conjunction with the significant effect of systemic salience: parties increase their emphasis on decentralisation if other parties talk about it as well. However, we also found that in heterogeneous countries, a party's responsiveness to the electoral threat of regionalist parties is conditional upon its ideology: only parties which emphasise the cultural dimension will pay greater attention to decentralisation in response to strong regionalist parties.
These findings advance the state of our knowledge on the topicone strongly shaped by the recent contribution of Alonso (2012) -by showing that a party's overall ideology, in particular the relative emphasis placed on the economic and cultural dimensions, shapes the importance that it assigns to decentralisation. By assessing our claims in a broader empirical universe of cases that includes homogeneous and heterogeneous countries, we are also able to show that decentralisation becomes prominent as territorial diversity increases, so as both regionally-based ethnic groups and regionalist parties become larger, but that parties' response to this diversity is conditioned, in part, by their ideological profile. of power in such systems cannot be limited to the narrow question of efficiency, but rather is subject to political contestation that will vary with the ideology of political parties and the structure of their electoral incentives, as shaped by nature of party competition and the articulation of the different identities of territorial groups living in a country. Note: * p<0.05, ** p<0.01, *** p<0.001; standard errors (clustered by party) in parentheses. See supplemental materials for robustness checks using alternative model specifications.
|
2019-05-17T14:38:55.103Z
|
2016-05-01T00:00:00.000
|
{
"year": 2016,
"sha1": "d9daecbe3ba9c4498d1cb2702651cbe8874e4c42",
"oa_license": "CCBY",
"oa_url": "https://nottingham-repository.worktribe.com/preview/776391/Toubeau26Wagner_SalienceDecentralisation_main%20doc_March25_2015.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "09bc036a1b9d474ed357abd42c8b9a5239b85a2e",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Economics"
]
}
|
197165902
|
pes2o/s2orc
|
v3-fos-license
|
Sc(OTf) 3 -CATALYZED C -GLYCOSYLATION OF β -DIKETONES. A FACILE ACCESS TO USEFUL PRECURSORS OF HETEROAROMATIC C -GLYCOSIDES
– Scandium tris(triflate) efficiently catalyzes C -glycosylation of β -diketones with glycosyl acetate. Elaboration of the β -diketo moiety in the resulting C -glycosides to heterocycles provides a flexible route to the C -nucleoside analogs.
INTRODUCTION
C-Nucleosides constitute a class of compounds containing a heterocycle connected to a sugar through a C-C bond, which is hydrolytically and enzymatically stable in contrast to the glycoside bond of the usual N-nucleosides.
Due to significant antiviral and antitumor activities exhibited by some of the members, considerable synthetic efforts have been devoted to these natural products and their analogs. 1 general synthetic strategy involves the initial installation to a sugar with a simple unit that serves as the progenitor of the desired heterocycle.β-Diketo moiety as the C(1) substituent is among the useful progenitors, from which various heterocycles could be derived.However, despite many different methods for C-glycosylation of malonate esters and β-keto esters, 2 only a few methods are available for the reaction of β-diketone derivatives. 3 previously discovered some prominent features of Sc(OTf) 3 as the promoter for C-glycosylation of phenol derivatives. 4,5f note, it catalyzes even the reaction of phenols (I) possessing a carbonyl functionality at the ortho position, which had been ranked as poor glycosyl acceptor.By analogy to the structure of such phenols (Scheme 1), we were interested in the possibility of C-glycosylation of β-diketone derivatives under Sc(OTf) 3 catalysis.
This paper is dedicated to the memory of Professor Kenji Koga.
I II
In this communication, we describe a facile procedure for the C-glycosylation of β-diketones by utilizing Sc(OTf) 3 as catalyst.
Utility of the resulting C-glycosides as the precursor to C-nucleoside derivatives is also illustrated with several examples.
Compounds (1) and ( 2) were mixed with the catalyst and Drierite ® in dichloroethane at -30 °C, and the mixture was allowed to warm up.TLC-analysis showed consumption of 1 at -25 °C (5 h), and quenching gave the desired C-glycoside (3) in 84% yield in the α/β-ratio 10/1.The isomers could be separated by silica-gel chromatography, and the anomeric stereochemistries were assigned by 1 H NMR spectra; the J H1,H2 is 0 Hz for 3α, and 9.4 Hz for 3β, and n.O.e. was observed between H1 (5.18 ppm) and H6 (1.24 ppm) in 3α.
NMR spectrum also showed that the β-diketo moiety entirely exists as the keto form for each isomer, which indicates that steric congestion around the C-glycoside bond hinders the molecule to adopt the planar enol form. 7-Glycoside (3α), preferentially formed at lower temperature, underwent gradual isomerization to the β-isomer, as the reaction temperature was raised (runs 2 and 3).The α/β-ratio was reversed, reaching to 1/5 at 15 °C, which implied the existence of equilibration between the isomers.Indeed, the isolated β-isomer (3β) underwent partial isomerization, upon treatment with diketone (2) (2 equiv.)and Sc(OTf) 3 (25 mol%) [dichloroethane, Drierite ® , 15 °C, 18 h], giving 1:5-mixture of 3α and 3β.Table 2 shows that the present C-glycosidation method is applicable to various combinations of glycosyl acetate and β-diketone.6,8 Though the final α/β-ratio depends on their structures, a general trend for isomerization decreases with the bulkiness of the substituents on the carbonyl carbons in the β-diketo moiety (R).
The α/β isomerization probably takes place by ring opening-reclosure at the C(1)-oxygen bond via the acyclic intermediate (A) and/or heterolytic disconnection-recombination of the glycoside bond via the ion pair (B) (Figure 1). 9 Considering the planarity of the β-diketo moiety in A and B, sever steric interference would be exerted between the sugar moiety and the substituent R in the transition states leading to such intermediates.This could account for the observed tendency of the isomerization.It is worthwhile to note the X-Ray structure and behavior of β-C-glycoside (10β) obtained by the reaction of 8 with 1 (run 4 in Table 2).In this case, the resulting β-glycoside (10β) existed in both keto and enol forms (keto/enol = 1.2/1), which was different from the C-glycosides of other runs in Table 1 and Table 2. 7 Although C-glycosides (10α) and (10β) could not be separated by silica gel chromatography, recrystallization of the mixture gave a pure sample of 10β in keto form (10β-keto). 10 This compound, at least in the crystal lattice, adopts the conformation in which the π-faces of the carbonyl groups are not coplanar [the angle between dipoles defined by both carbonyl groups is 141.0°], unfavorable to the isomerization. 11urthermore, NMR spectral analyses exhibited that the energy barrier of tautomerization is high.When a solution of 10β-keto in CDCl 3 (NMR tube) was allowed to stand at room temperature, the keto/enol-ratio reached only 13/1 in two days and 2.0/1 in two weeks.Pyrazoles (15α, 16α, and 16β) were obtained as single isomers as expected, while small quantities of the undesired isomers concomitantly formed in the synthesis of isoxazoles (17β, 18β) and pyrazolopyrimidine (19β).
Since these heteroaromatic C-glycosides, once formed, were configurationally stable under the reaction conditions, formation of the undesired isomers could be attributed to the isomerization of the starting material during the reactions.Although the anomeric stereochemistry is not always retained perfectly in the formation of the heterocyclic aglycon, these results demonstrate that the C-glycosyl β-diketones, obtained by the present C-glycosylation method, could serve as the useful precursors to a variety of C-nucleoside analogs.
In summary, Sc(OTf) 3 catalyzed C-glycosylation of β-diketones in high yields, and the β-diketone moiety in the C-glycosides could be easily converted to heterocyclces by conventional methods.Because of its facility and efficiency, the present method will find utilities in the synthesis of C-nucleoside analogs of biological interest.
time at T °C.b) The reaction was warmed to -25 °C for 5 min.c) The reaction was warmed to +5 °C for 1.5 h.d) The reaction was warmed to +15 °C for 0.5 h.
|
2019-03-22T16:17:30.187Z
|
2005-12-31T00:00:00.000
|
{
"year": 2005,
"sha1": "8969c893842bc7f54fd76ff1c588a7e866c9e065",
"oa_license": null,
"oa_url": "https://doi.org/10.3987/com-05-s(k)65",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4326486d1277f57de25316eacd63b41b21bc3343",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
256576964
|
pes2o/s2orc
|
v3-fos-license
|
A Dutch Study of Remarkable Recoveries After Prayer: How to Deal with Uncertainties of Explanation
This article addresses cases of remarkable recoveries related to healing after prayer. We sought to investigate how people who experienced remarkable recoveries re-construct and give meaning to these experiences, and examine the role that epistemic frameworks available to them, play in this process. Basing ourselves on horizontal epistemology and using grounded theory, we conducted this qualitative empirical research in the Netherlands in 2016–2021. It draws on 14 in-depth interviews. These 14 cases were selected from a group of 27 cases, which were evaluated by a medical assessment team at the Amsterdam University Medical Centre. Each of the participants had experienced a remarkable recovery during or after prayer. The analysis of the interviews, which is based on the grounded theory approach, resulted in three overarching themes, placing possible explanations of the recoveries within (1) the medical discourse, (2) biographical discourse, and (3) a discourse of spiritual and religious transformation. Juxtaposition of these explanatory frameworks provides a way to understand better the transformative experience that underlies remarkable recoveries. Uncertainty regarding an explanation is a component of knowing and can facilitate a dialogue between various domains of knowledge.
Introduction
Julia was diagnosed with post-traumatic dystrophy in 1990 (also known as CRPS) and became Dick's patient in 1992. She had pain in the right side of her body and was wheelchair bound. As a general medical practitioner (GP) Dick had a moderately large practice in a rural region of the Netherlands. His patients had various socio-economic backgrounds. He knew Julia's medical history well. In 2007, after 17 years of suffering, Julia and her husband took part in a prayer healing session, that was organised by a well-known Dutch evangelist. After the prayer Julia stood up from her wheelchair and started walking around without a trace of pain. Her physical condition has remained stable during the past 15 years. Dick was pleased but also intrigued by Julia's sudden full recovery. In search for an explanation he conducted a literature study but came up empty-handed. His inquiry led to research, supervised by an interdisciplinary team, consisting of a theologian, a psychiatrist-philosopher, a social scientist and a qualitative researcher in the field of medical humanities.
The turn to patient-centred medicine has been accompanied by an increased interest in the spiritual needs and beliefs of the patients (Mezzich, 2011;Vander-Weele, 2017). This is reflected in publications about the influence of spirituality on wellbeing and other measures of quality of life. Some of these studies focus on healing after prayer (Banerjee et al., 2014;Miranda et al., 2019). With healing after prayer (further HP) we mean that a person's health improved after intercessory, individual, or other types of prayer.
In Western countries HP was considered to be controversial as a field of medical and social research for a long time (Andrade & Radhakrishnan, 2009), but the number of publications that address the positive effects of prayer on health is steadily growing today (Shattuck & Muehlenbein, 2020). Most of the available empirical research on HP has been conducted with the use of Randomised Control Trials (RCTs) (Hodge, 2007) and usually reflects scepticism about the positive effect that prayer can have on a person's health (Roberts et al., 2009). Only a handful of published empirical studies make use of qualitative methodologies (Austad et al., 2020;Harris & Koenig, 2016;Helming, 2011).
In RCTs, prayer is usually operationalised as an intervention, with a possible cause-effect (or even dose-effect) relationship between action and outcome. Some concerns about RCT as a suitable method to study HP are based on a large variety of HP-practices and the validity of operationalisation (Chibnall et al., 2001;Pagliaro et al., 2018). Can prayer be conceived as an act that is demarcated in time and can it be quantified in terms of frequency, strength, fervency, numbers of intercessors or who prays to whom (see e.g. Klitzman, 2022)? Besides, the outcome may extend beyond the usual, clinically measurable variables, and encompass changes in body, mind and spirit (Kruijthoff et al., 2021a(Kruijthoff et al., , 2021b. In short, underneath the epistemological question how to study HP (De Aguiar et al., 2017), lies the conceptual issue how to understand a phenomenon that does not fit well with the currently dominant biomedical paradigm, that is based on the presumed duality between body and mind.
Theoretical Perspective
The challenge of the choice for a theoretical framework that can help studying and interpreting HP cases from a multidisciplinary perspective, lies in the absence of developed theoretical approaches that match the existing data (Levin, 2020). Reports on remarkable recoveries range between cases that are (un-)related to HP but are medically verified (Engebretson et al., 2014), cases that are described on the Lourdes pilgrimage site (François et al., 2014) and self-reported narrative accounts of patients, to name just a few. The authors who endeavour to provide an explanation for HP, usually take an eclectic approach. Barasch (2008) makes an attempt to summarise the processes that can have influenced remarkable recoveries, such as psychosocial interventions (Spiegel et al., 2007), biological modifiers, diets, psychological states like mindfulness or meditation (Rediger & Summers, 2007), immune responses and social connections. He also points to the lack of thorough accounts of the cases, and the difficulty to replicate the conditions under which these recoveries took place (see also Rediger, 2021). These accounts indicate that biomedical, biopsychosocial and even holistic explanatory frameworks can be of use when addressing certain physiological, lifestyle and relational aspects of remarkable recoveries (see, e.g. Engebretson et al., 2014 about social connections), but they come short in describing the spiritual or the transformative experiences of the patients.
Communication about remarkable recovery is a challenge of its own. Cases of medically unexplained symptoms (MUS) can be instructive about how this kind of communication unfolds. The interaction between patients and doctors is crucial in situations where diagnosis, treatment and recovery prospects do not fit within the mainstream clinical practice. It can result in the patient being treated as an "unreliable narrator of bodily events" (Scarry, 1987). In the absence of an evidence-based explanation for the symptoms of a disease or a sudden recovery, the patient-doctor interaction can be characterised by conflicting feelings of uncertainty or hope on the 1 3 part of the patient, and mistrust or even animosity on the part of the doctor (Greco, 2012(Greco, , 2017. The positions that patients and doctors find themselves in, can affect the credibility of both parties. Safe ways out to explain a remarkable recovery from the point of view of the specialist, are to admit that the patient was misdiagnosed, or to describe the condition as self-resolving, or to suggest that the recovery is nonreplicable (Barasch, 2008). The patient can feel torn between relief and fear that the recovery is only temporary, and not knowing what to further expect from the doctor. The doctor can become nervous and start second-guessing the diagnosis that was made in the first place (Salmon et al., 2007). In such cases the consultation can turn into a battle (Wileman et al., 2002) about whether the recovery has actually taken place or, broader still, about the legitimacy of the parties to ascertain the improvement.
The examples with remarkable recoveries and MUS demonstrate the same shortcomings where explanation and communication are concerned. The theoretical approaches and communicative tools that are used to interpret and discuss these cases, do not address the spiritual aspects of healing, even though the positive influence of spirituality on physical health is well-established (Koenig, 2015;Thoresen & Harris, 2002).
Our conceptual framework is built on a combination of approaches: positive health, horizontal epistemology, which addresses amongst others the asymmetry in the doctor-patient interaction, and trans-somatic recovery, that allows to place the recovery in the context of the person's spiritual development.
The framework of positive health focuses on agency and the adaptability of the patient, who may still be able to live a good life, after having been diagnosed with a chronic condition. Recovery is defined as 'the ability to adapt and self-manage in the face of social, physical and emotional challenges' (Huber et al., 2011, p 343). This definition reaches beyond the healthcare system, since it includes non-health factors (Andersen & Knudsen, 2015), such as life-events, identity-forming and sensemaking. Attention to self-management in the face of the challenges that a positive health-framework promotes, allows us to link our study to the field of research on biographical disruption and identity (Bury, 1982;Charmaz, 1983Charmaz, , 1995. Our attention will however be less on the biographical disruption as a consequence of a medical diagnosis, and more on the disruption resulting from spontaneous recovery followed by restoration of the self (Locock & Ziebland, 2015).
Horizontal epistemology (Abma, 2020) refers to the way of knowing where a hierarchic division between various types of knowledge (scientific, expert, experiential) becomes restrictive. Fricker (2007) has pointed to the epistemic injustice of hierarchic systems of knowledge, where certain people are being systematically wronged in their capacity as knowers and denied the possibility to tell their story (Carel & Kidd, 2014). When certain perspectives and types of knowledge are structurally left out of the process of knowledge production, this will lead to a limited understanding of our world.
Horizontal epistemology suggests that different epistemic perspectives and different types of knowledge should be dealt with as equally important in the interpretation of research findings. The approach has two advantages: it includes experiential knowledge as a legitimate source of knowing (Sturmberg & Martin, 2008) and it allows for a broad dialogue between various types of knowledge and knowing, including tacit forms of knowledge (Polanyi, 1958).
Horizontal epistemology is performative by nature; it is enacted in the interaction between various discourses about illness and recovery. It generates new insights by bringing various disciplines and stakeholder perspectives together, based on empirical data. On a positive side, it is transformative to our understanding of complex phenomena, and it broadens the existing explanatory possibilities of complex cases. On a challenging side, horizontal epistemology is rooted in interpretation of data, which includes interpretation by the researcher, who uses personal experiences as a source of knowledge and explanation. Here the role of the researcher is not that of a distanced impartial investigator. That is why ethical and emotional aspects of knowing carry a heavy weight within horizontal epistemology (Abma, 2020).
Horizontal epistemology entails the possibility that the medical specialist is no longer the (only) person who decides whether recovery has taken place. In fact, the self-reported functionality of a patient can outweigh the available medical readings (Kruijthoff et al., 2021b). This leads to a broader issue, whether recovery can be understood on the basis of untraditional somatic explanations. There is a wealth of critical literature about how patients and doctors use somatisation in order to explain a condition that is not supported by the available medical measurements (Greco, 2017;Salmon, 2006). Following this logic, in order to be legitimate, recovery should be substantiated by quantitative somatic measurements or by standardised verbal reports of the patient's experience. Recoveries that cannot be measured or articulated by standardised means, represent a challenge to explanation.
To do justice to this complexity, we frame our findings in terms of trans-somatic recovery. With this modifier we aim to highlight dimensions of recovery that go beyond its customary physical and mental characteristics. The term draws attention to the transformative and transcending nature of the recovery experiences.
Transformative recovery refers to healing experiences that extend beyond the functionality of body and mind. The term refers to instances in which healing leads to existential self-reflection, spiritual development, and/or religious transformation. Trans indicates the transcending aspect of recovery, understood as a process that brings patients to a level at which they can see their existence from a new overarching perspective. The transformative experience places their existence in a different light. The new perspective does not erase or replace other experiential dimensions. It includes them and transforms them into a new, meaningful, but sometimes disruptive, experience. Such experiences can vary from physical sensations, mood changes, experiences of improved health, to feelings of belonging to the universe or of undergoing a radical change, for example due to an encounter with God (Austad et al., 2020;Lundmark, 2010). All these experiences can unfold simultaneously and influence one another.
All in all, trans-somatic recovery does not imply that the body has become nonessential or subsidiary to other aspects of life. It rather means that there exists no hierarchy between the various dimensions of recovery, including the spiritual dimension, and that we should focus on discourses that do justice to the inclusive nature of transformative and transcending healing experiences.
Methods
The findings presented in this article form a part of the second author's PhD study. The full design protocol of that study has been published elsewhere (Kruijthoff et al., 2017). It is defined as a retrospective naturalistic case-based study (Abma & Stake, 2014) and consists of a preparation phase, that includes data collection, followed by three phases of analysis: medical assessment, qualitative data analysis and interdisciplinary meta-analysis. Most of the results of the PhD study have been published already (Kruijthoff et al., 2017(Kruijthoff et al., , 2021a(Kruijthoff et al., , 2021b(Kruijthoff et al., , 2022a(Kruijthoff et al., , 2022b. For the PhD study two sets of data have been used: medical records of (former) patients and transcripts of qualitative interviews with the patients. In this article we use the second set of data and focus on qualitative data analysis of the 14 interviews.
Procedure and Participants
In 2016 a Dutch newspaper announced that the Faculty of Theology of the Vrije Universiteit Amsterdam would, in cooperation with researchers from the Amsterdam University Medical Centre, location Vumc, be supervising a PhD study, which was being conducted by a GP, on the topic of healing after Christian prayer. The article generated a huge response, both positive and critical. In due course the second author received 83 reports from prospective respondents with accounts about their HP (for the detailed overview of all cases and the follow-up data that was collected in 2019 and 2021 see Kruijthoff et al., 2022aKruijthoff et al., , 2022b. The research protocol (Kruijthoff et al., 2017) describes in detail the criteria for inclusion in the study: the participants must have a well-documented medical history, followed by subsequent recovery related to Christian prayer. Based on these criteria 27 cases were presented for review to a medical assessment team, consisting of five medical specialists in the fields of internal medicine, haematology, surgery, psychiatry, and neurosurgery. According to the research protocol they represent a variety of ideological backgrounds, both agnostic and religious, in order to minimise bias (Kruijthoff et al., 2017). None of them consider HP as a medical intervention. All available medical files were collected -with written informed consent of the participants -from the medical institutions and hospitals where they had been treated (for the detailed overview of the 27 cases see Kruijthoff et al., 2022a).
The medical assessment team marked 14 (out of 27) cases as possibly medically remarkable or unexplained, and selected them for in-depth interviews. The term 'medically remarkable' refers to a healing which is surprising and unexpected in the light of current clinical and medical knowledge and that has a remarkable (temporal) relationship with prayer, while 'medically unexplained' indicates that no scientific explanation could be found at the time of assessment (Kruijthoff et al., 2017).
Subsequently the first author conducted semi-structured interviews with the 14 participants in 2017-2019. The transcripts of the interviews form the primary data for this article. For the participants' characteristics see Table 1. The participants are women (N = 9) and men (N = 5), between 29 and 71 years old. They are all white Dutch (N = 13) and Belgian (N = 1) citizens. The duration of their medical conditions, prior to their healing, varies between 7 weeks and 30 years. The period between the healing and the interview varies between one and 16 years, on average 8 years. The medical conditions from which they experienced recovery are: cuff rupture of the shoulder, pelvic instability and one-sided deafness, Crohn's disease, cerebrovascular accident (CVA), iatrogenic aortic dissection, ulcerative colitis (N = 2) and psoriatic arthritis, multiple sclerosis (MS), anorexia nervosa, Parkinson's disease, drug-induced hepatitis, severe asthma and impaired hearing, alcohol addiction and posttraumatic dystrophy, and congenital hearing impairment. An analysis of these cases has been published elsewhere (Kruijthoff et al., 2022a(Kruijthoff et al., , 2022b. Some of the cases were analysed more extensively in detailed case studies (Kruijthoff et al., 2021a(Kruijthoff et al., , 2021b. The interview guide included: general background information, social and physical conditions during childhood, (professional) education, religious background, marital status, employment, history of the illnesses, symptoms before and after the recovery, a detailed reconstruction of the moment/period of recovery, including bodily sensations, the respondent's knowledge about HP prior to recovery, the time frame between the prayer and the experience of being healed, the reactions that the participants received to the recovery, the impact of the recovery on the participants' lives and the meaning they ascribe to the recovery.
The interviews were conducted at the homes of the participants (N = 13) and at the university (N = 1). The duration of the interviews was 1, 5-2 h; they were audio-recorded and transcribed verbatim. The final versions of the interviews were adjusted in accordance with the suggestions of the participants during a member check. Subsequently the interviews were presented to the medical assessment team for final evaluation.
Data Analysis
The first and the second author conducted the analysis of the interviews, which was inspired by the principles of constructivist grounded theory (Charmaz, 2008(Charmaz, , 2014. Use was made of ATLAS.ti software for open and focused coding. An iterative approach to the data collection and analysis has been applied. The insights obtained from the analysis of the first interviews and the feedback provided by the interviewer regarding non-verbal interaction, were discussed and incorporated in the later interviews. Hence, the question about personal sense-making in relation to the healing experience was posed more explicitly in the later interviews.
The main guideline during the open coding was interacting with the data and comparing the codes from different interviews that were generated by the two authors. In order to avoid bias, in vivo codes were prioritised. The research goal, namely to look for categories that would contribute to an exchange between various explanatory frameworks and allow for juxtaposition, guided the researchers during the process of comparing codes and notes. During the focused coding we intentionally searched for categories that could enrich or transcend monodisciplinary 1 3 discourses, in order to match the complexity of the data and to allow for an elaborated epistemological framework to emerge.
We started theoretical sampling by comparing our data with the medical explanatory framework that was available to our participants and to our research team (through participation of the medical assessment team). In search for theoretical saturation and led by the rich data at hand we eventually broadened our theoretical sampling by investigating whether the life-course, spiritual-quest and sense-making explanatory frameworks might answer our question as well. At that stage of analysis we used not only inductive but abductive logic of reasoning as well (Reichertz, 2019), which allowed for a better understanding of surprising findings (e.g. similar physical experiences by various participants) and emergent themes, like the role of miracles in the life of our participants, which 'invoked imaginative interpretations' among the members of the research team (Charmaz, 2008, p 157)). Our analysis pointed out to a juxtaposition of three explanatory frameworks: medical, life-course and religious and spiritual transformation.
Reflexivity
This study was initiated because of a personal experience and curiosity of one of the authors. Constructivist grounded theory does not demand from the researcher to be totally impartial during the research process, but rather to continuously reflect on how the researcher's perspectives and also the context within which research takes place, can be made explicit (Charmaz, 2008). Such reflexivity is in accordance with the demands of horizontal epistemology as well. To ensure that personal perspectives of the researchers do not determine the results of the analysis, the research team had regular meetings in the course of several years, during which they reflected on the results, the process of the research and their own role in it. The second author has remained in contact with the participants to date and informs them regularly about the progress of the research. One of the participants became a co-author of one of the published articles (Kruijthoff et al., 2021a).
Findings
A noteworthy feature of the interviews is the temporal correlation between the moment of prayer and the experience of healing. In 10 cases the actual healing was experienced instantaneously, and in four cases the onset of the healing started immediately after the prayer and then continued for several days or weeks. Most of the participants did not have any previous experience or detailed knowledge about HP. Those who attended a service (N = 8) had low or no expectations. The participants who prayed on their own (N = 6) asked for an end to their sufferings one way or another.
Each story about a remarkable recovery emerges from a combination of different discourses, which we summarised in three themes: 'authenticity of the illness and recovery (un-)warranted by medical discourse'; "remarkable recoveries in the context of biographical discourse"; and "feeling healed and whole again: discourse of spiritual and religious transformation". For schematic representation of the findings (see Fig. 1).
Authenticity of the Illness and Recovery (un-)Warranted by Medical Discourse
The first theme is about the role of the medical discourse and the certainty of explanations during the interaction between medical specialists and participants, from the perspective of the participants. They talk about a large part of their illness and recovery in clinical terms, using a biomedical explanatory framework. Although they are convinced that their recovery is associated with the influence of a divine source, each of them seeks medical confirmation for the authenticity of their condition and recovery. Each case starts with a history of the disease, hence large parts of the interviews contain meticulous descriptions of diagnoses and impairments, as experienced by the participants: I had osteoarthritis, abdominal pain, depression, I took 22 pills in the end, 60 mg morphine, prednisone. I had to be washed twice a week. Then … they scheduled CT-scans, bronchoscopy, breathing tests, blood tests. And I had braces on my hands and on my knee. Then I got a device at home with flasks of oxygen and medicines, and I had to put a tube into my mouth and then go to sleep. (P4) (participant four, see table 1) The use of medical terminology is abundant and appears to give more strength to the accounts of the participants, in order for their suffering to be acknowledged: I ended up in hospital with a hernia at L5-S1. And then there was a rheumatologist standing next to my bed, and they said, you have Bechterew and you will never recover. Later they reversed that diagnosis, but they said that my pelvis was totally broken… (P2) A noteworthy aspect of the last quote is the definitiveness with which, according to the participant, the medical specialist communicates the diagnosis, which is similar to experiences of other interviewees. For some of them this leads to taking decisions that worsen their condition. The participant with Crohn's disease hears that her condition is incurable when she is 24. Her reaction can be described in terms of diagnostic shock (Belgrave & Charmaz, 2015). As she puts it, she feels devastated, because she has other plans for her life. Her distress and unwillingness to accept the diagnosis makes her look for alternative treatment. She stops with the prescribed medication and embarks on a diet, which makes her condition worse. Looking back, she calls it a big mistake, but she emphasises that the certainty with which the label 'incurable' was given, was not helpful either.
A message about a chronic condition that is delivered unemphatically can cause, to use Hadler's metaphor, an erosion of dreams (1996) and stimulate a rebellious response, as with a participant who has a severe hearing impairment: In the hospital I was told: you cannot choose a social profession, because your hearing is severely impaired. I was 11. And I was such a social being! That clashed completely with who I was. So, I became defensive. I did not want to be deaf. And I wanted to stop not-wanting-to-be-deaf. (P13) This participant chooses for a profession for which interactive skills are indispensable, but soon she has to stop due to a burnout. The ways in which our participants make use of-and react to the medical discourse, demonstrate their dependence on it and at the same time their wish to regain control of their lives.
According to the participants, the reactions of the medical professionals to the announcements about HP vary from incredulity, anger and irritation to neutral contemplation or sincere curiosity. The participants expect joy from the medical professionals, but the majority is confronted with doubt: I had no complaints at all. [The doctors] didn't know what to say. I sat there and thought: if I were talking to a patient who says to me that he is feeling well, I would reply 'how nice and how did that happen?'. And now it was like 'we think that is very odd'. It became even crazier when the doctor suggested 'using maintenance medication.' (P9) The participant who had recovered from a cuff rupture of the shoulder just a few days before the scheduled operation, describes eloquently the reaction of his doctor: I think of waltzing over to that hospital, but …the specialist was super sceptical when I said that the operation was no longer necessary. He became furious and started throwing Latin names at me. 'Surely that muscle has not seen the Light!' He didn't give in, and I was not allowed to have an ultrasound, because 'that muscle could not be healed anymore'. (P12) The participant with CVA had a similar experience. His physiotherapist puts him through extra heavy physical tests on the walking belt, because, according to the participant's account, he does not believe in his recovery. This participant mentions how angry the physiotherapist becomes and his own determination to prove his recovery: 'I'd rather drop dead than stop.' At least half of the interviews contain this kind of examples. Based on them, we suggest that medical professionals regard it as a challenge to think beyond the scope of their clinical experience and the biomedical concept of the disease. In several cases, however, the professionals do accept that contemporary medicine cannot explain each and every clinical phenomenon. The recovery from ulcerative colitis of one of the patients was confirmed by a medical examination. According to the participant, the specialist who conducted the test was astounded by the improvement. Another doctor formulates it explicitly: 'Medically speaking I have to admit that something happened that I cannot explain. I cannot substantiate it, but this is what I see'.
Both participants and doctors keep looking for confirmation of the initial diagnosis and the recovery by using the medical explanatory framework, albeit for different reasons. The account of the participant recovered from MS is very telling: I used to go to the neurologist in a wheelchair, but that time we went on the motorbike. That was such a kick! I wanted a new MRI. …Then [the doctor] called and said: the MRI is unchanged; we will not retract the diagnosis. That was very important to my story. On the one hand, I thought it was a real shame, because I would have so much liked to have all those spots gone. That would have been visible, tangible evidence for me. On the other hand, I can function normally, so it doesn't bother me anymore. The only strange thing is that the neurologist never sent a letter to my GP. (P8) This example shows how various epistemic frameworks can juxtapose, while both the participant and the doctor are searching for an explanation of the recovery. A somatic examination can increase the trustworthiness of the participant's story, but it can also raise doubts about the correctness of the initial diagnosis. An unchanged MRI can lead to various conclusions: for the participant it is an additional proof of divine interference, for the medical specialist it entails the question of responsibility for the patient, who declares she is healed, whereas evidence tells otherwise. The possibility that the neurologist never sent a letter to the participant's GP can be seen as a sign of uncertainty, time pressure or simply as a lack of communicative skills in cases of medical uncertainty.
A few participants with measurable improvements, receive acknowledgment of their remarkable recovery. One of the doctors asked the participant for permission to follow the process of his remarkable recovery. Another doctor shared the participant's line of thought: He says: 'I have no explanation for it, I know one thing: we, doctors, really don't know everything'. I asked: 'What will you write down in your file?' And he wrote 'a spectacular improvement after prayer.' (P5) The medical staff seem to remain ambiguous about joining the celebration of their patients' unexpected recoveries. According to one participant, her doctor put it as follows: 'To be a doctor is not just to master the craft of treatment, it is about the art of healing. This is not an easy profession, and the theory does not always show you the way forward.' In addition, the fact that recovery occurs after a prayer, makes all parties uncertain about how to articulate it.
Remarkable Recoveries in the Context of Biographical Discourse
The second theme allows to look at the medical history of the participants from the perspectives of their life-course and spiritual development. Each of the interviews includes a life-story, where the recovery is placed into the social and cultural contexts of the participant's life and their relationships with others. It also contains an account of the participant's spiritual journey, including a detailed description of their experiences with HP. An in-depth analysis of one of the cases has been published elsewhere (Kruijthoff et al., 2021a).
Illness catches up with the participants at different stages in life and is often followed by a biographical disruption and changes in perception of self (Bury, 1982). In that respect their experience is not different from that of any other patient with a diagnosis of a chronic illness (Charmaz, 2000). Their life expectations come into conflict with the consequences of their debilitating condition. Therefore, some of them try to conceal their illness or to cope with its consequences, since they are unwilling to accept the label of for ever being a patient with a chronic condition (Hadler, 1996). The participant with MS diagnosis states bluntly: 'The moment you tell them, you will become Multiple Sclerosis' (P5).
The duration of the participants' impairments varies between months and decades. The coping strategies are often directed at preservation of the participants' psychological wellbeing. The participant with MS makes 'armed peace' with her illness, because she does not want to feel like a victim. The participant with a hearing impairment learns to hide her condition by lipreading so well, that people simply don't believe her when she finally reveals it. The participant with an inflammatory bowel disease convinces herself that she is meant to accumulate all the hereditary conditions of her family, thus allowing the others to remain healthy, because she is the one 'who can bear them best of all' (P9). The ability to be self-reflective is used as a coping strategy as well. The participants tell openly that coping with their condition takes its toll on their psychological wellbeing, their relationships with others and their self-image. Some of them have severe psychological complaints as well, like suicidal thoughts (P2), depression (P9), burnout (P13) or various forms of psychosis or addiction (P6, P14).
Each participant has had a connection with Christian religion since childhood, but only a few of them speak about having faith in their early years. Two of them bring up a faith-versus-autonomy issue, namely how making your own choices can coexist with faith. One of them remembers seeing God as 'dangerous', because people make themselves dependent on God and therefore cannot live their own lives. Another participant does not believe in God because as a child she found it 'too easy' to make yourself dependent on such a force, which can turn you into a weak person. There are two patterns that unite all the accounts: at some point in their lives the participants embark on a quest for 'their' God, who would satisfy their spiritual needs. They also keep their relationship with God separate from the church as an institution, resorting to a privatised form of religion: 'In church, I noticed, faith is something distant that you are told about, while for me it is something very personal'. (P9).
Some of the incentives to search for faith or to become converted, are feelings of loneliness, weak family ties, or previous experience with remarkable recovery. Several of the participants undergo changes in their faith, from unquestioning faith to faith that they call 'relationships with God', that meets their need to belong, to becoming part of a community. One of the participants explains it to a stranger as follows: I believe very simply in God, as a child. I have a place where I can cry out, vent my frustration, share my joy. God gives me strength, he protects me. The man had to laugh, and I asked: 'Do you have anything better?' He had to think, and then said that, in fact, he didn't. I said I'll stand my ground then. (P1) Another participant states directly: 'We simply need each other. Some people do that in church, and that's fine.' (P6).
Four of the participants connect their faith with the witnessing of miracles. They do not use the term 'miracle' as a technical theological notion. They refer to miracle as an unexplained positive event, something transcending the rational world they are living in, an opening into a spiritual dimension. One of them witnesses the remarkable survival of a family member after a car accident. When a passer-by prays for the victim she comes back to life, after which our respondent embarks on a quest for his 'relationship with God'. A participant who has recovered from anorexia nervosa, considers her own recovery to be a miracle and although she is critical about the church as an institution, she starts believing in God after that. In fact, all participants consider their recovery to be a gift of God.
The medical, life-story and spiritual-quest discourses come together in a description of the moment when the healing takes place, which is central in all interviews. Initially none of them sees a connection between the possibility to recover and faith. When the recovery takes place, it comes unexpected and can therefore not be interpreted as a result of high expectations. The attention to the somatic symptoms that the participants provide in the description of their medical condition, stands in contrast with the description of the prayer-moment and its consequences, whereby the physical sensations form only a part of the entire healing experience. Although each of the 14 healings is experienced differently, the discourse that the participants use to describe them, can be called poetic. It is affective, full of metaphors and often refers to the sensation of being freed from something malignant: I have wondered many times, what is trapped inside me? And when they prayed for me, someone put his hands on my back. Later I felt as if my back was completely bruised, as if someone had drawn two claws from it. It felt like something had been ripped out. Later I thought, apparently there was something that I was suppressing with medication, but that is no longer there. Yeah, it sounds a bit crazy… (P9) The participants tell us about the affective side of the healing, that was experienced as being 'touched inside your head and feeling a slow current going from your toes through the entire body' (P2), 'a sudden feeling of joy and the warmth of a hand felt on the exact place' where the aorta was damaged (P7), the feeling of quiet and such a profound peace within, 'as if somebody had wrapped a blanket around me and I felt that I am allowed to be' (P4), 'a large warm cloud, and the feeling that something is happening now, as if a small net has been taken away from my brain' (P8). The last quote belongs to the participant with Parkinson, who adds: 'It seems as if God has operated on my head', an interesting addition that can be seen as an attempt to reconcile the medical and spiritual discourses from an overarching, transcendent perspective.
Feeling Healed and Whole Again: Discourse of Spiritual and Religious Transformation
The third theme addresses the transformative power of healing: the changes in selfimage of the participants before and after their recovery and the meaning that they give to the healing. The transformation of the self-image can undergo gradual as well as abrupt changes from the period before the diagnosis, during the disease, which is characterised by a partial loss of self, and after the recovery, resulting in a restored self (Charmaz, 2000). The onset of the disease shows how personality features become somatised, i.e. dependent on physical manifestations of the body. The participants become their disease (Dings & Glas, 2020;Hadler, 1996.) According to the accounts, the participant's spiritual needs are felt to be disconnected from the malfunctioning body. This detachment is underlined by the medical treatment, which is directed at physical recovery. Thereby, albeit unwillingly, both participants and doctors put the body-mind dualism into effect: I didn't feel good in my body at all. From my 16 th they stuffed me with prednisone, which worked very fast, but because of it I gained a lot of weight in no time. Such a big face. And when I looked in the mirror I thought I did not feel the way I looked. That was very crazy. (P9) You become a different person because of the disease. Everything turns around in your body, completely, also your feelings. You become kind of selfish, because you are in so much pain. (P11) Becoming the embodiment of your own disease is often a devastating process for the participants. But their ability to survive is tested even more after their recovery. The participants have to reinvent themselves, which paradoxically is Journal of Religion and Health (2023) 62:1731-1755 experienced as a burden. They have to leave the safe cocoon, as one participant puts it, where she had total control: You have been outside the society for 14 years and it was quite a job to come back. I thought getting ill was a job. I believe it took me three years to surrender to it. But recovering took time as well. Because you had your own world, but now people expect you to be there. …I felt overwhelmed... (P2) You're 46, like a beautiful dead bird, you can't do anything anymore. Eventually you climb up again a little bit. But you can do about 30 percent of what you did before. 24 years go by, and you are 70 [recovery date]. And then you must find out who you are. At 70, that's a tough matter. (P1) All participants are happy with their recoveries, but physically and mentally, they need time to extend their everyday life space beyond the restrictions imposed by their medical conditions. Their experiences, to use Bury's terminology, can be seen as a second biographical disruption, or as a first step towards restoration of one's wholeness. The participants feel the need to overcome uncertainty, to participate again. But this also means to be open to all kinds of reactions from their surroundings, including suspicion, distrust, jealousy and direct accusations of being a fraud or being a conduit of the devil's work: I had a lot of doubts about what was being said, am I healed, am I an impostor? 'Was it not all in my mind?' Because I heard that too: it was just in your head. I found that so difficult. Because how can I prove it? …Now I've learned, I don't have to explain. People can believe it or not. My family believes it. I believe it. (P10) Negative reactions to the recoveries within the church communities, families and among friends are mentioned in each of the interviews. The participants have to cope with an overwhelming number of questions, including self-doubts. Unsurprisingly, three of them end up with a burnout soon after their recovery, and at least two of them make use of psychological help and support.
Still, the positive transformative power of recovery appears to outweigh its challenges. More things are healed than the debilitating physical conditions, for instance the doubt whether the participant is worthy to be in this world, to be God's child. Before the recovery some of them felt that they were not allowed to belong or to be special, like the participant who had been intimidated by his parents during his entire life because he was born as a boy and not a girl as they had expected, or like another participant, who questions her right to accept the healing: 'I was standing onstage completely petrified, afraid that [disease] would come back, that I'm not good enough'. (P13).
This last participant learns to harness her uncertainty by straightening out her relationship with her mother, for whom she has become a social worker. Another participant takes the difficult decision not to see her sister anymore, because she feels she is being used by her. Another one speaks openly for the first time about things that felt wrong in her parental home, which gives her a feeling 'as if the 1 3 sky was falling'. The participants seem to experience the healing as just a first step in their pilgrimage towards feeling whole again.
The participants give a meaning to their recovery in relation to their life goals and future work, which leads to restoration of their selves that were temporarily lost to the illnesses. The pattern that emerges from the analysis comes close to a holistic outlook on life. The post-anorexia participant feels that her 'mind and body have completely reunited'. All the participants maintain that their physical condition will remain stable, and that they are now concentrating on a more profound transformation after 'being touched in your head or in your heart' (P9): God goes a little deeper. …It's not just a physical healing, but it touches the soul. It is a relationship. This is not a doctor who does an operation. God gets really close. I think there is a lot more to heal within me too. (P13) All the participants feel strengthened in their faith after their recovery. The majority feel a more profound connection with the world than before, which transcends the materiality of their existence. They give testimonies about their healing both within and outside church communities. Many have published their testimonies on the web or have written books about their experiences. When asked for clarification about their recovery, our participants react differently. Some of them are still looking for answers: 'I still notice that this is the only thing I feel lonely about, because I'm so happy, but I have so many questions!' (P6). Others simply feel content: I am not interested in explanations. I've stopped trying to find any. Healing comes from God, because I don't know anybody else who could do it. (P8)
Discussion
The article presents the analysis of 14 cases of medically remarkable healings after prayer. Two aspects unite these cases and at least one distinguishes them from the studies on MUS or the placebo effect. Firstly, our cases follow a non-medical intervention, which sets them apart from recoveries within a clinical context. Secondly, the recoveries have a transformative power on various aspects of the participants' lives, including their spiritual development. This second aspect unites our cases with other types of recoveries, like spontaneous remissions, which have been described elsewhere (Radin, 2021).
Since their remarkable recoveries, most of our participants chose to become engaged with their social environment: they do community (voluntary) work and use their own experiences in order to help others. They also engage in conversations about the transformative power of their recoveries (Levin & Steele, 2005), by making the accounts about their recoveries public (see e.g. Doodkorte, 2016). Most of them see it as their calling to spread the word about the extraordinary experiences they have gone through, whereas others are still searching for answers to questions like 'why me?' and 'how can I share this gift with others?'.
To do justice to these complex processes, we developed a study design, based on several frameworks, including grounded theory and horizontal epistemology. By doing so we remained as closely as possible to the accounts of the respondents about their medical conditions, but also broadened the interpretative framework stepwise, by subsequently adding new perspectives: a biographical perspective, including the histories of spiritual development and the role of the life-events that shaped the patients' views on life; a self-experiential perspective, with detailed descriptions of the healing, focusing on emotions and bodily sensations; and a spiritual perspective, including the patients' personal views about God and their effect on their faith. A juxtaposition of the perspectives can be productive, even when they do not line up. This reaches the surface in the divergent reactions to the remarkable recoveries, including the reactions from the participants themselves. Doubt and confusion that the patients and doctors express, resonate with some of the responses within the church communities to which our patients belong, and also among their friends and families. Disbelief, suspicion or even jealousy of people who prayed but did not heal, emphasise the limitations of the cause-and-effect logic, which, in the Western cultural climate of naturalism offers little room for the unexpected (Jüngel, 1977).
Our framework, which contains medical, life-course and spiritual-quest discourses, emerges empirically and points at uncertainty as an important issue in both medical and spiritual reactions to HP. In the method-section we referred to inductive and abductive types of reasoning that we had used in order to understand unexpected and sometimes surprising examples in our data, like the temporal coincidence between recovery and HP. Abductive logic provided us with an opportunity to question existing (for example psychosomatic) explanations, and, while using logical inference, remain open for an unexpected insight (Reichertz, 2019).
Both doctors and patients are uncertain about how to deal with a remarkable recovery and how to integrate the discourse of spiritual development into the history of illness and recovery. Uncertainty does not fit well within the prevailing medical epistemology (Miles, 2009). This is somewhat surprising, because, as Fox points out, uncertainty is inherent in medical research and practice (2002); it has been present in the medical-sociological discourse since the work of Parsons (1951) and has been described in-depth in a number of publications (see e.g. Fox, 2002 on epistemological uncertainty and critique of evidence-based medicine; Han et al., 2021).
Scientific and technological progress in medical sciences has not eliminated the uncertainties within the available knowledge and explanations, but rather is making them more complex (Fox, 2002). That is why the patients may be left looking for their own sources of explanation, where medical explanation comes up short. The persons who have experienced a spiritual journey may well frame it in terms of a miracle (unexplained but positive). They can illustrate the trans-somatic aspect of their healing by showing the transformative effect that HP has had on their spiritual development and how a new transcendent dimension has been added to their lives. For the patients, healing is much more than a repair of a bodily function. It underscores the necessity of what Miles calls medicine for the whole person, which implies that disease is just a partial aspect with respect to a person, and that not everything that '…is right to the disease is automatically right for the patient' (2009, p 944). In order to cover the full complexity of HP, follow-up studies are required, where cohesion of the physical, mental and spiritual aspects of recovery can be elucidated with the help of theological and philosophical theoretical perspectives. This study has practical and academic implications. Firstly, we should look critically at the interaction between the patients and medical professionals, the persistent asymmetry of which has already been addressed in literature (Pilnick & Dingwall, 2011). Insight in the medical discourse can bring the patient closer to the medical specialist and ensure that they are on the same page where disease and treatment are concerned. But when unexpected healing takes place, confusion tends to take over. Our data suggest that in modern Western medicine we are hardly able to get a grip on such experiences of recovery. This can lead to self-suppressive and selfstigmatising behaviour on the part of the patients, with corresponding consequences for their mental and physical health (Charmaz, 2000).
There is no literature known to us about the language that is used during medical consultations where HP is discussed. We do see some similarities in the psychiatric literature regarding spiritual dimensions (Glas, 2021) and in the research on explanations that are used by doctors during consultations on MUS (Ring et al., 2005). Some authors focus on the psychosocial dimension of the disease (Stortenbeker et al., 2022). However, many patients feel offended by the association of their ailment with psychosomatic disorders. As Greco explains, labels such as 'symptoms all in the mind' touch on 'moral failure' and can 'imply that the illness is imaginary, fake or inauthentic, possibly even intentional' (2019, p. 104). The discourse surrounding HP touches on existential matters of life and therefore can be similarly ambiguous, and yet, based on our analysis, we advocate for making it part of medical consultation.
Secondly, the literature about the positive influence of spirituality and beliefs on health is abundant, but often overlooked in the Western medical literature reviews (Levin, 2020). The benefits of spiritual beliefs about health are therefore often wasted where medical treatment is concerned (Balboni & Peteet, 2017). Our analysis points out the importance of a multi-layered approach to the patient's history, whereby the medical history forms only a part of the entire picture.
It is a challenge to implement that kind of approach, because patient-centred care and the efficiency of care ask for more and for less time respectively. Patient-centred care (Epstein, 1995) has brought along opportunities and tensions at the same time. Greco (2017) presents an analysis of those tensions, raising amongst others the important question of accountability. Following Stengers (2008), Greco advocates 'creative accountability' which, given our analysis, we can translate into being open to tentative or provisional and therefore uncertain forms of explanations. In that way the explanatory framework for the cases of remarkable recovery can be presented as a process of co-creation, where patients, doctors and possibly other stakeholders together are in search of an inventive understanding of a recovery (Glas, 2019;Savransky, 2017).
Finally, we have demonstrated that horizontal epistemology offers a fruitful approach to study HP. Horizontal epistemology departs from the assumption that there is no clear hierarchy or meta-theory to demonstrate why some types of knowledge matter more than others to understand a phenomenon (Abma, 2020). Horizontal epistemology is contrary to vertical epistemology, in which it is assumed that certain types of knowledge are more true than others. Yet, it is impossible to prove this convincingly, because there is no meta-theory that can be used.
So far most research on HP is grounded in a vertical epistemology. As a result, studies favour medical evidence over patient experiences and over psychological, sociological and theological interpretations of HP. The benefit of horizontal epistemology is that different explanations as well as frictions between epistemic discourses, are welcomed and can form a starting point for learning. This has offered new insights in how patients use and appropriate various discourses, to cope with an unexplained healing and how this can lead to tensions with people around them as well as with medical doctors. Also, it has enlarged and deepened our understanding of HP and offered a starting point for dialogue and deliberation across epistemic discourses, also within our project's medical assessment team. We recommend that future studies of HP will be grounded in horizontal epistemology.
Limitations
This study has several limitations. We focused on cases of recovery related to Christian prayer only. This decision was made intentionally, in order to keep a clear focus on the subject at hand. It prevents us however from comparing experiences of people with different beliefs and of non-religious people. Furthermore, we are aware that our interpretations only mirror attitudes that are existing in the Western cultures, where all the members of the research team are living and working. Finally, due to our limited time and resources, we have interviewed former patients only. Their medical specialists were contacted with requests to provide the medical files only. It would be worthwhile to gather first-hand data from medical specialists about their experiences with remarkable recoveries and HP, in order to fully enact horizontal epistemology. Church members, friends and family members of the participants were not interviewed, which limits our understanding of the context within which our participants live.
Conclusion
Summarising, our analysis of the data allows us to see that in the effort to understand cases of remarkable recovery, we require a combination of discourses and interpretative frameworks that include uncertainty as a means of (not-)knowing. Each of the discourses and frameworks has its value and none of them can be sufficient on its own. In order to understand the cases better, transdisciplinary analysis is required, where various discourses challenge each other in a process of co-creation. Allowing uncertainty of the unknown into a consultation, confession or interview, can boost the inventive side of our ability to understand and explain these cases.
|
2023-02-05T06:16:33.931Z
|
2023-02-04T00:00:00.000
|
{
"year": 2023,
"sha1": "fcd8ca00f3b12d4ac373ba2d35beefc04f860824",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10943-023-01750-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "6e3a3dad13ea512e2e9f2f443a06cc7d181d1d84",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
198241990
|
pes2o/s2orc
|
v3-fos-license
|
A Survey of Fecal Calprotectin in Children with Newly Diagnosed Celiac Disease with Villous Atrophy
Background: Fecal calprotectin (FC) has been used as a diagnostic marker in intestinal inflammatory conditions. Objectives: As a few studies have been dedicated to assess the role of FC in coeliac disease (CD), the current study aimed to address this issue. Methods: This study included 70 newly diagnosed CD (Marsh score 3) and 70 healthy children. The study was performed at the pediatric ward of Amir-Al-Momenin Hospital in Zabol city, the southeast of Iran, during June 2016-September 2017. The FC level was determined using a specific ELISA kit. Results: Women constituted 64.3% (45/70) and 55.1% (38/70) of CD and healthy children, respectively (P = 0.1). Three was no significant difference in the mean age between children with CD (6.3 ± 3.4) and without CD (8.3 ± 4.5) (P = 0.2). The mean level of FC was significantly higher in patients (239.1 ± 177.3 μg/g) than in healthy controls (38.5 ± 34.6 μg/g, P < 0.001). The titer of anti-tTG was significantly higher in patients than in healthy children (205.9 ± 156.2 U/mL vs. 6.7 ± 2.1 U/mL, respectively, P < 0.001). There was a significant correlation between the FC level and anti-tTG titer (r = 0.611, P < 0.001). However, the correlation was not statistically significant between FC and age (r = -0.154, 0.07). The ROC curve analysis revealed an AUC value of 0.893 (95% CI: 0.827 0.960, P < 0.001). At the level of 50 μg/g, FC rendered the sensitivity and specificity of 90% and 92%, respectively, for the diagnosis of CD. Positive predictive value (PPV) and negative predictive value (NPV) of FC at this cutoff value were 95.5% and 90.5%, respectively. Conclusions: FC can be considered a screening complementary tool for detecting CD with high sensitivity and specificity.
Background
Coeliac disease (CD), known as gluten sensitivity, is an autoimmune condition characterized by the atrophy of small intestine. It has been estimated that CD affects 1% of the western population (1). CD affects a wide age spectrum encompassing pediatrics, young and elderly people. The diagnosis of CD is currently dependent on a combination of clinical, histological, and serological approaches (2). The susceptibility to CD is attributed to the presence of certain HLA alleles (HLA-DQ2 and HLD-DQ8) as the dominant factor in the development of CD. Adherence to a longterm gluten-free diet (GFD) is necessary for the management of CD.
Evaluating the disease activity in CD patients requires performing the screening tests routinely. In addition, the efficiency of new therapeutic strategies (such as gluten proteases and immunomodulators) for CD can be validated by using sensitive screening markers (3). Being highly invasive in nature, using sensitive screening markers is not amenable by intestinal biopsies and therefore, it necessitates the application of non-invasive markers. On the other hand, available serological markers of CD can be useful in the diagnosis phase; however, these markers have limitations for predicting relapse or remission during the course of CD (1,4).
Fecal calprotectin (FC) is a relatively new inflammatory marker for intestinal pathological changes. It has been used for monitoring common intestinal inflammatory diseases including inflammatory bowel disease (IBD), Ulcerative colitis (UC), and Crohn's disease (CD) (5)(6)(7). FC also elevates in colitis and gastrointestinal neoplasms (8,9). Accordingly, FC has been superior to traditionally available markers of inflammation such as C-reactive protein for re-flecting intestinal atrophy (10). Furthermore, FC has been suggested as a reliable marker for predicting the relapse of intestinal inflammation in IBD (8). FC also has the potential to be used as a point of care test by patients in their homes (5).
Objectives
The role of FC as a disease indicator in CD is uncertain.
There are a few studies on this issue representing inconclusive remarks. We aimed to assess the diagnostic capacity of FC in newly diagnosed CD children.
Study Population
This study included 70 newly diagnosed CD children recruited from the pediatric ward of Amir-Al-Momenin Hospital in Zabol city, the southeast of Iran. As controls, 70 healthy age and sex-matched children were recruited from the same geographical region. The sample size was determined based on the availability of newly diagnosed CD children and the report of Biskou et al. (11). The individuals with systemic disorders, family history of intestinal inflammatory disease, and history of taking a gluten-free diet (GFD) were excluded. The study was performed during June 2016 -September 2017. It was approved by the Ethics Committee of Zabol University of Medical Sciences. We followed the principals of the Declaration of Helsinki.
Serological Assessment
Blood samples were drawn (5 mL) from each participant in the morning. The samples were immediately transferred to the laboratory of the hospital where sera were separated by centrifuging at 5000 rpm. The serum samples were kept at -20ºC until use. The levels of IgA anti-tTG were determined using specific ELISA kits (AESKULISA tTg-A New generation, Germany). The titers of higher than 20 U/mL were considered positive (12, 13).
Intestinal Biopsy
Upper endoscopy was performed to obtain biopsy samples. Histological diagnosis was made based on the observation of villous atrophy in at least one biopsy from the bulb and four biopsies from the distal duodenum. The biopsies were observed by the same experienced pathologist. Only were those children with a Marsh score of 3 included in the study.
FC Measurement
Stool samples were obtained in the morning in sterile containers and stored in a freezer (-20ºC) until use. A specific ELISA kit (Calprotectin ELISA, EuroImmun, Germany) was purchased. The protocol was followed as noted in the manufacturer's instructions.
Statistical Analysis
Statistical methods were performed in SPSS 19 software. Shapiro-Wilk test was used to determine the normality of data distribution. Descriptive measures (means, standard deviations, and frequencies) were deployed to present the data. Independent sample t-test and Fisher's exact test were considered to assess any association between intended variables. Receiver operating characteristic (ROC) analysis was used to ascertain the validity of FC levels for the diagnosis of CD. The significance level was considered at P < 0.05.
Results
Overall, women constituted 83 out of 140 participants (59.7%). In the sample of children with CD, females and males constituted the ratios of 64.3% (45/70) and 35.7% (27/70), respectively. In healthy children, there were 38 (55.1%) and 32 (44.9%) females and males, respectively. The gender distribution showed no significant difference between the two groups (P = 0.1). The mean age of the participants was 7.3 ± 4.1 (range 1 to 18 years old). Three was no significant difference in the mean age between children with CD (6.3 ± 3.4) and without CD (8.3 ± 4.5, P = 0.2).
The mean level of FC was significantly higher in patients (239.1 ± 177.3 µg/g) than in healthy controls (38.5 ± 34.6 µg/g, P < 0.001). Accordingly, 90% of children with CD had the FC levels of higher than 50 µg/g while only 4.3% of the healthy controls showed values above this cutoff (Table 1). In addition, the titer of anti-tTG was significantly higher in patients than in healthy children (205.9 ± 156.2 U/mL vs. 6.7 ± 2.1 U/mL, respectively, P < 0.001, Figure 1).
There was a significant correlation between the FC level and anti-tTG titer (Figure 2A). However, the correlation was not statistically significant between FC and age ( Figure 2B).
ROC curve analysis revealed a high AUC value for the FC level regarding the diagnosis of CD (AUC = 0.893, 95% CI: 0.827 -0.960, P < 0.001, Figure 3). At the level of 50 µg/g, FC rendered the sensitivity and specificity of 90% and 92%, respectively, for the diagnosis of CD. Positive predictive value (PPV) and negative predictive value (NPV) of FC at this cutoff value were 95.5% and 90.5%, respectively.
Discussion
FC has been suggested as a potential biomarker for the diagnosis and evaluation of a variety of intestinal in-flammatory conditions. FC promotes important biological activities encompassing anti-microbial, antiproliferative, and immunomodulation functions (8). In the present study, we found that the mean FC level was significantly higher in children newly diagnosed with CD (239.1 ± 177.3 µg/g) than in healthy children (38.5 ± 34.6 µg/g, P < 0.001).
At the cutoff value of 50 µg/g, FC showed high diagnostic validity for CD (AUC = 0.893, 95% CI: 0.827 -0.960, P < 0.001). It has been asserted that the elevated levels of FC could appropriately distinguish between intestinal inflammatory conditions from non-pathological conditions (14). In a Canadian study, the mean level of FC in children with CD (Marsh score II/III) was 67.5 µg/g with a wide range (4.9 -3068 µg/g) at diagnosis. This value fell within the range of 1.11 -736.5 µg/g (mean 33 µg/g) after one year of GFD administration (4). In another report of 29 newly diagnosed CD children, it was revealed that the FC levels were significantly higher in patients than in controls (15). In a recent report, children with total villous atrophy showed higher FC levels (13.8 ± 9.3 mg/L) than those with partial atrophy (3.7 ± 1.8 mg/L) (15). Similarly, 17 children newly identified with CD had higher FC levels than healthy children (11). According to the report by Tola et al. (16), the mean value of FC was significantly higher in adults with CD than in healthy ones. Nevertheless, the elevated FC levels were observed mainly in those patients with active CD (55.6%) compared to individuals with treated CD (13.6%) (16). In two reports in adult patients with CD, the FC levels were not significantly different between newly diagnosed cases and healthy counterparts (17,18). In general, these observations highlight the potential applicability of FC for the monitoring and diagnosis of CD.
As another finding, we detected a strong significant correlation (r = 0.611, P < 0.001) between the FC level and IgA anti-tTG titer in the patients. Anti-tTG antibodies of IgA class are the most common and validated serological markers for diagnosis of CD. In comparison, no significant association was detected between FC and Marsh score, clinical symptoms, or anti-tTG titer in adults with CD (18). The levels of FC were higher in serologically diagnosed children (89.6 µg/g) than in histologically diagnosed children (51.4 µg/g), indicating a potential correlation between the FC levels and IgA anti-tTG titer (4). The levels of FC can be influenced by the clinical picture of CD as symptomatic children may show higher FC levels than children with no signs and symptoms (19). Accordingly, the FC levels were higher in newly diagnosed children in comparison with those under GFD (19). Intestinal atrophy seems to be a dominant feature influencing the FC levels in CD patients (15). Here, we detected markedly higher FC levels in the patients that all had villous atrophy (Marsh score 3) while previous reports incorporated children with lower Marsh scores (4,11,15). However, FC was associated with neither the grade of intestinal inflammation nor with the clinical picture of CD in a report by Montalto et al. (17). This may be due to the impact of some other individual, physiological, or environmental factors modulating the FC levels in patients with CD. Nevertheless, the incorporation of FC with serological findings can provide high diagnostic accuracy.
A point of concern regarding the use of FC in the monitoring of pediatric inflammatory diseases is uncertainty regarding a valid and consensus cutoff value. Some have suggested a cutoff value of 50 µg/g; nevertheless, the range of FC could be very wide that limits the FC diagnostic potential (7,20). In the current study, we found that the 50 µg/g threshold resulted in high sensitivity, specificity, PPV, and NPV (90%, 92%, 95.5%, 90.5%, respectively) for CD diagnosis. However, the elevated levels of FC may be diagnostic for intestinal inflammation, but its normal value may not necessarily exclude a pathological condition (21). Accordingly, it is suggested that the FC levels be interpreted taking into consideration other available non-invasive markers such as CRP, serological findings, and fecal lactoferrin (22,23).
Conclusions
FC can be considered as a screening complementary tool for detecting CD with high sensitivity and specificity. One of the main benefits of measuring the FC level could be obviating the need for performing invasive screening methods. However, due to the broad range of this parameter, there is a need to develop diagnostic criteria incorporating FC with other clinical and serological diagnostic features.
|
2019-07-26T07:23:47.220Z
|
2019-07-02T00:00:00.000
|
{
"year": 2019,
"sha1": "a5af66a71d743a4e34ce35101dd10d70df8a7a46",
"oa_license": "CCBYNC",
"oa_url": "https://brief.land/semj/cdn/dl/97d50656-b0f9-11e9-b686-ff2a6f87c799",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f2914a80be89b53642a016d4006b8c942f1c8877",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
247355214
|
pes2o/s2orc
|
v3-fos-license
|
Feature Selection for Analyzing Data Errors Toward Development of Household Big Data at the Sub-District Level Using Multi-Layer Perceptron Neural Network
— This research aims to analyze the patterns of data errors in order to fulfill the data required for household big data development at the sub-district level in Thailand. Feature Selection and Multi-Layer Perceptron Neural Network were applied, while the data imbalance was solved by the SMOTE method and the comparison between the CFS feature selection method and Information Gain (IG) feature selection method. Afterward, the datasets were classified the data errors by the Multi-Layer Perceptron Neural Network. Each model’s effectiveness was measured by the 10-fold cross-validation method. The research results revealed that the suitable data size after being adjusted data imbalanced was 400%. Once the data had been processed for developing the model, it was found that after being adjusted data size towards the application of the SMOTE, CFS feature selection technique, and classified data errors by the Multi-Layer Perceptron Neural Network, the model provided the highest level of effectiveness in data errors classification with an accuracy of 98.29%. Moreover, the application could effectively classify data errors and display the household big data at the highest level. The application evaluation results given by the experts and the users had an average mean of 4.69 and higher, a standard deviation of 0.47 and lower, which has the level of effectiveness of 93.78% and higher, while interquartile range values not over 1, a quartile deviation of no more than 0.5.
Introduction
The development of big data in the field of health, economics, environment, activities, developments, and household demographics is crucial for community development. This is because comprehensive and accurate data can demonstrate the community's genuine problems and demands in which the governmental agencies or responsible figures such as village leaders, subdistrict administrators, local people themselves, researchers, and the business sector can take advantage to solve the problems. Community demographics are considered a big data prototype linked with the national big data system, facilitating the data processing cycle and reflecting the genuine problems embedded in the data. Information is a highly valuable asset for any state or any agency because inequality issues between the rich and the poor could be solved by rapidly developing grassroots economies in a systematic and clear direction. Quite often, governmental development projects are not consistent or correspond with the local people's demand or cannot effectively solve problems. Community demographic data could contribute to the project proposals of the subdistrict development agencies since reliable data could be used for supporting their reports or budget proposals. Researchers, local people, and the public sector can altogether benefit from the data in terms of community and national developments.
In order to obtain the community data, it is necessary to get in touch or coordinate with community leaders. Primary data collection is required to develop big data for community development as well as the national development in each dimension. One of the most common problems while collecting community data is that the local people are hesitant to provide information. Even though both public and private sectors have tried to collect data from the local communities, local people rarely understand the overall picture because the analyzed data has not been accessible for the local people. Hence, they are reluctant to provide further information. As a result, the local people may not be able to see the whole picture or the real situations in their communities, leading to the inability to address the quick-response-needed issues. It is important to identify problems before resolving problems in the community.
Information technology at present offers many free services to establish online platforms and store data in the cloud system. For example, Google Forms can be instantly processed as long as there are internet connections. Although many areas lack internet access, it is still possible to record the data manually and then input it on Google Forms later. Collecting household data requires intervention in each household. However, some household leaders may lack literacy skills or have lousy eyesight, leading to an inability to read and fill in the questionnaire. Furthermore, this kind of project may also disturb the daily activities of the local people. Therefore, collecting household data for creating big data is considerably challenging. Nonetheless, when there is communication about the significance of data for local development, the local people are more likely to cooperate with the researchers. During the data collection process, it might have some problems. For instance, many data collectors were visiting each household, as shown in Figure 1. The data was recorded manually on the papers by data collectors before being input into the system, such as Google Forms by staff or officers. Therefore, the data was vulnerable to incorrect or repeated recordings. The fact that there are many respondents who may not have sufficient knowledge or literacy skills to understand the questions also leads to the same problems when inputting or processing data. Consequently, the charts in the application developed based on the deficient data may represent incorrect comparison results.
The goal of this study, hence, is to select features of data errors in order to support the system to fix data, provide accurate data for the users, and accurately predict features. As the collected dataset was small in size, the Synthetic Minority Over-sampling technique (SMOTE) was applied to adjust the data imbalance. Next, the dataset was used for selecting features towards two feature selection techniques, including Correlation-based Feature Selection and Information Gain. Once the features had been selected, the dataset was used for developing a model by the Multi-Layer Perceptron (MLP) Neural Network method. The model's effectiveness was measured by the 10-fold cross-validation method in order to use the model for developing a mini big data system of community information in Samut Songkhram Province, Thailand. The system is expected to be accessible anywhere and anytime by any user who has a smartphone. Local communities, governmental agencies, researchers, and the business sector can make use of this information for supporting and developing the communities in the future.
Related work
The related works for applying the feature selection for analyzing data errors toward developing household big data at the sub-district level using Multi-Layer Perceptron Neural Network were described as follows.
Synthetic minority over-sampling technique
Classifying data which includes more than one class can lead to data imbalance problems and, eventually, an inaccurate classification that inclines towards majority classes. In this research, the Synthetic Minority Over-sampling technique (SMOTE) was applied to resolve the data imbalance issues. It is an empirical over-sampling algorithm, which extracts artificial samples from the minority class by inserting nearby existent samples. It is a method that resynthesizes data by increasing the class's data size as much as the biggest class. A value was randomized to find the distance between it and every other value, and then the closest value was selected. The resynthesized data is represented in (1) [1] [2]. where: x nb is the newly synthesized data, x o is the original random data, nb x is the nearest value or neighbor data, R is the random value range from 0 to 1.
Feature selection
Feature selection is a technique that selects features of datasets before classifying them. There are many feature selection methods, but they all aim to select the significant data solely. The data whose features had been selected could be used for rapid model synthesization and effective data classification. In this study, two feature selection techniques were applied as follows.
1) Information Gain (IG) [3] is a feature selection technique which measures the gain value of each node. The node with the highest gain value will be selected as the root node. Then, the rest of the data will be measured again to find the next node. The information gain was calculated and represented in (2). where: Y is the feature value, which is a data class ranging between {Y 1 , Y 2 , …, Y n } where n is the number of features, X is the value of other features that are not classes ranging between {X 1 , X 2 , …, X n } where n is the number of features, Gain(Y; X) is the score value gained from sample randomization ranging between 0 and 1, H(Y) is the probability value gained from the randomization of Y samples, H(Y | X) is the probability value gained from the randomization of Y samples when compared to X. H(Y) and H(Y | X) are calculated in (3) and (4), respectively.
where: P(Y = y i ) is the probability value from y 1 to y k , P(X = y i ) is the probability value from x 1 to x k , k is the number of features.
2) Correlation-based Feature Selection (CFS) is a feature selection technique that depends on the link between the collections of features acquired from the assessment of feature prediction capacity utilized for data classification and inconsequential data management. CFS can rank the information subsets dependent on the information measurements and select the information subsets dependent on the information measurements concerning high and low connections between classes. For any immaterial data or any information with a low degree of relationship, they will be rejected, the equivalent with complex data dimensions which will be barred from the data dimensions with a high level of relationship. The equation for assessing the subsets of CFS data dimensions has appeared in (5) [4].
where: M zc is the value of data dimension subset which composes of k data dimension, k is the data dimension or features, r ff is the average value of the relationship of data dimension, r zf is the average value of the relationship between the variable and classes (f ∈S).
Multi-layer perceptron neural network
Multi-Layer Perceptron Neural Network consists of the input layer, hidden layer, and output layer. In each layer, there are nodes or processing units. Moreover, there could be more than one layer hidden in each layer. The network applies the supervised learning pattern, which involves both Feed Forward and Backpropagation techniques, allowing the network to learn how to classify complex data. This network is widely used in the field of medicine. The operation of the Multi-Layer Perceptron Neural Network starts from inputting data into the input layer, followed by delivering the processed data to the output layer, as illustrated in Figure 2. Data processing requires the sum of the multiplication of inputs and weights, as shown in Equation 6. Next, the outputs shall be calculated by the Sigmoid function (See Equation 7. The outputs in the hidden layer will be transferred to the output layer where the processed outputs and the target outputs are compared. If the difference is not acceptable, the outputs will be sent back by the backpropagation process to the hidden layer and the input layer, respectively. Meanwhile, they also enter the weight adjustment process before being tested with the data. Finally, the outputs are then calculated with the Sigmoid function once again [5].
where: n is the sum total of input P i multiplied by weight W i value, i is the number of inputs or weight value, x is to the input value.
Literature review
According to the previous studies on the solutions to data imbalance of big data, the most commonly used resolution techniques included Random Over Sampling (ROS), Random Under Sampling (RUS), and Synthetic Minority Over-sampling Technique (SMOTE), especially in case of the "double layer" scenarios. Nevertheless, with regards to big data cases, multiple layered imbalances have not been comprehensively explored; there have been only a few examinations so far. In this study, the model's effectiveness was analyzed under the circumstances of multiple layered imbalances of big data by the SMOTE technique. The analysis results showed that it is necessary to slightly increase the overall effectiveness of the classifiers to the non-random datasets [6].
The Synthetic Minority Oversampling Technique has been applied to resolve the data imbalance by the random sampling method, together with the data cleaning techniques such as neighborhood adjustment or Tomek's link in terms of big data. The results of competency and probability analysis of the heuristic sampling method based on the deep-learning Multi-Layer Perceptron Neural Network in the big data domain showed that the most effective classification could emerge when there was the application of the data cleaning process with the ANN output instead of using only the input attribute space. It can be seen that the adjustment of imbalanced classes could be applied to deep learning and big data scenarios [7].
Apparently, big data class imbalance resolutions have been adapted from conventional methods, especially sampling methods [8] [9]. Nonetheless, recent research demonstrates that some conclusions drawn by the machine learning process were not applicable to the context of big data. To illustrate, it is normal for machine learning that SMOTE can provide better results than ROS [10]. However, in some cases, the results did not represent similar trends in big data contexts [11] [12]. Additionally, not so many previous studies have focused on big data class imbalance resolution towards the application of "intelligent" or heuristic sampling methods [13] [14].
Therefore, this research points out there should be more studies on how to resolve the data imbalance problems and select suitable features for predicting data errors. Effective machine learning performances can solve data imbalance problems by using heuristic sampling algorithms with regards to the scale of big data, including subdistrict household demographics in Thailand. The research results could be further applied to the research and development of grassroots economies, which can reduce poverty and inequality in society. The findings can also be used for preparing accurate, in-depth information at the household level of the country, as they can formulate a big data system displayed in the form of graphs that clearly illustrate the comparisons of the data.
Both public and private sectors, including the local communities, can access the data, which is provided via an online application, in order to develop the country together.
Methodology
The model development consists of 6 stages, including 1) preprocessing for data transformation, 2) data imbalance adjustment by SMOTE, 3) feature selection by CFS and Information Gain (IG), 4) model creation by Multi-Layer Perceptron Neural Network, 5) model's effectiveness measurement by 10-fold cross-validation, and 6) development and deployment of the application, as illustrated in Figure 3.
Data preprocessing
This study collected household demographic data from 22 villages in 3 subdistricts, including Kradangnga and Bang Khonthi in Khonthi District and Bang Kaeo in Mueang District of Samut Songkhram Province. The household demographic data involved general information of family members, health data, economic data, environment and surroundings data, local activities data, and local development data. These data contain 124 attributes or features in a total of 2,845 records, with approximately 14% of incomplete or inaccurate data, including duplicates, missing data, incorrect input, typo error, and inconsistent data or violated attribute dependency. The data was transformed into a comma-separated value (.CSV) file to be processed by Weka version 3.9 later.
Handling imbalanced data by SMOTE
The preprocessed data was found to be imbalanced in the solution classes. Thus, the research applied the Synthetic Minority Over-sampling technique (SMOTE), which is a data re-synthesization method, to increase the number of randomized datasets with small classes by increasing the K value from 1-5 as an experiment. The results showed that the most effective K value was 5, and the random seed was 1. Then, the data size was increased starting from 100% until it reached the highest effective value measured by the 10-fold cross-validation. The experiment results showed that the most suitable data size was 400%. Thus, the data size increased from 2,845 records to 2,990 records, as illustrated in Figure 4.
Feature selection by CFS and IG
This research applied the preprocessed and adjusted-size data towards SMOTE and then inputted the data to the feature selection processes using the CFS and the IG in Weka. In this section, there are ten groups of data applied in this process, consisting of 1) the original dataset + CFS, 2) the original dataset + IG, 3) 100% of SMOTE + CFS, 4) 200% of SMOTE + CFS, 5) 300% of SMOTE + CFS, 6) 400% of SMOTE + CFS, 7) 100% of SMOTE + IG, 8) 200% of SMOTE + IG, 9) 300% of SMOTE + IG, and 10) 400% of SMOTE + IG. These resulting ten datasets were processed by feature selection technique will be used in modeling at the next process.
Model creation by multi-layer perceptron neural network
The adjusted-size data was delivered to the learning process in order to create a model by which the research team employed two feature selection techniques, CFS and IG, together with the Multi-Layer Perceptron Neural Network. The research team specified the parameters for the Multi-Layer Perceptron Neural Network model creation as follows: Hidden Layer = 4, Training Time = 500, Learning Rate = 0.3, and Momentum = 0.2. These parameters provided the highest effective results as measured by 10-fold cross-validation.
Effectiveness evaluation of the model
The developed model's effectiveness was measured by the four effectiveness evaluation methods: the precision, the recall, the F-measure, and the accuracy [ where: TP refers to when the targeted class is "Yes" and the model predicts it "Yes". TN refers to when the targeted class is "No" and the model predicts it "No". FP refers to when the targeted class is "No" but the model predicts it "Yes". FN refers to when the targeted class is "Yes" but the model predicts it "No".
Development and deployment of the application
This study developed an application that can be operated on both computers and smartphones based on the web application. It was scripted in PHP, HTML5, jQuery JavaScript-based. In addition, the Bootstrap framework was based on a cascade style sheet (CSS) and the jQuery function was also applied to the component arrangement on the screen design and user interface for displaying the output on both computers and mobile devices responsively. The XAMPP was set up and run to manage the MySQL database and Apache web server. The main workflow of the system is connected to the Weka software's instruction set through Java for creating or loading the best-performing MLP model for use in data error handling. The PHP script controlled the overall processes between client and server and communicated via extensible markup language (XML) formatted. The developed application infrastructure is illustrated in Figure 5. The application development was divided into two major dimensions of users: the power users and the general users. The power users have rights for massive data processing, including exporting, importing, detecting, and managing data errors in the database. In case of detecting incorrect data, the system will run semi-automatically by displaying a message advising users to edit the correct information or automatically let the system take care of it. For general users, they are allowed to enter data individually into the database. Therefore, the system will immediately notify the users if any data errors are found in the data input process, such as data duplicates, missing values. Figure 6 illustrates the display of the application system on a smartphone. After developing the application to correct the data errors and display the subdistrict household big data of 3 subdistricts in Samut Songkhram Province, Thailand, a 10-minute video was filmed alongside creating a user guide to the developed application. They were published on Facebook, a popular online platform in Thailand with more than 45 million accounts [18]. This video aimed to help the users actively and continuously learn how to use the application. They could play, pause, repeat, fast forward or backward anytime they wanted to learn how to use the application. There were different icons and the 'Help' section that showed the advice and how to use the application for the users to learn by themselves. If the users have any questions, they could send an inquiry or videocall the operators via Facebook services.
Before the mobile application was evaluated, the training had been provided to 46 samples who voluntarily participated in the mobile application testing advertised on social media. In this research, the documents describing the protocols and research ethics were sent to the participants, asking them to give their consent to participate in the study.
The Black-box testing evaluated the developed system with five criteria indicators, including functional testing, compatibility testing, usability testing, performance testing, and security testing. All forty-six participants evaluated the system based on a 5-point Likert scale [19], as illustrated in Table 1. In addition, the developed system was assessed in quartiles (Q), including the first quartile (Q1), the third quartile (Q3), the interquartile range (IQR), and the quartile deviation (QD), based on mean, standard deviation (SD), median (MED), and percentage.
Research results
In this experiment, the results of the model's effectiveness and application evaluation and can be summarized as follows:
The results of data analysis by feature selection and multi-layer perceptron neural network
After adjusting the data imbalance towards the SMOTE method, the most suitable data size was 400%. The balanced data was then used to select features by comparing the effectiveness with two feature selection techniques, including CFS and IG. Multi-Layer Perceptron Neural Network was applied to create the model, and the 10-fold cross-validation method was employed to measure the model. The results of the model evaluation are illustrated in Table 2 and Figure 7. Moreover, the author compares the performance of the data error handling method for big data with other studies, as shown in Table 3. The scope of data error handling includes data duplicates (DUP), incorrect input (IC), missing value (MV), typo error (TPE), and inconsistent data or violated attribute dependency (VAD).
Effectiveness evaluation results of the application
The application was evaluated towards the black-box testing by 9 experts in the field of information technology and community development and 37 users composed of community leaders, local people, researchers, and community developers.
The results of application evaluation by the nine experts towards the black-box testing revealed that the functional testing and compatibility testing had the highest mean value of 4.78 with a standard deviation of 0.44, while the usability testing and performance testing had a mean value of 4.67 and a standard deviation of 0.50. The security testing had a mean value of 4.56 with a standard deviation of 0.33. Overall, it had a mean value of 4.69 and a standard deviation of 0.47. In other words, the effectiveness evaluation results provided by the experts showed that the application was effective at the highest level.
Meanwhile, the results of application evaluation by the 37 users towards the blackbox testing showed that the functional testing had the highest mean value of 4.84 with a standard deviation of 0.37. The compatibility testing, usability testing, and performance testing had a mean value of 4.81 with a standard deviation of 0.40. The Security testing had a mean value of 4.73 and a standard deviation of 0.49. Overall, evaluated by the users, it had a mean value of 4.80 and a standard deviation of 0.40. That is to say, the effectiveness evaluation results provided the users showed that the application was effective at the highest level.
Additionally, the developed system was also analyzed in terms of conformity by both the experts and the users in quartiles. The results suggested that the interquartile range values were not over 1, and the quartile deviation had no more than 0.5, indicating that all participants had the same opinion and evaluated the application in the same manner, as demonstrated in Table 4.
Conclusion
Dealing with data errors is challenging for big data, including missing data, incorrect input, typo error, inconsistent data, or violated attribute dependency. Therefore, this research proposed the technique for analyzing and classifying the data errors using feature selection and Multi-Layer Perceptron Neural Network. Additionally, the original dataset with imbalanced data was synthesized the minority class based on SMOTE up to 400%. Thus, the dataset was increased from 2,845 records to 2,990 records. Further, these datasets were processed by applying the feature selection technique between CFS and IG methods to compare the results. All datasets were classified the data errors based on Multi-Layer Perceptron Neural Network. Finally, each model's effectiveness was evaluated by the 10-fold cross-validation technique.
It can be concluded from the research findings that the most suitable MLP model was the dataset that adjusted the data imbalanced was 400% of SMOTE and applied the CFS method. This model provided the highest effectiveness in data errors classification with an accuracy of 98.29%. The results showed that the application of CFS improved the accuracy of the model better than IG. Therefore, the most suitable model could be used to develop the application for data error handling and displaying household big data of the developing subdistrict household in Thailand. Moreover, the effectiveness of the developed application was evaluated by experts and users. It was shown that the application had the highest level of effectiveness. All mean of 4.56 and over on each indicator which has a standard deviation of not more than 0.53. Besides, the interquartile range values were not over 1, the quartile deviation was no more than 0.5, and the percentage was higher than 93%. All of the above showed that the development of subdistrict household big data in Thailand successfully analyzed data errors by CFS feature selection technique and Multi-Layer Perceptron Neural Network, with SMOTE data imbalance adjustment. Therefore, the developed application which based on MLP and CFS can help users process large amounts of data or enter data to be more accurate and help perform data cleansing in big data for the household at the sub-district level. However, this research does not cover all of them in detail when analyzing character-level errors in in-depth and unstructured data. Still, this study can only classify which data records have errors in attributes and how to correct them. Additionally, this research supports many attributes or features which have errors in the dataset.
Future studies should explore the development of data error classifying techniques to fulfill the data collected on social media and unstructured data, which can be further used to record household information to develop big data of diverse agricultural occupations and productions. Then compare the runtime with published studies.
|
2022-03-10T16:21:55.450Z
|
2022-03-08T00:00:00.000
|
{
"year": 2022,
"sha1": "0952ad354d161fbb326190327b9f542cf1a2e3bc",
"oa_license": "CCBY",
"oa_url": "https://online-journals.org/index.php/i-jim/article/download/22523/10891",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "2dd3e05e784ec8cc6b9f58de879e8debccb64203",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
}
|
30308356
|
pes2o/s2orc
|
v3-fos-license
|
Bending the Trends
304 In this issue of the Annals of Family Medicine, Dr Johansen adds to our understanding that despite efforts to control health care costs over the past 2 decades, we are quickly approaching a reality in which health care spending subsumes one-fifth of our economy, which is well above our international peers.1,2 As Dr Johansen notes, this rising spending is the result of continued utilization of higher cost services such as specialty and hospital care, as well as increased prices. Increases in health care spending are not associated with better outcomes or more equitable health. The health status of the people in the United States continues to be burdened with high rates of chronic disease and for the first time in generations, life expectancy is declining.3 The Triple Aim has been the national call to action that drives the goals of “improving the experience of care, improving the health of populations, and reducing per capita costs of health care.”4 To date, the strategy for achieving the Triple Aim has been predominately focused on improving the health care system through the adoption of value-based payment design in lieu of fee-for-service payment models, and on reducing variability in health service delivery.5 Early results indicate that cost growth is slowing and that innovative delivery models are improving quality and safety of care and decreasing unnecessary utilization such as avoidable hospital readmissions.6
I n this issue of the Annals of Family Medicine, Dr Johansen adds to our understanding that despite efforts to control health care costs over the past 2 decades, we are quickly approaching a reality in which health care spending subsumes one-fifth of our economy, which is well above our international peers. 1,2 As Dr Johansen notes, this rising spending is the result of continued utilization of higher cost services such as specialty and hospital care, as well as increased prices. Increases in health care spending are not associated with better outcomes or more equitable health. The health status of the people in the United States continues to be burdened with high rates of chronic disease and for the first time in generations, life expectancy is declining. 3 The Triple Aim has been the national call to action that drives the goals of "improving the experience of care, improving the health of populations, and reducing per capita costs of health care." 4 To date, the strategy for achieving the Triple Aim has been predominately focused on improving the health care system through the adoption of value-based payment design in lieu of fee-for-service payment models, and on reducing variability in health service delivery. 5 Early results indicate that cost growth is slowing and that innovative delivery models are improving quality and safety of care and decreasing unnecessary utilization such as avoidable hospital readmissions. 6
ADDRESSING THE SOCIAL NEEDS
As delivery system reform has progressed, payers and health systems are assuming greater financial risk for health outcomes. Even the highest performing health systems are finding the medical model insufficient to adequately constrain costs and improve health outcomes due to the social needs of their patients. Failure to appropriately contextualize the health care plan can have significant consequences. 7 Providing the best quality care for a patient with COPD in the clinical setting is an important goal. But if that patient cannot afford the medication or does not have access to transportation for their follow up care, their disease will quickly become uncontrolled, leading to worse health outcomes and higher utilization-related costs.
In response, public and private payers are piloting payment models that encourage the health care system to address social needs. For example, the Centers for Medicare and Medicaid Services recently announced the Accountable Health Communities demonstration model that encourages health care providers to build linkages with community organizations, such as Meals on Wheels, that can address their patients' social needs such as hunger or poor nutrition. 8 Pay-for-success models, such as the South Carolina Nurse Family Partnership, are a version of social impact bonds that go a step further by encouraging community linkages and providing resources to support them. 9 Community-oriented primary care providers are especially likely to welcome these types of payment models and the new technologies that support them because addressing the social factors of health is fundamental yet complex and rarely compensated. Encouragingly, evidence is building that addressing social factors improves health outcomes at a lower cost 10,11 ; investing in coordinators who connect patients to social services can save between $15 and $72 billion annually. 12
ADDRESSING THE SOCIAL DETERMINANTS OF HEALTH
payment models alone is insufficient. 13 Leaders must work to create healthy communities by addressing factors further upstream such as the environment, housing, transportation, and access to healthy food and safe spaces. By moving to a public health model, rather than a purely medical model, communities can create the conditions where everyone can be healthy and reverse health disparities. [14][15][16] This undertaking requires collaboration and resources from many community sectors, and cannot be the sole responsibility of the health care system. The promising news is that communities across the country are pioneering a new approach to improving the health of their communities by addressing all the determinants of health. 17,18 These "Public Health 3.0" communities are coming together to create new umbrella organizations to set a shared vision and shared goals about the health of their communities, to share data and funding, and to coordinate activities aimed at improving health. Their efforts are showing promise, including improvements in health outcomes and reductions in mortality. 19,20 For patients with COPD, this would mean not only that their community's health care system can link them to support services for their social needs, but also that they can live in smoke-free housing.
DISRUPTION NEEDED TO CREATE AFFORDABLE, EQUITABLE HEALTH FOR ALL
These collaborations will only be successful if we address the social needs of our patients and make structural changes to funding and accountability for individual and community health. First, clinical teams should identify and support the social needs of our patients with the rigor they would apply to avoiding other medical errors. Second, health systems should show leadership by holding their executives accountable not only for outcomes for their patient population, but also for the health outcomes of their communities. Third, communities can only advance health if they have access to timely, specific data. Data availability will require continued focus on creating a culture of data sharing for public health advancement. Fourth, federal and state policy makers should work with states to maximize funding flexibility to accommodate local innovations aimed at investing in upstream social determinants of health. Fifth, education of the clinical and public health workforce should encourage an understanding of the social determinants of health and provide training in working across sectors. Sixth, it will require an increase in investment in the social determinants of health. Currently, US spending on social services is on par with other Organisation for Economic Co-operation and Development countries, but we spend a significantly greater proportion on health care. This spending pattern may need to change if we seek to improve health outcomes. 21
CONCLUSION
The health system that Johansen describes is one that has been on a relentless path of increasing high-cost utilization without clear return on investment. While the health system is working to achieve the triple aim by improving the health care delivery system, it alone will not be sufficient to bend the cost curve and reverse declining life expectancy and increasing disparities. This will be true even if we build better delivery models that address the social needs of patients. To improve overall population health, we will need to embrace disruptive models of health that address health care needs as well as the social factors and enable leaders to build healthier communities that support affordable, equitable health for all.
Now is the Time to Address Substance Use Disorders in Primary Care
Richard Saitz, MD A lthough over 21 million people in the United States have substance use disorders, most individuals with addiction do not receive treatment. 1 Of those who are fortunate enough to receive therapy, less than 7% access it through their doctor. 2 In addition, fewer than 10% of people with opioid use disorder in specialty care receive buprenorphine. 3 Primary care physicians are on the front lines of this epidemic and we see it in the faces and stories of our patients: in the night sweats or gastrointestinal symptoms that are due to alcohol or opioid withdrawal; in the anxiety symptoms that are associated with cocaine use; in managing chronic pain that raises concerns about possible addiction. We are good at managing people with many coexisting conditions, and at prioritizing and knowing when we and our patients need specialists. The current opioid epidemic and marginalization of substance use disorders away from primary care has been a disaster, however, and it is a marker for the overextension of primary care. The most complex functions in health care-the much needed integrating, prioritizing, and personalizing care across prevention, acute illness care, mental health care, and management of multiple chronic illnesses-crammed into 10 minutes.
This issue of Annals of Family Medicine contains several studies that address substance use disorders and may point to a way forward for primary care physicians. The study by Anderson and colleagues found that primary
|
2018-04-03T01:35:51.362Z
|
2017-07-01T00:00:00.000
|
{
"year": 2017,
"sha1": "c1af9dc802aadd4b83bdc55946fcc1d4e33d6d9f",
"oa_license": null,
"oa_url": "https://www.annfammed.org/content/annalsfm/15/4/304.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "066f564f4fc34d99f73c5bec73b2f0647e66d219",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
3957687
|
pes2o/s2orc
|
v3-fos-license
|
Preliminary efficacy and feasibility of embedding high intensity interval training into the school day: A pilot randomized controlled trial
Current physical activity and fitness levels among adolescents are low, increasing the risk of chronic disease. Although the efficacy of high intensity interval training (HIIT) for improving metabolic health is now well established, it is not known if this type of activity can be effective to improve adolescent health. The primary aim of this study is to assess the effectiveness and feasibility of embedding HIIT into the school day. A 3-arm pilot randomized controlled trial was conducted in one secondary school in Newcastle, Australia. Participants (n = 65; mean age = 15.8(0.6) years) were randomized into one of three conditions: aerobic exercise program (AEP) (n = 21), resistance and aerobic exercise program (RAP) (n = 22) and control (n = 22). The 8-week intervention consisted of three HIIT sessions per week (8–10 min/session), delivered during physical education (PE) lessons or at lunchtime. Assessments were conducted at baseline and post-intervention to detect changes in cardiorespiratory fitness (multi-stage shuttle-run), muscular fitness (push-up, standing long jump tests), body composition (Body Mass Index (BMI), BMI-z scores, waist circumference) and physical activity motivation (questionnaire), by researchers blinded to treatment allocation. Intervention effects for outcomes were examined using linear mixed models, and Cohen's d effect sizes were reported. Participants in the AEP and RAP groups had moderate intervention effects for waist circumference (p = 0.024), BMI-z (p = 0.037) and BMI (not significant) in comparison to the control group. A small intervention effect was also evident for cardiorespiratory fitness in the RAP group.
Introduction
Less than 20% of adolescents worldwide are participating in sufficient physical activity to accrue health benefits (Hallal et al., 2006); cardiorespiratory fitness levels among young people have steeply declined over the last 30-years (Tomkinson and Oliver, 2007). In Australia, only 15% of youth aged 12-17 accumulate 60-minutes of moderate-tovigorous physical activity everyday (Cancer Council Victoria, 2010), and 65% of youth have aerobic fitness levels associated with reduced risk of poor cardiometabolic health (Hardy et al., 2010). Longitudinal studies have demonstrated that physical activity levels decline by 10% each year during adolescence (Dumith et al., 2011) and health behaviors established during this period continue into adulthood (Hallal et al., 2006;Menschik et al., 2008;McDavid et al., 2012). While adolescents are a high priority population for these reasons described, previous interventions to increase physical activity and improve fitness levels have been largely ineffective (Dobbins et al., 2013).
Schools represent an ideal setting for promoting physical activity and fitness in adolescent populations (Mura et al., 2015). As young people spend 6-8 h/day in schools, which have facilities, personnel and curriculum to provide opportunities for physical activity. Physical education (PE) is the primary vehicle associated with physical activity promotion in the school setting (CDC, 2013), yet physical activity levels within PE lessons are generally low (Rosenkranz et al., 2012;Lonsdale et al., 2013). In addition, lessons may not occur frequently enough to achieve health gains and students' opportunities for physical activity decrease in senior years. While increasing the duration and frequency of PE lessons would be ideal, this is not practical considering the challenges associated with the existing 'crowded curriculum' (Hills et al., 2015). Indeed, any strategy designed to increase activity and fitness in schools needs to be time efficient and scalable for easy implementation (Dobbins et al., 2013;Naylor et al., 2015).
A growing body of literature supports the efficacy of high intensity interval training (HIIT) for improving sport performance in athletes (Laursen and Jenkins, 2002) and cardiorespiratory fitness in adult populations (Weston et al., 2013). While there is not a standardized definition of this type of training, HIIT involves (a) short or long intervals (from ≤ 45 s to 2-4 min) of intense exercise (e.g., N 85% max heart rate) interspersed by short rest periods or (b) reoccurring short or long (b10 s to 20-30 s) bouts of maximal sprints, interspersed by a rest period (Buchheit and Laursen, 2013). For adolescent populations the "all out" maximal type of HIIT would not be palatable for most individuals (Hardcastle et al., 2014). The main appeal of HIIT is that it can be completed in a short period of time (compared to traditional aerobic training), while resulting in equivalent physiological adaptations (Buchheit and Laursen, 2013).
Although the efficacy of HIIT for improving metabolic health in different population groups (including adolescents) is now well established, it is not known if this type of activity can be effective for population-level health promotion (Biddle and Batterham, 2015). Indeed, the majority of HIIT studies conducted with adolescents have examined running-based programs (Buchan et al., 2011a(Buchan et al., , 2011b(Buchan et al., , 2013De Araujo et al., 2012) and most have been conducted in clinical settings with trained athletes. To the authors' knowledge, no previous study has evaluated the efficacy of embedding HIIT into the school day. The objective of this study was to evaluate the efficacy and feasibility of a three-arm randomized controlled trial design testing two HIIT protocols [aerobic exercise program (AEP) and resistance and aerobic exercise (RAP)] for improving health-related fitness, body composition and physical activity motivation in a sample of adolescents. Due to the effectiveness of HIIT on fitness in other population groups, we hypothesize that HIIT will be a successful strategy to improve health-related fitness outcomes in adolescents.
Methods
Ethics approval for the study was gained from the University of Newcastle Human Research Ethics Committee (H-2014-0083) and permission to conduct research from the relevant educational organization was granted. The study protocol has been registered with the Australian and New Zealand Clinical Trials Registry (ACTRN12614000729628). To be included in this study the school needed to meet the following criteria: (a) co-educational; (b) provide at least 2 PE lessons per week; and, (c) not currently participating in a physical activity program in addition to regular PE. The school principal, parents and study participants provided written informed consent to participate in the study. Study participants (n = 65), were students in year 9-10 attending the study school, who consented to participate. The design, conduct and reporting for this randomized controlled trial adhere to the Consolidated Standards of Reporting Trials guidelines (Moher et al., 2010).
Study design
A three-arm school-based randomized controlled trial was conducted with adolescents attending one secondary school in Newcastle, to evaluate the effects of two 8-week training programs focused on improving fitness via the provision of short HIIT sessions three times/ week (total: 24 sessions). Sessions ranged from eight to ten minutes in duration (weeks 1-3: 8 min; weeks 4-6: 9 min; weeks 7-8: 10 min), with a work to rest ratio of 30 s:30 s. The AEP and RAP sessions were delivered by the research team (PE qualified) at the study school.
Power calculations were based on change in the primary outcome (cardiorespiratory fitness, assessed using the multi-stage shuttle test (Léger et al., 1988)). Based on our previous research , a between-group difference of 10 laps was considered achievable. Assuming a standard deviation of 9 laps, 80% power with alpha levels set at 0.05, it was determined that 20 participants per group would provide adequate power to detect statistically significant effects.
Once baseline assessments of cardiorespiratory fitness, muscular fitness, body composition and physical activity motivation were conducted (research assistants blinded to treatment allocation), participants were randomized at the individual level using a random numberproducing algorithm, by an independent researcher. A stratified random sampling procedure was conducted to ensure that equal numbers of boys and girls were allocated between the three groups.
Participants in the intervention groups participated in three HIIT sessions/week for eight weeks and all sessions were conducted inside the school hall. Two HIIT sessions/week were delivered in scheduled PE lessons, with a third session delivered at lunchtime. The focus of each of the three programs included: i. AEP: Participants completed HIIT sessions primarily involving gross motor cardiorespiratory exercises requiring minimal equipment (e.g., shuttle runs, jumping jacks, skipping); ii. RAP: Participants completed HIIT sessions that included a combination of cardiorespiratory and body weight resistance training exercises that required minimal equipment (e.g., body weight squats, push-ups, hovers); and iii. Control: Participants continued with their programmed PE and usual lunchtime activities over the 8-week intervention period. The control group received the AEP program once the intervention and follow-up assessments were completed (Fig. 1).
The AEP and RAP groups engaged in HIIT sessions while the control group did their usual PE warm-up, then the groups were combined to complete the remainder of the PE lesson. HIIT session duration and intensity were the same for both intervention groups. To encourage maintenance of the appropriate exercise intensity, participants were fitted with heart rate monitors (Polar H7), which were connected to a central iPad application (Polar Team). Participants were able to view this information on a projector screen during sessions.
To promote exercise adherence, sessions were designed to be enjoyable, with fun warm-up and cool-down activities. In addition, sessions were completed in pairs, with one participant undertaking the 'work' phase, while their partner completed the 'rest' phase. Sessions focused on promoting encouragement and support to peers, 'Trainer of the Day' certificates were awarded to one pair at the completion of each session. Awards were given to participants who provided positive feedback and motivation for their partner and demonstrated outstanding effort and dedication during the workout. At the conclusion of the intervention the pair awarded the most certificates received a prize (e.g., a gift voucher). As the intervention progressed and exercises were mastered, participants were given additional elements of choice including: music (student playlists used weeks 2-8), exercise choices during a workout (weeks 4-6) and choice of workout (between two workouts previously completed; weeks 7 and 8).
Outcomes
All assessments were conducted by trained members of the research team blinded to group allocation (baseline and post-test). A protocol manual including specific instructions for conducting all assessments was used by research assistants for accuracy and consistency. Physical assessments were conducted in a sensitive method (e.g. weight/waist circumference were measured in a private setting) and questionnaires were completed under exam-like conditions.
Primary outcome
The primary outcome was cardiorespiratory fitness assessed using the Progressive Aerobic Cardiovascular Endurance Run shuttle test (Léger et al., 1988) (Mahar et al., 2011).
Note: PACER = number of laps completed; gender: 1 = boy and 0 = girl; and age in years. (Lubans et al., 2011). The standing long jump was used as a measure of lower body muscular strength (Castro-Pinero et al., 2010) and has acceptable reliability and validity in adolescents (Ortega et al., 2008).
Secondary outcomes
Body composition: Weight was measured in light clothing without shoes using a portable digital scale (Model no. UC-321PC, A&D Company Ltd., Tokyo Japan) to the nearest 0.1 kg. Height was recorded to the nearest 0.1 cm using a portable stadiometer (Model no. PE087, Mentone Educational Centre, Australia). BMI was then calculated using the formula weight (kg)/height (m) 2 . Waist circumference was measured to the nearest 0.1 cm against the skin using a non-extensible steel tape (KDSF10-02, KDS Corporation, Osaka, Japan) in line with the umbilicus.
Physical activity motivation: Autonomous motivation to engage in physical activity was assessed using an 8-item validated questionnaire examining benefits, fun, importance, enjoyment, effort, pleasure, restlessness and satisfaction related to physical activity participation (Markland and Tobin, 2004). Cronbach's alpha was used as a measure of scale reliability [baseline: (α = 0.90) and post-test: (−α = 0.91)].
Process evaluation
Program feasibility was assessed based on the following: consent rate (how many participants offered the program agreed to be involved), retention rate (how many participants completed the intervention and participated in baseline/post-intervention testing), adherence (weekly session attendance of 3 sessions delivered/week (total: 24), average session heart rate across the 8 weeks totalling 10 min inclusive of warm-up/cool-down phase) and participants' satisfaction with the program (I enjoyed participating in the HIIT sessions on 5-point Likert scale: 5 = strongly agree to 1 = strongly disagree). In addition, teachers were asked to report their confidence to deliver the HIIT programs at the end of the study period (I am confident that I could deliver the HIIT/body weight sessions at the start of my PE lessons on 5-point Likert scale: 5 = strongly agree to 1 = strongly disagree).
Statistical analyses
Statistical analyses of the primary and secondary outcomes were conducted using linear mixed models with IBM SPSS Statistics for Windows, Version 20.0 (2010 SPSS Inc., IBM Company Armonk, NY). Cohen's d was used to provide a measure of effect size (adjusted difference between HIIT and control groups over time divided by the pooled standard deviation of change). Moderators of HIIT effects were explored using linear mixed models with interaction terms for the following: i) sex (boys versus girls), ii) and baseline fitness level (i.e., healthy fitness zone versus needs improvement). Subgroup analyses were conducted if the interaction term was statistically significant (p = 0.10) (Assmann et al., 2000).
Results
The number of participants involved at each phase of the study is reported in Fig. 1. One secondary school was successfully recruited and 65 adolescents from three classes (45 males, 20 females, mean age: 15.8(0.6)) from years 9-10 completed baseline testing (see Table 1). The intervention groups were similar for baseline characteristics. Of the 65 participants, 52 were classified as within the 'Healthy Fitness Zone' (HFZ) and six were identified as 'Needs Improvement' for cardiorespiratory fitness at baseline. Cardiorespiratory fitness was not reported for seven participants.
Changes in primary outcome
Changes for all outcomes are reported in Table 2. Analyses of efficacy (adjusted difference between group and Cohen's d effect sizes) identified a small intervention effect for the RAP condition for the primary outcome, cardiorespiratory fitness (5.2 laps, 95% CI = − 4.2 to 14.7; d = 0.4). After converting laps to estimated VO 2 max (Mahar et al., 2011), a between group difference of 5.9 ml·kg·min was found in favor of the RAP condition.
Process evaluation
The program achieved good recruitment (consent rate: 86%), adherence (average attendance: 2.2 of 3 sessions/week) and retention (90.8%). Heart rate targets were met, with a higher average heart rate evident for the RAP (AEP: 74.04% of max, 148.09 bpm; RAP: 77.58% of max, 155.15 bpm) (average across all session weeks 1-8 inclusive of warm-up/cool-down phase). Of the 43 intervention participants 31 completed the post-program evaluation questionnaire and reported on a 5-point Likert scale (5 = strongly agree to 1 = strongly disagree) that the program was enjoyable (x̄= 4.2). Similarly, the four teachers involved in the study all agreed that: (i) their students had enjoyed participating in the intervention, (ii) they could confidently deliver the HIIT sessions at the start of their lessons with minimal professional learning and, (iii) they intend to include HIIT in their physical education lessons in the future.
Conclusions
The aim of this study was to evaluate the preliminary efficacy and feasibility of embedding HIIT into the school day. Although not statistically significant, small improvements in cardiorespiratory fitness were observed for the RAP condition. In addition, participants in both HIIT groups improved their body composition in comparison to the control group. Overall, the strongest intervention effects were observed for participants in the RAP group, which included resistance and aerobic exercises during sessions. In regards to feasibility, the program achieved high recruitment, good adherence and retention. Participants enjoyed participating in the HIIT sessions and supervising teachers reported a willingness to embed HIIT within future PE lesson.
The RAP intervention condition achieved small intervention effects for cardiorespiratory fitness, an increase of 5.2 laps on the shuttle test was achieved in comparison to controls, which converts to an estimated VO 2 Max increase of 5.5 mL·kg·min (6.1% improvement). Similarly, a recent systematic review and meta-analysis (Costigan et al., 2015) revealed that HIIT can improve cardiorespiratory fitness [unstandardized mean difference (MD) = 2.6 mL·kg·min, 95% CI = 1.8 to 3.3, p b 0.001] in comparison to moderate-intensity exercise and non-exercising control conditions in adolescents. However, results of our study were not statistically significant, which may be explained by the small sample size. In contrast, the AEP resulted in only trivial improvements in cardiorespiratory fitness, this difference is of interest given that both HIIT conditions had an aerobic component and the same training volume and intensity. It may be that muscle performance was enhanced by the lower body strength exercises (e.g., body weight squats) performed as part of the RAP condition and this contributed to larger performance improvements and higher average session heart rates.
There was a moderate intervention effect for BMI and BMI-z in both groups. High BMI values are associated with various adverse health outcomes (Buncher et al., 2015;Weber et al., 2014;Twig et al., 2014), therefore even moderate improvements can be meaningful at the population level. The favorable intervention effects on BMI in our study are supported by findings of a recent systematic review and meta-analysis which reported HIIT to be a feasible and time efficient approach for improving body composition in adolescent populations, reporting a moderate and statistically significant intervention effect for BMI [(MD = − 0.6 kg/m 2 , 95% CI = − 0.9 to − 0.4, p b .001) (d = − .37, 95% CI = −0.68 to −.05)] (Costigan et al., 2015).
Moderate intervention effects were found for waist circumference for the AEP and large statistically significant intervention effects for RAP. Intervention effects on waist circumference for HIIT are supported by a range of other studies in adolescent populations (Buchan et al., 2013;Boer et al., 2014;Farah et al., 2014;Racil et al., 2013;Tjønna et al., 2009). Of these studies, four utilized sprints-based training (Buchan et 2009), and one study used sprint cycling (Boer et al., 2014). Follow-up periods of these studies ranged between post-intervention and 3weeks, therefore it is unknown whether participating in these activities would result in continued participation and long-term improvement in waist circumference for adolescents. In addition, it is unknown whether participating in the same type of activity (e.g., cycling or sprints) for an extended time period is appealing for adolescents given three of the five studies reported low retention rates (44-50% (Buchan et al., 2013;Farah et al., 2014;Tjønna et al., 2009)).
There was a negligible effect on objective measures of muscular fitness in comparison to the control condition. Similarly, a recent systematic review found the overall effect of HIIT on muscular fitness was not statistically significant (MD = 0.8 cm, 95% CI = −1.8 to 3.4, p = 0.5) (Costigan et al., 2015). In our study, muscular fitness improvements for the control group were similar to the HIIT conditions, which may be explained by the learning effect associated with fitness testing. In addition, the lack of intervention effect could also be attributed to the ability of the tests to detect change. For instance, we used field-based tests to assess muscle performance but it may be that more sophisticated laboratory-based assessments are able to detect modest improvements in performance resulting from HIIT. There is clearly a need for further studies to examine the long-term impact of HIIT on muscular fitness in adolescent populations. It could be necessary for interventions to implement a higher dose and to be conducted for longer duration for muscular fitness improvements.
The intervention effect of HIIT on physical activity motivation was trivial. However, this in itself may be an encouraging outcome, given recent commentaries have suggested that prescribing intense exercise (specifically sprints training) to general/sedentary populations may lead to feelings of incompetence and failure resulting in reduced physical activity motivation and participation (Hardcastle et al., 2014). Numerous studies have reported positive associations between young people's physical activity and various measures of motivation (Owen et al., 2014) (e.g., autonomous motivation (Vierling et al., 2007;Standage et al., 2012); intrinsic and introjected physical activity motivation (Verloigne et al., 2011); self-determined motivation (Owen et al., 2013)). If delivered using an authoritarian teaching style, HIIT could be unenjoyable. However, our HIIT intervention was developed in reference to self-determination theory (Deci and Ryan, 1985) and the sessions were designed to satisfy participants' basic psychological needs for autonomy (e.g., choice of music, exercise choices during a workout and choice of workout), competence (e.g., provision of challenging yet achievable workouts, positive feedback and heart rate data) and relatedness (e.g., working in pairs, sessions focused on promoting encouragement and support to peers). We suggest that HIIT can be delivered using an autonomy supportive manner, but teachers may require appropriate professional learning to ensure that programs support rather than thwart young people's basic psychological needs.
Based on the high retention rates, session attendance, satisfaction and adherence to heart rate targets, the HIIT protocols and delivery methods were acceptable for participants and teachers. Intervention strategies appealed to participants and resulted in continued involvement in the program. Further investigation of technology-based strategies such as smartphone applications and text messaging Thompson et al., 2014) to promote adherence and participation beyond the school setting are clearly warranted. In addition, qualitative research is needed to inform future studies of additional strategies for sustained intervention fidelity and the perceptions and pragmatic aspects of introducing HIIT within the school context.
Strengths and limitations
This study has a number of strengths including the randomized design, assessor blinding and high levels of intervention fidelity. Importantly, the retention and session attendance rates were high, demonstrating that the program was appealing to the target group. Adjusted difference between groups and 95% confidence interval between intervention and control groups after the 8-week intervention (AEP minus control; RAP minus control).
⁎ p b 0.05 However, some limitations should also be acknowledged. The small sample size may limit the generalizability of our findings, as the study was conducted in one school with more boys than girls. Laboratory-based methods such as DXA for body composition and isokinetic/isotonic muscle performance testing may have detected more substantial changes resulting from the intervention. physical activity undertaken outside of school time was not taken into account, which could affect the changes in some outcome measures. Finally, cardiorespiratory fitness was assessed using the multi-stage fitness test; while this test is considered the most appropriate field-based measure of cardiorespiratory fitness (Pate and Daniels, 2013), VO 2 max testing is considered to be the gold standard.
Conclusions and future directions
Evidence from this study highlights the potential of embedding HIIT within PE for improving cardiorespiratory fitness and body composition among adolescents. While an 8-week, school-based HIIT intervention appears to be a promising approach for improving fitness outcomes; some results were not statistically significant and therefore require further examination on a larger scale. In addition, the long-term effectiveness and sustainability of this approach should be assessed both quantitatively and qualitatively, and the potential of successfully training teachers to deliver the program also requires investigation. In summary, HIIT appears to be a feasible approach for improving fitness for adolescents in a school-based setting. Further longitudinal research with longer follow-up periods, investigating a larger sample of adolescents from different schools should be conducted.
|
2017-06-17T14:55:05.224Z
|
2015-11-14T00:00:00.000
|
{
"year": 2015,
"sha1": "ec0900f3bf4b9f9c9ca98238ccb7f504fd811a79",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.pmedr.2015.11.001",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "729c16d544cc3fff4416051b5e52565ff6e588ae",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
69594502
|
pes2o/s2orc
|
v3-fos-license
|
Stable trajectory planning and energy-ef fi cience control allocation of lane change maneuver for autonomous electric vehicle
Purpose – The purpose of this paper is to investigate problems in performing stable lane changes and to find a solution to reduce energy consumption of autonomous electric vehicles. Design/methodology/approach – An optimization algorithm, model predictive control (MPC) and Karush–Kuhn–Tucker (KKT) conditions are adopted to resolve the problems of obtaining optimal lane time, tracking dynamic reference and energy-efficient allocation. In this paper, the dynamic constraints of vehicles during lane change are first established based on the longitudinal and lateral force coupling characteristics and the nominal reference trajectory. Then, by optimizing the lane change time, the yaw rate and lateral acceleration that connect with the lane change time are limed. Furthermore, to assure the dynamic properties of autonomous vehicles, the real system inputs under the restraints are obtained by using the MPC method. Based on the gained inputs and the efficient map of brushless direct-current in-wheel motors (BLDC IWMs), the nonlinear cost function which combines vehicle dynamic and energy consumption is given and the KKT-based method is adopted. Findings – The effectiveness of the proposed control system is verified by numerical simulations. Consequently, the proposed control system can successfully achieve stable trajectory planning, which means that the yaw rate and longitudinal and lateral acceleration of vehicle are within stability boundaries, which accomplishes accurate tracking control and decreases obvious energy consumption. Originality/value – This paper proposes a solution to simultaneously satisfy stable lane change maneuvering and reduction of energy consumption for autonomous electric vehicles. Different from previous path planning researches in which only the geometric constraints are involved, this paper considers vehicle dynamics, and stability boundaries are established in path planning to ensure the feasibility of the generated reference path.
Introduction
Autonomous vehicles (AV) and electric vehicles (EV), wherein in-wheel motors (IWMs) are adopted to drive wheels, have attracted increasing attention from both industrial and academic communities recently.Autonomous driving technology has tremendous potential in reducing vehicle casualties, and IWM EV can immensely enhance energy efficiency and lead to flexibility actuation which considerably enhances vehicle maneuverability, stability and safety (Li et al., 2013;Jin et al., 2015;Yin et al., 2015).Numerous studies have revealed that A-IWM EV is an effective option that can increase traffic safety and decrease emissions and energy crisis (Li et al., 2017;Potluri and Singh, 2015).
Unlike manned vehicles, which follow the driver's command to accomplish various driving tasks with the result that driver characteristics, vehicle dynamic features and energy management are major concerns (Wang et al., 2013(Wang et al., , 2015(Wang et al., , 2016;;Wu et al., 2013;Dai et al., 2014), the AV is supposed to appropriately perform various maneuvers under rare driver interventions or even without drivers.Therefore, autonomous lane change and the corresponding abilities of trajectory planning and trajectory tracking are most significant for AV.Many research works have been conducted in lane-changing trajectory planning (Soudbakhsh et al., 2013;Kim et al., 2014;Chen et al., 2014;You et al., 2015) and lane change control (Bayar, 2013;Berntorp et al., 2014;Naranjo et al., 2008).For example, Soudbakhsh et al. (2013) evaluated three different path planning methodsstate lattice, predictive constraintbased planning and spline-based search tree.Chen et al. (2014) proposed a feasible trajectory generation algorithm based on quartic Be¨zier curve to generate local trajectory for AV.You et al. (2015) adopted a polynomial method to describe the trajectory of AV carrying out the lane change maneuver.In comparison to conventional path planning strategies (such as road map, cell decomposition and potential field methods), which are constantly mentioned in the robotics field, the above curve-type path planning methods can greatly reduce calculation and avoid being stuck in the local minima.For lane change control, Bayar (2013) used the PID method to resolve the trajectory tracking control.In Berntorp et al. (2014), an optimal trajectory-based minimization of yaw acceleration was acquired, and the simulation and comparative analysis were done with different speed values.In Naranjo et al. (2008), the fuzzy controllers that mimicked human behavior and reactions were established to conduct AV executing the overtaking maneuver in the scenario of two vehicles overtaking.
Although the above research works on lane change path planning and lane change control have made great contributions, there are still some apparent shortages that need to be settled.To begin with, current researches about lane change trajectory planning only consider geometric constraints and kinematic characteristics (e.g. the road curvature and lateral acceleration); the restrictions associated with vehicle dynamic characteristics are normally neglected.Consequently, the vehicle's dynamic stability may not be fulfilled if the AV drives along the predesigned trajectory.In addition, the problem of the A-IWM EV's energy efficiency during lane change is rarely considered.For EV, especially for A-IWM EV, despite the redundant degrees of freedom providing additional control flexibility in maintaining vehicle safety and stability (such as Traction Control and Direct Yaw Control), unreasonable dynamic control laws that ignore the energy consumption may immensely shorten the driving mileage of EV.
Based on the aforementioned discussion, this paper presents a novel lane change control system for A-IWM EV, which consists of a stable trajectory planning level that ensures the feasibility of the generated reference path, a high-level model predictive control (MPC) and a low-level energy-efficient control allocation (EECA) scheme, to enhance the feasibility of lane changing and to reduce energy expenditure.The rest of this paper is organized as follows.In Section 2, the stable lane change trajectory that includes vehicle constraints is developed.A control-oriented model of IWM EV planar motion is described in Section 3. In Section 4, the control system is proposed.In Section 5, simulation results are displayed to verify the control performance and energy savings of the EECA.Conclusion is presented in Section 6.
Stable lane change trajectory
In this section, a new lane change trajectory that can guarantee the stability of A-IMW EV and keep the vehicle running smoothly is proposed.To establish this trajectory, the fifthorder polynomial function is first used to realize smooth lane change and the maximal comfortableness of passenger.Then, by founding the rational vehicle stability bounds and introducing those constraints into the trajectory equations, the stable lane change trajectory is created.
It should be noticed that in this paper, only the scenario of active lane change is considered, i.e. there should be no vehicles in the front and target lanes when the A-IMW EV is changing lanes.Therefore, the situation of collection avoidance is not considered in the reference trajectory generation.The corresponding path planning that can guarantee the stability of the vehicle and prevent vehicle collision at the same time can be studied in future research.
Stability constraints of in-wheel motors and electric vehicles
This section describes the plane dynamics of IMW EV.Hence, the stability constraints on longitudinal movement, lateral movement and yaw movement are constructed.In light of the vehicle dynamics, the lateral acceleration can be expressed as: where v is the yaw rate, v x and v y are the longitudinal and lateral velocities.Denoting b the slip angle of Center of Mass (CM), we get v y = v x tan (b ).The relationship between the lateral acceleration, yaw rate and slip angle can be described as follows: Note that the lateral acceleration should not exceed the maximal force that the ground can offer.Suppose b and b are small during vehicle lane change, the yaw rate of the vehicle under steady state should meet the following constraints (Rajamani, 2011): where m is the adhesion coefficient, g is the gravity coefficient and « is the scale factor, which is usually approximately equal to 0.85 in practical calculation.In addition, because the linear tire model is used in this paper, the maximum lateral acceleration should not surpass 0.5g to ensure the tire working in the linear area, i.e. ja y j 0:5g (4) Thus equation ( 3) is modified as: For longitudinal acceleration a x , according to the adhere-circle restriction, as shown in Figure 1, the longitudinal acceleration should abide by the following inequality: According to the findings of Hult and Tabar (2013), the reference lateral curves of vehicle, which can guarantee the succession of later acceleration and jerk minimum, can be expressed by using the fifth-order polynomial function.Considering the initial and final lateral states of vehicle, this function can be written as: According to equation ( 7), t f can be written as: where a y,max is the maximum lateral acceleration during lane change.
Moreover, insomuch as the longitudinal reference trajectory is normally longer than the lateral one, the following expression is adopted: It is noteworthy that the longitudinal acceleration is not constant.Considering the fluctuation of the longitudinal velocity in the actual steering process and the constraint of longitudinal jerk variation, the longitudinal acceleration is signified as: where h is the positive constant and k = 2p /t f .Equations ( 7)-( 11) constitute the original reference trajectory that can maintain the continuity of steering and achieve the jerk minimum.To introduce vehicle dynamic restrictions, the yaw rate in ideal state is given: where r (t) is the radius of curvature.
The maximum v r is denoted by v r,max .Because the initial and final states of X r and Y r are certain, the value of v r,max is only connected to t f .By restricting a y,max , v r,max and âx not outstripping the boundaries described in equations ( 4)-( 6), the minimal lane change time t à f that can simultaneously fulfill the constraints of dynamics and the succession of later acceleration can be obtained.The new reference curve (X à r ; Y à r ) that can simultaneously pledge the vehicle stability and fulfill the jerk optimization is obtained.
Nevertheless, seeing that the order of v r is generally high, it is difficult to give the explicit expression about v r,max .In consequence, the t f is hard to gain.Actually, by observing the variation in the yaw rate of the vehicle driving along some curves, it can be perceived that the positive and negative maximum values always approximately arise at the quadrate and three-quarter lane time, i.e. t f =4 and 3t f =4 in Figure 2.Meanwhile, the maximum yaw rate v r,max is monotone decreasing when the lane time increases.Therefore, can be represented as: Figure 1 The constraints of the longitudinal and lateral forces where # ¼ 1:2 is the penalty factor that offsets the loss of the authentic maximum of yaw rate.After v r,max is gained, the bisection search algorithm is used to seek the suitable t f .The whole search algorithm is shown in Figure 3.
Vehicle model for A-IWM EV
3.1 Dynamic model In this section, as shown in Figure 4, the traditional vehicle model is used to describe the plane dynamics of A-EGV and the dynamic model can be written as: where m v is the IWM EV mass, I z is the moment of inertia, X and Y are the longitudinal and lateral coordinates of vehicle in the inertial frame, u is the heading angle of vehicle, F X and F Y are, respectively, the generalized longitudinal and lateral forces, M z is the generalized external moment about the Z-axis, C a and C r are the aerodynamic resistance coefficient and the rolling resistance coefficient, respectively.The forces F X , F Y and moment and M z that are related to the four tire forces and the front steering angel d can be expressed as: In equation ( 14), F xi and F yi are, respectively, the longitudinal and lateral tire forces, where i = 1,2,3,4 represents different wheels, l t is the track, l f and l r are the front and rear CM distances.
For the lateral tire force, F yi , when the vehicle lateral acceleration is small and the dynamics of the tire is in the linear region, the following linear-lateral-tire-forces model can be used in vehicle control: where K f and K r are, respectively, the tire lateral stiffness, a f and a r are the front and rear tire slip angles.Let x = [x 1 , x 2 , x 3 , x 4 , ] T be the states and control inputs of system in equation (1); T is the interval, then its discrete time form can be written as where fð ÁÞ represents the nonlinear terms in equation ( 14).
Note that for the sake of expression, the discrete states and inputs at time k are written as The state trajectory, denoted by x k 1 1 , is obtained by applying the input ũk ¼ u kÀ1 to system (13) at the time k with x k ¼ x kÀ1 , i.e.: In the light of equation ( 17), the nonlinear system ( 16) can be transformed into a linear time varying (LTV) system linearized at each time step k around the point x k ; ũk ð Þas follows: with where
In-wheel motor model
In this paper, it is assumed that the brushless direct current (BLDC) IWMs are used as the actuators of the EV.The dynamic models of BLDC-IWMs in both driving and braking cases can be described as follows: where J w,i is the combined rotational inertia of the wheel and IWM, is the yaw rate of wheel, M ti is the drive torque and R eff is the effective radius of the tire.The efficiency of adopted DC motors in the driving and braking statuses are obtained by IWMs EV tests that are conducted on a twin-roll chassis dynamometer (shown in Figure 5).Note that in the IWMs EV tests, the in-wheel motor torque values at different speeds and torque control signals were measured by a torque sensor equipped on the chassis dynamometer, and the motor control signals were changed from 1.5 V to 4.5 V with a 0.15-V step at different motor speeds.A dSPACE MicroAutoBox was used to control and record all the EGV and chassis dynamometer signals in realtime.Given the limited space available, the detailed test process is omitted in this paper, but the similar test method can be seen in (Wang et al., 2011).Based on the test data, the efficiency maps are plotted in Figure 6.In addition, to introduce the motor efficiency into the efficiency management control, similar to (Chen and Wang, 2014), the polynomial fitting method is used to gain the change in motor efficiency, and the polynomial function can be written as follows: where h D M t ð Þ and h B M ' t À Á are, respectively, the driving and braking efficiencies of one IWM, and M t and M ' t , respectively, represent the driving torque and regenerative braking torque of wheel.
The establishment of controller
In light of the obtained reference trajectory and vehicle model, in this section, a novel autonomous lane change control system that can ensure precise dynamic tracking control and optimal energy consumption is proposed.The structure of the control system is shown in Figure 7.
Note that different from the previous control allocation (CA) researches wherein the sliding mode control (SMC) is adopted to gain the virtual control laws (Song et al., 2015), in the dynamic control level of this controller, MPC method is used to acquire the real control signals to resolve the chattering phenomena and the problem of control inputs under restraints in traditional SMC-based CA studies.
Planning control level
Based on reference trajectory in Equations ( 7) and ( 10) and t à f in Subsection 2.2, the reference states of vehicle in the next N p times can be described as where x r ¼ X r ; Y r ; u r ; v xr ; v yr ; v r T h .Within equation ( 22), the reference yaw angle, longitudinal and lateral velocities can be expressed as where 0 1ÂNp represents the zero vector that includes N p elements, and i = 1,. .., N P .Remark 1: In stability control of vehicle steering, the slip angle of vehicle is expected to be zero.Because vehicle slip angle equals to the specific value of later velocity to longitudinal speed, the reference lateral speed v yr is zero in equation ( 23).
Dynamic control level
Based on the reference states generated by planning controller and the LTV system of vehicle ( 18), the cost function in the finite horizon optimal control problem can be expressed as where Q 1 2 R 6 Â 6 and Q 2 2 R 5 Â 5 are definite positive matrices and Z p is control horizons.At each time step T, the following finite horizon optimal control problem is solved: where The optimization problem (25) can be modified into a quadratic program (QP).The sequence of optimal input deviations computed at time k by solving (25) for the current states x ðkÞ is denoted by N Ã .Then, the first sample of N Ã is used to compute the optimal control action and the resulting state feedback control law is At the next time step k 1 1, the optimization problem ( 28) is solved over a shifted horizon based on the new measurements of the state x (k 1 1) and based on an updated linear model in equations ( 18)-( 19) computed by linearizing the nonlinear vehicle model.
Energy-efficient control allocation level
In Subsection 4.2, the obtained control law can only guarantee dynamic characteristics of vehicle.To reduce energy consumption, the EECA is used.
According to equation ( 14), the virtual control (force signals) in this paper can be expressed as 20), V inp ðkÞ can be re-expressed as where ; where the wheel angular acceleration v wi can be estimated through Kalman filters, just as was done in (Chen and Wang, 2012).
Based on the virtual control expression (29), the EECA design is described to solve the following nonlinear optimization problem: where h stand for the regenerative brake torque signals, B a =[BE, BE], Q 3 and s are the positive weighting factors.
Within equation ( 30), P c is the total power consumption of in-wheel motors for the driving and regenerative braking modes and can be formulated as where P Oi and P Ii is the energy consumption of IWMs in the driving model and regenerative braking mode, respectively.The corresponding electric efficiencies are indicated by h Di and h Bi , which can be obtained by using equation ( 21).
What is noteworthy is that J 2 is nonlinear and nonconvex optimization problem.To resolve this problem, the KKTbased method, which can transfer the nonlinear/nonconvex optimization problem into an algebraic eigenvalue problem and improve the computational speed, is applied in this paper.Because the focus of this paper is not on the optimization solution, the relevant resolving approach, which can be found in (Chen and Wang, 2012), is omitted.
Simulation and results
To verify the capability of the proposed control system, simulation analyses is carried out.The simulations are implemented based on the CarSim-Simulink platform with a high-fidelity and full-vehicle model.The simulation parameters are listed in Table I.
The simulation results are shown in Figures 8-17.In those figures, "Dynamic" means that only dynamic tracking control is involved, "D-EFCA" signifies the controller proposed in Section 4, "E m ", "E u ", "E vx ", "E v " are the absolute tracking errors of actual output of Carsim to the references.And the root-mean-square-errors (RMSE) of the vehicle states tracking are listed in Table II.From Figures 8-11 and Table II, one can see that both "Dynamic" and "D-EECA" controllers can track the references accurately.Meanwhile, when searching the optimal lane time t f , by introducing the vehicle dynamic stability boundaries into the path planning, the variations of yaw rate, lateral and longitudinal accelerations are limited (Figures 11,16 and 17), and the homologous optimal lane time t à f , maximal lateral acceleration ja y,max j and maximal yaw rate jv max j are equal to 2.9 s, 0.2256 g and 4.36 deg/s, respectively.
To control energy efficiency by redistributing the torques of the four wheels (Figures 14 and 15), the power consumption in "D-EECA" should be obviously less than that in "Dynamic" as shown in Figure 12.The total energy consumption in "D-EECA" and "Dynamic" are 1.1646e 1 3 kJ and 1.217e 1 3 kJ during simulation (Table III).The energy is reduced by 4.3 per cent in "D-EECA", compared with "Dynamic".Insomuch as "D-EECA" controller can realize accurate dynamic tracking control and reduce energy consumption, the proposed control method is valid.
It also should be noticed that the torques of thewheel in "D-EECA" and "Dynamic" are approximated in the first half of lane change time.This phenomenon is caused because of the small weight s .Because the dynamics performance is the first thing that must be satisfied for an autonomous vehicle, a small s can realize the fact that the energy consumption can be reduced effectively in the case of high tracking accuracy.To further decrease energy consumption, the new EECA method and a more complete and accurate model of energy loss may be available and will be researched in the future.
Conclusion
In this paper, a novel lane change control system for A-IWM EV that can enhance vehicle stability and reduce energy expenditure is proposed.The whole control system consists of stable trajectory planning level, high dynamic control level and low EECA level.In the planning level, to ensure the feasibility
Figure 2
Figure2The variations in yaw rate with regard to the different lane change times t f at 10 s, 8 s, 6 s and 4 s
Figure 3
Figure3The diagram of the search algorithm to find the optimal t à f
Figure 6 Figure 5
Figure 6 Driving and braking efficiency map of the DC in-wheel motors
Figure 8
Figure 8 The position of A-IWMs EV and tracking errors
Figure 9 Figure 10 Figure 11 Figure 12 Figure 13
Figure 9The heading angle of A-IWMs EV and tracking errors
Figure 15 Figure 16 Figure 14
Figure15The torque inputs in D-EECA Assume t 0 is the initial time of lane change, t f is the window time of lane change and the initial and final lateral states of the vehicle during lane change are [Y 0 , v y0 , a y0 ] T and [Y f , v yf , a yf ] T .Hypothetically, if the vehicle carries out straight line driving before and after lane change, then a y0
|
2019-02-19T14:07:18.140Z
|
2018-10-05T00:00:00.000
|
{
"year": 2018,
"sha1": "1058900974cade72686a0afcf5ceb021192bf6d9",
"oa_license": "CCBY",
"oa_url": "https://www.emerald.com/insight/content/doi/10.1108/JICV-12-2017-0002/full/pdf?title=stable-trajectory-planning-and-energy-efficience-control-allocation-of-lane-change-maneuver-for-autonomous-electric-vehicle",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1058900974cade72686a0afcf5ceb021192bf6d9",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
234034474
|
pes2o/s2orc
|
v3-fos-license
|
K-modes Algorithm Based on Rough Set and Information Entropy
The traditional K-modes algorithm is susceptible to interference of redundant attributes, and only adopts the 0-1 matching method to define the distance between attribute values of each two objects, without fully considering the influence of each classify attribute on clustering result. In order to overcome these shortcomings, this paper proposes improved K-modes clustering algorithm based on rough set and information entropy. Aiming at a large number of redundant attributes in the clustering data, this paper firstly utilizes attribute reduction algorithm of rough set to eliminate redundant attributes and determine the importance of each attribute, then combines information gain to determine the weight of each attribute and finally makes performance tests of the traditional algorithm and the improved algorithm on five data sets of UCI machine learning library, such as Soybean-Small and Zoo. The experimental results show that the clustering efficiency and accuracy of improved algorithm is higher than that of traditional algorithm, and the performance of improved algorithm is better.
1.First Section Introduction
Clustering analysis is an important direction in the research of data mining. It is generally divided into partition clustering, hierarchical clustering, density clustering, grid clustering, model clustering, graphtheoretical clustering and so on. K-means algorithm is a kind of partition clustering, it can only deal with numerical data, but cannot deal with the data of classify attribute type. In view of this deficiency, Huang et al. proposed K-modes algorithm [1] , which is suitable for processing the data of classify attribute type. In 2014, Chu Lulu and Jiang Feng [2] proposed a K-modes clustering algorithm based on weighted overlapping distances. Hong Jia [3] et al. proposed a redundant metric method based on interdependence [4] to calculate between different attributes degree of relevance, but its actual effect is only equivalent to giving each classification attribute a different weight. In 2016, Wang Yongsheng [5] et al. introduced parameter weights in the mixed attribute measurement mechanism, and on this basis, constructed a rough-based Integrated classifier of the mixed attribute measurement mechanism of the set. In 2017, Zhao Liang et al . [6] proposed a simple bayesian classifier based distance measurement method of operation results. In 2018, Wen Wu [7] et al. proposed a KNN text classification algorithm based on K center points and rough sets based on the traditional KNN algorithm. Zhang Tengfei [8] et al. introduced a natural adaptation measure to the imbalance degree of family size, and proposed an adaptive measure based on the imbalance degree of cluster size. In 2019, Yang Youlong [9] and others established a hybrid data similarity measure and improved the K-modes algorithm process to optimize the clustering results.
None of the above methods takes into account the effects of redundant data on the clustering result. However, in the actual situation, not every classify attribute is valid for the clustering result, and the influence degree of each classify attribute on the clustering result is also different. Therefore, based on these reasons, this paper introduces rough set theory and information entropy theory to solve the problem of redundant attributes of clustering data sets and the weight of each classify attribute. In the process of clustering, this paper utilizes the attribute reduction algorithm of rough set to remove the redundant attributes, get reduction set of training set,. Then obtains attribute importance degree of each reduction set. Calculates information gain of each classify attribute in the data set, combines rough set to determine the weight of each classify attribute value, and makes better clustering analysis. Experiments prove that the proposed K-modes clustering algorithm based on rough set and information entropy is both feasible and effective.
K-modes Algorithm Analysis
K-modes algorithm is an evolution of K-means algorithm. K-modes algorithm adopts simple 0-1 matching method to define the distance between attribute values of each two objects, and replaces means of K-means algorithm with modes which update iteratively based on frequency method. Definition 1: a dataset X has n objects represented as O , , ⋯ , and r classify attributes represented as A , , ⋯ , , , , ⋯ , . Supposing the classify attribute has mt different attribute values, DOM , , ⋯ , , The distance between the ith object and the jth object named d(x i ,x j ) is as formula (1).
Definition 2: dividing dataset X into k classes represented as , , ⋯ , . The objective function is as formula (2).
, , ⋯ , is the center of the lth class, is the mode of the dth attribute of the lth class. With the above constraints, a traditional K-modes clustering algorithm is presented to minimize the objective function F.
For the input, the number of sample objects is n and the number of classes is k. The output is clustering result.
The specific steps of traditional K-modes clustering algorithm are divided into four steps.
Step (1): randomly select k objects in dataset X as initial clustering centers.
Step (2): calculate the distance d , between all objects in dataset X and k initial centers according to formula (1), and assign each object to the class with the smallest distance to it, then get k classes as , , ⋯ , .
Step (3): in each attribute of each class, select the attribute value with the maximum frequency as the attribute value of the mode of this class. This value is chosen as the new clustering center.
Step (4): repeat step (2) and step (3) until the cluster center no longer changes. The traditional K-modes clustering algorithm only takes advantage of simple matching method to measure the distance between objects and makes clustering, but it ignores influence of redundant attributes and the difference between attributes on the clustering results. In view of this shortcoming, this paper proposes K-modes clustering algorithm based on rough set and information entropy to improve traditional K-modes algorithm.
3.1Rough Set Theory
In 1982, scholar z. Pawlak proposed rough set theory [11][12] . Rough set theory can deal with imprecise knowledge information and discover the hidden knowledge by analyzing and reasoning relevant data sets.
Information Entropy Theory
Definition 7: ∀a ∈ P, the importance degree of attribute value a relative to Q is defined as , the normalization is as formula (4).
Information Entropy
In 1948, Shannon proposed concept of information entropy [13] , which solved the problem of quantitative measurement of information and widely utilized to measure uncertainty in rough set. Definition 8: the information entropy that produced is as formula (5).
In formula (5), n is the number of classes divided by decision attribute D within the discourse domain U. is the number of corresponding elements of D. is the probability that samples in U belong to the ith class , that is ⁄ . Definition 9: for conditional attribute C, a ∈ C.If values of attribute a are set as , , ⋯ , , it will divide discourse domain U into r parts as , , ⋯ , . The conditional entropy of attribute a relative to decision attribute D is as formula (6).
In formula (6), is the row of data for attribute a with the same value . contains data objects of with the number of .
3.2.2Information Gain
The weight value of attribute determined by rough set alone will cause information loss. It can be corrected by information gain in the information entropy theory. Information gain can truly reflect importance degree of attribute. The higher the information gain of attribute is, the greater the influence of attribute on final class is, and the greater the importance of attribute to information system is.
Definition 10: information gain is effective reduction of information entropy. Information gain of attribute a is as formula (7).
Gain
Definition 11: importance degree of information gain is as formula (8).
K-modes Clustering Algorithm Based on Rough Set and Information Entropy
Rough set theory and information entropy theory are introduced in this paper to solve the problem of redundant attributes and weight of each attribute. In the process of clustering, this paper firstly removes redundant attributes by attribute reduction algorithm of rough set, obtains reduction set of training set, and gets attribute importance degree of each reduction set. Then, information gain of each attribute in the data set is calculated, and the weight of each attribute value is determined through combining with rough set. Finally, it can achieve better clustering analysis results.
Dissimilarity between Samples
Definition 12: given any two samples and , according to rough set and information entropy, the dissimilarity between these two samples is the distance between their attributes. The dissimilarity between samples can be indicated as formula (9).
Description of Algorithm Improvement
The thought of algorithm improvement is described as below. Firstly, it utilizes rough set to make data preprocessing and applies attribute reduction algorithm of rough set to eliminate redundant attributes and get a reduction set of training set. Then it calculates importance degree of attribute in each reduction set. On the basis of importance degree, it combines with information gain to determine the weight value of each attribute and finally realizes effective clustering. The specific process of improved K-modes algorithm based on rough set and information entropy is given as bellow.
For the input, the number of sample objects is n and the number of classes is k. The output is clustering result.
Step (1): randomly choose k objects from dataset X as initial clustering centers.
Step (2): calculate distance , , between all objects in dataset X and k initial centers according to formula (9), and assign each object to the class with the smallest distance to it, then obtain k classes as , , ⋯ , .
Step (3): in each attribute of each class, select the attribute value with the maximum frequency as the attribute value of the mode of this class. This value is chosen as the new clustering center.
Evaluation Index
In order to evaluate clustering results, this paper adopts two kinds of evaluation indexes: classification precision (PR) and classification accuracy (AC). PR and AC are respectively defined as where represents the number of objects correctly classify to class i, refers to the number of objects incorrectly classify to other classes, n is the number of all objects in dataset X, and k is the number of classes through clustering.
Experimental Analysis
In this paper, MATLAB R2015b was utilized to compile program to analyze effectiveness of algorithm.
The experimental datasets were all taken from UCI machine learning library, including Soybean-Small Compared with K-modes clustering algorithm based on rough set and information entropy proposed in this paper, Huang's K-modes algorithm is referred to as traditional K-modes algorithm and selected to make a comparative experiment. Before analysis, data in each dataset need to be normalized to eliminate interference caused by different value range for each data attribute.
Samples' number n and classes' number k should be set according to actual situation in table 1. The final classification precision (PR) and classification accuracy (AC) of Huang's K-modes method and this paper's K-modes method are shown from table 2 to table 5. table 2 to table 6, it can be concluded that the improved algorithm in this paper has a larger increase on the performance of five datasets with PR raised 0.0593 on average and AC increased an average value of 0.0727. Thus, the improved K-modes method based on rough set and information entropy ithis paper has higher classification accuracy compared with traditional K-modes algorithm, and it is indicatethat this improved method is both effective and feasible. Qualitatively, its effectiveness is derived from two aspects: (1) K-modes algorithm based on rough set and information entropy introduces concept of rough set, so it has ability to deal with non-deterministic problems. When background knowledge is uncertain, incomplete or has noise, this improved algorithm can also make relatively correct analysis and judgment without bringing in any prior knowledge.
(2) When calculating attribute distance, K-modes algorithm based on rough set and information entropy takes advantage of information entropy to determine weight in the calculation process, so as to ensure that the weight of attributes having a key impact on effective clustering result is larger. This is more conducive to precision and accuracy of clustering analysis and this clustering result is more consistent with displaying attribute features of things.
Conclusion
This paper proposes new K-modes clustering algorithm based on rough set and information entropy on the basis of traditional K-modes algorithm. The combination of information entropy and rough set theory is applied to K-modes clustering algorithm can improve efficiency and accuracy of traditional K-modes algorithm. Experimental results indicate that the proposed K-modes algorithm based on rough set and information entropy in this paper is superior to traditional k-modes algorithm in terms of classification precision and classification accuracy.
|
2021-05-10T00:03:28.941Z
|
2021-02-01T00:00:00.000
|
{
"year": 2021,
"sha1": "347abc5353ea495fcccc9e596c50f89f4897cb4f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1754/1/012239",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ae86619de12835d9e15698f8f67c81b909f56558",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
}
|
270757157
|
pes2o/s2orc
|
v3-fos-license
|
Analysis Determinants of Unwanted Pregnancy in Adolescents : Literature Review
ABSTRACT
INTRODUCTION
Pregnancy in teenagers become beginning from pregnancy no desired, is one phenomenon recent social This the more increase occurs in the environment public.Pregnancy No desired is something a situation experienced by a person women who experience it pregnancy However No want presence baby from its content the (Kep et al., 2021).Teenagers who experience Unwanted Pregnancy generally are victims of rape and extramarital partners.Unwanted Pregnancy in Teenagers can give rise to its height possible abortion triggering an increase in the Maternal Mortality Rate (MMR) (Aisyaroh et al., 2023).
The United Nations Population Fund (UNFPA) issued report situation world population 2022 which shows the height number pregnancy No desired in the world.A total of 121 million pregnancy No desired happen every year.More of 60% of case pregnancy No desired ending in abortion.Additionally, 45% of abortion the done in a way No safe (United Nations Population Fund (UNFPA), 2023).According to the Population and Family Agency National Planning figures Unwanted Pregnancy reached 40%.Reason from non-pregnancy desired by teenagers that is wedding age young, at least Reproductive health knowledge, misinformation on social media, and sex free (Dana, 2023).Incidence rate Unwanted Pregnancies in Central Java Province in 2021 reached 21.89%.The city of Surakarta was obtained presentation Unwanted Pregnancy in Teenagers In 2021 it will reach 6.9% and in the District Sukoharjo obtained yield 9.9% (Dana, 2023).
Incident Unwanted Pregnancy in teenagers become attention special for government scope international nor national.On scope internationally, WHO has promote the Adolescent Sexual and Reproductive Health program For made guide every country in implement prevention programs Unwanted Pregnancy with objective strengthen and improve system service so that number incident Adolescent Unwanted Pregnancy.can minimized (Kep et al., 2021).Preventive measures implemented will potential reduce number incident Unwanted Pregnancy in teenagers, will but if no applied with maximum by the giver service and also awareness teenager as well as there are also other factors such as teenagers who experience it abuse sexual naturally will still give birth to number Unintended pregnancies among teenagers are increasing increase matter This will give rise to various detrimental impact for self teenager (Aisyaroh et al., 2023).
Unwanted Pregnancy in teenagers will have good negative impact from facet physical, psychological, social and spiritual.Impact from facet physique will endanger Mother nor the fetus she is carrying or Mother will try do an abortion that leads to death (Schonewille et al., 2022).Impact side psychology, mother will try run self from not quite enough answer or still continue her pregnancy with compulsion.Unwanted pregnancy that ends with abortion No safe it turns out is one of contributor to the Maternal Mortality Rate (MMR) both in the world and in Indonesia (Panda et al., 2023).Unwanted Pregnancy in teenagers influenced by several factors, namely, low knowledge health reproduction, attitude permissive in socializing, it's easy access to pornographic media push teenager for try and imitate, influence Friend near in associations, and patterns foster parents permissive-indifferent who tends to be let teenager in association so that teenager easy affected in association free (Yah et al., 2020).
Unwanted Pregnancy in teenagers is phenomenon social from year to year to be get attention Serious from various party.Must make efforts done for prevent happen pregnancy No desired by teenagers that is with increase knowledge about prevention Unwanted Pregnancy with give counseling, socialization, providing quality and easy information media accessible to teenagers, as well give education about prevention Unintended Pregnancy.It is knowledge and attitudes that underlie the formation process behavior.Knowledge or cognitive is a very important domain for formation behavior or action somebody (over behavior).If change behavior based with knowledge and positive attitude so will cause good behavior too (long lasting) (Chakole et al., 2022).Based on phenomenon the problems that have been described, the author interested For make Work Final Scientific Midwifery entitled Analysis Determinant Unwanted Pregnancy in Adolescents: Literature Review with approach Evidence Based Case Report (EBCR).
METHOD 2.1 Frameworks
The framework has been prepared is as following: Search article done with using keywords that have been designed on the framework that is "unwanted pregnancy", "unintended pregnancies in teenagers", and "analysis of unwanted pregnancy".Search done machined article database search scientific that is Sciencedirect , PUBMED, BMC, and Google Scholar.
Search result beginning obtained international journals from the Sciencedirect database found 251 articles, PUBMED found 195 articles , BMC found 3091 articles , and Google Scholar found 16,900 articles .Amount whole articles obtained from the Sciencedirect, PUBMED, BMC, and Google Scholar databases a total of 20,437 articles.Furthermore done article screening For determine appropriate article with criteria that have been determined.
DISCUSSION (12 PT)
Unwanted pregnancy in adolescents has produce various worry.The stigma attached to unwanted pregnancy women has give birth to practice exclusion to teenager women who experience untimely pregnancy.Practice This happen in various institutions: family, school, and society.Birth Not even a unwanted pregnancy phenomenon regardless from role family, community and school as participating institutions shape it (Yong et al., 2023).
Many factors can push phenomenon the keep going increase like in the modern era now with supported by increasing technology advanced everyone can with it's easy access various information from various parts of the world.moreover in adolescence is a time when somebody have a desire know what's high about various matter.With condition that 's adolescence can categorized as sufficient time vulnerable because of curiosity the if no controlled with Good can encourage bad things (Murewanhema et al., 2023).One of them is desire For try things new ones are prohibited based on norms or the value it has, however they curious for find out and try it.Like case regarding "sex" that can be give new knowledge for teenager.Besides that possible factors influence teenager pregnancy out of wedlock is exists pressure from boyfriend or partner For do connection sex (Brindis et al., 2023).
Based on results study of the 13 articles used There were 10 articles discussing causal factors happen non -pregnancy desired by teenagers that is exists internal factors and factors external.According to (Murewanhema et al., 2023) internal factors or more commonly within a teenager himself.A teenager will face tasks development connection with change physical and role social.Desire For understandable more from other people can become reason teenager do action deviation, excessive attitude crochet self Alone or always elevate self Alone.(Ermiati et al., 2022).Internal factors that become reason sex premarriage in teenagers including aspects health reproduction, knowledge, attitudes to sexuality, aspect style life, control self, perceived vulnerability to risk health reproduction, activity in social, aspect age, as well religious aspects (Brindis et al., 2023).
External factors are factors from outside a teenager.According to the biggest external factors that have an impact on the occurrence of deviant behavior in a teenager, namely the environment and friends (peers).A frequent friends gather together in One gang, automatically he will infected by attitudes and traits his friend the.Parental love and attention No fully poured out, made a child No feel at home are inside House those, they more often For is outside together his friends (Brindis et al., 2023).External factors become reason behavior sex pre-marriage in teenagers among other things, contact with information media, family, values, socio-culture, and supporting norms social in behavior certain (Amien & Khoiriyah, n.d.).Apart from internal and external factors There are also factors other influences teenager experience pregnancy out of wedlock.One of them is Because lack of knowledge regarding reproductive health that can be done cause happen pregnancy out of wedlock or nonpregnancy desired because lack of education (Ponsford et al., 2022).Then factor other is attitude permissive in association teenager scattered so that pushed him risk happen pregnancy out of wedlock or non -pregnancy desired (Shankar et al., 2023).Then it spread access pornography on social media cause its height desire teenager for try and follow matter the.Increasing technology advanced moment this makes it very easy everyone for look for information (Mbizvo et al., 2023).Plus, adolescence is a time when people want to find out and imitate.So various undesirable things arise when combined with weak parental control.The influence of close friends or peers in relationships is one of the factors that can cause casual sex.Then applying permissive-indifferent parenting patterns carried out by parents can indirectly allow teenagers to socialize so that teenagers are easily influenced in promiscuity (Ayu Dewi Permata Sari & Indriani, n.d.).
Based on results study of the 13 articles used There were 11 articles discussing it impact non -pregnancy desired by teenagers that is impact physical, impact economics, and impact environment.Adolescents with unwanted pregnancies will definitely experience physical impacts on themselves and the fetus they are carrying.Teenager Not yet own ready condition for undergo pregnancy the.According to (Kebede et al., 2021)age teenager, teenager enter into the age at risk.Need available service health pregnancy special for handle teenager with pregnancy no desired so that health adolescents and fetuses can be optimal.Other impacts are abortion and infection.Abortion is one of them frequent alternatives taken teenagers and their families For end her pregnancy.Termination attempt pregnancy This done illegally with drink purchased medication in a way free and access service health after There is complications.Necessity There is effort prevention abotrus Because can give rise to impact infection If done illegally.
Impact economy will felt teenager Because pregnancy No desired This happened at the time teenager still sitting on the bench school.Teenager Not yet own education basics that can be optimizing himself in the search process work.Teenager Still depends to both parents his in fulfillment need its economy.Teenager the sued Work with his abilities and education just.This matter will impact period long on the economy teenager Because no own decent education.Environment social teenager will give impact in a way social to teenagers who experience it pregnancy No desired.Educational process teenager will stopped moment pregnancy No desired happen.Teenager No can develop himself with own continuing education although baby his has born.Additionally, effects from pregnancy no desired teenager choose go from environment and move to environment his partner for avoid effect social conditions provided by the environment (Panda et al., 2023).
Based on results study of the 13 articles used obtained all article discussing effort prevention pregnancy No desired by teenagers that is choose good company, strengthening faith with busy self for worship, limit friendship especially towards opponents type, no watching pornographic films or sites, be consistent with the principles self alone, looking information health reproduction and danger pregnancy No desired by teenagers, and keep busy self with activity positive.Prevention efforts pregnancy no desired according to UNFPA, there are 4 basics that is empowerment teenager women, repair gender inequality, respect right basic humans, and reduce number poverty.For strive prevention the needed policy that is intervention prevention in adolescents aged 10-14 years, prevent happen violence sexual, caring health woman, protect right women, try education for women, involving man become part from solution, there is education sexual relations in children and adolescents, as well equitable development (United Nations Population Fund (United Nations Population Fund (UNFPA), 2023).The community has too role important in effort prevention pregnancy no desired by teenagers.The community in question is consists from role of parents, friends peers, and figures in society.In conclusion, there is a number of thing that can be attempted as business For prevent pregnancy no desired that is choose good company, strengthening faith with busy self for worship, limit friendship especially towards opponents type, no watching pornographic films or sites, be consistent with the principles self alone, looking information health reproduction and danger pregnancy no desirable in teenagers, and keep busy self with activity positive (Chakole et al., 2022).
CONCLUSION
Based on results study of the 13 articles used in studies literature This obtained conclusion as following: Causative factor happen non-pregnancy desired by teenagers that is exists internal factors and factors external.Impact non-pregnancy desired by teenagers that is impact physical, impact economics, and impact environment.Prevention pregnancy No desired by teenagers that is choose good company, strengthening faith with busy self For worship, limit friendship especially towards opponents type, no watching pornographic films or sites, be consistent with the principles self alone, looking information health reproduction and danger pregnancy no desired by teenagers, and keep busy self with activity positive.
|
2024-06-27T15:08:32.629Z
|
2024-06-24T00:00:00.000
|
{
"year": 2024,
"sha1": "580fa9d40ef5ee1668db7ce18a051a367bb2af7b",
"oa_license": "CCBY",
"oa_url": "https://jurnal.ukh.ac.id/index.php/KN/article/download/1337/613",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "860a4fc7cc8adf751e8edb88de66a01e59c25a35",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
85499008
|
pes2o/s2orc
|
v3-fos-license
|
Axonal Conduction Velocity Impacts Neuronal Network Oscillations
Increasing experimental evidence suggests that axonal action potential conduction velocity is a highly adaptive parameter in the adult central nervous system. Yet, the effects of this newfound plasticity on global brain dynamics is poorly understood. In this work, we analyzed oscillations in biologically plausible neuronal networks with different conduction velocity distributions. Changes of 1-2 (ms) in network mean signal transmission time resulted in substantial network oscillation frequency changes ranging in 0-120 (Hz). Our results suggest that changes in axonal conduction velocity may significantly affect both the frequency and synchrony of brain rhythms, which have well established connections to learning, memory, and other cognitive processes.
I. INTRODUCTION
The timing of action potential (AP) arrival is of great importance to information processing in brain circuits [1]. Experimental studies have revealed that a number of pathways, such as thalamic pathways to the cortex or the auditory brainstem, maintain fine-tuned spike time arrival, with submillisecond precision [2,3]. Individual axons primarily control spike time arrival through their AP conduction velocity, which is modifiable through various mechanisms including ion channel densities, axonal structure, and myelinating glia, which wrap their processes around axons and form the myelin sheath that changes AP propagation speeds [1]. The prevailing hypothesis has always been that these mechanisms actively shape neuronal circuits during development of the central nervous system (CNS) [4]. Aside from slow homeostatic adjustments, conduction velocities in the adult brain have been considered static [4]. However, the emergence of new visualization tools has revealed that AP conduction velocities are highly adaptive in adult neuronal circuits [4].
Axons exhibit conduction velocity plasticity in response to neuronal activity through multiple mechanisms operating at different time scales [1]. Unmyelinated axons can adjust their conduction velocity through axon depolarization and diameter adjustment, while myelinated axons tend to rely more on their interaction with the myelin sheath [5,6]. Interestingly, some of these mechanisms rapidly alter conduction velocity in a matter of seconds or minutes. For example, depolarization of an unmyelinated axonal cell membrane, by prior APs, results in conduction velocity slowing within a matter of minutes [5]. Similarly, depolarization of oligodendrocytes, the primary myelinating glial cell in the CNS, increases conduction velocity in the axons they myelinate from 20 seconds to 3 hours [9]. Moreover, new myelin formation is driven by local neuronal activity, thereby changing local axonal conduction velocities over the course of several hours [10]. This last form of myelin plasticity has been shown to play a critical role in motor learning, suggesting that the effects of adaptive axonal AP conduction velocities may translate to network level dynamics.
In this work, we computationally investigated the effects of axonal AP conduction velocity distributions on the oscillatory behavior of neuronal networks. To do so we developed an Izhikevich-type network that conformed to several generally observed characteristics of cortical networks including the ratio between excitatory and inhibitory neurons, sparseness of connectivity, local interneuron inhibition, and lognormal synaptic weight distributions [8,11,13]. Most essential to our goal, is that such an Izhikevich network has been shown to exhibit brain-like rhythms [12]. Our analysis revealed multiple, nontrivial relationships between network conduction velocity statistics and the corresponding network oscillation frequency, and synchrony level. Our results suggest a fascinating possibility that neuronal networks may change their oscillatory activity in response to precisely tuned AP propagation delays.
II. METHODS
We developed Izhikevich-type neuronal networks with all parameters being static. Each network was initialized and simulated with a different conduction velocity distribution that was biologically constrained within the reported range [14]. The networks were analyzed for the ability of all their neurons to get entrained into synchronized activity, namely oscillation frequencies and levels(extent) of neuronal synchrony.
Our network architecture followed the one previously reported in [12]. Briefly, our network consisted of 1000 Izhikevich neurons, a simple, semi-empirical model of a cortical neuron [12]. The neuronal membrane fluctuation was described by a set of two differential equations: where , ∈ ℛ are the fast activation and slow recovery variables, and , , , ∈ ℛ determined neuron type. Neurons spiked if > 30 (mV), and variables , were reset as, The neuronal population consisted of 80% excitatory, regular spiking (RS), and 20% inhibitory, fast spiking (FS), neuron types, per experimental data [11]. Neuron parameters were obtained from [12]. Network connectivity depended on neuron type, with each neuron uniformly connected to 10% of the network [8]. Outgoing connections from excitatory neurons were connected to both excitatory and inhibitory neurons, while inhibitory neurons were connected only to excitatory neurons. We initialized excitatory and inhibitory synaptic weights with (μ, σ 2 ) exc = (1.67,0.5) and (μ, σ 2 ) inh = (1.49,0.5) lognormal distributions, preserving the experimentally observed functional form of the distribution [13]. Lastly, axonal AP conduction velocities were modeled as discrete time delays, with a resolution of 1 (ms); same as the network integration timestep. All inhibitory connections were assigned the minimum delay of 1 (ms), mimicking local interneuron inhibition [11]. Experimental data shows that the distributions of corticocortical axonal AP conduction delays from various cortical areas to the midline can be approximated using normal distributions described by moments ranging in = [3,6] (ms) and σ 2 = [0.5,5.0] with all measured delays largely limited to 0 − 10 (ms) [14]. We doubled this range to 0 − 20 (ms) to account for the fact that experimental data measured pathway delays only up to the midline, or half of possible total length. Therefore, for simulation purposes we restricted excitatory conduction delay distributions to normal distributions with ∈ [0,20] (ms) and 2 ∈ [0.5,5.0].
The purpose of the simulation process was to relate conduction delay distribution to global network dynamics. Simulations consisted of four steps, repeated for each unique delay distribution. First, a neuronal network was generated with a unique conduction delay distribution using the previously described rules and statistics. Second, the network was launched and stimulated continuously for 5 (s). Stimulation consisted of randomly selecting a neuron every millisecond and injecting 20 (pA) of current into it. During network run time, all network parameters remained constant. Third, spike data was collected for the last 4 (s) of simulation time. This simulation process was repeated several times for each conduction delay distribution simulated to ensure robustness and generalization of our results. We found the results were repeatable with little to no variation. Lastly, spectral and synchronization analysis were performed on the recorded data.
Network oscillation frequency was obtained from recorded network spike data through spectral analysis. First, each simulated network's 2-dimensional, 800 × 4000 , spike data was averaged over all neurons resulting in the 1-dimensional network activity time series, (t), signal. We then applied the Fast Fourier Transform (FFT) algorithm to 1000 (ms) segments of the (t) signal, to extract its frequency components. Lastly, the largest frequency component was taken to represent the oscillation frequency of the corresponding network.
Network synchrony was measured on spike data using a metric based on variance of time-averaged, neuronal spike fluctuations [15]. This measure computes the variance of network level fluctuations, normalized by the average variance of fluctuations of individual neurons, as described by the following equation: where 〈. . . 〉 represents an average over time, ( ) is the spike train over time for neuron , and ( ) is the network spike train signal averaged over neurons. This measure positively quantifies network synchrony on a scale 0 to 1.
III. RESULTS
We analyzed the effects of different axonal conduction delay distributions on network oscillations and synchronization. Our results revealed a nonlinear relationship between network oscillation frequency and mean delay of network connections. Additionally, network delay variance impacted both the form of the oscillation frequency response and network synchronization.
Network oscillation frequency exhibited a highly sensitive response to the mean delay of network connections. This nonlinear relationship is depicted in Fig. 1 and 2, depicting network oscillation as a function of average network delay. Oscillations ranged in the biologically relevant 0 -120 (Hz). Interestingly, sub-millisecond changes in mean delay produced frequency changes on the order of tens of Hertz, shown in Fig. 1 and 2. For example, more than 80 (Hz) of oscillation frequency range is covered by a mere 1 − 2 (ms) change in network mean conduction delay.
Variance of network delay distribution appeared to control both the position and form of the network oscillation frequency response curve. First, variance determined the shift of the response curve along the mean delay axis. Fig. 1 demonstrates this relationship most clearly, where higher variance values tend to shift the response curve towards higher mean delay values and lower frequency ranges. Conversely, Fig. 2 shows that for variance values, 2 < 2 , the form of the response curve completely changes resulting in multiple high frequency peaks. Interestingly, the low variance regime resulted in highest network frequency of approximately 125 (Hz). Additionally, variance impacted network synchrony.
Our model exhibited an inverse relationship between the variance of network delay distributions and the degree of network synchronization. This phenomenon was initially observed through network oscillation frequency power levels and network raster plots. Fig. 3 shows that the relative power
Network Delay Variance
Network Frequency Power level increases with decreasing delay variance, suggesting that network dynamics tended towards greater order with minor frequency components becoming less pronounced. Analysis of individual network raster plots confirmed this through large amplitude oscillations for low delay variance values, and small amplitudes for higher delay variance values, shown in Fig. 5. To conclusively verify this relationship, we measured network synchrony, shown in Fig. 4. Network synchrony appeared to be inversely related to delay variance, while the mean delay positively shifted the synchrony curve. Interestingly, for delay distributions with variance, σ 2 ≥ 2, the abrupt rise in network frequency occurred precisely at the cusp of rapidly increasing synchronization. This can be seen in the inset of Fig. 4, where the point on each curve corresponds to the mean at which network frequency rapidly rose as seen in Fig. 1.
IV. DISCUSSION
In this paper, we demonstrated the existence of a nonlinear relationship between neuronal network conduction delay distribution and network oscillation frequency, and synchronization. Both the distribution mean and variance inflicted non-trivial effects on network behavior. Conforming to experimental evidence our model proposed a new perspective on the origins of brain rhythms.
Our modeling results suggested that the entire biologically observed network oscillation frequency range of approximately 0 − 100 (Hz), could be partly driven by precise, submillisecond changes in the neuronal network's average axonal conduction delay. Intriguingly, experimental evidence corroborates such precise tuning in neuronal signal transmission, where it has been shown that the AP transmission speed between any two specific neurons is maintained at the sub-millisecond time scale with high degree of reproducibility [1]. This hints at the possibility that adaptive signal velocity mechanisms play a significant role in observed network level phenomena.
Our computational results further suggested a fascinating possibility that AP propagation speeds impact global network dynamics. Given that oscillations and synchronization are fundamental components of information processing in the brain [16], understanding the role that neuronal and non-neuronal cells have in higher cognitive functions is crucial. This challenges the long-held notion of glial passivity in information processing and reveals potential roles for non-neuronal cells proposed by us and others [18,19]. For instance, since oligodendrocytes are now known to adaptively affect AP velocity, through actively restructuring white matter [1,17], this study paves the way for computationally studying the interaction of neuronal and nonneuronal cells in brain health and disease.
This work supports our ongoing effort to investigate network AP conduction velocity distributions in the context of other network parameters such as connectivity, excitation vs. inhibition ratio, and synaptic weight distributions all of which are known to affect network level properties. This will enable us to study the prevalence of our findings in more comprehensive models of brain cells and networks.
|
2019-03-22T18:38:12.000Z
|
2019-03-22T00:00:00.000
|
{
"year": 2019,
"sha1": "b0a036b0f5a4167d1cbe248cfa56fe43c58993a5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1903.09671",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b0a036b0f5a4167d1cbe248cfa56fe43c58993a5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Physics",
"Biology"
]
}
|
36161844
|
pes2o/s2orc
|
v3-fos-license
|
Impact of Non-Linear High-Power Amplifiers on Cooperative Relaying Systems
In this paper, we investigate the impact of the high-power amplifier non-linear distortion on multiple relay systems by introducing the soft envelope limiter, traveling wave tube amplifier, and solid-state power amplifier to the relays. The system employs amplify-and-forward either fixed or variable gain relaying and uses the opportunistic relay selection with outdated channel state information to select the best relay. The results show that the performance loss is small at low rates; however, it is significant for high rates. In particular, the outage probability and the bit error rate are saturated by an irreducible floor at high rates. The same analysis is pursued for the capacity and shows that it is saturated by a detrimental ceiling as the average signal-to-noise ratio becomes higher. This result contrasts the case of the ideal hardware where the capacity grows indefinitely. Moreover, the results show that the capacity ceiling is proportional to the impairment's parameter and for some special cases the impaired systems practically operate in acceptable conditions. Closed-forms and high SNR asymptotes of the outage probability, the bit error rate, and the capacity are derived. Finally, analytical expressions are validated by the Monte Carlo simulation.
I. INTRODUCTION
C OOPERATIVE relaying-assisted communication is the corner-stone of the next generation wireless communication systems because of numerous advantages, such as coverage extension, reliability, uniform quality of service (QoS) [1], spatial diversity gain and hostpot throughput improvement [2]- [6]. Consequently, future mobile broadband networks such as 3GPP LTE-Advanced, IEEE 802.16m and IEEE 802.16j are expected to support communications based on relaying. Based on that, relaying networks have gained enormous attention over the recent decade both in industry and academia [7]- [10].
In most networking systems, the relaying technique is achieved in two steps. In the first step, a source (S) transmits the signal and all the relays are sensing. In the second time slot, the relays cooperatively transmit the information symbols to the destination (D). There are many relaying techniques, but the most commonly used are Amplify-and-Forward (AF) [11]- [14], Decode-and-Forward (DF) [15]- [17] and Quantize-and-Encode/Forward [18], [19]. The benefits of cooperative relaying come at the expense of the low signal coverage in farther areas. In fact, some cellular areas suffer from low coverage and power outage and it has been shown that an efficient way to increase the coverage reliability and the network scalability is to implement a set of relays along the path between the base station and the farthest areas. Furthermore, the inefficient utilization of the spectrum can be reduced by using relay selection protocols. These protocols state that a single relay is selected following some specific rules to forward the signal to D.
A. Literature Review
In the literature, there are many relay selection protocols, but the most popular are partial relay selection (PRS) and opportunistic relay selection (ORS) [20]- [23]. For the PRS, the selection is achieved based on the channel state information (CSI) of either the first or the second hop. However, ORS requires the knowledge of the CSI of the overall channels. Further details about this protocol will be given later in the next section. Although the PRS has low complexity, short network delay and low power consumption, ORS is known to be more efficient specifically, the signal outage, error performance and system capacity are better [21], [22]. In addition, the feedback signals between S, relays and D are slowly propagating, so it is important to take into account this delay and consider outdated CSI rather than perfect channel estimation during the relay selection. Moreover, the outdated CSI can be considered for the amplification gain at the relay as well. This point will be detailed further in the section of the system model.
The vast majority of previous work assumed relaying system with ideal transceivers [24]- [28]. However, in practice the transceivers are susceptible to many types of imperfections such as HPA non-linearities, In phase and Quadrature phase (I/Q) imbalance, phase noise, DC offset [29]- [32]. Schenk et al. [33] have considered I/Q imbalance and proved that this impairment attenuates the magnitude of the signal. Furthermore, Maletic et al. [34] characterized the effect of non-linear HPA and they demonstrated that the system performance such as the outage probability, BER and the ergodic capacity deteriorated compared to the linear HPA. As long as the impairment becomes more severe, an irreducible floor is created that it cannot be crossed by increasing the average transmitted power [35], [36]. Related work to cooperative relaying communication are prominent in the literature. In fact, Bjornson et al. [35] have considered a dual-hop system with single relay employing AF and DF relaying schemes wherein the source and the relay both suffer from aggregate hardware impairment. This work quantified the impacts of the impairments on the outage probability and the ergodic capacity. They proved that the capacity is finite and limited by a hardware ceiling and they also showed that DF is more resilient to the impairments than AF. The same research group also considered a two-way relaying under the presence of relay transceiver hardware impairments to prove that the outage probability and the symbol error probability are saturated by irreducible floors created by the hardware impairments. In the same context, Studer et al. [31] considered MIMO transmission with residual transmit-RF impairments wherein they proposed Tx-noise whitening technique to mitigate the performance loss. Moreover, Qi and Aïssa [37] provided a framework analysis of the compensation of the power amplifier non-linearity in MIMO transmit diversity systems wherein they derived the expressions of the total degradation, the symbol error rate and the system capacity. Advanced research attempts [38], [39] have considered mixed RF and FSO (Free-space optic) relaying system suffering from aggregate hardware impairments where the RF channels experience Rayleigh fading and the FSO channels are subject to Gamma-Gamma and Double Weibull fading, respectively. Furthermore, additional work have focused more on investigating the cooperative diversity of multiple relay systems but assuming ideal hardware. In fact, [21] and [22] have proposed dual-hop multiple relay systems with PRS and ORS protocols with outdated CSI wherein they derived the closed-forms of the outage probability, the BER and they also provided the diversity and the coding gains of the proposed systems. Although [21] and [22] came up with novel expressions, they neglected the impact of the impairments. Given that they assumed that such systems are promising for the advancement of future wireless communications since they are of high rate and hence, the assumption of neglecting the impairments cannot hold in this situation. To address this shortcoming, we keep the same configuration of the proposed system of [21] and [22] but we introduce hardware impairments into the relays. Such impairments' models presented by this work are detailed in the next subsection.
B. HPA Overview
The origin of the HPA comes from the fact that the relaying amplification is not linear which creates a non-linear distortion that severely degrades the quality of the signal. In practice, there is a finite peak level for which any power amplifier can produce an output power without exceeding that power constraint. This peak constraint is primarily amplifier-dependent and varies within a given bounded range. If the amplifier is unable to provide the required power, a non-linear distortion over the peak is introduced and such phenomenon is called clipping (clipping factor) of the power amplifier.
The HPA can be classified into memoryless and with memory. In fact, the HPA is said to be memoryless or frequency-independent if its frequency response is constant over the operating frequencies range. In this case, the HPA is fully characterized by the famous characteristics AM/AM (amplitude to amplitude conversion) and AM/PM (amplitude to phase conversion). AM/AM and AM/PM will be given in more details in Section II-C. On the other side, if the frequency characteristics totally depend on either the frequency components or the thermal phenomena, the HPA is said to be with memory [40]. Such system can be characterized by realistic memory models, such as the Volterra, Wiener, Hammerstein, Wiener-Hammerstein and memory polynomial models [41].
In practice, there are various models of memoryless HPA but the most commonly known are Soft Envelope Limiter (SEL), Traveling Wave Tube Amplifier (TWTA) and Solid State Power Amplifier (SSPA) or also called the Rapp model [42]- [48]. The SEL is typically used to model a HPA with a perfect predistortion system while the TWTA has been basically employed to model the nonlinearities impact in an OFDM system. In addition, SSPA is characterized by a smoothness factor to control the transition between the saturation and the linear ranges. This model eventually introduces a linear characteristic for low amplitudes of the input signal and then it is limited by an output saturation level. For larger values of the smoothness factor, SSPA practically converges to the SEL model.
C. Contribution
In this paper, we introduce three models of HPA nonlinearities at the relays, which are SEL, TWTA and SSPA [34], [49]. Then we will study the effect of the relay saturation on the outage probability, the average BER and the ergodic capacity under different relaying schemes. These relaying modes are fixed gain (FG), variable gain (VG) version I (VGI) and version II (VGII). Note that the first version of the variable gain scheme is based on calculating the amplification gain of the instantaneous CSI feedbacks between S, relays and D. The signal amplification will be based on this outdated CSI. For the second version of (VG), the relays are supposed to have an updated version of the CSI information to compute the amplification gain. To the best of our knowledge, this is the first work elaborating on a global framework analysis of multiple relays under the effect of various models of HPA non-linear distortion. We will show that both the outage and the error performances are saturated by inevitable floors while the system capacity is limited by a finite ceiling. For some special cases, we will show that the system can operate in acceptable conditions with the presence of the hardware impairments.
This work makes the following contributions: • Present a detailed description of the system model and the relay selection protocol. • Provide an analytical framework of the impairments and how to convert the non-linear distortion into a linear impact on the system using the Bussgang linearization theory. • Present the statistics of the channels in terms of the high order moment, the probability density function (PDF) and the cumulative distribution function (CDF). • Once obtaining the signal-to-noise-plus-distortionratio (SNDR), which is a measure of the degradation of the signal by unwanted or extraneous signals including noise and distortion, we will derive the closed-forms of the outage probability, BER and the ergodic capacity for FG, VGI and VGII. • Finally, to obtain further insights on the proposed system, we derive asymptotic expressions of the outage probability and BER at high signal-to-noise-ratio (SNR) regime. Capitalizing on these asymptotes, we derive the diversity gain of the proposed system.
D. Structure
This paper is organized as follows: Section II describes the system model and the impairment types. The outage probability analysis is provided in Section III while the BER analysis is given in Section IV. Analytical and numerical results are detailed in Section V. Concluding remarks and future directions are presented in the final Section.
II. SYSTEM MODEL
The system is composed of a source S, destination D and N parallel relays R n , n = 1, .., N wirelessly connected to S and D as shown in Fig. 1. The channels of the first and the second hops are symmetric, independent and indentically distributed following the Rayleigh distribution.
A. CSI Model
As we mentioned earlier, we assumed an outdated CSI instead of a perfect one. In this case, the relay selection protocol is achieved based on a delayed version of the CSI and not on the current one due to the feedback delay. In this way, the outdated and the current channels gains are denoted by h and h, respectively. Hence, the outdated CSI between S -kth relay and kth relay -D are, respectively, modeled as follows: and where w 1(k) and w 2(k) are two random variables that, respectively, follow the circularly symmetric complex Gaussian distribution with the same variances of the channels' gains h 1(k) and h 2(k) . The time correlation coefficients ρ 1 and ρ 2 are between the channels h 1 − h 1 and h 2 − h 2 , respectively. The coefficients ρ 1 and ρ 2 are given by the Jakes' autocorrelation model as follows [50]: and where J 0 (·) is the zeroth order Bessel function of the first kind [51, eq. (8.411)], T d is the time delay between the current CSI and the delayed version and f d is the maximum Doppler frequency of the channels.
B. Opportunistic Relay Selection
This protocol states that each relay should quantify its appropriateness as an active relay, using a function describing the link quality of the two hops. The first step is to select the minimum channel gains between two hops for each relay Eq. (5). Based on the first step, the relay of rank k characterized by the strongest bottleneck is the one with the best overall path between S and D Eq. (6). Then where γ 1(i) , γ 2(i) are the instantaneous SNRs of the i th channel of the first and second hops, respectively. Since the relays operate in a half-duplex mode, the best relay is not always available and so the control unit will select the next best available relay.
C. HPA Non-Linearities Model
We assume that the relays are subject to HPA non-linearities. For a given transmission, the selected relay receives the signal y 1(k) from S and then amplifies it by the factor gain G. This amplification takes place in two time slots. In the first phase, the gain G is applied to the received signal as follows: In the second phase, the output signal φ k passes through a non-linear circuit as follows: where f (·) is the function of amplitude and phase of the non-linear circuit. In addition, we assume that the relays power amplifiers are memoryless. A given memoryless power amplifier is characterized by both AM/AM and AM/PM. The signal at the output of the non-linear circuit is given by [37]: where arg(φ k ) is the phase of the complex signal φ k and F a (·), F p (·) are the characteristic functions AM/AM, AM/PM, respectively. 1) SEL: This type of impairment is suitable to model a HPA with perfect predistortion system. The characteristic functions of SEL are expressed as follows [42]: where A sat is the HPA input saturation amplitude.
2) SSPA: This impairment model, also called the Rapp model, was detailed in [52] and presents only the amplitude characteristic AM/AM. The functions are given by: where ν is the smoothness factor that controls the transition from linear to saturation domain. As ν converges to infinity, SSPA effectively converges to the SEL model.
3) TWTA:
This impairment is used to model the impact of non-linearities in OFDM systems [53], [54]. The characteristic functions of this model are given by: (12) where 0 controls the maximum phase distortion.
In practice, to mitigate the impacts of the non-linear distortion, the HPA operates at an input back-off (IBO) from a given saturation level. In the literature, there have been many definitions of the IBO, but in this work, we will adopt the following definition: where σ 2 is the mean power of the signal at the output of the gain block. Fig. 2 presents the variations of the AM/AM with respect to the normalized input modulus for SEL, SSPA and TWTA.
D. Bussgang Linearization Theory
This theory states that the output of the non-linear power amplifier circuit can be expressed in terms of a linear scale parameter δ of the input signal and a non-linear distortion τ which is uncorrelated with the input signal and distributed following the complex circular Gaussian random variable τ CN (0, σ 2 τ ). In this case, the characteristic function of the amplitude is given by: We can derive the expressions of δ and σ 2 τ following the two corollaries. Corollary 1: The linear scale δ can be derived as follows: Corollary 2: The variance of the non-linear distortion is given by: For the SEL model, δ and σ 2 τ can be expressed as follows [34, eq. (10)]: where erfc(·) is the complementary error function.
To simplify the calculation for the case of SSPA, we first assume that the smoothness factor (ν = 1) and then we refer to [55] to derive the parameters as follows: where Ei(·) is the exponential integral function. If the phase characteristic AM/PM is negligeable (i.e., 0 ≈ 0), the impairment parameters δ and σ 2 for TWTA can be obtained by [34, eq. (11)]:
E. Statistics of the Channels
Since the channels of the first hop experience Rayleigh fading with outdated CSI and the system employs the opportunistic relay selection protocol, the PDF of the SNR of the first hop of the kth channel is given by [22, eq. (21)]: Due to the symmetry of the channels fading, the CDF of the second hop at the kth channel can be expressed as follows: where P n , S m , Q n, j , R n, j , T m,i , U m,i and γ are defined by: The nth moment can be derived using [51, eq. (3.326.2)]:
F. End-to-End SNDR: Fixed Gain Relaying
The relaying gain of the FG scheme is given by: where P 1 is the average transmitted power from S and σ 2 0 is the noise variance.
The end-to-end SNDR of the FG relaying can be expressed as follows: where ζ is defined by: For ideal relays (ζ = 1), the end-to-end SNR can be written as follows:
G. End-to-End SNDR: Variable Gain Relaying I
In this relaying scheme, the relays compute the gain using the CSI of the channel S-R k . The relays already know the CSI information since it was measured during the relay selection. However, this CSI information is not updated and it will be used to calculate the signal amplification gain which can be written as follows: The end-to-end SNDR is given by:
H. End-to-End SNDR: Variable Gain Relaying II
This relaying scheme states that unlike the VGI, the relays computes the amplification gain using the current estimated CSI rather than the outdated one. Although this scheme appears to be more realistic and sophisticated, it is very complex for implementation compared to the first version of VG since the two CSIs h and h are required to be estimated by the control unit. The estimation of the CSI h is achieved by the superimposed pilots used during the feedback exchange between the various nodes of the system.
The amplification gain can be obtained by: In this case, the end-to-end SNDR can be derived as follows:
III. OUTAGE PROBABILITY ANALYSIS
The outage probability is the probability that the overall SNDR falls below a given threshold γ th of acceptable transmission quality. It can be defined as: where γ is the effective overall SNDR and Pr(·) is the probability notation.
A. Fixed Gain Relaying
After substituting the expression of the effective SNDR (24) in Eq. (31) and applying the following identity [51, eq. (3.324.1)], the outage expression is given by: where K ν (·) is the modified Bessel function of the second kind of order ν and the parameter c is given by: To get a more accurate insight on the system, we derive an analytical expression of the outage probability at high SNR regime which is given by Eq. (33).
where γ e is the Euler-Mascheroni constant.
Proof:
The proof of Eq. (33) is provided in appendix A.
B. Variable Gain Relaying I
In this case, we should substitute the expression of the effective SNDR (28) in Eq. (31). Since the derivation of a closed-form of the outage performance of VGI is very complex, an approximation is provided by Eq. (34). Proof: The derivation of Eq. (34) is detailed in appendix B.
C. Variable Gain Relaying II
After replacing the end-to-end SNDR (30) in Eq. (31) and after applying the identity [51, eq. (3.324.1)], the outage probability can be finally expressed as follows: For every value of x very close to zero, we get K 1 (x) ≈ 1 x and e x ≈ 1 + x. Based on these asymptotic expressions, a simpler approximation of the outage expression of VGII at high-SNR regime is given by:.
For ideal or linear relaying, the diversity gain can be derived from Eqs. (33,34,36). It can be expressed as follows: If the relays are impaired, the outage performance saturates by the impairments floor and so the diversity gain in this case is equal to zero (G d = 0).
IV. AVERAGE BIT ERROR RATE ANALYSIS
In this section, we address the error performance of the system for different modulation schemes and considering the three relaying modes. The average BER for various modulation formats such as BPSK, M-PAM, M-PSK and M-QAM is defined by: where 2 dt is the Gaussian Q-function and α, β are the modulation parameters. Using integration by parts, Eq. (37) can be expressed as follows:
A. Fixed Gain Relaying
To derive a closed-form of the average BER for the FG relaying scheme, we should substitute the expression of the outage probability (32) in Eq. (38). Then we must apply the identity [56, eq. (4.17.37)] to get the expression as follows: where W p,q (·) is the Whittaker function. Now, we should substitute Eq. (33) in (38). After applying the identity [57, eq. (2.3.3.1)], the high SNR approximation of the average BER of FG relaying can be expressed as follows:
C. Variable Gain Relaying II
Since the derivation of a closed-form of the average BER is complex, we should consider a simpler form. After some mathematical manipulation, the analytical approximation is given by Eq. (42).
where p F q (a,b,z) is the hypergeometric function. Proof: The proof is detailed in appendix C. After substituting the expression (36) in Eq. (38) and applying the identity [57, eq. (2.3.3.1)], the asymptotic high SNR of the BER is given by:
V. ERGODIC CAPACITY ANALYSIS
The channel capacity, expressed in (bit/s/Hz), is defined as the maximum error-free data rate transmitted by the system. It can be written as follows: Since the transmission is achieved in two steps, the system capacity is multiplied by the factor 1 2 . After some mathematical manipulation, the egodic capacity can be expressed as follows: where γ is the end-to-end SNDR and F γ is the complementary cumulative distribution function (CCDF) of γ . Since the non-linear distortion deteriorates the system performance, an indesirable ceiling is created by the impairments which limits the achievable rate of the system. The ceiling expression is given by [34, eq. (37)]: where ε is the clipping factor of the hardware impairments.
A. Fixed Gain Relaying
After replacing the CCDF of the SNDR (24) in (46) and applying some mathematical manipulation, the closed-form of the ergodic capacity is derived in term of bivariate Meijer G-function as follows:
B. Variable Gain Relaying I
In this case, we should replace the expression of the CCDF of (28) in Eq. (45). After referring to the identity [51, eq. (3.353.5)], the approximation of the capacity is derived as follows:
C. Variable Gain Relaying II
Since the integral (45) is not solvable for the case of VGII, we derive a very tight upper bound in term of bivariate Fox H-function.
VI. NUMERICAL RESULTS AND DISCUSSION
In this section, we present the analytical and simulation results illustrating the effects of the hardware impairments, the relaying schemes, the number of the relays, the rank of the selected relay and the outdated CSI on the system. The performance metrics used to quantify the robustness and the resiliency of the system, are the outage probability, the average bit error rate and the ergodic capacity. The analytical results are confirmed by Monte Carlo simulation considering 10 9 iterations. Fig. 3 shows the variations of the outage probability of FG, VGI and VGII with respect to the average SNR. As expected, it is clear that the variable gain relaying outperforms the FG scheme. Regarding the variable gain protocol, the system performs better when using the second version compared to the first one. In fact, the main difference between the two versions is the CSI used for the relaying amplification. Given that the second version employs the perfect CSI retrieved by the pilot training technique, the amplification in the first version is based on the outdated CSI. As a result, the CSI used for the amplification makes the second version of the variable gain relaying more efficient than the first one. Fig. 4 presents the dependence of the outage performance of VGII relaying against the average SNR under the different models of impairment. For low SNR, the system response to the impairment is acceptable as the three impairments' models have the same impact. As the average SNR increases above 20 dB, the system responses to the various hardware impairments significantly differ from each other. We note that in the high SNR region, the impairments effect becomes more severe particularly for the TWTA and SSPA. As the average SNR exceeds 25 dB, an irreducible outage floor is created which inhibits the performance from converging to zero. Graphically, we note that the system saturates at 0.002 and 0.0003, respectively, for TWTA and SSPA. Consequently, the TWTA has the most detrimental effect on the system. For the SEL impairment model, the system still operates in acceptable conditions and there is no significant impact on the system performance especially the non-creation of the outage floor unlike SSPA and TWTA at least below 40 dB. Fig. 5 shows the variations of the outage probability of FG relaying against the average SNR under the SSPA impairment and for various number of relays. For low SNR and below 10 dB, the number of relays has no remarkable impact on the outage probability. However, as the SNR grows largely, the performance significantly deviates from each other. In fact, the system operates better as the number of relays increases. To achieve an outage probability equal to 10 −3 , the system requires the following average SNRs 20 dB, 27 dB and 35 dB, respectively, for N = 10, 5 and 2 relays. Thereby, the main contribution of the number of relays is useful to reduce the power consumption of the system. This main advantage is explained by the fact that for a higher number of relays, there is a higher probability to select a better channel/relay. However, as the average SNR increases, the impairments effect becomes more severe as the outage probability saturates by the irreducible floor created by the impairments. Even the number of relays play no significant role in this situation. Therefore, the number of relays introduces limited improvements at low SNR, however, it does not contribute in anyway as the impairments become severe at high SNR. Fig. 6 illustrates the variations of the BER of VGII relaying under the SEL impairment and for different values of the IBO. For low SNR below 20 dB, the IBO factor has no observable impact on the system, i.e, the BER is the same regardless of the IBO values. However, when the average SNR overtakes 25 dB, the IBO factor gets more involved. In fact, as the IBO value increases, the system performs better. For lower value of IBO = 5 dB, the BER is limited by a floor created at higher value of the SNR. Considering a large value of IBO = 10 dB, the system performance improves and the BER floors are mitigated. Technically, increasing the IBO value comes directly from increasing the input saturation level A sat . We already showed that the saturation's amplifier is relieved as the input saturation level increases. For a lower value of A sat , i.e, lower value of IBO, the system becomes more saturated by the impairment's distortion. Consequently, the relation between the input saturation level and the IBO thoroughly explains the impact of higher values of IBO on the system performance. Fig. 7 illustrates the variations of the average BER of FG relaying under the SSPA impairment and for different values of the correlation coefficients ρ 1 and ρ 2 . We note that the system performs better as the correlation coefficients increase. In fact, both the arrangement and the selection of the relay are based on the CSI monitored by the control unit. As the correlation coefficients grow, the CSI estimation becomes more accurate and so the relay selection will be based on error-free CSI estimation. Furthermore, when we achieve a full correlation between the CSIs (ρ 1 , ρ 2 ≈ 1), the performance improves further particularly when the relay of the last rank is selected. However, when the correlation coefficients decrease, i.e, the CSIs become more uncorrelated, the relay selection will be based on a completely outdated CSI. In this case, even when we select the relay of the last rank N, the performance gets worse since the selection of the best relay becomes uncertain and there is no relation between the received CSI and the rank of the selected relay.
The same results given by Fig. 7 are confirmed by other approaches in figures 8 and 9 which present the variations of the channel capacity for different values of k and for high and low correlation coefficients, respectively. Unlike the configuration assumed in Fig. 7, the correlation coefficients (ρ 1 , ρ 2 ) are fixed to a high value (0.95) and the rank k is varied. We note that the capacity performance significantly enhances when the rank k increases. Given that we assumed the opportunistic protocol for the relay selection, we stated that the control unit arranges the CSIs in an increasing order. Thereby, as the rank of the selected relay becomes closer to the rank of the best relay (rank = N), the system performs better. In this case, the efficiency of the channel/relay is related to the rank given that the correlation must be high. However, the results of Fig. 9 are absolutely the opposite for the configuration adopted in Fig. 8. We clearly see that the system performs worse as the rank k becomes higher. In fact, this result is expected since the CSIs are completely uncorrelated (low correlation 0.009) and so the rank k has nothing to do with the channel/relay efficiency.
The effect of IBO is illustrated by Fig. 10 which presents the variations of the ergodic capacity for different values of IBO. As we concluded about the effect of IBO on the BER performance in Fig. 6, the impact of IBO is more notable on the capacity performance at high SNR. As the IBO decreases, the channel capacity saturates more especially for IBO = 4 dB and the maximum rate is around 2 bits/s/Hz. However, the level of saturation vanishes for a higher value of IBO equal to 20 dB. For low SNR, the effect of IBO is negligeable and the system operates efficiently. This result is graphically shown by the small difference between the capacities for different values of IBO, especially for an average SNR range less than 15 dB. As the average SNR increases, the IBO essentially contributes to improve the extent of the achievable rate.
VII. CONCLUSION
In this work, we present a system with multiple relays operating at various relaying schemes FG, VGI and VGII.
We assume the opportunistic relay selection to choose a single relay to forward the signal. Moreover, we introduce three models of the hardware impairments SEL, TWTA and SSPA that affect the relays during the power amplification. We quantify the impacts of these imperfections on the system performance in terms of the outage probability, the average BER and the ergodic capacity. We also investigate the effects of the IBO, the number of relays, the rank of the selected relay and the correlation coefficients on the system. We conclude that the impairments have deleterious impacts on the system as the average SNR increases and particularly the TWTA impairment model has the most detrimental effects on the system compared to SSPA and SEL. We also demonstrate that as the number of relays increases, the performance substantially improves mainly the power consumption significantly decreases. Furthermore, we show that the system performs better when selecting the relay with the highest rank simultaneously coupled with higher values of the correlation coefficients. In addition, we prove that the capacity saturates quickly at high SNR when the IBO level is low and grows up infinitely as the IBO takes higher values.
APPENDIX A HIGH SNR APPROXIMATION -FIXED GAIN RELAYING
The end-to-end SNDR γ FG ni is upper bounded by γ u which is given by: The complementary CDF (CCDF) of γ u can be written as follows: The high SNR approximation is nothing but the CDF of γ u , given by F u (γ th ) = 1 − F u (γ th ).
After developing this expression, the CCDF of γ u can be written as the summation of four integrals where each of them has the following general form: As the average SNRs γ 1 and γ 2 grow largely, we can approximate the expression of the integral I by using [58, eq. (25)].
After that, we apply the following identity for every x = 0: After some mathematical manipulation, we finally derive the asymptotic high SNR.
APPENDIX B OUTAGE PROBABILITY DERIVATION -VARIABLE GAIN RELAYING I
It is complex to derive a closed-form of the outage probability of VGI. In this case, we have to derive an approximation of the end-to-end SNDR γ VGI ni .
The approximate outage probability can be written as follows: y) is the joint PDF of the two random variables γ 1(k) and γ 1(k) given by [22, eq. (48)]. After substituting the expression of the joint PDF in Eq. (56), the approximation can be written as the summation of integrals taking the general form as follows: where I r is given by: We can simplify further the expression of I r , we get: Since a closed-form of the integral I r does not exist, we should apply a partial fraction expansion on the argument of the exponential function to get a simpler form of I r .
x + ζ γ th Applying the Maclaurin series over the following term e − γ 2 th ρ 1 br (ar ζ −br ) a 2 r γ 1 (1−ρ 1 ) 2 (br γ th +ar x) and ignoring higher order of γ th γ 1 to get the following approximation Using [58, eq. (11)], the approximate form of the integral I r can be developped as follows: After some mathematical manipulation, we derive the approximation of the outage probability of VGI relaying scheme.
APPENDIX C AVERAGE BIT ERROR RATE -VARIABLE GAIN RELAYING II
After substituting the expression of the outage probability given by (35) in Eq. (38), the resulting integral function is not solvable. In this case, it is practical to provide an approximation of the BER. The first step is to modify the expression of the end-to-end SNDR γ VGII ni as follows: Using the identity given by [51, eq. (6.621. 3)] and after some mathematical manipulation, we finally derive the analytical expression of the approximation of VGII relaying gain.
APPENDIX D ERGODIC CAPACITY -FIXED GAIN RELAYING
To derive the expression of the system capacity, we should substitute the expression of the CCDF of (24) [59, eq. (9)] to solve the integral containing three Meijer G-functions. After some mathematical manipulation, the closed-form of the ergodic capacity is derived in term of bivariate Meijer G-function.
The implementation of the bivariate Meijer G-function in Matlab can be found in [60].
APPENDIX E ERGODIC CAPACITY -VARIABLE GAIN RELAYING II
First of all, we consider an upper bound of the end-to-end SNDR (63). Then we compute the approximate CDF by substituting the new expression of the end-to-end SNDR by applying the identity [ .3)] to evaluate the analytical expression of the integral containing three Fox H-functions. After some mathematical manipulation, the ergodic capacity is derived in term of bivariate Fox H-function.
An efficient implementation of bivariate Fox H-function in Matlab can be found in [62].
|
2017-10-16T17:27:20.158Z
|
2017-10-01T00:00:00.000
|
{
"year": 2017,
"sha1": "98ee1e30b257a96fbcd73507b770a00ec0b430e9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1902.03176",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "98ee1e30b257a96fbcd73507b770a00ec0b430e9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
}
|
9520821
|
pes2o/s2orc
|
v3-fos-license
|
Power and Sample Size Determination for the Group Comparison of Patient-Reported Outcomes with Rasch Family Models
Background Patient-reported outcomes (PRO) that comprise all self-reported measures by the patient are important as endpoint in clinical trials and epidemiological studies. Models from the Item Response Theory (IRT) are increasingly used to analyze these particular outcomes that bring into play a latent variable as these outcomes cannot be directly observed. Preliminary developments have been proposed for sample size and power determination for the comparison of PRO in cross-sectional studies comparing two groups of patients when an IRT model is intended to be used for analysis. The objective of this work was to validate these developments in a large number of situations reflecting real-life studies. Methodology The method to determine the power relies on the characteristics of the latent trait and of the questionnaire (distribution of the items), the difference between the latent variable mean in each group and the variance of this difference estimated using Cramer-Rao bound. Different scenarios were considered to evaluate the impact of the characteristics of the questionnaire and of the variance of the latent trait on performances of the Cramer-Rao method. The power obtained using Cramer-Rao method was compared to simulations. Principal Findings Powers achieved with the Cramer-Rao method were close to powers obtained from simulations when the questionnaire was suitable for the studied population. Nevertheless, we have shown an underestimation of power with the Cramer-Rao method when the questionnaire was less suitable for the population. Besides, the Cramer-Rao method stays valid whatever the values of the variance of the latent trait. Conclusions The Cramer-Rao method is adequate to determine the power of a test of group effect at design stage for two-group comparison studies including patient-reported outcomes in health sciences. At the design stage, the questionnaire used to measure the intended PRO should be carefully chosen in relation to the studied population.
Introduction
Patient-reported outcomes (PRO) are important as endpoint in clinical trials and epidemiological studies. These outcomes comprise all self-reported measures by the patient regarding the patient's health, the disease and its impact, or its treatment. They include health related quality of life, pain, patient satisfaction, psychological well-being, symptoms, treatment adherence/preference,… [1] PRO have first gained importance as secondary endpoints because they can be helpful to evaluate the effects of treatment on patient's life or to study the quality of life of patient along with the disease progression to adapt the patient's care. They can also be used as primary endpoint, especially in chronic diseases such as cancer [2], to compare two standard treatments with comparable survival outcomes or to help decision making.
The deleterious impact of each treatment on patient's quality of life can also be evaluated [3].
The singularity of PRO lies in the fact that the outcome, such as quality of life or wellness, cannot be directly observed. This particular outcome is defined as a latent variable. Generally, a questionnaire is the instrument that indirectly measures the latent variable and the responses of patients to items are further analyzed. Models from the Item Response Theory (IRT) link the probability of an answer to an item with item parameters and a latent variable. This theory has gained importance in Patient-Reported Outcomes area compared to the Classical Test Theory (CTT) where models are based on a score that often sums the responses to the items. IRT has shown advantages such as the management of missing data, the possibility to obtain an interval measure for the latent trait, the comparison of latent traits levels independently of the instrument, the management of possible floor and ceiling effects [4].
With the development of patient-reported outcomes in clinical research, guidelines were edited for construction, validation and administration of questionnaires [5][6][7]. However, the literature presents few references to the design stage. In particular, the sample size requirements when IRT is intended to be used for analysis of PRO seems to lack of theoretical work [8,9]. When PRO are used as primary endpoint in a group comparison study, it is essential at the design stage to correctly determine the sample size to achieve the desired power for detecting a clinically meaningful difference in the future analysis. An inadequate sample size may lead to misleading results and incorrect conclusions. General recommendations on the sample size in the framework of education can be found. It should be highlighted that these recommendations are usually made without any theoretical justification. It is admitted that the sample size has to increase with the complexity of the model [10]: a number of 50 individuals was proposed for the simplest model of IRT, the Rasch model [11], a sample size of 200 respondents for the twoparameter logistic model has been suggested [12] and 500 examinees for the graded-response model [13]. Consequently, publications on health outcomes assessments make generally only few comments on the sample size determination as no analytical formula for the sample size exists.
It has been recently pointed out that the widely-used formula for the comparison of two normally distributed endpoints in two groups of patients was inadequate in the IRT setting [9]. Indeed, the power achieved by the tests of group effects using IRT modeling in a simulation study was lower than the expected power using the formula for normally distributed endpoints. Subsequently, Hardouin et al [14] have proposed a methodology to determine power related to sample size for PRO cross-sectional studies comparing two groups of patients in the framework of the Rasch model. The power determination depends on the difference between the expected means in the two groups (the group effect) and its standard error. The key point of the method is to estimate this standard error using the Cramer-Rao bound. This theoretical approach was first validated by simulation studies in some cases (small variance, appropriate questionnaire for the population under study) that may not reflect what is encountered in practice. Whether the method would perform as well in a large variety of situations often met in clinical and epidemiological studies remains unknown. As a matter of fact, the population of the study can have heterogeneous levels of the latent variable. Moreover, the PRO instrument might be more or less suitable for the population under study. Indeed, the items composing the instrument can be more or less relevant for the intended population of the study. For example, items from a disease-specific questionnaire (such as the QLQ-C30 [15] evaluating the quality of life of cancer patients) can be too difficult in a newly-diagnosed population in the sense that items specific to the disease can almost never be encountered in a population where the disease was recently detected, potentially before most of the symptoms appear. The measures provided by the PRO might not be reliable for all patients and the power could therefore be impacted by the choice of the questionnaire.
The purpose of this study was to validate the Cramer-Rao method for PRO cross-sectional studies comparing two groups of patients using the Rasch model. The impact of the variation of the variance of the latent variable (inter-patient heterogeneity regarding the latent variable) and of the distribution of the item parameters (appropriateness of the questionnaire for the population) on the proposed methodology has been studied by comparing the results of the Cramer-Rao method to the results of a simulation study.
Methods
At the planning stage, the calculation of a sample size is usually based on a statistical test to detect a clinically meaningful effect at desired levels of type I and type II errors. In the case of the comparison of mean levels of PRO measures in two groups of patients, the widely-used formula for the comparison of two normally distributed endpoints may apply [16]. The formula assumes that the two groups are independent and that the variance of the endpoint s 2 is common across the groups. The hypotheses for the two-sided test of comparison are defined as H 0 : m 0~m1 against H 1 : m 0 =m 1 , where m 0 and m 1 are the means of the endpoint in the first group and the second group respectively. The number of patients to be included in the first group N 0 is determined by specifying an expected difference in the means of the PRO measures (m 0 {m 1 ) and the common variance (s 2 ) as well as the type I error (a) and the desired power (1{b) of the test.
where N 1~k N 0 is the number of patients in the second group and z i the i th percentile of the standard normal distribution. If this formula is adequate for manifest variables such as quality of life scores, it seems to incorrectly determine the sample size for latent variables [9] as it doesn't take into account the uncertainty due to the estimation of the latent variable. So, this formula is not adapted for studies intending to use IRT models for the analysis.
Sample Size and Power Determinations in IRT
The rasch model. In IRT, the link between a latent variable, that is the non-directly observable variable that the PRO instrument intends to measure (quality of life for example), and item parameters is modeled. Amongst the large family of IRT models, the Rasch model [17,18] is largely used for dichotomous items in health sciences. It models the probability that a person i answers a response x ij to an item j by a logistic model with two parameters, (i) the value of the latent variable of the person, h and (ii) the item parameter associated with the item j, d j . For a questionnaire composed of J dichotomous items answered by N patients, the Rasch mixed model can be written as follows: where x ij is a realization of the random variable X ij (x ij~0 for the most defavorable response, x ij~1 for the most favorable one). d j is also called the difficulty of item j. As the value of d j increases, the item is more and more difficult which means that patients are less and less likely to answer positively to the item. For example, an item ''Does your health allows you to run an hour?'' will be more difficult than an item ''Does your health allow you to dress yourself?'' if the positive answer is defined as ''yes''. h is a realization of the random variable H, generally assumed to have a gaussian distribution. In this case, the parameters of the Rasch model can be estimated by marginal maximum likelihood (MML) [19]. A constraint has to be adopted to ensure the identifiability of the model. The nullity of the mean of the latent variable (m~0) is often used for this purpose.
Power estimation using cramer-rao bound. In the design of a cross-sectional study for the comparison of two groups of patients in IRT, we are interested in the evaluation of a group effect, c~m 1 {m 0 , defined as the difference between the means of the latent variable in the two groups. Let N 0 and N 1 be the expected sample size in the first group and the second group respectively. To identify the model presented above, the constraint of the nullity of the mean of the latent variable m is adopted. The mean m is the mean between m 0 and m 1 , each of them weighted by the sample sizes N 0 and N 1 . Consequently, Let H be a random variable representing the latent variable in the first and the second group respectively.
The variance of the latent trait s 2 is assumed to be equal in the two groups. The mixed Rasch model including a covariate to estimate a group effect c can be expressed as follows: in the first group and g~N 0 N 0 zN 1 in the second group in order to meet the constraint of identifiability. The sample size determination often relies on the Wald test to assess whether the group effect is significant. The following hypotheses are to be tested, H 0 : c~0 against H 1 : c=0. To perform the test, an estimate C of c and its variance are required.
The test statistic C ffiffiffiffiffiffiffiffiffiffiffiffiffi ffi var(C) p follows a normal distribution N(0,1) under H 0 . At the design stage, Hardouin et al. [14] proposed to use Fisher's information and the Cramer-Rao (CR) boundary property to obtain an analytical formula for the standard error of C. This method takes into account the characteristics of the questionnaire by using the parameters of the items to estimate the variance of the group effect. It also incorporates the uncertainty related to the estimation of the latent trait in the IRT model. At the design stage, the item parameters are set to some planning expected values as well as N 0 , N 1 , c and s 2 . In addition, as the patient's responses are not known, they should be determined. For each possible response patterns (2 J for binary response), the associated probability is computed for each group using the Rasch model, conditionally on the planned values of N 0 , N 1 , c and s 2 . The expected frequency of each response pattern in each group is then determined [14]. The dataset created with the response patterns and their associated expected frequencies is analyzed using a mixed Rasch model including a group effect to estimate the variance of the group effect using CR and the power of the Wald test.
The expected power of the test of the group effect based on the Cramer-Rao bound (CR), 1{b b CR , can be approximated by [14]: with c assumed to take a positive value, z 1{a=2 be the quantile of the standard normal distribution and c var var(ĉ c) evaluated using Cramer-Rao bound.
The whole procedure has been implemented in the free Raschpower module accessible at http://rasch-online.univnantes.fr. This module determines the expected power of the test of the group effect based on the Cramer-Rao bound given the expected values of the sample size in each group (N 0 and N 1 ), the group effect (c), the variance of the latent variable (s 2 ) and the item parameters (d j ) defined by the user.
Simulation Study
To validate the Cramer-Rao method, the power determined with this method was compared to the power obtained by a simulation study, used as a reference.
Generation of data. Responses to J dichotomous items of two groups of patients were simulated using a mixed Rasch model where the latent variable has normal distributions in the first and the second group respectively.
To study the impact of the values of the item difficulties, the distribution of items could vary in two different ways according to the regularity of the spacing of the items and the gap between the mean of the latent variable and the mean of the items distribution. To obtain item difficulties that are quite regularly spaced, their values are set to the percentiles of a determined probability distribution. The normal distribution is used with the same mean and variance as the latent trait distribution. The questionnaire will therefore estimate the patients levels of quality of life with the same accuracy whatever the level of quality of life on the continuum of the latent trait as shown on Figure 1 (subfigure A). To obtain irregularly spaced item difficulties, an equiprobable mixture of two gaussian distributions was used. When the spacing is irregular, the estimates of the patients levels, of quality of life for example, will be more precise when difficulties are close to each other than when they are far apart from each other. We can see on Figure 1 (subfigure B) that the quality of life levels around 21 will be estimated more precisely than quality of life levels between 20.5 and 0.5. The case of irregular spacing of item difficulties is probably more encountered in practice than regular spacing.
The distribution of the items could be centered on the same mean as the latent trait or a gap, D, between the means of the latent trait and the mean of the item difficulties could be simulated. A positive gap is illustrated in Figure 1 (subfigures C and D). The latent variable distribution and the items distribution are then no more overlaid. The most difficult items of the questionnaire (on the right of the distribution) will be too difficult for the population. Hence, a very small part of the patients will respond positively to these items while most of the patients will respond positively to the easiest items (on the left of the distribution) leading to a floor effect. Due to this floor effect, the estimates of the patients levels will be less accurate on the left of the latent trait distribution (for poor levels of quality of life for example). In practice, a floor effect can occur when a disease-specific population answers to a generic questionnaire. For example, patients with serious physical impairment won't be likely to answer positively to physical functioning items such as the ability to walk a block, to run or to climb stairs (example of items from the physical functioning of the generic questionnaire SF-36). On the opposite, a negative gap will lead to a ceiling effect as the items will be too easy for the studied population.
Parameters of the simulation study. The following values of the parameters were used in the simulation study: N The number of individuals was equal in both groups (N 0~N1~N ) and could take the value 50, 100, 200, 300 or 500.
N The group effect (c) was equal to 0, 0.2, 0.5 or 0.8. N The value of the variance of the latent trait (s 2 ) could be 0.25, 1, 4 or 9.
N The number of items (J) was 5 or 10. N The item difficulties could come from a normal distribution N(0zD,s 2 ) (quite regularly spaced) or from an equiprobable mixture of N({szD,(0:3s) 2 ) and N(szD,s 2 ) (irregularly spaced). The global mean of the latent variable was equal to 0. So, the distributions of items and latent traits were overlaid if the mean of item distribution was also equal to 0. The gap, D, was defined as 0 (overlaid distributions), 1s, 2s. As the gap becomes larger, the items distribution departs more and more from the latent traits distribution and floor effect could occur more frequently. In the case of a normal distribution with a null D, the questionnaire is assumed to be appropriate for the population without a floor effect and the items are quite regularly spaced.
The combination of all parameter values lead to 960 different cases. 1000 replications were simulated for each case.
Evaluated criteria. Each simulated dataset was analyzed with a mixed Rasch model including a covariate to estimate the group effect. A Wald test was then performed for assessing the significance of the group effect. For the simulations only, the type I error was estimated as the rate of rejection of the null hypothesis (null group effect) amongst datasets where the group effect was null (c~0). The confidence intervals of the type I error was computed as exact binomial proportion confidence intervals. The power of the test of group effect of the simulations, 1{b b S , was estimated as the rate of significant tests amongst the simulations where the simulated value of c was not null. This result was compared with the estimated power using CR, 1{b b CR (eq. 2), computed with the Raschpower module of Stata [14]. As the estimation of 1{b b CR is based on the estimated value of the standard error of c, a good estimation of the power requires a good estimation of this standard error. Hence, the estimated value of the variance of the group effect in the simulation, d Var Var S , was compared with the estimated variance of the group effect using CR, d Var Var CR .
Results
Estimation of the Variance of the Group Effect Table 1 and table 2 show the estimated variance of group effect obtained either by simulations or using CR for all the values of parameters, for a group effect equals to 0 or 0.2 and 0.5 or 0.8, respectively. The estimations of the variance are close for both methods in general. As expected, the variance of the group effect decreases as N and J increase. Coherently, the variance of the group effect increases with the variance of the latent variable, s 2 . It slightly increases with the value of the group effect.
We note that the estimations of the variance for CR method are larger as compared to the simulation mostly when the gap is high (D~2s) and c=0. The highest overestimated values of the variance for CR are observed for low values of the sample size N and of the number of items J, high values of the latent variable variance s 2 and a normal distribution of the items as compared to a mixture of normal distribution of items.
Type I Error and Power of the Test of Group Effect
For the simulations, the type I error is well maintained to the expected value of 5% in almost all scenarios (results not shown). The type I error fluctuates between 2.6% (J~5, N~500, s 2~1 , D~2s, for a mixture distribution of the item difficulties) and 6.8% (J~10, N~200, s 2~0 :25, D~0, for a mixture distribution of the item difficulties). Amongst the 240 values of the type I error, only 9 confidence intervals at 95% of the estimated type I error don't contain the expected value of 5%. None of the parameters seems to have an impact on the value of the type I error. Table 3 and table 4 present the estimated values of the power obtained either by simulations or using CR for the values of all parameters for a questionnaire composed of 5 and 10 items, respectively. For simulations, the power was estimated by the rate of rejection of the null hypothesis amongst datasets where the group effect was not null (c=0). For all values of the simulation parameters, the estimated powers are close for each method (CR or simulations) when there is no gap. The difference between the powers obtained by simulation and using CR is around 0.003 in average and fluctuates between 20.034 (N~300, J~10, c~0:5, D~0, items normally distributed) and 0.059 (N~50, J~10, c~0:5, D~0, items normally distributed). As expected, the power increases as the sample size (N) and the number of items (J) increase and decreases as the variance of the latent trait (s 2 ) increases. It also increases with the group effect (c).
We observe an impact of the gap between means of the latent variable and item difficulties (D) which is stronger when the gap is high (D~2s). In these cases, the power obtained using CR is lower than the power of simulations. The loss of power is the highest when the variance (s 2 = 4 or s 2 = 9) and the group effect (c~0:5 or c~0:8) are high, the number of items J is low and the distribution of the items is normal as compared to a mixture of normal distribution of items. The loss can exceed 220% in the worst cases. For example, when N~300, c~0:8, s 2~9 , J~5, D~2s and the distribution of items was normal, the estimated power is 83.4% for the simulations and 60.3% for CR.
Discussion
The validity of the method to estimate the standard error of group effect and to determine the power of the test of group effect in IRT using Cramer-Rao bound was investigated for a large number of situations that may be often encountered in practice. The estimated variance of group effect and power obtained using Cramer-Rao were close to the estimations from the simulations when the distributions of the latent variable and the items were overlaid (D~0). As expected, the variance of group effect increased with the variance of the latent variable. This led to a decrease of the power of the test of group effect that does not differ for both methods (Cramer-Rao and simulations). The Cramer-Rao method seems to be still valid for high values of the variance of the latent variable.
However, when the gap between means of the latent variable and item difficulties (D) is high, we observed an inflated estimation of the variance of group effect and consequently a loss of power for CR compared to the simulations. The Cramer-Rao method seems to reach its limits for D~2s and high values of s 2 and c. The impact of an underestimation of the power can have large consequences on the planned sample size. To achieve a power of 80% for a gap equal to 2s when c~0:8 and s 2~9 , the Cramer-Rao method suggests to use N~500 patients per group whereas N~300 patients per group is a sufficient sample size to obtain a power of 83.4% according to the simulations. Hence, in this example, 200 patients in each group would have been unnecessarily included in the study to achieve a power of 80% using the Cramer-Rao method with a gap equals to 2s. So, the choice of a questionnaire appropriate to the population at the design stage is an important issue. For example, the use of a disease-specific questionnaire in general population is not recommended as the population of the study will probably not encounter some of the symptoms strongly related to this disease. Thereby, some items evaluating the symptoms will have only few or no positive responses leading to a floor effect and an incorrect determination of the power with the Cramer-Rao method.
We recommend taking time on the choice of the questionnaire before the study. To evaluate the suitability of a questionnaire, it seems important to first check that the items composing the questionnaire intended to be used are relevant for the population of study. An item is not considered as relevant if the population will answer mainly to one of its modality only and will lead to ceiling or floor effect. When choosing a questionnaire for a study, one has to take into account the characteristics of the population used for its former validation (type of the disease, seriousness of the pathology, …) in order to be suitable enough for the population to be studied.
At the planning stage, the parameters of the distribution of the latent variable and the item parameters have to be fixed. To do so, it is easier to rely on a pilot study or on previous articles for example. Hence, it may be possible to evaluate if a gap between the mean of the latent variable and the mean of the items distribution is likely to occur.
Despite all the precautions taken at the planning stage, a gap can be observed at the analysis stage. Unfortunately, the Cramer-Rao method would have underestimated the power in this case. Consequently, the number of subjects to be included in the study would have been overestimated which raise ethics and financial problems. Given the results, it does not seem reasonable to use the Cramer-Rao method for a gap equals or higher than 2s. In fact, a gap equals to 2 standard deviations seems to already reflect a poorly suitable questionnaire, a generic questionnaire assessing health-related quality of life of a seriously ill population for Table 2. Cont. Table 3. Power estimated in the simulation study (12b b S ) and using the Cramer-Rao's bound (12b b CR ) for different values of the sample size in each group (N~N 0~N1 ), the group effect (c), the variance of the latent variable (s 2 ), the spacing regularity of the items and the gap between the global mean of the latent variable and the mean of the distribution of the item difficulties (D). example. However, the Cramer-Rao performs well in a large number of situations and can handle a moderate gap between the distributions of the latent variable and the items (D~1s). We observed a slight impact between the quite regularly spaced items (normal distribution) and the irregularly spaced items (mixture of normal distributions) on the variance and the power when the gap was high (D~2s). The normal distribution gave higher estimations of variance and so lower power than the mixture of normal distributions. This effect increased with the gap. It could be explained by the fact that, in the way the data were simulated in our study, the items coming from the mixture of distributions covers a wider part of the latent variable distribution as shown in Figure 1. Furthermore, when the latent variable and items distributions are not overlaid (D=0), the easiest item coming from the normal distribution (d 1~1 :03 in Figure 1 (subfigure C) for example) is more on the right of the latent variable distribution than the easiest item coming from the mixture of distributions (d 1~0 :86 in Figure 1 (subfigure D)). Therefore, the floor effect, resulting from the gap, occurs at a lowest level of h for the normal distribution than for the mixture of distributions. And so, the floor effect has more impact on the variance and power obtained using item parameters coming from the normal distribution. As this effect is linked with the simulation process, it can't be interpreted as an impact of the regularity of the items on the performance of the Raschpower method.
Normal distribution Mixture of normal distributions
Beyond the impact of items and variance of the latent trait, the effect of the sample size, the number of items and the group effect were also studied. Their values were chosen to reflect what is frequently encountered in practice in health studies. However, some assumptions had to be made to perform the simulation study. Instead of the Rasch model, another IRT model for dichotomous items could be considered such as the 2-PLM [20] or the OPLM [21]. These models are more complex than the Rasch model in the sense that they include item discriminations in addition to item difficulties. The variance using Cramer-Rao could probably be estimated with the same efficiency by adapting the formula and fixing the item discrimination to known values as made for the item difficulties.
The estimation of the variance and the determination of the power are based on the expected planned values that are fixed. This is usual at the design stage but it can turn out to be problematic if no previous studies can provide some information on the values of the parameters. If the planning values are far from the estimated values in the study at the analysis stage, the variance could be incorrectly estimated and the power for a determined sample size could then not be achieved. It seems important to further study the impact of misspecifications in the choice of the planning values on the performance of the Cramer-Rao method. The robustness of this method when some of the assumptions on the model are violated should also be evaluated to identify settings where the method should or should not be used.
For now, the main limitation of the Cramer-Rao method is that the variance can only be estimated in the frame of Patient-Reported Outcomes evaluated with dichotomous items in a crosssectional setting. Two major developments seem to be necessary to make this method applicable in almost all studies in health sciences. First, the method should be able to deal with polytomous items. The estimation of the variance can be based on the partialcredit model [22] or the rating-scale model [23], which are extensions of the Rasch model for this type of items. The introduction of such models will lead to a more complex procedure of estimation as the number of parameters will increase with the number of modalities of the items. Second, the study of the evolution of a criteria is often of interest in health sciences. Patients' evolution of PRO through time are often evaluated in longitudinal studies. The validity of the Cramer-Rao method in this context has to be studied as the correlated measures of patients Table 4. Power estimated in the simulation study (12b b S ) and using the Cramer-Rao's bound (12b b CR ) for different values of the sample size in each group (N~N 0~N1 ), the group effect (c), the variance of the latent variable (s 2 ), the spacing regularity of the items and the gap between the global mean of the latent variable and the mean of the distribution of the item difficulties (D). bring into play a more complex model than in cross-sectional studies.
Normal distribution Mixture of normal distributions
The estimated variance of group effect and power obtained using Cramer-Rao were close to the estimations from the simulations in most cases. These results show that the variance using Cramer-Rao bound correctly estimates the variance of the group effect. Hence, the Cramer-Rao method can be used to determine the power of the test of group effect at design stage for two-group comparison studies including patient-reported outcomes for many situations in health sciences. The important recommendation is to choose the most appropriate questionnaire for the population. Otherwise, sample size might be misspecified by this methodological approach.
|
2017-08-16T09:25:01.781Z
|
2013-02-28T00:00:00.000
|
{
"year": 2013,
"sha1": "c79cfa6d4cbc4ef97a5299034417a534131e6f8a",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0057279&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c79cfa6d4cbc4ef97a5299034417a534131e6f8a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
3867617
|
pes2o/s2orc
|
v3-fos-license
|
Novel thick-foam ferroelectret with engineered voids for energy harvesting applications
This work reports a novel thick-foam ferroelectret which is designed and engineered for energy harvesting applications. We fabricated this ferroelectret foam by mixing a chemical blowing agent with a polymer solution, then used heat treatment to activate the agent and create voids in the polymer foam. The dimensions of the foam, the density and size of voids can be well controlled in the fabrication process. Therefore, this ferroelectret can be engineered into optimized structure for energy harvesting applications.
Introduction
Ferroelectret is a porous polymer film that can store charges in its internal voids after charging. It is able to convert compressive force into electric pulses, which can be used for both sensing and energy harvesting. Despite being developed into sensing material for more than a decade, ferroelectret has attracted research interest in energy harvesting only in the recent years [1][2][3]. In the past, we demonstrated that multilayer ferroelectret is capable of powering the transmission of a low-power wireless chipset [1], and fabricated prototypes of ferroelectret-powered wearable devices [4]. The ferroelectret we investigated was commercial polypropylene (PP), which was fabricated by stretching the original polyolefin material and expanded into a foam. Hence, it is a thin polymer film with thickness less than 70 µm. This is favourable for sensor and actuator applications, but not for energy harvesting, since one layer of PP ferroelectret is not sufficient to power any electronic chipset [1]. The multilayer structure of ferroelectret can significantly increase the energy output [1,4], but also hugely increases the cost and complexity in manufacturing.
Recently we have developed a novel thick-foam ferroelectret that is specifically designed and engineered for energy harvesting. This material was fabricated by mixing a chemical blowing agent with a polymer solution. The polymer solution can be moulded and cured into different shape and thickness. Thus the ferroelectret foam can be fabricated with thickness from several hundred microns to several millimetres. A schematic diagram of the foam's fabrication comparing to the multilayer ferroelectret is shown in Figure 1. The blowing agent of the ferroelectret was activated at certain temperature and created cellular voids in the structure. The density of the voids was controlled by the concentration of the blowing agent, and the void size was controlled by the heating temperature and the heating time. Based on the model that we developed to predict the piezoelectricity of PDMS ferroelecctret [5], and the eletromechanical model to predict the energy output of ferroelectret [6], we were able to model and engineer the void dimensions of the ferroelectret. This optimized structure was engineered by adjusting the processing parameters during fabrication. Compared to the conventional thin ferroelectrets which are mostly fabricated by stretching or foaming, this moulded ferroelectret can be fabricated in desired dimensions and its void size/density can be engineered. This is useful for optimizing the structure of the ferroelectret for energy harvesting applications, and potentially can be used to produce large-area ferroelectret.
Experimental
Low density Polyethylene (LDPE) was used as the polymer and Azodicarbonamide (ADZ) was used as the blowing agent. LDPE pellets were mixed with 0.2 wt% of ADZ power. They were dissolved in xylene by heat treatment and stirring, and then left to dry for a week so the solution can solidify. In order to fabricate ferroelectrets with dimensions of 45 mm × 45 mm × 0.6 mm in this work, 120 g of solidified polymer was used to fabricate one layer of ferroelectret. The weighted portion was hot pressed in a mould at 130 o C with 3 tons of pressing weight for 1 min. This process can shape the solution into the desired dimensions and remove its bubbles. To activate the blowing agent, the hot pressed samples were heat treated in an oven at 230 o C for different durations of 1, 2, 3 and 4 mins, respectively. In this process, internal voids were created in the samples. Electrodes were deposited on the top and bottom surfaces of the voided samples. The deposition was achieved using a Leybold E-beam Evaporator Lab 600. The deposited layers were 10 nm of Cr then 1 µm of Al. After the deposition, the samples were charged in a self-built needle-to-grid corona charging apparatus, with an electric field of 35 kV for 1 min. The charged samples were then ready to be tested as ferroelectret.
Results and discussion
The piezoelectric coefficient d33 of the ferroeelctret was measured to be 200 pC/N using a Piezoetest PM300 d33 meter. Using a Wayne Kerr 4300 LCR meter to measure the electrical properties, the capacitance of the sample was 38 pF, resistance was 2.6 GΩ. Hence, the dielectric constant of the ferroelectret was calculated to be 1.27, indicating that a large number of voids are presented in the polymer.
During the heat activation, the ADZ blowing agents in the polymer decomposed and released gas. The expanded gas resulted in the void structure in the ferroelectret. A number of factors affects the density and size of the voids. Firstly, the concentration of the blowing agent obviously will determine the density of the void. Higher content of ADZ will create more voids in the sample. However, when the voids are too dense, the size of the void is more difficult to control since the neighboring voids tend to merge into a bigger one. Hence, 0.2 wt% of ADZ content was used in this work. This is not the optimized content of ADZ for energy harvesting, but will create voids that can be easily observed and measured. Secondly, the activation temperature of ADZ is about 200 o C. Heating the samples above this temperature will accelerate the decomposition of ADZ. Thus at the same heating time, higher temperature will result in more gas and larger void size. In this work, heating temperature of 230 o C was used for all the samples so the void size was altered by heating time only. Lastly, increasing the heating time will allow more gas to be released, thus larger void size. This is demonstrated in Figure 2, where the samples were heated for different durations. Voids with diameter of 90 µm was observed in the samples heated for 1 min, 140 µm for 2 mins, 310 µm for 3 mins, and 340 µm for 4 mins, respectively. It also shows that the growing rate of void size decreased as the heating time increased, because the size of the void tended to saturate when the ADZ fully decomposed. From our previous study in modelling the piezoelectric output in ferroelectret, the optimized void size for high piezoelectricity is in the range of 50 to 100 µm [5]. Therefore, it is anticipated that the sample heat treated for 1 min will have stronger piezoelectric response comparing to the one heat treated for 2 mins. This is supported by the result from the experiment where the samples were under trapezoidal function of compressive forces, which has maximum amplitude of 800 N. The function of the force is shown in Figure 3 (a). The output pulses of the ferroelectret samples measured by oscilloscope are shown in Figure 3 (b) & (c). It shows that the maximum output of the sample with void size of 90 µm is 0.46 V and -1 V, the one with void size of 140 µm is 0.26 V and -0.88 V. The voltage output of the former is stronger than the latter, indicating a stronger piezoelectric response.
Conclusion and future work
This paper reports the design and fabrication of a novel ferroelectret material. It is fabricated by mixing a polymer solution with a chemical blowing agent ADZ, then shaped into desired dimensions by casting. The blowing agent is activated by heat treatment and create voids in the polymer. By controlling the parameters in the fabrication process, such as heating temperature and time, the size of the voids can be engineered. Hence, numerical simulation can be used to design an optimal void size, and achieving this size by adjusting the processing parameters. The next step forward will be to improve the energy output of the ferroelectret by varying the ADZ's content and using different types of polymers as medium.
|
2019-04-29T13:15:49.955Z
|
2016-12-14T00:00:00.000
|
{
"year": 2016,
"sha1": "136c5710e669f64543edce18cc02a9a428bf17ed",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/773/1/012030",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ac859420d1486d8f81f8d81ff38f834a014cc8b9",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
}
|
255321845
|
pes2o/s2orc
|
v3-fos-license
|
Forgiveness and Flourishing: The Mediating and Moderating Role of Self-Compassion
(1) Background: This study investigated the relationships between forgiveness, self-compassion, and flourishing, and examined the mediating and moderating role of self-compassion (self-warmth and self-cold) in the relationship between forgiveness and flourishing. (2) Methods: A sample of 300 Polish participants aged 18–57 (M = 23.53 years, SD = 5.82) completed the Heartland Forgiveness Scale, the Self-Compassion Scale, and the Flourishing Scale; we used Spearman’s rho correlations to assess the associations between the main analyzed variables and used PROCESS software to calculate moderation and mediation. (3) Results: The obtained data showed that forgiveness and self-compassion were positively related to flourishing. Self-warmth (positive dimension of self-compassion) mediated and moderated the link between forgiveness and flourishing. In contrast, self-coldness (negative dimension of self-compassion) did not mediate or moderate the association between forgiveness and flourishing. (4) Conclusions: The results suggest that positive resources relate to and support one another. Compassionate self-responding is associated with positive resources; in contrast, uncompassionate self-responding is not significant for positive variables.
Introduction
Achieving well-being is one of the most important human goals. Research into mental health viewed in positive terms leads to better understanding of the indications that build well-being. Flourishing is a positive term, which means to function in a way that is conducive to growth, resilience, and goodness [1]. Flourishing links to both hedonist and eudaemonist components of well-being, and includes concepts such as purpose in life, relationships, self-esteem, feelings of competence, and optimism [2].
Understanding the factors that support flourishing can help design prevention models that reduce negative health symptoms among different groups. Drawing on the existing theory and research in positive psychology, in this study we proposed forgiveness and self-compassion as independent variables and flourishing as an outcome variable.
Forgiveness and Psychological Well-Being
In this paper, forgiveness is conceptualized as a general propensity to forgive, regardless of time, relationships, and situations [3,4]. Forgiveness is also a personality trait which involves prosocial emotions such as love, sympathy, compassion, and/or a reduction in negative emotions such as anger or hostility [4]. Additionally, forgiveness is associated with positive motivation such as benevolence towards a wrongdoer and/or reduced negative motivations, such as avoidance or revenge [5].
One of the theoretical approaches explaining the link between forgiveness and flourishing is the stress-and-coping model of forgiveness [6]. This model is based on the transactional theory of stress developed by Lazarus and Folkman [7]. The model of stressand-coping of forgiveness recommends that coping through forgiveness is one of the more efficient forms stress reduction and positive adaptations to harm [8,9]. 2 of 11 Another model referring to the relationship between forgiveness and well-being is the scaffolding self and social systems model of forgiveness and well-being (4S model) [10]. Forgiveness leads to subjective well-being through relationship harmony, relationship mastery, adaptive identity development, and self-acceptance/self-worth. This model suggests that forgiveness leads to an increase in positive perceptions of self and others. This is consistent with the broaden-and-build-theory [11], which suggests that, for example, the experiences of relief and other positive feelings which come after forgiveness builds other positive resources, leading to psychological well-being.
Empirical work on forgiveness and psychological well-being has showed positive associations between them. Toussaint and Fridman [9] used a number of different tools to measure both forgiveness (Heartland Forgiveness Scale, HFS, and Transgression-Related Interpersonal Motivations Scale, TRIM) and well-being (Satisfaction with Life Scale, SWLS, The Fordyce Happiness Scale, and The Bradburn Affect Balance Scale) in psychotherapy outpatients. Their results showed that, regardless of the tool used, the relationship between forgiveness and well-being was significant and positive. Additionally, a study among Ukrainian war refugees indicated a positive correlation between forgiveness (Decision to Forgive Scale, DTFS), mental well-being (The World Health Organisation Five Well-Being Index, WHO-5), and spiritual well-being [12]. Other studies in more general groups have also confirmed this link [13,14].
Self-Compassion as a Mediator or a Moderator
The relationship between forgiveness and well-being is well established in the literature [9,13]. Continuing to seek mechanisms and mediators to explain this relationship will further understanding. Previous studies have pointed to the mediating role of the affect and beliefs [9] and feeling connected to others [15], between forgiveness and well-being.
Referring to the scaffolding self and social system model of forgiveness [10] and the stress-and-coping model of forgiveness [6], self-compassion can be a potential mediator in a casual pathway between forgiveness and psychological well-being.
Self-compassion is a dynamic system that supports coping with difficult events. It comprises three components: (1) the first of them refers to an emotional response to suffering (with kindness or judgment); (2) the next refers to the cognitive assessment of one's own difficult situation as an experience of common humanity or as isolation; (3) the third component is about perceiving suffering (with mindfulness or over-identification) [16]. The positive components of self-compassion-self-kindness, common humanity, and mindfulnessare described as self-warmth. The negative components of self-compassion-self-judgment, isolation, and over-identification-are referred to as self-coldness [17]. Self-warmth is consistent with positive psychology, and it supports the protective role of self-compassion. The positive subscales of self-compassion related to more positive variables are gratitude, hope, and self-esteem.
On the other hand, uncompassionate self-responding-self-coldness (self-judgment, isolation, and over-identification)-is associated with symptoms of psychopathology such as anxiety disorders, depression, and other mental health problems [17].
Previous studies have found that higher levels of self-compassion are linked to other positive psychological constructs, such as life satisfaction [18], gratitude [19], and resilience [20]. On the other hand, lower levels of self-compassion are related to depression [17,21,22], anxiety [23], and PTSD symptoms [24].
Recent studies have shown that a self-compassionate orientation might help cope with negative situations, emphasizing its buffering role [17]. In the present study, selfcompassion is presented as a variable supporting the relationship between forgiveness and well-being in two possible roles, as a moderator and as a mediator. Self-compassion as a moderator might be an enhancer of well-being, in that it highlights the positive implications of forgiveness. On the other hand, forgiveness as a mediator might increase flourishing by enhancing self-compassion. Self-compassion can be conducive to both the hedonic and the eudemonic aspects of flourishing. The former shows that self-compassion can increase positive emotions and subjective well-being [16], whereas the latter focuses on the supporting role of self-compassion in utilizing adaptive mechanisms in difficult situations [25].
The mediation model hypothesizes that the components of self-compassion mediate the relationship between forgiveness and flourishing. Many studies have focused on the mediating role of self-compassion between negative variables [17,26] or buffering positive outcomes in negative situations [27,28].
Despite the lack of studies where self-compassion mediates the link between forgiveness and well-being, previous studies have reported significant mediation relationships with self-compassion as the mediator of positive resources and well-being as an outcome variable (mindfulness-psychological well-being) [29]. The theory combining forgiveness, self-compassion, and flourishing proposed by Hobfoll, called the resource caravan passageways, indicates that resources travel in packs or caravans and support each other [30]. Combined positive resources lead to positive mental health, and support coping with difficult events [31].
Several studies have examined the moderating role of self-compassion. These studies have found that self-compassion buffers [32] and supports [33] positive human functioning. Self-compassion moderates the link between dietary restraint and emotion-focused impulsivity. This link is weaker for individuals with higher levels of self-compassion [32], pointing to the buffering role of self-compassion. Chen [33], examining the relationship between PsyCap and life satisfaction, found a supporting role in self-compassion among students.
The Present Study
Much of the early research focused on negative links between forgiveness and selfcompassion and negative outcomes, such as anxiety, depression, and PTSD [34,35]. Little research has been conducted to examine the possible underlying impact of these variables on the positive side of life [36]. According to the tenets of positive psychology, fostering positive aspects of functioning (e.g., psychological well-being) is just as important as preventing negative consequences, such as depression, anxiety disorders, etc.
Referring to the scaffolding self and social systems model of forgiveness and wellbeing [10] and the stress-and-coping model of forgiveness [6], both forgiveness and selfcompassion could support flourishing. Forgiveness and self-compassion could also weaken the aftermath of negative events by emotional and cognitive reframing, reducing negative feelings, thoughts, and behaviors. Through forgiveness, individuals who are victims can reformulate negative emotions, thinking, and motivation from negative to neutral or positive [4]. On the other hand, individuals with high self-compassion do not replace negative emotions and thoughts with positive ones; they accept negative events and give new meaning to them [16,25]. This is consistent with both hedonist and eudaemonist theories of psychological well-being.
Based on the reviewed theory and research, we examined whether two dimensions of self-compassion mediated and moderated the association between forgiveness and flourishing. The hypotheses were as follows: (1) forgiveness would inter-relate to increase flourishing through increased self-warmth and decreased self-coldness; (2) the link between forgiveness and flourishing would be stronger with higher self-warmth and lower self-coldness.
Power Analysis
To determine the minimum sample size for the current study, the G*Power version 3.1. Program [37] was used. The sample size required for multiple regression analyses with three independent variables for detecting a medium effect (f 2 = 0.03) with a power of 0.80 and 0.05 level of significance was N = 204 or more. To avoid Type II errors, the bootstrapped samples in the PROCESS macro were set to 5000 at 95% bias-corrected confidence intervals, which was statistically adequate for the number of respondents.
Participants
We used a sample of 300 adult participants from Poland. Female participants accounted for 83.3% (n = 250) of the sample. The subjects' age ranged from 18 to 57 years, with a mean of 23.53 (SD = 5.82). Regarding the level of education, 1% of the sample had completed primary education, 1% had completed vocational education, 41.4% had completed secondary education, 22.7% had a university degree, and 34.2% had graduated from college. The respondents were requested to participate in the study voluntarily-no remuneration was offered to them. Data were collected between October 2021 and February 2022 using an online questionnaire distributed via social networking sites. All respondents provided informed consent online. The responses were anonymous, and the confidentiality of information was assured. Participants were informed about their right to terminate the survey at any time if they wanted.
Forgiveness
Disposition to forgive was measured using the Heartland Forgiveness Scale [4]. The HFS is a multidimensional tool assessing the dispositional forgiveness of self, others, and situations beyond one's control. Participants rate their responses to 18 items on a 7-point scale. Higher scores on each scale reflect higher levels of forgivingness. The total HFS score indicates how forgiving a person tends to be. In this study, we only used the total score. The Cronbach's alpha (internal consistency) for total HFS was 0.85 in this study.
Self-Compassion
Self-compassion is typically assessed using the Self-Compassion Scale (SCS) [38]. The original SCS has 26 items measuring six components of self-compassion in two dimensions. The first dimension of self-warmth includes self-kindness, common humanity, and mindfulness. The second dimension of self-coldness includes self-judgement, isolation, and over-identification. Items are rated on a 5-point scale ranging from 1 (almost never) to 5 (almost always). Test-retest reliability was established as good for the overall scale (r = 0.87, p < 0.01, Cronbach's alpha = 0.93), as well as the subscales (Cronbach's alpha = 0.80-0.89).
Flourishing
Flourishing was measured with a brief 8-item Flourishing Scale (FS). The range of scores is from 8 to 56, where higher scores mean a higher level of psychological wellbeing [2]. The FS has demonstrated good validity in different cultures. The Cronbach's alpha coefficient was 0.91 in this study.
Data Analysis
Before the beginning of the main analysis, data were screened for potential errors in the expected range of values and for any indicators of careless answers. We used the Mahalanobis distance to evaluate the outliers [39]. All results fulfilled the criteria. All observations (N = 300) were included in the main statistical analysis. The studies were completed online, which avoided deficiencies. Incomplete answers were not included in the data file. We used Spearman's rho correlations to assess the associations between the main analyzed variables: forgiveness, self-compassion, and flourishing. We used IBM SPSS software (version 26, PS IMAGO PRO 6.0, Predictive Solutions) and employed regressionbased analysis to directly test the proposed moderating model and the mediating model using PROCESS software [40]. Self-compassion was both a moderator and a mediator. Model 1 (moderating analysis) and 4 (mediating analysis) were estimated using PROCESS with 5000 bootstrap samples and 95% bias-corrected bootstrap intervals for all indirect effects. For all data, the hypothesis of a normal distribution of the measurement results was tested using the Kolmogorov-Smirnov test; the results showed that the data were not normally distributed.
Preliminary Analyses
The results of the correlational calculations demonstrated that most of them were statistically significant (Table 1). We used Spearman's rho to calculate correlations. Forgiveness was positively and significantly correlated with flourishing and self-compassion (total score, self-kindness, common humanity, mindfulness, and self-warmth), and inversely correlated with four subscales of self-compassion: self-judgment, isolation, over-identification, and self-coldness. Flourishing was positively and significantly correlated with three subscales of self-compassion-self-kindness, common humanity, and mindfulness-and negatively and significantly correlated with self-judgement, isolation, and over-identification.
Mediational Analyses
To examine whether the two dimensions of self-compassion mediated the association between forgiveness and flourishing with age and gender as covariants, we used a multiple mediation model (Model 4 in PROCESS). All outcomes were standardized. Forgiveness was linked to all mediators-it showed a positive correlation with self-warmth (β = 0.64, p < 0.001), and an inverse correlation with self-coldness (β = −0.66, p < 0.001) (Figure 1). Only one mediator was significantly related to flourishing-self-warmth (β = 0.41, p < 0.001). The indirect effect (IE) of forgiveness on flourishing via self-warmth and self-coldness was found to be significant, because the 95% confidence interval did not include zero (β = 0.26 CI95% [0.157, 0.363]). However, only self-warmth was a significant mediator (IE, β = 0.26; CI95% [0.189, 0.344]). The indirect effect accounted for 50.49%.
Moderating Analysis
To test whether self-compassion (two dimensions: self-warmth and self-coldness) moderated the relationship between forgiveness and flourishing, we used a one-model moderation analysis in PROCESS. Age and gender were using as covariants. All outcomes were standardized.
Moderating Analysis
To test whether self-compassion (two dimensions: self-warmth and self-coldness) moderated the relationship between forgiveness and flourishing, we used a one-model moderation analysis in PROCESS. Age and gender were using as covariants. All outcomes were standardized.
The analysis revealed a significant interaction of forgiveness and self-warmth with flourishing (∆R 2 = 0. 37
Discussion
In this study, mediation and moderation models have been proposed, in which selfcompassion mediates and moderates the relationship between forgiveness and flourishing. This study investigated the differences in mediation and moderation between two dimensions of self-compassion: self-warmth and self-coldness.
The results obtained here partially support the hypothesis that forgiveness will interrelate with increased flourishing through increased self-warmth and decreased self-coldness. Our data showed a stronger tendency to forgive, inter-related with higher levels of flourishing via higher levels of self-warmth. Forgiveness, especially self-forgiveness, is understood as a manifestation of a positive attitude towards oneself even when one has been disappointed in oneself and others, which is consistent with self-warmth. Therefore, forgiveness with self-warmth had a strong effect on well-being.
Our findings correspond with previous research showing that the positive relationship between forgiveness and well-being is mediated by other positive variables [15]. For example, Bono et al. [15] found closeness to be a mediator between forgiveness and wellbeing in psychology students. Forgiveness was measured as a negative motivation (avoidance and revenge) and positive motivation (benevolence), and well-being was measured as a subjective assessment of life satisfaction, which is consistent with the hedonistic approach. Higher benevolence and higher closeness resulted in evaluating life as satisfying. On the other hand, self-warmth was a mediating association between mindfulness and personal recovery [41].
Interestingly, in the current study, self-coldness was not a significant mediator be-
Discussion
In this study, mediation and moderation models have been proposed, in which selfcompassion mediates and moderates the relationship between forgiveness and flourishing. This study investigated the differences in mediation and moderation between two dimensions of self-compassion: self-warmth and self-coldness.
The results obtained here partially support the hypothesis that forgiveness will inter-relate with increased flourishing through increased self-warmth and decreased selfcoldness. Our data showed a stronger tendency to forgive, inter-related with higher levels of flourishing via higher levels of self-warmth. Forgiveness, especially self-forgiveness, is understood as a manifestation of a positive attitude towards oneself even when one has been disappointed in oneself and others, which is consistent with self-warmth. Therefore, forgiveness with self-warmth had a strong effect on well-being.
Our findings correspond with previous research showing that the positive relationship between forgiveness and well-being is mediated by other positive variables [15]. For example, Bono et al. [15] found closeness to be a mediator between forgiveness and well-being in psychology students. Forgiveness was measured as a negative motivation (avoidance and revenge) and positive motivation (benevolence), and well-being was measured as a subjective assessment of life satisfaction, which is consistent with the hedonistic approach.
Higher benevolence and higher closeness resulted in evaluating life as satisfying. On the other hand, self-warmth was a mediating association between mindfulness and personal recovery [41].
Interestingly, in the current study, self-coldness was not a significant mediator between forgiveness and well-being. Previous research has indicated the significant indirect effects of self-coldness on mental health (e.g., depression) [17]. Brophy et al. [17] also found that self-coldness mediated the association between attachment and depression, and that it had a stronger effect than self-warmth. Additionally, Lu et al. [42] found that self-coldness was a stronger mediator in the relationship between stigma and two variables-depressive symptoms and demoralization-in hemodialysis patients. Possibly, self-coldness as a negative dimension of self-compassion is a stronger predictor of psychopathology than selfwarmth. In the present study, positive variables were used; thus, a stronger effect is shown by the positive dimension of self-compassion. Similar results were found by Mak et al. [41], where self-warmth, but not self-coldness, was a mediator between two positive variablesmindfulness and personal recovery. Self-warmth as an emotional regulation strategy [38] can reduce negative emotions, such as negative behavior towards a wrongdoer of an offence, which can lead to increased well-being. On the other hand, forgiveness is the letting go of negative emotions towards a wrongdoer and showing benevolence oneself [43]. Thus, there is a strengthening of self-warmth and a psychological well-being.
Our second hypothesis about self-warmth and self-coldness functioning as moderators between forgiveness and flourishing was partially corroborated. Self-warmth, but not selfcoldness, was a moderator. Regardless of the level of self-warmth (low, medium, or high), forgiveness was positively associated with flourishing. In other words, exhibiting warmth to oneself helps people increase flourishing by forgiveness. The moderation outcomes regarding forgiveness and well-being are consistent with previous studies revealing associations between forgiveness and well-being, including the hedonistic approach and the eudaimonia theory [13,14]. The result is also supported by the suggestion that the positive dimension of self-compassion can strengthen the link between forgiveness and well-being. This is consistent with compassion-focused therapy, which assumes that capacities of warmth and care towards oneself enhance well-being [44]. Additionally, Gilbert [44] proposed that self-compassion affects well-being by activating the social-safeness neurological system and deactivating the threat-defense system. Furthermore, self-compassion training decreases sympathetic nervous system reactivity and enhances adaptive parasympathetic activity [45].
In contrast, the negative dimension of self-compassion did not moderate the association between forgiveness and flourishing. Self-coldness may be a more effective moderator in the context of negative indicators of mental health. Our results are supported by previous studies showing that self-warmth and self-coldness have different interaction mechanisms [17]. Compassionate self-responding-self-warmth, including self-kindness, common humanity, and mindfulness-fosters the link between positive resources. On the other hand, uncompassionate self-responding-self-coldness (self-judgment, isolation, and over-identification)-is associated with symptoms of psychopathology [46,47]. This argument is supported by the Conservation of Resources theory and the resources caravan theory [30,31]. The resources support each other; they travel in caravans, not in isolation; and they are associated with other resources. The loss of some resources causes the loss of the next resources. Similarly, strong, positive resources foster growth in other positive resources.
Previous research has focused particularly on the mediating role of self-compassion for negative variables, such as depressive symptoms [17,21,22,42], suicidal risk [48], anxiety [49,50], and personality disorders [51,52]. Our results support the few previous studies focusing on positive variables [41], showing that self-compassion plays an important role in mental health and well-being. This explains that the application of methods such as compassion-focused therapy [44] or mindful self-compassion [53] can not only be em-ployed in the treatment of disorders, but also as a method of prevention or reinforcement of positive aspects of mental health.
The inter-relationships between forgiveness, self-compassion, and flourishing can be interpreted in light of the scaffolding self and social systems model of forgiveness and well-being (4S model) [10]. According to this model, forgiveness of oneself and others should entail stronger positive attitudes to oneself and others, such as self-acceptance and self-esteem, and leads to enhanced well-being. Tendency to forgive fosters kindness towards oneself by perceiving oneself as a moral person.
There are limitations to the present research that warrant attention. Firstly, only selfreporting tools were used, and all tools measured trends, not the present state. Future studies should include observer-rated variables or tools which measure variables as states (and not only traits). Secondly, due to the cross-sectional design, no causal inference can be made. Longitudinal designs or experiments in future studies should be used to confirm this causality. Thirdly, this study was based on data from a small sample. Future investigations could utilize a more heterogeneous group in terms of age, culture, clinical problems, etc. Next, this study only concerned positive aspects of mental health. This may have limited determining the mediating role of self-compassion. The design of future research should consist of both aspects of mental health-positive well-being and negative well-being, such as depression, anxiety, or feelings of stress. This study is one of the first to focus on the mediating role of self-compassion between forgiveness and flourishing; therefore, further research on this issue is necessary to better understand this mechanism.
Finally, the current investigation could be replicated with other variables controlled which may also mediate or moderate the relationship between forgiveness and flourishing/well-being.
Conclusions
The relationship between forgiveness and psychological well-being is well-documented. However, studies are exploring the underlying mechanisms of this relationship. The presented outcomes show that self-warmth, not self-coldness, is a mechanism (variable) which could explain the inter-relation between forgiveness and flourishing. Additionally, this conclusion is important in the context of previous studies which concern the mediating and moderating role of self-compassion, such as earlier studies including negative symptoms in mental health [27]. Our findings suggest that self-warmth is a more effective mediator and moderator between positive variables. In contrast, self-coldness has a stronger effect than self-warmth in negative variables, which resulted from previous data [17]. These results highlight differences between the dimensions of self-compassion. Self-warmth as a compassionate self-response supports the development of other resources, which buffer mental health.
Additionally, in practice, when designing positive interventions, the supporting role of self-warmth can be used to strengthen other positive resources such as forgiveness and flourishing.
|
2023-01-01T16:19:26.570Z
|
2022-12-30T00:00:00.000
|
{
"year": 2022,
"sha1": "2c43476e6d06e56a6ca037ecdc3120e082d7b62d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/20/1/666/pdf?version=1672388882",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7d5808367770e949d3856180983f82acd9742707",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
270376923
|
pes2o/s2orc
|
v3-fos-license
|
Surgery combined with photodynamic therapy for the case of perifolliculitis capitis abscedens et suffodiens: A case report
Perifolliculitis capitis abscedens et suffodiens, also known as dissecting cellulitis of the scalp, is a rare, chronic suppurative, inflammatory head hair follicle disease, which seriously affects the patient’s quality of life. Clearly, clinical treatment varies widely and is somewhat challenging. We report a case of a 19-year-old male patient who had good results through surgery combined with photodynamic therapy. Surgery combined with photodynamic therapy for perifolliculitis capitis abscedens et suffodiens is effective and safe, especially for patients with poor responses to previous traditional treatments.
Introduction
Perifolliculitis capitis abscedens et suffodiens (PCAS), also known as dissecting cellulitis of the scalp, is a chronic suppurative, inflammatory head hair follicle disease that manifests as fluctuating painful nodules of the scalp, abscesses, pus, sinuses, scarring, and cicatricial alopecia. 1,2Unknown etiology, similar to hidradenitis suppurativa, acne conglobata, including follicular hyperkeratosis, follicular duct obstruction, dilation and rupture, bacterial overgrowth, followed by neutrophils, histiocytes, lymphocytes, plasma cell infiltration and granulomatous inflammatory response, scarring, sinus tract formation, and later fibrosis. 3,4The clinical treatment is challenging, and the practice is quite different.The commonly used treatment methods include oral antibiotics, isotretinoin, hormones and other drugs, TNFα inhibitors, X-rays, lasers, and surgical resections, all of which have varying degrees of effect, but there are drug resistance, recurrence, or systemic side effects in all treatments. 5In recent years, the use of 5-aminolevulinic acid photodynamic therapy (ALA-PDT) has good therapeutic effect on patients with poor drug therapy for PCAS. 6,7We report a case of successful treatment of PCAS with surgery combined with photodynamic therapy.intraoperative bleeding.The scalp is turned over and fixed on the surrounding normal scalp with sutures (Figure 2).The bleeding point is electrocoagulated.After checking that there is no active bleeding, a 20% concentration of 5-ALA solution is prepared.The 5-ALA was incubated on the inside of the scalp, protected from light for 3 h, and then irradiated with a narrow-band red light (633 ± 10 nm, LED-IB, Wuhan Yage photoelectric technology Co., Ltd, China), and part of the sinus area was irradiated with optical fiber for 30 min, with an energy intensity of 100 mw/cm 2 (Figure 3), after the end of photodynamic therapy, the wound was washed again with normal saline, the scalp was turned over to its original position, a negative pressure drainage tube was placed, and local intermittent sutures were fixed.The drainage tube was removed 48 h after the operation.After the operation, the patient only experienced tolerable pain, the pain did not exceed 48 h, and there was no other adverse reaction.Three months after the operation, the patient's abscess and nodules improved significantly (Figure 4), and he was very satisfied with the treatment results.
Discussion
PCAS is a rare chronic, inflammatory disease of head hair follicles of unknown etiology.The cause of the disease is unknown, and it has certain genetic factors.At present, it is believed that this disease is related to hyperkeratosis of hair follicles and pilosebaceous unit occlusion, followed by dilatation and rupture of hair follicle ducts, and the release of its contents, such as keratin and bacteria, leading to the emergence of neutrophilic and granulomatous inflammatory reactions, the formation of abscesses, etc. 3,4 The pathological manifestations are characterized by folliculitis and perifollicular inflammation, accompanied by the infiltration of dense lymphocytes, histiocytes, and multinucleated cells, the formation of abscesses, and granulomatous inflammation in the deep dermis and subcutaneous adipose layer, etc., resulting in follicular sebaceous units and skin appendages damages, leads to a series of clinical manifestations such as nodules, cysts, sinus tracts, and fibrosis. 2,3It can occur alone or as part of follicular occlusion tetrad, namely PCAS, acne conglobata, hidradenitis suppurativa, and pilonidal sinus. 8he treatment of PCAS is challenging, and the clinical treatment varies greatly, causing varying degrees of distress to both doctors and patients. 3Oral isotretinoin is a clinically used treatment method, but the therapeutic effect is limited and there is a certain recurrence rate. 8Considering that PCAS and hidradenitis suppurativa have a similar pathogenesis, in patients with hidradenitis suppurativa in the skin, elevated levels of TNF-α, IL-1β, and IL-10 can be detected. 3,8NF-α inhibitors are used clinically, such as adalimumab, infliximab, etc., and the treatment of PCAS has improved to varying degrees. 9,10Oral zinc preparations can reduce NF-kB-mediated inflammatory response, 4 other treatments include oral antibiotics, including doxycycline, minocycline, clindamycin, rifampicin, dapsone, etc. alone or in combination with isotretinoin treatment, etc., all have different degrees of improvement, but they are prone to relapse after drug withdrawal, 8,11 X-ray irradiation, ND:YAG laser, photodynamic therapy, surgery, and other methods have also been used in the treatment of PCAS. 5 5-ALA is the precursor of porphyrin and other photosensitizers.Under the irradiation of a specific excitation light source, it can generate reactive oxygen species and other substances, and play a photodynamic effect. 6][14] At the same time, red light can also stimulate matrix metalloproteinases, promote collagen reconstruction, and promote skin lesion repair and healing.In view of the fact that PCAS has similar pathophysiological processes with acne and hidradenitis suppurativa, it provides a certain theoretical basis for the treatment of PCAS.Ye et al. 7 first used ALA-PDT to treat refractory PCAS and achieved good results.Yuxin et al. 15 and Hao et al. 6 used pyonex and 22G needles to perform acupuncture on the head lesions to facilitate drainage of secretions and the penetration of ALA (Shanghai Fudan Zhangjiang Bio-Pharmaceutical Co. Ltd.Shanghai, China), and photodynamic therapy after drug application has achieved good therapeutic effect on patients whose previous traditional drug treatment was not effective.
Cui et al. 1 combined with photodynamic therapy for PCAS after flipping the scalp through skin surgery had a good therapeutic effect without obvious adverse reactions.Considering the limited penetration depth of ALA-PDT, in patients with PCAS, the smallest protruding nodule can reach a depth of 8 mm. 3 In our case, the necrotic tissue was fully and completely removed through surgery, and the scalp was turned over and fixed.The application of ALA and light irradiation makes photodynamic therapy more precise, effective, and direct, achieving a faster and more obvious therapeutic effect.
Each case of PCAS is different, and individualized treatment depends on the severity of the condition and the patient's willingness to treat, ranging from topical application of benzoyl peroxide, clindamycin, isotretinoin, oral steroids, antibiotics, retinoic acid drugs, intralesional injection of hormones into biological agents, lasers, X-rays, photodynamics, and surgery, 5,16 through single or combined treatment so that patients can achieve a better therapeutic effect, surgery combined ALA-PDT is suitable for more serious patients, 4 and it is effective for patients who have not responded well to previous oral medications, offering quick and notable results.
Figure 1 .
Figure 1.Abscesses and nodules can be seen merging on the head to form sinus tracts and sulcus-like structures.
Figure 2 .
Figure 2. The scalp is fixed, the infected wound is exposed for easy application of medicine, and high-energy red light is irradiated.
Figure 3 .
Figure 3. Photodynamic therapy (20% concentration of 5-aminolevulinic acid solution (Shanghai Fudan Zhangjiang Bio-Pharmaceutical Co. Ltd.Shanghai, China)), applied to the inner side of the scalp, protected from light for 3 h, then 633 ± 10 nm Large spot irradiation, part of the sinus area combined with optical fiber irradiation for 30 min, the energy intensity is 100 mw/cm 2 .
Figure 4 .
Figure 4. Three months after surgery combined with 5-aminolevulinic acid photodynamic therapy treatment.
|
2024-06-12T05:09:46.204Z
|
2024-01-01T00:00:00.000
|
{
"year": 2024,
"sha1": "43bdef851ef2151df410d7400614c279c3a21b11",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "43bdef851ef2151df410d7400614c279c3a21b11",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
259073563
|
pes2o/s2orc
|
v3-fos-license
|
Social genomics, cognition, and well-being during the COVID-19 pandemic
INTRODUCTION: Adverse psychosocial exposure is associated with increased proinflammatory gene expression and reduced type-1 interferon gene expression, a profile known as the conserved transcriptional response to adversity (CTRA). Little is known about CTRA activity in the context of cognitive impairment, although chronic inflammatory activation has been posited as one mechanism contributing to late-life cognitive decline. METHODS: We studied 171 community-dwelling older adults from the Wake Forest Alzheimer’s Disease Research Center who answered questions via a telephone questionnaire battery about their perceived stress, loneliness, well-being, and impact of COVID-19 on their life, and who provided a self-collected dried blood spot sample. Of those, 148 had adequate samples for mRNA analysis, and 143 were included in the final analysis, which including participants adjudicated as having normal cognition (NC, n = 91) or mild cognitive impairment (MCI, n = 52) were included in the analysis. Mixed effect linear models were used to quantify associations between psychosocial variables and CTRA gene expression. RESULTS: In both NC and MCI groups, eudaimonic well-being (typically associated with a sense of purpose) was inversely associated with CTRA gene expression whereas hedonic well-being (typically associated with pleasure seeking) was positively associated. In participants with NC, coping through social support was associated with lower CTRA gene expression, whereas coping by distraction and reframing was associated with higher CTRA gene expression. CTRA gene expression was not related to coping strategies for participants with MCI, or to either loneliness or perceived stress in either group. DISCUSSION: Eudaimonic and hedonic well-being remain important correlates of molecular markers of stress, even in people with MCI. However, prodromal cognitive decline appears to moderate the significance of coping strategies as a correlate of CTRA gene expression. These results suggest that MCI can selectively alter biobehavioral interactions in ways that could potentially affect the rate of future cognitive decline and may serve as targets for future intervention efforts.
Introduction
Alzheimer's disease (AD) is a progressive neurodegenerative condition that is a common cause of both mild cognitive impairment (MCI) and dementia. 1,2 Psychosocial risk and resiliency factors can modulate the rate of subsequent cognitive decline in the context of developing neuropathologic changes in the brain, and many of these factors may be modifiable. 3 Key to developing interventions to harness such resilience effects is identifying the specific psychosocial processes that impact the biology of cognitive decline.
Research in those with normal cognition (NC) has identified some psychobiological pathways through which psychosocial factors may impact biological processes relevant to cognitive function. One such molecular pathway is the conserved transcriptional response to adversity (CTRA). 4 The CTRA is a pattern of leukocyte gene expression that has been observed across species (i.e., conserved) in response to a host of adverse social conditions. The CTRA transcriptional pattern involves an increase in expression of proinflammatory genes (e.g., IL1B, IL6, TNF) and a decrease in the expression of type I interferon response genes (e.g., IFI-, OAS-and MX-family genes) in response to fight-or-flight stress signaling from the sympathetic nervous system (SNS). A proposed evolutionary explanation for such a response is the necessity to pivot to an anti-microbial, and away from the anti-viral, state of the immune system during periods of acute threat. 5 Under ancestral conditions, activation of the SNS would have predicted an increased risk of wound-associated bacterial infections and thus support wound healing. In modern conditions, chronic low-level threat produces chronic activation of pro-inflammatory genes, which can contribute to the pathogenesis of a host of common chronic conditions such as neurodegenerative disorders such as Alzheimer's disease and related dementias, 6 cardiovascular disease, and neoplastic disorders 7,8 .This chronic, lowgrade inflammation also increases with age ("inflammaging") and the CTRA is one mechanism through which conditions of chronic psychosocial adversity can alter underlying biology and potentially accelerate inflammaging-related disease processes. 9 However, to date, few studies have evaluated CTRA risk or resilience processes in the context of cognitive aging.
The CTRA was first identified in the context of loneliness, 10 and subsequent work linked these effects to reduced levels of eudaimonic well-being, or a sense of purpose and meaning in life. 11 Eudaimonic well-being is distinguishable from hedonic well-being, which is the summation of positive affective experiences in a person's life. 12 In the context of cognitive aging, Boyle and colleagues in the Rush Memory and Aging Project (MAP) have previously linked a sense of purpose in life to variations in cognitive aging: 13 participants with a greater sense of purpose in life were found to have a lower risk of both AD and MCI over a 7-year follow-up period. The biological mechanism through which a sense of purpose relates to cognitive decline is not yet known. However, given the potential role of inflammatory biology in cognitive aging, the CTRA-associated inflammatory biology may represent one mechanism through which psychosocial factors (i.e., a sense of purpose) could affect either risk or resiliency to cognitive decline. psychosocial correlates of CTRA gene expression may be similar to or different between those with normal cognition and those with mild cognitive impairment.
Participants
All participants were previously enrolled in the Alzheimer's Disease Clinical Core (Clinical Core) cohort of the Wake Forest Alzheimer's Disease Research Center (WF ADRC) and underwent standardized evaluations in accordance with the National Alzheimer's Coordinating Center (NACC) protocol for data collection which meets Uniformed Data Set (UDS) requirements. Specific inclusion and exclusion criteria for the Clinical Core are described elsewhere. 14 Cognitive adjudication at yearly Clinical Core study visits using clinical and cognitive assessment and brain MRI provides a cognitive diagnosis of NC, MCI, or dementia. For the purposes of this study, only participants with NC or MCI were eligible for study inclusion. The determination of mild cognitive impairment was made using clinical criteria according to NACC guidelines as described by Petersen and Morris. 15 To address early pandemic concerns of in-person study visits, both questionnaire responses and dried blood spot collection were designed to be collected remotely. All study procedures were approved by the Wake Forest Baptist Health Institutional Review Board. Written informed consent was obtained for all participants and/or their legally authorized representative. Questionnaires were administered via telephone between February 15 and July 21, 2021. Dried blood spot collection occurred a median of 8 days (range: -40 to 136 days) following the questionnaire completion.
Perceived Stress
The 10-item Perceived Stress Scale (PSS) is a questionnaire that is an index of an individual's perception of stress over the past month. 16 Stress was operationalized as finding one's life unpredictable and uncontrollable, and feeling overloaded. The questions were answered on a Likert scale that ranged from "never" (0) to "very often" (4) after reversing the four positively stated questions. Individual items are summed to produce a total score and showed good internal reliability (α = 0.85). Higher scores reflect higher levels of perceived stress.
Loneliness
Loneliness was assessed using the UCLA Loneliness Scale, Version 3. 17 Participants rate statements to describe how often they feel the way described, ranging from "never" (0) to "often" (4). There were twenty statements, and nine were reverse coded according to standard instructions. A total score was computed with higher scores indicating greater feelings of loneliness and showed good internal reliability (α = 0.85).
14 factors of coping along a Likert scale ranging from "I have not been doing this at all" (0) to "I have been doing this a lot" (3). We added 6 questions related to positive distraction. 19 Consistent with prior research, 20 we performed a parallel factor analysis which identified a 3factor solution of Support (comprised of emotional support, instrumental support, and active coping items), Distraction and Reframing (comprised of positive distraction, positive reframing, and self-distraction), and Blame and Disengagement (comprised of self-blame and behavioral disengagement). Non-participating factors were denial, substance use, venting, planning, humor, religion, and acceptance.
Hedonic and Eudaimonic Well-Being
The Mental Health Continuum -Short Form (MHC-SF) 12 is a 14-item questionnaire derived from a 40-item questionnaire. 21 The MHC-SF was designed to measure hedonic and psychological well-being as conceptualized by Ryff and social well-being as conceptualized by Keyes. 22,23 Respondents were asked to answer questions about the degree to which they've felt a given way over the past month ranging from "never" (0) to "every day (5). Three questions were summed for hedonic well-being (HWB), five for social well-being (SWB), and six for psychological well-being (PWB). SWB and PWB together make up eudaimonic well-being (EWB). The overall internal reliability of the MHC-SF was good (α = 0.89).
COVID-19 Specific Experiences
Two questionnaires were given to assess the impact of COVID-19 on participants. We administered by telephone the participant version of the COVID-19 Impact Survey, version 1 24 from the NACC that was initially released in the Summer of 2020. This questionnaire captured information regarding COVID-19 exposure, medical consequences, impact on psychosocial factors, and perceived cognitive, psychiatric, and behavioral consequences. We supplemented this questionnaire with questions from the Questionnaire for Assessing the Impact of COVID-19 Pandemic and Accompanying Mitigation Efforts on Older Adults (QAICPOA). 25 The QAICPOA was developed specifically to assess the impact of COVID-19 on older adults and includes questions regarding diagnosis, symptoms, actions taken because of the pandemic, changes in contact and communication. Some questions included in these forms were discrete dates (e.g., dates of COVID diagnosis), others were yes/no responses to questions regarding types of care accessed. For the purposes of this study, three questions were included in the analysis, all from the COVID-19 Impact Survey. Questions 7 (worry about COVID-19 infection/reinfection), 8 (isolated or cut off from family and friends due to COVID-19), and 9 (disruption to everyday life due to . Each of these questions were answered on a five point Likert scale ranging from "not at all" (one) to extremely (five).
Dried Blood Spot Collection
After questionnaires were collected, participants were mailed a remote collection kit for selfcollection of dried blood spots. Training materials were adapted for use in our cohort from Allen and colleagues. 26 Participants were sent all necessary materials and a printed instruction booklet with instructions on specimen collection. Blood spots were placed directly on a standardized filter paper commonly used for neonatal screening (Whatman #903, GE Healthcare, Piscataway, NJ). Pictorial examples of both good and bad dried blood spot for use under a CC0 license. This article is a US Government work. It is not subject to copyright under 17 USC 105 and is also made available (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity.
The copyright holder for this preprint this version posted June 7, 2023. ; https://doi.org/10.1101/2023.05.31.23290618 doi: medRxiv preprint collection were provided. Participants were instructed to allow the collection card to sit for four hours to dry, then place the folded collection card, along with a humidity detector and silica gel packs, in a provided gas permeable bag and return to the WF ADRC in a provided return envelope.
Measurement of Gene Expression
Dried blood spot were stored at -80°C at the WF ADRC and then shipped as a single batch on dry ice to the UCLA Social Genomics Core Laboratory for transcriptome-wide RNA profiling and CTRA gene expression analyses as previously described. 27,28 Briefly, RNA was extracted (Qiagen RNeasy), converted into cDNA using a high-efficiency mRNA-targeted reverse transcription system (Lexogen QuantSeq 3' FWD), and sequenced on an Illumina NovaSeq instrument in the UCLA Neuroscience Genomics Core Laboratory, all following the manufacturers' standard protocols for this workflow. Sequencing targeted >10 million single stranded 100-nt reads per sample (achieved median = 17.3 million), each of which was mapped to the GRCh38 reference human transcriptome using the STAR aligner (median 83% mapping rate), and quantified as gene transcripts per million total mapped reads with expression values floored at 1 transcript-per-million to suppress spurious low-range variability, log2-transformed to stabilize level-dependent variance within gene, and z-score transformed to stabilize variance across genes. Among 171 assayed samples, routine postassay data quality screening identified 7 samples with insufficient RNA sequencing reads (< 5 million), 8 additional samples with poor read mapping rates (< 70%), and 6 additional samples with poor signal-to-noise ratios (average profile correlation with other samples: r < .50), leaving a total of 150 valid RNA profiles available for analyses of CTRA. This 88% valid data yield is consistent with previous research involving genome-wide transcriptional profiling of dried blood spot samples. 27,28
Statistical Analysis
As in previous dried blood spot CTRA studies, we used linear mixed effect models to analyze average expression of a pre-specified set of CTRA indicator gene transcripts as a function of psychosocial risk and resilience factors while controlling for covariates. Analyses focused on a pre-specified set of 53 CTRA indicator genes used in previous research, 4 (GBP1, IFI16, IFI27, IFI27L2, IFI35, IFI44, IFI44L, IFI6, IFIH1, IFIT1-IFIT3, IFIT1B, IFIT5, IFITM1-IFITM3, IRF2, IRF7, IRF8, JCHAIN, MX1-MX2, OAS1-OAS3, OASL), and 10 of which were removed due to minimal expression levels or variation (SD < .5 log2 expression units ; FOSL1, IFI27L1, IFI30, IFITM4P, IFITM5, IFNB1, IGLL1, IGLL3P, ILA1, IL6). Genespecific z-score signs were reversed for the antiviral gene set to reflect its inverse contribution to the CTRA profile. 4 Mixed models were estimated by maximum likelihood (SAS PROC MIXED) and specified fixed effects of indicator gene (repeated measure), cognitive status (normal vs mild cognitive impairment), psychosocial risk/resilience factors, a cognitive status x psychosocial factor interaction term (testing for differences in CTRA association as a function of cognitive status), and covariates (age, sex, race, BMI, history of regular smoking, and history of regular heavy alcohol consumption); a random effect of study participant; and a fully saturated (unstructured) variance-covariance matrix to account for for use under a CC0 license.
This article is a US Government work. It is not subject to copyright under 17 USC 105 and is also made available (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity.
The copyright holder for this preprint this version posted June 7, 2023. ; https://doi.org/10.1101/2023.05.31.23290618 doi: medRxiv preprint residual heteroscedasticity and correlation across participants. In the event of a significant cognitive status x psychosocial factor interaction, additional follow-up "simple slopes" analyses quantified the association of psychosocial factors with CTRA gene expression nested within cognitive status group.
Demographics and Cognitive Status
A total of 171 participants provided dried blood spot samples (106 NC, 58 MCI, 1 Dementia, 6 Other/NA) with 148 of those samples (87%) yielding valid RNA data. Participants without diagnoses of NC or MCI were excluded from further analysis (4), and 1 participant completed dried blood spot without questionnaires and was excluded from further analysis, yielding a final analytic sample of 143 participants: 91 with normal cognition and 52 with MCI. The mean age of our group was 72.9 +/-8.04 years, 16% were Black, and 69% were female, and 18% were treated with a beta-blocker. Participant demographic characteristics are summarized in Table 1. Less than 4% of data were missing for all variables analyzed.
Cognitive Impairment and the Psychosocial Correlates of CTRA
To determine how cognitive impairment might affect the relationship between psychosocial factors and CTRA gene expression, we compared the relation of CTRA gene expression to psychosocial risk factors (stress, loneliness), two distinct domains of well-being (hedonic and eudaimonic well-being), and three distinct domains of coping (blame and disengagement, distraction and reframing, and social support) for NC and MCI groups while controlling for covariates.
CTRA gene expression also varied significantly as a function of the three major dimensions of coping in this sample (F(3, 126) = 7.22, p < 0.001; Table 2 Model 3, Figure 1). However, we detected significant interactions between cognitive status and the brief-COPE as it relates to CTRA gene expression (F(3, 123) = 9.07, p < 0.001). Among those with normal cognitive function, coping through social support was associated with lower CTRA gene expression (-0.075 ± 0.017, p < 0.001) whereas coping by distraction / reframing was associated with higher CTRA gene expression (+0.086 ± 0.018, p < 0.001). Among those with MCI, coping by blame or disengagement was associated with a lower CTRA gene expression (-0.077 ± 0.018, p < 0.001).
CTRA gene expression was significantly associated in a model containing both psychosocial risk factors (perceived stress, loneliness) and dimensions of well-being (eudaimonic, for use under a CC0 license. This article is a US Government work. It is not subject to copyright under 17 USC 105 and is also made available (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity.
A model including both eudaimonic well-being and the three coping factors was significantly associated with CTRA gene expression; F(3, 123) = 7.01, p < 0.001 (Table 2, Model 5). In this model, there was a significant inverse correlation between CTRA gene expression and eudaimonic well-being (-0.03 ± 0.012, p = 0.013) with no effect modification of cognitive diagnosis. However, we again detected significant interactions between cognitive status and the brief-COPE in relation to CTRA gene expression (F(3, 119) = 10.71, p < 0.001). Among those with normal cognitive function, coping through social support was associated with a lower CTRA gene expression (-0.076 ± 0.017, p < 0.001) whereas coping by distraction / reframing was associated with a higher CTRA gene expression (+0.091 ± 0.017, p < 0.001). Among those with MCI, coping with blame or disengagement was associated with a lower CTRA gene expression (-0.080 ± 0.018, p < 0.001). There was no significant correlation between eudaimonic well-being and these coping factors.
Several COVID-19-related factors were associated with significant differences in CTRA gene expression, none of which differed by cognitive diagnosis (F(3, 124) = 3.20, p = 0.026). A past diagnosis of COVID-19, either confirmed or suspected, was associated with a lower CTRA gene expression (-0.144 ± 0.058, p = 0.014). Of note, only 6 out of 143 (4%) of our participants reported a past COVID diagnosis, and the timing of past infection was not documented. Participants who reported feeling isolated due to COVID-19 had a lower CTRA gene expression (-0.027 ± 0.011, p = 0.019), and those who reported higher distress had a higher CTRA gene expression (+0.030 ± 0.018, p = 0.019). Degree of worry about COVID-19 was not significantly associated with CTRA gene expression (+0.024 ± 0.015, p = 0.105).
Discussion -1522 need to remove 22 words…
Our analysis of genome regulation in the context of the COVID-19 pandemic during the period of general social distancing documented distinctive transcriptional correlates of wellbeing and dimensions of coping. In both MCI and NC, these data are consistent with previous research in identifying an inverse association of CTRA gene expression with eudaimonic well-being. For the NC group, CTRA gene expression was also inversely associated with coping through social support, but directly (unfavorably) associated with coping by distraction and reframing. By contrast, CTRA gene expression was not associated with either of those coping dimensions for individuals with MCI. The patterns of similar and distinct associations for MCI vs NC suggests that broad experiences of psychological and social well-being remain centrally relevant to biobehavioral function in the context of MCI, whereas more specific dimensions of self-management and coping may become less relevant to individual biobehavioral function in the context of MCI as individuals come to depend more on others to help support activities of daily life and cope with challenge, and thus less predominately dependent on their own cognitive processes and coping responses.
High levels of loneliness have been shown to be associated with an upregulation of CTRA gene expression. 32 However, loneliness did not predict CTRA profile in our cohort. It is for use under a CC0 license. This article is a US Government work. It is not subject to copyright under 17 USC 105 and is also made available (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity.
The copyright holder for this preprint this version posted June 7, 2023. ; https://doi.org/10.1101/2023.05.31.23290618 doi: medRxiv preprint possible that the relatively low loneliness scores among our participants was below a threshold at which an effect would be seen. One previous study found that eudaimonic wellbeing had a stronger relationship with CTRA gene expression than did loneliness when both variables are considered simultaneously, suggesting that the two variables' effects may stem from their common involvement in social well-being. 33 In a model containing both eudaimonic well-being an loneliness, we found that eudaimonic retained a significant inverse association with CTRA gene expression but a counterintuitive inverse relationship between loneliness and CTRA gene expression appeared after the shared variance between these two variables of interest was accounted for, a finding that will need to be explored in future work. Previous studies of the association of eudaimonic and hedonic well-being with CTRA profiles have demonstrated similar findings to that of our cohort. In a study of 84 healthy adults aged 35 -64, eudaimonic well-being was associated with downregulated CTRA gene expression, while hedonic well-being was associated with CTRA upregulation. 30 Wyman and colleagues 34 recently reported on the association between psychological wellbeing as measured in the NIH Toolbox on Emotion and found that, in a race-stratified analysis, Black and American Indian/Alaskan Native participants reported lower life satisfaction than White participants, but similar scores on positive affect, meaning in life, and purpose in life. Measures of executive functions, but not episodic memory, were higher in those with higher life satisfaction scores. Psychological well-being is a multi-dimensional construct, and includes evaluative well-being related to evaluations made about life, hedonic well-being or pleasures and satisfaction from life, and eudaimonic well-being or a sense of greater purpose in life. 35 Subjective clinical complaints associated with MCI, such as memory concerns, are predictive of reduced psychological well-being in individuals. 36 Additionally, neuropsychiatric symptoms in MCI are associated with increased risk of incident dementia, independent of prior functional or cognitive status. 37 It is possible that subjective clinical complaints and neuropsychiatric symptoms associated with MCI lead to the observed reduction in eudaimonic well-being among MCI participants in this study. Interventions targeting subjective self-reported health and emotional factors related to well-being have the potential to improve eudaimonic well-being and reduce the associated upregulation in CTRA profile.
The default mode network, an intrinsic connectivity network that is central to AD pathogenesis, 38 has been implicated in the neural correlates of loneliness and a sense of meaning and purpose in life (a component of eudaimonic well-being). 39 Internetwork connectivity was more dense and less modular between the default mode network, and the frontoparietal, attention, and perceptual networks in lonely individuals. Conversely, a greater sense of meaning in life was associated with an increase in modularity between the default mode and limbic networks. This work suggests that the default mode network is a central hub that is involved in shifting between states through differences in modularity and integration with the frontoparietal and limbic networks. These data suggest that the benefits of eudaimonic well-being translate to older populations with normal cognition as well as those with MCI. Interventions shown to improve eudaimonic well-being and reduce CTRA gene expression, such as mindfulness meditation practices, 31 might be investigated in future studies with NC and MCI individuals to gauge improvements in eudaimonic well-being and corresponding CTRA profiles.
Strategies used for coping with psychosocial stress in MCI have been assessed in prior work, though this is the first study to evaluate its molecular correlates in gene expression. Coin and colleagues 40 assessed coping strategies in people living with MCI and dementia. This study utilized the COPE, but used a priori categories of coping, including five scales of problem-for use under a CC0 license. This article is a US Government work. It is not subject to copyright under 17 USC 105 and is also made available (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity.
The copyright holder for this preprint this version posted June 7, 2023. ; https://doi.org/10.1101/2023.05.31.23290618 doi: medRxiv preprint focused coping, five scales of emotion-focused coping, and three scales of coping strategies sometimes considered less healthy (venting of emotions, behavioral disengagement, and mental disengagement). They found that individuals with greater cognitive impairment had poorer coping strategies. This association remained present even after adjusting for prepandemic depression, suggesting that less efficient coping strategies may have exposed those with a greater degree of cognitive impairment to more psychosocial stress related to pandemic-related social distancing. Murukesu and colleagues 41 assessed coping strategies using the brief-COPE in older adults with cognitive frailty during the period of restricted movement, travel, and assembly in Malaysia which looked at well-being and coping between two groups in a randomized control trial of a multi-domain intervention to address cognitively frailty. They found that older adults with cognitive frailty used religion, acceptance, and positive reframing (i.e., active coping) and self-blame, denial, and substance use (i.e., avoidant coping) were the least common. Our study adds to this literature in defining the molecular correlates of coping in the context of immune cell gene expression. We also did not utilize a priori assumptions of coping strategies and rather relied on a datadriven approach to evaluate coping in our cohort. In our sample, we found that only those with normal cognition demonstrated a reduction in CTRA gene expression with the use of social support. At the same time, among those with normal cognition, coping based on distraction and reframing was associated with elevated CTRA gene expression. One possibility that is beyond the scope of our data to answer is that, as individuals develop prodromal cognitive decline (i.e., MCI), self-appraised coping strategies may become less clearly associated with actual coping strategies. A similar pattern is seen in the self-appraisal of cognitive impairment, where those with MCI demonstrate a progressive underappreciation of their own cognitive deficits. 42 Given that possibility, it becomes all the more remarkable that eudaimonic well-being remains an important correlate of molecular well-being for adults with MCI as well as for adults with normal cognition, and that consummatory sources of hedonic wellbeing remain a risk even in the context of MCI. Again, this pattern could potentially reflect that persons with MCI may be "outsourcing" their coping to their caregivers/support network, and thus their own psychological reactions bear little relationship to their CTRA biology whereas their engagement with others (social well-being) is the primary psychosocial source of biological resilience.
Several issues limit the interpretation of the present results. In this sample, people with cognitive impairment were both older and lonelier than those without cognitive impairment, and this range-restriction could have contributed to the lack of association observed for loneliness and CTRA gene expression among those with MCI. The present results come from data collected during the COVID-19 pandemic and associated social distancing protocols, and it is unclear whether similar results would be obtained in other settings and social conditions. Our MCI sample was smaller than our NC sample, potentially leading to asymmetric power across subgroups. Because our data come from a single regional context, it is unclear whether our findings would hold true across all individuals with MCI, and future work should focus on larger and more broadly representative samples.
Despite these limitations, our work demonstrates several important findings. This is the first study to demonstrate the similar transcriptional correlates of eudaimonic vs. hedonic wellbeing in individuals with MCI compared to NC individuals. It is well-established that individuals with greater eudaimonic well-being (i.e., a sense of purpose in life) demonstrate a reduced CTRA gene expression profile, 30,31,33 and past work has found a significant reduction in the risk of AD and MCI associated with a greater sense of purpose in life. 13 The findings here suggest one potential mechanism through which this psychological resiliency factor may for use under a CC0 license.
This article is a US Government work. It is not subject to copyright under 17 USC 105 and is also made available (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity.
The copyright holder for this preprint this version posted June 7, 2023. function, mediating a lower inflammatory burden and protecting against the "inflammaging" that has been proposed to contribute to the AD neuropathological cascade.
for use under a CC0 license.
This article is a US Government work. It is not subject to copyright under 17 USC 105 and is also made available (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. This article is a US Government work. It is not subject to copyright under 17 USC 105 and is also made available (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. This article is a US Government work. It is not subject to copyright under 17 USC 105 and is also made available (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. F i g u r e 1 . I n t e r a c t i o n s b e t w e e n c o g n i t i v e d i a g n o s i s a n d w e l l -b e i n g , c o p i n g f a c t o r s . F o r e s t p l o t d e m o n s t r a t i n g t h e s t r e n g t h o f a s s o c i a t i o n ( b ± S E ) b e t w e e n t h e i n d i c a t e d p r e d i c t o r v a r i a b l e a n d t h e 5 3 -g e n e C T R A c o n t r a s t s c o r e f o r t w o d i m e n s i o n s o f w e l l -b e i n g a n d t h r e e c o p i n g f a c t o r s . for use under a CC0 license.
This article is a US Government work. It is not subject to copyright under 17 USC 105 and is also made available (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. This article is a US Government work. It is not subject to copyright under 17 USC 105 and is also made available (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity.
The copyright holder for this preprint this version posted June 7, 2023. ; T a b l e 2 C T R A r e l a t i o n s h i p t o w e l l -b e i n g , s t r e s s , l o n e l i n e s s , a n d c o p i n g f a c t o for use under a CC0 license. This article is a US Government work. It is not subject to copyright under 17 USC 105 and is also made available (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. This article is a US Government work. It is not subject to copyright under 17 USC 105 and is also made available (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity.
|
2023-06-06T01:31:54.528Z
|
2023-06-05T00:00:00.000
|
{
"year": 2023,
"sha1": "9b523d3bf1488eb50da5118250041c2d94e0b768",
"oa_license": "CC0",
"oa_url": "https://www.medrxiv.org/content/medrxiv/early/2023/06/05/2023.05.31.23290618.full.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "5c63ac3a8ace7feb8195baa714e7f0220e04d74c",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
254174929
|
pes2o/s2orc
|
v3-fos-license
|
Relationship between English Speaking Performance and Foreign Language Anxiety in Online Peer Learning
This study investigated the relationship between English speaking performance and foreign language anxiety before and after an online peer learning program at the college level. A total of 59 students enrolled in a one-semester English Speaking Communication course at a university in central Taiwan participated in the study. The course entailed a 7-week online peer learning program. The participants took a computer-based speaking proficiency test and completed the Foreign Language Classroom Anxiety Scale questionnaire before and after the program. The collected quantitative data were analyzed using descriptive statistics and the Pearson product-moment correlation coefficient. The results revealed that the participants experienced a moderate level of anxiety when taking the computer-based speaking test before and after attending the online peer learning program. A negative correlation was observed between foreign language anxiety and computer-based speaking performance before the online peer learning program. However, after the online peer learning program, a positive correlation was noted between anxiety and speaking performance. On the basis of these findings, this study provides pedagogical suggestions for second-language practitioners.
Online learning
The use of technology profoundly influences language learning (Aydin, 2018). The COVID-19 pandemic has resulted in online learning replacing traditional classroom lecturing worldwide (Russell, 2020;Kusuma et al., 2022). When language instructors struggle to apply technology in language teaching, learners may lose the motivation to learn online; they may even lose their selfdiscipline, thus taking no additional responsibility to self-learn (Russell & Murphy-Judy, 2020; Aydin, 2018). Bárkányi and Melchor-Couto (2017) conducted a study on foreign language anxiety among learners in a massive open online language course. They reported that the learners felt insecure when recording themselves but were not particularly anxious when using audio materials; moreover, the learners felt more comfortable when shielded by their computer during speaking activities, and most of them preferred computer-mediated communication interactions.
Scholars are increasingly focusing on the effect of technology-enhanced instruction on L2 achievement. Martin and Valdivia (2017) indicated that e-learning and e-teaching are aimed at influencing the construction of knowledge with respect to learners' individual experiences or knowledge. Yu et al. (2010, as cited in Martin & Valdivia, 2017) revealed that low achievers could benefit from webbased instructional learning. Kusuma et al. (2022) noted that although the literature demonstrates that e-portfolios can successfully improve the speaking performance of students pursuing English as a foreign language, it does not provide evidence of the effectiveness of e-portfolios in reducing foreign language anxiety, especially in an online speaking course. They thus investigated the effect of e-portfolios on students' foreign language anxiety and found that students' anxiety decreased significantly when using e-portfolios as a collection of video clips in online speaking courses. White (2014) hypothesized that language anxiety could be reduced using information communication technology and computerassisted language learning (CALL) and concluded that a CALL program does reduce anxiety. For example, students tend to feel less anxious when asked to speak English without preparation.
Anxiety and online peer learning
Learners with high anxiety may prefer taking online courses to seek security because they consider such courses as a chance to avoid engaging in speaking activities or interacting with others (Pitchette, 2009, as cited in Russell, 2020). However, the use of online learning as a tool requires learners to interact with peers or instructors. Learners are then forced to use English to respond online; hence, the anxiety increases because of the lack of sufficient time to prepare or seek help when necessary (Russell & Murphy-Judy, 2020; Alla et al., 2020). Therefore, whether e-learning can help reduce language anxiety is a critical issue worth exploring. The use of technology fosters autonomous learning and reduces students' anxiety toward the use of technology because students consider technology as a tool helping their learning process (Huang, 2002, as cited in Martin & Valdivia, 2017). Distance language learning may not be regarded as a natural means of communication, leading to an increase in students' cognitive effort as well as communication ambiguity. Moreover, distance learning may negatively influence learners' affective domain (Alla et al., 2020). Therefore, distance learning may induce high levels of anxiety (Jegede & Kirkwood, 1994
Speaking and anxiety
Anxiety influences the psychological learning process of both low-and high-proficiency students. Speaking anxiety tends to influence language learning, particularly performance ability. Studies using the Foreign Language Classroom Anxiety Scale (FLCAS) have revealed a moderately negative correlation between foreign language classroom anxiety and achievement among secondlanguage learners. This implies that with increasing anxiety, students tend to avoid learning the target language and focus on learning a second language instead (Hasibuan & Irzawati, 2019). Several studies have demonstrated that language anxiety in speaking courses may negatively affect learning aspects such as language acquisition; however, others have suggested that anxiety can help increase learners' ability to master a foreign language (Julianingsih, 2018) as well as increase learners' motivation to perform well in their language learning process (Pamungkas, 2018). Research on the relationship between foreign language anxiety and speaking performance before and after an online peer learning program is limited. Accordingly, to fill this research gap, the present study investigated the relationship between students' language anxiety and speaking performance in the context of online peer learning. The study was conducted to address the following research questions: 1) What are students' anxiety levels when attempting a computer-based speaking exam before and after attending an online peer learning program? 2) Is there any relationship between students' foreign language speaking ability and speaking exam anxiety before and after attending an online peer learning program?
Participants
As senior English majors, a total of 59 Taiwanese students who enrolled in English Communication Skills at a university in central Taiwan participated in this study. Among them, there were more female students (44) than male ones (15). They had learned English for more than 12 years prior to the study. In accordance with the Common European Framework of Reference for Languages (CEFR), their English proficiency levels were situated between A1 and B2: 4 students at A1, 22 at A2, 31 at B1, and 2 at B2. Moreover, the course above was compulsory and lasted 36 hours 18 weeks, i.e., two hours per week.
Instrument
The 33-item FLCAS, a self-report questionnaire that measures learners' anxiety in the foreign language classroom, was used. The FLCAS items are rated on a 5-point Likert scale with anchors ranging from strongly disagree (1) to strongly agree (5). According to Horwtiz (1986), the FLCAS was determined to have high internal consistency (Cronbach's alpha = .93) and high test-retest test reliability (.83; p = .001). To prevent misunderstandings, a Chinese version of the FLCAS was used in the present study.
Procedure
English Communication Skills, a one-semester required course, entailed a 7-week online peer learning program aimed at enhancing students' speaking skills. The participants first completed the FLCAS questionnaire and took a mock TOEIC Bridge Speaking Test at the beginning of the course. Subsequently, they participated in the 7-week online peer learning program. Upon completing the program, the FLCAS questionnaire and a mock TOEIC Bridge Speaking Test were again conducted in the lab, and the students' audio files were uploaded on the learning platform called ee-class. Two native-English speakers then scored these files on the basis of the criteria for the TOEIC Bridge Speaking Test.
Data analysis
This study applied SPSS software to analyze the collected data regarding the participants' speaking scores and their FLCAS scores before and after the program. The Pearson product-moment correlation coefficient was used to explain the linearity of variables in the relationship between language anxiety and speaking performance.
Results and Discussion
Descriptive statistics were used to describe the FLCAS data. The mean score of all participants was 3.08 (standard deviation [SD] = .53) before the online peer learning program, indicating that the participants had a moderate level of anxiety when dealing with the speaking test. According to Horwitz (2013), a mean FLCAS score of approximately 3 indicates that respondents are slightly anxious, and a mean score of <3 indicates that respondents are not very anxious, and a mean score of ≥4 indicates that respondents are highly anxious (Pamungkas, 2018). To examine the students' speaking proficiency, this study implemented a computer-based mock TOEIC Bridge Speaking Test; on the basis of scores provided by two raters, the participants' mean speaking score was determined to be 84.73 (SD = 3.85), indicating that the participants had a high-intermediate level of speaking proficiency (Table 1). To examine the relationship between foreign language anxiety and speaking performance before the online peer learning program, the Pearson product-moment correlation coefficient was used. The results revealed a negative correlation (r = −.057) between speaking anxiety and speaking performance. This implies that higher speaking scores were associated with lower levels of foreign language anxiety (and vice versa). However, the correlation coefficient between these two variables was not significant (p > .05; Table 2). These findings are similar to those of previous studies assessing the relationship between these two variables (Hasibuan & Irzawati, 2019;Huang, 2018;Liu, 2006;Rofida, 2021;Tien, 2018). The results revealed that the students' speaking performance was enhanced after the online peer learning program (M = 84.97, SD = 5.61), and the mean score of speaking anxiety was slightly lower than that observed before the program (M = 3.05, SD = .61). This indicates that online peer learning can facilitate students' speaking performance and reduce their language learning anxiety. This finding is consistent with those of previous studies (Chang, 2020; Yeh & Lai, 2019). The study demonstrated a significant positive correlation between foreign language anxiety and speaking performance after the program (r = .330, p < .05). This implies that higher speaking scores were associated with a higher level of foreign language anxiety (and vice versa; Table 4). These findings are not consistent with those of previous studies (Hasibuan & Irzawati, 2019;Huang, 2018;Liu, 2006;Rofida, 2021;Tien, 2018). Howitz et al. (1986) indicated that anxiety could be classified into two categories: facilitating and debilitating anxiety. Facilitating anxiety, representing a certain level of anxiety, can actually help improve learners' language performance. Debilitating anxiety, representing excessive levels of anxiety or no anxiety at all, may diminish language performance. The results of this study suggested that after attending the online peer learning program, some students exhibited higher performance when they experienced facilitating anxiety.
Conclusion
This study investigated the relationship between language anxiety and English speaking performance before and after an online peer learning program at the college level. The study compared the students' performance and anxiety levels before and after attending the online peer learning program, revealing that the students had moderate levels of language anxiety at both times. After attending the online peer learning program, the students exhibited enhanced speaking performance, and their mean anxiety score was slightly lower than that before the program. In line with existing research ( Tien, 2018), the present study revealed a negative correlation between foreign language anxiety and speaking performance before the online peer learning. However, the negative coefficient value between these two variables was not significant.
After the online peer learning program, the participants' speaking performance was enhanced, and their anxiety was slightly reduced. Foreign language anxiety and speaking performance showed a significant positive correlation, indicating that facilitating anxiety could be a minor factor positively influencing the students' speaking performance.
Although this study has shown a negative correlation between speaking performance and foreign language anxiety before the online peer learning program but a positive correlation after the online peer learning program, some limitations should be addressed by future research. Firstly, the participants only included college students, and the study scope in the present study only focused on the relationship between language anxiety and speaking performance using an online peer learning program. The authors suggest that future studies explore the relationship between language anxiety and speaking performance in different learning environments, among different kinds of learners, and using different online learning tools. Furthermore, the present study did not examine the effect of online peer learning programs on students' foreign language anxiety as well as speaking performance.
In order to comprehensively examine the relationship between anxiety and foreign language speaking performance, qualitative methods such as open-ended questionnaires, interviews, and learning logs or quasi-experimental design of the study should be employed as participant responses can give the researcher an insight into the issue above.
|
2022-12-03T17:20:42.589Z
|
2022-11-26T00:00:00.000
|
{
"year": 2022,
"sha1": "4c5499f3953a8a5fc3d2d035fb6401c9096fcca0",
"oa_license": null,
"oa_url": "https://al-kindipublisher.com/index.php/jeltal/article/download/4395/3694",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "241ad4472b6a04db4b7f07b018e621c09bdbc308",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
}
|
237156090
|
pes2o/s2orc
|
v3-fos-license
|
Salt-Resistive Photothermal Materials and Microstructures for Interfacial Solar Desalination
Solar interfacial evaporation, featured by high energy transfer efficiency, low cost, and environmental compatibility, has been widely regarded as a promising technology for solar desalination. However, the interplay between energy transfer and water transport in the same channels suggests that the tradeoff between high efficiency and long-term stability inherently exists in conventional photothermal nanomaterials. We summarize state-of-the-art research on various anti-salt clogging photothermal microstructures as long-term stable interfacial solar evaporators for solar desalination. The review starts with an overview of the current status and the fundamental limit of photothermal materials for solar desalination. Four representative strategies are analyzed in detail with the most recent experimental demonstrations, including fluid convection enhancement, surface wettability engineering, energy-mass-path decoupling, and surface chemistry engineering. Finally, this article focuses on the challenges in anti-salt clogging solar interfacial evaporators and potential point-of-use applications in the future.
INTRODUCTION
The shortage of freshwater resources has become one of the most severe global challenges nowadays (Shannon et al., 2008;Elimelech and Phillip, 2011;Cetrulo et al., 2019). To address the current situation of global water scarcity, widespread concerns have been garnered about freshwater extraction from seawater (Dobson et al., 2002;Gude, 2011). Solar desalination is an environment-friendly water treatment technology without the additional consumption of fossil energy (Kalogirou, 1997;Abdel-Rehim and Lasheen, 2005;Qiblawey and Banat, 2008). In recent years, a promising route, interfacial solar vapor generation, has been proposed and developed to promote the photothermal conversion efficiency of liquid-gas phase transition of water, dramatically improving vapor production (Tao et al., 2018;Zhou et al., 2019;Vaartstra et al., 2020). The performance of solar vapor generation can be significantly enhanced by floating the absorber or evaporator at the water-air interface to evaporate the water at a relatively localized photothermal area (the liquid/air interface), which is therefore named interfacial solar vapor generation (Ghasemi et al., 2014;Wang et al., 2014). The rapid development of solar water treatment technologies has thus been essentially improved.
However, most of the proposed interfacial solar evaporators have been more or less suffering from the rapid decay of energy transfer efficiency and poor stability during the long-term operation of solar desalination Zhang Q. et al., 2020). Therefore, it is vital to inhibit salt deposition to maintain long-term solar vaporization. Traditional methods are rinsing and washing in the post-treatment process (Finnerty et al., 2017;Jin et al., 2018;Zhang et al., 2018;Xiao et al., 2019;Zhu et al., 2019). However, they introduced extra treatment costs and merely worked for washable and flexible membrane absorbers. In addition, by logically segregating the lightabsorbing surface from the surface of salt precipitation, rationally designed absorbers can precisely monitor the position of salt precipitation to allow long-lasting, highperformance, and ZLD solar desalination Zeng et al., 2019;Zhang et al., 2020b;Wu et al., 2020;Xia et al., 2020;Zhuang et al., 2020). However, the imperfections of the above methods are that the threat of salt accumulation on the absorber surface to light absorption and permeability can interrupt the generation of stable and high-efficiency steam in long-term desalination and highly concentrated brine.
Aiming to solve or alleviate the salt clogging issue, the researchers have developed a couple of effective strategies to develop stable solar desalination devices with anti-salt clogging properties. We evaluate the state-of-the-art solar interfacial desalination as shown in Figure 1. It illustrates that the reported anti-clogging mechanisms for solar interfacial systems can be categorized into four major groups: 1) fluid convection can dilute the formed high concentration brine in the absorbers during desalination; 2) Hydrophobic designs can prohibit the contact of salt ions with the solar absorber; 3) Thermal engineering by re-radiating infrared photons can entirely avoid fouling via the physical separation from the brine; 4) Donnan that can prevent ions upward movement by creating the double electrode layer.
ENHANCED FLUID CONVECTION: MULTISCALE HIERARCHICAL STRUCTURES
In the past years, various research groups have suggested that the primary issue that accounts for the salt accumulation is the emergence of the saturated salt solution on the solar absorber surface during the highly efficient interfacial solar desalination process. Therefore, it is of tremendous importance but rather challenging for rationally designing structures capable of highflex ion diffusion. A variety of concepts have been proposed and demonstrated experimentally, including multi-scale precise manipulation of the structures of solar evaporators across the microscale to the macroscopic scale. One of the most representative strategies, reported by Chen group , is the regulation of micro/macro structures because of the rapid diffusion of ions through the micro/macro channels He et al., 2019;Liu et al., 2019;Wang C. et al., 2020;Zhu et al., 2017). As shown in Figure 2A, in this saltrejecting evaporator proposed by Chen et al., the hydrophilic white fabric is composed of microstructured pores and channels and artificially confined in macroscale narrow gaps among thermal insulation foams, which can finally wick the underlying water into the absorber layer. At the same time, the highly concentrated salt solution can be effectively diffused back into the remaining water. When the intriguing design integrated with a cost-effective polymer film condensation cover, the overall system can produce clean water at a rate of 2.5 L m 2 per day at a total system cost of about $3 m 2 , which is sufficient to meet individual drinking requirements without energy framework while cheaper than conventional solar stills by order of magnitude, ideal for water-stressed and disasterridden populations with affordable drinking water. Similarly, inspired by banyan, a hierarchical evaporator consisting of an activated carbon-cotton fabric, a polyester pillar, and an expandable polyethylene foam was suggested by Zhang et al. (Zhang et al., 2020c). A banyan tree in nature grows with several aerial prop roots to meet the growth of plants, growing eventually into the soil for additional water availability. By modifying the amount of PPs during desalination, the suggested hierarchical evaporator was shown to exhibit anti-salt clogging ability.
In addition, macro-pores of solar absorbers can also be utilized as efficient water channels to prevent salt deposition through enhanced fluid convection. Porous wood-based solar evaporators were reported by the Hu group (Zhu et al., 2017;He et al., 2019) for effective and safe high-salinity water desalination. The outstanding antifouling characteristics are due to its distinctive porous bimodal and 3D interconnected microstructure. During desalination, the rapid diffusion and evaporation of water and the capillary pumping of micro-channels in the system lead to a rapid re-supply of surface vaporized brine to maintain consistent and rapid production of water vapor. Further research has shown that a self-regenerating solar evaporator with outstanding anti-FIGURE 1 | An overview of the anti-clogging interfacial solar desalination. The anti-clogging principle can be divided into four main categories: fluid convection, hydrophobic design, contactless solar evaporation, and the Donnan effect. Adapted from . Adapted from (Xu et al., 2018). Adapted from . Adapted from (Epsztein et al., 2018 properties is recognized via a rationally designed virtual channel array in a natural wood substrate . Artificial polymer porous foams (Zhang et al., 2019a;Wang X. et al., 2020;Dong et al., 2020;He et al., 2020) with super-hydrophilic wettability have also been introduced. The sufficient water transport in these solar evaporation systems have been widely regarded to be beneficial to resolve the crystalline salt through enhancing fluid convection.
SURFACE WETTABILITY ENGINEERING: HETEROGENEOUS HYDROPHILICITY
By enhancing fluid convection, the anti-salt evaporators cannot maintain efficient evaporation out of high-salinity brine (over 10 wt%) because of the excessive salt solubility on the solar absorbers during desalination. Hence, the effective way to prevent salt deposition on the absorber is to blocks water molecules while allowing steam. It is also noteworthy that the hydrophobic surface of a solar absorber can effectively prevent water while blocking salt ions from penetrating into the absorber and thus water evaporation happens at the bottom of the absorber (Gao S. et al., 2019;Chen et al., 2020;Peng et al., 2021). The hydrophobic surface of solar absorbers has been gradually recognized as salt-rejective solar evaporators by a couple of groups (Kashyap et al., 2017). As illustrated in Figure 2B, Xu et al. (Xu et al., 2018) indicate that safe and effective solar desalination can be facilitated by a sequential electrospinning flexible Janus absorber. The membrane can be divided into two primary parts-the hydrophobic upper levels for salt-resistance and the porous and hydrophilic lower level for water supply. Benefiting from Janus' unique structure, solar absorption and water pumping are split into various layers with polymethylmethacrylate (PMMA) coating of the hydrophobic upper carbon black nanoparticles (CB) for light absorption and the lower polyacrylonitrile (PAN) hydrophilic layer for water pumping. As a result, salt can only be accumulated in the PAN hydrophilic layer and easily dissolved due to consistent water pumping. High efficiency and steady water generation (1.3 kg m-2 h-1, over 16 days) under one Sun is shown by the Janus absorber, which has not been obtained in many previous absorbers. Similar Janus cotton fabrics with saltrejection properties may also be reported by Lai et al. (Gao T. et al., 2019). However, the absorbers mentioned above are in direct contact with the volume of water, resulting in heat dissipation through the water. The vertically oriented Janus . (B) Janus-structure based absorber for interfacial solar desalination processes. The schematic of a highly efficient solar steam generation system and a salt resistance strategy. Adapted from (Xu et al., 2018). (C) Surface heating using mid-infrared radiation (non-contacting heat localization). Schematic of a conventional evaporation pond and the proposed non-contact heat localization. Energy balance and heat transfer modes for the umbrella and water. Adapted from (Menon et al., 2020). (D) Salt-resistant solar desalination based on Donnan Effect. Adapted from (Zhao et al., 2021 MXene aerogel with a hydrophobic upper layer and a hydrophilic lower layer was designed by Quan et al. (Zhang et al., 2019b). MXene, which has a theoretical light-to-heat conversion capability of 100 percent, can easily convert light to heat in combination with the Janus structure and protect the photothermal layer from direct bulk water interaction with the hydrophobic upper layer, thereby reducing heat loss and thus effectively inhibiting the crystallization of salt owing to its fast dissolution with continuous pumping water. Among most Janus absorbers, the hydrophobic upper layer of the absorber was generally generated by manipulating with a polymer solution, which may block the steam escape pathways. An innovative double-layer hydrophilic/hydrophobic nonporous structure was designed by Que et al. (Yang et al., 2018). A porous hydrophobic layer can withstand salt deposition and supply pathways for vapor evaporation. Water vapor is generated and then lost through the stomata on the upper epidermis in the natural water lily. The hydrophobic surface is the root cause of the self-cleaning property. Xu et al. modeled a water lily-inspired hierarchical structure composed of a bottom bracket and a top solar absorber (Xu et al., 2019). Unlike most conventional structures of salt rejection, there was a thin water film at the interface between the bottom supporter and the top hydrophobic absorber. This sandwich structure plays a vital role in achieving safe and efficient high-concentration brine evaporation in this design. The formed salt particles or high salt solution gradually diffused to the bulk brine along the one-dimensional channel rather than accumulating in the absorber during desalination.
DECOUPLING ENERGY-MASS TRANSPORT PATHS: NON-CONTACTING OPTICAL HEATING BY THERMAL EMISSION ENGINEERING
As in most of interfacial solar evaporation systems, mass transport and thermal energy transfer are basically existing in the same channel, in which case the high evaporation rate is accompanied with more salt reservation on the absorbers. The deposition of salt on the top surface of solar absorbers typically results in lower stability and efficiency degradation, which is difficult to prevent due to the direct interaction of the solar absorber with saline. Hence, the physical separation of the absorbers from the brine is an effective way to prevent salt deposition caused by evaporation. Non-contact solar interfacial evaporation is a new type of evaporation structure that fouling is entirely avoided. The brine absorbs solar radiation and transfers the heat from solar to the brine to generate steam by infrared radiation (Bian et al., 2020). Thomas A. Cooper et al. recently described a structure that can absorb solar radiation and reradiate infrared photons absorbed directly by water within a penetration depth of sub-100 μm . Due to thermal isolation, the structure is no longer modified at the boiling point and is used to overheat the steam generated. Steam was produced at temperatures up to 133°C, showing superheated steam in a single sunlight system in a non-pressurized system. Akanksha K. Menon et al. (Menon et al., 2020) recently used a photothermal system that converts sunlight into midinfrared radiation where water is significantly absorbed to increase evaporation by more than 100 percent by a passive and non-contact procedure ( Figure 2C). Apart from traditional evaporation ponds, heat is located by radiative coupling at the water's surface, leading to better use of solar energy with 43% conversion efficiency. The non-contact design of the system makes it ideally suited for treating a wide variety of uncontaminated wastewater. The usage of industrial materials facilitates relatively cheap and highly efficient technology for effective wastewater management, with the additional advantage of the salt recovery.
SURFACE CHEMISTRY ENGINEERING: DONNAN EFFECT FOR PRE-INTERCALATION OF SALT IONS
As discussed in the previous part, the deposition of salt is mainly due to the increasing concentration of salt on the top surface of the solar absorber. The regulation of the local salt concentration in the absorbers is an effective and desirable way to avoid salt deposition. The salt ion has positive/negative charges on the top surface of the solar absorber. Hence, the key is to regulate the number of salt ions in the absorbers according to the boundarylayer differential equation for mass transfer (Pantoja et al., 2015). Donnan effect can prevent ions upward movement by creating a double electrode layer due to electric neutrality, which can be a good way for enhanced anti-fouling performance (Chang and Kaplan, 1977;Cumbal and SenGupta, 2005;Galama et al., 2013;Ma et al., 2020).
Depending on the pH of the polyamide NF membrane, the rejection behavior of sodium cation-containing ternary ion solutions (Na+) and two monovalent anions has been consistently examined by Razi Epsztein et al. (Epsztein et al., 2018). The Donnan (charge) exclusion mechanism in the NF is more likely to affect anion with a smaller ion radius and a relatively high charge density. Membranes based on graphene oxide (GO)-based suffer from either low water flow or low ion rejection when employed for desalination. Chengzhi Hu et al. successfully created an electroconductive threedimensional hybrid membrane by employing reduced GO and carbon nanotubes (GCN) and showed significant ability to overcome the tradeoff between selectivity and permeability when performed concurrently as a philter membrane and electrode. In addition to developing various water transport channels to facilitate permeability, the intercalation of carbon nanotubes (CNTs) in the reduced GO matrix has also improved the active adsorption sites for salt ions, contributing to an improved Donnan effect and extraordinary salt rejection. An exceptional NaCl rejection of 71 percent was accomplished by the optimized GCN (containing 15 percent CNTs), three times higher than without bias. As shown in Figure 2D, Zhu et al. recently described a hierarchically designed evaporator with saltresistant ability based on the Donnan effect (Zhao et al., 2021). Due to the Donnan distribution equilibrium, this structure can Frontiers in Energy Research | www.frontiersin.org August 2021 | Volume 9 | Article 721407 minimize the number of salt ions in the absorber while supplying water. This hierarchically designed evaporator is shown by high efficiency (80%) and steady water generation under one Sun in high-salinity brine (15 wt% NaCl).
CONCLUSIONS AND OUTLOOK
Interfacial solar desalination has been intensely pursued as the most effective method for obtaining cleaner freshwater, attributing to an intrinsic low cost, high conversion efficiency, and eco-friendliness. However, the accumulated salt on the absorbers during desalination blocks continues the solar energy input and the vapor escape channels, leading to a significant decrease in the steam yield. In response to this problem, tremendous efforts have been dedicated to developing solar interfacial evaporators with an antifouling performance for long-term and stable desalination. This mini review summarized different solutions for inhibiting the salt deposition on absorbers' surfaces, including enhancing fluid convection, surface wettability engineering, decoupling energymass transport paths, and surface chemistry engineering. These researches are of paramount importance in improving our insight into the development of stable and continuous desalination. There is still a wide gap between the current strategy and the practical uses due to the complicated process of crystalline salts. Therefore, the deep study still needs more effort in future studies. 1) The yield of freshwater is critical for practical application in the solar interfacial desalination process. However, most of the previous devices still focus on the steam yields rather than the water yield to date. Therefore, the high yield of freshwater from seawater is still a fundamental challenge in solar interfacial desalination. 2) Based on the deep understanding of the principles of water transport, salt crystallization, evaporation, continuous steam production, and solid salts harvesting were proposed (Xia et al., 2019;Shao et al., 2020;Xia et al., 2020;Xu et al., 2021), which provides a new direction in further research. Hence, the collection of salt or chemicals of economic value should be consecrated deserves some attention in the process of solar interfacial desalination. 3) Regarding the current solar interfacial desalination devices, the main obstacle to solar interfacial desalination in practical uses is commercial-scale manufacturing.
AUTHOR CONTRIBUTIONS
All authors listed have made substantial, direct, and intellectual contribution to the work and approved it for publication.
|
2021-08-18T13:19:14.749Z
|
2021-08-17T00:00:00.000
|
{
"year": 2021,
"sha1": "f9c204fad1b2ad42a80ea14bffc5098b136bc5e4",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fenrg.2021.721407/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "f9c204fad1b2ad42a80ea14bffc5098b136bc5e4",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
}
|
67790820
|
pes2o/s2orc
|
v3-fos-license
|
Investigation of diabetes-induced effect on apex of rat heart myocardium by using cluster analysis and neural network approach : An FTIR study
Diabetes mellitus (DM) is a progressive chronic disorder, which affects people belonging to all age groups of the population. This disease is accompanied by a greatly increased risk of cardiovascular death. In the present study, the effects of streptozotocin (STZ)-induced type 1 diabetes on apex myocardium of the rat heart have been investigated using Fourier Transform Infrared (FTIR) spectra. The cluster analysis has been applied to FTIR spectra to differentiate the diabetic samples from the normal controls. In addition, the protein secondary structures of diabetic and normal tissues were predicted by neural networks based on the amide I band of the FTIR spectra. The findings mainly suggest that 5 weeks of diabetes alters the lipid and protein profile of normal rat heart apex myocardium, which might have an important role in understanding the molecular mechanism of diabetes-related heart diseases.
Introduction
Diabetes Mellitus (DM) became a serious health problem throughout the world.World Health Organization estimated that the worldwide prevalence of diabetes is expected to grow from 171 million in 2000 to 366 million by 2030 [1].There are two types of this disease namely, type 1 and type 2. Type 1 occurs due to the loss of insulin production in the beta cells of the pancreas, while type 2 occurs due to lack of serum insulin or poor uptake of glucose into the cells [2].Both types of diabetes are known to affect normal heart function, which eventually leads to congestive heart failure even in the absence of coronary artery disease [3].In diabetes, hyperglycemia causes rapid changes in membrane function, followed by contractile dysfunction within weeks [4][5][6].Cardiomyopathy in diabetes is associated with decreased diastolic compliance, interstitial fibrosis and myocyte hypertrophy, to mention a few.The molecular mechanisms leading to this disease are still uncertain.
Previously, the effect of diabetes have been documented in many studies, including our's, on different regions of the heart, such as left ventricle myocardium [7][8][9], right venticle myocardium [7,10,11], papillary muscles [13,14], and ventricular myocytes [13].However, the number of studies investigating the effect of diabetes on the apex myocardium is very limited.In one of them, the effects of diabetic cardiomyopathy have been investigated on the rat apex with respect to electrophysiological characteristics [15].Apex is a clinically important region of the heart in terms of both diagnosis and treatment of heart diseases [16][17][18][19][20].For instance at cardiac apex, a mitral systolic murmer is audible as a holosystolic or late systolic murmer [21].Molecular changes in the cardiac apex are known to be very important for the regulation of the normal electrical activity of the heart [22].Therefore, in the present study, we have investigated the effects of type 1 diabetes on the protein secondary structure of apex myocardium of the rat heart using neural networks based on Fourier Transform Infrared (FTIR) spectra to understand the underlying biochemical changes associated with diabetes.In addition, we have differentiated successfully the diabetic tissues from the control ones with cluster analysis for investigating the changes in lipids (at 2800-3050 cm −1 spectral region), and in proteins (1480-1800 cm −1 spectral region) using FTIR data.
Materials and methods
The control and the diabetic groups of the Wistar rats used in this study were prepared as described previously [7].Two adjacent cross-sections of tissues having 9 µm thickness were taken from the apex of all groups for acquisition of spectral data.In order to record FTIR spectra, these sections were thawmounted on IR-transparent CaF 2 windows.The rat heart was attached to the cryotome by using a small amount of optimum cutting tool.The first tissue sections were used for FTIR microspectroscopy measurements and the second serial sections were used for Hematoxylin & Eosin (H&E) staining.This staining was performed to see the histologically defined tissue regions on the sections.To map the tissue sections, an FTIR microspectrometer (Bruker, Germany) was used.The details of this IR mapping are described previously [7].To have IR data completely covering the tissue area of interest, spectra were collected in both x and y directions in steps of 80 µm.IR absorption spectra were collected from 850 to 4000 cm −1 .The resolution was 6 cm −1 .The number of scans co-added per pixel spectrum was 64.The spectrometer was continuously purged with dry air to get rid of spectral contributions from water vapor and CO 2 .
Spectral analysis
For microspectroscopic data analysis, CytoSpec (http://www.cytospec.com)and OPUS data collection software packages (Bruker, Germany) were used.Quality test was applied to the raw spectral data prior to the data evaluation procedure, as described previously [7].The spectra which failed the quality test were removed and were not used for further analysis.The remaining spectra were used for first derivative calculations in 950-1480 cm −1 and 2800-3050 cm −1 spectral regions, using a five smoothing point Savitzky-Golay algorithm.The next step was the vector-normalization.For this purpose, first derivative spectra in the frequency range of 950-1480 cm −1 were used.Spectral classes corresponding to specific tissue structures were identified using cluster analysis-based maps.For this purpose, cluster analysis was performed in 950-1480 cm −1 and 2800-3050 cm −1 regions by using first derivative spectra to obtain the average spectra arising only from the myocardium of the rat cardiac apex.The original absorption spectra and their averages belonging to different clusters were saved.For further analysis, the data were loaded into OPUS.
Cluster analysis
To facilitate comparison of normal and diabetic cardiac apex myocardia, cluster analysis were performed on second derivative spectra using a nine smoothing point Savitzky-Golay algorithm in 2800-3050 cm −1 and 1480-1800 cm −1 spectral regions for the analysis of signals arising from lipids and proteins, respectively.Spectral distances were calculated between pairs of spectra as Pearson's correlation coefficients [23].For separation of control and diabetic tissue, cluster analysis was based on the Euclidean distances.Ward's algorithm was used for hierarchical clustering in all cases.The hierarchical clustering is a multivariate statistical data analysis method, which builds a hierarchy of clusters from individual elements.With cluster analysis, it is possible to examine the interpoint distances between all the samples and represents the information in the form of a two dimensional plot.This plot is known as dendogram, which is a tree-like diagram demonstrating the arrangement of the clusters.Each subset of spectra having closer similarities to each other than another set of spectra are classified in a single cluster [24].Even small differences between groups can be visualized easily by cluster analysis.
Neural network analysis
We have used neural networks to predict the secondary structure content of proteins.The neural networks were initially trained using a data set containing FTIR spectra of 18 water soluble proteins recorded in water using the method described in reference [25].The secondary structures of these proteins were known from X-ray crystallographic analysis.The size of the data set was increased by interpolating the available FTIR spectra to improve the training of the neural networks.Before training, amide I band located at 1600-1700 cm −1 of FTIR spectra were first normalized and their discrete cosine transforms (DCT) were obtained.To improve the generalization property of the neural network, the number of inputs were restricted to the significant DCT coefficients.Bayesian regulation was used to train the neural networks whose structures were optimized in terms of the number of inputs and number of hidden units.The trained neural networks have standard error of prediction values of 4.19% for αhelix and 3.49% for β-sheet.Although these errors may seem to be large, it should be pointed out that neural networks yield more reliable results for incremental changes in α-helix and β-sheet contents as in this study.The details of the training and testing algorithm can be found in [25].
Statistical analysis
The results were expressed as "mean ± standard error of mean".Mann-Whitney U test was used to test the significance of the differences between control and diabetic groups.p values less than 0.05 were accepted as significantly different from the control group.The degree of significance was denoted as * p < 0.05.
Results and discussion
Due to the increasing importance of diabetes-related cardiovascular diseases, in the current study, we have investigated the effect of diabetes on rat heart apex myocardium by using cluster analysis for comparison of spectral changes in lipids and proteins between control and diabetic groups, and by neural networks for protein secondary structure estimation.
Figure 1(a) demonstrates the H&E staining image of a section with 9 µm thickness taken from the rat heart apex at ×25 magnification.The mapped region is shown in square on this section.After completing FTIR microscopy measurements, the data were loaded to cytospec program for data analysis procedure.In the cluster imaging of IR, clusters should contain spectra from histological regions exhibiting similar spectral features.Spectra in different clusters ideally manifest different spectral signatures.Assigning a distinct color to all spectra in one cluster is the main idea in image assembly on the basis of cluster analysis [26].The image of cluster analysis, which was performed to discriminate myocardium of the apex, is shown in Fig. 1(b), (c) illustrates the average spectra arising from different clusters.In Fig. 1(b) and (c), it is possible to see the 4 different clusters given in gray scale ranging from white color (numbered as 2 in Fig. 1(b)) to very dark gray color (numbered as 3 in Fig. 1(b)) belonging to different components, and the original average absorption spectra of these clusters, respectively.The cluster shown by dark gray color, numbered as 4 in Fig. 1(b), arises from the tissue freezing medium (optimum cutting tool) and the spectrum belonging to this cluster is numbered as 4 in Fig. 1(c).The cluster shown by very dark gray color, numbered as 3 in Fig. 1(b), arises from the epicardium of the apex and the tissue freezing medium.The corresponding original average absorption spectrum is also illustrated with the same number in Fig. 1(c).The cluster represented by white color, numbered as 2 in Fig. 1(b), belongs to the epicardium of the apex.The cluster shown with gray color, numbered as 1, arises from the myocardium of the apex and the same number is used to show the original average absorption spectrum in Fig. 1(c).The spectra which failed the quality test were excluded from all consecutive evaluations and are shown with black color in Fig. 1(b).The aim of this cluster analysis is to obtain the average original absorption spectra arising only from the myocardium of the apex since we are interested in the changes occurring in this specific region of the cardiac apex between control and diabetic groups.The data belonging only to this cluster showing the apex myocardium is used for the comparisons.The cluster analysis was performed for the apex part of all the rat hearts and the original average spectrum arising from myocardium was saved.For further analysis, the data were loaded into OPUS program.Then, cluster analysis was done for comparison of normal and diabetic groups for apex myocardium of the rat heart.Representative average absorption spectra belonging to control apex myocardium is given in Fig. (2).In Fig. 2(a), absorption spectrum of the control apex myocardium is given in 3050-2800 cm −1 region.Figure 2 The major absorptions in 3050-1480 cm −1 are numbered in this figure and the frequency values with their assignments are given in Table 1.
The spectral information contained in the second derivative spectra was used as input data for hierarchical cluster analysis in order to obtain objective classification on the basis of spectral patterns.Cluster analysis results are displayed as dendograms using spectral information between 2800 and 3050 cm −1 ; and using spectral information between 1480 and 1800 cm −1 of normal and diabetic groups for apex myocardium in Figs 3 and 4, respectively.The 2800-3050 cm −1 region mainly contains C-H stretching vibrations arising from the fatty acyl chains of membrane lipids, and the spectral region rich in proteins is between 1800 and 1400 cm −1 [27,28].As seen from the figures, cluster analysis resulted into two distinct clusters corresponding to control and diabetic groups with success of 6 out of 6 and 8 out of 8, respectively, in both spectral regions subjected to this analysis.These results imply that diabetes causes some important alterations not only in lipids but also in proteins of the apex myocardium.These findings of this study demonstrated the feasibility of FTIR microspectroscopic discrimination of diabetic and nor- Amid II: (protein N-H bending, C-N stretching), α-helix Fig. 3. Cluster analysis results displayed as dendograms using spectral information between 2800 and 3050 cm −1 of normal and diabetic groups for apex myocardium.
Fig. 4. Cluster analysis results displayed as dendograms using spectral information between 1480 and 1800 cm −1 of normal and diabetic groups for apex myocardium.
mal cardiac tissues.The diabetes-induced changes in the lipids might be due to the altered myocardial energy substrate utilization, which consequently may have a detrimental effect on heart function due to accumulation of lipid intermediates [29].
The protein region (amide I), corresponding to absorption values between 1600 and 1700 cm −1 , was further analyzed to determine the diabetes-induced changes in the protein secondary structure using neural network predictions based on FTIR data.A number of techniques, ranging from simple tissue staining to highly specialized methods including X-ray crystallography, NMR and protein sequencing can be used for the identification of molecules associated with DM.While performing conventional analytical procedures, some specific property in a sample might be damaged or denatured due to extraction, fixation or staining.Specifically for protein structure determinations, conventional procedures used for isolation of proteins are too destructive to derive accurate information about the real structure of proteins.With FTIR spectroscopy, it is possible to monitor molecular and structural composition directly in the untreated and unfixed whole tissue [30].IR spectroscopy opens a new field of medical research, as it causes no damage to the important constituents of the cells or tissues [31].In addition, recent developments in data processing techniques and instrumentation made IR spectroscopy a useful tool in the medical arena.Furthermore, having objective results is another important advantage of using IR spectroscopy technique, since spectral data are collected and treated by computer controlled algorithms.
Recently, this technique has successfully been applied for the determination of protein secondary structure in solution [32], in tissues [33] and in membranes [34].The amide I region (1700-1600 cm −1 ) is commonly used for the analysis of secondary structure of proteins in FTIR spectra.In this particular spectral region, different protein conformations result in different discrete bands, which are usually broad and consequently overlapping.For the identification of the bands seen in FTIR spectra, mathematical resolution enhancement techniques are commonly used [35][36][37].Curve fitting technique is one of them which was originally used by Byler and Susi [35], to analyze protein amide I bands.One of the main disadvantages of the curve fitting procedure is that it requires a series of subjective decisions, such as assignment of peaks, which can significantly alter the results [38][39][40].Another disadvantage is that curve fitting has a tendency to overestimate the beta sheet content of the primarily helical proteins.These disadvantages of curve fitting approach stimulated the development of new methods such as neural networks.This new computational technique has proven to be an alternative powerful tool for the analysis of protein structure from FTIR spectra [41].
The neural network prediction results are presented in Table 2.It is clearly seen from the table that diabetes causes significant changes in the protein secondary structure of cardiac apex myocardium by decreasing the content of α-helix from 69.88 ± 2.34 to 48.97 ± 5.42 ( * p < 0.05), and by increasing the content of β-sheet structures from 11.70 ± 2.83 to 43.12 ± 9.03 ( * p < 0.05).So, we can deduce that DM is altering the secondary structure of proteins which might be indicative of either a structural rearrangement of already existing proteins or the expression of new types of proteins having different structural compositions.The conformational changes we have observed in the secondary structure of proteins might also be due to cleavage of proteins in diabetes.As suggested by Kugiyama et al. [42], proteins exposed to glucose are cleaved and undergo conformational changes.These changes were shown to be dependent on hydroxyl radicals, which might be produced by glucose auto-oxidation [43].In previous studies, myocardial cell structure was reported to be destructed due to hyperglycemia [44], and cardiac structure was reported to be changed in diabetes [29].Although no detailed secondary structure analysis has been performed previously, diabetes-induced conformational changes were indicated from the changes in the amide I band shape in rat heart homogenates [45].Another possible explanation for the changes in the protein structure might be the impaired heat shock protein (Hsp) response, which was previously reported to occur in diabetes [46].In the current study, we obtained significant insights into the changes in protein secondary structure of cardiac apex myocardium, which might be important in understanding the molecular mechanism lying behind diabetes-induced heart diseases.With the technique we have used in the current study, we were able to monitor changes in the secondary structure of proteins in tissues without isolating them.Membrane proteins are difficult to be isolated and only a very limited number of proteins were so far isolated.It is crucial to reveal diabetes induced alterations for the development of new drugs to at least treat the diabetes induced damages at molecular level.
Conclusions
In this study, application of cluster analysis to FTIR spectra permitted a rapid and reliable discrimination between control and diabetic group in both lipid and protein regions.Furthermore, neural network approach based on FTIR data has been used for the first time in this study to reveal diabetes induced changes in the secondary structure of proteins in apex myocardium of the rat heart.Present study points out the importance of FTIR microspectroscopy as an excellent technique for the estimation of protein secondary structure.
Fig. 1 .
Fig. 1.Light microscope image of an H&E-stained section taken from the apex of rat heart at ×25 magnification including the mapped region shown in a square (a), image of cluster analysis (b) and average spectra belonging to different clusters (c). 4 different clusters are given with different colors and numbers ranging from 1 to 4 in (b), and the corresponding original average absorption spectra are illustrated with the corresponding numbers in (c).
Table 1
[7]d assignments of major absorptions in IR spectra of control cardiac apex myocardium in 3050-1480 cm −1 spectral region[7]
Table 2
The results of neural network predictions based on FTIR data in 1600-1700 cm −1 spectral region for the changes in protein secondary structure between control and diabetic groups Data shown as mean ± standard error of mean.P < 0.05 were accepted as significantly different from the control group.The degree of significance was denoted as: * p < 0.05.
|
2019-01-01T17:33:35.788Z
|
2007-01-01T00:00:00.000
|
{
"year": 2007,
"sha1": "68664c57b4d943ff2f168f84a731cdf173340ba4",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jspec/2007/269618.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "68664c57b4d943ff2f168f84a731cdf173340ba4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
8203323
|
pes2o/s2orc
|
v3-fos-license
|
Cognitive dysfunction in pediatric multiple sclerosis
Cognitive and neuropsychological impairments are well documented in adult multiple sclerosis (MS). Research has only recently focused on cognitive disabilities in pediatric cases, highlighting some differences between pediatric and adult cases. Impairments in several functions have been reported in children, particularly in relation to attention, processing speed, visual–motor skills, and language. Language seems to be particularly vulnerable in pediatric MS, unlike in adults in whom it is usually preserved. Deficits in executive functions, which are considered MS-specific in adults, have been inconsistently reported in children. In children, as compared to adults, the relationship between cognitive dysfunctions and the two other main symptoms of MS, fatigue and psychiatric disorders, was poorly explored. Furthermore, data on the correlations of cognitive impairments with clinical and neuroimaging features are scarce in children, and the results are often incongruent; interestingly, involvement of corpus callosum and reduced thalamic volume differentiated patients identified as having a cognitive impairment from those without a cognitive impairment. Further studies about pediatric MS are needed in order to better understand the impact of the disease on brain development and the resulting effect on cognitive functions, particularly with respect to different therapeutic strategies.
Background
Multiple sclerosis (MS) is an autoimmune-mediated disorder that causes demyelination of the central nervous system and secondary axonal degeneration, leading to brain atrophy. This chronic inflammatory neurodegenerative disease is typically diagnosed in young to middle-aged adults, but it can occur in childhood, although children and adolescents rep resent an uncommon patient population, accounting only for 2%-5% or less of patients suffering from MS. 1 An open question is whether pediatric MS shares the same disease features as adult MS. Clinically, the presentation and course of pediatric and adult MS may be different, particularly when onset is at younger ages. However, it is not clear if these differences are due to pathophysiology of the disease or to the disease processes being expressed on a different substrate (ie, the immature central nervous system and immune system). 2 Inflammation is more pronounced in pediatric MS, which is characterized by higher rates of relapses, 3 higher T2 lesion volume observed through magnetic resonance imaging (MRI), 4 and an often polyfocal onset of symptoms; 3 moreover, a small proportion of cases fulfill, at onset, criteria for acute disseminated encephalomyelitis, but are later diagnosed as MS. 5 Compared to adults, neurologic, motor, and cognitive outcomes of children suffering from MS differ in evolution and features, suggesting different cerebral involvement or enhanced recovery and repair mechanisms, but also differences in immune-pathophysiology. 2 In this context, the neurocognitive outcome of pediatriconset MS is particularly complex to understand and to predict. Pediatric age is characterized by ongoing myelination in the central nervous system; thus, inflammatory demyelination occurring in MS may result in atypical or incomplete formation of white matter pathways. The integrity of the white matter is a necessary prerequisite for development of neural networks and cognition; moreover, it is not clear if increased plasticity, typical of the immature central nervous system, has a protective role from the deleterious impact of the disease in children. 6 Demyelination effects and plasticity mechanisms have to be evaluated in the context of timing of development of cognitive functions, as well as in terms of their possible interference with the acquisition of the basic building blocks that are critical for academic achievement. 7 Taking into account all these aspects, it is not surprising that the cognitive and neuropsychological profile in pediatric MS is similar to, but does not overlap with, adult MS.
Few review articles have been published so far on cognition in pediatric MS; the most comprehensive is that of Blaschek et al. 8 This work revised major studies on cognitive dysfunction in pediatric MS, published until 2010, and reported frequent deficits in attention, processing speed, and memory. Cognitive deficits seem to emerge early and are generally more severe in children with an early-onset disease. Other important studies, reporting wide numbers of pediatric MS patients, have been subsequently published but are not included in available reviews. 6,[9][10][11][12][13][14][15][16] Moreover, the focus of recent research has included correlations of cognitive dysfunction with MRI abnormalities and clinical features.
The aim of the present study was to perform a comprehensive narrative review. We underwent a systematic search of all articles indexed in PubMed Central, PsychINFO, Science-Direct, Web of Science, and Scopus, up to January 15, 2014, without time or other limits. The following keywords were used: pediatric OR childhood OR juvenile OR adolescents AND multiple sclerosis AND cognitive OR neuropsychological OR impairments OR outcome. We also inspected the reference list of the retrieved articles to ensure a wider search.
Cognitive dysfunction in pediatric MS
Cognitive dysfunction in pediatric MS was an unexplored field until the last decade, but, in the last few years, research on global cognitive and neuropsychological profiles of children suffering from MS expanded rapidly. Reported impairments occur in a range from 30% to 80% of the cases; evaluation criteria are not homogeneous among studies, accounting for some differences in the reported prevalence.
The intelligence quotient (IQ), which is a measure of global cognition, even if abnormally low in some patients, is generally better preserved than neuropsychological function where a wide range of deficits -including attention, information processing speed, memory, executive functions, and some aspects of language and visual-motor domainshave been reported.
Data on global cognitive functioning in pediatric MS
The full IQ profile was evaluated by a complete administration of the Wechsler Intelligence Scale in only a few studies; however, a short form of the Scale, the Wechsler Abbreviated Scale of Intelligence, has been frequently used to integrate an extensive neuropsychological assessment.
Attention and information processing
Most studies evaluating the attention domain considered several components of this function and used the Trail Making Test, the Symbol Digit Modalities Test, or the Continuous Performance Test. Complex aspects of attention, such as shifting attention between complex stimuli, were the more compromised components. When tested, attention and information processing were found to be abnormal in nearly all studies; deficits appeared to emerge very early in the course of the disease. Impairments were particularly evident from comparison of MS patients' performance with that of healthy controls. 7,9,11,18,21 visual-motor and visual-spatial skills The majority of data on visual-motor and visual-spatial skills in pediatric MS come from studies using tests such as the Visual Motor Integration test and the Rey Figure. Impairments, both on visual-motor integration skills and on visual memory, are often reported (eg, spatial recall 23
1387
Cognitive dysfunction in pediatric multiple sclerosis Memory Deficits in verbal and visual memory, both immediate and delayed as well as in episodic and working memory, have been reported in pediatric and juvenile patients. 12,20,21,24 Language In pediatric MS, an involvement of language is frequently seen. Children are still developing complex linguistic skills; therefore, language seems to be particularly vulnerable in this age group. By contrast, language dysfunctions seem relatively rare in adult-onset MS, and linguistic problems are usually associated with more generalized cognitive impairment. Indeed, differences in linguistic involvement appear to be the main neuropsychological difference between children and adults with MS.
The linguistic profile of pediatric MS was studied with tests evaluating different aspects of expressive and receptive language. The most used tests were the Vocabulary and Similarities tests of the Wechsler Intelligence Scale, verbal fluency, and naming tests. Juvenile MS patients seem to have poor performance in complex and speed-dependent language functions, 18,24 while naming skills appear to be less affected. Tests of expressive language, although rarely compromised, are useful to discriminate between children suffering from MS and healthy matched controls. 6,7,18 executive functions The most-used tests in studies evaluating the executive functions in MS patients were planning and abstract reasoning tasks, such as the Wisconsin Card Sorting test and the Tower of London test. In adults, particular deficits in conceptual reasoning are considered to be specific to MS; 25-27 by contrast, in pediatric MS, conceptual reasoning is not commonly impaired. However, executive functions encompass multiple interdependent abilities (ie, working memory, attention shifting), which are more commonly affected in children as an expression of executive-function involvement in a different stage of brain maturation. 13,21,24 In many affected children, neuropsychological impairments may compromise intellectual functions and academic performance. 20,21 Deficit in attention, information processing speed, and executive functions may have an impact on activities such as listening to lengthy instructions, organization of unstructured assignments, and generation of novel ideas. These skills are increasingly needed in higher academic grades and correlate with the clinical observation of declining school performance.
Furthermore, pediatric MS patients appear to show a selective vulnerability in mathematic skills despite average IQ and the provision of unlimited time to complete mathematic questions. 14 However, some measures of academic achievement (reading and spelling) may be completed within age expectations, demonstrating that the use of measures of academic achievement alone, similarly to measures of global intellectual function, may miss or underestimate the real cognitive impairments. 6,20,28 Up until now, most studies in literature calculated an index of cognitive impairment from a complete neuropsychological assessment. This global measure of cognitive functioning was calculated from impairment of single functions; criteria for classification of impairment differed widely between studies. For some authors, dysfunction was based on at least two impaired cognitive tasks 24,28 while, for others, dysfunction was based on at least three impaired cognitive tasks, 6,11,21,22 or five or more tests. 9 Finally, others graded the cognitive performance based on the percentage of abnormal functions. 18 This methodology is useful to quantify a wide range of impairments in a unitary measure, but it may introduce important differences in results, leading to studies that are barely comparable.
Thus, research does not yet provide extensive data on cognitive evaluation of pediatric MS using a standardized validated neuropsychological battery specifically tailored for this age range. However, brief and cost-effective validated instruments have been recently proposed internationally. 7
Time course of cognitive dysfunction in pediatric MS
Research in adults suffering from MS converged to show a progression of cognitive loss over time, both in patients cognitively intact at baseline and in those initially showing some degree of impairment. 29,30 Most of the studies on pediatriconset MS report cognitive dysfunctions often appear early in the course of the disease, close to MS diagnosis. Furthermore, impairments are more severe in those with younger age at disease onset. 6,21 Information about the long-term outcome of cognitive deficits is scarce. 9,11,18,20,28,31 A lower cognitive functioning was reported in children with more recent disease onset (within 12 months) compared to those with later onset (mean 5 years); more widespread deficits, involving mainly tasks requiring self-generated organizational strategies or reliance on efficient processing speed or working memory, were seen in those with longer disease duration. 20 Similar results were found by MacAllister et al 28
1388
Suppiej and Cainelli previously normal functioning, was found in the majority of the pediatric patients tested after 5 years from first evaluation (which was done at MS onset). The deterioration involved most of the tested cognitive measures; in particular, verbal memory, complex attention, verbal fluency, and receptive language.
In pediatric neurodegenerative disorders, the deleterious effects of disease progression on cognitive abilities coexist with expected maturational improvement in function, creating more difficulty in detecting evolution, particularly at the individual level. Indeed, a recent study that used a robust statistical method to determine individual changes on cognitive tests showed improvement in functioning on only 18% of the measures in the MS group as compared with 86% of the measures in the control group. These results highlight a lack of expected maturational improvements in the MS patients relative to age-matched healthy peers. Deterioration in functioning, defined as significant decline on three or more tests, was observed in 25% of the patients, whereas more global deterioration, defined as decline on five or more tests, was observed in roughly 11% of the patients. 9 Differences between studies may be due to more stringent cut-off, different statistical approach, or simply differences in the timing of follow-up.
Psychiatric disorders in pediatric MS
Among psychiatric disorders, depression is a common symptom in MS, with a prevalence ranging from 27% to 54% in adults. 32 Depression might originate from prolonged hospitalization or be the result of the psychological impact of the MS diagnosis itself; on the other hand, the fact that patients seem to be more affected by depressive states than other neurologic patients with and without involvement of the central nervous system suggests that depression may represent a primary symptom of MS. [33][34][35] A relationship between depression and immunomodulatory treatment was also suggested, but results are incongruent. 33,36,37 The prevalence of affective disorders in children suffering from MS varies from 6% to 46%, 20,21,24,31,38 probably due to the different assessment strategies (Children's Depression Inventory, Kiddie-SADS-Present and Schedule for Affective Disorders and Schizophrenia for School-Age-Children-Present and Lifetime Version [K-SADS-PL]). Although many studies evaluated the prevalence of affective symptoms in children with MS, the correlation with cognitive and neuropsychological impairment remains a less-explored field. The only study fully addressing this question is that of Weisbrot et al. 15 They found a variety of psychiatric diagnoses among 45 pedi atric MS patients undergoing psychiatric assessment. Out of the 25 children with a psychiatric disorder, more than two-thirds had more than one psychiatric diagnosis. The most fre quent were anxiety disorders, mood disorders, and attention deficit hyperactivity disorder. No differences were found in patients with or without a psychiatric disorder with respect to cognitive impairment, but children with concurrent mood or anxiety disorders had a higher frequency of cognitive dysfunction when compared with children with other psychiatric diagnoses. The authors suggested an influence of depression and anxiety symptoms on attention and information-processing skills, but they could also be independent manifestations of the underlying demyelinating central nervous system disease.
Fatigue in pediatric MS
Fatigue is among the most common symptoms of MS. In adults, the frequency ranges from 65% to 95% across studies 39,40 and many patients with MS describe fatigue as their most disabling symptom, significantly interfering with daily functioning. 41 Fatigue has been defined as an overwhelming sense of tiredness, lack of energy, and feeling of exhaustion of such severity as to interfere with usual and desired activities. Despite its significance, the pathogenesis of fatigue is poorly understood.
The prevalence and severity of fatigue, and its complex relationship with cognitive dysfunction, have been extensively investigated in the literature on adult-onset MS. However, to date, there is limited information on the impact of fatigue on cognitive impairment in children and adolescents with MS. The majority of studies considered fatigue as part of the evaluation of cognitive dysfunction or depression. 7,18,21,24,42,43 There is a heterogeneity of instruments used to test fatigue and often it is measured from simple dichotomous yes/no questions or with the Fatigue Severity Scale, which is validated for adults. When tested in children and adolescents, this scale provided low sensitivity levels in the identification of fatigue. MacAllister et al 24 failed to show an effect of fatigue on cognitive profile. Amato et al 21 did not find differences in fatigue between patients with and without cognitive impairment, even if fatigue was reported as a common symptom.
Recently, a Multidimensional Fatigue Scale has been proposed, entailing items from the Pediatric Quality of Life Inventory administered both to children and their parents. 44 Goretti et al 11
1389
Cognitive dysfunction in pediatric multiple sclerosis occurrence of neu ropsychological impairments and psychiatric disorders. They failed to find an association between the overall measure of subjectively assessed cognitive fatigue and global cognitive performance. However, higher levels of self-reported cognitive fatigue by children were associated with impaired performance on a problem-solving test, while higher levels of parent-reported cognitive fatigue were associated with impairment on tests of verbal learning, processing speed, complex attention, and verbal comprehension.
Clinical features predicting cognitive impairment in pediatric MS
Research on clinical risk factors for cognitive decline are very scarce; the factors most frequently considered are age at onset, MS duration and number of relapses, severity of neurological disability, and effect of therapy.
Age at onset
Few studies evaluated the effect of age at onset of MS on cognitive outcome, and results are incongruent between studies. 6,18,20,24 Amato et al found that low IQ scores were significantly associated with younger age at MS onset; they suggested that earlier neuropathological damage in the central nervous system can have more disruptive impact on the development of intellectual abilities. 21 However, by evaluating the same MS patients 2 years later, the effect of age at MS onset disappeared. 18 Banwell and Anderson 20 found a more severe cognitive impairment in a group of children with earlier onset of MS compared to a group with later onset; since they tested both groups at the same age it could not be determined if the effect was due to longer duration of the disease or earlier age at onset. Indeed, other authors showed that association of more severe cognitive dysfunction with younger age at disease onset disappeared after controlling for disease duration. 6,24 MS duration and number of relapses Data in the literature are scarce and fail to demonstrate an effect of disease duration on cognitive outcome. 6,11,18,20,24,28 The relapsing-remitting form of MS, where episodes of neurologic dysfunction are followed by a period of recovery, appears to be very common in childhood MS, with estimates ranging from 93% to 100% of the cases. [45][46][47][48] However, similar to disease duration, the number of relapses does not seems to influence cognitive decline, 11,18 or the influence was too weak to be noticeable. 24,28 Neurological disability Neurologic data in MS patients were usually collected using the Expanded Disability Status Scale (EDSS). The EDSS is a ten-point scale of neurologic impairment, quantifying eight functional systems: pyramidal; cerebellar; brainstem; sensory; bowel/bladder; visual; cerebral; and other. A summary scale is generated that rates impairment according to a scale ranging from zero -indicating normal neurological function -to ten -indicating death due to MS. 49 Over the first 15 years of MS, EDSS scores tend to be less than three in the large majority of in pediatric patients. However, some studies indicated neurological disability as the most robust predictor of cognitive impairment in children suffering from MS. A hierarchical multiple regression analysis used to evaluate the relative contribution of several clinical variables in predicting cognitive outcome (EDSS, number of relapse, MS duration, age at onset) showed that EDSS was the strongest predictor; 24 moreover, a neurological impairment emerging early in the course of the disease was predictive of cognitive decline. 16,28 These findings are consistent with results in adult-MS literature pointing out an association between EDSS and cognitive loss. 29 In contrast with the above studies, other authors reported that cognitive deficit occurred independently from neurological disability as assessed with EDSS, likely reflecting the relatively limited accrual of physical disability early in the MS disease course of pediatric-onset MS patients. 6,11,18,20 Therapy Studies in adults suggest that disease-modifying drugs may positively influence the cognitive outcome of the patient in the long-term. 50 The effects of these treatments on cognitive outcome has not been specifically tested in pediatric MS; however, up to 90% of the children included in studies on cognitive outcome of pediatric MS were receiving diseasemodifying drugs. When studies attempted to correlate cognitive outcome with therapy, among a variety of other risk factors, no effect was found. 18 Furthermore, while studying the effect of therapy on cognitive functioning, it should be taken into account that treatment with interferon beta has been linked to the occurrence of depression in adults with MS. 51
MRI correlates of cognitive impairment in pediatric MS
In adults with MS, cognitive impairment has been positively associated with several MRI abnormalities: axonal and
1390
Suppiej and Cainelli neuronal damage, as measured by global brain atrophy; 52-54 cortical volume; 55 ventricular enlargement; [55][56][57] and thalamic atrophy. 58,59 Other associations were also suggested but results were not always replicated. 6 Spatial distribution of lesions showed little correlation with specific cognitive deficits, as highlighted by the incongruent results reported in studies investigating the relationship between executive deficits and frontal lobe lesions. [60][61][62] However, it has been speculated that the pathological processes in MS may be more diffuse in nature, given the occurrence of changes also in the so called "normal appearing white matter". 63 Less is known regarding the cognitive impact of MRI lesions on pediatric MS; studies specifically linking cognitive dysfunction to neuroimaging in childhood MS are scarce, but there is emerging evidence of some MRI correlates of neuropsychological deficits.
Research showed that, in children, patterns of lesion distribution slightly differed from that of adults, probably due to incomplete myelination of anterior regions. In children, lesions were mainly reported in the pons, peduncles, brainstem, and cerebellum. [64][65][66] Selective loss of volume of the thalami compared to healthy controls was reported in a study of 28 pediatric-onset MS patients, 67 but the cognitive correlations were not described.
Till et al 6 evaluated neuropsychological performance and MRI correlates in 34 pediatric-onset MS patients and in 33 demographically age-matched healthy children. They identified a cognitive impairment in 29% of the patients, predominantly involving attention and processing speed, expressive language, and visual-motor integration; MRI showed significantly lower thalamic volume, total brain volume, and gray matter volume in MS children. Interestingly, corpus callosum and thalamic volume differentiated patients identified as having a cognitive impairment from those without. Furthermore, thalamic volume was the most robust MRI predictor of global IQ, processing speed, and expressive vocabulary scores, though corpus callosum area, normalized brain volume, and T2-weighted lesion volume were also significantly correlated with most cognitive outcomes. About 15 months later, 28 of the MS patients and 26 of the healthy controls were retested in order to evaluate changes of cognitive functioning in time. Surprisingly, changes in brain volume and lesion volume did not predict changes in cognitive status, with the exception of a significant relationship between increased lesion volume and slower psychomotor speed on attention tasks. 9
Conclusion
Research over the last few years has improved our understanding of the impact of childhood MS on cognitive functions. Recent data on large series suggests that deficits occur early in the course of the disease and affect children's quality of life. Language, an early developing function, appears particularly vulnerable in this age group.
In children, as opposed to adults, the degenerative processes occur before the full maturation of cognitive abilities; therefore, failure to show age-expected progress is considered a sign of disease progression.
The relationship of cognition with the other two main symptoms, psychiatric disorders and fatigue, is beginning to also be appreciated in pediatric MS.
The predictive value of clinical and neuroimaging features on cognitive outcome has not been extensively explored, but emerging data suggest a relationship between regional brain volume (particularly thalamic volume) measured on MRI and neuropsychological findings.
A limitation of the present review was the lack of a systematic search. Furthermore, comparison of available studies has been hampered by the differences in adopted protocols and the variability of criteria used to define abnormalities. However, brief and cost-effective validated instruments for evaluation of cognitive functions in pediatric MS have been proposed recently. 7 Future research should include multicenter studies with agreed protocols for testing cognitive function and common criteria for definition of abnormalities. More longitudinal studies with multiple assessment points and long-term evaluation, ideally up to adulthood, are needed. They should take into account resiliency factors such as the baseline social and cultural status as well as preexisting school performance.
Important directions for future research will be the identification of risk factors for cognitive decline by correlation of patients' clinical and instrumental data with cognitive outcome; individual neuroimaging indicators of cognitive reserve should be included when evaluating the time-course of cognitive decline. Research should also take into account the evaluation of the effect of different disease-modifying agents on cognitive performance.
The clinical implications of such studies for potential interventions and rehabilitation strategies, ultimately addressed to improve children's quality of life, cannot be overlooked.
Neuropsychiatric Disease and Treatment
Publish your work in this journal Submit your manuscript here: http://www.dovepress.com/neuropsychiatric-disease-and-treatment-journal Neuropsychiatric Disease and Treatment is an international, peerreviewed journal of clinical therapeutics and pharmacology focusing on concise rapid reporting of clinical or pre-clinical studies on a range of neuropsychiatric and neurological disorders. This journal is indexed on PubMed Central, the 'PsycINFO' database and CAS, and is the official journal of The International Neuropsychiatric Association (INA). The manuscript management system is completely online and includes a very quick and fair peer-review system, which is all easy to use. Visit http://www.dovepress.com/testimonials.php to read real quotes from published authors.
|
2016-05-12T22:15:10.714Z
|
2014-07-23T00:00:00.000
|
{
"year": 2014,
"sha1": "2f61f0b90092c5efddb7350d6a637e683b511182",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=20941",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bfa116a5e8157008483052533a96eb8b62cbef95",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
208184603
|
pes2o/s2orc
|
v3-fos-license
|
A survey of surface imaging use in radiation oncology in the United States
Abstract Surface imaging (SI) has been rapidly integrated into radiotherapy clinics across the country without specific guidelines and recommendations on its commissioning and use aside from vendor‐provided information. A survey was created under the auspices of AAPM TG‐302 to assess the current status of SI to identify if there is need for formal guidance. The survey was designed to determine the institutional setting of responders, availability and length of its use, commissioning procedures, and clinical applications. This survey was created in REDCap, and approved as IRB exempt to collect anonymized data. Questions were reviewed by multiple physicists to ensure concept validity and piloted by a small group of independent physicists to ensure process validity. All full members of AAPM self‐identified as “therapy” or “other” were sent the survey link by email. The survey was active from February to March 2018. Of 3677 members successfully contacted, 439 completed responses; the summary of these responses provides insight on current surface imaging clinical practices, though they should not be assumed to be representative of radiation oncology as a whole. Results showed that 53.3% of respondents have SI in their clinics, mostly in treatment rooms, rarely in simulation rooms. Half of those without SI plan on purchasing it within 3 years. Over 10% have SI but do not use it clinically, 36.8% classify themselves as “expert” users, and 85.5% agreed/strongly agreed that SI guidelines are needed. Initial positioning with SI is most common for breast/chestwall and SRS/SBRT treatments, least common for pediatrics. Use of SI for intra‐fraction monitoring follows a similar distribution. Gating with SI is most prevalent for breast/chestwall (66.0%) but also used in SBRT (33.0%), and non‐SBRT lung/abdomen (<30%) treatments. SI is a rapidly growing technology in the field with widespread use for several anatomic sites. Guidelines and recommendations on commissioning and clinical use are warranted.
ability to perform these tasks without the use of ionizing radiation.
The technical characteristics and a description of how current commercially available surface imaging systems work have been described elsewhere. 2 In brief, these systems can monitor the patient's position in real time using optical light and compare it to a given reference from either the external contour of the planning CT or an SI system-acquired capture. Typical applications of these systems, based on current literature, mainly include open-mask stereotactic radiosurgery (SRS) procedures and breast radiotherapy, particularly deep inspiration breath-hold (DIBH) treatments for leftsided breast patients. 1,3 Literature describing SI use for other sites is more limited. While it is evident that this technology is being increasingly used in radiation oncology, its prevalence, implementation workflows, or scope of use in the field have not been described to date. An electronic survey was conducted in an effort to compile this information.
| ME TH ODS
A questionnaire was designed to assess the extent of use of SI for radiotherapy in the United States and gain more insight on its implementation in the field. Questions were crafted to inquire about the availability of this technology in clinics, existing commissioning procedures, and its role in current clinical practice regarding both its applications and common treatment sites of use (see Table S1). This survey was deemed IRB exempt after institutional board review at The University of Chicago as all the responses were anonymized and aggregated and could not be related back to the participants. The survey, along with text outlining its purpose, length, participation consent, and anonymity of results, was sent out via email to all full members of the American Association of Physicists in Medicine who self-identified as specializing in "therapy" or "other" and had a mailing address in the U.S. Both the survey questions and the text used in the survey invitation are listed in Table S1. The survey was active from February to March of 2018.
Study data were collected and managed using REDCap (Research Electronic Data Capture) electronic data capture tools hosted at The University of Chicago. 4 REDCap is a secure, web-based application designed to support data capture for research studies, providing (a) an intuitive interface for validated data entry; (b) audit trails for tracking data manipulation and export procedures; (c) automated export procedures for seamless data downloads to common statistical packages; and (d) procedures for importing data from external sources.
Survey questions were organized into two sections. The first one was to determine the institutional setting of the responder, the availability and duration of use of the technology, and the commissioning process performed upon initial acquisition of the system(s). The second section focused on the clinical uses of surface imaging, including applications (e.g., initial positioning, intra-fraction monitoring, gating) and types of treatment (e.g., anatomical site and type of procedure conventional, stereotactic, pediatric). All questions were reviewed by more than ten physicists for concept validity. The survey was tested by a small cohort of physicists independent from the survey creators to ensure response process validity prior to deployment for data collection. The survey length was intended to be brief: 10 min for participants who had surface imaging and 2 min for those who did not.
| RESULTS
There were 205 undeliverable emails of the 3882 emails originally sent. We received 509 responses, 439 were complete. Only complete responses were used. The overall response rate was 13.8% (from self-identified "therapy" and "other" AAPM members). The response rate from "therapy" only members was 14.7%. reported that their clinic plans to purchase an SI system within the next 1-3 years. Out of the respondents with SI in their clinics, most (59.4%) report their SI equipment was installed on or after 2015, and 10.7% indicated that although they have SI at their facilities, it is not being used clinically. Only 36.8% of reported users classify their level of expertise as "expert," and 85.5% of all respondents with SI agree to strongly agree that guidelines for the clinical use of surface imaging are necessary.
3.A | Surface imaging for initial positioning
Participants who indicated that SI has been clinically implemented in their department (n = 209) were asked to elaborate on their use of SI for initial positioning. This included specifying both the type of reference surface being used for this purpose (DICOM surface from the planning CT acquired during simulation, camera-acquired surface at simulation, or camera-acquired in the treatment room) and the treatments and sites for which SI was being employed for initial setup. Survey responses showed that the majority of users, 63.2%, perform initial positioning based on a single type of reference surface for every fraction of a patient's treatment (DICOM surface). site/type are compiled in [ Fig. 1(a)]. The frequency of use of SI for initial positioning is highest for breast (routinely: 64.9%), SRS (routinely: 50.5%), and SBRT (routinely: 42.3%). It is rarely used for pediatric patients (never: 64.8%), GU/Prostate (never: 62.8%), and other pelvic or abdominal treatments (never: 65.8%). Note that respondents were given the option of selecting "Not Applicable" for treatment sites/types that are not treated in their clinic (see Table S1).
3.B | Surface imaging for intra-fraction monitoring
Participants were also asked about their use of SI for intra-fraction
Intra-FracƟon Monitoring
RouƟnely Occasionally Rarely Never Don't know
(b)
F I G . 1. Use of surface imaging for initial patient positioning (a) and intra-fraction monitoring (b) by site/treatment type. "Other" includes abdominal treatments (liver, pancreas, etc.), non-GU/prostate pelvis treatments, primary brain, and electron treatments. Note the "n" for each site/treatment type is listed in the x-axis. This number differs from 209 (total number of respondents using SI clinically) because some of them indicated these categories as "NA -Not Applicable." NA responses have been excluded from these results.
select the reference surface type for intra-fraction monitoring depending on the treatment site or patient. The breakdown of the use of SI for intra-fraction monitoring per treatment site/type is shown [ Fig. 1(b)]. Similar to initial positioning, the frequency of SI for each site/treatment type is listed in the x-axis. This number differs from the n in Fig. 1 because only respondents using SI for initial positioning of the indicated site/ treatment type were given these questions. This number is further decreased in graph b of this figure because some respondents indicated that the use of bolus for these treatments is "NA -Not Applicable" in their clinic.
Respondents were also asked what internal imaging verification tools, if any, they used to confirm the respiratory gating position given by SI. Figure 3 summarizes those results. Respondents were allowed to select more than one modality per site, if applicable.
Except for breast/chest wall treatments, which are verified with planar MV imaging slightly more frequently than with planar kV imaging (63.8% and 61.6%, respectively), the most common modalities for verification are planar kV imaging and CBCT/CT on rails. Volumetric imaging (CBCT/CT on rails) is more widely used than planar kV imaging for SBRT (78.3% vs 53.6%), non-SBRT lung (74.5% vs 61.8%), and non-SBRT abdomen (73.8% vs 54.8%).
| DISCUSSION
The use of SI in radiotherapy is increasing rapidly and it is important to understand how and for what purpose it is being used. This can help characterize the current status of the technology and identify areas of need for official guidelines and recommendations for safe application.
To the authors' knowledge, this is the first survey ever published on this topic. Although this survey has a low response rate, this limitation is not uncommon in such studies in medical physics. 5,6 Due to the anonymous nature of the survey, no specific information was collected on the respondents' employers, and over 75% of This assessment is reinforced by the fact that the majority of the participants with surface imaging (85.5%) agree/strongly agree with the need for national recommendations on the use of these systems.
A large proportion of respondents with SI report having the same vendor (see Table 1), which was the first vendor to offer this technology in the U.S. market. Responses also indicate that surface imaging is most commonly found in treatment vaults (98.7%) rather than simulation rooms (33.6%). Since surface imaging systems have the capability of gating the treatment beam based on the patient's position being in or out of tolerance, participants were asked if this feature was available and clinically used. A total of 57.7% of respondents with surface imaging in their clinic, including photon and proton treatment machines, reported using the beam gating capability.
The results collected in this study show that surface imaging is most commonly used for breast (with and without breath-hold) and SRS treatments, which is reflective of the current body of literature published on this technology 1,3 . In addition, these two sites are expected to have a robust surface-to-target positional correlation which makes them ideal candidates for SI use. Respondents who indicate the use of SI for initial positioning, typically also use it for intra-fraction monitoring. As seen in Fig. 1
| CONCLUSION S
Surface imaging is an attractive imaging technology due to its ability to aid in initial positioning, intra-fraction monitoring, and beam gating without the use of ionizing radiation. Although our results cannot be generalized due to the limited response rate of the survey, they present the medical physics community with an overview of current uses and practices in the field. Currently, our results indicate that the majority of clinical applications are for breast (with and without DIBH), SRS, and SBRT treatments. Lower rates of use were reported for other treatments such as for pediatric and lung cancer. Onequarter of respondents with SI capabilities reported no or slow clinical implementation. As the rates of adoption are expected to increase, and different techniques for commissioning and implementation may introduce systematic errors into patient setup and monitoring, national guidelines on the clinical implementation of surface imaging are needed to expedite and standardize its use.
ACKNOWLEDG MENTS
We thank the members of AAPM TG-302 for their feedback in com-
SUPPORTING INFORMATION
Additional supporting information may be found online in the Supporting Information section.
|
2019-11-21T14:04:54.823Z
|
2019-11-19T00:00:00.000
|
{
"year": 2019,
"sha1": "30e73fa372ec6fc35aa474c318d68d6236fde7c4",
"oa_license": "CCBY",
"oa_url": "https://aapm.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/acm2.12762",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c5fd4e75b447a729076b8af64b7b0d7baea04be3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
15389657
|
pes2o/s2orc
|
v3-fos-license
|
Comparison of Tracheal Intubation Using the Storz’s C-Mac D-bladeTM Video-Laryngoscope Aided by TruflexTM Articulating Stylet and the PortexTM Intubating Stylet
Background: Tracheal intubation using Storz’s C-Mac D-bladeTM videolaryngoscope is associated with difficult negotiation of the tracheal tube into the glottis due to steep angulation of its blade. Objectives: In this study, we hypothesized that TruflexTM articulating stylet with its ability to dynamically tailor the ETT shape to patients’ oropharyngeal anatomy would be better suited to the D-blade angulation and ease tracheal intubation compared to PortexTM intubation stylet. Patients and Methods: Following approval by the Ethical Issues Committee and informed consent, 218 ASA I and II patients of either sex were enrolled in this interventional, single-blind, randomized controlled trial. Tracheal intubation was performed following a uniform general anesthetic technique using the Storz’s C-Mac D-bladeTM videolaryngoscope aided by either TruflexTM articulating stylet or the PortexTM intubation stylet by an experienced anesthesiologist. The outcome measures included success or failure to intubate in the first attempt, total intubation time, hemodynamic disturbances, trauma if any and user satisfaction. Results: The number of patients in whom intubation was successful in the first attempt was significantly higher by using Truflex™ articulating stylet (99.1%) compared to PortexTM intubation stylet (90.0%; P-Value = 0.003). User satisfaction grade was significantly better while using TruflexTM articulating stylet (8.5 ± 0.88) compared to the PortexTM intubation stylet (8.23 ± 0.99; P-Value = 0.035). We did not observe any significant difference in total intubation time, hemodynamic disturbances or trauma. Conclusions: Storz’s C-Mac D-bladeTM videolaryngoscope provides grade I Cormack and Lehane’s glottic view in 99.1% patients. First attempt successful tracheal intubation and user satisfaction significantly improved by TruflexTM articulating stylet compared to the PortexTM intubation stylet.
Background
The new generation indirect video laryngoscopes provide an improved view of the glottic opening (1)(2)(3)(4). This is essentially because the design of videolaryngoscope blades, especially the StorzC-Mac D-blade TM (Karl Storz, Tuttlingen, Germany) and Glidescope TM (Verathon Medical, Bothell, WA), is such that they have a steep angulation of more than 60°. This angulation obviates the need for alignment of oral, pharyngeal and laryngeal axes for viewing the glottis. This design of videoscope blades leads to minimal or no pressure exerted on the upper airway structure during video laryngoscopy (5). Unfortunately, enhanced video blade angulation leads to difficulty in passage or navigation of the endotracheal tube (ETT) towards the larynx around the steep blade angula-tion despite adequate visualization of the glottis (6,7). Pre-shaping the ETT with a rigid malleable stylet is recommended (8).
However, we are still handicapped in clinical practice to precisely predict the curvature needed by advancing ETT towards the glottis using videolaryngoscope. We have observed that on occasions the ETT-stylet assembly has to be removed for reshaping its curvature prior to a new attempt using videolaryngoscope. This predisposes patient to the risk of aggravated hemodynamic responses and possibility soft tissue trauma. Truflex TM articulating stylet (TAS) [Truphatek International Ltd, Netanya, Israel] has an easily controllable flexible tip, which allows upward movement of 30 to 60° (Figure 1).
Objectives
We hypothesized that using TAS as a dynamic aid to tailor the ETT shape in patient's oropharynx would enhance first attempt tracheal intubation, shorten the intubation time, attenuate hemodynamic response and reduce the possibility of soft tissue trauma compared to conventional Portex TM intubation stylet (PIS) [Smiths Medical ASD, Inc. Norwell, MA, USA] while using Storz C-Mac D-blade TM videolaryngoscope.
Patients and Methods
Following approval by the ethical committee of Khoula hospital (Muscat) and trial registration (ISRCTN57679531), informed consent was obtained from 218 ASA I to II patients of either sex over a 6-month period for this interventional, single blind, randomized controlled trial. Patients with known airway pathology, past surgery of oropharynx, or immobilized cervical spine were excluded from the study. All patients underwent general anesthesia for a variety of elective surgical procedures. Tracheal intubation was performed by anesthesiologists well versed with the use of the Storz's C-Mac D-blade TM videolaryngoscope. Patients were intubated using PIS and labeled as PIS Group (n = 110) or TAS as TAS Group (n = 108).
All patients were assessed for adequacy of airway with a composite anticipated difficulty airway (ADA) score of routinely used parameters (Table 1). Based on this score, stratified randomization of patients was performed into easy airway or difficult airway strata when the ADA score was ≤ 6.0 or > 6.0 respectively. The detailed study protocol including plan of statistical analysis for this study has been published previously (9).
All patients were uniformly premedicated with oral 0.1 mg/kg midazolam about an hour prior to induction of anesthesia. A uniform induction technique with propofol 2.0 -2.5 mg/kg and muscle relaxation with either cisatracurium 0.1 mg/kg or rocuronium bromide 0.6 mg/kg was used, as evident by loss of all four responses using a peripheral nerve stimulator. Patients also received 1.5 µg/ kg fentanyl for induction of anesthesia. Primary efficacy endpoint was the success or failure to intubate in the first attempt and total intubation time. An attempt was counted if the laryngoscope or ETT needed to be removed for re-oxygenation (drop in oxygen saturation by 5%) or for reshaping the ETT in PIS group. The total tracheal intubation time was the sum of glotticoscopy time (from videolaryngoscope blade insertion between the teeth to the best laryngeal view) and ETT negotiation time (from receiving the stylet ETT in laryngoscopist's hand to cross the black line on the ETT just beyond the vocal cord). A maximum of three tracheal intubation attempts was permitted, after which technique was considered as failure and alternative method was used to secure the airway. Only the successful tracheal intubation time was counted for the purpose of analysis. In addition, hemodynamic disturbances (blood pressure and pulse rate) were recorded before intubation, 1 minute and 5 minutes postintubation; dental and airway trauma (present or absent) and trauma to the soft tissue were assessed as secondary safety endpoints. Furthermore, we analyzed the intubation difficulty score (IDS) between the two groups using intubation difficulty scale of Adnet et al. (10) and user satisfaction score. We used verbal analogue scale to note user satisfaction score (VAS = 1 and 10 were most unsatisfying and satisfying experiences respectively by anesthesiologist while performing C-Mac videolaryngoscopy and tracheal intubation).
Results
Total of 218 patients were finally recruited with 1:1 randomization into PIS (n = 110) or TAS Group (n = 108). There were no statistical differences (P > 0.05) in the possible confounding factors assessed as shown in Table 2. None of 218 patients belonged to difficult strata group where the ADA score had to be more than 6.
The number of patients in whom intubation was successful in the first attempt was significantly higher in the TAS group (99.1%) compared to PIS group (90.0%; P-Value = 0.003) ( Table 3). In 12 patients, more than one attempt at tracheal intubation was needed. Of these, 11 patients were intubated with PIS in contrast to only one first attempt failure by TAS.
There were no significant differences between the two groups in total intubation time with two components of glotticoscopy and ETT negotiation time (Table 4). We considered only the successful intubation time in this study. ETT negotiation time was shorter in patients of TAS group compared to PIS Group by a mean of just over 2 seconds. However, this difference was not statistically significant (P-Value = 0.074). We observed significantly better IDS in patients intubated using TAS compared to PIS (P = 0.021).
Percentage change in hemodynamic parameters at 1 and 5 minutes post-intubation from immediate pre-intubation value between the two groups were slightly less in the TAS group compared to PIS group at the same time interval; however, these differences remained statistically insignificant (P-value > 0.05) over the study period (Table 5). As indicated in Table 6, user satisfaction grade was significantly better in TAS group (8.5 ± 0.88) compared to PIS group (8.23 ± 0.99; P-Value = 0.035).
The rate of adverse events in the form of dental/airway trauma was comparable in the both groups as shown in Table 6.
Discussion
The findings of this study on 218 randomized patients demonstrated that the use of Storz C-Mac D-blade TM videolaryngoscope was associated with Cormack and Lehane's grade I view in 99.1% of patients having an ADA score ≤ 6. We also noted first attempt successful tracheal intubation in 99.1% of patients in whom tracheal intubation was aided by TAS compared to 90% using PIS.
In this series, ADA Score showed no statistical difference in patients of either group. This was reflected in similar Cormack and Lehane' laryngeal view with Storz C-Mac Dblade TM videolaryngoscope in the both groups. During glotticoscopy, Cormack and Lehane's grade 1 was noted in 216 (99.1%) patients with one patient each showing grades 2 and 3. Similar findings were reported by other investigators while using Storz C-Mac D-blade TM videolaryngoscope (11). A good glottic view with the Storz C-Mac D-blade TM videolaryngoscope is understandable as the Dblade has an increased angulation and the video camera is distally positioned on the blade. We observed failed tracheal intubation in the first attempt in 12 patients. Of these, 11 patients (10.0%) belonged to the PIS group. In all these 11 patients, failed first attempt tracheal intubation was due to difficulty in ETT negotiation towards the glottis needing reconfiguration of the stylet. There was one patient whose trachea could not be intubated despite 3 attempts at reconfiguration of the PIS. This was a female patient with a short and thick neck and showed Cormack and Lehane's grade 3 with an unliftable epiglottis. We did not attempt tracheal intubation using TAS as a crossover technique as we did not have ethical committee approval for this. Proseal laryngeal mask was successfully used in this patient for her surgical procedure. In contrast, there was only one patient needing a second attempt at successful tracheal intubation in the TAS group. This patient had slightly restricted mouth opening of 3.2 cm and developed desaturation during the first attempt. Thus, this study found that a good laryngeal view does not always ensure first attempt successful tracheal intubation with Storz C-Mac D-blade TM videolaryngoscopes using the standard malleable stylet.
This study demonstrated that once the curvature of the PIS suits the oropharyngolaryngeal anatomy, there is no significant difference between the ETT negotiation time using either PIS or TAS. We observed first attempt success rate in 90.0% of our patients using PIS. This finding is similar to that observed by Kilicaslan et al. (12) who noted that ETT could be placed in the trachea on the first attempt in 86% and on the second attempt in 14% patients using Storz C-Mac videolaryngoscope after initial failure with conventional Macintosh laryngoscope. Time to achieve optimal laryngoscopic view of the glottis and total intubation time achieved in this study were similar to those reported by others (13).
Understandably, the IDS in this study was better when using TAS compared to PIS since its use was associated with a significantly improved first attempt tracheal intubation and an insignificantly shorter time to achieve intubation. There are no other studies with similar findings. We noted that the percentage change in hemodynamic parameters at 1 and 5 minutes post-intubation from immediate pre-intubation value between the two groups were slightly higher in the PIS group compared to TAS group at the same time interval. This may be attributed to the greater number of repeat attempts at tracheal intubation in the PIS group, which was observed in 10% of these patients. However, these differences were not statistically significant.
In this study, users of Storz C-Mac D-blade TM videolaryngoscope expressed their satisfaction with overall tracheal intubation experience based on quality of laryngeal view and ease of passage of tracheal tube using TAS or PIS on a scale of 1 -10. We observed a significantly better user satisfaction in TAS group compared to PIS group.
This study had two major limitations. First, ethical issues committee did not give us permission for use of cross over method in shaping the ETT with PIS or TAS in case of failure with either of these two stylets. It is quite possible that change over TAS in the single failed tracheal intubation patient, while using PIS would have given a different result. Second, the anesthetist performing tracheal intubation could not be blinded as it was not possible to conceal the nature of stylet in use. However, to reduce the investigator bias, the intubation times were measured by an independent observer who was not part of the study. Furthermore, the data was analyzed by a statistician who was blinded to treatment allocation.
In conclusion, this study showed that a grade I Cormack and Lehane's glottic view is observed in almost all patients with significantly improved first attempt successful tracheal intubation with the aid of TAS compared to PIS while using Storz C-Mac D-blade TM videolaryngoscope.
Footnote
Authors' Contribution:Aida Al-Qasmi: designed and conducted the original study and analyzed the data. Waffa Al-Alawi: designed and conducted this original study. Azharuddin Mohammed Malik designed the study and analyzed the data. Rashid Manzoor Khan conceptualized the original study design, reviewed the analysis of the data and approved the final manuscript. Naresh Kaul designed the study, reviewed the analysis of the data, and approved the final manuscript.
|
2016-10-06T20:11:54.042Z
|
2015-12-01T00:00:00.000
|
{
"year": 2015,
"sha1": "2eada639968f52e3ea5810a353e60a0f1af8b2fc",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc4688814?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "bae6292ee74b66868ef0c4c846a15a83af0cd480",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
11439165
|
pes2o/s2orc
|
v3-fos-license
|
GRIN2A mutations cause epilepsy-aphasia spectrum disorders
Epilepsy-aphasia syndromes (EAS) are a group of rare, severe epileptic encephalopathies of unknown etiology with a characteristic electroencephalogram (EEG) pattern and developmental regression particularly affecting language. Rare pathogenic deletions that include GRIN2A have been implicated in neurodevelopmental disorders. We sought to delineate the pathogenic role of GRIN2A in 519 probands with epileptic encephalopathies with diverse epilepsy syndromes. We identified four probands with GRIN2A variants that segregated with the disorder in their families. Notably, all four families presented with EAS, accounting for 9% of epilepsy-aphasia cases. We did not detect pathogenic variants in GRIN2A in other epileptic encephalopathies (n = 475) nor in probands with benign childhood epilepsy with centrotemporal spikes (n = 81). We report the first monogenic cause, to our knowledge, for EAS. GRIN2A mutations are restricted to this group of cases, which has important ramifications for diagnostic testing and treatment and provides new insights into the pathogenesis of this debilitating group of conditions.
pathogenic role of GRIN2A in 519 epileptic encephalopathy probands with diverse epilepsy syndromes. We identified four probands with GRIN2A variants that segregated with the disorder in their families. Strikingly, all four families presented with EAS, accounting for 9% of epilepsy-aphasia cases. We did not detect pathogenic variants in other epileptic encephalopathies (n=475), nor in 81 probands with benign childhood epilepsy with centrotemporal spikes. We report the first monogenic cause for EAS. GRIN2A mutations are restricted to this group of patients, with important ramifications for diagnostic testing and treatment, and novel insights into the pathogenesis of this debilitating group of conditions. The epileptic encephalopathies are a severe group of disorders characterized by seizures and abundant epileptiform activity that contribute to cognitive and behavioral impairment 1 . The epileptic encephalopathies comprise a range of electroclinical syndromes with characteristic ages of onset, clinical and EEG manifestations. Two syndromes with overlapping manifestations have the remarkable EEG signature of continuous spike-wave during slow wave sleep (CSWS) in which the non-REM sleep EEG shows virtually continuous (≥85%) high voltage bilateral slow spike wave activity that largely remits on awakening. In Landau-Kleffner syndrome (LKS), children who were previously normal or isolated language delay present with an acquired epileptic aphasia; focal motor seizures occur in 70% of cases and are usually easily controlled. In contrast, in the syndrome of epileptic encephalopathy with continuous spike-wave during slow wave sleep (ECSWS), prior development is delayed in half the children and refractory epilepsy with multiple seizure types is usual. Regression is more global with language, behavior and motor impairment 2 . MRI brain studies are often normal or may show a malformation of cortical development such as perisylvian polymicrogyria.
In clinical practice, there are patients who do not meet the criteria on EEG or clinical grounds for LKS and ECSWS, usually because their EEG abnormalities do not occupy 85% of slow sleep, yet they have significant language or learning difficulties which may fluctuate in severity. There is debate whether <85% of bilateral epileptiform activity in non-REM sleep is diagnosable as CSWS or whether it should be regarded as an intermediate epilepsyaphasia disorder (IEAD) 3 . These disorders can be conceptualized as falling along a spectrum with LKS, ECSWS at the severe end, IEAD in the middle and benign childhood epilepsy with centro-temporal spikes (BECTS) at the mild end of the spectrum 3 . BECTS is the most common focal epilepsy syndrome in childhood and occurs in normal children who present with focal motor rolandic seizures. The EEG shows unilateral or bilateral centro-temporal spikes that are activated by sleep but do not show the almost continuous bilaterally synchronous pattern of CSWS, and the children do not show cognitive decline. The presence of subtle oral dyspraxia has been noted in some patients with BECTS 4 .
Until recently there has been scant evidence for a genetic etiology for the disorders of the epilepsy-aphasia spectrum. To date, only four families have been reported with monogenic inheritance of rolandic epilepsy and speech or language difficulties. We reported an autosomal dominant family in 1995 with the syndrome of autosomal dominant rolandic epilepsy with speech dyspraxia (ADRESD) 5 . An additional three-generational family with a strikingly similar phenotype was reported more recently 6 . Finally a family with dysphasia and epilepsy with generalized and focal manifestations was reported 7 . A causal gene has not been implicated in these families. Conversely, a fourth family, presenting with X-linked rolandic epilepsy, oral and speech dyspraxia and intellectual disability (ID) was identified with a gain-of-glycosylation SRPX2 mutation 8 . Besides a SRPX2 mutation in an unrelated proband with perisylvian polymicrogyria and rolandic seizures and female relatives with mild ID, no additional SRPX2 variants in epilepsy-aphasia phenotypes have been described.
Clinical genetic studies of probands with BECTS or EAS provide little support for genes of major effect. Investigation of relatives up to three degrees of relatedness to probands with BECTS or the epilepsy-aphasia spectrum suggest that complex inheritance is most likely with febrile seizures being the most common phenotype in relatives of probands 3, 9 . While there has been strong contention that the epilepsy-aphasia syndromes have an immune basis, partly due to their resolution with high dose steroids, a genetic etiology is supported by the rare familial forms described. Furthermore, recent evidence for a genetic etiology has come from copy number variant (CNV) studies. An excess of rare CNVs was noted in a cohort of LKS and CSWS probands 10 , including a single LKS proband with a 16p13 deletion containing one gene, GRIN2A (NM_000833.3) 10 . Furthermore, three children with complex dysmorphic phenotypes were reported with 16p13 deletions that included GRIN2A 11 . GRIN2A encodes the NR2A subunit of the N-methyl-D-aspartate (NMDA) receptor, a neurotransmitter-gated ion channel that mediates excitatory transmission in the mammalian brain, making it an attractive candidate for epileptogenesis. GRIN2A mutation screening in 127 probands with epilepsy or an abnormal EEG and/or ID detected two pathogenic mutations: a nonsense mutation segregating with epilepsy or an abnormal EEG in three family members and a de novo missense mutation in a patient with severe early-onset epileptic encephalopathies 12 . Furthermore, a missense mutation was recently reported in a single proband in a large exome sequencing cohort with ID 13 . While these observations strongly support a role for GRIN2A in epilepsy and ID, no clear genotype-phenotype correlations have emerged. Therefore, we sought to delineate the phenotypic spectrum of GRIN2A mutations by screening a large cohort of patients with epileptic encephalopathy.
We performed high-throughput sequence analysis of GRIN2A in 519 probands with a range of epileptic encephalopathies ( Table 1). As part of a larger study 14 we performed targeted gene capture of 18 genes associated with epilepsy, including GRIN2A. Briefly, we resequenced all exons and flanking 5 base pairs using molecular inversion probes (MIPs), highly multiplex PCR and next generation sequencing as described previously with minor exceptions (online methods) 15 . Using this approach we achieved, on average, 98% coverage (>25X) across GRIN2A for all probands.
We identified four probands with GRIN2A mutations, each of which was confirmed by Sanger sequencing. Segregation analysis in additional family members showed that each variant segregated in an autosomal dominant manner (Table 2, Figure 1). These GRIN2A variants are not present in 6500 control exomes (see Resources). Two families (A, C) carried the same c.1005-1C>T variant, affecting the highly conserved donor splice site. Genotyping of microsatellite makers and a rare SNV flanking this GRIN2A mutation revealed an identical haplotype in these families, suggesting a common founder mutation (Supplementary Figure 1). The c1005-1C>T variant was predicted in silico to cause skipping of exon four during pre-mRNA splicing, resulting in the removal of 593 exonic nucleotides from the mature transcript and thus a frameshift mutation, Phe139Ilefs*15 (predicted) (see Supplementary Table1). We tested for the presence of a rare exonic SNV (rs61753382), encompassed by the common haplotype in affected individuals, in the RNA transcripts of three affected individuals from both families. We detected monoallelic expression of the wild-type variant, suggesting nonsense mediated decay of the mutant transcript (Supplementary Figure 2).
We detected a p. Met1Thr variant in family B. The alteration of this translation start codon is likely to have detrimental effects on GRIN2A protein synthesis, resulting in either complete absence of product due to failure of translation initiation at the start codon, or a truncated protein stemming from translation initiation at an alternate start codon. We were unable to test this as proband RNA was unavailable.
Finally we describe a p.Thr531Met variant that affects a highly conserved residue (as predicted by high GERP and Grantham scores), that is predicted to be probably damaging by Polyphen2 and SIFT (Table 2). This variant is located in the extracellular ligand-binding domain of NR2A. Specific sites within this domain are known to influence gating and kinetic properties of NMDA receptors 16,17 . We assessed the effect of the p.Thr531Met mutation on NR2A function by co-expression with NR1 in COS-7 cells to form a mutant heteromeric NMDA receptor. A resultant shift in NMDA receptor kinetics was observed by single channel recordings with a four-fold increase in mean open time of the mutant channels (36.7±2.5 ms ; n=2299 channel events), as compared to the wild-type channels (9.1±0.2 ms; n=6715 channel events) (P<0.0001, Mann Whitney test, two-tailed) (Figure 2). This novel variant displayed similar clinical and functional consequences to missense mutations in the same domain in a parallel study by Lesca and colleagues (this issue).
The c1005-1C>T and Met1Thr variants likely cause disease as a result of haploinsufficiency of the NR2 subunit of the NMDA receptor, possibly by aberrant NMDA receptor composition or distribution in the brain. Furthermore, we show that the p. Thr531Met variant has a profound effect on NMDA receptor kinetics. Given the pathogenic effect of these mutations and their segregation with the disorder, we conclude that the GRIN2A mutations ascribe causality in these families.
Remarkably, all four GRIN2A positive families presented with EAS, yielding a 9% (4/44) mutation rate in patients with this group of EE. No additional pathogenic variants were detected in the remaining epileptic encephalopathy phenotypes (Table 1). In the 40 remaining EAS patients, we performed array-CGH using a custom microarray with probes spanning GRIN2A at an average density of one probe every ~350bp. No copy number alterations were detected.
Given that BECTS lies at the mild end of the EAS, we next screened 81 probands with BECTS for GRIN2A variants using Sanger sequencing. No additional pathogenic variants were identified.
There were 16 subjects with GRIN2A mutations. Segregation was perfect in the 7 affected members of the original family with autosomal dominant rolandic epilepsy with speech dyspraxia (Family A, Fig. 1) 5 . The same mutation was found in a father-son pair with ECSWS (Family C). Interestingly the GRIN2A mutations were associated with a range of EAS phenotypes including LKS, ECSWS and IEAD (Table 3). All individuals with LKS and ECSWS showed CSWS on EEG studies. Individuals with IEAD had not had a sleep EEG performed to detect CSWS. Affected family members had a complex phenotype including epilepsy (14/16), speech and language difficulties (16/16). While intellectual disability occurred in 6/16 mutation carriers, a further two were of borderline intellect (Supplementary Table 2).
Previous cases implicating GRIN2A have not identified a consistent epilepsy phenotype but have shared features with our cases. Four cases with 16p13 microdeletions including GRIN2A have been reported; one had LKS 10 . The remaining three were more complex with dysmorphic features and moderate to severe ID; two were non-verbal and only one walked independently 3 . All had seizures; one had atypical benign partial epilepsy, which is part of the EAS. One had rolandic seizures without regression and EEG studies were not available. In two patients, eyelid myoclonias were noted which is somewhat atypical for EAS. Two had an EEG pattern suggestive of CSWS. In another study, a three generation family with a translocation disrupting GRIN2A was associated with childhood and adolescent onset convulsions in the setting of learning difficulties or ID. There was no suggestion of CSWS on their EEG studies and no epilepsy syndrome was determined.
We conclude that GRIN2A mutations are highly predictive of EAS that include LKS, ECSWS and IEAD. Furthermore, in a separate study Lesca and colleagues report GRIN2A mutations in 20% of LKS, ECSWS and atypical rolandic epilepsy with speech impairment confirming the importance of GRIN2A to the EAS (this issue). Of note, we did not detect any GRIN2A variants in 475 probands with other epileptic encephalopathy phenotypes, or in 81 probands with BECTS. Furthermore, in a large series (n= 1703) of autism probands no GRIN2A mutations were identified 18 . These results demonstrate that the genetic etiology of EAS may well be distinct, an observation that balks the current trend towards an overlapping etiology for neurodevelopmental disorders. We hypothesize that altered NMDA receptor activity due to GRIN2A haploinsufficiency or missense mutations results in aberrant ion flux and disruption of the downstream signaling cascade. The role of NMDA receptor aberration and its potential role in the corticothalamic network disrupted in slow sleep will be an important area of future research. This study is the first to detect a monogenic cause for epilepsy-aphasia syndromes with a mutational rate of 9%. These results strongly suggest that GRIN2A diagnostic testing is warranted in patients with epilepsy-aphasia and will enhance prognostic and genetic counseling for families.
Data analysis and variant calling
Raw read processing and alignment was performed as previously 14 . However variant (single nucleotide and indel) calling and filtering was performed using the Genome Analysis Tool Kit (GATK) (see URLs version). Variants that did not adhere to the following criteria were excluded from further analysis: allele balance >0.70, QUAL<30, QD<5, coverage<25X, clustered variants (window size-10) and variants in homopolymer runs (5 bp). Variants were annotated with Seattle seq (see URLS and version) and the ESP6500 dataset (see URLs) used to assess variant frequency in control population. PCR and Sanger sequencing were conducted according to standard methods as described previously.
Array CGH
We performed array CGH using a custom designed 8plex microarray [Agilent], designed to detect copy number alterations in known epilepsy genes. GRIN2A was covered at a density of one probe every ~350bp. All experiments were performed as per the manufacturer's instructions and data analysis conducted using Genomic Workbench [Agilent].
Genotyping
We performed genotyping in all available affected and unaffected members of Family A and B, who carried the c.1005-1C>T variant. We selected three microsatellite markers, D16S404, D16S3126 and D16S407, spanning a 0.56Mb interval across GRIN2A. Fluorescently labeled PCR products were analyzed on an ABI3100 genetic analyzer, and allele size ranges determined with the GS500LIZ size standard [Applied Biosystems] using the PeakScanner V2.0 software [Applied Biosystems]. Furthermore, we genotyped all family members for the rare exonic GRIN2A variant (rs61753382) using Sanger DNA sequencing.
RNA transcript analysis
RNA was isolated from whole blood of affected family members and controls using the PAXgene blood RNA kit [PreAnalytiX]. cDNA synthesis was performed using 1ug of DNA using the iScript Reverse Transcription Supermix kit [Bio-Rad]. Nested PCR and Sanger sequencing was performed for GRIN2A RNA transcripts analysis in three affected family members from both c1005-1C>T carrier families. We assessed the presence of the rs6173382 variant, the minor allele was linked to the c.1005-1C>T mutation (see Supplementary Table 3 for primer pairs).
Constructs and transfections
NR1 and NR2A constructs were commercially purchased [Genecopeia]. Site-directed mutagenesis [Agilent Technologies] was used to generate the mutant NR2A (p.Thr531Met) construct, using the following 5′-gtgccctttgtggaaatgggaatcagtgtcatgg and corresponding reverse complement primers. All wild-type and mutant constructs were verified by Sanger sequencing. Monkey kidney fibroblast like COS-7 cells were seeded in six-well plates (10 5 cells/well) one day before transfection. Magnetofection of NR1 and NR2A constructs (1:3 ratio) was performed using the Magnetofectamine transfection kit [OzBiosciences, France]. The presence of the NR1 and NR2A (wt and mutant) subunits at the plasma membrane was verified by immunocytochemistry experiments (data not shown). Phenotypes and segregation of GRIN2A mutations in four families with epilepsy-aphasia syndromes Table 2 Pathogenic GRIN2A mutations in four epilepsy-aphasia families.
|
2016-09-14T22:35:13.896Z
|
2013-08-11T00:00:00.000
|
{
"year": 2013,
"sha1": "be879cc21ed73e313c2652018f2186963d75df68",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc3868952?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "9d59e003c3eacd6c49c1149e7f0989e856f5790c",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
55380088
|
pes2o/s2orc
|
v3-fos-license
|
Optimum design of a CCHP system based on Economical , energy and environmental considerations using GA and PSO
Article history: Received January 15 2017 Received in Revised Format April 1 2017 Accepted April 2 2017 Available online April 4 2017 Optimum design and control of a Combined Cooling, Heating and Power generation (CCHP) system, in addition to the economic benefits, could be profitable in environmental and energy consumption aspects. The aim of this study is to determine the optimal capacity of equipment and define the best control strategy of a CCHP system. Since determination of optimal system control strategy has a huge impact on improving the objective functions, the system’s performance under five different strategies (developed based on well-known Following Electrical Load (FEL) and Following Thermal Load (FTL) strategies) is evaluated. In a real case study, a CCHP system is designed for an educational complex located in Mahmoudabad, Mazandaran, Iran. The objective is to minimize capital and operational costs, energy consumption, and CO2 emissions of the system. Due to the complexities of the model, genetic algorithm (GA) and particle swarm optimisation (PSO) algorithm are used to find the optimal values of the decision variables. The results show that using FEL strategy CO2 emissions reduces in compression to FTL strategy. Furthermore, using multiple power generation units under FTL strategy eventuates the least cost but increases CO2 emissions and energy consumption in compression to FEL strategy. © 2018 Growing Science Ltd. All rights reserved
Introduction
A large portion of national energy consumption is expended for fulfilling the buildings' heating, cooling, and electricity demand.Consumed energy in the buildings sector, consisting of residential and commercial end users, accounts for 20.1% of the total delivered energy consumption worldwide (International Energy Outlook, 2016).The type and amount of energy consumed by households can vary significantly within and across regions and countries (International Energy Outlook, 2016).In the USA, residential buildings consume 22% of the total final energy use, compared with 26% in the EU.Residential buildings energy consumption is 28% of total energy consumption in the UK, well above Spain at 15%, mainly due to a more severe climate and the building types.In 2030, energy consumption attributed to residential and the non-domestic sectors is predicted to reach to 67% and 33%, respectively (Pérez-Lombard et al., 2008).Lack of efficient construction regulations, in addition to the low energy prices in the past have caused careless and inefficient consumption of energy by the Iranian residential sector compared to industrialized countries (Karbassi et al., 2007).Commercial and residential building sector consume about 40% of total energy in Iran.This consists, 11.7% of oil products, 73.13% of natural gas and 13.25% of electricity (Iran Energy Efficiency Organization (IEEO-SABA), 2016).In addition to the economic burdens, this energy consumption trend all around the world is causing severe environmental problems as well as energy security issues (Cai et al., 2009).
Using Combined Cooling, Heating and Power generation system (CCHP) is a proven method for enhancing energy efficiency.Using CCHPs leads to economic savings, while reducing the emissions (Zheng et al., 2014).Also, possible energy sources for CCHP systems include a vast range of fossil fuels, biomass, geothermal and solar power, giving the flexibility desired for installing these systems in different geographical regions.Consequently, CCHP systems are installed in a variety of buildings, such as hotels, offices, hospitals and supermarkets (Ge et al., 2009;Wang et al., 2008).
In this field the goal is to fulfil a building's energy demand while minimizing the costs and environmental consequences (Løken, 2007).In order to do so, structural design and operational planning of the system need to be optimised.Structural design of the system relates to defining the optimum number and capacity of the equipment; and operational planning of the system relates to the determination of the hourly operation of the equipment (Mago & Chamra, 2009).One of the main challenges in energy planning field is access to reliable estimation of energy demand.Additionally, the fluctuations in the building energy demand (in terms of heating, cooling, and electricity) makes the design and operational planning of the system a complex task (Cao, 2009) since reaching the optimum design requires solving an optimisation model at every interval of time.The large scale of the optimisation problem makes the models computationally intractable; therefore, operating strategies are proposed to reduce the complexity of the models.These strategies determine the state of the Power Generation Unit (PGU) and the proportion of the cooling demand fulfilled by the electric chillers (so called "electric cooling to cool load ratio") in each period, which significantly reduce the complexity of the problem.A variety of methods for determining optimum design of CCHP systems are proposed.Initially linear optimisation models were developed to design energy systems.Cao (2009) analysed the influence of energy prices on the system's economic feasibility.The objective function was minimisation of the annual cost, and maximization of the exegetic efficiency.Piacentino and Cardona (2008) presented a Mixed Integer Linear Programing (MILP) model to optimize the economic and environmental performance of a tri-generation system (heating, cooling and electricity production).Nonlinear Programming (NLP) and Mixed Integer Programming (MIP) models were used to find the optimum design of the system in research by Gamou and Yokoyama (1998) and Arcuri et al. (2007), respectively.Reduced gradient method was used by Chen and Hong (1996) to solve the presented mathematical model.In a similar study a matrix approach was employed to model the problem by Geidl and Andresson (2007).They presented the mathematical model of the problem in the matrix form and used Sequential Quadratic Programming to optimize an hourly linear objective function.
Due to their capability in tackling large-scale optimisation problems, artificial intelligence, in the form of heuristic and metaheuristic algorithms are commonly employed to optimize the design and operation of CCHP systems.Metaheuristic algorithms' ability of exploration and exploitation is admissible when evaluation of limited number of feasible solutions is desired (Črepinšek et al., 2013).Genetic Algorithm (GA) and Particle Swarm (PSO) algorithm have been applied to optimize of CCHP design and operational parameters.The PSO algorithm is used by Tichi et al. (2010) for minimizing the cost of operating various CHP and CCHP systems in an industrial dairy unit.Wu (2011) considered the optimisation of operation of a CHP system under uncertainty and used the PSO algorithm to solve the model.Ghaebi et al.
) 2012 ( investigated exergoeconomic optimisation of a CCHP system.The presented economic model was based on the Total Revenue Requirement (TRR) and the total cost of the system was defined as the objective function.This model was solved by GA.Designing CCHP systems involves determination of the equipment's capacity as the main goal.Wang et al. (2010) designed a CCHP system with consideration of PGU and storage tank capacity as decision variables.On-off coefficient and "electric cooling to cool load" ratio was considered as decision variables too.This research was extended by an investigation a biomass gasification CCHP system (Wang et al., 2014).In this research the capacity of the gasification reactor, PGU, absorption chiller, electric chiller, and heat exchanger were considered as decision variables.In another study by Sanaye et al. (2015) a CCHP system was designed with equipment's capacity, partial load of PGU in each month, and electric cooling to cool load ratio as decision variables.This study considered a more comprehensive design compared with previous studies; in addition to the capacity of PGU, the number of them was considered as the decision variables.When a high-capacity PGU is installed, due to fluctuations in electric demand, the optimum solution dictates that the PGU is in off state a number of courses.This will lead to purchase of the whole electricity demand from the grid and supplement of heating demand by auxiliary boiler.Therefore, energy consumption and pollution increase during these periods; also, the cost of buying electricity will increase significantly.Consequently, it might be more beneficial to consider a number of smaller-capacity PGU instead of a large-scale one.This is why the number of PGUs is considered as a decision variable.
In the previous research the number and the capacity of the equipment, the on-off coefficient, and "electric cooling to cool load ratio" under different strategies were not simultaneously considered as decision variables.Therefore, the interconnections between these variables have not been taken into account, which is investigated in this study.In this research various strategies, for simultaneous utilization of several power generation units and adaptation of the operation status of chillers throughout the year, are explored so that the performance of the system under different circumstances is evaluated.Commonly employed strategies such as Following Electrical Load (FEL) and Following Thermal Load (FTL) are implemented for an actual set of buildings to optimize the performance of the CCHP system.Moreover, different strategies are implemented for a real case and results are analysed.As mentioned before, main goal of using the CCHP systems is to lower the economic costs and the environmental consequences.In this study, it is endeavoured to reflect the influence of optimum design and operation planning of the system in reduction of economic costs, environmental footprint measured in terms of CO2 emissions, and energy consumption.As a summary, the followings are the contribution of this paper: Three commonly employed strategies in addition to two novel strategies for operational planning of CCHP systems are explored. GA and PSO algorithms are employed to obtain the optimum values of design parameters and their performance in solving this optimisation problem are compared. Eight design parameters (decision variables), including the capacity of gas turbine as the prime mover, their number and operational strategy, the capacity of the backup boiler and storage tank, the capacity of electrical and absorption chillers, the electric cooling ratio, and the on-off coefficient of PGUs are considered and the results under various strategies are compared. The developed strategies and algorithms are applied to a real case study.
Problem Description
Conventionally, in Separate Production (SP) systems, electric chillers are used to fulfil cooling demand, while heating demand of the buildings is supplied with a boiler (commonly a gas boiler), and the electricity is purchased off the grid.CCHP systems, however, consist of several separated segments that perform in an integrated fashion to fulfil the electricity, cooling, and heating demands.Fig. 1 shows general structure of a CCHP system.PGU generates electricity power by consuming the fuel; the heat exchanger retrieves heat generated during generation of electricity; depending on the implemented strategy, the recovered heat is either used to fulfil the heating demand or is directed to the absorption chiller to fulfil the cooling demand; the electric chiller is used to complement the absorption chillers and fulfil the cooling demand when needed; the auxiliary boiler and energy storage tank reduce the risk of system failure and increase system reliability.(Sanaye & Hajabdollahi, 2015) Operational planning of CCHP systems is usually conducted based on two strategies: FEL and FTL.The core of FEL strategy is fulfilment of the electrical demand.If the electrical demand of the buildings exceeds the capacity of PGU, it works in full load; otherwise it works in partial load to provide the required amount of electrical power.The cooling load provided by the electric chiller is determined in each period (an hour) based on the electrical power production.When systems operate based on FEL, overproduction of thermal energy would be wasted.When the energy generated by the PGU is insufficient, lack of electricity is purchased from the grid.Also, thermal storage tanks are used to enhance thermal efficiency of the CCHP systems.Recovered heat from the PGU will be used by the absorption chiller for cooling, or by the heat exchanger to supply the heat demand of the buildings.
On the contrary to the FEL strategy, in the FTL strategy, the purpose is to fulfil the thermal demand of the building, so there is no excess of thermal energy and in case of shortage, thermal energy would be supplied by an auxiliary boiler.When CCHP system operates based on the FTL, surplus or shortage of electrical power is possible.In case of surplus, if selling the excess electricity to the grid is not possible, surplus electricity would be wasted; and in case of shortage, unmet electricity demand is fulfilled by purchasing from the grid.
Another commonly adopted strategy for operational planning of CCHP systems is similar to FEL but with an additional decision variable (x) which defines the ratio of electric cooling to cool load.In other words, in this strategy the proportion of the cooling demand supplied by the electric chiller is a decision variable, compared to the FEL strategy where the priority is always given to the electric chiller and the capacity of the PGU defines the amount of the cooling demand supplied by the electric chiller.Using FEL with no restriction on the electric chiller utilization might lead to significant heat waste which could have been used by the absorption chiller to supply the cooling demand.In order to prevent increasing the complexity of the model, the value of x is commonly considered to be fixed throughout the year (Sanaye & Khakpaay, 2014;J. Wang et al., 2010) (one decision variable is added instead of 8760 variables).
As mentioned before, using several smaller PGUs instead of a single large-capacity PGU could be beneficial in some cases.At the first glance, deployment of several PGUs will dramatically increase the capital cost of the system; however, the operational costs could be reduced to the extent that compensate the extra capital cost.Therefore, considering the multi-PGU case provides the opportunity to evaluate the trade-off between the higher capital cost and the reduced operational costs which is missed when a single PGU is considered.Specifically, under the FEL strategy, it is anticipated that multi-PGU approach offers more favourable results when the minimum and maximum electrical load of the system throughout the year are widely different.If the electric load fluctuations in the system is significant and we have a high capacity PGU, it will be turned off in many periods when the partial load would fall below its economical operational threshold (as explained in section 3.1) leading to higher purchased amount of the electricity from the grid and utilizing the auxiliary boiler to fulfil the heat demand.On the contrary when several PGUs are available their status (on/off) could be adjusted respective to the partial load of the system, hence, reducing the operational costs.Similarly, when using a single high-capacity PGU under the FTL strategy, in a number of periods during spring and fall the PGU is turned off because the heating demand of the system is reduced and the PGU must operate at a low partial load that is not economical (as explained in section 3.1).Having several PGU with smaller capacity could reduce purchasing power from the grid and reduce supplying heating demand by the auxiliary boiler, hence, improving system's performance.Considering several PGU will change how the operational strategies are applied.When using FEL strategy in a multi-PGU system, in order to determine the status (on/off) of each PGU at each time step, the PGUs are sorted based on their capacity and the smallest unit with the lowest capacity is placed in active status.The reminder of the electrical demand is assigned to the next PGU and it is activated.This is continued until the demand is fully responded or the generation of the electricity by the next PGU is not economically viable.This approach is expected to increase the efficiency of the PGUs by increasing their utilization rate.When using the FTL strategy, in order to determine the status (on/off) of each PGU, the PGUs are sorted based on their capacity and the smallest unit with the lowest capacity is placed in active status.The reminder of heating demand is assigned to the next PGU and it is activated.This is continued until the heating demand is fully responded or the activation of the next PGU is not economically viable.
Model formulation
In the followings, the parameters of the model are introduced and the objective functions and constraints are presented.Table 1 shows a list of the parameters and symbols used in the model.
Table 1
Parameters and symbols of the developed model
3.1.Under FEL strategy
When FEL is the adopted strategy, the ratio of cooling load supplemented by the electric chiller per hour is calculated by ( 1).The amount of cooling load supplied by the electric chiller is calculated by (2). ,max .
The electricity consumption of the electric chiller is calculated as shown in (3).The capacity constraint of the electric chiller is shown in (4).The total electricity requirement of buildings is shown in ( 5).The capacity utilization of the PGU is denoted by , , called the instantaneous fraction of the PGU, and is calculated using (6).
, , / ec
, , ( ) ,max The efficiency of the PGU is highly dependent on its capacity utilization (Wang et al., 2011) .Below a certain threshold it is not rational to use the PGU since most of the consumed fuel is wasted on heat generation rather than electricity generation.In ( 7), is considered as the lower bound on the instantaneous fraction to ascertain the PGU is either turned off or is working above the predetermined level.is a decision variable in the model and its upper limit is equal to 1. , , , , Partial load of PGU is calculated by (8.The efficiency of the PGU is determined by (9).Increasing the partial load of PGU increases its efficiency (Sanaye & Hajabdollahi, 2013;Sanaye, Meybodi, & Shokrollahi, 2008) as shown in the (9).The electricity purchased from the grid or sold to the grid is calculated using (10).
) (11) declares that power exchange with the grid was purchase off the grid or sold to the grid.The fuel consumed by PGU is estimated by the (12).Based on the fuel consumption and the efficiency of the PGU, the generated heat is calculated by (13).
1, , 2, , , , The amount of the recovered heat is calculated by ( 14).Fuel consumption related to the electricity purchased from the grid is calculated by ( 15).The supplied cooling via absorption chiller is determined by ( 16).Absorption chiller's cooling range is between minimum and the maximum capacity of absorption chiller, represented in (17)., , . .
,min , ,max ac ac h ac The heat consumed by the absorption chiller is calculated by its coefficient of performance in (18).The total amount of heat requirement of the system is calculated as (19).
, , , Part of the heat requirement is supplied by recovered heat from the PGU and the rest is provided by the backup boiler.The generated heat from the auxiliary boiler can be calculated by (20).Capacity constraint of the boiler is represented in (21).Partial load of the boiler is determined by ( 22) and its efficiency can be calculated by ( 23) (Sanaye & Hajabdollahi, 2013;Sanaye et al., 2008).
r h s h req h b h req h r h s h req h r h
Fuel consumption of the auxiliary boiler is calculated by ( 24) and the total fuel consumption in the CCHP system is determined in (25).The amount of heat charged in or discharged from storage tank can be determined by ( 26).The initial investment cost per kW capacity of equipment ($/kW) is determined using ( 27) and ( 28) (Sanaye & Hajabdollahi, 2013).
, m a x 0.014 10 600 The gas turbine maintenance cost ($/kW), expressed based on its capacity, is assumed equal to $0.0055 kWh (Smith et al., 2010).The initial cost per kW of the boiler ($/kW) is estimated by (29 and its total capital cost is calculated by (30.Operating cost of the boiler is assumed $0.0027 kWh (Sanaye & Hajabdollahi, 2013).Initial investment per kW ($/kW) of absorption and electric chiller are estimated as shown in (31 and (32, respectively (Sanaye et al., 2008).Their total capital cost is calculated by (33 and (34.Operating cost for both chillers is assumed to be 0.003 $/kWh (Sanaye & Hajabdollahi, 2013).The initial cost of a storage tank per kW is considered 33 $/kW (Sanaye & Hajabdollahi, 2013) so the total capital cost of storage tank is calculated by (35.
Under FTL strategy
For the FTL strategy mathematical relationships are similar to FEL except for the chiller's operation parameters that are determined in a different way.At first the efficiency of PGU at full load (ƞ , ) is calculated using (9) and the heat generated by the PGU is calculated by (36).Using the heat recovery factor, recovered heat from the CCHP system at the highest capacity is calculated in (37) and the ratio of electric cooling load to cool load is determined as shown in (38).
(( ) ) / / hea h c h ac h hea h ac c h hea h hea h c h ac
By determination of , the required heat can be calculated according to (19).If the required heat is less than , , recovered heat would be equal to its required heat.By using (39), , , is calculated and using (40 temporary partial load ( , , ) is calculated.As shown in (41), is a binary variable which equals to 1 when PGU is on and equals to zero when PGU is off.Using (42), heat recovered from the PGU is calculated based on heat requirement of the system.Based on the recovered heat of PGU the amount of electrical generation ( , ) is estimated and the partial load of PGU is calculated by (43).In other words, at first , , is calculated to determine if PGU should be in on or off mode.After determination of PGU's state, the amount of , is calculated.
3.3.Under FEL strategy with fixed ratio of electric cooling to cool load
As previously mentioned, in the third strategy, the ratio of electric cooling to cool load is considered as a decision variable.PGU operates based on FEL but x is determined on the basis of both electrical and thermal load and Eq. ( 1) is omitted from the optimisation model.
Multiple PGUs, Under FEL strategy with fixed ratio of electric cooling to cool load
The power generation for each of the PGUs in each period of time is calculated in (7) until condition , is met.By this condition the other units will become inactive.PGUs are sorted in descending order of , as shown in (44).In this stage, based on FEL strategy the total electrical requirement of the system is determined.Total electricity requirement for each unit can then be calculated by ( 45) and ( 46).Initially the instantaneous load factor of the active unit is calculated in (47); then, the amount of electricity generated for each active unit is determined using (48).
, ,max , 1,max {1,..., K 1} , , , , , ,max At this stage, the electricity generated by the CCHP system in each period is determined.If unit number j is placed on inactive status all next units will be in inactive status.This restriction is shown in (49); then, the total electrical load production is calculated by ( 50). ( 49) shows that PGUs are activated respectively and (50) represents total electricity generated by PGUs.Purchased electricity from the grid can be calculated by (51).Calculation of total heat retrieved from the CCHP system in each period is based on (52).The rest of the equations are similar to FEL strategy.
3.5.Multiple PGUs, under FTL strategy with fixed ratio of electric cooling to cool load
(49 is used to sort the PGUs.First, the entire heating system requirement is determined by ( 19).For the first activated unit the total heating demand of the system is considered as the required heat amount which is shown in (53).By (54) required heat of other units can be determined.After the last PGU, if still there is unmet demand, the auxiliary boiler is used for which the amount of heating demand is calculated using (55).
3.6.Evaluation Criteria
Capital and variable costs, the amount of CO2 emissions, and amount of energy consumption are considered as the criteria in the objective function in the optimisation model.The first criterion evaluates the economic costs of the system.It consists of the capital cost of equipment, cost of fuel consumed by the boiler and the PGU, operation cost of equipment, and cost/profit from transfer of electricity from/to the grid.Salvage value of equipment is considered 10% of their capital cost (Gibson et al., 2015).
) N k buy h e buy b h f b pgu h f pgu sale h e sale k k pgu h b h
. .
R is the capital recovery factor and calculated as shown in (56 and A is the uniform series sinking fund and its value is calculated using (57), n represents the service life of the equipment and i is the interest rate.Similar to Bahrami and Farahbakhsh (2013), it is assumed that the values of i and n are equal for all equipment.The Annual Total Cost (ATC) is calculated by (58.The second criterion evaluate the amount of CO2 Emissions (CDE) to reflect environmental concerns.CO2 emissions are released by fuel consumption of auxiliary boiler and the PGU.Also, when electricity is purchased from the grid, the related CO2 emissions are accounted for as shown in (59).
The third criterion represents the Primary Energy Consumption (PEC) of the system which is composed of two parts.The first part is amount of fuel consumed by the boiler and the PGU, and the second part is the fuel related to the electrical energy purchased from the grid.Total energy consumption of the system is shown in (60).In this study we use a weighted sum of these three criterion to form a single objective function as shown in (61).8760 3 , 1 To calculate the optimal value of the multi objective function, first each of the single-objective functions are optimised to obtain the optimum value for each of them, denoted as , , , , and , in (61.Each single-objective function is normalized and then sum of them is calculated.By changing the weights different results can be achieved.Since the economic cost of fuel consumption (fuel consumption of the PGU, boiler and electricity purchased from the grid) in CCHP system in addition to CO2 tax are considered in ATC function, the objective functions are largely aligned with each other which makes using the normalized weighted sum a proper method in handling the multi-objective optimisation model.
Evolutionary Algorithms
GA and PSO are both population-based algorithms commonly employed in the field of energy planning.Genetic algorithm was introduced by John Holland (Mitchell, 1998) as an evolutionary algorithm inspired by biology concepts such as inheritance, mutation, selection and crossover.Small random changes are determined by mutation which is determinative of GA diversity.Crossover operator determines how the algorithm combines two selected parents, to generate children for the next generation.Candidate solutions are assessed by the evaluation function (also known as fitness function).
PSO was invented by Kennedy and Eberhart in the mid1990s (1995), inspired by the movement of the particles.The PSO algorithm includes three phases, namely, generating particles' positions and velocities, updating the velocity of particles, and updating the position of particles.The three values that affect the new direction of a particle are its current motion, the best position in its memory, and swarm influence.(62) shows how the movement speed of the particle is updated and (63) shows how the new position of the particle is determined where the inertia coefficient for particle i in its next movement is represented by 1 , new position of the particle i is represented 1 , and is the current motion factor, is particle own memory factor, and is the swarm influence factor.
Solution Representation
Solution representation scheme adopted in this research are similar for the both algorithms, with minor differences respective to the implemented strategy.In GA, a chromosome is a set of parameters which define a solution for the problem.An example chromosome for each strategy is shown Fig. 2. In the FEL strategy a chromosome with six bits is used.Chromosomes encompass normalized variables, i.e., all the bits filled with continuous variables between zero and one.Bits represent the capacity of PGU, auxiliary boiler, absorption chillers, electric chiller, heating storage tank and on/off coefficient.In the FTL strategy chromosome consist of 5 bits.In this strategy chromosome structure is similar to the FEL strategy just the bit corresponding to heat storage tank capacity is removed, because in this strategy excess heat is not produced at any time and thus the storage tank is removed.In FEL with fixed ratio of electric cooling load to cool load the chromosome has one bit more than in the FEL strategy which is related to the electric cooling load to cool load ratio.In the multi-PGUs strategy an upper limit for the number of PGUs is considered and binary bits are used to indicate the instalment of the PGUs; corresponding to each binary bit there is one bit related to that PGU's capacity.Therefore, assuming n as the upper limit on the number of the PGUs, 2*n bits are considered for PGUs to determine their instalment and capacity.
Operators
Mutation and crossover are the two essential operators in GA.In crossover operator, at first the parent chromosomes are selected, then by selection of genes from the parents, new off-springs are created.Parent chromosomes are selected according to their fitness; chromosomes which have better fitness function value have higher chance of being selected.After crossover, mutation takes place which prevents algorithm from premature convergence to a local optimum by inserting randomly created solutions based on the existing ones.In Fig. 3 examples of crossover and mutation operators for FEL strategy are shown.In this study a single point crossover is implemented.The crossover point is randomly selected, and two new solutions are created by swapping the two sides of the point in the parents to form a new solution.Applying the mutation operator, two points of a given chromosome are randomly selected and swapped to form a new solution.An example of crossover and mutation operators for Muti-PGUs strategy is shown in Fig. 4.These chromosomes possess binary and continues variables limited to [0,1].Crossover operator is exactly the same for the multi-PGU strategy.
To apply mutation two random points are selected.If this selected point is binary, the bit will change to its contrary form.The capacity related to this binary variable will change based on this bit's new value.For example, if it becomes zero the capacity related to this bit will change to zero; otherwise the capacity related to this bit is updated to a new random value.If selected bit is continuous, it will become updated to a new continuous random variable by mutation operator.
In the implemented PSO algorithm the particles structure is similar to the chromosome structure in GA.Mutation is the only operator in PSO which is same as GA.
Case study
The proposed CCHP optimisation model and the solution procedures are applied to a real case study, an educational complex located in Mazandaran, Iran.The CCHP system provides electricity, heating (for both space heating and domestic hot water), and cooling.List of existing buildings, the total area of outer wall, windows, doors and usable area are shown in Table 2.The technical parameters of CCHP system and the specifications of the components are listed in Table 3. Design parameters (decision variables) and the acceptable range of their variations are listed in Table 4.The range of the decision variables is determined by load demand of the buildings; the range of the boiler and storage tank capacity are determined according to heating demand; and chillers capacity range is determined according to cooling demand of the buildings.The electricity, cooling, and heating load curves are estimated using Energy Plus ("Energy Plus ", 2014) shown in Fig. 5.
Results and discussions
In this section the results of the algorithms under each strategy is presented and the comparison is drawn between them.In this research GA and PSO are coded in MATLAB R 2013b ("MATLAB," 2013) Stopping criteria and parameters for GA and PSO are in Table 5.
6.1.Results for different strategies for different weights
The results obtained when different weights for each objective function is used are shown in Fig. 6; 19 different weights results are represented.It is started by the (1,0,0) vector and ended to (0,0,1) vector for different scenarios.Extreme points which are related to single objectives are represented in Table 6; since minimizing PEC is equivalent to minimizing CDE, only one row (under each strategy) is considered for these two objectives.As ATC decreases, PEC and CDE increase because investment decreases and less electricity is generated by the CCHP system.As a result, buying electricity from the grid increases rising PED and CED.PEC and CDE are aligned because consumption of more fuel consequences more emission.The range of the results for the three objectives is less than 1% because of the relationship between the objective functions, reaffirming the suitability of the using the weighted-sum method to handle the three objectives simultaneously.The equal weights method is implemented in many decisionmaking problems; this method's results are most of the time close to the optimal weighting methods as discussed in (Wang et al., 2009(Wang et al., , 2010)).Therefore, in the following all of weights are assumed equal and w1 = w2 = w3 = .
Results for FEL strategy
The best design of the system in this strategy is presented in the Table 7 for both the GA and PSO algorithm.PSO outperforms GA in this settings as reflected in the last three rows of Table 7. Numerical results in various segments of the system are shown in Fig. 7. Following this strategy, PGU is operating in its maximum capacity in 5870 periods and in partial load larger than 0.8 in 8684 periods (see Fig. 7 d.E PGU).The results regarding the heat storage tank (Fig. 7 c.Q Storage) indicate that during two periods of time the storage tank discharges completely.First in summer, to fulfil the heat requirement of the absorption chiller; and second in the winter, to meet the heat demand of the buildings.As shown in Fig. 7 a.Q Boiler, the thermal load provided by the boiler is maximized during winter, because recovered heat from the PGU is not sufficient to fulfil the heat demand of the system and the boiler supplies the remaining heat demand.In summer, the boiler is operational in 21 periods because recovered heat is not sufficient to fulfil absorption chiller's heat demand.As shown in Fig. 7 b.Q Electric Chiller, despite the priority given to the use of electric chiller under this strategy, only 34% of the cooling demand is supplied with electric chiller and the remaining cooling demand is fulfilled by the absorption chiller.
Results for FTL strategy
The obtained results for this strategy are listed in Table 8.The on-off coefficient factor ( ) is valued at the lowest limit (0.2).When the PGU is turned off the heat requirement of the buildings has to be provided by the boiler as shown in Fig. 8 d.Q Boiler.The priority in this strategy is with the absorption chiller, thus, cooling load provided by the absorption chiller throughout the year is more than FEL because, 98 % of cooling load is provided by the absorption chiller (c.Q Electric Chiller & b.Q Absorption chiller).
The electric chiller is used in 45 periods throughout the year.The obtained results from the FTL strategy are dominated by the results from the FEL strategy.In addition to higher economic costs, FTL strategy cause higher energy consumption and environmental pollution.In 6648 periods PGU is turned off as shown in Fig. 8 a. E PGU and power requirement of the system is supplied through the network which in addition to the cost of buying electricity imposes higher emissions and energy consumption to the system.
Results for FEL, fixed electric cooling ratio
The results of this strategy, shown in Table 9, indicate that best performance of the system is obtained when electric chiller supplies 24% of cooling demand of the system; the electric cooling to cool load ratio is set to 0.24 in the PSO as shown in Fig. 9 show that this approach incurs less ATC, PEC and CDE than the two previous strategies.96% of the power requirement of the system is provided via PGU.The total heat requirement is 11,586,092 kWh which is less than FTL (11,924,618 kWh) and more than FEL (11,436,859 kWh).
Table 9
Best design of the system at FEL, fixed electric cooling ratio strategy Heat required in this strategy is between FEL and FTL strategy because under the FEL strategy, the priority is with the electric chiller and heat requirement of the system will decrease.Under FTL and FEL strategies, the priority of the chillers is given so the performance of the chillers is partially predefined while under FEL with fixed electric cooling ratio, the model determines the best ratio of cooling load supplied by chillers; therefore, the results show a better performance.The capacity of the storage tank in this strategy is lower than under the FEL strategy (c.Q Storage).Following this strategy PGU operates in maximum capacity (a.E PGU) in 2974 periods and in 7761 periods operates in partial load above 80% of its capacity.
Results for FEL multi PGU based on fixed electric cooling ratio
The obtained results show that the selection of several PGUs in this strategy is not efficient and only increases the system costs.The reason is that PGU under the FEL strategy operates in partial load higher than 80% in 7761 periods indicating an efficient use of its capacity; as a result, selection of multiple PGUs does not improve the performance and only increases the capital cost of the system.So if the system is to operate based on FEL strategy, it is suggested to use a single PGU in this case study.
Results for FTL, Multi PGU based on fixed electric cooling ratio
As shown in Table 10 under FTL strategy using three PGUs is recommended.As shown Fig. 10 b.Q Boiler, a boiler with lower capacity compared to the previous strategies is chosen.This is because the recovered heat from the PGUs in most periods can fulfill the majority of the heat demand.Most of cooling demand is supplied by the absorption chiller which is more efficient and electric cooling to cool load ratio is 0.96.In the peak of the heat demand, during winter and summer, all three units are in active status and operate in their maximum capacity as shown in Fig. 10 (a.E PGU).The costs of FTL strategy with one PGU is more than the FTL strategy with multiple PGUs.It is because at least one PGU is in the active mode with high partial load in 4805 periods, therefore, provided thermal and electrical load covers the demand.
Table 10
The best design of the system under the FTL strategy.As a result, in addition to purchasing less electricity from the grid, less thermal load is supplied via auxiliary boiler in compression to the FTL strategy.The total electricity purchased from the grid in FEL, FTL, FEL with fixed ratio of electric cooling to cool load, and FTL with multiple-PGU are respectively 1,070,688 kWh; 9,717,615 kWh; 516,235 kWh; and 7,081,472 kWh.Electricity sold to the grid in FTL and FTL with Multiple PGU are respectively 978,028 kWh, and 1,026,760.Total electricity purchased form the grid under the FEL strategy with fixed ratio of electric cooling to cool load is less than the other strategies; so if increases in the price of electricity is predicted, this strategy's chance of being chosen by the decision makers increases.If the selling price of the electricity rises, multiple-PGU under FTL results less ATC which can alter the decision makers' decision.
Conclusion
A combined cooling, heating and power generation (CCHP) system was optimally designed.The decision variables were the number of prime movers (PGUs), their capacity and operational strategy, backup boiler, storage tank heating, absorption chiller and electric chiller capacity, electric cooling ratio, and the on-off coefficient of the PGU.This combination of the decision variables, along with the various strategies considered in this study, provide a more comprehensive view of the real system and, to the best of authors' knowledge, is presented for the first time.Due to the complexity of the developed model, PSO and GA were used to solve it.PSO showed a better performance in solving the optimisation problem although the difference between PSO and GA was maximum 0.7%.
The results obtained for the case study show that under the FEL strategy, CDE and PEC are significantly reduced in compression to the FTL strategy.Under FTL strategy with one PGU, the PGU is put in the inactive mode in a large number of time.Under FTL strategy with multiple PGU significant reduction in costs was observed but, CDE and PEC were still higher in compression to the FEL strategy.If the approach of decision makers is to reduce the economic costs, it is better to work based on multi-PGU under FTL strategy; and if they rather to reduce CDE and PEC or are seeking to buy the lowest amount of electricity from the grid, system should work based on FEL strategy with fixed ratio of electric cooling to cool load.If selling price of electricity were rising, the system should operate based on multi-PGU under the FTL strategy; and if buying price of electricity were rising, system should operate based on the FEL strategy with fixed ratio of electric cooling to cool load.
As a direction for future research, analysing the potential usage of municipality waste in the presented case study, and generally biomass in other jurisdictions, as the primary source of energy in the CCHP systems is suggested.Another avenue of the future research could be the consideration of the chill storage tanks in the CCHP system and analysing its effect on system's performance.Also, estimating the input parameters such as electricity, cooling and heating demand of the CCHP by design of a system dynamic model or time series prediction is an interesting topic.These methods can be used to predict input data regarding energy consumption.When only the energy consumption data of last years and physical futures of the buildings are used to estimate energy demand of the system, the possible trends in the energy consumption of the system are neglected.
Fig. 5 .
Fig. 5. Electricity, Cooling, and Heating load curves during a year
Fig. 6 .
Fig. 6.Objectives amount for different strategies in different weights
Fig. 7 .
Fig. 7. Performance of system all year long under FEL
Fig. 8 .
Fig. 8. Performance of system all year long (FTL) (d. Q Electric Chiller) and to 0.26 in the GA.The results
Fig. 9 .
Fig. 9. Performance of system all year long (Fixed ratio of electric chiller cool load to cooling demand)
Table 2
List of existing buildings and their characteristics
Table 3
Technical parameters and specifications of components (Liu et al., 2013) , ($/ kWh) ec COP Coefficient of performance of electric chiller 3 (Liu et al., 2013) ac C O PCoefficient of performance of absorption chiller 0.7(Liu et al., 2013)
Table 4
Range of variations of the decision variables.
Table 5
Parameters for GA and PSO
Table 6
Results of different strategies in thresholds
Table 7
Best design of the system at FEL strategy
Table 8
Best design of the system at FTL strategy
|
2018-12-12T11:25:47.768Z
|
2018-01-01T00:00:00.000
|
{
"year": 2018,
"sha1": "be7d149ef7bf48d25adb63c45da8c94d930a88bd",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5267/j.ijiec.2017.4.002",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "be7d149ef7bf48d25adb63c45da8c94d930a88bd",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
215852652
|
pes2o/s2orc
|
v3-fos-license
|
Backstepping active disturbance rejection control for trajectory tracking of underactuated autonomous underwater vehicles with position error constraint
In this article, the three-dimensional trajectory tracking control of an autonomous underwater vehicle is addressed. The vehicle is assumed to be underactuated and the system parameters and the external disturbances are unknown. First, the five degrees of freedom kinematics and dynamics model of underactuated autonomous underwater vehicle are acquired. Following this, reduced-order linear extended state observers are designed to estimate and compensate for the uncertainties that exist in the model and the external disturbances. A backstepping active disturbance rejection control method is designed with the help of a time-varying barrier Lyapunov function to constrain the position tracking error. Furthermore, the controller system can be proved to be stable by employing the Lyapunov stability theory. Finally, the simulation and comparative analyses demonstrate the usefulness and robustness of the proposed controller in the presence of internal parameter uncertainties and external time-varying disturbances.
Introduction
Autonomous underwater vehicles (AUVs) are widely used in marine scientific investigation, marine mineral exploration, and oceanographic mapping. 1 So the need for AUVs has become increasingly apparent and the research on the trajectory tracking control of AUVs becomes more important. Considering reducing the actuator cost and weight or increasing the reliability of the system in case of actuator failure, most of AUVs have fewer actuators than the number of degrees of freedom. 2,3 Nowadays, motion control of underactuated AUVs has been absorbing significant attention of researchers mainly in nonlinear control.
In past decades, lots of methods have been proposed to track AUVs, such as sliding mode control, 4,5 robust control, 6 neural network control, 7 and so on. There are also several methods combining the backstepping technique and other methods that are proposed to solve trajectory tracking problem of underactuated AUVs in a variety of complex environments. A current observer-based backstepping controller is developed to achieve trajectory tracking control of underactuated AUVs in the presence of unknown current disturbances. 8 The combination of backstepping technique and adaptive sliding mode control enhances the robustness of an AUV in the presence of model parameter uncertainties and external environmental disturbances. 9 Backstepping technique and bio-inspired models are used to increase system robustness and avoid the singularity problem in backstepping control of virtual velocity error. 10 Active disturbance rejection control (ADRC) was originally proposed by Han. 11 It's worth noting that nonlinear tracking differentiator (TD) and extended state observer (ESO) are important parts of ADRC. The uniqueness of ADRC is that it treats all factors affecting the plant, such as system nonlinearities, uncertainties, and external disturbances as total disturbances to be observed and compensated by ESO and it has been applied in almost all domains of control engineering. 12 ADRC has been adopted to solve the path following control problem of underactuated AUVs, 13 but it has rarely been used in underactuated AUVs' trajectory tracking control.
On the other hand, in some cases, it is necessary to constrain the position tracking error of an underactuated AUV within certain given boundary function all the time for safety reasons. It has been proved that the barrier Lyapunov function method [14][15][16][17] and prescribed performance control method 18,19 are effective solutions to prevent constraint violation. A barrier Lyapunov function is incorporated with the backstepping control scheme to handle the position tracking error constraint. 20 Prescribed performance functions have been adopted to constrain the position and orientation errors of an underactuated AUV which can ensure both the prescribed transient and steady-state performance constraints. 21,22 But in these studies, point-to-point navigation is used and it is worth noting that the desired yaw angle is not continuously differentiable when the position error equals zero. To avoid this problem, the position tracking error converges to a constant instead of zero.
Underactuated AUVs lack sway and heave propellers, so the transverse and vertical position errors of trajectory tracking are usually eliminated by controlling the yaw and pitch angles, that is also why the backstepping technique is often employed in the trajectory tracking control of underactuated AUVs. However, backstepping controller has two obvious disadvantages. The first one is that the unknown transverse and vertical disturbances are usually omitted when deducing the virtual control variables of yaw angular and pitch angular velocities. As a result, the unknown transverse and vertical disturbances cannot be compensated in time, which always affects the result of trajectory tracking. The other one is that the problem of "explosion of terms" caused by the operation of differentiation always exists. Based on the above analysis, that reduced-order linear extended state observers (RLESOs) are used to estimate and compensate for the total disturbances which can improve the performance of backstepping controller and enhance the robustness of underactuated AUVs. Besides, TD can be used to provide the filtered version of the input signal and its differentiation. So TDs are employed to solve the problem of "explosion of terms." What's more, by barrier Lyapunov function, the controller can prevent the violation of the time-varying position error constraint and render the position tracking error converges to zero. In this work, the proposed controller can solve the threedimensional trajectory tracking control problem of underactuated AUVs in the presence of internal parameter uncertainties and external unknown disturbances.
The rest of this article is organized as follows. The mathematical model of an underactuated AUV system is presented in the second section. In the third section, RLESOs are designed to estimate the total disturbances. The backstepping ADRC controller design procedure for the three-dimensional trajectory tracking of an underactuated AUV is illustrated in the fourth section. The simulation and comparative analyses are provided in the fifth section and some conclusions of this article are brought forward in the sixth section.
Problem formulation
This section presents the five degrees of freedom kinematics and dynamics model of an underactuated AUV and then formulates the problem of trajectory tracking.
AUV modeling
We define the earth-fixed coordinate system fEg and the body-fixed coordinate system fBg of the AUV as shown in Figure 1, where x, h, and z represent the inertial coordinates of the vehicle in the earth-fixed frame; u, v, and w are the surge, sway, and heave velocities, respectively, defined in the body-fixed frame; p, q, and r represent roll angular velocity, pitch angular velocity, and yaw angular velocity; ', q, and are roll angle, pitch angle, and yaw angle.
The underactuated AUV is assumed to satisfy the following assumptions 23,24 : (1) the center of gravity coincides with the center of buoyancy, (2) the mass distribution is homogeneous, and (3) the hydrodynamic drag terms of order higher than two and roll motion are neglected. Then the five degrees of freedom (DOF) AUV kinematics equations can be written as The five DOF underactuated AUV dynamics equations, considering the internal parameter uncertainties and external environmental disturbances, can be expressed by the following differential equations and w r ¼ w À w c represent the relative velocities of the vehicle with respect to current in the body-fixed frame and V cx , V cy , and V cz represent the ocean current velocities in the earth-fixed frame; DðÞ represents the parameter uncertainties in the vehicle model. d 1 , d 2 , d 3 , d 5 , and d 6 are the external environmental disturbances; D u , D v , D w , D q , and D r are the total disturbances to be estimated; t u , t q , and t r are considered as the available control inputs. Assumption 1. The underactuated AUV's velocities and control inputs are bounded, that is, w, q, and r are the known upper bounds. (2), the total disturbances and their derivatives are all bounded. There exists:
Control objectives
In order to facilitate the formulation, p ¼ ½xðtÞ; hðtÞ; zðtÞ T is defined as the actual coordinate variable and p d ¼ ½x d ðtÞ; h d ðtÞ; z d ðtÞ T is a given sufficiently smooth time-varying desired trajectory with its derivative with respect to time bounded.
Considering the kinematics and dynamics equations, design a controller to render the tracking error jjp À p d jj converges to a neighborhood of the origin that can be made arbitrarily small.
RLESOs design
According to the design principle of RLESO which is firstly proposed by Huang and Xue, 12 RLESOs for the estimations of the total disturbances are designed as follows whereD u ,D v ,D w ,D q , andD r are the estimations of D u , D v , D w , D q , and D r ; b i are the observer gains and p i (i ¼ 1; 2; 3; 4; 5) are the auxiliary states of the observer. If Assumption 2 is satisfied, we can obtain 25 : represents the estimation error of the RLESOs and can be tuned arbitrarily small by increasing the observer gains b i .
Coordinate transformation
The desired yaw angle and pitch angle are obtained only based on the reference trajectory where . Then the position and attitude error variables x e , y e , z e , q e , and e are defined as x which express the errors in the body-fixed frame. x d , h d , z d , q d , and d are the desired position and attitude variables in the earth-fixed frame. The derivatives of the position and attitude error variables are described as _ x e ¼ u À v p cos q d cos q cos e À v p sin q d sin q þ ry e À qz e _ y e ¼ v þ v p cos q d sin e À rx e À rz e tan q _ z e ¼ w À v p cos q d sin q cos e þ v p sin q d cos q þ qx e þ ry e tan q _ Backstepping controller with RLESOs This section designs a trajectory tracking controller by combining the backstepping technique and RLESOs and analyzes the stability based on the Lyapunov stability theory.
Step 1: A time-varying barrier Lyapunov function is defined as where r is the position error constraint function and r e 2 ¼ x e 2 þ y e 2 þ z e 2 . The time derivative of V 1 yields Choose virtual velocity error variables 26 In order to make _ V 1 negative, we choose u, a 1 , and a 2 as virtual controls, and their desired values are as follows a 1d ¼ Àv À k 2 y e þ y e _ r=r ð15Þ where k 1 , k 2 , and k 3 are positive constants.
Since the virtual variables u d , a 1d , and a 2d are not true controls, we define error variables Substituting equations (14) to (19) into equation (11) þ a 1e y e À a 2e z e Þ=ðr 2 À r e 2 Þ ð 20Þ Step 2: To stabilize the error variable u e , define Its derivative along with equation (17) and equation (2a) becomes _ V 2 ¼ ðÀk 1 x e 2 À k 2 y e 2 À k 3 z e 2 þ a 1e y e À a 2e z e Þ=ðr 2 À r e 2 Þþu e ð _ u À _ u d þ x e =ðr 2 À r e 2 ÞÞ ¼ ðÀk 1 x e 2 À k 2 y e 2 À k 3 z e 2 þ a 1e y e À a 2e z e Þ=ðr 2 À r e 2 Þ þ u e ðf u þ t u =m 11 þ D u À _ u d þ x e =ðr 2 À r e 2 ÞÞ ð22Þ If we choose control input t u as where k 4 is a positive constant. Then equation (22) can be rewritten as _ V 2 ¼ ðÀk 1 x e 2 À k 2 y e 2 À k 3 z e 2 þ a 1e y e À a 2e z e Þ=ðr 2 À r e 2 Þ À k 4 u e 2 þ u eDu Step 3: To stabilize the error variable a 1e , consider the following Lyapunov function Then the time derivative of V 3 becomes Define $ ¼ Àk 2 y e þ y e _ r=r ð27Þ It can be known The time derivative of equation (18) along with equation (12) and equation (29) becomes In order to make _ V 3 negative, define r ¼ r cos e and its desired value r d ¼ r d þ _ d cos qðcos e À 1Þ. The desired value of r is as follows where _ v 1 ¼ f v þD v and k 5 is a positive constant. Considering that r d is not a true control input, we define error variables r e ¼ r À r d ð32Þ We can obtain where d 1 ¼ ðr d À _ d cos qÞðcos e À 1Þ. Substituting equations (30) to (34) into equation (26) yields _ V 3 ¼ ðÀk 1 x e 2 À k 2 y e 2 À k 3 z e 2 À a 2e z e Þ=ðr 2 À r e 2 Þ À k 4 u e 2 À k 5 a 1e 2 þ u eDu þ a 1eDv þ a 1e v t r e cos À1 q Step 4: To stabilize the error variable r e , define Its derivative along with equation (32), equation (35) and equation (2e) becomes _ V 4 ¼ ðÀk 1 x e 2 À k 2 y e 2 À k 3 z e 2 À a 2e z e Þ=ðr 2 À r e 2 Þ À k 4 u e 2 À k 5 a 1e 2 þ u eDu þ a 1eDv þ a 1e v t r e cos À1 q þ r e ð_ r À _ r d Þ ¼ ðÀk 1 x e 2 À k 2 y e 2 À k 3 z e 2 À a 2e z e Þ=ðr 2 À r e 2 Þ À k 4 u e 2 À k 5 a 1e 2 þ u eDu þ a 1eDv þ r e ðf r þ t r =m 66 þD r À _ r d þ a 1e v t cos e cos À1 qÞþa 1e v t d 1 cos À1 q ð37Þ In order to make _ V 4 negative, control input t r is chosen as t r ¼ À m 66 f r À m 66Dr þ m 66 _ r d À m 66 a 1e v t cos e cos À1 q À k 6 m 66 r e ð38Þ where k 6 is a positive constant. Then equation (37) becomes _ V 4 ¼ ðÀk 1 x e 2 À k 2 y e 2 À k 3 z e 2 À a 2e z e Þ=ðr 2 À r e 2 Þ Àk 4 u e 2 À k 5 a 1e 2 À k 6 r e 2 þ u eDu þ a 1eDv þ r eDr þ a 1e v t d 1 cos À1 q Step 5: To stabilize the error variable a 2e , consider the following Lyapunov function Then the time derivative of V 5 becomes À k 4 u e 2 À k 5 a 1e 2 À k 6 r e 2 þ a 2e ð _ a 2e À z e =ðr 2 À r e 2 ÞÞ þ u eDu þ a 1eDv þ r eDr þ a 1e v t d 1 cos À1 q Define ¼ v p cos q d sin qðcos e À 1Þ À k 3 z e þ z e _ r=r ð42Þ It can be known The time derivative of equation (19) along with equations (16) and (44) becomes In order to make _ V 5 negative, define q ¼ q cos q e and its desired value q d ¼ q d þ _ q d ðcos q e À 1Þ. The desired value of q is as follows where _ w 1 ¼ f w þD w and k 7 is a positive constant. Considering that q d is not a true control input, we define error variables We can get where Substituting equations (45) to (49) into equation (41) yields _ V 5 ¼ ðÀk 1 x e 2 À k 2 y e 2 À k 3 z e 2 Þ=ðr 2 À r e 2 Þ À k 4 u e 2 À k 5 a 1e 2 À k 6 r e 2 À k 7 a 2e 2 þ u eDu þ a 1eDv þ r eDr À a 2eDw þ a 2e v p q e þ a 1e v t d 1 cos À1 q Step 6: To stabilize the error variable q e , define Differentiating V 6 with respect to time along with equation (47), equation (50), and equation (2d) yields _ V 6 ¼ ðÀk 1 x e 2 À k 2 y e 2 À k 3 z e 2 Þ=ðr 2 À r e 2 Þ À k 4 u e 2 À k 5 a 1e 2 À k 6 r e 2 À k 7 a 2e 2 þ u eDu þ a 1eDv þ r eDr À a 2eDw þ a 2e v p q e þ a 1e v t d 1 cos À1 qþq e ð _ q À _ q d Þ ¼ ðÀk 1 x e 2 À k 2 y e 2 À k 3 z e 2 Þ=ðr 2 À r e 2 Þ À k 4 u e 2 À k 5 a 1e 2 À k 6 r e 2 À k 7 a 2e 2 þ u eDu þ a 1eDv In order to make _ V 6 negative, control input t q is chosen as where k 8 is a positive constant. Then equation (52) becomes The complete Lyapunov function is as follows Here, if we define Then with the help of the inequality logðr 2 =ðr 2 À r e 2 ÞÞ < r e 2 =ðr 2 À r e 2 Þ, we have where g ¼ min k 1 ; k 2 ; k 3 ; k 4 ; k 5 ; k 6 ; k 7 ; k 8 f g . We know from the section "RLESOs design" that E ¼ ½D u ;D v ;D w ;D q ;D r T is bounded, so it is easy to know that d is bounded. According to the comparison principle, 2 the following inequality can be obtained Therefore Equation (59) means that the tracking error signals converge to a compression bounded value near the zero by increasing control gains appropriately and the system is stable.
Backstepping controller with TDs and RLESOs
TD can be used to provide the filtered version of the input signal and its differentiation as fast as possible. And it is well known that the operation of differentiation always causes the problem of "explosion of terms" when the traditional backstepping technique is employed. In order to solve this problem, we add TDs to the above-mentioned controller and the TDs are as follows f h ¼ f hanðr c ðkÞ À r d ðkÞ; R 2 ; hÞ f h ¼ f hanðq c ðkÞ À q d ðkÞ; R 3 ; hÞ where R 1 , R 2 , and R 3 are the acceleration factors to be adjusted and h is the sampling period. The function f hanðÁÞ can be found in Han 11 and the following statement can be found in Miao et al. 13 Corollary: Considering the TDs described by equation (60), if the input signals u d , q d , and r d are differentiable and bounded, and if there exist arbitrarily small values a, b, c, e k ðk ¼ 1; 2; Á Á Á ; 6Þ, then the solution of the considered TDs are Using u c , q c , r c and _ u c , _ q c , _ r c produced by TDs, the controller can be rewritten as u c þ k 9 ðu c À uÞ À m 11 x e =ðr 2 À r e 2 Þ À k 4 m 11 u e t q ¼ Àm 55 f q À m 55Dq þ m 55 ½ _ q c þ k 10 ðq c À qÞ À m 55 a 2e v p cos q e À k 8 m 55 q e t r ¼ Àm 66 f r À m 66Dr þ m 66 ½_ r c þ k 11 ðr c À rÞ À m 66 a 1e v t cos e cos À1 q À k 6 m 66 r e 8 > <
> : ð62Þ
Redefine equation (56) as where d is still bounded. And the system is still stable.
Simulation and discussion
In this section, the simulation and comparative analyses demonstrate the usefulness and robustness of the proposed controller. The simulation is performed in MATLABR2014a/ Simulink and the following two controllers are compared.
The simulation results are shown in Figures 3 to 8. As can be seen in Figure 3, the vehicle actual trajectory fluctuates around the desired trajectory under the backstepping sliding mode controller which is induced by the model uncertainties and disturbances, but the backstepping ADRC controller shows perfect performance. Clearly, the performance of the backstepping ADRC control laws is better than that of the backstepping sliding mode control laws in the presence of both internal parameter uncertainties and external time-varying disturbances.
The position error r e and x e , y e , and z e are displayed in Figure 4. It shows the proposed control laws can constrain the position tracking error of an underactuated AUV within the given boundary function all the time by using a time-varying barrier Lyapunov function and make position tracking errors converge to zero while backstepping sliding mode control method results in large tracking error.
The responses of velocity errors and virtual variable errors are shown in Figure 5. The velocity errors and virtual variable errors converge to zero rapidly and smoothly under the proposed controller, but the velocities and virtual variables fluctuate around desired values with the disturbances under the backstepping sliding mode controller. The results clearly demonstrate the effectiveness of the designed RLESOs.
The responses of actual control inputs are displayed in Figure 6.
The actual values of the total disturbances and the estimation values of the RLESOs are shown in Figure 7 which demonstrates that the designed RLESOs can quickly and accurately estimate the total disturbances D u , D v , D w , D q , and D r .
In addition, Figure 8 shows _ u c ,_ r c , _ q c and u c , r c , q c produced by TDs, which can rapidly converge to u d , r d , q d .
Conclusion
In this work, a backstepping ADRC controller based on the time-varying barrier Lyapunov function is developed to achieve three-dimensional trajectory tracking control of underactuated AUVs in the presence of internal parameter uncertainties and external time-varying disturbances and guarantee the satisfaction of predefined performance requirements. RLESOs are used to estimate and compensate for the total disturbances and TDs are employed to calculate the derivative of virtual control commands. From the simulation results, the proposed controller can remain satisfactory performance of an underactuated AUV in a complex environment and the simulation shows high accurate tracking capacity which demonstrates that using RLE-SOs can improve the performance of backstepping controller and using TDs can make the calculation of backstepping control laws simplified. Conclusions can be drawn that the backstepping ADRC controller has the ability to reject both internal parameter uncertainties and external time-varying disturbances. Experiments will be conducted to demonstrate the effectiveness of the proposed control laws and the effect of actuator saturation and faults will be studied in the near future.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
|
2020-03-19T10:14:54.555Z
|
2020-03-01T00:00:00.000
|
{
"year": 2020,
"sha1": "83620678bda5b5a2b0bc42640352b5a1d3714586",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1729881420909633",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "3a7e81e8e442b9c63493ed5bab3ccd10e911bd4b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
216619381
|
pes2o/s2orc
|
v3-fos-license
|
Osteoporosis in the age of COVID-19
As the world grapples with the crisis of COVID-19, established economies and healthcare systems have been brought to their knees. Tough decisions regarding redirection of resources away from the management of conditions deemed “nonessential” are being made. How can we balance urgent resourcing of our acute crisis while not abandoning the real need of patients with osteoporosis? This article offers a few practical solutions.
"In a crisis, every little thing counts" -Jawaharlal Nehru As the world grapples with the crisis of COVID-19, this pandemic continues to exceed our worst expectation in terms of the number of lives lost, the human suffering that has ensued and the rapidity at which established economies and healthcare systems have been brought to their knees. At the time of writing, the global death toll stands at more than 127,000, global economies and labour markets teeter on the edge and health systems in the developed world are forced to make tough decisions regarding re-direction of resources away from the management of conditions deemed "nonessential".
In these unprecedented times, the model of healthcare towards chronic disease may undergo indelible change. Individuals with chronic conditions, such as frail older and immune-compromised members of our community have been advised by their governments to avoid outdoor activity and limit their exposure to large groups of people, including attending hospitals and other centres of healthcare delivery. At the same time, resources will be redirected from chronic disease care programs to the fight against this rapidly evolving, acute global threat.
Clinical services designed to prevent morbidity and improve functional independence in older people, such as Fracture Liaison Services, will be scaled back, possibly for months, and suspended in their current form. Whilst Telehealth medicine may provide new opportunities [1], clinical decision-making around the assessment and management of osteoporosis, as well as many other chronic conditions, will be impacted.
Osteoporosis kills. Hip fractures remain a catastrophic event with a 1-year mortality of 20% and are a leading cause of morbidity and loss of functional independence in older members of our society [2,3]. Every year, approximately 740,000 people lose their lives around the world as a result of hip fracture [4]. An estimated 5.8 million disability adjusted life years (DALYs) are lost as a direct result of osteoporotic fracture every year [3]. Expert groups have raised the alarm on the public health emergency of osteoporosis, the reduction in bone density scanning and declining treatment rates amongst patients presenting with fractures [5]. One in 3 men and one in 5 women will experience an osteoporotic fracture in their lifetime [6]. The risk of re-fracture is greatest in the months following the first fracture, and the timely assessment and rapid treatment of subjects with fracture to prevent further fracture is an essential, established model [7].
How can we balance urgent resourcing of our acute crisis whilst not abandoning the real need of patients with osteoporosis? We propose rethinking the way we treat osteoporosis for the foreseeable future in the following ways:
Assessment of fracture risk
With suspended DEXA services and advice that vulnerable people limit their exposure to clinical spaces, bone density assessment of patients with suspected osteoporosis will no longer be feasible in the near term. This will increase our reliance on fracture risk calculators that do not rely on bone density values, such as FRAX® [8,9]. Fracture Liaison Services will need to consider fracture risk thresholds for their particular patient group to guide treatment initiation.
Careful education of patients receiving intravenous bisphosphonates regarding flu-like reactions
The risk of a flu-like reaction after an intravenous bisphosphonate infusion is substantial in treatment-naïve individuals and, in some studies, affected > 50% of individuals [10,11]. Strategies to reduce the risk of a flu-like reaction after IV bisphosphonates have variable success, and some groups may be more vulnerable than others. The possibility of an acute-phase reaction with fever and myalgia should be carefully discussed with patients, to avoid needless concern about this mimic of infection. The decision to use an intravenous bisphosphonate should consider the risk to patient or health-care worker posed by attendance at a healthcare facility (or even home) to access this therapy.
Avoiding denosumab interruption
Patients receiving long-term denosumab treatment may face a dilemma in weighing up the importance of receiving their treatment at regular 6-monthly intervals whilst also wishing to avoid attendance of their healthcare centre for the subcutaneous injection. Case series suggest that the risk of rebound increase in bone turnover and spontaneous vertebral fractures begins approximately 8 months following the last dose of denosumab [12], but this time interval may depend on a patient's duration of denosumab treatment, clinical course and baseline fracture risk [13]. We see a strong role for education programs for self-administration of denosumab, possibly in conjunction with Telehealth appointments. Patients receiving bisphosphonates should also be encouraged to continue treatment, bearing in mind the significantly greater risk of new fractures during drug intermission [14].
Changes in therapy
Decisions to escalate treatment, such as a switch from antiresorptive to an osteo-anabolic agent will be challenging in the current climate. Discussion of the benefits of such a change and demonstration of daily administration of agents such as teriparatide may require face-to-face clinical encounters. Patients may also wish to continue with their current treatment regimen in a time of uncertainty [15]. Decisions on bisphosphonate drug holidays may be guided by clinical judgement and, perhaps, bone turnover markers rather than DEXA measurement.
Home-based exercise programs
Patients with osteoporosis are advised to engage in regular weight-bearing exercise to improve their strength, balance, posture and reduce the risk of falls [16,17]. With the advice to avoid large gatherings such as community centres or local gyms, hone-based exercise programs should be considered. Such programs have been shown to improve the quality of life of older individuals, may improve muscle mass and are feasible [18]. The prescription of a suitable home-based program would require a multidisciplinary approach between physician and allied health members.
Generic advice to minimize risk of COVID-19
Patients with osteoporosis are also likely to be at high risk from sequelae of contracting COVID-19. Telehealth consultations should take advantage of promoting regional-specific advice to minimize infection by social distancing and/or isolation where appropriate.
In the age of COVID-19, treatment of chronic diseases such as osteoporosis should not become an unintended casualty. Clinicians need to adapt to the challenges posed by this crisis and consider ways to continue serving the most vulnerable amongst us, those with chronic disease with their own substantive morbidity and mortality. For in a crisis, every little thing counts.
|
2020-04-29T05:06:13.281Z
|
2020-04-28T00:00:00.000
|
{
"year": 2020,
"sha1": "4201038e536800d48f15280b1e986f4a84aba853",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00198-020-05413-0.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "4201038e536800d48f15280b1e986f4a84aba853",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
128879591
|
pes2o/s2orc
|
v3-fos-license
|
Protected Areas, Conservation Stakeholders and the ‘Naturalisation’ of Southern Europe
The critical analysis of conservation conflicts in Protected Areas (PAs) raises interesting questions about the redefinition of human-environment relations in the current ecological crisis. In recent years these debates have unveiled that, in the attempt to define the ‘proper’ place of humans in nature, PAs have embodied modern dualistic worldviews, which understand nature as a realm different from society, culture and ‘civilisation’. This paper suggests that the utilisation of these worldviews should be understood as part of the conceptual apparatus that enables a transition in management roles in Protected Areas, through which new empowered groups are granted the right to control and use natural resources. By analysing the practices and discourses of conservation stakeholders at the Cabo de Gata-Níjar Natural Park, in southern Spain, this paper shows that modern ideas of nature are essential to the collective appropriation of Cabo de Gata by new empowered groups because these ideas justify a new way of managing local resources in accordance with their own interests and desires. This has deep implications for the study of people-park conflicts and the problems associated to the promotion of more environmentally friendly ways of mastering the environment, which must be approached in the light of the power relations associated to the appropriation of territory and natural resources. The paper also concludes that, in order to understand how the nature-society dualism still dictates the way we should relate to the environment, we must trace the practices of those who bear this worldview and unveil the strategies and mechanisms that are used.
Introduction
Central to the constitution of current ecological crises are modernist environmental views that separate nature from society and which make possible large-scale exploitation and despoliation of natural resources (Arnold, 1996;Latour, 1993). This ontological separation is also integral to the emergence of modern environmentalism and many attempts to redress the ecological problems caused by capitalism (Pepper, 1996). The tensions surrounding this separation and attempts to deal with them are visible in numerous conservation conflicts, from disputes between biodiversity conservation and farming, fishing and grazing practices to the material and symbolic eviction of local groups from conservation-targeted areas (Redpath et al., 2013). Critiques of these conflicts contend that, despite promoting new environmental attitudes, most conservation initiatives have failed to question the nature-society separation inherent to ecologically depredatory initiatives.
Critical Social Sciences studying conflicts in protected areas (PAs) have produced especially incisive analyses of the links between conservation policies and the naturesociety dualism (West et al., 2006). They are indebted to the query of the US National Park model and its connection to a Western rhetoric of wilderness, authenticity and untouched nature (Cronon, 1995), which has largely inspired a State-centred conservation model in many other countries in Africa, Asia and Latin America (Adams, 2004). This model hinges on the coercive utilisation of the State's force and technologies of governance upon some social groups in order to create 'islands' of supposedly untouched nature; its most extreme manifestations being termed 'fortress conservation' (Brockington, 2002;Igoe, 2004).
Although this model has had a limited impact on the design of PAs in other regions such as Western Europe, where local inhabitants' presence and interests are to a certain extent acknowledged in conservation plans (Redford, 2011), their critical examination reveals that, in the attempt to define the 'proper' place of humans in nature, these PAs have also embodied dualistic environmental views. Drawing on an ontology named 'Western Naturalism' by Descola (2005), these conservation policies have incorporated ideas of nature and society in binary opposition, extending the belief that conservation depends a great deal on limiting the transformation of natural resources by humans (Santamarina, 2009). However, we need to take into account that areas such as those in the European Mediterranean Basin are broadly accredited as highly transformed and shaped by human beings (Grove and Rackham, 2001). This rather explicit counterintuitive utilisation of dualist ideas of nature is what converts European PAs in unique places for the study of Western naturalism within the conceptual apparatus that justifies the introduction of conservation policies. This paper seeks to make a contribution to this field by focusing on key members of the network of actors that support conservation policies in European PAs. Through the study of their situated interests, intentions, practices and environmental discourses, my analysis engages with two main bodies of work. The first is the study of the drivers of conservation policies and development initiatives, whose narratives are usually portrayed as apolitical in the process of decision-making (Escobar, 1998;Ferguson, 1990;Peet et al., 2010;Robbins, 2004) and associated debates about the role of scientists, economic lobbies and expert bureaucracies in policies of environmental governance (Brockington and Duffy, 2010;Jasanoff, 2004;Scott, 1998). The second is the study of Western naturalism (Descola, 2005) and the questioning it has been subjected to in recent decades (Castree and Braun, 2001;Haraway, 1988;Latour, 1993;Whatmore, 2001). In particular, I engage with ongoing debates that query if naturalism can become an empirical object of study for ethnographers, which would involve studying those who bear and enact this particular ontology (Candea and Alcayna-Stevens, 2012).
My analysis centres on a particular case: the Cabo de Gata-Níjar Natural Park, in the Region of Andalusia, southern Spain. The story of conservation in this extremely dry, coastal place features decade-long social disputes regarding the introduction of a more environmentally friendly way of managing natural resources, which has tried to hamper the expansion of mass tourism, industry and intense irrigated agriculture and to redress the ecological impact of non-intensive, customary practices, such as fishing, grazing and dry farming. I examine the key role that certain local stakeholders played in this process and how modern ideas of nature were utilised to transform Cabo de Gata from a historical farming, grazing and fishing area to a biodiversity reserve and ecotourism destination. For reasons that I will explain in due course, my analysis will centre on two specific groups: scientists and new ex-urban inhabitants.
My examination of the Cabo de Gata-Níjar Natural Park draws on a growing literature about the links between the nature-society dualism, people-park conflicts and issues of territorial reintegration and land-use reorganisation across Europe (Cortes-Vazquez, 2012;Green, 2005;Ruiz et al., 2009;Santamarina, 2009;Vaccaro and Beltrán, 2008;Valcuende et al., 2011). For example, in Andalusia the establishment of new PAs in the last quarter of a century, covering up to 20 per cent of the territory, has paralleled the promotion of ecotourism; a new economic activity articulated around Western rhetoric of wilderness, authenticity and untouched nature (Escalera, 2011). This has introduced land-use changes not only informed by environmentalist concerns but also by European Union (EU) macroeconomic interests, which aim to promote the growth of a service economy within economically marginal areas, replacing customary farming and fishing practices whose reliance on subsidies makes them clearly deficient within a globalised economy (Coca, 2008;Gonzalez, 1993).
In order to conceptualise similar processes of ecological redefinition, environmental governance and land-use reorganisation and their connection to the actions and interests of empowered groups, usually with an urban background, some scholars have proposed such terms as 're-territorialisation' (Vaccaro and Beltrán, 2008) and 'heritagisation' (Frigolé and Del Mármol, 2009;Quintero, 2009). More or less explicitly, these terms hinge on an approach to the idea of territory as an area that a particular group claim to be their own, granting some of its members the control and management of natural resources (cf. Godelier, 1986). With a similar rationale, I will refer to the issues analysed in this paper using the term 'naturalisation' 1 for it stresses the essential role of the idea of nature in justifying the introduction of conservation policies, which I will approach as the collective appropriation of a territory and its resources by new empowered groups. 2 This paper starts with a historical review of the Cabo de Gata-Níjar Natural Park. I will describe the collaborations in which scientists and new ex-urban inhabitants engaged in support of the establishment of this Natural Park during the 1970s and 1980s. I will then analyse the conceptual apparatus that underlies the discourses these conservation stakeholders have enacted; discourses whose aim is both to justify their support and to delegitimise those that oppose conservation plans in Cabo de Gata. I will particularly emphasise their strategic utilisation of the naturesociety dualism amidst notions of livelihood, value and land-use rights. Finally, I will show the extent the Park policy embodies these interests and environmental narratives.
This analysis will permit me to reflect about the impacts conservation initiatives have on vast areas across Europe and the influence of certain ideas of nature in the material and symbolic reshaping of these territories. This will allow for a more empirically based examination of the political dimensions of modernist ideas of nature, the intimacies between these ideas and certain interests and the challenges this poses to the ethnographic study of people-park conflicts, when we approach them as conflicts over the control of certain territories and their natural resources. 3 or animal) so that it lives wild in a region where it is not indigenous and (3) regarding as or causing to appear natural and explaining (a phenomenon) in a naturalistic way. Although the mainstream use of the term regards the first definition, the other two meanings are closely related to the main subject of this paper. Despite the confusion that this might generate, the reason for using the term naturalisation concerns its explanatory power. I believe the term summarises the process of material and symbolic production of a space in accordance with the particular environmental views that are inherent to Western Naturalism, which is the main phenomenon I study here. As such, I contend that the term is much more suitable than those more general ones, including re-territorialisation and heritagisation. Whether there might be connections between this phenomenon and others covered by the term Naturalisation, it is by no means my intention to explore them in this paper. 2 This reflects the large extent my analysis is influenced by Foucauldian and Marxian approaches to the society -environment nexus and the production of nature (Castree, 2000; as well as by related analysis about the links between conservation policies and the distribution of privilege, fortune and misfortune (Anderson and Berglund, 2003;Brockington et al., 2008). 3 The findings I am presenting here are the result of a six-year research, working with members of the Department of Social Sciences at the Pablo de Olavide University (Spain) on two applied research projects involving ethnographic fieldwork in several protected areas in Andalusia, southern Spain (project references: SEJ2004/SOCI-06161 and P06-RNM-02139). I carried out my own research at the Cabo de Gata-Nijar Natural Park alongside these two projects.
Using an ethnographic approach based on semi-structured interviews and participant observation, I focused on social conflicts following the introduction of the park's management and land-use zoning plan. I opted for qualitative data since my intention was not to survey different positions towards conservation initiatives, but to obtain a 'thick description' -in Geertz's The birth of a Natural Park The Cabo de Gata-Níjar Natural Park is a 495 km 2 coastal PA located within one of the driest regions in Western Europe. A long history of scarce rainfalls, erosion and resource misuse has generated extremely poor soils in the area. To date only animal and plant species fully adapted to its desert conditions are able to grow wild. Barren plains and hills, characterised by the absence of trees and shrub, dominate the landscape. These are interspersed with small farming fields, where wheat and barley are grown for the feeding of goats and sheep herds (around 300 animals per herd). The protected marine area, which covers almost 25 per cent of the Park, comprises a sandy seabed with small groupings of seaweed and some reefs. It hosts a small but diverse in-shore fishery that is exploited by local fishermen. Other activities in the Park include ecotourism and small initiatives of fish farming and intense agriculture in plastic poly-tunnels.
Like most rural regions in Andalusia, over the last half of a century this area has undergone major changes: the 1950s and 1960s agricultural crisis, an intense de-agrarianisation process, strong rural-urban migrations, transformations linked to the political and economic integration within the EU, interventions derived from a growing concern about environmental problems and changes related to the incorporation into global markets (cf. Delgado, 2010). These phenomena have deeply influenced Cabo de Gata's socio-economic conditions, moving from decades of deep economic crisis, impoverishment and marginalisation to a growing dependence of a service economy, the intervention of globalised discourses and disputes over conflicting development strategies. As I will explain in the next few paragraphs, these changes show different aspects of a transition in management roles over natural resources, which at times has become a rough and drawn-out process.
In the early part of the twentieth century most inhabitants in Cabo de Gata were small landowners and landless labourers who were working in the local mining industry and in the estates of just a few big and powerful landowners. A deep economic and social crisis was starting to affect this place. Intense mining, farming and grazing activities in the previous centuries had caused severe soil degradation and decreasing yields (Sánchez, 1996). Moreover, the dependence of small landowners and landless people on wage labour 4 made the local economic situation quickly worsen following the end of both mining activities and the exploitation of large plantations of esparto (1973) terms -of the senses and meanings given to the changes occurring in Cabo de Gata from the day-to-day experiences of different groups. For this paper, I am using data gathered via semistructured interviews and participant observation with Park Managers, scientists, NGO members, ecotourism entrepreneurs and other new exurban inhabitants, as well as those obtained from an in-depth literature review and the analysis of Park's conservation plans and other secondary information sources. 4 This dependence on wage labour is to be understood in relation to the privatisation of common lands, which mostly benefited big landowners (Góngora, 2004) and transformed the livelihoods of most small landowners and landless people, who relied on the utilisation of common lands as grass in the estates of big landowners. This crisis also dragged down local in-shore fishing activities, which depended on the commercialisation of their catches in the nearby farming and mining towns (Compán, 1977). By the 1950s and 1960s the area was already renowned as one of the poorest and most marginal in Spain; farmhouses, villages and lands were progressively being abandoned as migration became unavoidable for those with fewer economic resources (Goytisolo, 2004(Goytisolo, [1960).
The 1970s and 1980s brought about a sea change. Intense irrigated agriculture under plastic poly-tunnels and mass tourism quickly spread from neighbouring areas, where they were becoming extremely successful. Population levels began to grow; the area's economic potential attracted both new investors and some of the people who had emigrated years before (Fernández and Egea, 1991). 5 Another extremely important phenomenon was taking place alongside these changes. A growing number of people from both Spanish and northern European urban areas started to settle down in the region. The strong urban development experienced in most coastal areas in Spain granted the barely inhabited, desert landscape of Cabo de Gata new meanings and values. These ex-urban inhabitants -mostly artists, students and young entrepreneurs -interpreted the abandoned condition of the place in a 'naturalistic' way. They felt they had discovered a 'remote and natural' space -the 'ideal' place to start a new, 'alternative' and 'genuine' life far from the 'artificiality' of modernity and city life. 6 Furthermore, a much more powerful phenomenon was also to deeply impact the region in those years. Following a nationwide political swift that tried to leave behind the environmentally exploitative policies introduced during the early Franco dictatorship and to come closer to European political trends, the 1970s and 1980s witnessed the establishment of multiple new PAs across Spain. 7 Biologists, geologists and botanists played a key role in this by informing the selection of those areas with remarkable ecological values that were worth protecting (Mulero, 2002). In Cabo de Gata, these scientists joined the new ex-urban inhabitants in their effort to stop the expansion of poly-tunnels and mass tourism, which they both deemed a threat to the local ecological and aesthetic qualities. As a result, a local environmentalist movement emerged, requesting the introduction of conservation measures (Castro and Guirado, 1995). a source of fodder, graze and fuel. As Provansal and Molina (1991) analyse, these activities were essential economic complements because of the poor yields that were obtained from agriculture. 5 Between 1900 and 1970, the population level decreased by a striking 38 per cent. This trend changed from the 1980s onwards. The number of people living in Cabo de Gata has doubled since then (from 2700 to around 5700) (Source: Andalusia Statistic Institute. http://www. juntadeandalucia.es/institutodeestadisticaycartografia/; last access: July 2011). 6 Similar population movements, which Vaschetto (2006) terms 'utopian migrations', have been studied in other parts of the World, for example in South America. 7 Although some conservation initiatives in Spain date back to the early twentieth century, it has been from the 1970s onwards that the number of new PAs increased as never before. In Andalusia they were covering almost a quarter of the territory in just a few years. Favoured by this 'greening' political shift, their demands were quickly successful and the Andalusia Regional Government established the Cabo de Gata-Níjar Natural Park in 1987.
The development of a 'green' tourism industry within the Park became one of the main priorities for policy-makers, with the aim of providing an environmentally friendly economic alternative for local inhabitants (Castro, 1989). However, the goal was not only to address concerns about the local populations' means of living but also EU macroeconomic interests. The latter were seeking the promotion of a service economy in marginal regions across the EU in order to develop multifunctional rural areas and reduce their dependence on highly subsidised farming and fishing practices. Another continent-wide affair was also at stake as new Parks were expected to play a part in the territorial redistribution that sought to compensate exceeding externalities from highly industrialised and urban areas in central Europe with a protected periphery, where recreational practices were being fostered. As a result, Cabo de Gata, like many other locations across 'peripheral' Europe, witnessed a deep land-use reorganisation process, led by supra-local and supra-national institutions with the local support of increasingly empowered groups with an urban background. 8 However, most of the small landowners, farmers, shepherds and fishermen who still inhabited this area were reluctant to accept these transformations, whilst those aspiring to capitalise on irrigated agriculture and mass tourism totally opposed them. The conservation policy introduced in the area not only promoted ecotourism but also banned poly-tunnels, industrial developments and mass tourism and restricted customary uses in many areas, including grazing, dry farming and in-shore fishing. As discussed elsewhere (Cortes-Vazquez, 2012; Cortes-Vazquez and Zedalis, 2013; Valcuende et al., 2011) a troublesome relationship between these different groups and conservation supporters was to condition social life in Cabo de Gata to present times, especially as the Park mostly occupied private lands.
Furthermore, the argument that these farming plots, old mining areas and in-shore fishing grounds were natural areas worth protecting from human aggression contradicted what most farmers, shepherds and other local groups believed to be the 'proper' way of using local resources in a region they judge historically poor and ecologically hostile. A closer look at the conservation measures introduced in the Park will help us understand the source of these conflicts and the role played by the environmental redefinition that accompanied the creation of this Natural Park. 9 But 8 For more information at EU level, see Baker et al. (1994). For specific details on how this affected the framework Region of Andalusia, see Marchena (1993). Some notes about its influence in Cabo de Gata can be found in Provansal (2003). 9 Although in this paper I focus exclusively on conservation supporters, I believe further clarifications about local population's position against the Park would be welcome. As I analyse elsewhere (Cortes-Vazquez, 2012;Cortes-Vazquez and Zedalis, 2013;Valcuende et al., 2011), it has been small landowners who have more fiercely opposed conservation measures. Reasons behind this concern the nature of irrigated agriculture in poly-tunnels, which report substantial before doing this, we need to analyse in more detail the interests and desires of those social groups that supported the transformation of this historically farming, grazing and fishing area into a biodiversity reserve and ecotourism destination.
Conservation and local stakeholders
As previously mentioned, in the late 1970s amidst a greening political context the Spanish Government asked a group of scientists to produce a catalogue of areas that could be worth protecting within the Almeria Province, where the Natural Park is located. One of the activities carried out by this group of experts -mainly geographers, geologists and biologists from different University Departments and National Research Councils -was to study Cabo de Gata's most outstanding features: its volcanic rocks, rare species of flora and fauna and the functioning of its uncommon desert ecosystems. Their intention was to assess the natural value of this region. 10 Like in many other contexts where expert knowledge -uncritically portrayed as objective and apolitical (Franklin, 1995;Jasanoff, 2004) -has been used in support of politically driven decisions (Fairhead and Leach, 1996;2003), these experts' research findings were instrumental to changing the perception of Cabo de Gata from a barren land to a biodiversity hotspot and therefore key to supporting the idea that it was worth protecting.
Fuelled by these findings, in less than a decade environmentalist ambitions quickly expanded from the protection of only a few metres of coastal lagoons in 1978 to a much larger area almost the current size of the Park in 1987. 11 Some of these scientists settled down in the area and started to work in the Park Office and other regional and mixed environmental agencies. Some of those who remained at their home universities and research institutions also kept close links with this place by becoming authoritative members of the Park Governing Board. 12 revenues without requiring large estates. Moreover, this positioning also concern the historical relationship maintained with big landowners. These have been able to capitalise both in mass and nature tourism at the same time that benefited from the revalorisation of lands following the Park establishment. In addition, in recent years the Andalusian Government has been purchasing private lands in Cabo de Gata as a strategy to improve conservation management. Park Managers acknowledge that acquired lands mostly belong to big landowners who were able to offer larger plots at a lower price. Small landowners perceive this with distrust. 10 More details about this process in Capel (1980). 11 The Park extension increased once again in 1994. 12 The Park Governing Board (Junta Rectora, in Spanish) is an advisory consultant panel formed of different groups of stakeholders (scientists, NGOs, farmers, fishermen, local and regional government and tourism entrepreneurs, among others). They periodically meet to discuss issues concerning the Park management. However, despite its name, their only function is to provide advice, lacking any management capability or power to change the Park policy. These are exclusively on the hands of Park Officers and the different environmental bureaucracies of the Andalusia Regional Government.
Two particular aspects of expert influence on this area appeal to my analysis. First, their findings not only supported the establishment of the Natural Park but have also informed conservation management up to this time. The values they have identified, which include endemic plants, communities of migrant birds and exceptional marine ecosystems, are key arguments to defining which parts of the Park deserve stricter protection. Furthermore, they are also essential to deciding which human-related elements must be preserved. These include some archaeological and ethnologic items because of their architectural singularity (eighteenth century coastal towers, constructions linked to old mining and farming activities such as farmhouses, water cisterns, terraces, wells and mills) or because of their historical contribution to the maintenance of present ecological conditions (for example, the links between dry farming and rare birds' nesting habits). 13 Second, some of these scientists and experts, in their role of Park Managers, officers, rangers and guides, have become responsible for meeting the Park's conservation goals. As such, their views about the 'proper' place of humans in nature -based on the Western modern nature-society dualism -permeate their decisions and strongly influence the way the Park is managed. The next quote is a good example: The role of Park Managers is to decide which conservation initiatives must be implemented. For example, if there is an interest in restoring a watercourse, we need to take into account its multiple functions. As key components of traditional mechanisms for the collection of rainwater, watercourses provide an essential service for the irrigation of orchards and domestic water supply. As such, they must be kept free of weed and shrubs. This makes sense, right? But if we study a particular watercourse and discover that it has now become a preying area for Bonelli's eagles, for example, our decision will then consider this new ecological function and prioritise it, because it is more important than that of a traditional water supplier. (Male, biologist, member of the Park management team) 14 Experts and expert criteria have also been instrumental to the development of a 'green' tourism industry in Cabo de Gata. This mirrors a worldwide trend that regards tourism as the key to overcome the contradictions between nature conservation and economic development. New modalities of tourism (ecotourism, sustainable tourism, nature 13 What is particularly interesting of these human-related components is that they are deemed the remaining signs of the 'traditional' inhabitants of Cabo de Gata, who are considered to have held a 'wise' know-how that permitted them to adapt to a dry environment in an efficient, environmentally friendly way. This form of regarding local inhabitants has acquired great importance in the Park policy in current years. Behind it, there is a serious attempt to integrate human presence within the conservation landscape (see similar cases in Anderson and Berglund's (2003) edited volume). Yet, as I will discuss in forthcoming sections and these authors have also stressed, the depiction of 'traditional' humans and their role, in a style that recalls the kind of ahistorical narratives analysed by Wolf (1982), raises further problems. 14 To facilitate reading, this and the other of quotations in this paper have been translated from Spanish into English. tourism) have caught the attention of institutions and policy-makers all around the World (Ceballos-Lascuráin, 1996;West and Carriers, 2004). In Cabo de Gata, experts have embraced ecotourism as the best economic alternative for local inhabitants because it supposedly permits them to live in this Natural Park by means of a 'lowimpacting' human activity. 15 In recent years, 'visiting a natural place' has become a powerful gimmick that attracts thousands of tourists to Cabo de Gata. This has required the intervention of Park officers, who have developed several initiatives to guarantee that 'nature' is attractive and accessible for visual consumption. For example, one of the Park's attractions is that it provides tourists with the possibility of 'being closer to nature' through quiet countryside walks far from noisy, crowded and polluted urban environments. To make this possible, Park Managers have worked towards the design and construction of a network of pathways, with access to vantage points, signs and an efficient rubbish collection system. Further examples include the construction of costly infrastructure (Visitors Centre, Information Points, Campsites, Botanic Gardens) and the edition of information material (Maps, Park Guidebooks). The next quote shows the importance Park Managers give to this role: In relation to tourism, we do several things: environmental reports, projects related to public use and conservation. We shouldn't talk strictly about tourism, but about infrastructure, projects and dissemination material that enhance the Park public use. [ . . . ] For example, we started by building up a Visitors Centre that, like in other Parks, provides a broad overview of this space. Then we established a few Information Points alongside the coast, which are open during the high season. We also worked toward the creation of a network of pathways that span along the Park and covers its most salient particularities: inland pathways, coastal pathways and other theme pathways that focus on geological values, ethnographic or cultural aspects . . . [ . . . ] All these are elements that facilitate the Park touristic use and that differentiate this place from any other that is not a Natural Park. (Female, biologist, member of the Park management team) The importance of ecotourism gives those that engage with this economic activity a protagonist role in conservation efforts. On one hand, Park Managers have good reasons to address ecotourism entrepreneurs' requests because they have become conservation's best allies at the local level. On the other hand, these entrepreneurs, a vast 15 The literature specialising in tourism and Protected Areas has questioned this supposedly flawless relationship and has stressed the influence exerted by tourist expectations -based on a Western rhetoric of wilderness, authenticity or primitive life (Vivanco, 2001;Wels, 2004)on conservation management because they urge Park Managers to take actions so that these areas become attractive to potential consumers (West and Carriers, 2004). In other words, to achieve this win-win partnership between conservation and development through tourism, Parks must remain attractive and accessible for tourists, which eventually make conservation to be somehow dependant on the success of tourism initiatives. majority new ex-urban inhabitants, 16 have also good reasons to give their support to the Park's policy for nature protection is essential to preserving their own livelihood. As such, ecotourism has prompted the emergence of a new sense of ownership and belonging, which, as I am about to explain, has developed during the last few decades.
The number of new ex-urban inhabitants in Cabo de Gata has steadily grown since the first few started to arrive in the late 1960s and 1970s. Fleeing from urban areas in Spain and other parts of Europe -France, the UK, Denmark, Switzerland, Germanythey were on the lookout for 'natural' locations 'untouched' by urban development and modernity, where they sought to commence a new 'alternative' lifestyle. Although they first came as tourists, some of these people ended up settling down in the area, purchasing or renting old farmhouses at a relatively low price. 17 As already mentioned, they lobbied with scientists for the establishment of the Natural Park. But they also played an important part in the development of ecotourism and, through this, in the definition of the Park's conservation policy and the transformation of this area.
The development of the first few ecotourism initiatives, mostly accommodation and outdoor activities, was key for these new groups to settle down in the area. These initiatives hinged on the utilisation of new images and narratives that portrayed this peripheral region as a place that remained traditional, natural and untouched by modernisation; images and narratives that urged tourists to visit the area while it stayed authentic and unspoiled. 18 The establishment of the Natural Park soon became essential to maintaining these activities since the quick development of intensive agriculture, mass tourism and industry was threatening the area's main attractions. In fact, it was by virtue of the Park policy that this place became an 'island of untouched nature' and a 'natural paradise' surrounded by plastic poly-tunnels, factories and seaside resorts, a phenomenon that paradoxically increased the area's appeal and attracted more new ex-urban inhabitants. The next quotation is an example of how this was experienced first-hand: We opened this hotel in 1988. There were very few people working on tourism here by then. I had studied in Germany and when I was about to finish my degree and write my dissertation, I came here to spend a whole winter. It was the first time I saw this 16 Some other local inhabitants (old farmers and fishermen and their descendants) have also initiated some tourism activities, although in a significantly lower proportion. This is clearly manifested in the marginal position they occupy in the main association of ecotourism entrepreneurs that exist in the Park (Natural Park Tourism Entrepreneurs Association [ASEMPARNA in Spanish initials]). 17 Housing prices had plummeted due to the deep economic crisis in previous decades and the high levels of emigration that were being experienced at that time. Further particularities such as exchange rates between national currencies (Deutsche marks or British pounds being much stronger than Spanish pesetas) also explain this. 18 It is interesting to note the references to a modernist, lineal sense of time in these images and narratives. For an explanation of the relationship between dualist ideas of nature -society and this sense of time, see Latour (1993). abandoned farmhouse [the actual hotel]. After asking some people, I finally got hold of the owner and we ended up buying it. The first thing we did, as I had studied languages, was offering Spanish courses for foreigners. Then we started Tai Chi and Yoga courses. We got in touch soon with a travel agency which was interested in promoting these kinds of initiatives. They included us in their catalogue. You know, there are lots of people in Germany that look for 'different' places to go on holidays, places outside mass tourism circuits . . . people that travel during low season. Those are the kind of people that come to my hotel. They are usually middle class teachers, doctors . . . who are looking for quiet, original, authentic places. The kind of places you find in Natural Parks. (Male, nature tourism entrepreneur, originally from Germany) Through the dissemination of these images and narratives about Cabo de Gata, these new ex-urban inhabitants have also conditioned the way local resources are managed. The rhetoric that they use in printed vouchers, advertisements, postcards, websites, blogs and oral communications has contributed to the definition of what is 'proper' and 'improper' in this Natural Park. As such, they place uninhabited valleys and endemic plant species in opposition to cities, plastic poly-tunnels and irrigated crops; ageing mills, old farmhouses and small hotels versus factories and tourism resorts. Furthermore, this rhetoric also differentiates between 'proper' and 'improper' Park users and inhabitants, featuring traditional peasants, ecotourists and scientists in sharp contrast to mass tourism, poly-tunnel farmers and urban developers.
As we will see in the last section of this paper, the analysis of the Park policy permits us to identify the extent conservation measures embody these people's interests and desires. However, a closer examination of the different discourses that conservation stakeholders enact in relation to conflicts with other local groups (farmers, fishermen, shepherds, landowners, urban developers) will furnish us with further evidence of the extent conservation supporters collaborate towards meeting common goals and the ideas of nature they use to do so. We focus on this issue in the next section.
People-park conflicts and different ideas of nature
The establishment of the Cabo de Gata-Níjar Natural Park triggered decade-long conflicts with some local groups. For example, in the late 1990s social tension escalated when many farmers and landowners tried to install plastic poly-tunnels. They had witnessed the successful development of this activity in neighbouring areas and were determined to capitalise on it. However, to their surprise, they discovered that this was forbidden since 1987. They argued that the Natural Park had been established without a proper and broad popular consultation and that this had made many landowners oblivious to the new regulations. They reacted with anger and created a pressure group (the ARROPE association) to overturn the Park policy. Even worse, in the heat of the moment some decided to disregard the bans and built up polytunnels on their own lands.
Conservation stakeholders witnessed these issues with horror and decided to take action. An environmentalist NGO 19 and an ecotourism association 20 emerged in this context to join forces. They organised meetings and demonstrations, denounced illegal practices -such as new poly-tunnels -on websites and newspapers 21 and urged Park Managers and governmental agencies to take exemplary actions against offenders. They decried that the growing presence of intense irrigated agriculture in the Park was a threat to its natural values and accused local farmers and landowners of putting personal gain before public assets. They lamented that these groups had forgotten the supposed 'know-how' that allowed their ancestors to make a living off this place without harming the environment. Take the following quote as an example: Those that had a small plot, where they used to grow wheat and barley, now want to install poly-tunnels. You can't even breed a pair of goats with the yields you get from dry farming, but you can make a fortune out of poly-tunnels. That is good business. So, when they get in trouble with the Park, they can't come saying: 'Oh, we are so poor!' No, we are not that stupid . . . (Male, freelance and NGO member, originally from Almeria city) Similarly, the support given by some local groups to the development of mass tourism in the Park has also raised conflicts with conservation stakeholders. Especially controversial have been certain initiatives developed within or near the border of the PA. Because of its international notoriety, the Algarrobico Hotel case is perhaps the best example. The construction of this hotel on the Algarrobico beach, next to the town Carboneras, in the earlier part of the 2000s sparked the outrage of conservation stakeholders. They vilified it and accused it of trespassing the red line that separates the Park from its surroundings, while criticising the damage it would cause to valuable ecosystems. They took their actions even further this time as they brought the case into court and initiated an international campaign to denounce that this hotel was illegally constructed within a PA. 22 Finding out legal responsibilities was to become a drawn-out judicial process. Meanwhile multiple demonstrations organised by both hotel supporters and hotel detractors revealed widening social divisions within the Park. Some local groups defended that the hotel was essential in order to create much-needed jobs. On the contrary, conservation supporters alleged that local people were unable to appreciate the 19 Amigos del Parque Natural Cabo de Gata-Nijar (Friends of the Cabo de Gata-Nijar Natural Park): http://www.cabodegata.net/. 20 ASEMPARNA: http://www.cabodegata-nijar.es/. 21 See, for example: 'Ecologistas en Acción denuncia ' (1999) and 'Ecologistas denuncian' (1999). 22 Different news items that appeared in several international newspapers are evidences of this: 'Building blight ' (2006), 'Costas turn back tide ' (2006), «Espagne» (2009) and «Naturpark in Spanien» (2009). natural value and beauty of this area, its exceptional features and the necessity to avoid its destruction.
The analysis of these conflicts, as well as many of the others that have emerged in relation, for example, to land ploughing, in-shore fishing, grazing and fish farms, 23 reveals underlying aspects of conservation stakeholders' arguments that go beyond the value of natural assets and that relate to new senses of ownership and belonging. As such, it is frequent to encounter the opinion among these stakeholders that nature conservation is also a way of preserving people's livelihood. For them, the construction of new hotels and poly-tunnels and also the grazing of endemic species or fishing in marine reserves threaten the values on which ecotourism relies. They reprimand those in favour of these activities for damaging nature and impairing the successful development of a green tourism industry. In other words, they reproach the locals for going against their means of living, which depend on the protection of nature from certain forms of human exploitation. The following quotes illustrate this: They don't care that this is a public asset... They don't care that this is a Natural Protected Area . . . With the Algarrobico Hotel, the illegal poly-tunnels, urban developments . . . they are just looking to fiddle the system! There is where you realise how weak this is . . . This is a Nature Park and here both nature and humans are protected: those who live inside the Park and whose main activities depend on the Park being just the way it is . . . With the Algarrobico Hotel, what kind of shameless person would sell this hotel on the basis that it's in a Natural Park? This is the most Anti-Natural Park thing I've ever seen! (Female, ecotourism entrepreneur, originally from Madrid) This [place] has a big problem with overcrowding. And all these new big hotels are only bringing more and more tourists. This place has very sensitive areas where even walking might cause a great impact on rare plant and animal species, because they can be so easily damaged. You can kill all these endemic species if you are not careful enough. And if we ruin this, we ruin our natural heritage and our main source of income. They say we [ecotourism entrepreneurs] are environmentalists, like in a negative way . . . But, apart from our education and ideology, I always answer: Those of us who live off this Park are its main defenders, because it not only concerns our ideals but also our way of subsistence, our life! (Male, ecotourism entrepreneur, originally from Almeria city) Similarly, scientists and experts also acknowledge this problem and agree that conservation measures are justified not only because they preserve the Park's natural values but also because they preserve the livelihood of the Park inhabitants. Take the following quote as an example: Tourism plays a very important role in the Park, especially because most other activities [farming and fishing] are currently a minority. The main income for the Park inhabitants is tourism, but this is threatened because most tourists only come in high season and also because they all want to have their own summerhouse in the Park and that damage nature. That is incompatible. (Female, biologist, member of the Park management team) Despite some punctual internal disagreements between Park Managers, ecotourism entrepreneurs, NGO members and scientists, 24 the protection of Cabo de Gata has always remained a common and shared goal. As we have seen throughout this and the previous section, different and multiple situated interests and desires are behind the support given to the Park policy. What remains to be examined is how the Park policy has addressed these interests and desires and how modern ideas of nature have been used in order to justify the transition in management roles that made possible the transformation of Cabo de Gata from a farming and fishing area to an ecotourism destination and biodiversity reserve.
Conservation measures and the naturalisation of Cabo de Gata
The most important management tool in the Park is the land-use zoning plan. The first plan came into force in 1994 and divided the Park's total extension into 4 zones (A, B, C, D) and 10 subzones, bounding each of them to a different degree of protection. Whilst most activities were forbidden in Zones A and B, many were allowed in Zones C and D. This zoning plan was in force for 14 years, until a new one was approved in 2008 without major differences. I summarise the 2008 plan in Table 1. 25 What is particularly relevant to my discussion is how the Park land-use zoning plans make use of certain environmental narratives based on the nature-society dualism in order to justify different levels of restriction. For example, Zones A, where the most restrictive measures apply, are deemed virgin natural areas barely transformed by human action. Only conservation practices and scientific research are allowed in them. In Zones B the Park policy allows some non-intensive farming, fishing and grazing practices as well as ecotourism. Zones B are regarded as semi-natural areas, where 'traditional' practices have shaped the local ecosystems in such a non-aggressive way that there are still significant values in them. In Zones C and D most practices are allowed, even intensive agriculture, although under the supervision of Park Managers in order to avoid any harm to those natural values found in other parts of the Park. This 24 Further clarification is required at this point. Although for the sake of clarity and brevity I have made an effort to present the position of all these different conservation stakeholders as somehow homogeneous, the real situation is not so neat. Over the past two decades, there have been several conflicts between them, in particular as NGO members and ecotourism entrepreneurs urged Park Managers to take more severe actions to stop the installation of polytunnels, illegal ploughings, fish farming and the construction of new hotels. 25 The original documents can be acceded at the environmental section of the Andalusian Government website: http://www.juntadeandalucia.es/medioambiente/site/portalweb/; last access: June 2011. more permissive regulation is justified because Zones C and D are said to lack significant natural value due to years of human exploitation.
This division into completely natural, partially natural and barely natural areas shows the instrumental use of dualistic ideas of nature-society in the justification of land-use changes. The modern, Western ontological premises that underpin these ideas, as discussed by Latour (1993), explain that the more transformed by humans an area is judged the less natural it is considered, thus involving less restrictive measures. However, a closer look into the kind of activities that are either allowed or forbidden in each zone unfolds a paradox: the counterintuitive utilisation of modernist ideas of nature, in as much as the land-use zoning plan renders certain activities either essential or compatible with nature conservation. As I will explain in the remaining paragraphs of this section, this suggests that these ideas of nature are utilised to grant certain groups the right to use natural resources as well as to disenfranchise others, instead of protecting the Park from human intervention.
An analysis of the mechanism that makes the above possible shows that the Park land-use zoning plan not only conveys a redefinition of the physical environment but also of social relations. This redefinition of social relations is articulated around a new social hierarchy, which hinges on a new categorisation and classification of human-environment relations into: (1) modern and intense activities, such as agriculture in plastic poly-tunnels, mass tourism, mining and industry, which are regarded as potential destroyers of the Park's assets; (2) customary, non-intensive farming, grazing and fishing practices, which are considered somehow respectful to the environment because they have historically produced valuable semi-natural ecosystems by making use of an ecologically wise know-how; (3) modern, environmentally friendly activities, such as ecotourism, that make conservation and economic development compatible because they exclusively rely on the visual consumption of nature and (4) scientific research, environmental education and conservation management, which are considered essential to the correct preservation of nature. For the sake of clarity, I simplify this in four different roles, defined in terms of the relationship they are said to maintain with nature: (1) nature destroyers, (2) nature producers, (3) nature consumers and (4) nature protectors, respectively. Table 2 summarises the compatibilities and conflicts associated with these roles in the different parts of the Park, according to the land-use plans. The table also permits us to visualise the new social hierarchy that regulates land-use rights and distribute uneven access to natural resources among the different local groups. As such, it becomes the most important mechanism to meeting conservation goals, including not only the supra-local concerns that aim to protect European peripheral regions like Cabo de Gata while promoting a service economy, but also the situated interests and desires of local conservation supporters.' Scientists and experts are on top of this hierarchy. Their research and administration activity are allowed all along the PA, even in Zones A, which should supposedly be kept free from human action. A step below in this hierarchy we find 'traditional' farmers and fishermen as well as ecotourists and ecotourism entrepreneurs. The people belonging to these groups are allowed to carry out their practices within some parts of the Park (Zones B, C and D) but not in Zones A. However, they are not completely free to decide how to use local resources for their activity is always either directly or indirectly monitored and controlled by experts and scientists. Finally, at the bottom of this hierarchy there are those who engage with modern, intensive activities (poly-tunnels, urban development and industry). Their activities are banned within most parts of the Park, and, in the rare case they are allowed -mostly in those areas catalogued as non-natural -they need to adjust to the multiple requirements imposed by those people at the top of this hierarchy, so that these activities do not impact natural or semi-natural areas.
A final critical examination of this hierarchy in relation to the issues analysed throughout this paper permits us to identify the intimacies between the Park conservation policy and the situated interests and desires of those stakeholders that this paper centres on. Although both 'nature producers' and 'nature consumers' have the right to use local resources in many parts of the Park, the analysis carried out in the above sections suggests that they have a different capacity to influence the Park policy. As I hope to have already demonstrated, the relation between experts, Park officers, ecotourism entrepreneurs and the new ex-urban population at large is close enough to deny that 'nature consumers' hold a much more privileged position than 'nature producers' in what concerns their capacity to influence the Park policy.
Furthermore, the role of 'nature producers' compel farmers, shepherds, fishermen and landowners either to behave the way conservation supporters say that 'traditional' inhabitants should behave or to engage with ecotourism and become ecotourism entrepreneurs. Otherwise, if they attempt to capitalise on poly-tunnels or mass tourism, they will be putting themselves at risk of facing prosecution for contravening the Park plans. This shows their lack of influence in resource management for the Park plans relegate them to a position that is actually closer to that of 'nature destroyers' than to 'nature consumers'. They end up having little capacity to influence the Park policy and to give voice to their own interests and desires in the Park land-use plans. As such, the 'Naturalisation' of Cabo de Gata has less to do with the limitation of human impact on the environment and more with a redefinition of land-use rights that grant control of natural resources to conservation stakeholders, while disenfranchising those other groups that oppose or question conservation.
Conclusions
I have tried to demonstrate in this paper that the utilisation of modernist ideas of nature is essential to the collective appropriation of Cabo de Gata by certain social groups because it justifies a new way of managing local resources in accordance with their interests and desires. This suggests that conservationist arguments do not merely hinge on the unquestioned capacity of scientists and experts to protect nature from human aggressions neither on the ontological premises of Western naturalism that has historically been so influential in the conservation field. These are just part of the conceptual apparatus that enables a transition in management roles, through which new empowered groups, whose livelihood and desires are rooted in keeping places sparsely populated, barely urbanised and visually attractive for ecotourists, are granted the right to control and use this territory, allowing for its transformation into a biodiversity reserve and ecotourism destination.
The ethnographic analysis of people-park conflicts in the Cabo de Gata-Níjar Natural Park provides clear evidence of this phenomenon and permits us to identify some of its political and economic drivers. Any questioning of the naturalisation of this place, such as that supported by farmers and landowners, is deemed a threat not only to natural values but also to many stakeholders' livelihoods because it hinders the successful development of ecotourism. To ensure the future of this activity, the Park policy relegates conservation objectors to a powerless role so that their demands and land-use rights -whether stemming from land ownership or historical bonds -are suppressed. Instead the Park policy grants these rights to those who, despite lacking either land ownership or historical bonds, have gained significant power from the support of supra-local institutions (EU), the spur of a fast-growing economic activity (ecotourism) and the moral justification provided by a globalised greening rationale.
The above has deep implications for the study of people-park conflicts and the problems associated with the promotion of more environmentally friendly ways of mastering the environment. The establishment of this Natural Park is to be understood within a context where new actors intervened in a space that became 'naturalised'. Animals, plants and ecosystems acquired new meanings and values, which justified the introduction of conservation measures and the development of ecotourism as an alternative to activities such as intensive agriculture and mass tourism. As such, this story must be read in terms of the power relations that accompanied a process of territorial appropriation; and it is in this same way that we need to frame the disagreements and contestations articulated by other local groups, such as farmers, fishermen and landowners. This is especially important if we consider that one of the main critiques emerging from the examination of conservation practices in PAs is that they distribute fortune and misfortune among different social groups and even different members of a particular group (Brockington et al., 2008). Sometimes this is caused directly, through physical evictions, sometimes indirectly, through symbolic alienation. This means that most social problems in conservation-targeted areas -problems that are prone to threaten conservation goals -have less to do with environmental education and more with the different aspects of a transition in management roles over natural resources, which tend to disregard the interests and desires of many local inhabitants.
Finally, what also seems clear from the analysis I carried out in this paper is that, 15 years after Escobar (1999) foresaw the end of Western naturalism, rather than giving way to less essentialist accounts of the reality 'out there', modern ideas of nature prevail and in some cases have even become reinforced for political and economic reasons. The analysis carried out in this paper adds to those suggesting that naturalism is still a powerful worldview that dictates how we should relate with the environment and that PAs have become one of the main material and discursive means to achieving this (West et al., 2006). It also shows, as Yates-Doerr and Mol (2012) suggest, that if we want to make naturalism the object of ethnographic research, we need to trace the practices of those who bear this worldview and examine where the power that fuels its expansion emanates from.
|
2019-04-24T13:09:08.992Z
|
2014-05-04T00:00:00.000
|
{
"year": 2014,
"sha1": "ff47faf8f0ce11b10481a58da3d3193aab219a8f",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/08039410.2018.1427621",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "1c5bf42c95ba7c3cf0c774fd36303ac010852dff",
"s2fieldsofstudy": [
"Environmental Science",
"Political Science"
],
"extfieldsofstudy": [
"Geography"
]
}
|
250091506
|
pes2o/s2orc
|
v3-fos-license
|
Bone Accrual During Adolescence: Do Endocrine-Disrupting Chemicals Play a Role?
1Department of Environmental Health and Engineering, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland 21205, USA Correspondence: Jessie P. Buckley, PhD, MPH, Department of Environmental Health and Engineering, Johns Hopkins Bloomberg School of Public Health, 615 N Wolfe St, Rm W7515, Baltimore, MD 21205, USA. Email: jbuckl19@jhu.edu.
In a new study, Carwile et al (1) investigate the link between adolescent bone health and 2 classes of endocrine-disrupting chemicals: perfluoroalkyl and polyfluoroalkyl substances (PFAS) and phthalates. These synthetic chemicals are used in a myriad of consumer products and have been found in the bodies of nearly every American (2). PFAS have been dubbed "forever chemicals" given their resistance to degradation in the environment and long biological half-lives in humans, while phthalates are known as "everywhere chemicals" because of their extensive uses in plastics, personal care products, building materials, and many other everyday items. Decades of research indicates that PFAS and phthalates affect a wide range of endocrine-sensitive end points, particularly when exposures occur during critical periods of development (3). Findings from Carwile et al support the emerging hypothesis that these chemicals may also act as bone toxicants with implications for reduced bone mass accrual during adolescence.
Carwile et al conducted a cross-sectional study using data from the 2011 to 2016 National Health and Nutrition Examination Survey (NHANES) to examine relationships between biomarkers of exposure to PFAS and phthalates and total body less head areal bone mineral density (aBMD) Z scores among adolescents aged 12 to 19 years (1). The authors found that higher concentrations of several PFAS and phthalate biomarkers were associated with lower total body less head aBMD Z scores in males, while there were modest positive associations with aBMD for some chemicals among females. Similar to prior epidemiologic studies examining prenatal (4,5) or childhood (6, 7) PFAS exposures, Carwile and colleagues found the strongest magnitude of association for exposure to perfluorooctanoate.
A novel feature of the Carwile et al study is assessment of the combined effects of multiple PFAS and phthalates. Evaluating chemical mixtures is a priority of the National Institute for Environmental Health Sciences given that individuals are simultaneously exposed to many environmental chemicals that may have synergetic effects on health (8). Using Bayesian kernel machine regression, an advanced statistical technique developed to assess mixture effects, the authors estimated that higher combined exposure to all PFAS and phthalate biomarkers was associated with lower total body less head aBMD among males but not females (1). This study is the first to examine bone health in relation to mixtures across chemical classes and supports the importance of evaluating effects of real-world combined exposures.
Because PFAS have long half-lives in serum, cross-sectional associations may reflect effects of cumulative PFAS exposure on bone over time. In contrast, phthalates are metabolized and eliminated in urine within hours of exposure and therefore urinary biomarkers quantify recent exposures, are subject to measurement error, and may not represent exposures at the relevant time period for bone mineral accrual. Therefore, prospective studies of phthalate exposures in adolescence are needed to disentangle temporal ordering of exposure and bone outcomes.
The Carwile et al results expand on prior studies by demonstrating that adolescence may be a critical period of susceptibility to environmental bone toxicants. Owing to the cross-sectional study design and lack of clinical end points, additional longitudinal studies with long-term follow-up are needed to determine whether bone deficits persist and relate to greater risk of fractures and osteoporosis in adulthood. Likewise, mechanisms for effects of PFAS and phthalates remain to be elucidated but may involve inflammatory pathways, peroxisome proliferator-activated receptor gamma agonism, or androgen antagonism (1). The latter pathway may explain stronger relations of chemicals with lower aBMD among males compared with females, though differences in pubertal stage may also play a role in observed sex differences. Because PFAS and phthalates may influence pubertal timing and body composition, future research in longitudinal cohorts will be necessary to determine whether these characteristics mediate associations with aBMD. Both PFAS and phthalates have been the target of policy actions and market-based campaigns to reduce human exposures from consumer products. However, chemicals are often replaced with alternatives that have similar structure and biological activity and may be "regrettable replacements" for their predecessors. Furthermore, national biomonitoring in NHANES includes only about 350 of the more than 40 000 chemicals currently allowed for use in the United States. Gaps in our understanding of chemical exposures reinforces the need for further research to elucidate the effect on bone accrual of not only the PFAS and phthalates studied by Carwile and colleagues but also their replacements and other contemporary endocrine-disrupting chemicals.
Given the ubiquity of chemical exposures in the United States and around the world, even small decrements in bone mass accrual could have large effects on population-level bone health. Although long-term and clinical implications remain to be elucidated, emerging research by Carwile et al and others suggest that reducing early-life chemical exposures may be a promising new avenue for optimizing early-life bone accrual and potentially lifelong bone health.
|
2022-06-29T06:18:00.588Z
|
2022-06-28T00:00:00.000
|
{
"year": 2022,
"sha1": "20560730970e4dd5303501ab11fe91b8a862ae11",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/jcem/advance-article-pdf/doi/10.1210/clinem/dgac391/44383160/dgac391.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "424d62a0ee019d75cdccbfeff4da580743bf95c7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
54726785
|
pes2o/s2orc
|
v3-fos-license
|
Synthesis and Application of Bimetallic Zinc(II) Phenoxy-Imine Complexes as Initiators for Production of Lactide Polymers
This article reports the synthesis of two bimetallic zinc(II) complexes containing phenoxyimine ligands and their application as initiators of ring opening polymerization (ROP) of L-lactide (LA). The phenoxy-imine ligands were obtained from salicylaldehyde derivatives and 2,3,5,6-tetramethylphenylenediamine, yielding two bidentate regions per ligand. The reactions of both phenoxy-imine ligands with ZnEt2 in the presence of n-BuOH afforded [Zn2(L1)(OBu)2] and [Zn2(L2)(OBu)2], which were characterized by elemental analysis, Fourier transform infrared (FTIR) and H nuclear magnetic resonance (NMR) spectroscopy. In addition, the geometries of both complexes were investigated by DFT (B3LYP/LACVP**) calculations. [Zn2(L1)(OBu)2] and [Zn2(L2)(OBu)2] were tested as initiators of ROP of LA at 180 °C using different LA/Zn molar ratios, namely 500, 1000 and 2500. Both complexes showed good activity, resulting in conversions up to 96% in 2 h. The poly-LLA exhibited average molecular weight (Mw) ranging from 45,000-92,000, relatively low polydispersity (Mw/Mn = 1.6-2.0) and high stereoregularity with melting temperature Tm = 164 °C.
Introduction
The manufacture of lactide polymers has been pursued due to their biodegradability and biocompatibility. Polylactide (PLA) and their copolymers are important polyesters. They are versatile macromolecules that can be used in biomedical areas, such as prostheses and implants, and in the pharmaceutical industry as components of drug delivery systems, besides their applications in packaging. 1 An effective way to produce these polymers involves ring-opening polymerization (ROP) of lactides (LAs). Several methodologies are available for such reactions. However, the most commonly used methodology exploits coordination-insertion pathways, since they allow the production of higher molecular weight polymers exhibiting low polydispersity. Furthermore, these reactions offer a high degree of polymer structure and tacticity controls. 2,3 ROP of lactide monomers requires the use of catalysts (initiators) that may be cationic, anionic or neutral coordination complexes and should provide a rapid polymerization with adequate control of polymer features. 3,4 There is a great number of metal-based catalysts able to perform ROP reactions via coordination-insertion mechanism. However, some of these catalysts are toxic and difficult to remove from the reaction mixtures, preventing applications of the resulting polymers in biomedical and pharmaceutical areas. 2,5 Zinc(II) complexes are ubiquitous in catalysis, from enzymatic transformations to organic synthesis. In effect, zinc(II) complexes have been employed as initiators to produce PLA by ROP reactions. In addition to being a soft Lewis acid, zinc is a biocompatible metal, providing less poisonous residues in the produced polymers if compared to other metals. Among other features, Zn-based initiators allow a high control over polymer tacticity, thus producing polymers with high stereoregularity. [5][6][7] Although different coordination compounds could be used as catalysts for LA polymerization, little is known about the influence of their structures and electronic properties on PLA production. 8,9 In this respect, we have been interested in the synthesis of phenoxy-imine ligands capable of simultaneously coordinating two metal atoms. This kind of ligand can furnish bimetallic Zn-based initiators containing two metal centers close to each other, which can act in a synergic fashion during the polymerization reaction.
In this work, we report the synthesis, spectroscopic characterization and DFT (density functional theory) study of two new bimetallic zinc(II) complexes containing phenoxy-imine ligands, and their application as initiators of ROP of L-lactide aiming to correlate the influence of the initiator structure on the thermal properties of the produced poly(L-lactic acid).
Materials
The syntheses of the bimetallic zinc(II) complexes and the polymerization reactions were carried out under inert atmosphere (dry N 2 ) using Schlenk-type glassware and glove bags. Toluene was dried over sodium-benzophenone under N 2 . 3,5-Diiodine-2-hydroxybenzaldehyde and 5-methyl-2-hydroxybenzaldehyde (Aldrich), ZnEt 2 15% solution in hexane (Akzo Nobel), and EtOH were purchased from commercial sources and used as received. n-BuOH (Aldrich) was dried under molecular sieves. L-Lactide (Purac) was recrystallized from dry toluene prior to use.
Ligands and complexes characterization
Nuclear magnetic resonance (NMR) spectra were recorded on a Varian Mercury VX (300 MHz, 25 °C) spectrometer using CDCl 3 . Fourier transform infrared (FTIR) analyses were carried out on a Frontier FT-IR/FIR spectrometer using the technique of attenuated total reflectance (ATR). Melting points were determined using a Quimis melting point apparatus (model 0340513) with controlled heating. Elemental analyses were performed in a HANAU Elementary Vario Micro cube.
Lactide polymerization
The activity of C1 and C2 was investigated in the bulk polymerization of L-lactide using molar monomer/initiator ratios LA/Zn = 500, 1000 and 2500. The mixture of the desired zinc initiator and L-lactide was prepared in a Schlenk flask under nitrogen atmosphere and the polymerization was carried out by heating the mixture at 180 °C for 2 h, after which the reaction medium was quenched by cooling it to room temperature. 18 Subsequently, the solid product was dissolved in chloroform and the polymer precipitated from ethanol. The isolated polymer was dried under vacuum at 50 °C.
Polymer characterization NMR spectra were recorded in a Varian Mercury VX-300 NMR spectrometer operating at 75 MHz ( 13 C) (25 °C) using CDCl 3 . Gel permeation chromatography (GPC) was carried out in a Shimadzu LC 20 instrument equipped with a set of two Phenogel columns and a RID-20A differential index detector. GPC analyses were performed using chloroform as eluent at 1 mL min -1 and 25 °C. A calibration curve using ten polystyrene standards was used for molecular weight determination by the Shimadzu software. Thermal properties of the polymers were determined by differential scanning calorimetry (DSC) using a TA Q1000 calorimeter with a heating and cooling rate of 10 °C min -1 from −10 to 200 °C under nitrogen flow. Values of glass transition (T g ), crystallization temperature on heating (cold crystallization, T cc ) and melting temperature (T m ) were obtained from a second heating after a quenching run. The crystallization temperature on cooling (T c ) was recorded in a cooling run after the second heating run. Thermogravimetric analyses (TGA) were performed with a TA TGA Q500 Thermo analyzer. Measurements were carried out under nitrogen at a heating rate of 20 °C min -1 up to 700 °C with a gas flow rate of 20 mL min -1 .
DFT calculations
All calculations were carried out with JAGUAR 7.9 using an energy convergence criterion of 1.00 × 10 -8 hartree. 19 Gas-phase geometry optimizations were performed without constraints using the B3LYP hybrid functional along with the LACVP** basis set. 20 The vibrational frequencies of each optimized geometry were evaluated at the same level of calculation and verified to be real. 21 An excellent agreement between experimental and calculated frequencies was obtained.
Ligands and complexes
The condensation reaction of salicylaldehyde derivatives with anilines constitutes an ordinary method for the preparation of phenoxy-imines and was used in this work to prepare two bidentate ligands (L1 and L2). L1 and L2 were synthesized in high yields according to a modified literature procedure. [14][15][16][17]22 Complexes C1 and C2 (Figure 1) were also obtained in high yields. The structures of these ligands allow the coordination of two zinc(II) centers while the vast majority of the phenoxy-imine ligands coordinate to only one metal.
In the IR spectra of L1 and L2 (Figures 2 and 3), ν(OH) bands of weak intensity lie at about 3430 cm -1 , while strong ν(C=N) bands occur at about 1620 cm -1 . After coordination of zinc(II), the ν(C=N) and ν(C=C) bands were shifted to lower wavenumbers with respect to L1 and L2 (Figures 2 and 3). 23 The presence of ν(O−H) bands after complexation may be ascribed to residual solvent (n-BuOH) molecules or complex degradation. 24,25 Figures 4 and 5 illustrate the 1 H NMR spectra of C1 and C2, respectively. Aromatic and aliphatic 1 H signals occur in the expected regions.
DFT calculations
Two conformers were considered for each zinc(II) initiator (Figure 6). At the B3LYP/LACVP** level of calculation, the self-consistent field (SCF) energy difference between anti-C1 and syn-C1 is very small (0.7 kcal mol -1 ), whereas anti-C2 is only 1.8 kcal mol -1 more stable than syn-C2. Based on these results, both initiators are likely to operate as a mixture of isomers in the reaction conditions. Figure 6 shows that C1 and C2 exhibit three-coordinate zinc(II) centers in the expected trigonal planar coordination environment. Moreover, the Zn-O OBu bond is shorter than Zn-O OPh regardless of the isomer. Our DFT results suggest that both C1 and C2 operate via imine-phenoxide-assisted coordination-insertion ROP mechanism.
Polymerization C1 and C2 were evaluated regarding their activity to produce poly(L-lactide) from ROP of L-lactide (LLA) (Figure 7). Polymerizations were carried out using different LLA/Zn molar ratios (500, 1000 and 2500) in 2 h at 180 °C. Polymer yield, number average molecular weight (M n ), weight average molecular weight (M w ) and polydispersity (M w /M n ) are presented in Table 1.
C1 and C2 are very effective initiators for bulk polymerization of L-lactide, producing poly(L-lactic acid) (PLLA) in good yields (between 71 and 96%) when reactions were performed at 180 °C for 2 h. For both initiators, the yield increased as the LLA/Zn molar ratio decreased, attaining a maximum yield of 96% at LLA/Zn = 500. This behavior can be explained considering that at lower LLA/Zn ratios, the Zn concentration and consequently the number of active species are higher, increasing the polymerization rate. The PLLA obtained with these initiators showed relatively high molecular weight, with M w varying from 45,200 to 92,200 g mol -1 . It is important to highlight that the two complexes have different behavior depending on the influence of LLA/Zn ratio on the PLLA molecular weight. For C1, when LLA/Zn ratio decreases, M w also decreases, while for C2, when LLA/Zn ratio decreases, M w increases. This behavior might be related to the ligand structure, but it cannot be rationalized at the moment. The polydispersity (M w /M n ) of the polymers ranges from 1.63 to 2.02, which is in the range expected for high molecular weight PLLA obtained by ROP with metal initiators. 5 C1 produced polymers in higher yields, higher molecular weights and lower polydispersities as compared with C2.
As reported in literature, for many medical application a M w of about 70,000 g mol -1 is desired, for instance for preparation of electrospun scaffolds. Thus, the M w obtained in this work is exactly in the desired range for some medical application. 2,5 It is important to mention that the molecular weights were obtained by GPC. All PLLA samples showed monomodal GPC curves (not presented in this work), indicating that only one type of active species seems to be present during the polymerization reactions. According to our DFT results, each complex can exist as two stable isomers, anti and syn. Moreover, each complex has two metallic centers capable of polymerizing LLA. In this regard, all active catalysts are likely to behave in a similar fashion during the catalysis, indicating that for each bimetallic initiator the metal centers are kinetically isolated from each other. Therefore, the differences in the polymer features obtained with C1 and C2 seem to be influenced by the stereoelectronic features of the phenoxy-imine ligands and should not be attributed to the different isomers that occur in the reaction mixture.
Microstructure of PLLA
The microstructures of the PLLA were studied by 13 C NMR spectroscopy. The 13 C NMR spectra of these polymers show three signals at 16, 68.5 and 169 ppm, attributed to methyl, methine and carbonyl groups, respectively. 5,[26][27][28] The polymerization of L-lactide can produce a highly isotactic polymer if no side reactions, like racemization, take place during polymerization. Racemization reactions can be induced by the metal initiator and, in principle, it is influenced by the metal type and ligand structure, introducing D-lactyl units in the polymer structure. The presence of these isomeric units in the polymer backbone is manifested by the appearance of lateral side signals in the methine and carbonyl signals. 18 13 C NMR spectra of these polymers (Figure 8) did not show any significant signals beside the peak of the carbonyl and methine regions, suggesting that L-D dyads are not present in significant amounts. Therefore, the PLLA polymer structures are highly stereoregular. Apparently, the different stereoelectronic attributes of C1 and C2 did not strongly affect the regularity of the produced PLLA, which are highly isotactic. Powder X-ray diffraction (XRD) data were used to obtain information on the crystallinity of the resulting PLLA (Figures 9 and 10). From XRD, it is possible to infer that the initiators could produce highly crystalline polymers. It was not possible to identify by this technique any relation between the complex structure and the degree of crystallinity (X c ) of the polymers, but it is possible to note that the intensity of the reflection at about 17° increased as LLA/Zn increased for C1, suggesting the influence of LLA/Zn in the polymer crystallinity. However, the tendency observed for C1 was not reproduced for C2, since the crystallinity seems to decrease as the LLA/Zn ratio increased by considering the intensity of this reflection.
Thermal properties of PLLA Figure 11 shows the DTG curves for the polymers obtained in this work. The results show that the PLLAs obtained from C1 exhibit the lowest temperature of maximum degradation rate T max , considering the three LLA/Zn molar ratios. The PLLAs obtained from C2 showed a remarkably higher T max in the three LLA/Zn molar ratios. This demonstrates that the thermal property of PLLA was directly influenced by the structural and electronic variation of the ligands coordinated to Zn II . It seems that the presence of iodine in the ligand C1 structure led to less thermally stable PLLAs. Table 2 summarizes the results of thermal stability for the polymers. When heated to 700 °C under N 2 , the PLLAs produced by using C1 generated lower amount of residue than those prepared from C2. In addition, polymers prepared from C1 showed lower thermal decomposition temperatures (T onset and T max ).
The transition temperatures and degree of crystallinity of the PLLAs were also determined by DSC. Figures 12 and 13 show the DSC heating run (second heating run after quenching) for PLLA obtained with catalysts C1 and C2, respectively. The curves show 3 events: glass transition temperature (T g ), crystallization temperature on heating (cold crystallization) (T cc ), and melting transition temperature (T m ). Table 3 lists the T g , T m , T cc , and the degree of crystallinity (X c ) of PLLAs obtained from the second heating run after a first melting of the samples to erase thermal history and quenching to obtain amorphous materials. Table 3 shows that the polymers obtained from C1 and C2 initiators display T g values between 42 and 48 °C, which fall within the range usually found for PLLA (40-60 °C). Our values occur in the lower-bound of the expected range, which can be related to the molecular weight of these polymers as shown in Table 2. The literature reports that commercial high molecular weight PLLA (M w above 150,000 g mol -1 ) has T g around 60 °C. The polymers obtained in this work had M w below 100,000 g mol -1 . The T g showed no significant variation with LLA/Zn ratio. The melting transition of the polymers showed a bimodal behavior with two visible peaks of T m as described in Table 3. The peak at higher temperature varied from 151.3 to 163.8 °C, values in the range reported in literature (from 130 to 180 °C). 29 Regarding the degree of crystallinity (X c ), almost no variation was observed relative to the variation of LLA/Zn ratio for the polymers synthesized from the C1 initiator, whereas for C2 the higher LLA/Zn ratio (lower Zn content) generated polymers with a higher degree of crystallinity. These two features, high molecular weights and crystallinity, of the polymers obtained from C1 and C2 make them appropriate for use as biomaterials. 29,30 Conclusions Two bimetallic Zn II complexes (C1 and C2) containing phenoxy-imine ligands were prepared and characterized by IR and 1 H NMR spectroscopy. Their molecular structures were optimized by DFT calculations at the B3LYP/LACVP** level, which showed that both complexes have two stable isomers, syn and anti. C1 and C2 display the necessary features to act as initiators of lactide polymerization, showing good ROP activity in the bulk polymerization of L-lactide, resulting in the production of poly(L-lactic acid) (PLLA) with conversion up to 96%. The produced polymers are semi-crystalline materials and exhibit M w up to 92,000 g mol -1 and relatively low polydispersity in the polymerization condition used in this work. Based on the 13 C NMR and thermal data, we conclude that the polymers synthesized from C1 and C2 show high regularity. Apparently the difference between the structures of the ligands does not affect the regularity of the polymers obtained, but do directly affect their thermal stabilities. The polymers obtained from C1 and C2 exhibit high molecular weights and crystallinity which make them potential candidates for application as biomaterials. T g : glass transition temperature; T cc : crystallization temperature on heating (cold crystallization); T m : melting transition temperature; X c : degree of crystallinity.
|
2018-12-15T13:44:59.688Z
|
2017-01-01T00:00:00.000
|
{
"year": 2017,
"sha1": "f81d35b8f3f70756ea990ce7ab47bd23c0c1638a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21577/0103-5053.20170131",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "9de3b77eaa356bd3fb4e0c97e9edd1056292cf1a",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
13738919
|
pes2o/s2orc
|
v3-fos-license
|
MiR-144 suppresses cell proliferation, migration, and invasion in hepatocellular carcinoma by targeting SMAD4
Background/aim Increasing evidence show microRNAs (miRNAs) are engaged in hepatocellular carcinoma (HCC). The aim of this study was to investigate the role of miR-144 in HCC, as well as to identify its underlying mechanism. Methods The expression levels of miR-144 were assessed in multiple HCC cell lines, as well as in liver tissues from patients with HCC. We further examined the effects of miR-144 on HCC. The molecular target of miR-144 was identified using a computer algorithm and confirmed experimentally. Results We found that the levels of miR-144 were frequently downregulated in human HCC tissues and cell lines, and overexpression of miR-144 dramatically inhibited HCC metastasis, invasion, cell cycle, epithelial–mesenchymal transition, and chemoresistance. We further verified the SMAD4 as a novel and direct target of miR-144 in HCCs. Conclusion Taken together, overexpression of miR-144 or downregulation of SMAD4 may prove beneficial as therapeutic strategies for HCC treatment.
Introduction
Hepatocellular carcinoma (HCC) is one of the most prevalent malignant diseases and the third leading cause of cancer-related deaths. One-half of the new HCC cases and HCC deaths worldwide were estimated to occur in the People's Republic of China. 1 Currently, surgical resection, liver transplantation, and radiofrequency ablation are the effective approaches for HCC treatment. The recurrence rate of HCC within 2 years in patients who received surgery exceeds 50%. Due to the late detection of the tumors and high rate of recurrence and metastasis, the prognosis of HCC is still dismal, and the 5-year survival rate for patients is less than 5%. 2 Therefore, further elucidation of the molecular mechanisms underlying HCC invasion and metastasis are important for the development of new therapeutic strategies for diagnosis, treatment, and prognosis of HCC.
MicroRNAs (miRNAs) are a class of small, short, and noncoding RNAs, regulating gene expression by binding to sequences in a 3′-untranslated region (3′-UTR) of the target mRNA, resulting in upregulation and downregulation of the targeted gene. 3,4 In various human cancers, some miRNAs are often upregulated and have an oncogenic function, while most miRNAs are downregulated and may possess a tumorsuppressive activity. Accumulating evidence suggests that the abnormal expression of miRNAs is involved in the invasion and metastasis during the progression of various human cancers. Recent evidence indicates that miRNA expression profiling has been characterized in a variety of cancers, including breast cancer, 5 pancreatic cancer, 6 ovarian cancer, 7 and HCC. 3 Previous data showed that certain miRNAs are involved in the proliferation and survival of HCC, including miR-199, miR-7, miR-124, and so on. In this study, we found that the levels of a specific miRNA, miR-144, were frequently downregulated in human HCC tissues and cell lines, and overexpression of miR-144 dramatically inhibited HCC metastasis, invasion, cell cycle, epithelial-mesenchymal transition, and chemoresistance. We further verified the SMAD4 as a novel and direct target of miR-144 in HCCs. In summary, our data demonstrate that SMAD4 expression is inversely correlated to miR-144 levels in HCC tissues and cell lines, and that overexpression of miR-144 in HCC cell lines decreases SMAD4 mRNA and protein levels by directly binding to the 3′-UTR of SMAD4, which subsequently leads to downregulation of SMAD4. Therefore, our data strongly indicate that miR-144 is a tumor suppressor by targeting SMAD4 expression to modulate HCC biological behaviors. Taken together, overexpression of miR-144 or downregulation of SMAD4 may prove beneficial as therapeutic strategies for HCC treatment.
Patient selection
Samples of 100 HCC tissues were obtained from patients who had undergone HCC surgical resection at the Guangdong General Hospital. The study complied with the Declaration of Helsinki and was approved by the Institutional Ethics Committee of Guangdong General Hospital. All patients signed consent forms indicating their willingness to participate, and their understanding of the procedure and general aim of the study. All of the included patients met the following criteria: pathologically and histologically confirmed HCC, no history of any other malignant tumors, and no neoadjuvant therapy prior to the surgery.
cell culture
The following human HCC cell lines were studied: MHCC-97H, SMMC-7221, HepG2, Huh-7, and Hep3B. The normal human liver LO2 cell line was also employed as normal control. All cells were grown in Dulbecco's Modified Eagle's Medium (Thermo Fisher Scientific, Waltham, MA, USA) and supplemented with 10% fetal bovine serum (HyClone, Logan, UT, USA) and 1% penicillin/ streptomycin.
Manipulation of mir-144 expression levels
The miR-144 mimics and negative control (micrON™ miRNA Mimic Negative Controls) were purchased from Land (Guangzhou, Guangdong, People's Republic of China). The final concentration of transfection is 50 nM.
cell transfections
Transfection of the miR-144 mimics was performed using Lipofectamine ® RNAiMAX (Thermo Fisher Scientific) according to the manufacturer's instructions.
rna extraction and real-time Pcr analysis
Total RNA was extracted from the cell lines and frozen tissue specimens with TRIzol reagent (Thermo Fisher Scientific), and the concentration of the total RNA was quantitated by measuring the absorbance at 260 nm. Complementary DNA was generated using a miScript Reverse Transcription Kit (Qiagen NV, Venlo, the Netherlands). Primers for miR-144 and the U6 small nuclear RNA (snRNA) (internal control) were purchased from Land. The expression level of miRNA was defined based on the threshold cycle (Ct), and relative expression levels were calculated using the 2 -ΔΔCt method, using the expression level of the U6 snRNA as a reference gene. Each polymerase chain reaction (PCR) was performed in triplicate. The primers for the examined genes are presented in Table 1.
cell invasion assay
The invasion assay was performed using a transwell chamber, consisting of 8 mm membrane filter inserts (Corning Incorporated, Corning, NY, USA) coated with Matrigel (BD Biosciences, San Jose, CA, USA). Briefly, cells were trypsinized and suspended in serum-free medium. Next, 1.5×10 5 cells were added to the upper chamber, and the lower chamber was filled with medium containing 10% fetal bovine serum. After 36 hours of incubation, cells that had invaded the lower chamber were fixed with 4% paraformaldehyde, stained with hematoxylin, and counted using a microscope.
4707
The role of mir-144 in hcc Wound-healing assay Wound-healing assay was performed using HepG2 and Huh-7 cells. Cells were trypsinized and seeded in equal numbers into six-well tissue culture plates, and allowed to grow until confluent (approximately 24 hours). Following serum starvation for 24 hours, an artificial homogenous wound (scratch) was created onto the cell monolayer with a sterile 100 μL tip. After scratching, the cells were washed with serum-free medium, complete media was added, and microscopic images (20× magnification) of the cells were collected at 0, 12, and 24 hours.
luciferase reporter assay
Luciferase reporter assay was performed according to the manufacturer's instructions. Briefly, cells (3.5×10 4 ) were seeded in triplicate in 24-well plates overnight. Next, 100 ng of pGL3-SMAD4-3′-UTR (wild type/mutant) or controlluciferase plasmid plus 1 ng of pRL-TK renilla plasmid (#E2810; Promega, Madison, WI, USA) were transfected into the cells using Lipofectamine ® 2000 (Thermo Fisher Scientific). Three independent experiments were performed and the data are presented as the mean ± standard deviation (SD).
statistical analysis
A Student's t-test was used to evaluate the statistical significance of the difference between two groups of data. P-value of less than 0.05 was considered to be statistically significant. All analyses in the present study were performed by SPSS 13.0 (SPSS Inc., Chicago, IL, USA) statistical software package.
The expression of mir-144 is frequently downregulated in hcc cell lines and tissues
To determine whether miR-144 is correlated with the progression of HCC, the expression level of miR-144 was detected in HCC cell lines, tissues, and matched with adjacent nontumor liver tissues obtained from 100 patients by quantitative reverse transcription polymerase chain reaction (qRT-PCR). The results showed that the expression of miR-144 was dramatically decreased in various HCC cell lines, including MHCC-97H, SMMC-7221, HepG2, Huh-7, and Hep3B, compared with the normal hepatic cell line LO2 ( Figure 1A). As shown in Figure 1B, the expression of miR-144 was found decreased (negative expression and low expression of miR-144) in 85.0% (85/100) of HCC tissues compared with matched adjacent nontumor liver tissues, with an average of 5.20-fold reduction in expression (median =0.73 vs 1.52; P0.01). No statistically significant relationships were found between miR-144 expression and any of the clinicopathological parameters except for recurrence (P=0.0041) ( Table 2). Moreover, the expression of miR-144 was significantly higher in the HCC samples at early stages (TNM I and II) compared to that of the HCC samples at advanced tumor stages (TNM III) ( Figure 1C).
effect of mir-144 on the proliferation of hcc cells (Figure 2A and B). MTT assays indicated that both cells with miR-144 mimics proliferated at a slower rate than did control cells, and statistical analysis showed a significant difference after culture for 4 days ( Figure 2C and D). The cell cycle of these cells was detected by flow cytometry. The results showed that 32.70% of MHCC-97H and 32.74% of HepG2 cells were in S-phase, while 22.96% of MHCC-97H and 23.15% of HepG2 cells were in S-phase after treatment with miR-144 mimics (P0.05, Figure 2E-G). Therefore, the results indicated that miR-144 can suppress the cell cycle progression and inhibit the proliferation of HCC cells. (Figure 3A and B). Transwell with Matrigel showed that treatment with miR-144 led to a significant decrease in invasive potential of MHCC-97H (6.31-fold reduction, P0.001) and HepG2 (5.14-fold reduction, P0.001) ( Figure 3C and D). Taken
Function of mir-144 in hcc cells partially attributed to targeting SMAD4
To determine the underlying mechanism by which miR-144 regulates progression and chemoresistance of HCC, we integrated bioinformatics algorithms, including miRanda, Pic-Tar, and TargetScan, to predict the potential direct target of
4709
The role of mir-144 in hcc
4710
Yu et al miR-144. According to the prediction, SMAD4 has the putative miR-144-binding site that maps to the 3′-UTR. To further validate the prediction results, we constructed the luciferase reporters carrying the wild type and mutant type of SMAD4 3′-UTR ( Figure 5A). As shown in Figure 5B, luciferase assays indicated that the wild type of 3′-UTR caused a significant reduction in luciferase activity, whereas mutation of the key seed region in the 3′-UTR of SMAD4 showed no variations in the luciferase activity compared with the control ( Figure 5B). The qRT-PCR analysis suggested that treatment by miR-144 mimics significantly repressed the expression of SMAD4 mRNA ( Figure 5C). These findings were further verified by Western blot analysis, which indicated that treatment by miR-144 mimics markedly inhibited SMAD4 protein level ( Figure 5D). Taken together, these results strongly suggested that miR-144 could significantly suppress the expression of SMAD4 through targeting the 3′-UTR.
To determine the correlation of miR-144 and SMAD4 expression in clinical HCC tissues, qRT-PCR was employed to assess the expression of SMAD4 in 100 HCC tissues. As indicated in Figure 5E, results of Spearman's rank test showed a significantly negative correlation between miR-144
4711
The role of mir-144 in hcc and SMAD4 expression (r=-0.768, P0.001) ( Figure 5E). Therefore, our results suggested that miR-144 represses the HCC development of HCC partly through inhibiting the expression of SMAD4.
Discussion
More and more evidence indicate that miRNAs are important regulators in various cellular processes and are recently extensively investigated relating to cancer initiation, progression, diagnosis, and treatment. 8 The involvement of miRNAs in cancer pathogenesis is well established, as they behave as oncogenic or tumor-suppressive role depending on their functional targets. 9 Dysregulation of miRNAs is often found in HCC, and some of them have an important role in the progression and development of HCC. 3 However, the role of miRNAs in the pathogenesis of HCC is still largely unclear as a single miRNA may regulate multiple target genes and a single mRNA may be regulated by various miRNAs. 10 Given the complexity of the network between mRNAs and miRNAs, further studies are needed to determine the importance of miRNAs because of the potential in cancer diagnostic and prognostic value. Further understanding of the functional role of miRNAs in cancer helps to better reveal the underlying mechanism of HCC pathogenesis and progression.
Previous evidence demonstrated that miR-144 expression was deceased in various cancers, including HCC, cholangiocarcinoma, 11 colorectal cancer, 12 bladder cancer, 13 and thyroid cancer, 14 and inversely related with cancer proliferation and metastasis. Although a previous study demonstrated that miR-144 might suppress the growth and motility of HCC cells partially by targeting E2F3, the knowledge about the role of miR-144 in HCC is still limited. 15 Therefore, the present study was carried out to further investigate the functional role of miR-144 in HCC. The findings of the present study are in line with those of previous evidence, which showed the reduced expression of miR-144 in cancer cell lines and human tissues. The present study showed that decreased expression was found in HCC cell lines and human tissues, and negatively correlated with the severity and progression of HCC, suggesting that decreased expression of miR-144 correlated with the malignant potential of HCC. Therefore, we assessed the effect of miR-144 on proliferation, migration, invasion, chemoresistance, and apoptosis. Consistent with prior studies, 15 we showed that upregulation
4713
The role of mir-144 in hcc of miR-144 can suppress the proliferation, migration, and invasion of HCC cell lines. Further, our results revealed that upregulation of miR-144 could repress cell cycle progression by inducing G0/G1 cell cycle arrest, and also could enhance chemosensitivity and induced cancer cell apoptosis. Also, we identified SMAD4 as a novel target of miR-144 in HCC cells. SMAD4 proved to be the key mediator of transforming growth factor beta (TGF-β) pathway, [16][17][18] which has a central role in the growth of hepatocytes. 19 Interestingly, Xu et al reported that miR-144 is a critical regulator of the TGF-β signaling cascade and is overexpressed in lungs with bronchiolitis obliterans syndrome, which suggest an important role of miR-144 in regulating TGF-β pathway. 20 According to the previous data, SMAD4 has dual roles of tumor-suppressive and tumor-promoting effects in different cancers. Loss or inactivation of SMAD4 is proved to be inversely related with prognosis pancreatic cancer, 21,22 colorectal cancer, 21 cholangiocarcinoma, 23 and other malignancies. 24 However, increased expression of SMAD4 was observed in HCC 16,25,26 and correlated with poor prognosis. 27 Recent evidence suggested that SMAD4 processes a highly tumor-promoting function of SMAD4 in HCC and might serve as an ideal therapeutic target. Therefore, SMAD4 inhibition represents a rational and promising new approach for HCC therapy due to its unique and specific role in HCC. To validate the prediction experimentally, luciferase reporter assay was employed and the results confirmed that SMAD4 is a target gene of miR-144. These data were further strengthened by assessment of the protein level of SMAD4 in both HCC cell lines treated with miR-144 mimics. Moreover, the coexpression of miR-144 and SMAD4 was detected in HCC tissues, and the results showed a significantly negative correlation between them. Taken together, these results strongly suggested that miR-144 may exert a tumor-suppressive function by repressing the expression of SMAD4 in HCC development.
Conclusion
The results of this study strongly suggested the tumorsuppressive role of miR-144 in HCC. Moreover, the present study also demonstrated that upregulation of miR-144 leads to inhibition of cell proliferation, cell cycle progression, chemoresistance, and other malignant biological behaviors.
|
2018-04-03T00:41:41.100Z
|
2016-07-29T00:00:00.000
|
{
"year": 2016,
"sha1": "3f6ed2b90a0a7687ecf843e31cf214850f44dec5",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=31635",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "de89aad297ae560ef597b197edef00e4cd5ea7bc",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
265170440
|
pes2o/s2orc
|
v3-fos-license
|
Importance of realistic zonal currents in depicting the evolution of tropical central Pacific sea surface temperature
Classical El Niño–Southern Oscillation (ENSO) theories mainly consider the vertical process-related Ekman and thermocline feedbacks. However, the zonal current-related zonal advective feedback has been suggested to play a crucial role in the evolution of central Pacific (CP) El Niño and La Niña events. Also, the simulation of a realistic current is complex and not the focus of the classical ENSO theories. Using reanalysis datasets and a statistical model, this study emphasizes the importance of the zonal currents in the sea surface temperature anomaly (SSTA) evolution in the Niño4 region (160° E–150° W, 5° S–5° N). Specifically, in addition to the widely used predictors for the ENSO evolution, i.e. the equatorial Pacific mean thermocline depth anomaly (D20) and the zonal wind stress anomaly (ZWS), the zonal current anomaly (ZCA) averaged in the CP is first extracted to construct a statistical model to predict the SSTA of the Niño4 region. The results show that this model has improved overall prediction skill and accuracy for several CP El Niño and La Niña events during 1980–2020, compared with the benchmark linear regression model based on D20 and ZWS. By further removing the components related to the equatorial Kelvin and first symmetric meridional ( m=1 ) Rossby waves (namely, the principal part of the traditional ENSO mechanism) from the ZCA, the remainder, which contains higher-order Rossby waves and other nonlinear components and is called the zonal current anomaly residual (ZCA_RSD), is found to be the key part of the improvements in the prediction skill. This suggests that to better simulate and predict the complex ENSO events, more vertical and meridional modes of the tropical Pacific need to be included to obtain a realistic anomalous zonal current.
Introduction
The El Niño-Southern Oscillation (ENSO) is the dominant mode of interannual variability on Earth, and it has substantial impacts on global climate and weather patterns (Bjerknes 1969, Philander 1983, Neelin et al 1998, Klein et al 1999, Alexander et al 2002, Brönnimann 2007, Wang 2018, Fang and Xie 2020).The ENSO cycle typically lasts two to seven years, with alternating warm (El Niño), cold (La Niña), and neutral phases (Philander 1983, McPhaden et al 2006).However, the complexity of Wang et al 1999), and the advective-reflective oscillator (Picaut et al 1997).All of them can be distilled from the classic Zebiak-Cane model (Zebiak and Cane 1987) and place emphasis on the role of linear waves in the life cycle of ENSO events, although they are implicit in the recharge oscillator theory.The wave propagation drives the initiation, development, and transition phases of ENSO and offers insights into the air-sea interaction in regulating the ENSO cycle (Suarez andSchopf 1988, Battisti andHirst 1989).As the core components of these theories, the Kelvin and m = 1 Rossby waves, i.e. the leading eastward and westward propagating waves of the tropical ocean dynamics, are considered as the key approach to propagate the warm and cold signals of oceanic variation along the thermocline and influence the sea surface temperature anomaly (SSTA) in the eastern Pacific (EP) through vertical processes.
Although the above-mentioned theories have proven to be useful in explaining the evolution of EP El Niño events, their applicability to central Pacific (CP) El Niño events is limited.Since 2000, CP El Niño events have received much attention from the scientific community owing to their frequent occurrences (Larkin and Harrison 2005, Ashok et al 2007, Kao and Yu 2009, Yeh et al 2009, Lee and McPhaden 2010, Capotondi et al 2015, Chen et al 2022) and notable effects (Cai and Cowan 2009, Yu et al 2012, Karori et al 2013, Yeh et al 2014, Córdoba-Machado et al 2015, Fang et al 2015).Previous studies have suggested that the mechanisms responsible for CP El Niño events are inconsistent with those responsible for EP El Niño events.Kug et al (2009) found that the thermocline feedback in the EP, which is the main positive feedback in classical ENSO theories and is tightly related to the propagation of the Kelvin and m = 1 Rossby waves, could only account for the processes of EP El Niño events, whereas the zonal current-related zonal advective feedback in the CP is key to the development of CP El Niño events.The distinct mechanisms underlying the EP and CP El Niño events can be attributed to differences in the climatological background fields in the two regions.In the EP region, the prominent feature is its shallow thermocline, which results in a solid vertical ocean current and makes the surface water easily influenced by subsurface variations.In the CP region, however, the thermocline is deep and the principal feature is the large temperature gradient between the cold tongue and warm pool.Thus, the zonal current-related zonal advective feedback plays a key role in driving the local SSTA variations.
The importance of the zonal advective feedback on the ENSO evolution was emphasized in the advective-reflective oscillator theory (Picaut et al 1997), which suggested that the eastern boundary of the warm pool moves eastward or westward with changes in the zonal current anomaly (ZCA), and its zonal migration is synchronized with the ENSO evolution.Kim and Cai (2014) also found that for extreme El Niño events, the zonal current plays a crucial role in their early formation and development.Recently, from the theoretical perspective, Fang and Mu (2018) extended the two-box (EP and western Pacific (WP)) recharge oscillation model (Jin 1997a(Jin , 1997b) ) to a three-region model that includes the ZCA and SSTA variations in the CP, further emphasizing the importance of the zonal advective feedback and ZCA on the development of CP El Niño events.Nevertheless, the ZCA is always taken as a whole to investigate its influence on the interannual variability of the equatorial Pacific SSTA.A limited number of studies have proposed the division of the ZCA into distinct components and have investigated their individual impacts.Among these, Delcroix et al (1994), Picaut and Delcroix (1995) found that the Kelvin and m = 1 Rossby waves in the ZCA contributed to the warm pool displacement and the evolution of equatorial SSTA in the 1986-1989 ENSO event.As mentioned earlier, the linear wave theories of ENSO have a long history and are relatively wellestablished, however, they fail to explain the mechanism behind the CP El Niño.And the zonal current anomaly residual (ZCA_RSD), excluding Kelvin and m = 1 Rossby waves, contains all the remaining parts, such as high-order Rossby waves, nonlinearity and so on, which has not been considered and is not the focus of primary ENSO theories.From the perspective of equatorial oceanic waves, the equatorial Pacific could be divided into multiple meridional modes by employing Hermite polynomials.The mentioned Kelvin and m = 1 Rossby components of ZCA are controlled by the ocean first meridional mode, while other higher-order meridional modes and vertical modes may also potentially control the ZCA_RSD.In this study, the role played by this part will be evaluated, with an emphasis on its impact on the CP El Niño and La Niña events.
Using models, including both statistical and dynamic models, to advance our understanding of ocean-atmosphere coupling processes is always effective and necessary.Statistical models primarily describe the features of ENSO based on empirical relationships and observed data (Penland 1996, Tseng et al 2017, Chen et al 2020), whereas dynamic models simulate the underlying ENSO physical processes through solving partial differential equations (Barnett et al 1993, Tang andHsieh 2002).Although dynamic models are potentially the best tool for exploring the mechanisms driving ocean-atmosphere coupling processes and ENSO, the state-of-the-art models still have limitations in accurately capturing certain essential characteristics of the mean state and interannual variability (Planton et al 2021).In such cases and for preliminary understanding, statistical models can provide a complementary approach.Many of the statistical models of ENSO are based on multivariate linear regression equations, the efficiency of which thus depends critically on the selection of appropriate predictors.So, a comparison of different statistical models that utilize different predictors could help us to select the more crucial factor and investigate its physical indications.In this study, a statistical method is utilized to quantify the influence of the equatorial Pacific ZCA, especially its residual part, on the evolution of the SSTA in the CP region.The main purpose of this work is trying to find the physical processes that are not taken seriously in the classical ENSO theories, especially for the understanding of the ENSO diversity and complexity.
The remainder of this paper is structured as follows.Section 2 introduces the data and methods, including the construction of the benchmark and new Niño4 statistical models.Section 3 evaluates the effect of the ZCA and the ZCA_RSD, through comparison analyses between the benchmark statistical model and the new statistical models.Section 4 provides a summary of the results and the outlook from this work.
Data
In this study, all the monthly data, i.e. ocean potential temperature, zonal currents, sea surface height (SSH), and zonal wind stress (ZWS), are from the National Centers for Environmental Prediction Global Ocean Data Assimilation System (Behringer andXue 2004, Behringer 2007), with a horizontal resolution of 1 • × 1/3 • and 40 vertical levels.All the data have been detrended over a period of 41 years.Thermocline depth (D20) is taken as the depth of the 20 • C isotherm.Anomalies of each variable are calculated by subtracting the climatology from 1980 to 2020, i.e. the analysis period of this work.The Niño4 index is calculated from the average SSTA in the region (160
Classification of ENSO years
In this study, different ENSO types, which include EP El Niño, CP El Niño, and La Niña years, are classified based on Takahashi et al (2011) and is presented in table 1.Specifically, the E index (representing the EP El Niño) and C index (representing the CP El Niño and La Niña events), derived from the first two principal components (PCs) of SSTA in the equatorial Pacific (10 • S-10 • N), are defined to classify ENSO events.The purpose of ENSO classification is to assess the performance of the statistical models in simulating different types of ENSO events.
Zonal geostrophic current and its residual part
As mentioned above, the role of zonal current, especially the residual part without the Kelvin and m = 1 Rossby waves components, in influencing central Pacific SSTA will be investigated in this work.To Categories Years EP El Niño 1982/1983, 1986/1987, 1997/1998, 2015/2016CP El Niño 1987/1988, 1991/1992, 1994/1995, 2002/2003, 2004/2005, 2006/2007, 2009/2010La Niña 1983/1984, 1984/1985, 1988/1989, 1995/1996, 1998/1999, 1999/2000, 2000/2001, 2005/2006, 2007/2008, 2010/2011, 2011/2012, 2017/2018 quantify the different parts, the total of ZCA should be decomposed.The reason for not using observed ZCA lies in the requirement of employing zonal currents that adhere to geostrophic balance in the decomposition method (Delcroix et al 1994), which induce that the observed ZCA cannot be decomposed into Kelvin and m = 1 Rossby waves.In this study, to facilitate the decomposition of the zonal currents, surface zonal geostrophic currents defined by Picaut and Tournier (1991) are utilized to replace the gross zonal currents (see details in appendix).To verify the effectiveness of this method, figure 1 shows a comparison between geostrophic and observed ZCA during the March-April-May (MAM) of El Niño developing year.The positive geostrophic ZCA is observed near the equator, with the maximum in both the eastern and western Pacific (figure 1(a)), which shares great resemblance with the observed ZCA in figure 1(b).So, in the remaining part of this work, the geostrophic ZCA will be used instead of the original one, and without any ambiguity the term ZCA will refer to this geostrophic one.
For further investigation, the Kelvin and m = 1 Rossby waves components from the ZCA are firstly distilled referring to Delcroix et al (1994) (see details in appendix) and then removed from the total ZCA to obtain the residual part of the ZCA (ZCA_RSD), which thus contains all the higher-order Rossby waves and nonlinearity.
Statistical model 2.4.1. Benchmark Niño4 statistical model
Previous studies have established that D20 and ZWS are two reliable predictors for describing the following ENSO evolutions (e.g.Clarke and Van Gorder 2001, Ruiz et al 2005, Drosdowsky 2006, Ren et al 2019).So, as a benchmark, they are used to construct a linear regression model for the Niño4 index in this study.
Referring to Ren et al (2019), the 'lead-lag correlation' analysis is used to determine the key regions of D20 and ZWS affecting the Niño4 index.And the area means of the significant regions for D20 and ZWS, which are presented in table 2, are then used to construct the statistical model.Considering of the significant phase-locking feature of the El Niño events (Neelin et al 2000, Chen and Jin 2020, Fang and Zheng 2021), i.e. they always initiate during boreal spring where α and β are regression coefficients.
New Niño4 statistical model
Compared with the widely studied wind and thermocline depth variations, only a limited number of studies have centered their investigation on the ZCA in the context of ENSO-related mechanisms (Picaut et al 1997, Yeh et al 2009, Kim and Cai 2014, Fang and Mu 2018, Chen et al 2022).However, ZCA may provide an extra contribution to improve the benchmark model in predicting El Niño, especially the CP El Niño events.For this, the significance of the ZCA on the SST variabilities in the CP region is firstly validated.
In this study, a representative region of the ZCA should be selected for constructing the new Niño4 statistical model.As a physical criterion, this region can be determined by finding where the zonal advective feedback plays the dominant role in the local SST variabilities.For this, a regression analysis is used to measure the contribution of zonal advective feedback (−u ′ ∂ T ∂x ) to the SSTA tendency ( dT ′ dt ) along the equatorial (5 • S-5 • N) Pacific, as was used by Stevenson and Niiler (1983), where T ′ is the SSTA, T is the climatology of SST, and u ′ is ZCA.The result shown in figure 2 clearly suggests that the CP is the key region, i.e. the zonal advective feedback has greater significance in the CP, particularly in the Niño4 region, compared with the other places, which is consistent with the previous studies (Wang and McPhaden 2000, 2001, Huang et al 2012, Fang and Mu 2018, Chen et al 2022).In our comprehensive experiments conducted across all regions of the equatorial Pacific, we discerned that the effect of ZCA on SSTA is particularly prominent within the Niño4 region.As a result, it is reasonable to select the Niño4 region as the significant region of the ZCA.
With the complementary of the ZCA to the benchmark model, the new model reads Ni ño4 (NDJ) = αD20 (MAM) where α, β, and γ are regression coefficients.The corresponding predictors are shown in table 2.
Cross-validation
To obtain a more robust result, the leave-one-out method is used to perform cross-validation on the statistical model.The method involves systematically leaving out each observation from a dataset and using the remaining observations to train the model.The model is then used to predict the one left-out observation, and the process is repeated for each observation in the dataset.The prediction results can be used to analyze the accuracy of the statistical model.
Comparison between the benchmark and new models (ZCA)
To boreal spring.Furthermore, while the prediction performance of both the benchmark and new models are similar for neutral years (black dots) and EP El Niño events (red triangles), the new model suggests a better capability on capturing the NDJ mean Niño4 index in CP El Niño (green dots) than the benchmark model, that is, the predicted values are closer to the red reference line (figure 3(b)).Also, the prediction deviation of La Niña events (the average of the difference between the predicting and the observed Niño4 index of La Niña events) is smaller in the new model (0.19) than the benchmark model (0.29), which implies that the new model's prediction of La Niña is closer to the observation.Since the warming (cooling) center of CP El Niño (La Niña) is closer to the Niño4 area, this means the realistic zonal current-related zonal advective feedback is particularly important for the types of events and thus for the ENSO diversity.Except for MAM season, the predictors in June-July-August (JJA) and September-October-November (SON) season are also used to constructed the model.However, the result demonstrates that ZCA yields beneficial outcomes exclusively during the spring (MAM) in forecasting Niño4 index, with no discernible impact during the summer (JJA) and autumn (SON) seasons (figure not shown).
Contributions to CP El Niño and La Niña events
The contributions of MAM mean D20, ZWS, and ZCA are compared by decomposing their predicted NDJ mean Niño4 index respectively.As an example, .This is quite inconsistent with the observation, namely, the NDJ mean Niño4 index are about 0.86 • C and 1.17Although there is still a gap between the observation and the model, the progress made by adding the ZCA term is substantial.From this view, the MAM mean ZCA term is important to the following CP El Niño and La Niña evolution, which is consistent with the previous studies.
The importance of ZCA to Pacific SSTA
To further illustrate the importance of ZCA, figure 5 demonstrates the contributions of each predictor
Evaluating the impacts of ZCA_RSD
The Pacific zonal current is a complex system, encompassing not only the classic ENSO theories emphasized the part of Kelvin and m = 1 Rossby waves, but also the higher-order Rossby waves and nonlinearity (i.e. the ZCA_RSD).The contribution from the latter part of the ZCA on predicting the NDJ SSTA in the Niño4 region is evaluated in this subsection, with a similar method as the section 3.1.
The role of ZCA_RSD in influencing CP El Niño and La Niña events
A similar Niño4 statistical model of equation ( 2) is utilized to evaluate the role of the ZCA_RSD, i.e. substituting the original ZCA with this term.It can be seen that this new model shows a quite similar performance with the model using the total ZCA (figure 3(b)), indicating that ZCA_RSD is important to the evolution of CP El Niño and La Niña events (figure 6(a)).Also, only the predictors in MAM contribute to improvements in the model, consistent with findings in section 3.1.1(figure not shown).By analyzing the contribution of each predictor, the ZCA_RSD term also has demonstrated its crucial role in predicting the NDJ Niño4 index, with substantially improving the prediction of two CP El Niño and two La Niña events (figure 6(b)).These results indicate that the improved NDJ mean Niño4 index predicted by adding the ZCA component primarily arises from the effect of ZCA_RSD, instead of the Kelvin and m = 1 Rossby waves components.The collective analyses of the ZCA_RSD demonstrate its crucial role in the SSTA evolution within the CP region, especially in the Niño4 region.Furthermore, the analyses indicate that when considering the effect of the ZCA on the SSTA in the central Pacific, it is the ZCA_RSD component that plays a prominent role rather than the Kelvin and m = 1 Rossby waves.The warming process within the Niño4 region holds particular significance for the prediction of CP El Niño and La Niña events.Diagnostic evaluations of the ZCA and ZCA_RSD confirm their essential contribution to the central Pacific warming process, thereby highlighting the necessity of including them in future studies of ENSO diversity.
Conclusion and discussion
In this study, we incorporate the ZCA and ZCA_RSD terms into the benchmark multivariate linear regression statistical model to demonstrate their significance in predicting the Niño4 index and the SSTA in the CP region according to the reanalysis data from 1980 to 2020.
After the ZCA term is added, the new model shows improved skill and can predict CP El Niño and La Niña events more accurately than the benchmark model (figure 3).The improved performance in predicting CP El Niño and La Niña events can be attributed to the notable contribution of the ZCA (figure 4).On the basis of the spatial distributions of contribution, the ZCA term primarily contributes to the equatorial SSTA in the central Pacific region (figure 5).Considering that the effects of the Kelvin and m = 1 Rossby waves through the ZCA and D20 are similar, these overlapping parts in ZCA are eliminated.The residual part of higher-order Rossby waves and nonlinearity, referred to as ZCA_RSD, is used to explore their effects on the Niño4 index and SSTA in the CP region, as in the preceding analysis.And ZCA_RSD shows similar results to the ZCA in section 3.1.The analysis shows that the inclusion of ZCA_RSD can improve the statistical model as the ZCA, indicating its significance in predicting the NDJ mean Niño4 index and equatorial central Pacific SSTA (figures 6 and 7).The influence of the zonal currents on the central Pacific SSTA can be attributed to the ZCA_RSD, rather than the Kelvin and m = 1 Rossby waves.
This research endeavors to offer a new direction in the study of ENSO diversity by exploring the role of ZCA_RSD component and achieves noteworthy results.However, this study has some limitations.The improvement obtained from incorporating the ZCA and ZCA_RSD into the model is not substantial enough to enable the construction of a robust statistical model for ENSO forecasting.Furthermore, the restriction imposed by the simple linear regression equation used in this study precludes a more comprehensive investigation and a deeper understanding of the underlying mechanisms by which ZCA or ZCA_RSD may affect the diversity of ENSO.It is recommended to use simulations from complex coupled dynamics models to examine the specific mechanism through which zonal currents affect the SSTA in the equatorial central Pacific, which may verify the viewpoints in this paper.To achieve a more comprehensive understanding of the relationship between the equatorial western and central Pacific, further investigations are warranted.Multiple studies are needed to illuminate the intricate interplay between ocean currents and ENSO.
Figure 1 .
Figure 1.Spatial distributions of surface (a) geostrophic and (b) observed ZCA during the MAM of El Niño years from 1980 to 2020.Units are m s −1 .
evaluate the efficiency of the benchmark and new Niño4 statistical models in different ENSO years, the scatter distributions during 1980-2020 are shown in figure 3. The cross-validated benchmark model without ZCA shows a correlation coefficient of 0.78 between the model and observed Niño4 index (figure 3(a)), whereas it is 0.83 in the new model (figure 3(b)).And the root mean square error (RMSE) of the new model is 0.42, lower than that of the benchmark model (0.47).These suggest including the ZCA term can overall improve the depiction the following ENSO evolution starting from the
Figure 2 .
Figure 2. Contribution (regression coefficient) of the equatorial (5 • S-5 • N) zonal advective feedback term (−u ′ ∂ T ∂x , • C/month) to the sea surface temperature tendency ( dT ′ dt , • C/month), which is shown as a black solid line.The blue shading shows the 95% confidence interval of Student's t test.The red dash lines show the Niño4 region.
figure 4
figure4shows the contributions of each predictor for the CP El Niño years of 2006 and 2009, and the La Niña years of 1998 and 2010.For the benchmark model, D20 and ZWS do not show any substantial contributions on predicting the NDJ mean Niño4 index for both CP El Niño years, and their total prediction is close to 0 • C (figure4(a)).This is quite inconsistent with the observation, namely, the NDJ mean Niño4 index are about 0.86 • C and 1.17• C in 2006 and 2009, respectively.The new model with ZCA, however, has significantly improved the prediction for both events, namely, the predicted Niño4 index are approximately 0.4 • C and 0.51• C in 2006 and 2009, respectively (figure 4(b)).In this situation, D20 and ZWS do not show any marked contributions, while this improvement almost comes from the ZCA.Regarding the two La Niña events, similarly, the prediction of the benchmark model exhibits discrepancies from observation.However, the new model with ZCA shows a noticeable improvement, primarily attributed to the contribution of ZCA, particularly in 1998 La Niña, where it closely aligns with observation.Although there is still a gap between the observation and the model, the progress made by adding the ZCA term is substantial.From this view, the MAM mean ZCA term is important to the following CP El Niño and La Niña evolution, which is consistent with the previous studies.
Figure 3 .
Figure 3. Scatter distributions of the observed NDJ mean Niño4 index and (a) the NDJ mean Niño4 index predicted by the cross-validated benchmark model (without ZCA), and (b) the Niño4 index predicted by the cross-validated new model (with ZCA) during 1980-2020.The x axis is the predicted NDJ mean Niño4 index, and the y axis is the observed NDJ mean Niño4 index.Units are • C. Red triangles denote the EP El Niño events during 1980-2020.Green dots denote CP El Niño events.Black dots denote neutral years.Blue rectangles denote La Niña events.'R' denotes correlation coefficient between observation and model.'RMSE' denotes root mean square error.'La Niña devn' denotes predicting deviation in La Niña events.The solid red line is the reference line, and the solid black lines are 95% confidence intervals.
Figure 4 .
Figure 4. Contributions of the cross-validated (a) benchmark and (b) new models' predictors to the NDJ mean Niño4 index compared with the observed NDJ mean Niño4 index in two CP El Niño (2006, 2009) and two La Niña events (1998, 2010).Red dots denote the observed Niño4 index.Black dots denote the predicted Niño4 index.Yellow, green, and blue dots denote the contributions of D20, ZWS, and ZCA to the NDJ mean Niño4 index by model prediction, respectively.Units are • C.
Pacific SSTA The spatial contributions of D20, ZWS, and ZCA_RSD are shown in figure 7. By removing the overlapping part, i.e. the Kelvin and m = 1 Rossby waves in the ZCA, D20 exhibits its contribution to the tropical Pacific SSTA (figure 7(a)), which disappears in figure 5(a).Among these three predictors, ZWS appears to have the most substantial effect on the NDJ mean tropical Pacific SSTA (figure 7(b)).Above two precursors illustrate their contributions are more pronounced in the central-eastern Pacific region.Distinguishing from them, the pattern of ZCA_RSD is primarily concentrated in the equatorial central
Figure 6 .
Figure 6.(a) is the same as the scatter distributions of the NDJ mean Niño4 index by observation and new model in figure 3(b).(b) is the same as the contributions of the new models' predictors in figure 4(b).But the new Niño4 model is constructed with ZCA_RSD instead of ZCA.
Figure 7 .
Figure 7.The same as in figure 5, but the new Niño4 model is constructed with ZCA_RSD instead of ZCA in figure 5.
Table 2 .
Significant regions of predictors for the Niño4 index in the new model.
|
2023-11-15T17:08:53.545Z
|
2023-11-09T00:00:00.000
|
{
"year": 2023,
"sha1": "f95e58d8a0765eb194d5a474badede65368c9e44",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1748-9326/ad0b21/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "66cc20fc5ab2b708fd861bbad1ba05bdc40450b0",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
}
|
245343971
|
pes2o/s2orc
|
v3-fos-license
|
The status of neglected tropical diseases amidst COVID-19 in Africa: Current evidence and recommendations
Health care services and programs directed towards combating the neglected tropical diseases (NTDs) have been disrupted because of the impact of the coronavirus disease 2019 (COVID-19). The African continent because of its staggering health care system and poor economy disproportionately bears the burden of these diseases. While successes have been recorded in controlling and eliminating the NTDs, policymakers in Africa should consider the potential of the COVID-19 to dwindle these successes an issue of high priority. This commentary seeks to discuss the current status of NTDs in Africa and proffer recommendations to help combat these diseases at this period. It is worthy to say that similar dedication directed towards fighting the COVID-19 should also be deployed into eliminating other diseases like the NTDs which often, are neglected.
Introduction
Neglected tropical diseases (NTDs) are manifold group of communicable diseases caused by parasites, fungi, bacteria and viruses that are primarily found in tropical and subtropical climates. 1 Globally, over one billion vulnerable and marginalized communities are negatively impacted by the burden of NTDs, with more impact on resourceconstrained countries. 2 Africa contributed 39% to the 1.5 billion of the world population affected by these diseases 3 which made it a recognized public health problem in the region. NTDs are also known to be common among poor population living in tropical regions and hard-to-reach communities like the slums, rural environments and waraffected zones. 4 African countries such as Nigeria and Democratic Republic of Congo ranked the first and second country hosting the highest number of schistosomiasis, onchocerciasis, as well as lymphatic filariasis in Africa respectively. 5,6 Despite the health promotion and prevention programs on NTDs in Africa such as the World Health Organization (WHO) African Region NTDs programme, Global Eradication Efforts for Guinea Worm Disease, United States Government Initiative to Address NTDs and other global NTDs programs; the NTDs still persist. 4 These efforts are further complicated by the coronavirus disease 2019 (COVID-19) pandemic in Africa. Since the declaration of the COVID-19 outbreak as a Public Health Emergency of International Concern by the WHO and the continent's first confirmed case in Egypt on February 14, 2020; Africa continues to record daily rise of confirmed cases. 7,8 Currently, Africa is experiencing the third wave of the pandemic, which further poses pressure on the fragile health systems and increased workload on the already stretched health workforce. 7 Therefore, healthcare services delivery towards NTDs is greatly disrupted with stalled initiatives and financial commitments towards prevention and eradication strategies. 9
Vulnerability of Africa to NTDs
Popularly referred to as disease of poverty, the NTDs affect the most impoverished and marginalized people. 10 Today, a larger concentration of poor people reside in Africa with about 490 million people in extreme poverty especially among the rural communities and the disadvantaged urban dwellers. 11 This implies that Africans live below the World Bank international poverty line that stood at generating at least $1.9 financial income per day. 12 Sustaining NTDs control is associated with integration of quality water supply, basic sanitation and personal hygiene programs. However, the vulnerable segment of the population lack these crucial amenities including access to health services and good housing. 13 Low-income countries TUOMS P R E S S experience at least five NTDs happening concurrently as stated by the global health department of Centers for Disease Control and Prevention (CDC). The high burdens of these diseases are the source of physical and cognitive impairment, increased morbidity and mortality among women and children, reduced productivity rate which promote extreme poverty cycle. 14 As COVID-19 pandemic evolves, population health and socioeconomic status of individuals and African nations are adversely affected. 15 Africa's economy experienced decrease in economic growth by 2.1% in 2020 based on the negative impact of the pandemic, according to the African Development Bank estimation. 16 Majority of rural dwellers and people living with NTDs engage in agriculture including farming, fishery and animal husbandy. These activities were interrupted by the widespread of COVID-19 virus and resulted in production shutdown and decreased financial buoyancy. Additionally, lack of timely access to quality healthcare services and increased out-of-pocket payment attributed to the prevalence of NTDs in Africa. 17 COVID-19 outbreak stretched the capacity of the health systems in Africa, promoting inaccessibility to quality health services. Despite the implementation of health insurance scheme in some African countries like Nigeria, Ghana and Kenya; a good number of the population affected by the NTDs and people at-risk still wallow in abject poverty due to disease co-morbidities and out-ofpocket expenditures. 18 Although NTDs have desolating effects on human health and socioeconomic development, COVID-19 further intensified the undesirable effects associated with the diseases on the impoverished communities. 19
COVID-19 and NTD programs
Following the emergence of the COVID-19 pandemic and the public health measures to curtail the pandemic, the WHO noted suspension of activities directed towards active case-finding, mass treatment of NTDs, and population-based surveys for NTDs. 20 However, interim guidance for the implementation of these programs was developed because a total halt of these programs may dwindle the hard-won successes towards NTD elimination and control. 21 Models such as the NTD modeling consortium provided quantitative information about the impact of the COVID-19 pandemic on NTD elimination including ways to minimize these impacts. 22 The consortium showed that the impact of the pandemic can still be mitigated provided there is prompt action to reverse these impacts. 23 For NTD programs to resume to full functionality, there will be a need to establish innovative methods on how interventions are planned and implemented to achieve maximum health impacts and to build programs back and better. 24
Global health diplomacy and the fight against NTDs
To eradicate and control the NTDs in highly endemic regions, achieving universal treatment coverage and health services play a crucial role. 25 At the 74th World Health Assembly in 2021, WHO endorsed a new 9 year (2021-2030) road map for NTDs with the aim of strengthening programmatic response to NTDs and promoting sustainable health systems with the use of cross-sectoral, integrated interventions, smart investment and community engagement. 26 Being at the forefront of eradicating NTDs for years, WHO in 2005, created a Department for Control of Neglected Tropical Disease (WHO/NTD) in Geneva led by late Director-General Dr JW Lee at a time when it was urgent to initiate health intervention for millions living with the diseases. 19 The 2021-2030 roadmap supersedes the 2012 roadmap by identifying the gaps and appropriate actions needed to achieve the new targets. Also, it shifted focus from diseasespecific interventions to an integrated approach. 27 The London Declaration which followed the WHO 2020 roadmap in 2012 organized by a group of partners integrated collaboration and commitment towards achieving universal health access for 10 of the NTDs, promoting the active involvement of global health actors including the pharmaceutical industries, philanthropic foundations, multilateral organizations and the government of the endemic countries in the eradication and control of NTDs. Ever since, essential medications and logistics have been made available to ensure the hardto-reach communities and affected countries have access to NTDs-related treatment. 28 Despite these efforts, about 600 million Africans are still denied access to treatment. 3 Over the years, CDC with other partner organizations ensured proper implementation of available health interventions towards the eradication of NTDs. In 2020, COVID-19 negatively impacted the decades of actions and commitments from the global health actors towards improving health of affected people and prevention of occurrences among at-risk population, as resources are diverted to COVID-19 disease management and vaccine production. 29 Social determinants of health made impoverished communities more susceptible to COVID-19 and NTDs. However, CDC strategies to reinforce human power towards disease surveillance and management, provision of accurate information for infection control and hospital preparedness assessment, identification of at-risk population including those vulnerable to disease co-morbidity and vaccine access for those in low and middle income countries contributed to the minimization of the disease burden. 8 More than ever, the fight against COVID-19 further reiterate the power of synergistic action, strong political will and integrated intervention approach in the control and elimination of NTDs.
Furthermore, African Union in its Agenda 2063 goal indicated commitment to promoting healthy and wellnourished Africa, which is free of all NTDs, so also the Sustainable Development Goal 3 target 3 which focuses on 90% reduction in number of people seeking NTDs-related health interventions. 30 Even though the diseases are prioritized in different agendas and roadmaps, the advent of COVID-19 pandemic sidelined the interventions towards eliminating the identified NTDs.
Opportunity for innovation
The 2021-2030 roadmap which is the second WHO blueprint for the control and elimination of NTDs succeeded the initial blueprint developed in 2012 with milestones established for 2020. 31 Although, many successes were recorded in the 2012-2020 roadmap; for instance, the Global Program to Eliminate Lymphatic Filariasis (GPELF) is regarded as one of the most successful public health programs. 32 However, not all the milestones were met and this had implications for improving interventions, program management and delivery, diagnostic methods, and adequate financing mechanisms for each disease. 31 The new 2021-2030 WHO road map for NTD programs focused on three major action areas which are to accelerate programmatic action against NTDs; intensify cross-cutting approaches by integrating interventions for several NTDs into national health systems; change operating models and culture by increasing country ownership. 31 In March 2020, the Africa Union Commission revised its continental framework for NTDs (2021)(2022)(2023)(2024)(2025)(2026)(2027)(2028)(2029)(2030) intending to guide member states in fighting NTDs in the continent and called for improved efforts in eliminating and controlling NTDs in the wake of the new WHO road map. 33,34 The framework recognized strategic approaches to the elimination of NTDs such as increase financing, community engagement and ownership, effective partnerships and collaboration, and research, development, and innovative technologies. 34 Country ownership of health systems will be successful in Africa if there is private investment. 35 However, no African country has achieved their pledged target of 15% allocation of their annual budget towards the fight of NTDs. 36 With the disruptions in the health system and economies of African countries caused by the pandemic, effective budget allocation is now paramount more than ever if the elimination of NTDs is to be achieved. To achieve health systems strengthening, community health workers which are a critical part of the African health care delivery system should be trained to deliver communityoriented primary care for disease surveillance and testing such as that implemented in South Africa. 35,37 Researchers in Sub-Saharan Africa have identified inadequate human capacity, complex logistical and financial systems and delays in ethical reviews as barriers to conducting clinical trials. It is therefore important that African countries create a more enabling environment for research and innovation. 38
Conclusion and Recommendation
Achieving the control and elimination of NTDs should be of high priority in Africa because of the health benefits that come with eliminating communicable diseases and the impact of its elimination on the socio-economic development of Africa. Now more than ever, strong and effective partnership and collaboration with donors and relevant international bodies are essential to achieve adequate financing and development of endemic countries from their political leaders to their communities. 33 Studies have shown that COVID-19 will further enhance global inequity, particularly in low and middle-income countries. Therefore, NTD programs with recognized success over the past years should remain a priority on the health and development agenda in Africa because of their roles in promoting many of the Sustainable Development Goals. 39 We have seen the resources deployed towards combating COVID-19, similar dedication should also be demonstrated in fighting the NTDs.
|
2021-12-21T16:07:10.097Z
|
2021-12-19T00:00:00.000
|
{
"year": 2021,
"sha1": "a83a4219e68ecb3732d2aec0d759e29875e7a612",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8722286fdd7159c798c102d652b724375d17566a",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
233408357
|
pes2o/s2orc
|
v3-fos-license
|
Understanding the sodium cation conductivity of human epileptic brain tissue
Transient and frequency-dependent conductivity measurements on excised brain-tissue lesions from epilepsy patients indicate that sodium cations are the predominant charge carriers. The transient conductivity ultimately vanishes as ions encounter blockages. The initial and final values of the transient conductivity correspond to the high-frequency and low-frequency limits of the frequency-dependent conductivity, respectively. Carrier dynamics determines the conductivity between these limits. Typically, the conductivity rises monotonically with increasing frequency. By contrast, when pathology examinations found exceptionally disorganized excised tissue, the conductivity falls with increasing frequency as it approaches its high-frequency limit. To analyze these measurements, excised tissues are modeled as mixtures of “normal” tissue within which sodium cations can diffuse and “abnormal” tissue within which sodium cations are trapped. The decrease in the conductivity with increasing frequency indicates the predominance of trapping. The high-frequency conductivity decreases as the rate with which carriers are liberated from traps decreases. A relatively low conductivity results when most sodium cations remain trapped in “abnormal” brain tissue, while few move within “normal” brain tissue. Thus, the high densities of sodium nuclei observed by 23Na-MRI in epilepsy patients’ lesions are consistent with the low densities of diffusing sodium cations inferred from conductivity measurements of excised lesions.
I. INTRODUCTION
Brain tissue of 67 epilepsy patients at the UCLA medical center was excised from a variety of lesion locations. Each freshly excised sample was subjected to (1) pathology examination, (2) measurement of the diffusion MRI [spin-echo nuclear magnetic resonance (NMR)] of its hydrogen nuclei, and (3) measurement of its frequency-dependent conductivity σ(ω). Qualitative changes in σ(ω) occur for excised samples whose pathology examination finds to be exceptionally disorganized. The following five paragraphs summarize the previously published experimental findings. The remainder of this paper develops a theoretical framework with which to understand these experimental results.
Transient currents induced by application of dc electric fields to cm-sized samples decayed in about 100 s as charge transport stopped at cellular blockages separated by about 0.5 mm. 1 In addition, the conductivity measured between 6 and 1000 Hz typically rises gently with increasing frequency as reversals of the applied electric field increasingly enable drifting charges to avoid blockages. This charge transport was attributed to the slow motion of solvated ions, primarily solvated sodium cations. 1 A density of solvated sodium cations of 2.5 × 10 25 ions/m 3 moving with a diffusion constant of 10 −9 m 2 /s generates a room-temperature conductivity of about 0.16 S/m. 1 Solvation occurs as oxygen atoms at the apex of the surrounding water molecules orient themselves toward cations. 2 Thus, motion of solvated ions is associated with the reorientation of surrounding water molecules. Diffusion MRI (i.e., spin-echo NMR) performed on excised samples' hydrogen atoms yields a diffusion constant of about 10 −9 m 2 /s. 3 The value of this diffusion constant is comparable to that inferred from conductivity measurements of solvated sodium cations. This result suggests that the diffusion constant obtained from these diffusion-MRI measurements monitors the reorientation of water molecules associated with the motion of solvated cations. 1 In most instances, the conductivity rises as the applied frequency was increased, ∂σ(ω)/∂ω > 0. At high enough frequencies (typically >100 Hz), the frequency dependence of the conductivity weakens as most diffusing cations avoid blockages. The conductivity then simply becomes proportional to the product of the density of diffusing ions and their diffusion constant. Distinctively, the relative variations in the measured conductivities were an order of magnitude greater than those of diffusion constants obtained from diffusion-MRI measurements. 4 Thus, changes in the ionic conductivity are overwhelmingly caused by changes in the density of diffusing ions. 4 Although the conductivities of samples of excised brain tissue usually increase slowly with increasing frequency, the conductivities of some samples decrease with increasing frequency, ∂σ(ω)/∂ω < 0, at high frequencies (cf. Fig. 1 of Ref. 5). Furthermore, the magnitudes of conductivities at 100 Hz of these samples were somewhat lower than those of typical samples. As exemplified in Fig. 2 of Ref. 5, pathology investigations indicated that samples of brain tissue whose conductivities manifest this "anomalous" frequency dependence were especially disorganized. 5 In particular, gray matter is extremely sparse and inhomogeneous, while white matter is grossly segregated.
Thus, measurements of the frequency-dependent conductivity imply that the densities of conducting sodium cations are especially low for especially disorganized brain tissue excised from epilepsy patients. By contrast, unusually, high concentrations of sodium atoms measured in vivo by 23 Na-MRI are used to identify the lesions of epileptic patients. 6 However, 23 Na-MRI provides no information about the mobility of the sodium atoms it identifies. 7 An aim of this paper is to address the relationship between the frequencydependent sodium-cation conductivities σ(ω) measured on excised epileptic human brain tissue and sodium concentrations in epileptic lesions as measured by 23 Na-MRI.
To analyze published measurements, we model excised samples as mixtures of "normal" tissue and "abnormal" (severely disorganized) tissue. Normal tissue generally exists at the margins of excised tissue and also may be included within its lesion. At its most extreme (Ra = 0 in the formulas of Sec. II), the model developed in Sec. II envisions sodium cations to be mobile within normal tissue and trapped within abnormal (severely disordered) tissue. The frequency-dependent ionic conductivity is then primarily governed by two electric-field-induced rates. First, Rn is the rate at which sodium cations in the normal material migrate to blockages at which they are stopped. Second, R l is the rate at which ions are liberated from traps in the abnormal material to then move within the normal material. We find that the sign of the slope of the conductivity with respect to the applied frequency, ∂σ(ω)/∂ω, in the high-frequency limit just depends on the ratio R l /Rn and on f , the fraction of ions initially trapped in the abnormal material. For inefficient transfer of ions from abnormal tissue into normal tissue, R l /Rn ≪ 1, the anomalous frequency-dependence of the high-frequency conductivity, ∂σ(ω)/∂ω < 0, only occurs when most ions are initially trapped in abnormal tissue, f → 1. Concomitantly, relatively few sodium cations initially move through normal tissue.
In summary, sodium-cation conductivities fall with increasing applied frequency to especially low values in the severely disorganized excised brain tissue excised from the most severely affected patients. 5 The calculation in Sec. II implies that the density of sodium cations that then move in normal excised brain tissue has decreased. However, the total density of sodium nuclei measured by 23 Na-MRI increases with epilepsy's severity. 6 In combination, these results imply that the density of trapped sodium cations rises with the severity of the structural disruptions associated with epilepsy. Discussion of the uncertainties and limitations of this simple model is relegated to Sec. III.
II. CALCULATION
We divide brain tissue into two categories, "normal" (less etiologically related) and "abnormal" (more etiologically related), based on the severity of their structural disruptions. Most generally, ions can move within each category until being blocked by their respective cell structures. Thus, each category is treated as composed of polarization centers defined by their respective blockages. Ions' motion within normal and abnormal polarization centers in addition to transfer from abnormal to normal tissue generates their frequency-dependent conductivities.
The application of an electric field at t = 0 initiates processes that alter the probabilities of cations (1) moving in the normal material (Pn), (2) moving in the abnormal material (Pa), and (3) being stopped and bound at blockages (P b ). In particular, three master equations govern the temporal evolution of the occupation probabilities for cations moving in normal tissue, in abnormal tissue, and being stopped at cellular blockages, and Here, R l represents the rate characterizing the electric-field-induced liberation of cations from being trapped within abnormal structures. Liberated cations then transfer to normal matter within which their ARTICLE scitation.org/journal/adv motion is relatively rapid. The rates with which cations move to blockages within normal and abnormal structures are denoted by Rn and Ra, respectively. The first-order linear differential equations [Eqs. (2) and (1)] have the respective solutions as follows: and The coefficients A and B are determined from the initial conditions that Pa = f and Pn = 1 − f at t = 0, where f equals the fraction of cations initially occupying abnormal tissue and 1 − f equals the fraction of cations initially occupying normal tissue. Then, after some straightforward algebra, Eqs. (4) and (5) become and Unlike Eq. (12) of Ref. 5, Eq. (7) includes the possibility of cation transport through the damaged regions. As such, Eq. (7) reduces to Eq. (12) of Ref. 5 in the limit that Ra = 0. The transient ionic conductivity most generally is the sum of (1) that from mobile cations drifting through regions of normal tissue until they are stopped at blockages and (2) that from mobile cations drifting through damaged regions until being stopped at their blockages, σt(t) = σnPn(t) + σaPa(t).
Here, σn and σa denote the initial ionic conductivities of normal tissue and abnormal tissue, respectively, where they are occupied by the totality of mobile ions, nc. Specifically, σn = nc(q 2 /kT)Dn and σa = nc(q 2 /kT)Da, where q represents the ion's charge; k denotes the Boltzmann constant; T signifies the temperature; and Dn and Da denote the ionic diffusion constants in normal and abnormal tissues, respectively. Employing Eqs. (6) and (7), the transient ionic conductivity [Eq. (8)] is written as Consider two limiting situations. First, if there are no damaged regions, f = 0, the transient conductivity, σt(t) ∝ σn exp(−Rnt), falls monotonically with time as solvated cations move through the normal material until they are stopped at a blockage. Second, if all the tissue is damaged with no transfer of trapped cations in abnormal tissue to normal tissue being possible, f = 1 and R l = 0, the transient conductivity becomes simply σt(t) ∝ σa exp(−Rat). That is, the transient current density falls monotonically with time as solvated cations move relatively slowly, Ra ≪ Rn, through the damaged material until they are stopped at blockages.
In between these two extreme limits, the transfer of sodium cations from abnormal tissue to normal tissue can produce a qualitatively different behavior, trap-limited motion through normal tissue. For example, in the absence of transport in abnormal tissue, σa = 0 and Ra = 0, Eq. (9) becomes As indicated in Fig. 4 of Ref. 5, at short times, the transient current of Eq. (9) rises with time for sufficiently large f with fR l > Rn(1 − f ), even though the transient current of Eq. (10) falls toward zero at sufficiently long times. These qualitatively different time dependences of transient conductivity manifest themselves in qualitatively different frequency dependences of the frequency-dependent conductivity. Indeed, the frequency-dependent current density is obtained from transforming the transient current density as follows: Inserting Eq. (9) into this formula and performing the standard integrations yield Some limiting values of Eq. (13) can be readily discerned. In the low-frequency limit, ω → 0, the ac conductivity approaches the final value, t = ∞, of the transient conductivity, where all carriers are stopped by blockages: σ(0) = 0. In the high-frequency limit, ω → ∞, the ac conductivity approaches the initial, t = 0, value of the transient conductivity, In the absence of ionic transfer from abnormal tissue to normal tissue, R l → 0, the frequency-dependent conductivity becomes simply the sum of that from polarization centers in the two types of tissues as follows: Since ionic transport in extremely damaged excised brain tissue is relatively poor, σa ≪ σn, the ionic conductivity of Eq. (15) becomes extremely small as the fraction of cations in damaged tissue grows, f → 1. By contrast, as R l → ∞, all trapped carriers are readily ARTICLE scitation.org/journal/adv liberated from abnormal tissue and are thereafter free to move through normal tissue. The effects of trapping then disappear from Eq. (13) as it reverts to that for all carriers moving through normal tissue, (16) Figure 1 illustrates the frequency-dependent conductivity of Eq. (13). The dashed curve shows that the conductivity rises monotonically with increasing frequency in the absence of carriers liberated from traps, R l /Rn = 0. By contrast, the solid curves depict the liberation of carriers from traps generating a substantial range of frequencies over which the conductivity falls with increasing frequency. As evidenced in this figure, the relatively mild frequency dependence of the conductivity observed experimentally occurs when ω/Rn ≫ 1.
A relatively simple expression for the slope of the relative conductivity ∂ [σ(ω)/σn]/∂[ω/Rn] is straightforwardly obtained from Eq. (13) in the relevant limits, ω/Rn → ∞, with vanishing transport through abnormal brain tissue, Ra → 0, The sign of the slope is given by the sign of the square-bracketed term of Eq. (17). As illustrated by the dashed curve in Fig. 1, the slope is positive in the absence of ions being transferred from being trapped in abnormal tissue to being mobile in normal tissue, R l = 0. By contrast, a negative slope is obtained when the second term within the square brackets of Eq. (17) dominates, Consider the two limiting situations that generate a negative slope. First, if R l /Rn ≫ 1, then, as illustrated in Fig. 2, a negative slope will result even when only a small fraction of ions is initially trapped in abnormal tissue, e.g., f ≪ 1. Second, if R l /Rn ≪ 1, then, as illustrated in Fig. 3, a negative slope only results when a large fraction of ions are initially trapped in abnormal tissue, f → 1 − (R l /Rn). The high-frequency conductivity is then relatively small since it involves only a small fraction of tissue's ions: σ(100 Hz) ≈ σn(1 − f ) ≈ σn(R l /Rn) ≪ σn. Rather, most ions are initially trapped within abnormal tissue.
The second scenario appears most relevant to the observations reported in Refs. 4 and 5. In particular, (1) negative slopes are only observed in especially disorganized excised brain tissue and (2) these tissues' high-frequency conductivities are lower than those of samples manifesting positive slopes. 5
III. SUMMARY AND DISCUSSION
Biological processes depend on the movement of ions. Ionic transport generally depends on the quality of the biological structures within which ions exist. The quality of these structures is evaluated by pathologists who relate it to the presence of disease.
ARTICLE scitation.org/journal/adv
Charge transport studies can be combined with pathology investigations. For example, anomalous frequency dependences of the ionic conductivities are observed in brain tissues excised from pediatric epilepsy patients whose pathology examination reveals to be exceptionally disorganized. 5 Here, we employ a simple model to describe changes in the ionic transport through brain tissue as a function of the extent of its structural abnormality. In particular, brain tissue is regarded as the mixture of normal tissue, which supports ionic transport, and abnormal tissue within which ions are trapped. As is widely observed, the ionic conductivity of typical freshly excised tissue increases slowly with the frequency of the applied electric field that drives cation transport. This behavior is understood as arising from ions moving until they encounter intrinsic blockages associated with cellular structures. 1 By contrast, the conductivity of the unusually damaged material decreases with the frequency of the applied electric field. 5 We attribute this anomalous behavior to electric-field induced freeing of cations from traps in abnormal tissue, thereby enabling the freed cations to move within normal tissue. In other words, the unusual frequency dependence of the conductivity is indicative of trap-limited ionic transport. As such, the densities of mobile ionic charge carriers are significantly smaller than the net density of these ions (mobile ions plus trapped ions). 5 Solvated sodium cations are the predominant charge carriers in tissues excised from epilepsy patients. 1 Excised tissues from the most severely affected patients tend to be most disorganized and to manifest conductivities that fall with increasing applied frequency to especially small (high-frequency) values. 5 Nonetheless, the density of sodium nuclei measured by 23 Na-MRI in epilepsy patients' lesions is taken as indicative of the severity of this affliction. 6 Taken together, the conductivity and 23 Na-MRI measurements indicate that the density of trapped sodium cations rises with the severity of the structural disruptions associated with epilepsy. In other words, epilepsy's etiology in these samples is consistent with (1) disorganized brain tissue, (2) high densities of trapped sodium atoms, and (3) low densities of mobile solvated sodium cations.
The qualitative features of these results are robust. However, we cannot estimate the fractions of excised brain tissue that should be regarded as normal and abnormal. Normal and abnormal tissues may be mixed within lesions. Furthermore, surgeons generally include some surrounding normal tissue when they excise a lesion. This effect will tend to exaggerate the fraction of normal tissue in an excised sample and its conductivity. As such, the prevalence of the distinctive features associated with abnormal tissue may be underestimated.
Our measurements do not allow us to reliably designate sodium cations as either intercellular or intracellular sodium cations. However, we note that the intercellular sodium concentration and volume fraction of the healthy rat brain (140 mM and 0.2) are quite different from the intracellular concentration and volume fraction (10 mM and 0.8). 8
ACKNOWLEDGMENTS
This study was supported by NIH Grant No. R21 NS108247-01 and the Weil Fund at the UCLA Semel Institute for Neuroscience and Human Behavior.
DATA AVAILABILITY
The data that support the findings of this study are available within the article.
|
2021-04-28T05:13:24.759Z
|
2021-04-01T00:00:00.000
|
{
"year": 2021,
"sha1": "c6fbe304a1c9f7d8355e6d1ff8fb50fcf0b09d79",
"oa_license": "CCBY",
"oa_url": "https://aip.scitation.org/doi/pdf/10.1063/5.0041906",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c6fbe304a1c9f7d8355e6d1ff8fb50fcf0b09d79",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
225237248
|
pes2o/s2orc
|
v3-fos-license
|
Combating non-communicable diseases in rural Lucknow: Are the skills and knowledge of female health workers adequate?
India, like other developing countries, is facing the surging trend of non-communicable diseases (NCDs). Studies show that 38 million (68%) of all global deaths and about 5.87 million (60%) of all deaths in India are caused by NCDs and estimates predict a further increase in the figure by 2020. 3, 4 In India, the problem is further exacerbated by an early age of onset of NCDs, multiple underlying conditions, lack of knowledge, and insufficient health care access. With India still grappling with the problem of infectious and parasitic diseases, the rising numbers of NCD cases resulting in a double burden of disease present a heavy burden on health facilities, and pose a substantial challenge to the public health system in the country. It has become an inevitable need to address this problem at the primary health care level and there have been recommendations for a community-based approach for NCD care. 7 Forecasting the burden for NCDs, the Government of India (GOI) rolled out an umbrella program in 2010 across the country, i.e., National Programme for Prevention and Control of Cancer, Diabetes, Cardiovascular Diseases, and Stroke ABSTRACT
INTRODUCTION
India, like other developing countries, is facing the surging trend of non-communicable diseases (NCDs) 1 . Studies show that 38 million (68%) of all global deaths and about 5.87 million (60%) of all deaths in India are caused by NCDs and estimates predict a further increase in the figure by 2020. 2,3,4 In India, the problem is further exacerbated by an early age of onset of NCDs, multiple underlying conditions, lack of knowledge, and insufficient health care access. 5 With India still grappling with the problem of infectious and parasitic diseases, the rising numbers of NCD cases resulting in a double burden of disease present a heavy burden on health facilities, and pose a substantial challenge to the public health system in the country. 6 It has become an inevitable need to address this problem at the primary health care level and there have been recommendations for a community-based approach for NCD care. 2,7 Forecasting the burden for NCDs, the Government of India (GOI) rolled out an umbrella program in 2010 across the country, i.e., National Programme for Prevention and Control of Cancer, Diabetes, Cardiovascular Diseases, and Stroke (NPCDCS). Considering the critical shortages in healthcare workforce of the country, the GOI has envisaged the Female Health Workers (FHW), especially, Accredited Social Health Activists (ASHAs) and Auxiliary Mid Wives (ANMs), for the last mile delivery of health services under the programme. 8,9 FHWs are set to become the backbone of primary health care service in the country due to the cost-effectiveness in comparison to other cadres of the health system and their success in delivery of essential maternal and child health, family planning and nutritional health services in the country. 8,10,11 Role of ASHAs under NPCDCS is to conduct a 'Complete Community Based Assessment' for NCD screening, to identify individuals with high risk behaviours, raising knowledge about NCDs and health promotion; additionally, ANMs also undertake 'Population-Based Screening' for various NCDs at the sub-centre level, follow-up of patients with NCDs, referring the individual who needs confirmation, reporting and recording of data.
ANMs backed up with ASHAs should have the skills to use the BP apparatus, glucometer, and to conduct 'Clinical Breast Examination' for the screening process. 12,13,14,15 The proportion of ASHAs trained to support population-based screening of NCDs is 42.5%. 16 There is little information describing the processes of training and supervision required to integrate and orient the female health workers into NCD care services. 17 Available literature shows that FHWs lack essential knowledge regarding chronic diseases. 18 Moreover, studies regarding their effectiveness in NCD prevention and intervention delivery in developing country settings are limited. 19 This study attempts to fill these gaps by studying knowledge and skills regarding NCDs among FHWs in rural Lucknow.
Objectives of the study were to explore the knowledge about NCDs among ASHAs and ANMs in rural Lucknow and Assessment of skills related to NCD screening among ANMs in rural Lucknow.
Study design
Cross-Sectional Study design was selected for the study.
Study area
The present study was conducted at Lucknow, Uttar Pradesh, India.
Study period
The study was conducted from June 2018 to August 2018. One month of the study period was utilized for review of literature, development of interview schedule and pilot testing, two months for data collection, and one month for data compilation and analysis.
Study universe
The study universe consists of female health workers (FHWs) of Lucknow.
Study population
FHWs (ASHAs and ANMs) enrolled under the Public Health System of Lucknow.
Study unit
Individual ASHA and ANM enrolled under Community Health Centre (CHC), Sarojini Nagar, Lucknow.
Study setting
CHC selected in the study district of Lucknow.
Inclusion criteria
ASHAs and ANMs who gave consent for the study.
Exclusion criteria
ASHAs and ANMs did not respond/come to the CHC when called for three consecutive times.
Sample size and sampling procedure
The field practice area of the Department of Community Medicine, KGMU, under CHC Sarojini Nagar was selected. A total of 199 ASHAs and 67 ANMs were enrolled under the concerned CHC during the study period. All the ASHAs and ANMs were prior informed to come to the CHC. The CHC was visited on different days for data collection. ASHAs and ANMs who did not respond when called for three consecutive times were excluded from the study. Informed consent was obtained from the participants. They were briefed about the problem statement and its various aspects. Data was collected using a pre-designed, pre-tested, semi-structured questionnaire. A total of 152 ASHAs and 34 ANMs fulfilling the inclusion and exclusion criteria were personally interviewed.
Tools for data collection
A pre-tested, semi-structured, questionnaire containing information on the biosocial characteristics of the participants; questions pertaining to risk factors, symptoms, non-pharmacological ways of management, screening methods for NCDs (cervical cancer, breast cancer, diabetes mellitus, cardiovascular diseases, stroke); a checklist was used to assess the steps taken by ANMs to measure blood pressure, blood sugar, and perform breast examination. This checklist was developed according to their training module.
Digital blood pressure monitor (Citizen: REF CH-432) was used to assess the steps taken for blood pressure measurement.
Glucometer (ACCU-CHEK Performa, NC): was used to assess the steps taken for blood sugar measurement.
Data processing and analysis
Data was processed and analysed using SPSS 24.0. Descriptive statistics were presented as the frequency with percentages (for categorical data). Findings were also presented through graphs. Association between categorical variables was tested using the Chi-Square test. Binary logistic regression analysis was used to identify the factors for the outcome variables.
Knowledge about NCDs
The questionnaire comprised of 25 questions to assess the knowledge regarding the aforementioned NCDs. Each response was assigned a score of '1' to correct response/s, '0' to incorrect or don't know. Scores were added to get a total score, the maximum for which was 45. The median score was calculated, which was 13; respondents were graded as per their scores.
RESULTS
A total of 152 ASHAs and 34 ANMs were enrolled for the study. Among ASHAs, majority of them were in the age group of 31-40 years (49.3%) and had education up to matriculation (67.8%). Whereas among ANMs, the majority of them were in the age group of >40 years (67.6%) and had the education of higher secondary and above (88.2%). (Table 1) While all the ANMs knew the meaning of NCDs and were able to name Diabetes and Hypertension as NCDs, only about half of them named Cardiovascular Diseases (CVD) (47.1%) and Stroke (44.1%) as NCDs. Among ASHAs majority of them (83.6%) could explain the meaning of NCDs. About three-fourth of them could name Diabetes (78.9%), and Hypertension (72.4%) as NCDs. Only a few of them (6.6%) and a lesser number (3.9%) could name CVD and Stroke as NCDs respectively; less than one-fifth (14.5%) of them couldn't name any NCD. (Table 2) Table 2).
The only risk factor mentioned for Diabetes was obesity; very few ASHAs (3.3%) and ANMs (5.9%) were able to allude so. Increased frequency of urine was most frequently mentioned symptom by ASHA (30.9%) and ANMs (44.1%); however, about two-third of ASHAs (63.2%) and less than half of ANMs (38.2%) could not name any sugar intake was most frequently mentioned non-pharmacological measure for the disease management by ASHAs (82.2%) and ANMs (91.2%). About half of the ANMs (52.9%) and very few ASHAs (4.6%) could correctly tell the level of random blood sugar above which a patient is referred to the PHC ( Table 2).
Regarding CVD, Hypertension was the most frequently named risk factor by ASHAs (61.8%) and ANMs (79.4%); likewise, anxiety was the most frequently named symptom by ASHAs (45.4%) and ANMs (79.4%). Reduced salt consumption was most frequently mentioned non-pharmacological measure by ASHAs (64.5%) and ANMs (88.2%). All the ANMs and nearly half of the ASHAs (42.8%) could correctly tell the level blood pressure above which a patient is referred to the PHC ( Only risk factor known for the development of stroke was hypertension, by ASHAs (61.2%) and ANMs (23.5%); similarly, paralysis was the only symptom named by ASHAs (23%) and ANMs (47%). Regular physical activity was mentioned as a non-pharmacological measure for prevention of stroke, by ASHAs (3.3%) and ANMs (8.8%) ( Table 2).
About half (41.2%) of the ANMs and a lesser proportion (15.1%) of the ASHAs had received training for NCDs within the last one year of the study (Table 3).
Though the majority of the ANMs (79.4%) were able to correctly demonstrate all steps of measurement of blood pressure, only about half of them (47.1%) were able to demonstrate correct steps to measure blood sugar; less than one-fourth of ANMs (17.6%) were correctly able to demonstrate all the steps of Clinical Breast Examination (Table 4).
Among the FHWs, knowledge about NCDs was found to be higher for those with the age more than 40 years (77.3%), with the education of higher secondary and above (69.9%) and also amongst those who had received previous training for NCDs (91.9%). Association of the score with all these factors was statistically significant (Table 5).
Binary logistic regression analysis was used to compute predictors of knowledge about NCDs (based on score) using various study variables. For the total score, the model predicts that respondents who had previous training for NCDs were 2.99 times more likely to get higher score [OR=20.072 (5.339-74.624), p<0.001]; also, respondents who had an education of senior secondary and above were 1.4 times more likely to obtain higher score [OR=4.070 (1.491-11.113), p=0.006]; similarly, respondents who were in the age group of more than 40 years were 1.6 times more likely to get higher score [OR=5.094 (2.452-10.583), p<0.001]. About one-third (36.8%) of the ASHAs and all the ANMs received average and above score for knowledge regarding NCDs.
DISCUSSION
The primary objective of this study was to assess the knowledge regarding non-communicable diseases among female health workers (FHW) of rural Lucknow.
Although all the ANMs had above average score for knowledge regarding NCDs, only about one-third of the ASHAs had above average; similarly, in a study conducted in Andhra Pradesh, medical officers also perceived that ASHAs do not have the requisite knowledge to provide NCD services. 20 Obesity was the only risk factor identified by the FHW for the development of diabetes. Increased frequency of urine was most frequently named symptom, which was named by only one-third of the health workers. More than half of the FHW could not name any symptom. Only a meagre proportion of FHW could name delayed wound healing, polydipsia, polyphagia as a symptom, which spots the gap in knowledge among them. Similarly, in a study conducted in Karnataka, baseline knowledge regarding diabetes was found to be inadequate and training was found to impart a positive effect on knowledge. 21 Although, two-third of the ASHA and ANMs were aware of hypertension as a risk factor of CVD, only about half of them knew the range of normal blood pressure and a staggering lesser proportion of ANM i.e. 17 More than two-third of the FHW could not name any risk factor for cervical cancer; a dismal number of them were able to identify multiple sexual partners and early marriage as a risk factor. One-third of the ASHAs and two-third of ANMs knew about the method for early detection of cervical cancer, and very few ASHAs knew about the availability of the vaccine for protection against cervical cancer. Findings similar to ours were observed in a study conducted in Lucknow, where about two-third of the ASHAs scored below average. 23 We found that about one-fourth of the In the present study, knowledge score for NCDs was found to be higher among those who had received previous training for NCDs; similar results were found in various studies conducted in India and abroad. 20,22,23 Skills for measuring blood pressure, blood sugar and breast examination were found to be low among the ANMs. Similar findings were observed in a study conducted in Himachal Pradesh, where a contracted proportion i.e. 2% of the health care workers were able to measure blood sugar and do a breast examination for cancer screening. 26
CONCLUSION
In the present study, the majority of the respondents were found to have poor knowledge regarding NCDs. Additionally, older age, training and higher education are found to be associated with good knowledge. In India, the educational criteria for recruitment of ASHAs are that she should be a literate woman with due preference in selection to those who are qualified up to 10 Standard. The proportion of ASHAs trained in NCDs is less than fifty percent as per the survey of Government of India.
Hitherto, as education of recruited ASHAs could not be addressed; it is important that there is a need for training of grass root level workers for them to correctly identify the symptoms of various NCDs. without them knowing the risk factors, symptoms, and preventive measures; it is not expected of them to educate the community members regarding NCDs. With proper training they can educate women, men and adolescents regarding the determinants of NCDs and various associated risk factors like unhealthy diet, physical inactivity, intake of tobacco, alcohol, and stress, etc.
Finally, the overall prevalence of poor knowledge in the present study was much higher and since a meagre number of studies were conducted on the topic in the Indian subcontinent in the past, it is suggested that further studies will have to be undertaken on the subject.
Strengths and limitations
Each participant underwent an informative session wherein they were briefed about the problem statement and how they can make a recuperative contribution by answering honestly to the questions.
Maintenance of heterogeneity was tried by selecting different groups of FHWs, i.e., ASHAs and ANMs.
The results were shown for both cumulative and individual groups. However, there are a few limitations to the study.
A convenient sampling technique was used for feasibility. It may not be right to generalize the results to all parts of India due to the diversity in education, quality of training, and other factors.
|
2020-09-03T09:05:19.021Z
|
2020-08-28T00:00:00.000
|
{
"year": 2020,
"sha1": "9cfc2e5954c2a511c67b30f60d0e8af8fbd6a04d",
"oa_license": null,
"oa_url": "https://www.ijcmph.com/index.php/ijcmph/article/download/6712/4231",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "28a4e8e6444754e1fa4e9ec882cd948c4632d8c0",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
255595791
|
pes2o/s2orc
|
v3-fos-license
|
Topics in Contextualised Attention Embeddings
Contextualised word vectors obtained via pre-trained language models encode a variety of knowledge that has already been exploited in applications. Complementary to these language models are probabilistic topic models that learn thematic patterns from the text. Recent work has demonstrated that conducting clustering on the word-level contextual representations from a language model emulates word clusters that are discovered in latent topics of words from Latent Dirichlet Allocation. The important question is how such topical word clusters are automatically formed, through clustering, in the language model when it has not been explicitly designed to model latent topics. To address this question, we design different probe experiments. Using BERT and DistilBERT, we find that the attention framework plays a key role in modelling such word topic clusters. We strongly believe that our work paves way for further research into the relationships between probabilistic topic models and pre-trained language models.
Introduction
Pre-trained language models (PLMs), e.g., ELMo [35], Generative Pre-trained Transformer (GPT) [37], PaLM [11], and Bidirectional Encoder Representations from Transformers (BERT) [14] are pre-trained using large amounts of text data [24], for instance, BERT has been pre-trained on the BookCorpus and Wikipedia collections. During the domain-independent pre-training process, these models encode a variety of latent information, for instance, semantic and syntactic properties [57], as a result, these models can make reliable predictions even under a zero-shot setting in different applications [20,41,43]. While the pre-training process is computationally [51] and financially expensive [47], these models can be cheaply fine-tuned to reliably handle different downstream tasks such as document classification [1] and information retrieval [61,50], a process that is commonly referred to as transfer learning [32]. For instance, BERT has shown strong performance in natural language understanding [63], text summarisation [25], document classification [10] and other Natural Language Processing (NLP) downstream applications [43].
Another class of models that continues to dominate the text mining landscape are probabilistic topic models (PTMs) [8,7]. These models are probabilistic approaches toward determining dominant topics in a text corpus in a completely arXiv:2301.04339v1 [cs.CL] 11 Jan 2023 unsupervised way. A latent topic is described as a probability distribution of words. Latent Dirichlet Allocation (LDA) [8] is a popular model for discovering topics. In LDA, the model learns to represent a document as a mixture of latent topics and each topic is represented by a mixture of words. When LDA is viewed as a matrix factorisation model, given a term document co-occurrence matrix as input and the number of topics, the model factorises the matrix into two lowdimensional matrices that are word topic and document topic representations. The word topic matrix captures the importance of words in the vocabulary of each topic whereas the document topic matrix captures the topic distribution in every document. While LDA has been a popular model that is based on Bayesian learning, a class of linear algebra-based model called Non-negative Matrix Factorisation (NMF) [56,26] has become equally popular to learn topics [33].
In [43], the authors dissected BERT to understand the property of every layer. They find that lower layers, i.e., layer 1 or 2 capture the linear word order, while the BERT's middle layers learn the syntactic information reliably and the higher layers capture the contextualised information. The authors in [49] and [45] showed that BERT word embedding clustering via simple algorithms such as k-means results in word clusters as if they are learned by a topic model. The authors conducted a series of qualitative probe experiments to find out that most of the word clusters of BERT resemble what is often discovered by the LDA model. While these studies make relevant observations, what is not well studied is how the topic information is encoded at the time of pre-training given that BERT or any other contextual language model is not designed to model topical word clusters. In this work, by conducting different probe experiments, we answer how BERT and DistilBERT [44] can capture clusters of words that resemble what is learnt by topic models. We find that it is the attention [4,9] mechanism in these language models that plays a key role in modelling what resembles word topics as discovered by the topic model.
Related work
The main goal of PLMs [31] is to simulate human language understanding by finding the most probable words sequence and patterns. The traditional language model used probability distribution to predict the next word, but they were not very scalable such as those based on unigram, bigram or trigram language models [36]. The recently developed PLMs are trained using large amounts of text data where some of them exploit a strategy called masked language modelling in a self-supervised way. Once these models have been trained, they have been applied in a wide variety of applications. The key advantage of PLMs is that they can be applied on different downstream tasks [15] reliably.
BERT has been developed with stacked transformers [52] layers where each layer captures different properties in text data, e.g., some layers are ideal to capture semantic information [53,48]. Transformers consist of encoder-decoder structures. The encoder transforms the sequence of input tokens into a high-level dimension. Decoder predicts input data from encoder [18]. However, in BERT (a) Attention mechanism in BERT via visualisation in Layer 12. We observe that words that are central to the context are assigned high attention weights.
(b) Words, ordered by decreasing probability, obtained from the LDA model.
only the encoder part of transformers has been used. There is an important concept in BERT called attention that assigns weights to different input features given their importance in the underlying task. One example is: given the text about cats, the model will pay more attention, via attention weights, to words such as fur, eyes, etc. BERT's attention has also been studied in [12] where the authors find that different attention heads focus on different aspects of language, e.g., they find that heads direct objects of verbs, determiners of nouns, objects of prepositions, and objects of possessive pronouns with far greater accuracy. While they have studied the syntactic and semantic information encoded in different attention heads, they have not separately probed latent topics as learned by the topic models such as LDA and NMF. In Figure 1a, we depict how attention works obtained via a popular visualisation tool 3 . We input two sentences in sequence, where the first sentence "The player plays football." is followed by the second sentence "Football is played in a stadium.", and both describe the sport football. The visualisation tool depicts the case when we select the token "football" in the first sentence and how other semantically related tokens such as "football", "stadium", and "played" are highlighted with high attention weights.
Topic modelling is a machine learning technique that automatically discovers hidden topics in unlabelled data. A topic is defined as a probability distribution of words. While these topic models have been inspired by the latent conceptbased models such as Latent Semantic Analysis (LSA) [13] and Probabilistic Latent Semantic Analysis (pLSA) [21], Latent Dirichlet Allocation (LDA) [8] has been widely applied to discover latent topics because it addressed some of the fundamental challenges in LSA and pLSA such as scaling on large datasets and overfitting. While in [27], the authors demonstrated that static word embeddings are related to SVD, which is the core algorithm used in LSA, what we demonstrate here is that models such as PLMs implicitly learn latent topic information as encoded by the PTMs.
LDA has been trained considering the exchangeability [17] assumption meaning that word order does not matter in a document. These models describe documents as a mixture of words and each document comprises a mixture of topics defined by the user. Note that BERT does not model document-level information; there are extensions such as Sentence-BERT (SBERT) [40] to model documents.
In Figure 1b, we depict a typical output obtained from LDA using a freely available online topic modelling visualisation tool 4 . We can observe from this output that there are five top-ranked probability words in some topics that are indexed by topic labels as discrete numbers. From topic index 57, we can infer that the topic describes computer or mobile applications and their development. Topic number 38 describes video gaming.
BERT has demonstrated state-of-the-art results in many NLP downstream tasks, such as natural language inference and information retrieval. Some previous studies have emphasised the importance of contextual information as an additional feature of topic modelling. In [3], for example, the importance of sentence contextual representation and neural topic model was investigated. In SBERT [40], embedding representation was used as the input to the prodLDA [46] neural topic model. If an input document length exceeded the SBERT predefined length, the rest of the document would be omitted. Despite this limitation, the model produced a higher coherence score when compared to Bag-of-Words (BoW) representation embedding. Some other studies have focused on how, and if, adding topic modelling information to a BERT model can lead to an improvement in its performance. In a study conducted by Peinelt et al., [34], they have used topic modelling to improve the BERT performance of semantic similarity domain applications like question answering. They have used BERT-base final layer's [CLS] token embedding as the corresponding embedding of an input document. Wang et al., [55] have argued that BERT contextual embedding can be improved by adding topical information to it. In their study, BERT embedding was derived from topics in the corpus. The findings of this research suggest that a word vector representation is equal to the weighted average of different topical vectors. If a topic has high importance in a corpus, words that are related to that topic gain higher importance.
In another related research conducted by [23], topical text classification was applied to a scientific domain dataset. The authors compared the findings of their research with SciBERT [2], which is a pre-trained language model based on BERT, but on scientific documents. Concatenation of BERT embedding and document topic vector was used as an input to a two-layer feed-forward neural network. In a recent study, [49], the role of BERT embedding was examined from a different perspective. This research argued that clustering token-level BERT embedding shares many similarities with topic modelling. The authors used different PLMs such as BERT, GPT-2 [38] and RoBERTa's [28] last three layers of embedding. This work found that except RoBERTa, BERT and GPT-2 word-level clustering resulted in clusters that resemble close to those obtained using the LDA model. While LDA learns topics as a probability distribution of words, the word clusters obtained by clustering token-level embeddings in PLM cannot be confused with a probability distribution of words. What the authors showed is that there are some similarities between the word clusters of a PTM when compared with the clusters obtained from a PLM.
While the works mentioned above demonstrate important relationships between PTMs and PLMs, what is currently lacking is a further understanding of how latent topics are encoded in the PLM vectors and what component helps in encoding this information. There are works mentioned above that have trained latent topics with pre-trained language models in a unified way. The question is whether it is needed to learn latent topics with pre-trained language models again. While these works have shown quantitative improvements, it is unclear how latent topics are helping them improve upon the results.
Probe Tasks
The problem that we intend to study in this paper is whether latent topic information is automatically encoded in contextualised word embeddings. While it is not explicitly evident that latent topic information is encoded, we must design probe tasks. Our key goal is thus to understand how PLMs such as BERT and DistilBERT can discover word clusters that are often discovered by PTMs when they are not specifically designed to model such information. To this end, we first chose to study in more detail the role that attention heads play in the PLM model. It is because just as in a topic model, words that are central to the document's global context are assigned a high probability and words that are central to a topic are assigned a high probability. For instance, if the document is about "sports", words such as "football", "goal", and "player" will have a high probability in that document. Similarly, these words will occur with a high probability in the topic that is about sports. The attention mechanism too shows similar behaviour in the document where words that are central within the given contextual window are assigned high attention weights. Attention weight specifies the importance of a particular word when it is accompanied by other words [12] in a certain pre-defined contextual window.
We consider BERT-base uncased and DistilBERT-based uncased models as our PLMs because of their popularity and computational ease. We also know that the LDA model outputs word and document topic representations [8]. Given the number of factors or latent dimensions, NMF factorises the co-occurrence matrix into two low-dimensional matrices where one matrix encodes word clusters and the other matrix encodes document clusters. Since language models capture word-level patterns, we thus choose word topics in LDA and NMF. Since both LDA and NMF can explicitly be assigned to soft clusters based on their probability values, in the case of the attention representations, we must cluster them using a soft clustering algorithm. This would help us produce word clusters with cluster assignments.
There are other components that we could also study such as the role played when different transformers layers when stacked together. However, previous studies have already found out that the different layers capture different proper-ties of text data, e.g., in BERT, lower layers capture linear word order, middle layers capture syntactic information whereas higher layers capture semantic information. None of these studies has found that word clusters resembling latent topics are also modelled by one of these layers after thorough experimentation. As a result, given their findings, we focus on the attention heads in PLMs first.
In BERT-base, there are 12 layers, each layer containing 12 attention heads. The attention head computes attention weights between all pairs of word combinations in an input sentence. Attention weight can be interpreted as an important criterion when considering two words simultaneously. For example (weather, sunny) pair's attention weight is higher than the (weather, desk) pair. It is because when BERT is trained on billions of tokens (weather, sunny) combinations occurred more frequently than other words such as "desk". Similarly, in the LDA model, if words such as "sports" and "football" occur, they will be assigned a high probability value in the word topic. DistilBERT is also based on the BERT-base model but is much lighter weight with respect to its parameters. It has been obtained after a process known as knowledge distillation [29,19] where the original bigger model known as the teacher was used to train the lighter-weight compressed student model to mimic its behaviour. It was found that in the case of DistilBERT, it retained most of BERT's advantages with a much-reduced parameter set.
Using two publicly available benchmark datasets, we conduct two different probe tasks to demonstrate the generalisability of our findings. In the first probe task, we conduct word-level clustering on the representations obtained from PTM and PLM models and compute the coherence measure which has been popularly used in topic models to evaluate the quality of the topics. In the case of the language models, we extract attention weights from each layer of the model and we obtain the word-attention representations for every word in the vocabulary. We then cluster these attention vectors using a clustering algorithm where the attention vectors are used as features. Through this attention clustering, we expect that words that are semantically related are clustered in one cluster. The motivation is that if the word clusters contain thematically related words, the clusters will demonstrate a high coherence measure. While there have been debates around the usefulness of coherence measure [22], in our study, we use the same measure to compare all models quantitatively.
We intend to probe if there is a comparable coherence performance between a layer of PLM and the word-topic representations obtained from PTM. By comparable, we mean whether the coherence results are numerically close to each other. If the coherence results are comparable, we can expect that in terms of the thematic modelling of words, the language model and the topic models are learning semantically related content. While the coherence probe task might not completely be relied upon, we design an additional probe task to find out the word overlaps between the word clusters obtained from the PLMs and PTMs. Our motivation is that if the coherence value between the clusters is high then there must be a reliable overlap between the words in the clusters. Since the higher layers, 10, 11 and 12 in the case of the BERT-base model capture semantic information more than the lower layers, we expect that the clusters of words in the high layers of the language model will show higher commonality with those clusters that are learnt by the PLMs.
Experimental Settings
Datasets: We have used 20 NewsGroups (20NG) and IMDB datasets which are two popular datasets commonly used in the text mining community. The 20NG dataset contains about 18,000 documents in 20 news categories after removing duplicate and empty instances. IMDB dataset contains 50,000 movie reviews that have been labelled as positive or negative. The 20NG dataset contains several long documents whereas IMDB contains relatively short documents with relatively more noisy text.
Text preprocessing: In the case of the PTMs, we have followed a common preprocessing strategy such as the removal of the stop words, the removal of punctuation, and non-ASCII characters. Through our experiments, we have found that if we do not remove stopwords from text, they tend to dominate most of the topics including increasing the dimensionality of the semantic space resulting in high space and time complexities. While some workaround have been proposed to model natural language using PTMs such as using asymmetric priors, they can be computationally intensive on large datasets [54]. In the case of the PLMs, we let the default pre-processor handle pre-processing, for instance, the BERT-base model has the WordPiece tokenizer. Using NLTK [5], we conducted sentence segmentation.
PLM attention weights: For every word in the vocabulary, we obtain the word attention weights from the BERT-base uncased and DistilBERT models. As BERT uses wordPiece tokenisation, if tokenised sentence length is more than 512 tokens, the input sentence is split which is common in the literature. Attention weights of all tokens in a sentence would be stored. If a word appears in different sentences, the average of all words' attention is used which is also commonly done including taking their average embedding of their word pieces [59]. We have obtained attention weights from every layer of BERT. BERT attention weight has been defined as an average of all attention heads in each layer.
We have obtained attention weights from the vanilla BERT-base model. Besides that, we have also obtained attention weights from the fine-tuned version of the BERT model to gauge the role fine-tuning might play in the process. Fine-tuning was done on the text classification task using labels associated with labelled instances in the 20NG and IMDB datasets. Through cross-validation in the fine-tuning process, we present the results of the best-performing model on the test set with the ideal model parameters obtained via a 30% held-out dataset. We have followed the same configuration with the vanilla DistillBERTbase model. results. We have used the NMF model implemented in Gensim. According to [58], larger datasets tend to have more topics than smaller ones. As a result, we have chosen different topic pools in different datasets. We have not chosen the number of topics to be equal to the dimensionality of the word vectors obtained from PTMs because PTMs tend to encode a variety of information in their vectors, e.g., syntactic and semantic information. Besides that, having many topics larger than what we have chosen above tends to result in sub-optimal latent topics leading to the deterioration of performance. Clustering: We have used the soft Gaussian Mixture Models (GMM) [6] clustering algorithm on the embeddings obtained from PLMs. LDA is already a soft clustering model where probability values are used to assign soft clusters to instances [60]. In LDA, we can automatically obtain the word-topic assignments based on the probability values of words in each topic which is also true for clusters obtained via the GMM model. We used GMM because its implementation is widely and freely available in different software libraries.
Evaluation: In topic modelling, coherence measure has been widely used to evaluate the quality of the latent topics [30]. Coherence score "c-v" has been used in our setting which is available in the Gensim library. This measure has been adapted from the work of Roder et. al. [42]. We use coherence to measure the semantic relatedness of tokens in the word clusters obtained from both PLM and PTM models. We also use the number of word overlaps between the top-k words in clusters obtained from the two models to gauge the word overlaps among the clusters. We set k = 20 which gives a reliable trade-off between selecting the most thematically related top-k words and not choosing (general or noisy) words with low probability estimated in the word clusters. To compute the word overlap values, for every topic in PTM and every layer's word cluster in PLMs, we computed the overlap between the top-k words, followed by computing the "mode" value. While there are metrics such as entropy and exclusivity [49], we will use these metrics in the extended version of this paper.
Discussion
We have computed cluster coherence values on two different datasets. Given two clusters with their respective coherence values. If one cluster's coherence value is higher than the other, the one with the higher coherence values is regarded as a coherent cluster, for instance, in the case of text, the tokens in the coherent clusters tend to be semantically associated with each other. In both LDA and NMF models, we have varied the number of topics to demonstrate the impact of topic clusters. In Table 1 we present the topic coherence results in the 20 Newsgroups and IMDB datasets for the LDA and NMF models. We observe that in the LDA model when the number of topics is 20, we obtain the best coherence value of 0.518 in the 20NG dataset. In the case of the NMF model, the best coherence value is when the number of factors is 30 with 0.504 in the 20NG dataset. In the IMDB dataset, we also obtain the best coherence value when the number of topics is 20 in the LDA model with a value of 0.461 and the NMF model gives us the value 0.300 when the number of factors is 20.
PLM & 20NG dataset: In the case of the vanilla BERT-base model in Table 2 (left), i.e., 20NG dataset, we notice that when the number of soft attention clusters is 50 there is some comparable performance with the coherence results. Precisely, we read from the table that for VB50 the coherence value is 0.503 in layer 11. This coherence value is numerically close to 0.518 when the number of topics is 20, and in the case of the NMF model, it is approximately equal to 0.504 when the number of factors is 30. This suggests that both LDA and vanilla BERT-base attention word clusters are semantically coherent when the number of soft clusters is 50. We also notice that the contextual layers are mainly playing a key role in modelling such semantically close words, i.e., layer 11. When we refer to the word overlaps in Table 4, we notice that the top 20 word overlaps are also consistent with the BERT-base model in layers 7, 8, 9 and 11. It means that out of 20 words, there are 17 overlapping words.
Upon comparing the results of the fine-tuned version of the BERT-base model where the fine-tuning was done on the classification task, we notice that soft clusters 50 and 100 in Table 2 lead to comparable coherence performances obtained by the LDA and NMF models in Table 1. Precisely, we read from the table that when the number of clusters is 50 and 100, we obtain the coherence value of 0.508 and 0.503, respectively that again are numerically comparable to 0.518 in the coherence table for LDA and 0.504 for the NMF model, i.e., Table 1. While it would be ideal to have these coherence results be equal, such results are difficult to obtain considering noise in the data and the randomness involved when initiating the training process of these semantic models. What is interesting in the case of the fine-tuned version of the BERT-base model is that two layers show comparable coherence performances and both these layers learn contextual information.
When we look at the topic associated with "computing technology" in the 20NG dataset, we noticed that words such as "organisation", "com", and "nntp" were among the overlapping words which suggest that both BERT and LDA learn thematically the same words. While it can be argued that even simple clustering algorithms such as k-means might generate clusters that are coherent and with high-overlapping words, we have found out that k-means does not lead to coherent clusters and the word overlap count was also very low, for instance, in most cases we found the word overlap values to be sometimes 1, and most often, 0.
In the case of the vanilla DistilBERT model presented in Table 3 (left), we notice that the higher layers demonstrate the highest soft cluster coherence results. What we notice is that the contextual layers show a higher degree of cluster coherence comparable to performance with the LDA model than with the NMF model in Table 1, for instance, the vanilla DistilBERT version with 200 soft clusters shows a relatively comparable performance when compared with the LDA model in Table 1. It can be argued that in terms of the absolute numbers the results in Table 3 are much higher than in Table 1 when we only look at the highest DistilBERT layers values. One of the reasons is that different pre-processing strategies have been chosen in both models. However, this was unavoidable because including stop words in the PTM models would result in noisy topics. Note that other layers such as Layer 4, soft cluster 30, in the case of the vanilla DistilBERT model compare well with the LDA coherence results. Layer 4 in the case of the DistilBERT model compares reliably with the soft cluster 30 when we consider the NMF model.
PLM & IMDB dataset: In the IMDB dataset, Table 1 presents the ideal coherence value when the number of topics/factors is 20 for the LDA and the NMF models. For the LDA model, the coherence value is 0.461 and for the NMF model, the coherence value is 0.300. Referring to Table 2 (right), we see that the comparable LDA value is obtained in layer 6 in the vanilla BERT-base version when the number of soft clusters is 10. In the fine-tuned version, we see the comparable value in layer 8 when compared to the LDA model and when the number of soft clusters is 150. If we consider topic 30 in Table 1, we notice two comparable values in Table 2 in layer 12 which is a layer that captures contextual information more than any other layer when the vanilla soft clusters are 20 and 30.
In Table 4, most word overlaps occur in layers 5,9,11, and 12 and these results are consistent with the 20NG results where higher contextual layers have the maximum word overlap. We also notice that layers 6 and above have the most ideal coherence values indicating that if the clusters are coherent, they also have maximum word overlaps. It means that these clusters share common words. In DistilBERT, in Tables 3 and 4 we see that the NMF model tends to show comparable coherence values in the higher layers. In Table 4, we observe that the word overlaps are fairly uniformly distributed across layers. While the lower layers have shown to have maximum overlaps, we can notice that the upper layers too have a word similar overlaps. However, their coherence values are not comparable. It is because IMDB instances are short noisy sentences where the model seems to be performing not very reliably unlike the 20NG dataset. What is also noticeable from the results is that the fine-tuned version of the DistilBERT model does not show comparable coherence performance when compared with the NMF model. This could suggest that classification fine-tuning helps DistilBERT lose the latent topic information.
In summary: 1) the attention mechanism is an important component in the PLMs that help capture some patterns that are also captured by PTMs. 2) there is correspondence between the coherence results obtained from PLMs and PTMs because in most cases we obtain comparable coherence performance. 3) in PLMs, there are high word overlaps in the contextualised layers and clusters of words obtained from PTMs. 4) in most cases, it is the contextualised layer that captures the most commonality with PTMs.
One of the limitations of our work is that it does not experiment with other language models very different from BERT such as XLNet [62] and GPT-3 [16] to ascertain that similar conclusions could be also derived from them. However, what is important to note is that our conclusions point toward the importance of the attention mechanism rather than the way pre-training is done or the size of the dataset that has been used to pre-train the model, or the model design. We also have to verify whether the results are generalizable to even larger models such as BERT-large which requires much more computational resources to conduct this study.
We show another finding through Figure 2 where we demonstrate the importance of the attention mechanism and how topic weights (probabilities) and attention weights tend to focus on the same words in a given context. To generate the figure, we have taken an example from the IMDB dataset. In the BERT-base model, layer 11 is examined because it is the contextual layer and has the highest word overlaps in Table 4: BERT (left) and DistilBERT (right) attention word overlap with LDA. Fig. 2: Illustrating attention using a sentence from the IMDB dataset as an example. We have presented these results from the BERT-base layer 11 and DistilBERT-based layer 5. The number of topics/factors in the case of PTM is 20. The figure is used to demonstrate that these models tend to focus on relevant tokens within their context and assign lower weights to general tokens such as stopwords.
selected layer 5 given that it is one of the contextual layers and has one of the highest word overlaps in Table 4. We have selected the number of topics as 20 and the number of NMF factors as 20 which is based on the results obtained in Table 1. What we observe from the figure all the models tend to focus on the relevant keywords in the context, for instance, we observe that PLMs focus on the words such as "good", "effects", "terrible", "movie" that are relevant to the movie and the PTMs too tend to focus on the same tokens in this context. What we learn from the figure is that PTMs and PLMs, while they are different, both tend to focus on the relevant words in a given contextual window. This figure helps us to draw some relationships between the attention weights and the topic probabilities in that they focus on the important words only. We also notice that common words such as stopwords are given less weightage by the models.
While the authors in [49] have found out that the word clusters obtained from some PLMs tend to cluster the contextualised word vectors that resemble what is learned by a topic model, our result suggests that it is the attention mechanism that is playing a key role in obtaining such results which is the key contribution of our work. It can also be argued that the contextualised token embeddings obtained from a PLM model can lead to almost similar conclusions, in this work, we wanted to explicitly study the role of the attention weights.
Conclusions
Topic modelling has remained a dominant modelling paradigm in the last decade with several topic models developed in the literature [64]. Topic models were not only modelled using Bayesian statistics but also linear algebra-based such as the NMF model. While both these models are formulated differently, they both tend to exhibit similar clustering properties. With the development of PLMs, these models have now taken over the landscape in text mining and NLP because they have outperformed existing baselines. Recent research points out that word-level clustering on BERT embeddings results in word clusters that share a close relationship with those discovered using topic models. As a result, this motivated us to study the reason which component in the language model helps capture such topic information when the model has not been explicitly designed to model latent word topics. Through probe tasks, we find that it is the attention mechanism that plays a key role in modelling word patterns that resemble something that is also discovered using topic models. We strongly believe that our work helps add further insight into the relationships between topic models and PLMs including the role that is played by the attention mechanism in the language model. In the future, we will conduct a thorough theoretical analysis to find out the key theoretical similarities between a topic model and a PLM. We will also study how different PLMs other than those that are based on BERT encode latent topics using attention weights.
Our results are not only applicable to NLP and document modelling fields in general, but the results are also relevant to information retrieval. For instance, in an information retrieval setting, we can only use features obtained from PLMs to retrieve relevant documents without having to worry about latent topics features that would potentially increase the number of features that might even degrade the performance of an information retrieval engine. Besides that, we may be injecting more redundant features into the information retrieval model. Topic models have been shown to improve information retrieval results and PLMs have been shown to demonstrate even better results. This could be because PLMs already have encoded a variety of features in their rich vector space that includes latent topics. As a result, the improvement that we see also comes from topics implicitly encoded in the PLM attention vectors. We thus believe that our paper will have a significant impact in the information retrieval field too.
|
2023-01-12T06:42:44.159Z
|
2023-01-11T00:00:00.000
|
{
"year": 2023,
"sha1": "1470b23c089db20c222094d2d155853a2bd26f3c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1470b23c089db20c222094d2d155853a2bd26f3c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
1582851
|
pes2o/s2orc
|
v3-fos-license
|
Effects of metal treatment on DNA repair in polyamine-depleted HeLa cells with special reference to nickel.
Human cells depleted of the naturally occurring polyamines putrescine, spermidine, and spermine exhibit altered chromatin structure and marked deficiencies in DNA replicative and repair processes. Similar effects have been observed following treatment of normal mammalian cells with various heavy metal salts. In an attempt to better understand how metals interfere with normal DNA metabolic processes, a series of studies was carried out in which the toxicity and repair-inhibitory properties of various metals were evaluated in polyamine-depleted HeLa cells. Cytotoxicity of copper, zinc, magnesium, and cadmium was not altered in cells carrying lower polyamine pools. However, the sensitivity to nickel was markedly increased upon polyamine depletion, a condition that was readily reversed by polyamine supplementation. Nucleoid sedimentation analysis indicated that a greater amount of nickel-induced DNA damage occurred in polyamine-depleted cells than in normal cells, possibly serving as the basis for the increased sensitivity. Both polyamine depletion and nickel treatment result in decreased repair of DNA strand breaks and decreased cloning efficiency following X-ray and ultraviolet irradiation. Nickel treatment of polyamine-depleted cells resulted in synergistic sensitivity to both radiation treatments. None of the other metals tested enhanced X-ray or ultraviolet sensitivity of polyamine-depleted cells. Analysis of retarded repair sites following ultraviolet irradiation indicated those sites to be nonligatable in polyamine-depleted and nickel-treated cells, suggesting a block in the normal gap-sealing process.
Introduction
Considerable evidence now exists suggesting that heavy metals interfere with normal cellular DNA repair processes and that this may result in potentiation of the mutagenic, clastogenic, and carcinogenic effects induced by a variety of agents (1,2). Due to the tremendous complexities inherent in cellular metal interactions, the exact nature of this interference is likely to be very difficult to ascertain. However, some clues may be obtained from studies in which repair responses are measured in cells that have been additionally pharmacologically altered. For example, enhanced effects of metals on repair have been observed in human cells depleted of nonprotein thiols (3), the largest effects being observed with thiol-reactive metals. These findings are consistent with a number of interpretations and may serve only to underline the probable importance of cellular thiol balance for DNA repair.
The present study was similarly designed to examine effects of metals in human cells depleted of the naturally occurring polyamines putrescine (PUT), spermidine (SD), and spermine (SP). It has been previously demonstrated that treatment of human cells with a-difluoro-methylornithine (DFMO), an inhibitor of polyamine biosynthesis, depletes cells of PUT and SD, and results in altered chromatin structure (4), and deficiencies in repair of both X-ray-(5) and ultravioletinduced (6) DNA lesions. Moreover, DFMO-treated mammalian cells also exhibit altered sensitivities to a variety of chemical agents (7). The cause of these altered sensitivities is not clear but could relate either to modified chromatin structure or the deficiencies in repair in the polyamine-depleted condition. It was hoped that an examination of the effects of metals in polyamine-depleted cells might help to identify points at which metals interact in their modulation of cellular responses to DNA damaging insults. NiCI2 concentration (gM) Figure 1. Effect of nickel on colony-forming ability of polyamine-depleted HeLa cells. Untreated (m) or DFMOtreated (0) cultures were exposed to nickel for 2 hr in HEPES/glucose buffer and immediately harvested and reseeded at clonal density for colony formation. In some cases, 1 Polyami-ne Analysis Cellular polyamine levels were measured by reversed phase HPLC as described previously (8). Irradiation X-irradiation of cultures was carried out in complete growth medium in a TFI Bigshot X-ray unit (TFI Corp., West Haven, CT) run at 3 mA, 50 KV, 1.5 mm Be filtration, and generating 1.6 Gy/min. Ultraviolet irradiation was carried out on medium using a GE germicidal lamp emitting 1.2 J/M2/sec at 19 in. All media were removed during UV irradiation.
Materials and Methods
Nudeoid Sedimentation Analysis of DNA Strand Breaks and Repair Nucleoid sedimentation was performed basically as described by Cook and Brazell (9) with modifications described in detail elsewhere (3).
Structure ofRepairing Sites
To determine the molecular structure of aborted repairing sites, a modification of the procedure of Cleaver (10) was employed as described previously (6). Basically, the assay measures the ability of exonuclease III to remove repair-incorporated [3H]-bromodeoxyuridine from purified cellular DNA following treatment of that DNA with T4 DNA ligase. See footnotes to Table 5 for details.
Results
Inhibition of the intracellular polyamine biosynthetic enzymes, ornithine decarboxylase (ODC) and S-adenosylmethionine decarboxylase (SAMDC) by a-difluoromethylornithine (DFMO) (11) and MDL 73,811 (12), respectively, results in a predictable reduction in polyamine pools. Table 1 demonstrates that DFMO treatment markedly reduces PUT and SD but has little or no effect on SP pools. All three polyamines can be reduced by combined inhibitor treatment. These effects have been well studied and the above results are consistent entirely with what is known regarding modulation of cellular polyamine levels with these inhibitors. It was discovered that polyamine-depleted HeLa cells exhibited increased sensitivity to killing by NiCl2, whereas sensitivity to Cu , Zn , Mg and Cd was not markedly affected (Table 2). Since the Ni2+ sensitivity of double inhibitor-treated cultures was similar to that of DFMO-treated cultures, all subsequent studies involving colony-forming ability were conducted with DFMO-treated cultures. As shown in Figure 1, a 3-hr treatment with 1 mM 2+~~~2 PUT prior to Ni2 exposure completely restored normal sensitivity to Ni2 , whereas PUT addition after Nit+ exposure (at the time of reseeding) had no such protective effect. The basis for the increased sensitivity in polyamine-depleted cells appears to be an increased amount of Ni2+-induced DNA damage. Table 3 demonstrates that DNA single-strand breaks are seen in control HeLa cells only after exposure to about 225 pM NiCl2. In contrast, breaks are clearly observed in polyamine-depleted cells following doses as low as 25 jiM NiCl2 (lower doses were not examined).
Since, in these studies, cells are analyzed for DNA breaks after extended exposure to metal, breaks observed result both from direct Ni2-induced DNA damage and as a consequence of the excision repair process operating at those sites. To determine if the apparent increased damage was due to retarded break resealing, repair was monitored as a function of time. Table 3 shows that following NiCl2 exposures that induced roughly the same amount of damage, the rate of disappearance of breaks was similar in untreated and DFMO-treated HeLa cells. Thus it is likely that the elevation of single-strand breaks was due to increased yield of damage in polyaminedepleted cells. Polyamine-depleted HeLa cells exhibit increased radiosensitivity relative to normal cells (Figure 2). The Do (the reciprocal of the slope of the terminal linear region of the survival curve) was 4.25 and 2.8 Gy for control and DFMO-treated cultures, respectively. This translates to a reduction in the X-ray IC50 for killing from 3.72 Gy to 1.93 Gy. HeLa cells treated for 2 hr with 1 mM NiCI2 also showed enhanced radiosensitivity (Do = 3.50 Gy, X-ray IC50=3.10 Gy). Treatment of DFMOtreated cultures with 80 pM NiCl2 for 2 hr (a metal dose that resulted in equivalent lowering of colony forming ability to that of nonpolyamine-depleted cells) resulted in an even greater radiosensitivity (Do = 2.07 Gy, X-ray IC50= 1.51 Gy).
Polyamine-depleted cells have also been shown to exhibit retarded sealing of X-rayinduced DNA strand breaks (5). Figure 3 demonstrates that this inhibition is very apparent at short times after irradiation and that by 4 hr most breaks are sealed, as suggested by the return of nucleoids to the nonirradiated control position in the gradient. We have also previously reported that repair of X-ray breaks is retarded in cells treated with NiCl2 (13). To reexamine this in DFMO-treated cells, NiCl2 doses were chosen to minimize the background of DNA breaks induced by metal treatment alone. Thus, control and DFMO-treated cells were exposed for 2 hr to 200 and 25 pM NiCl2, respectively, prior to irradiation. Nucleoid sedimentation under these conditions was nearly normal and was used as the baseline for all subsequent X-ray studies. Figure 3 shows that Ni2+ alone has weak activity in retarding repair but appears synergistic in this regard in DFMO-treated cells with only 20% recovery seen at 4 hr postirradiation. Under these treatment conditions and X-ray dose (16 Gy) CFA was 4, 4, 0.5, and 0.09% for control, con-trol+Ni2+, DFMO, and DFMO+Ni2+, respectively. These studies suggest that Ni2+-induced radiosensitivity may be due to a potentiation of the already weakened repair response of polyamine-depleted cells.
DFMO-treated cells are also sensitive to UV irradiation (6). In the present studies, the UV dose required for 90% killing was reduced from 30 to 19 J/M2 in DFMOtreated cells (Table 4). Nickel did not markedly enhance UV killing of either control or DFMO-treated cells when administered at its IC50 dose in each case. Cells allowed to repair for 1 hr following irradiation exhibit repair-dependent DNA breaks which are greatly enhanced when repair occurs in the presence of repair inhibitors such as ara-C (Table 4). Both DFMO-and nickel-treated cultures exhibit more such breaks than controls and additivity is seen upon combined treatment. Thus, UV repair appears to be affected similarly to X-ray repair by Ni2+ treatment.
Cleaver (10) developed an assay for probing both the completeness of excision repair and the structure of aborted or retarded repairing sites. In that assay, cellular DNA containing repair-incorporated [3HI bromodeoxyuridine was purified from isopycnic cesium chloride gradients and digested with exonuclease III. Radioactivity released was assumed to be at sites that had not completed repair. If extensive DNA ligase treatment prior to exonuclease digestion reduced the amount of released radio- label, it was concluded that the incomplete sites were capable of being ligated. We used this assay previously to demonstrate that retarded repairing sites in DFMO-treated cells were not ligatable, i.e., gaps rather than nicks. Table 5 confirms these results and extends the findings to nickel treatment. Exonuclease III releases radiolabel from 2+ UV-irradiated Ni -treated cells and T4 DNA ligase does not reduce this. Consistent with the UV repair data above, nickel treatment of polyamine-depleted cells leads to further increased release of radiolabel. It is concluded from these studies that Ni2+treatment impedes repair most likely through inhibition of the gap sealing process.
Discussion
The present studies indicate that HeLa cells depleted of PUT and SD are hypersensitive to killing by Ni2+ but not by several other divalent metal cations. This sensitivity is not augmented by additional inhibitor treatments that also reduce cellular SP pools, suggesting that SP may not play a role in modulating sensitivity to nickel. Putrescine supplementation readily restores nickel sensitivity. In the polyamine-depleted state, Ni2+ apparently induces more DNA damage (Table 3), which probably accounts for Volume 102, Supplement 3, September 1994 1,C ara-C, 20 pM ND 290 ND = not determined. Cultures were irradiated with 2.0 J/M 2 UV2 and incubated for 1 hr in fresh medium containing no inhibitors (except in case of araC, in which inhibitor was added only during postirradiation incubation). After incubation, cells were harvested and analyzed for DNA strand breaks by the nucleoid sedimentation assay. alUV dose which reduces cloning efficiency by 90%. bRad equivalent breaks calculated from nucleoid position in gradients after correction, where necessary, for nickel-induced breaks. (15)(16)(17). Although the exact nature of the chromatin changes associated with DFMO treatment are not known, it is not unreasonable to assume that they might allow for greater nickel interactions with DNA. Inhibition of DNA repair processes by nickel has been previously reported (13,(18)(19)(20) but little is known of the nature of this effect. Thiol depletion in HeLa cells enhanced nickel toxicity approximately 40fold but did not markedly enhance the radiosensitization of those cells by nickel (3). In contrast, polyamine depletion enhanced nickel toxicity about 10-fold ( Figure 1) and also resulted in apparent synergistic effects on X-ray repair (Table 3; Figures 2,3) and at least additive effects on UV repair ( Table 4). The present studies do not allow a determination of how polyamine depletion accentuates the repair inhibitory properties of nickel. As argued above, however, it is possible that more uncomplexed Ni2+ is available for interaction with cellular macromolecules or repair enzymes. The recent finding by Hartwig et al. (20) that Mg2+ antagonizes the repair inhibitory effects of Ni + is consistent with the notion that metals might act through altering the catalytic function of repair enzymes. Polyamines may serve many of the same cellular functions as Mg2+ and have been shown to stimulate repair enzymes (21), presumably through stabilization of enzyme/DNA complexes. A likely scenario, then, is one in which repair enzymes have difficulty in interacting with DNA due to the deficiency of cellular polyamines following DFMO treatment. These enzymes are then further susceptible 2+ to interaction with other cations, e.g., Ni which may additionally compromise their catalytic functions.
|
2014-10-01T00:00:00.000Z
|
1994-09-01T00:00:00.000
|
{
"year": 1994,
"sha1": "8da4525e66604077c71ca8f1494589f41cfa95ef",
"oa_license": "pd",
"oa_url": "https://ehp.niehs.nih.gov/doi/pdf/10.1289/ehp.94102s351",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8da4525e66604077c71ca8f1494589f41cfa95ef",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
141051708
|
pes2o/s2orc
|
v3-fos-license
|
How Does Work Engagement Mediate the Association between Human Resources Management and Organizational Performance?
The aim of this paper is to understand how workers’ perceptions and behaviors contribute to understanding the association between human resources management (HRM) and organizational performance (OP). Over the past few decades, theory construction has lagged the intermediate linkages between HRM and OP, and, therefore, there are still many unanswered questions with regards to such an association. To sustain the HRM-OP link, the authors highlight the potential influence of employees’ work engagement (WE), with the aim of exploring some of the intermediating variables, focusing on the perceptions of employees’ attitudes and behaviors. This research emphasizes that line managers have a crucial role to play in stimulating employees’ efforts and in shaping HR-related outcomes. Line managers act as crucial intermediaries in determining how HR policies that lead to OP can be designed and administered. Nevertheless, line managers have the capability to disrupt or stimulate the system, which has a significant impact on employees’ engagement with the organization. The empirical research is based on a sample of 1,609 employees and 40 organizations and was carried out in two settings. Results suggest that line managers and employees’ perceptions of HR policies were positively related to line managers’ perceptions of OP. The results also support a path model, whereby WE strengthens HR systems’ association with enhanced levels of OP. The discussion reviews the implications of these results and suggests future directions for research.
INTRODUCTION
As a result of rapid changes and new trends in the business environment, the business world has been facing challenges and demands at a fast pace. Traditional sources of competitive advantage are necessary but are not sufficient (Savaneviciene & Stankeviciute, 2012a). As a result, research needs to explore new approaches to management and new social dynamics and ways to manage people, as well as to understand how these factors contribute to building and maintaining competitiveness (Gonçalves & Neves, 2012). Human resource management (HRM) represents a key organizational function to achieve competitive advantage (Boudreau & Ramstad, 1998), and its contribution to the overall organizational performance (OP) is increasingly acknowledged (Budhwar, 2000). This has led the researchers to look into those HR practices that are associated with OP (e.g., MacDuffie, 1995; Gooderham, Parry, & Ringdal, 2008), as well as other dimensions in the HRM system that are linked with performance, such as the HR process (e.g., Sanders, Shipton, & Gomes, 2014), HRM strength "as part of building theoretical rationales" (e.g., Ostroff & Bowen, 2016, pp. 197), or even attributions made of practices (Nishii, Lepak, & Schneider, 2008). In sum, the general view is that the way people are managed can make a difference (Colakoglu, Lepak, & Hong, 2006;Adeniji, Osibanjo, Omotayo, & Abiodun, 2013). Consequently, the current research aims at investigating the relationship between employee-manager perceptions. This study clarifies the gap of employees' work engagement and its relationship to managers' perception of performance. Additionally, results suggest that line managers are responsible for enhancing employees' work engagement, and this has an impact on performance.
Despite this growing evidence of the positive influence of HRM on OP, there are still many unanswered questions with regard to such an association (Purcell, Kinnie, Hutchinson, Rayton, & Swart, 2003). Some researchers have proposed that HR policies are associated with employees' outcomes (EO) through their influence on employee attitudes and behaviors (e.g., Huselid, 1995;Wright, McCormick, Sherman, & McMahon, 1999;Wright, McMahan, & McWilliams, 1994), however, this chain still needs to be empirically supported and explained.
The above association, and indeed the whole research stream is made more difficult by a single fact that has only recently been recognized: the role of employees has been largely neglected, which is quite surprising, as employees are the usual target of most HR policies and practices. Delmotte (2008, p. 107) captures this gap, when he says that "each employee makes his own construction of reality", which means that the content of HRM intentions are probably perceived differently by employees. Therefore, within the same HR policies and practices, different employees will have distinct perceptions of reality, and consequently will exhibit heterogeneity in behaviors and results. This new trend has less to do with denying the role of HR policies and practices, but is more about recognizing that human beings are active players in organizations, and, hence, variety in behaviors and performances are bound to happen everywhere, the whole time.
An ambitious challenge is also to expose the "way", i.e., which HR policies influence OP? Delmotte's quote points to the need to pay attention to the active role played by employees with regards, to the individual and social construction processes within organizations, including the way HR policies influence OP. Savaneviciene and Stankeviciute (2012b) had already alerted to the intervening variables that compose the "black box" in HRM, and subsequently researchers have been proposing various mediation variables in the HRM-OP linkage ( & Nishii, 2006). Some of the previous studies have provided the stepping off point for future developments, focused on the role of line managers. Line managers play an important role in determining the actual form that HR policies take in practice, which is likely to influence OP (Currie & Procter, 2005). Therefore, line managers become part of the system, with impact on the increase of performance . Furthermore, the way employees' attitudes are shaped is the key issue of all HRM and performance linkage models, and there has been a dearth of research evidence based on employees' responses to HR (Macky & Boxall, 2007).
The current research focuses not only on the HRM content, but also on HR practices, as it assumes that a variety of HR practices interact to shape employees' attitudes (Sanders, Dorenbosch, & Reuver, 2008). Furthermore, this research also explores how employees contribute to the HRM-OP relationship (Sanders, Shipton, & Gomes, 2014). This means that HRM is not only about the content of what is conveyed to employees, but it is also about how such content is conveyed to them, as employees' attribution and sense-making processes are affected by the means (especially line managers) used to communicate organizational messages (Kelley, 1973;Weick, 1979). This raises the problem of the match between line managers' views of HRM, and the corresponding views of their employees about the same object. This problem is largely unexplored, and, hence, the main goal of this paper is to analyze the relationship between, on the one hand, the differences in employees'/line managers' perceptions of HRM, and, on the other hand, employees' attitudes, behaviors, and performance.
THEORETICAL BACKGROUND AND HYPOTHESES
Early theorists writing about HRM have proposed that people have a basic need to understand behaviors and their main causes (Heider, 1958). Therefore, to understand what makes interaction meaningful, one needs to provide and relate actions to subsequent behaviors and attitudes (Kelley & Michela, 1980). The lack of explanation about how and why HRM influences OP is highlighted as being a critical limitation (Hutchinson, 2013), and it has been labelled by many as the "black box" of HRM.
Searching inside the "black box" requires specifying the HR causal chain (Purcell & Kinnie, 2007 , 1997). The effectiveness of practices, e.g., the daily enactment of HR philosophies is more important than the occurrence of HR policies (Schuler, 1992), e.g., formal statements of an organization's intent, which serve to directly and partially constrain employees' behavior and their relationship with their employer and their influence on employees' behaviors and attitudes (Hutchinson, 2013). According to Becker, Huselid, Pinckus, and Spratt (1997), HR policies influence the behaviors of employees, which are accordingly reflected in the performance of operational, financial, and share price outcomes. In this way, to understand "the relationship between HR practices and employee outcomes, it is critical to draw logical inferences concerning the HR-performance causal chain as a whole" (Kehoe & Wright, 2013, p. 369). But these are only inferences, which means that much is still left unexplained regarding how such connections unfold. The problem is amplified by the fact that employees' attitudinal and behavioral responses to a HR system largely depend on employees' perceptions of HR.
To understand this unresolved mystery (Gerhart, 2005), research needs to: i) elaborate on more precise mechanisms; ii) theorize deeply about HR policies; and iii) explore linkages with outcomes (Guest, 1997 Just as important as the ability, motivation, and opportunity provided to employees is the focus on their perceptions. From this basic premise, scholars begin to explore attributes about "why" these practices were implemented in the first place (Nishii, Lepak, & Schneider, 2008) and how they convey employee's expectations to line managers. In fact, employees modify their behaviors because of their calculation of anticipated outcomes (Chen & Fang, 2008). This calls attention to employees' perceptions in work settings, and it is now the time to highlight the importance of line managers in the HRM-OP linkage, as they may provide different experiences for employees, i.e., by shaping different affective HR reactions, or even enabling the discovery of different kinds of talent. Therefore, research has sought to identify the characteristics of what constitutes a favorable HRM-OP associa-tion (Kehoe & Wright, 2013), focusing on the relationship between employees and line managers, which is finally starting to unlock the "black box" of the relationship between HR policies and EO. The assumption of the AMO theory is that HR policies affect employees' abilities, motivations, and opportunity to participate, which, in turn, positively influences OP (Purcell & Hutchinson, 2007). Even though some policies contribute to commitment and job satisfaction, they are also mediated by the way management applies them, and by how these are embraced by employees. Guest (1997) suggests that to gain more promising performance, employees not only must be motivated at the individual level, but they also need to possess the necessary and right mix of skills, abilities, and knowledge. According to Harney and Jordan (2008, p. 227), theoretical and empirical research "suggest that these three independent system components shape individual and aggregate employee characteristics, thereby contributing to organizational success". Further, organizations need to develop HR practices that motivate staff to achieve wanted skills, abilities and desired behaviors ( . The implementation of HR policies by line managers is likely to have higher impact on employee behavior, motivation, and satisfaction than the design of HR policies by HR professionals, i.e., line managers occupy a central position in accomplishing organization goals and probably have higher and more direct impact on employees' behaviors and attitudes. As line managers are in close contact with employees daily, greater involvement and more effective control can occur (Budhwar & Sparrow, 1997).
In sum, line managers serve as critical intermediaries, shaping HR practices and overall performance. Good communication helps to keep internal processes running smoothly and helps to create superior relationships with people (Jyoti & Sharma, 2017).
They can provide employees with much more than just monetary incentives or other tangibles resources, and with their sense giving regarding intangible values and relationships to fully engage employees in their job and in the organization (Gruman & Saks, 2011; Smith, Plowman, & Duchon, 2010). The way the job is done, and the speed, care, innovation and style of job delivery, as well as other discretionary behaviors, are all associated with supervision, where line managers play a vital role in setting the direction, i.e., in influencing employee attitudes and behaviors by the way they put policies forward, and by creating a culture of success (Purcell, 2002). This delegation of HRM decisions to line managers will commonly result in a greater scope for disparity and inconsistencies between the policy formulated at HR department level, on the one hand, and, on the other hand, the decisions taken by line managers (McCarthy, Darcy, & Grady, 2010, p. 160). This means that line managers are faced with a possible role conflict in trying to reconcile their HR responsibilities, on the one hand, while also being open and accommodating to the realities of employee experiences, on the other hand (Harney, 2014). Therefore, line managers play a critical role in influencing employee attitudes and behaviors by the way they put forward designed HR policies into practice, and they can be essential in improving organizations' outcomes . Line managers play a key role by changing, reinforcing, or stimulating how employees perceive and interpret HR policies and the whole HR system. Line managers do not "just bring policies to life" (Hutchinson, Kinnie, & Purcell, 2002, p. 22), but they are compromized, in the sense that the way policies are implemented is related to how employees perceive these policies. This ongoing delegation of HRM implementation to line managers will certainly result in a "greater scope for disparity and inconsistencies between the policy formulated at a senior HR level and the actual decisions taken by line managers" (McCarthy, Darcy, & Grady, 2010).
In other words, although line managers can respond more effectively at the lower level (Budhwar, 2000), difficulties will also arise, due to various reasons, an example being line managers not willing to take up this responsibility or having to add HRrelated activities to several other actions already in course (Larsen & Brewster, 2003;Cunningham & Hyman, 1999;Martins, 2007). Line managers may even suffer exhaustion from assuming responsibility for HR tasks, or they can lack a broader organizational or long-term view. However, it is not unlikely that line managers should develop or adjust their own practices. In some situations, line managers are in close contact with employees and control the key environmental factors that moti-vate employees (Latham & Ernst, 2006
Line managers and their commitment to employees
Additionally, the way line managers implement HR practices will influence employees' perceptions of the effectiveness of HRM, which points to managers' effort and effectiveness in contributing to employees' engagement in the organization ( When line managers are willing to take up the responsibility of putting into practice the designed policies, then, supportive work environment will emerge. Furthermore, when successfully "executing performance appraisals, giving feedback, offering training to execute the job more accurately, and providing back up when a colleague falls sick will all give employees the feeling that they are supported and encouraged by their line managers to execute their job effectively, now, and in the future" (Gilbert, De Winne, & Sels, 2010, p. 7). Line managers can and should emphasize the importance of a positive teamwork environment at every level of the organization. The goal is to achieve discretionary behavior, i.e., employees working with diligence and dedication, taking the employee-organization relationship to the next level of trust, and developing a psychological contract (Besanko, Deanove, Shanley, & Schefer, 2013). Nevertheless, the perception of a teamwork environment should be a one-on-one relationship between, on the one hand, line managers and each of their employees, and, on the other hand, employees among themselves. Additionally, a positive workplace is essential for employees to get involved with the organization's mission and values. Furthermore, an upbeat team-based environment characterized by sharing and open discussions will allow employ-ees to contribute with their views and perspectives, and, hence, organizational goals are more likely to be attained. Even so, the way employees react and perceive line managers' intentions will be heavily affected by the relationship between the two (Boxall & Purcell, 2008). Based on this statement, we can formulate the following hypotheses: H1: There is a relationship between employee-manager perceptual differences regarding HR policies, and manager's perception of performance. More specifically, the smaller the difference between employees' perceptions of HR policies and managers' perceptions of HR policies, the higher the managers' perceptions of performance.
H2: There is a positive relationship between employee-manager perceptual differences regarding HR policies and employees' work engagement. More specifically, the smaller the difference between employees' perceptions of HR policies and managers' perceptions of HR policies, the higher the employees' work engagement.
H3: There is a positive relationship between employees' work engagement and manager's perception of performance.
H4: Employees' work engagement mediates the relationship between employee-manager perceptual differences regarding HR policies (independent variable) and manager's perception of performance (outcome variable). Figure 1 presents the overall representation of the theoretical framework that depicts the relationship between HR policies, WE, and OP.
Mediation model
Supported by the literature review, the proposed mediation model is aligned with the guidelines provided by Baron and Kenny (1986) concerning the definition and status of a mediator. The mediation model explains why employee-manager perceptual differences regarding HR policies are related to managers' perception of performance, in which one variable is hypothesized to be intermediating the relation between an independent antecedent and an outcome (Fairchild & Mackinnon, 2009).
This model, presented in Figure 1, has three variables and two causal paths feeding into the outcome variable (Y), i.e., the direct impact of the independent variable (X) on the "path c", and the impact of the mediator (M) on the "path b" (Baron & Kenny, 1986).
METHOD
According to Baron andKenny (1986, p. 1176), a variable acts as a mediator when it follows these criteria: i) employee-manager perceptual differences regarding HR policies are correlated with managers' perception of performance (path c), i.e., a simple regression analysis with X predicting Y Lastly, in step iv), mediation is likely to be supported if the effect of employees' work engagement (path b) remains significant after controlling for employee-manager perceptual differences regarding HR policies. If employee-manager perceptual differences regarding HR policies are not statistically significant, when employees' work engagement is controlled, then, the finding supports total mediation. However, if employee-manager perceptual differences regarding HR policies are significant, then, the finding supports partial mediation. Sobel (1982) tests were also conducted to further support the mediation model, as proposed. This test is designed to assess whether a mediating variable (employees' work engagement) carries the effects of the independent variable (employee-manager perceptual differences regarding HR policies) to a dependent variable (managers' perception of performance). The computed statistic measures the indirect effect of the independent variable on the dependent variable by way of the mediator. Reported p-values are obtained from the unit normal distribution, under the assumption of a two-tailed test of the hypothesis that the mediated effect equals zero in the population using -1.96 as the critical value, which contains the central 95% of the unit normal distribution (Preacher & Hayes, 2004). Under this test, a significant p-value indicates support for mediation. Finally, Aroian's (1944/1947) test of mediation was used to further verify the results, as provided in Table 5.
CONSTRUCTS AND MEASURES
Employee-manager perceptual differences regarding HR policies We adhere to the research stream on HRM-OP link that uses for the first construct the differences between line managers and employees, as far as HR policies are concerned. The appropriateness of deviation scores for estimating differences between measures units continues to be a source of diversities (Edwards, 2001). However, according to Smith and Tisak (1993), deviation scores are both reliable and unbiased. Weighing arguments by both positions and recognizing the grounding of our research, we judged deviation scores appropriate for use.
These were calculated based on the research by Sanders, Dorenbosch and Reuver (2008), in which line managers and employees were asked to indicate, on a six-point scale, their level of agreement with the content of 17 sentences linked to five HRM practices/policies: i) extensive training; ii) internal mobility; iii) participation; iv) pay performance; and v) employee security. Sample items include "I am often asked to participate in decisions".
Upon data collection, the database was organized in three steps. In the first step, data were arranged by organization (40 organizations): for each organization, the researchers calculated the mean of line managers' answers to the 17 items; in the second step, the average value for all managers within each organization was calculated. Steps 1 and 2 resulted in 40 different values, which denoted the mean average of manager's perceptions as far as the aggregate 17 items were concerned.
In the third step, these values were used to calculate the perceptual difference between line managers and employees: for each organization, the researchers calculated the difference between each employee's aggregated value regarding the 17 items, and the correspondent managers' aggregated value. This resulted in a new variable: "employee-manager perceptual differences regarding HR policies".
Employees' work engagement
The second construct is employees' work engagement, measured using Schaufeli and Bakker's (2003) scale. The employees' work engagement scale consists of nine items, and employees were asked to indicate, on a six-point scale, their level of agreement with three constituting aspects of WE: i) vigor; ii) dedication; and iii) absorption. Each participant indicated the extent to which he/ she agreed with the statements, such as "I really "throw" myself into my job". An aggregated measure of WE was used in the hypotheses testing.
Managers' perception of performance
Finally, the six dimensions of managers' perception of performance (customer satisfaction; growth; market share; product/service to market; customer retention; new customer attraction) were measured using indexes previously offered by other researchers, such as Tzafrir (2005), and Dany, Guedri, and Hatt (2008). Such indexes require respondents to indicate the extent to which they perceive organizational performance in comparison to competitors. These data were collected regarding line managers' perception only and are labelled "managers' perception of performance". Table 1 shows the constructs and their operational definition. Table 2 shows the values of Cronbach's alpha for all variables. Cronbach's alpha is a measure of internal consistency, i.e., how closely related a set of items are as a group. The alpha coefficients for the three items are higher than 0.88, suggesting that the items have high internal consistency.
Sample
The data come from a survey of employees and line managers from 32 organizations in Portugal, four organizations in Norway and four organizations in Denmark, from several industry sectors, ranging from energy and water to transport, communication, and finance and business. The justification for targeting employees and line managers in these three countries is to embrace diversity in the service, production, and consumption services sectors. According to the country destination, the questionnaires were administered in two languages, i.e., Portuguese or English.
There were 1,855 sets of questionnaires distributed to line managers and employees, of which 264 are line managers and 1,345 are employees, i.e., total of 1,609 sets of questionnaires were returned, giving a response rate of 86.74%. However, after checking differences between employees' perceptions of HR policies and managers' perceptions of HR policies, and removing outliers, only 1,331 questionnaires were properly completed and accepted for the study. More than 50% of participants are between 25 and 40 years old, and the majority have an academic degree.
The data obtained were analyzed for reliability, validity, adequacy, and suitability in answering research questions. For this reason, the data are expected to enhance the reliability and validity of the study. Table 3 presents the descriptive statistics and correlations of the variables comprising the study. The analysis of the results brings out the perceptual differences regarding HR policies between employees and line managers (mean = 0.424). As expected, HR policies are related to WE (r = 0.369, p = 0.05) and OP (r = 0.269, p = 0.05).
Assumptions for a multiple regression
Statistical tests rely upon certain assumptions about the variables used in the analysis. Specifically, we will discuss the assumptions of normality, e.g., the Mahalanobis distance test and independence of sampling, e.g., the Durbin-Watson test for independence, linearity, homoscedasticity, and multicollinearity.
Absence of multivariate outliers is checked by assessing Mahalanobis distances among the participants. As the obtained result was 13.512, it means that the critical value of 13.816 is met, which indicates the normality of the data.
The second assumption is the Durbin-Watson test for independence. The Durbin-Watson test is a measure of autocorrelation (also called serial correlation) in residuals from regression analysis. To be considered uncorrelated, the required Durbin-Watson statistic should be between 1.5 and 2.5 (Dufour & Dagenais, 1985). The Durbin-Watson d = 1.675 is between the two critical values of 1.5 < d < 2.5, and, therefore, we can assume that there is no first order linear auto-correlation in the data.
The next assumption is linearity. A linearity test aims to determine if the relationship between independent variables and the dependent variable is linear. If there is good research in the regression model, then, there should be a linear relationship between the independent variable and dependent variable. The linearity assumption can be tested with scatter plots. The obtained scatter plot follows a linear pattern (i.e., not a curvilinear pattern), which shows that linearity assumption has been met.
The next assumption is homocesdasticity. The assumption of equal variances (i.e., assumption of homoscedasticity) assumes that different samples have the same variance, even if they came from different populations. Breusch-Pagan (LM = 3.773; Sig. = 0.152) and Koenker (LM = 4.181; Sig. = 0.124) test the null hypothesis that error variances are all equal versus the alternative that error variances are a multiplicative function of one or more variables. Therefore, we do not reject the null hypothesis and can assume that the error variances are all equal.
Finally, the last assumption is multicollinearity. Multicollinearity is a state of very high inter-correlations or inter-associations among independent variables. It is, therefore, a type of disturbance in the data, and if present in the data, the statistical inferences may not be reliable. Collinearity statistics output reveals tolerance higher than 0.1, e.g., 0.864 for HR policies and WE and a variance inflation factor lower than 10.00, e.g., 1.158 for HR policies and WE, meaning that we do not violate this assumption. Therefore, multicollinearity does not remain a dire problem in this study.
Hypotheses testing
A two-step regression analysis was performed for each dependent variable. The first regression analysis was carried out to determine the relationship between employee-manager perceptual differences regarding HR policies and managers' perception of performance, as provided in Table 3. The R-square value indicates that 7.2% of variance in managers' perception of performance can be explained by employee-manager perceptual differences regarding HR policies (R = 0.269; F = 103.883; p < 0.05), i.e., path c. The regression results in Table 3 also show a very similar relationship between path a and b, i.e., the R-square value indicates that 13.6% of variance in employees' work engagement can be explained by employee-manager perceptual differences regarding HR policies (R = 0.369; F = 209.512; p < 0.05) and 10.4% of variance in managers' perception of performance can be explained by employees' work engagement (R = 0.322; F = 154.237; p < 0.05), thus, supporting hypotheses H1, H2 and H3. Four conditions are required for the existence of an effect of mediation (Baron & Kenny, 1986), the first three conditions are reflected in Table 4. First, the independent variable, employee-manager perceptual differences regarding HR policies, and the managers' perception of performance, dependent variable, are correlated (0.0746, p < 0.05). Second, the independent variable, employee-manager perceptual differences regarding HR policies, and the employees' work engagement, mediator, are correlated (-0.1867, p < 0.05). Third, the employees' work engagement, mediator, and the managers' perception of performance, dependent variable, are correlated (0.176, p < 0.05). Lastly, the effect of the independent variable on the dependent variable should change when the mediating variable is introduced. The reported p-values (rounded to 8 decimal places) are drawn from the unit normal distribution under the assumption of a two-tailed z-test of the hypothesis that the mediated effect equals zero in the population. +/-1.96 are the critical values of the test ratio, which contain the central 95% of the unit normal distribution. According to the p-value in Table 5, all the three tests confirm that there is mediation, i.e., the coefficient is significant.
We used the macro process for SPSS, version 2.15, written by Andrew F. Hayes. The macro process applies a bootstrapping test, i.e., a non-parametric method based on resampling with a replacement, which, in this case, was done 5,000 times. From each of these samples, the indirect effect is calculated and a sampling distribution can be empirically generated. A confidence interval is calculated, and it is checked to determine if zero is in the interval. If zero is not in the interval, then, the researcher can be confident that the indirect effect is different from zero. Table 6 shows the last condition, i.e., the step iv) described in Figure 1, which is required for the existence of an effect of mediation (Baron & Kenny, 1986). The effect of the independent variable (e.g., employee-manager perceptual differences regarding HR policies) on the dependent variable (e.g., managers' perception of performance) should decrease for a partial mediation, or even approach zero for a total mediation, when the mediating variable (e.g., employees' work engagement) is introduced. The effect of employee-manager perceptual differences regarding HR policies on managers' perception of performance does not reduce the effect of the main effect, but rather increases it. Contrary to expectations, the effect of the independent variable on the dependent variable does not decrease for a partial mediation, or even approach zero for a total mediation. This means that the direct effect is subsumed by the mediation effect. Table 6 reveals what MacKinnon, Fairchild, and Fritz (2007) refer to as "inconsistent mediation". The direct effect of employee-manager perceptual differences regarding HR policies on managers' perception of performance is, thus, likely to be overestimated, because the indirect effect will tend to be equal to the sum of total effects. The total effect is equal to the sum of direct and indirect effects. This pattern of coefficients indicates the presence of inconsistent mediation (i.e., a suppres-sor effect). Suppression focuses on the adjustment of the relationship between the independent and dependent variables, but in an unusual manner, as the size of the effect increases when the suppressor variable is added. In the mediation framework, a suppressor model corresponds to an inconsistent mediation model, where the mediated and direct effect have opposite signs. In other words, it cannot be directly calculated, as shown in Table 6. Table 7 reveals the effects of the mediator in the research model, ignoring positive or negative relations. In conclusion, the outputs mean that the mediator, employees' work engagement, significantly explained that managers' perception of performance was determined by the predictor (employee-manager perceptual differences regarding HR policies) with the help of the mediator, i.e., employees' work engagement does mediate the relationship between employee-manager perceptual differences regarding HR policies and managers' perception of performance.
It was also found that perceptions of HR policies rated by line managers were positively related to employees' perceptions and significantly associated with each other. These results are likely related to leadership and further highlight the importance of developing good relationships among the staff, e.g., line managers and employees.
DISCUSSION AND CONCLUSION
This paper examined how employee-manager perceptual differences regarding HR policies, employees' work engagement and managers' perception of performance are related and inquired as to whether the relationship between employee-manager perceptual differences regarding HR policies and employees' work engagement affected managers' perception of performance. Finally, it analyzed the relationship between the match between employees and line managers' perceptions of HR policies. This study contributes to the unresolved "black box" mystery and fulfils the gap of employees' work engagement and its relationship to managers' perception of performance. This paper goes beyond the classic vision of the mediating role of employees' work engagement to investigate the relationship between employee-manager perceptual differences regarding HR policies and managers' perception of performance, by exploring employees' and line managers' perceptions of HR policies. . Additionally, the study revealed an optimistic and significant relationship between employees' and managers' perceptual differences regarding HR policies and employees' work engagement (i.e., path a), which is the same for employees' work engagement and managers' perception of performance (i.e., path b), thus, supporting part of the condition of mediation suggested by Barron and Kenny (1986). However, when the mediating variable is introduced, the effect of employee-manager perceptual differences regarding HR policies on managers' perception of performance was reduced, leading us to conclude that there is a mediation effect. In an inconsistent mediation, a suppression effect would be present when the direct and mediated effects of an independent variable on a dependent variable have opposite signs (Cliff & Earleywine, 1994;Tzelgov & Henik, 1991). In our model, we have an inconsistent mediation (Davis, 1985). The results of this study show that the effect of the employee-manager perceptual differences regarding HR policies on managers' perception of performance changes drastically. However, because of the inconsistent mediation phenomenon, i.e., suppression, it does not reduce the effect of the main effect, but increases it (see Table 6). There is an adjustment of the relationship between the independent and dependent variables, but in an unusual way, where the size of the effect increases when the suppressor variable is added, meaning that the direct effect is subsumed by the mediation effect.
When HR policies are designed by HR professionals, the goal is to stimulate employees' skills and capabilities by promoting right behaviors. Moreover, employees' work engagement does mediate the relationship between employee-manager perceptual differences regarding HR policies and manager's perception of performance. However, we cannot disregard that leadership by front line managers has a crucial role in applying employees' efforts and ability to elicit discretionary behavior. In sum, line managers shape the actual employee perceptions regarding HR policies and, moreover, they shape overall performance, and, hence, provide employees with support and resources to fully engage in their job and in the organization (Gruman & Saks, 2011). However, this does not necessarily mean achieving OP through employees' work engagement, as every so often, ambitious HR policies may result in long-term exhaustion and diminished interest in work. The assumption is that when HRM makes sense to employees, work-related attitudes and behaviors will turn out to be more effective (Sanders, Shipton, & Gomes, 2014). Additionally, studies point to the importance of matching employees' and line managers' perceptions of HR policies (Nishii, Lepak, & Schneider, 2008;Wright & Nishii, 2006), as this will allow organizations to achieve better managers' perception of performance.
Furthermore, the empirical work has demonstrated that employees' perceptions of HR policies significantly vary from managerial reports of the HR policies in use (Liao, Toya, Lepak, & Hong, 2009). Employees' perceptions of HR policies necessarily follow managers' HR policy implementation (Nishii & Wright, 2008). In this regard, results do not evidence different employees' and line managers' perceptions of HRM. This alignment of perceptions is most likely to occur at the beginning of the relationship, when line managers clarify and interpret HR policies, i.e., line manager's explanations are more likely to influence employees to count on such information and to construct expected HR policy reality. By concentrating employee's attentions on certain practices, line managers are structuring employees' attention (Salancik & Pfeffer, 1978). Therefore, further knowledge about which practices should be considered to enhance employees' and line managers' perceptions and how those practices are perceived is needed. Consequently, this research has both theoretical and practical reference value.
Implications for practice
Our results suggest that if line managers engage themselves in assuming their HRM role, they can be a powerful partner of the HR department in enhancing employees' work engagement, and this has an impact on performance. Additionally, as line manager's enactment of perceptions regarding HR policies and relations-oriented behavior turn out to have an influence on employees, HR departments should work together with line managers and provide enough support and advice to line managers in their leadership tasks. Training activities should also embrace leadership development program to develop the leadership skills of line managers. Additionally, researchers should pay more attention to contextual issues, e.g., the size of the organization may emphasize the leading role of line managers. Specifically, the configuration of these factors, and others, may be used as a framework to enrich future research.
Limitations of the study and future research
Although this research has made several contributions to the knowledge, it has several limitations, as follow: a) the study measures the variables at a single point of time, i.e., cross-sectional design. Therefore, changes in the relationship between line managers' and employees' perceptions were not covered in the study; b) it only includes individuals from Portugal, Norway and Denmark, thus, the generalizability of the results is restricted; and lastly c) this study has only identified perceptual differences regarding HR policies.
Future research could examine the conceptual model used in this study, but with a larger sample size, in order that the outcomes can be generalized to a larger population. For the purpose of association, it would be interesting to replicate this study in a longitudinal design, so that it could be determined whether the match employees' and line managers' perceptions in multiple variables, e.g., employees' and managers' perceptions of work engagement and performance, are conditions for shaping the actual form that transforms overall performance.
|
2019-01-02T05:01:08.097Z
|
2018-07-18T00:00:00.000
|
{
"year": 2018,
"sha1": "9ba71ce0d7c44705c1e88db2c14f6aea9eb3933f",
"oa_license": "CCBY",
"oa_url": "https://businessperspectives.org/images/pdf/applications/publishing/templates/article/assets/10643/PPM_2018_03_Pombo.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "156c11995d33e149319044596ace75b64e05bd6f",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
}
|
81750336
|
pes2o/s2orc
|
v3-fos-license
|
Nutritional potential of leaves and tubers of crem ( Tropaeolum
To determine the centesimal composition of minerals, fatty acids and vitamin C of leaves and content was observed, with 62.60g/100g of starch and 3.43g/100g of fiber, as well as high potassium (0.58g/100g), sulfur (447.14g/100), calcium (205.54g/100g) and phosphorus (530.07g/100g) levels. The vitamin C content of tubers was 78.43mg/100g and the linoleic acid content was 0.455g/100g. The intake of 100g of crem leaves may contribute with 65% of the recommended dietary intake of sulfur. The intake of 100g of crem tuber may contribute with 106% of the recommended dietary intake of sulfur and 21% of the recommended dietary intake of Vitamin C. The chemical composition of crem ( Tropaeolum pentaphyllum Lam.) tubers and leaves demonstrated an important contribution of nutrients, mainly sulfur, vitamin C and linoleic acid in its tubers, indicating a high nutritional potential of this species.
A B S T R A C T Objective
To determine the centesimal composition of minerals, fatty acids and vitamin C of leaves and tubers of crem, and to discuss the nutritional potential of the T. pentaphyllum species.
Methods
The centesimal composition of protein, lipid, fi ber, ash and carbohydrate was determined by gravimetric analysis. Mineral composition was determined by optical emission spectrometry. Vitamin C was determined by dinitrophenylhydrazine method. Fatty acids were determined by gas chromatography. The percentage of recommended dietary intake of leaves and tubers of crem was calculated for each nutrient.
Results
A high content of fi brous fraction (63.07g/100g), potassium (4.55g/100g), magnesium (553.64mg/100g) and sulfur (480.79mg/100g) was observed in the chemical composition of leaves. In tubers, a high carbohydrate content was observed, with 62.60g/100g of starch and 3.43g/100g of fiber, as well as high potassium (0.58g/100g), sulfur (447.14g/100), calcium (205.54g/100g) and phosphorus (530.07g/100g) levels. The vitamin C content of tubers was 78.43mg/100g and the linoleic acid content was 0.455g/100g. The intake of 100g of crem leaves may contribute with 65% of the recommended dietary intake of sulfur. The intake of 100g of crem tuber may contribute with 106% of the recommended dietary intake of sulfur and 21% of the recommended dietary intake of Vitamin C.
Conclusion
The chemical composition of crem (Tropaeolum pentaphyllum Lam.) tubers and leaves demonstrated an important contribution of nutrients, mainly sulfur, vitamin C and linoleic acid in its tubers, indicating a high nutritional potential of this species.
I N T R O D U C T I O N
Tropaeolum pentaphyllum Lam. is a native food species from Southern Brazil [1] belonging to the family Tropaeolaceae [2][3] and popularly known as batata-crem or crem [4]. Crem is consumed as traditional food by descendants of European immigrants [1]. Crem leaves can be used in salads, as the tubers can be processed and canned [4]. The nutritional potential of crem has not been not fully elucidated. There have been few recent pieces of information on the mineral composition of crem [4] and the presence of phenolic compounds in crem tubers [5], being necessary the development of studies involving the chemical composition of its leaves and tubers.
In popular therapies, the consumption of canned tubers is recommended as antiscorbutic [4]. Vitamin C is an antiscorbutic micronutrient and mediator of the interface between the genome and environment [6]. The antioxidant activity of vitamin C [7] has already been determined in Andean tubers (T. tuberosum) [8]. The intake of vegetables has been encouraged [9] in the modern diet to ensure the reference daily intake of 90mg/day of vitamin C for an adult man [10] and thus prevent various diseases related to oxidative stress [7].
The fatty acid composition of crem tubers (T. pentaphyllum) has not been sufficiently investigated, but palmitic acid and oleic acid have already been identified [11]. Fatty acids have nutritional and therapeutic benefits in the body [12], such as cholesterol reduction [13] and prevention of cardiovascular diseases [14][15][16]. Crem tubers have been used to prevent hypercholesterolemia [4] and as an antidiabetic [17]. In the cosmopolitan species (T. majus), linoleic acid has been identified [18], which helps to reduce the risk of cardiovascular disease [19] and may be related to the positive effect of crem use in cholesterol control [4].
Considering the few studies that have been conducted on crem from a chemical perspective, the objective was to determine the mineral and centesimal composition of crem leaves and to determine the mineral and centesimal composition, and the content of vitamin C and fatty acids of crem tubers. Another objective of this study was to discuss the nutritional potential of T. pentaphyllum species considering the chemical composition of crem leaves and tubers. The leaves and tubers collected were homogenized and constituted a sample of each, totalizing 983g of tubers and 375g of crem leaves, in natural matter. Later, the sample of crem tubers was divided into two subsamples: one for the analysis of mineral and centesimal composition of dried tubers (903g) and the other for the analysis of fatty acids and vitamin C of fresh tubers (80g). The leaf sample and one of the tuber subsamples were subjected to forced air oven, drying at 60°C for 48h, after which 253g of dried tubers and 56g of dried leaves were obtained. After dried, the samples were grinded in a 1mm sieve-cutting mill. All analyzes were performed in triplicate.
Centesimal composition analysis
In the dried and ground crem leaves and tubers (forced air drying at 60°C for 48 hours and grinding in a cutting mill), gravimetric moisture analyzes were conducted in an oven at 105ºC (method 930.15), proteins by the Kjeldahl method (method 984.13), lipids by soxhlet extraction with petroleum ether (method 920.39), fibers via sequential digestion in strong acids and bases (method 978. 10
Mineral composition analysis
The analyzes of minerals in dried and ground crem leafs and tubers were performed by Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES) adapted for plant material [21]. The wet digestion technique followed by determination by ICP-OES was used for the analysis of phosphorus, potassium, calcium, magnesium, sulfur, copper, zinc, manganese and iron. The dry digestion technique, followed by ICP-OES determination, was used for boron analysis. Data were expressed as g/100g in the dry matter.
Vitamin C determination analysis
The content of vitamin C was analyzed in fresh tubers by the dinitrophenylhydrazine method and determined by spectrophotometry at 540nm [22]. The result was expressed in mg of ascorbic acid per 100g of dried crem tubers.
Fatty acids analysis
The technique used for the extraction of fatty acids was static maceration [23] with hexane as solvent, in 2g fresh tubers. The fatty acid composition was determined by gas chromatography coupled to mass spectrometry (GC-MS). Fatty acids were transesterified by the alkaline method [23]. The GC-MS operating conditions were: programmed temperature of the column at 50°C for 5min; heating rate at 3°C per min up to 240°C/5min; injector temperature at 260°C; the drag gas used was helium, with a flow rate of 1.0mL per minute. Fatty acids were identified by comparing their retention times and mass spectra with reference substances (FAME-Mix:18919-1, Supelco, Bellefonte, Pennsylvania). The total yield of fatty acids was 0.0056g in a 2g sample. The total yield of fatty acids was 0. The percentage of RDA of each component was determined according to the equation, exemplified with the calculation for energy in the tuber: %RDA Energy =[(Energy of the Tuber*100)*RDA Energy -1], where the %RDA Energy is the percentage of the reference daily intake served by the consumption of 100g of tuber in natura per day; Tuber Energy is the analyzed value of Energy in Kcal/100g in natural matter; RDA Energy is the value of the reference daily intake for an adult man with 70kg of body mass [10] in kcal/day.
Statistical analysis
The results obtained from the centesimal composition of minerals, vitamin C and fatty acids were expressed in means and standard deviation through descriptive statistical analysis.
The results of the centesimal composition of mineral, vitamin C and fatty acids of crem leaves and tubers were expressed in g/100g in the dry matter in order to express the proportion of each nutrient in 100 grams of dried leaves and tubers and thus, compared to other leaves and tubers, that may exhibit variable moisture levels. The results of the centesimal composition of mineral and vitamin C, used to evaluate the nutritional potential of crem leaves and tubers, were expressed in g/100g of leaves and tubers in natural matter in order to express the composition of food the way it will be consumed, and thus relate to the reference daily intake of each nutrient.
R E S U L T S Chemical composition analysis
In the chemical composition of crem leaves, a high moisture content was observed, resulting in an average dry matter of 13.61g/100g (Table 1). Carbohydrates and fibers represented a high fibrous fraction content in the leaves (63.07g/100g). The leaves showed a high content of ash in relation to the mineral content, highlighting the K; Mg and S.
The chemical composition of crem tubers showed a high moisture content, with a mean dry matter weight of 25.66g/100g. In tubers, the carbohydrates were highlighted, presenting 62.60g/100g of starch and 3.42g/100g of fibers. The mineral composition of tubers accentuated K, S, Ca and P. The vitamin C content for crem tubers was high. In the composition of fatty acids, linoleic acid was the major component, while palmitic, oleic and gamma-linolenic acids were also detected.
Nutritional potential assessment
Leaves constituted a source of minerals, highlighting S with 65%, Ca and Mn with 43% of RDA each ( Table 2). The crem tubers stood out in S and vitamin C contributing with 106% and 21% of the RDA respectively.
D I S C U S S I O N
There are few studies on the chemical composition of crem [4,5,11]. The present work was based on researches made on crem (T. pentaphyllum) and also on species of the same genus, mashua (T. tuberosum) [8,20,21] and capuchin (T. majus) [18,23,24].
In the unpublished centesimal composition of crem leaves, a high ash content is related to the mineral composition of the leaves, rich in K, Mg and S. It was verified a different mineral composition of crem leaves collected in Porto Alegre [4], where K (3.1g/100g) and S (300mg/100g) levels were lower, and Mg (800mg/100g) levels were higher than in the present study.
In the chemical composition of crem tubers, carbohydrates with high starch content and low fiber content stood out. However, a lower amount of carbohydrates (3.44g/100g) and higher fiber content (16.79g/100g) were previously observed [5]. In the centesimal composition of mashua tubers (T. tuberosum) there is a lower content of carbohydrates (74.60g/100g) [8] and starch (41.35g/100g) [24] and higher fiber content (8.2/100g) [8], differing from crem tubers. In the mineral composition of crem tubers cultivated in Porto Alegre [4], the authors determined a content of K about 3 times higher than in the present study (1.5g/100g) and, in the same way, Ca was two times higher (400mg/100g). On the other hand, these authors verified a similar level of S (400mg/100g) in this study, while the determined level of P was 2 times lower than those analyzed in the present study (200mg/100g).
The differences in the chemical composition of crem leaves and tubers may be of genetic origin, as in mashua [25], or due to planting sites, with different edaphoclimatic conditions and cultivation systems.
The popular use of crem as an antiscorbutic [4] may be related to the antioxidant properties of vitamin C [7], which have already been detected in Andean tubers [8]. However, there was as yet no record of vitamin C content in crem. Despite the lack of data on vitamin C content in the subject species, it was possible to verify that vitamin C content of crem tubers was high when compared to Colocasia esculenta tubers, popularly known as taro (0.20mg/100g of fresh taro) [26]. However, the vitamin C content of capuchin (T. majus) both in fresh flowers (256mg/100g) [27] and in lyophilized leaves and flowers (10mg/g) [28] was higher than in this study. Consequently, the vitamin C content of crem tubers determined here was unprecedented and could be used for future comparisons.
In the composition of fatty acids, the major component was linoleic acid, followed by palmitic, oleic and gamma-linolenic acids. Palmitic and oleic acids have already been identified in crem tubers [11] and linoleic acid has been identified in capuchin leaves and flowers [18]. These fatty acids have health benefits [12][13][14], from cholesterol reduction [13] to prevention [16], protection [15] and risk reduction [19] of cardiovascular diseases. Cholesterol reduction is also associated to the intake of myristic and palmitic fatty acids [13].
The consumption of monounsaturated fatty acid, such as oleic acid, has a beneficial effect on lipid profile markers, besides helping to prevent cardiovascular diseases [16]. Polyunsaturated fatty acids, such as gammalinolenic acid, also act as protectors in the prevention of cardiovascular diseases [15]. The higher consumption of linoleic acid is strongly associated with a decreased risk of cardiovascular diseases [19]. Therefore, the fatty acid composition of crem tubers may be related to the therapeutic effects reported by the popular use of crem as hypocholesterolemic [4].
The nutritional potential of crem leaves was as a source of minerals, mainly S, Ca, Mn, all above 20% of the RDA. Despite this high contribution, there is no information on the bioavailability of these minerals, which may be related to oxalates and phytates [29], which means that these minerals cannot be used by the organism. Therefore, the high concentration of sulfur (65% of the RDA) may restrict the consumption of crem leaves, as this element adds more bitterness to the taste of the leaves [30].
Crem tubers were significantly higher in relation to the RDAs of sulfur and vitamin C, representing a high nutritional potential. The most prominent mineral element in crem, both in leaves and tubers, was sulfur. Sulfur may be related to glucosinolates [26] present in species of the genus Tropaeolum [18,31].
Glucotropaleolin was determined in capuchin leaves [32], one of the main glucosinolates found in the genus Tropaeolum [18]. Glucosinolates and sulfur have antifungal properties [11] that may be related to the therapeutic effects of crem [32].
Vitamin C contribution of crem tubers can reach 21% of the RDA for adult men. The popular recommendation of canned crem as antiscorbutic [4] seems consistent, since the intake of crem can contribute with a greater quantity of vitamin C in the body, besides other nutrients. Vitamin C also acts as a biological antioxidant [7], which justifies the incentive to consume vegetables in the modern diet [9], which includes crem tubers and leaves. In this context, the consumption of crem tubers may contribute to the prevention of scurvy [4] by supplying part of the reference daily intake of vitamin C and in the prevention of diseases related to oxidative stress [7].
The analysis of the chemical composition of crem leaves and tubers evidenced, in an unprecedented way, that both constitute high quality material for food use. In the leaf composition, a high fiber content was highlighted, and in crem tubers a high carbohydrate content stood out. Regarding mineral composition, the most present element was S, with the possibility of contributing with more than 60% of the reference daily intake established for an adult. In relation to tubers, the contents of vitamin C and linoleic acid in the composition of fatty acids were highlighted. Therefore, crem showed a high nutritional potential in its chemical composition.
C O N C L U S I O N
Crem leaves are composed mainly of fibers. Crem tubers are rich in carbohydrates, especially starch, and contain significant amounts of vitamin C and linoleic acid.
Both leaves and tubers of crem showed an important contribution of nutrients, mainly sulfur. Tubers serve more than 20% of the RDA in vitamin C, indicating the high nutritional potential of this species.
|
2019-03-18T14:06:43.136Z
|
2018-08-01T00:00:00.000
|
{
"year": 2018,
"sha1": "f0759ea97f30adc5d39d72273924cb43ee4bd0d4",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/j/rn/a/CFHCxdDKNfjGFjqmQTHQgLM/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "c4fa4bf7d78b81e1fc302abe9e94819098a79887",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
}
|
231606342
|
pes2o/s2orc
|
v3-fos-license
|
Regional Prevalence of Dyslipidemia, Healthcare Utilization, and Cardiovascular Disease Risk in South Korean: A Retrospective Cohort Study
Background: Health disparities between different populations have long been recognized as a problem, and they are still an unsolved public health issue. Many factors can make a difference, and disparities for cardiovascular diseases (CVDs) are especially pronounced. This study aimed to assess South Korean regional variations for dyslipidemia prevalence, differences in healthcare utilization, and CVD risk. Methods: We used data from 52,377 patients from the National Health Insurance Sampling. Outcome variables were the risk of CVD, healthcare utilization (outpatient visits), and healthcare expenditures. A generalized estimating equation model was used to identify associations between the region and CVD risk, a Poisson regression model was used for evaluating outpatient visits, and a generalized linear model (gamma and log link function) was used to evaluate healthcare expenditures. Results: A total of 12,443 (23.8%) patients were diagnosed with CVD. Dyslipidemia prevalence varied by region, and the most frequent dyslipidemia factor was high total cholesterol. CVD risk was increased in low population-density regions compared to high-density regions (odds ratio [OR]: 1.133, 95% confidence interval [CI]: 1.037–1.238). Healthcare expenditures and outpatient visits were also higher in low-density regions compared to high-density regions. Conclusions: This study provides a regional assessment of dyslipidemia prevalence, healthcare utilization, and CVD risk. To bridge differences across regions, consideration should be given not only to general socio-economic factors but also to specific regional factors that can affect these differences, and a region-based approach should be considered for reducing disparities in general health and healthcare quality.
Introduction
Cardiovascular disease (CVD) is the leading cause of death worldwide, and many factors have been identified as targets for reducing its prevalence. Most population CVD risk factors, such as high total cholesterol levels and hypertension, are modifiable and can be reduced by changes in behavior [1,2]. In other words, changes in healthcare and the adoption of healthy behaviors, including physical activity and dietary habits, in the general population mean that people can live healthier lives. However, some populations are more vulnerable than others due to health and healthcare disparities, and these health gaps between different populations have been recognized as an important issue to be addressed [3].
Health disparities between populations have long been recognized as a problem, and they are still an unsolved public health issue. Many factors, including race/ethnicity, urban versus rural location, and socio-economic status influence healthcare, especially CVD [4][5][6]. CVD prevalence varies geographically from region to region, and patients have poor outcomes in areas where disease management is difficult [7]. Access to healthcare is an important factor that contributes to the widening health gap between urban and rural areas, leading to Int. J. Environ. Res. Public Health 2021, 18, 538 2 of 11 further CVD risk-factor disparities [8]. Residents living in urban areas have more opportunities to visit healthcare services than in rural areas, resulting in lower mortality and morbidity and exacerbating health disparities between regions [9]. Several interventions have been applied to reduce this rural versus urban health gap, and some have had positive results in reducing blood pressure and blood cholesterol levels [10]. However, these efforts have not reduced these health disparities, and this problem will only become more severe, especially in patients with chronic diseases.
Dyslipidemia is one CVD risk factor, and its prevalence is increasing with lifestyle changes. In Korea, the prevalence of dyslipidemia was 16.58% in 2013, but only 24.14% of patients were aware of their condition, and the treatment rate was low [11,12]. In other words, dyslipidemia is considered less important than other chronic diseases such as hypertension and diabetes and therefore may affect a patient's disease management and outcome. Furthermore, this difference in emphasis can be different in rural versus urban areas, and if patients are not properly managed, it can lead to even wider health gaps.
In the Asian population, previous studies have shown that the prevalence of dyslipidemia is higher in urban than in rural areas [13,14], that obesity prevalence shows a similar pattern, and that CVD risk factors are different in rural compared to urban areas [15]. For stroke patients, there are large regional differences in healthcare quality; to reduce these, changes to better maintain continuity of care through physician allocation efficiency have been suggested [16,17]. In Korean high-risk groups for CVD, the need for effective strategies to better control low-density lipoprotein cholesterol (LDL-C) levels has also been identified [18]. Although many previous studies have evaluated dyslipidemia prevalence by region, none have examined regional dyslipidemia patient outcomes and healthcare utilization.
This study aimed to assess regional dyslipidemia prevalence, differences in healthcare utilization by dyslipidemia patients, and their CVD risk. For healthcare utilization, we assessed the number of outpatient visits and healthcare expenditures.
Database and Data Collection
This study used the National Health Insurance Sampling (NHIS) cohort data from 2007 to 2015. The baseline population, 1,025,340 participants who were randomly selected, represented 2.2% of the total eligible Korean population in 2002 [19]. These data included personal demographic information, medical treatment data, health examinations, and hospital characteristics. Health examinations occurred biennially or annually according to workplace rules; blue-collar workers had examinations annually. Medical data for all subjects were included as part of insurance-claim data and included diagnoses, comorbidities, medications, visit dates, and costs. In addition, we obtained regional population data from Statistics Korea based on the smallest administrative unit available (si-gun-gu) in Korea.
We defined newly diagnosed dyslipidemia based on International Classification of Disease (ICD)-10 codes (E78) and based on patients who were prescribed statin medications. A total of 171,750 patients were newly diagnosed with dyslipidemia from 2007 to 2014. Exclusion criteria were: patients diagnosed with dyslipidemia between 2007 to 2008; patients diagnosed either in long-term care facilities or in hospitals; patients under 20 years old; patients without health examinations; patients without serum cholesterol information; patients diagnosed with CVD before being diagnosed with dyslipidemia, and patients with incomplete demographic or health examination information. After these exclusions, 52,377 patients were included in the study.
Variables
Based on Statistics Korea population data from the original 257 si-gun-gu administrative regions, we created six population categories: <100,000; 100,000-200,000; 200,000-300,000; 300,000-400,000; 400,000-500,000, and >500,000. In general, population increased with more development or with proximity to a metropolitan area. The si-gun-gu popula-tions in Seoul, the capital of South Korea, varied from 200,000 to over 500,000, but those from Gangwon-do (a rural area) had less than 100,000 people.
Outcome variables included healthcare utilization and CVD risk for patients with dyslipidemia. Healthcare utilization included the average number of outpatient visits per year and total annual healthcare expenditures during the study period. Outpatient visits were counted based on the main diagnosis indicated by insurance-claim data (ICD-10 code E78), and healthcare expenditures included inpatient and outpatient care except for medical costs not covered by National Health Insurance. Only visits and costs for dyslipidemia treatment were included. CVD was an assessment based on ICD-10 codes and included IHD (I20-I25), HTN (I10-I15), and cerebrovascular disease (I60-I69). During the study periods, those diagnosed with the ICD-10 codes as the main diagnosis was considered to develop CVD. Data for serum cholesterol levels included total cholesterol (TC), triglyceride (TG), high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C) to evaluate regional dyslipidemia patient prevalence. Cholesterol levels were defined according to 2018 Korean Dyslipidemia Management guidelines [12] as follows: high TC, ≥240 mg/dL; high TG, ≥200 mg/dL; high LDL-C, ≥160 mg/dL; low HDL-C, <40 mg/dL. Based on these serum cholesterol levels, the distribution of abnormal serum cholesterol levels by region in Korea was evaluated.
Diabetes diagnosis was measured based on the ICD10 code (E10-E14). Medication data included whether or not the patient was prescribed a statin at diagnosis. To consider patients with a high risk of CVD among dyslipidemia patients, we scored the major risk factors for CVD and considered them as variables. The risk score was calculated based on age (male, ≥45 years; female, ≥55 years), positive family history of coronary artery disease, hypertension (systolic blood pressure [BP], ≥140 mmHg or diastolic BP, ≥90 mmHg), positive history of smoking, and low HDL-C (<40 mg/dL) [12]. Patients with high HDL-C (≥6 0mg/dL) are considered protective factors and are coded as -1. For each risk factor, patients were coded 1 or 0 (except high HDL-C), and the final scores were summed and categorized as 0, 1, 2, or ≥3. The data were adjusted for demographic characteristics by sex (male, female), age, income (low, low-moderate, moderate-high, high), insurance type (Medicaid, self-employed, employee), Body Mass Index (BMI), Charlson Comorbidity Index (CCI), and year (2009 to 2014).
Ethical Consideration
This study was approved by the Institutional Review Board, Eulji University (IRB number: EUIRB2019-13).
Patient and Public Involvement
Patients and or the public were not involved in this study. There are no plans to disseminate the research results to study participants.
Statistical Analysis
The distribution of each categorical variable was examined by an analysis of frequencies and percentages, and χ 2 tests were performed. T-tests were also performed for continuous variables to compare mean and standard deviation values. In the fully adjusted model, all variables were entered simultaneously. The generalized estimating equation (GEE) model was used to identify these variables and the incidence of CVD while controlling for potential confounding variables. Cox proportional hazard modeling was performed that included both patient characteristics and detailed CVD onset. The start date was defined as the date of initial diagnosis of dyslipidemia or prescribed medication, and the last date was the date of CVD diagnosis or the end of the study periods (31 December 2015) or death date. We used the Poisson regression model to evaluate associations between regions and the average number of outpatient visits. The gamma generalized linear model, using the log link function, was used to evaluate healthcare expenditure differences between regions. All statistical analyses were performed using SAS statistical software version 9.4 (SAS Institute, Cary, NC, USA). A p-value < 0.05 was considered statistically significant. Additionally, we used the Statistical Geographic Information Service by Statistics Korea to create regional distribution maps for dyslipidemia and the development of the cardiovascular disease.
Results
The general characteristics of the study population are shown in Table 1. A total of 52,377 patients were newly diagnosed with dyslipidemia, and 12,443 of these (23.8%) were diagnosed with CVD. Regions with the lowest populations (<100,000) had the highest CVD risk (n = 1635; 28.8%), and regions with more than 200,000 people had similar CVD risks (22.2-23.1%). Patients with diabetes (n = 3832; 30.9%) had a higher risk of CVD than patients without diabetes (n = 8611; 21.5%). Patients with high-risk scores also had the highest risk for CVD (scores = 0, 16.6%; scores = 1, 27.9%; scores = 2, 38.4%; scores >3, 53.7%). Average outpatient visits (mean ± SD) were higher for patients who had CVD (4.18 ± 3.08) than for non-CVD patients (2.93 ± 2.58, p < 0.0001). Similar to the results for outpatient visits, healthcare costs were also higher for CVD patients (KRW 83,887 ± 133,275) than for non-CVD patients without (KRW 64,262 ± 153,921, p < 0.0001). Figure 1 shows the regional population classifications, the number and distribution of dyslipidemia patients per 100,000 population by region, and the distribution of CVD in dyslipidemia patients. In general, the population was concentrated in the capital and metropolitan areas, and dyslipidemia was more prevalent in rural areas where the population density was lowest. The incidence of CVD in dyslipidemia patients was higher in regions with the lowest population densities. Figure 1 shows the regional population classifications, the number and distribution of dyslipidemia patients per 100,000 population by region, and the distribution of CVD in dyslipidemia patients. In general, the population was concentrated in the capital and metropolitan areas, and dyslipidemia was more prevalent in rural areas where the population density was lowest. The incidence of CVD in dyslipidemia patients was higher in regions with the lowest population densities. Figure 2 shows the distribution of abnormal serum cholesterol levels by region. In Korea, dyslipidemia patients showed a higher proportion (31.3%) of abnormalities in TC levels, and a lower proportion (12.5%) of abnormalities for low HDL-C levels. In regions of low population density, the proportion of abnormal TG levels was highest, but the proportion of high LDL-C levels was lowest. In regions with populations of more than 200,000, high TC had a similar distribution, and the proportion of low HDL-C was lower in regions with populations over 400,000.
6
of low population density, the proportion of abnormal TG levels was highest, but the proportion of high LDL-C levels was lowest. In regions with populations of more than 200,000, high TC had a similar distribution, and the proportion of low HDL-C was lower in regions with populations over 400,000. Table 2 shows the association between the region and the risk of CVD. CVD risk increased in low-density regions compared to high-density regions, but only regions with populations less than 100,000 were statistically significant (odds ratio [OR], 1.147; 95% confidence interval [CI], 1.051-1.252). CVD risk was higher for patients with diabetes than for patients without diabetes (OR, 1.070; 95% CI, 1.017-1.125). Higher risk scores were significantly associated with an increase in CVD (score 1: OR: 1.460, 95% CI: 1.387-1.536; score 2: OR: 2.035, 95% CI: 1.898-2.182; score ≥ 3: OR: 3.591, 95% CI: 3.010-4.283). Of the CVD types, the risk of IHD was higher in regions with population less than 300,000 compare to those in high-density region [<100,000: hazard ratio (HR): 1.137 95% CI: 1.146-1.561; <300,000: HR: 1.263, 95% CI: 1.100-1.450). Similar results were observed for cerebrovascular disease, with increased risk in small population densities. Table 2 shows the association between the region and the risk of CVD. CVD risk increased in low-density regions compared to high-density regions, but only regions with populations less than 100,000 were statistically significant (odds ratio [OR], 1.147; 95% confidence interval [CI], 1.051-1.252). CVD risk was higher for patients with diabetes than for patients without diabetes (OR, 1.070; 95% CI, 1.017-1.125). Higher risk scores were significantly associated with an increase in CVD (score 1: OR: 1.460, 95% CI: 1.387-1.536; score 2: OR: 2.035, 95% CI: 1.898-2.182; score ≥ 3: OR: 3.591, 95% CI: 3.010-4.283). Of the CVD types, the risk of IHD was higher in regions with population less than 300,000 compare to those in high-density region (<100,000: hazard ratio (HR): 1.137 95% CI: 1.146-1.561; <300,000: HR: 1.263, 95% CI: 1.100-1.450). Similar results were observed for cerebrovascular disease, with increased risk in small population densities. Table 3 shows the results of the GEE model for the association between region and healthcare expenditure and outpatient visiting. Healthcare expenditures were higher in low-density areas compared to the high-density areas, but this difference was only significant in areas with less than 100,000 people (Rate Ratio [RR], 1.072; 95% CI, 1.017-1.130). Outpatient visits were also higher in low-density regions compared to high-density regions. Healthcare expenditures (RR, 1.461; 95% CI, 1.409-1.515) and outpatient visits (RR, 1.620; 95% CI, 1.592-1.646) were higher for patients with diabetes than for patients without diabetes.
Discussion
Health disparities within a population are major concerns in many countries, and efforts have been made to bridge these disparity gaps for those affected. Such disparities also exist in Korea, and health inequalities are increasing not only by socio-economic status but also by region [20,21]. This study aimed to evaluate regional disparities in the dyslipidemia prevalence, health utilization, and the risk of CVD.
In general, the prevalence of dyslipidemia was higher in low population-density regions compared to those with high population densities. The most frequent findings for dyslipidemia were high TC, LDL-C, and TG. These results are similar to those of previous Korean studies, and confirm the differences seen between Korea and other countries that have high TG and low HDL levels [22][23][24]. Dyslipidemia distributions varied within si-gun-gu areas of the same district. Possible explanations for differences in serum cholesterol distributions may be related to regional variations in socio-economic status, dietary habits, physical activity [6,15], and differences in the quality of available healthcare.
We also found that the risk of CVD was highest in regions with low population densities, especially in those with populations under 100,000, compared to regions with high population densities. These results may also be related to differences in the quality of healthcare between regions. Access to care is one of the most important factors for preventing disease and having better patient care outcomes [25]. In general, most of the large hospitals in Korea are concentrated in the capital area, rather than in less-populated rural areas, so high-quality healthcare services are only available in regions with high population densities. Therefore, there may be quality gaps between regions, and compared to patients living in urban areas, rural dyslipidemia patients may receive relatively lower quality healthcare and have worse outcomes [3,26]. In these vulnerable areas, the role of primary care providers will be important. In clinical practice, primary healthcare providers should educate patients with dyslipidemia to take their medications regularly to prevent the risk of CVD. In particular, patients with diabetes or at high risk of CVD will need early intensive intervention or management, such as regular exercise and change diet habits, to reduce the risk of CVD, and the role of the primary healthcare provider will be important. Finally, dyslipidemia patients with low socioeconomic status are at high risk for CVD, and these patients should be properly managed through social support along with regular health examination for their health status.
Healthcare utilization, assessed by healthcare expenditures and outpatient visits, was higher in low population-density regions than in those with higher population densities. Patients in rural areas may visit more hospitals than urban patients, leading to increased healthcare expenditures, and despite more visits to healthcare providers, patients from lowdensity areas did not have better results than those from high-density areas. These results provide evidence that a regional approach is needed for reducing gaps in both healthcare and patient health.
Recently, the Korean government introduced a pilot program for community-based healthcare that provides comprehensive care at the community level for patients with chronic diseases. This approach underlines the importance of healthcare at the community level, and regional differences should also be considered for successful policymaking. This study provides evidence that there are regional differences in the quality of healthcare as well as the prevalence of the chronic disease. To reduce these differences in quality and disease burden, a region-based approach should be considered, especially for quality improvement in low-density areas. More research is needed to clarify regional differences in population health and healthcare quality.
This study has several strengths. First, we used data from a large representative cohort sample, so the results should be considered meaningful for policymakers. Second, although there have been many previous reports on regional disparities in healthcare, no research exists for regional disparities in healthcare utilization and patient outcomes. This study provides evidence for regional healthcare disparities and highlights the importance of a regional approach to reduce quality gaps between regions. Third, the results suggest that Asian patients with dyslipidemia differ from those in Western countries, a significant finding in any health-gap study.
Despite these strengths, this study does have some limitations. First, patient factors that were not considered in our studies, such as physical activity behavior, level of educa-tion, dietary habits, and occupation, may have influenced CVD risk. Second, we did not consider physician-related factors that might affect patient outcomes. Patient outcomes can vary depending on a healthcare provider's ability to manage chronic diseases. Third, regions were classified only according to the population. It is possible that within this classification, other factors besides population density could have explained the results depending on the region (e.g., Seoul, as the capital city). However, we do not consider this potential bias to be large because we used a nested model that accounted for district regions to reduce inter-regional variation. Finally, it is possible that other, unmeasured factors may have affected these quality gaps between regions, and further research is needed to take these factors into account.
Conclusions
This study examined regional variations in dyslipidemia prevalence, healthcare utilization, and the risk of CVD. Regional prevalence variations occurred according to population density, with low-density regions having a higher risk for CVD, more visits to healthcare providers, and more healthcare spending. To bridge these regional health gaps, consideration should be given not only to general socio-economic factors, but also to specific regional factors, and a region-based approach should be adopted. Finally, healthcare providers should be considered early intensive intervention or management to reduce the risk of CVD in patients living in a vulnerable region.
Author Contributions: K.-T.H. had an interpretation of data, acquisition of data, and drafting the article. S.K. had the conception and design of the study, analysis, and interpretation of data, and drafting the article. All authors have read and agreed to the published version of the manuscript.
Funding: This research was supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT, and Future Planning (NRF-2019R1G1A1006476). However, the funding sources did not have interventions such as study design and data interpretation.
Institutional Review Board Statement:
The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of Eulji University (IRB number: EUIRB2019-13).
Informed Consent Statement: Not applicable.
Data Availability Statement: The authors have no authority over the data, and the data is provided upon request to the National Health Insurance Services.
|
2021-01-15T06:16:23.816Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "8d20144c4781cab7f4cf7df6f6b50c4b0ff70ecc",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc7827736?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "59f381599bd027b59330af0ed164b2827cbd0ec3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
251852095
|
pes2o/s2orc
|
v3-fos-license
|
Design and On-Field Validation of an Embedded System for Monitoring Second-Life Electric Vehicle Lithium-Ion Batteries
In the last few years, the growing demand for electric vehicles (EVs) in the transportation sector has contributed to the increased use of electric rechargeable batteries. At present, lithium-ion (Li-ion) batteries are the most commonly used in electric vehicles. Although once their storage capacity has dropped to below 80–70% it is no longer possible to use these batteries in EVs, it is feasible to use them in second-life applications as stationary energy storage systems. The purpose of this study is to present an embedded system that allows a Nissan® LEAF Li-ion battery to communicate with an Ingecon® Sun Storage 1Play inverter, for control and monitoring purposes. The prototype was developed using an Arduino® microcontroller and a graphical user interface (GUI) on LabVIEW®. The experimental tests have allowed us to determine the feasibility of using Li-ion battery packs (BPs) coming from the automotive sector with an inverter with no need for a prior disassembly and rebuilding process. Furthermore, this research presents a programming and hardware methodology for the development of the embedded systems focused on second-life electric vehicle Li-ion batteries. One second-life battery pack coming from a Nissan® Leaf and aged under real driving conditions was integrated into a residential microgrid serving as an energy storage system (ESS).
Introduction
Today, fossil fuel-based power sources are contributing to increased environmental pollution and causing damage to human health and ecosystems. The need for transportation in cities is progressively rising due to the growing population and to industrial activities, leading to increased greenhouse gas emissions. Vehicle traffic accounts for 74% of total transport energy consumption [1]. Renewable sources based on wind and solar energy [2,3] stand as an alternative to produce clean power, while electric vehicles powered by green energy offer an opportunity for sustainable growth [4,5]. The EVs electrochemical energy storage systems are mainly based on Li-ion batteries. Thanks to their high energy and power density compared to other technologies, these devices are preferred by automotive manufacturers. Unfortunately, the life cycle decreases as a result of the continuous charge/discharge processes, reaching an end-of-life state when its storage capacity drops to around 70-80% [6], requiring replacement with a new one. The retired Li-ion batteries are then destined to be either disposed of, dismantled for recovery of their mineral constituents, or repurposed in a second-life application [7]. To help reduce the investment costs and environmental pollution, the ideal choice is to repurpose the Li-ion batteries for a second life in less demanding stationary applications that do not require performance at full energy storage capacity [8], such as residential, industrial, and renewable systems [9]. Studies we propose an experimental setup based on a hardware architecture capable to acquire these battery data charge through CAN communication with the BMS and a GUI that visualizes them. H. Wang et al. designed an interface based on the Texas Instruments ® MSP430 microcontroller acquiring VCT variables via CAN without the implementation of a GUI as HMI [25]. In addition to this present study, the state-of-the-art literature shows studies based on the Li-ion battery implementation from a Nissan Leaf ® automobile applied as a stationary ESS, presented by M.T. Smith et al. and E. Braco et al. [10,26,27]. There are even reports that expose the Nissan Leal ® Li-ion batteries second-life applications as ESSs on business models by companies such as 4R Energy ® in Yokohama, which offers commercial energy storage services [28], and FreeWire ® in California, which provides a mobile and portable charging station to EVs [29].
Considering the set-up of a stationary ESS based on a Li-ion battery and a PV inverter, we propose the design and development of a practical plug-and-play technology that is programmable and able to perform at any time in order to verify the current data, ready to act automatically in the presence of faults, and incorporates an HMI. With this aim in mind, we designed an interface based on an embedded system. The embedded systems are computer architectures based on microcontrollers/microprocessors energized by an internal or external power source, able to control actuators, acquire data from sensors through electric circuits, and integrate communication peripherals into the study process and into the GUI as an HMI. We therefore propose an embedded system to control and monitor the Li-ion battery operating parameters (VCT and SOC) and tasks (charge and discharge) when working with an INGECON Sun Storage 1Play 3TL/6TL inverter to manage an energy storage system for a residential microgrid. The proposed system comprises an Arduino MEGA ® ATmega2560 microcontroller and a GUI based on LabVIEW ® . Our system has presented has been implemented with low-cost devices. The study is structured as follows. Section 2 explains the theoretical principles of Li-ion BPs, inverters, and embedded systems. Section 3 presents the materials and methods. Section 4 shows the experimental design of the software and hardware embedded in the system. Section 5 describes the experimental results obtained when the system developed is tested in a micro-grid to characterize its stability. Finally, the main conclusions are discussed in Section 6.
EV Li-Ion Batteries Second-Life Applications
Nowadays, Li-ion batteries are used in a range of applications such as those for aerospace, spaceflights, drones, automotive applications, and grid storage [22]. The battery lifetime is reduced by constant and prolonged use, heavy use, or harsh temperature conditions [24]. Charge and discharge operations lead to the degradation of the energy storage capacity of these batteries. In the automotive industry, a drop of more than 20% in the nominal capacity leads to the non-optimal performance of the EV. Considering the environmental and economic impacts, it is recommendable to re-use Li-ion batteries in ESS in residential, industrial, or renewable applications. Before repurposing a Li-ion battery as a stationary ESS, it is first necessary to verify its remaining capacity and internal health. A technical diagnosis also needs to be made [10,30]. At present, these batteries are extremely popular due to their long life cycle, high voltage performance, and low self-discharge. For this reason, they are well suited to ESS applications. The current Li-ion batteries must be controlled and supervised correctly with a device to ensure that each cell works within the VCT operating ranges, and the device that performs these operations is the BMS [31].
Battery Management System: BMS
The BMS is the central unit inside the Li-ion BP, responsible for acquiring the VCT data and controlling the actuators. Its principal task is to ensure that the battery is operating within safe limits and that it achieves optimal performance over its useful life under various operating and environmental conditions. Specifically, it needs to keep the temperature stable and the electrical operating variables within the predetermined limits to ensure that the system delivers the energy demanded at any time and to guarantee the health of the Li-ion battery [31,32]. According to Y. Xing et al., a BMS for an EV comprises the following stages: battery management (user interface, electrical control, thermal management, and communication), battery state (state determination and safety protection), and battery monitoring (VCT data acquisition) [16], see Figure 1. A BMS comprises sensors, actuators, controllers, and communication interfaces.
Battery Management System: BMS
The BMS is the central unit inside the Li-ion BP, responsible for acquiring the VCT data and controlling the actuators. Its principal task is to ensure that the battery is operating within safe limits and that it achieves optimal performance over its useful life under various operating and environmental conditions. Specifically, it needs to keep the temperature stable and the electrical operating variables within the predetermined limits to ensure that the system delivers the energy demanded at any time and to guarantee the health of the Li-ion battery [31,32]. According to Y. Xing et al., a BMS for an EV comprises the following stages: battery management (user interface, electrical control, thermal management, and communication), battery state (state determination and safety protection), and battery monitoring (VCT data acquisition) [16], see Figure 1. A BMS comprises sensors, actuators, controllers, and communication interfaces. Using the CAN communication protocol, the BMS communicates with the vehicle management system (VMS), which controls all the vehicle parameters and processes. Each car manufacturer defines certain messages to exchange communication between the BMS and the VMS through this protocol [23]. Due to the distinct design, it is mandatory to design and implement an interface between the Li-ion battery and PV inverter that interprets the messages from the BMS and establishes two-way communication. Furthermore, it must perform as an HMI via programming sequences and a graphical user interface. This can be achieved by an embedded system.
Embedded Systems
Embedded systems (ESs) are an efficient technology strategy to perform dedicated functions supported by a computer architecture based on two general sections: hardware and software, permitting data acquisition and processing. The microprocessor (μP)/microcontroller (μC) is the core data processing unit, responsible for taking the control decisions to manage one or several processes. It is the main ES hardware element. The hardware comprises a μP/μC interconnection with electric circuits (drive circuitry and signal conditioning), actuators, sensors, and communication peripherals; i.e., the physical compo- Using the CAN communication protocol, the BMS communicates with the vehicle management system (VMS), which controls all the vehicle parameters and processes. Each car manufacturer defines certain messages to exchange communication between the BMS and the VMS through this protocol [23]. Due to the distinct design, it is mandatory to design and implement an interface between the Li-ion battery and PV inverter that interprets the messages from the BMS and establishes two-way communication. Furthermore, it must perform as an HMI via programming sequences and a graphical user interface. This can be achieved by an embedded system.
Embedded Systems
Embedded systems (ESs) are an efficient technology strategy to perform dedicated functions supported by a computer architecture based on two general sections: hardware and software, permitting data acquisition and processing. The microprocessor (µP)/microcontroller (µC) is the core data processing unit, responsible for taking the control decisions to manage one or several processes. It is the main ES hardware element. The hardware comprises a µP/µC interconnection with electric circuits (drive circuitry and signal conditioning), actuators, sensors, and communication peripherals; i.e., the physical components. In the µP/µC the programmer writes the machine language coding through integrated development environment (IDE) software. The software is a set of instructions (called codes) written for the µP/µC to perform tasks called programs. The codes are based on programming sentences to achieve the management of its own hardware, which is able to control the study processes via communication interfaces (wired or wireless). These codes and data from the study processes are stored and entered, respectively, in the µP/µC memory [33]. As long as the programming IDE software impacts the programming directed at operator/administrator monitoring, it is possible to use a different software to develop the GUI for user operation [16]. The general architecture of an ES is illustrated in Figure 2. To summarize, P. Zhan et al. defines an ES as "a computer based on hardware and software, which both, are physically embedded within a large industrial process system. Its mission is to ensure the communication and control over the study processes components, in order to achieve the overall system management" [33]. Therefore, as indicated in the explanation above, a BMS is an ES directed at controlling the EV energy.
to control the study processes via communication interfaces (wired or wireless). These codes and data from the study processes are stored and entered, respectively, in the μP/μC memory [33]. As long as the programming IDE software impacts the programming directed at operator/administrator monitoring, it is possible to use a different software to develop the GUI for user operation [16]. The general architecture of an ES is illustrated in Figure 2. To summarize, P. Zhan et al. defines an ES as "a computer based on hardware and software, which both, are physically embedded within a large industrial process system. Its mission is to ensure the communication and control over the study processes components, in order to achieve the overall system management" [33]. Therefore, as indicated in the explanation above, a BMS is an ES directed at controlling the EV energy.
Materials and Methods
Regarding the electric vehicle Li-ion battery pack characterization and its second-life application to stationary energy storage, in this section we explain the materials and methods required to design our ES prototype. This device is capable of establishing communication between the Nissan ® EV First Generation Pack and the Ingeteam ® Ingecon Storage 1 Play inverter. The BP comes from an EV in which the driving conditions and the current state of the battery are unknown. Therefore, a characterization test was performed before its utilization. The measured parameters are current capacity, energy, energy efficiency, and coulombic efficiency. Moreover, the SOH is calculated as the ratio between the current
Materials and Methods
Regarding the electric vehicle Li-ion battery pack characterization and its secondlife application to stationary energy storage, in this section we explain the materials and methods required to design our ES prototype. This device is capable of establishing communication between the Nissan ® EV First Generation Pack and the Ingeteam ® Ingecon Storage 1 Play inverter. The BP comes from an EV in which the driving conditions and the current state of the battery are unknown. Therefore, a characterization test was performed before its utilization. The measured parameters are current capacity, energy, energy efficiency, and coulombic efficiency. Moreover, the SOH is calculated as the ratio between the current capacity and the nominal capacity. With that aim, the battery was charged and discharged three times. The charging sequence consists of a constant current phase at 22 A (C/3) followed by a constant voltage phase until a cut-off current of 2.64 A (C/25) was reached. The discharge protocol consists of a constant current phase at 22 A between the maximum and minimum voltage limits. The characterization test was conducted at a room temperature of 22 ± 3 • C. Table 1 compiles the main parameters defined by the manufacturer and the mean value of the parameters measured on the test. Figure 3 shows the Arduino Mega ® ATmega2560 as the core system which, through communication peripherals, interprets the messages sent by the BMS to be transmitted to the inverter and monitored by the GUI. Therefore, the ES hardware acts as a liaison device (LD).
Embedded System Design for Monitoring Second-Life EV Li-ion Batteries
In this section, we explain the ES design and development for monitoring second-life EV Li-ion batteries integrated into a stationary ESS. This is carried out by a software-and hardware-based methodology defined to perform the experimental tests and the complete system implementation in a residential microgrid in order to achieve its characterization. In the interest of establishing communication between the ES via the Arduino ® Mega ATmega 2560 μC to control and monitor the battery operation towards the overall energy storage with an Ingecon ® Sun Storage inverter, it is necessary to manage the charge and discharge processes. The power of the inverter depends on the power balance of the microgrid, the state of the battery, and the strategy, as later discussed in Section 5. Therefore, active communication between the BMS, the ES, and the inverter is required. The inverter establishes a communication channel via the LD and its internal commands with the BP. Once this has been achieved, the GUI shows the operator or administrator the system performance.
Communication Processes between the BMS and Li-ion Battery
Firstly, one of the requirements to address when the BP is removed from the EV and The LabVIEW ® GUI determines the actions to be executed over the virtual power controls and interprets the messages received via the indicators. Communication between the LD, the battery, and the inverter is two-way via CAN protocol. The communication via CAN protocol was implemented using the CAN-BUS Shield v1.2, which is based on the CAN MCP251 and MCP2551 controllers, and an SPI interface. It is important to consider power actuators to achieve the µC control over the battery as these are necessary to activate the BP internal relays. An Arduino ® 4-channel relay module was employed to do so. The data transmission between the Arduino ® and the LabVIEW ® GUI is made through a serial port.
Embedded System Design for Monitoring Second-Life EV Li-Ion Batteries
In this section, we explain the ES design and development for monitoring second-life EV Li-ion batteries integrated into a stationary ESS. This is carried out by a software-and hardware-based methodology defined to perform the experimental tests and the complete system implementation in a residential microgrid in order to achieve its characterization. In the interest of establishing communication between the ES via the Arduino ® Mega ATmega 2560 µC to control and monitor the battery operation towards the overall energy storage with an Ingecon ® Sun Storage inverter, it is necessary to manage the charge and discharge processes. The power of the inverter depends on the power balance of the microgrid, the state of the battery, and the strategy, as later discussed in Section 5. Therefore, active communication between the BMS, the ES, and the inverter is required. The inverter establishes a communication channel via the LD and its internal commands with the BP. Once this has been achieved, the GUI shows the operator or administrator the system performance.
Communication Processes between the BMS and Li-Ion Battery
Firstly, one of the requirements to address when the BP is removed from the EV and reused is to establish the communication with the BMS by the LD. To realize this task is mandatory to know that each manufacturer establishes its own internal communication messages and processes. The Nissan ® EV first generation BP includes three processes: activation, charge/discharge, and deactivation. The µC interprets these processes as a translator and sends them to the inverter. Thanks to this approach, the Li-ion battery retired from an EV can be connected to a stationary inverter, thereby allowing its second use in a less demanding application in terms of power and energy density. Table 2 describes the steps required to address the three processes. If the BMS cannot be turned on, the process ends. iv.
If the BMS can be turned on, the high voltage output is then available. v.
If the BMS sends an alert message, this must be interpreted to guarantee the safety of the device. vi.
Initialize the charge/discharge processes. vii. The process ends.
During the process to turn on the battery, the embedded system is able to control the BMS and to comply with the current process. The battery is ready to perform the charge/discharge process.
2
Charge/Discharge i. Connect the battery and inverter high voltage outputs. ii.
Initialize communication between the battery and inverter. iii.
Obtain the battery operating parameters (VCT, SOC, and messages). iv.
Send these parameters to the inverter. v.
Read the inverter current state. vi.
Repeat steps iii, iv, and v while the BMS is working or until it is deactivated.
In this process, the battery is available to be charged/discharged. The BMS periodically sends the operating parameters to the embedded system and to the inverter.
3 Deactivation i. The process initializes. ii.
The high voltage output is deactivated. iii.
The BMS power supply is deactivated.
The BMS outputs high voltage and the power supply is deactivated.
Second-Life EV Li-Ion Battery Work Sequence
This subsection begins by considering that, the battery originally is not designed to communicate directly with the inverter. Therefore, we propose a work sequence capable of managing and establishing the communication between the Li-ion battery and the inverter through the plug-and-play LD for a second-life stationary ESS application described by the flow diagram depicted in Figure 4. The work sequence proposed is based on the battery operational processes described previously. The first task is to verify the battery activation process by the operator through the GUI. When this occurs, it is possible to initiate the communication between the battery with the inverter through the. Once the communication is established, the inverter recognizes the battery operating parameters such as VCT, SOC, and messages. The second task to attend is the battery charge/discharge process, which depends on the operating parameters, and the operator decisions. The SOC value shows the usable charge capacity of the battery as a percentage. Afterward, the operators can execute the battery deactivation process whenever they want, disabling the battery, inverter, and ES power supplies. It is worth mentioning that the GUI notifies the operators and automatically deactivates the ESS if any hazardous electrical or thermal battery situations occur during charging or discharging processes.
Logical Design of the ES Operation
This section describes the embedded system's architecture that constitutes the second-life ESS. In Figure 5, this is explained by a flowchart denoting the activation, charge/discharge and deactivation processes, and the involved parts tasks such as the operator (blue), GUI (green), LD (yellow), battery BMS (gray), and inverter (orange) and their interactions. The operator through the GUI is capable of visualizing and controlling the processes, knowing in real-time the battery operational variables (VCT, SOC, and messages). The execution starts with the activation process when the operator makes the request via the GUI. The LD programming is based on the sequential structure allowing to execute a series of instructions in order. Firstly, the LD initializes the communication with the BMS and the inverter, awaiting their activation confirmation. After that, when is confirmed the activation of both devices, the next stage can be performed. The charge/discharge process is available when the battery sends the operational parameters and alerts to the μC, acting as an information transmitter and emitter for the inverter. The GUI informs the operator in real-time about the current state of operational parameters. Based on that information, the operator decides which task to execute, either charge or discharge, if and only if no alerts are coming from the BMS. In real-time, the operator is informed by the GUI about the operational parameters' current state and decides which task to realize charge or discharge, considering that there are not any alerts present from the BMS. These alerts, shown to the operator through the GUI, can be caused by low or high voltage/current flow readings or the presence of overheating. The operator receives realtime information about the current state and based on those can decide whether to start. The current process follows a repetitive cycle until the operator makes a new demand. Finally, the deactivation process occurs when the operator sends the request via the GUI, leading to the battery high-voltage output being turned off.
Logical Design of the ES Operation
This section describes the embedded system's architecture that constitutes the secondlife ESS. In Figure 5, this is explained by a flowchart denoting the activation, charge/discharge and deactivation processes, and the involved parts tasks such as the operator (blue), GUI (green), LD (yellow), battery BMS (gray), and inverter (orange) and their interactions. The operator through the GUI is capable of visualizing and controlling the processes, knowing in real-time the battery operational variables (VCT, SOC, and messages). The execution starts with the activation process when the operator makes the request via the GUI. The LD programming is based on the sequential structure allowing to execute a series of instructions in order. Firstly, the LD initializes the communication with the BMS and the inverter, awaiting their activation confirmation. After that, when is confirmed the activation of both devices, the next stage can be performed. The charge/discharge process is available when the battery sends the operational parameters and alerts to the µC, acting as an information transmitter and emitter for the inverter. The GUI informs the operator in real-time about the current state of operational parameters. Based on that information, the operator decides which task to execute, either charge or discharge, if and only if no alerts are coming from the BMS. In real-time, the operator is informed by the GUI about the operational parameters' current state and decides which task to realize charge or discharge, considering that there are not any alerts present from the BMS. These alerts, shown to the operator through the GUI, can be caused by low or high voltage/current flow readings or the presence of overheating. The operator receives real-time information about the current state and based on those can decide whether to start. The current process follows a repetitive cycle until the operator makes a new demand. Finally, the deactivation process occurs when the operator sends the request via the GUI, leading to the battery high-voltage output being turned off.
Liaison Plug-and-Play Device
This section describes the plug-and-play LD, which allows acquiring data directly from the BMS without needing external devices or sensors. This device has been designed based on the manufacturer's technical specifications. Figure 6 shows the LD hardware architecture. Figure 7 depicts the developed interface, indicating its components.
Liaison Plug-and-Play Device
This section describes the plug-and-play LD, which allows acquiring data directly from the BMS without needing external devices or sensors. This device has been designed based on the manufacturer's technical specifications. Figure 6 shows the LD hardware architecture. Figure 7 The electrical socket provides power to the internal power supply (i.e., RT-65D AC-DC 24V), which transforms the outlet voltage level to one compatible with the liaison interface (e.g., 5V). The communication between the μC, the BMS, and the inverter is made with two connectors: RJ-45 and 8-pin, respectively. Furthermore, the interface includes a start/stop button and 5 LEDs to indicate the current status, namely, normal operation (green) or a parameter that is out of limit (green), fault (red), warning (yellow), or beginning protocol (yellow). The Arduino ® Mega ATmega2560 achieves communication through two CAN-BUS Shields v.1.2. The LD communication task starts when the BMS sends the operating data such as VCT, SOC, and messages to the μC. Then, this information is processed by the Arduino to be sent to the inverter based on the CAN-BUS protocol established by each manufacturer. It is important to mention that both devices, the inverter and BMS, have distinct baud rates, so the μC needs to act as a buffer. To control The electrical socket provides power to the internal power supply (i.e., RT-65D AC-DC 24V), which transforms the outlet voltage level to one compatible with the liaison interface (e.g., 5V). The communication between the μC, the BMS, and the inverter is made with two connectors: RJ-45 and 8-pin, respectively. Furthermore, the interface includes a start/stop button and 5 LEDs to indicate the current status, namely, normal operation (green) or a parameter that is out of limit (green), fault (red), warning (yellow), or beginning protocol (yellow). The Arduino ® Mega ATmega2560 achieves communication through two CAN-BUS Shields v.1.2. The LD communication task starts when the BMS sends the operating data such as VCT, SOC, and messages to the μC. Then, this information is processed by the Arduino to be sent to the inverter based on the CAN-BUS protocol established by each manufacturer. It is important to mention that both devices, the inverter and BMS, have distinct baud rates, so the μC needs to act as a buffer. To control The electrical socket provides power to the internal power supply (i.e., RT-65D AC-DC 24 V), which transforms the outlet voltage level to one compatible with the liaison interface (e.g., 5 V). The communication between the µC, the BMS, and the inverter is made with two connectors: RJ-45 and 8-pin, respectively. Furthermore, the interface includes a start/stop button and 5 LEDs to indicate the current status, namely, normal operation (green) or a parameter that is out of limit (green), fault (red), warning (yellow), or beginning protocol (yellow). The Arduino ® Mega ATmega2560 achieves communication through two CAN-BUS Shields v.1.2. The LD communication task starts when the BMS sends the operating data such as VCT, SOC, and messages to the µC. Then, this information is processed by the Arduino to be sent to the inverter based on the CAN-BUS protocol established by each manufacturer. It is important to mention that both devices, the inverter and BMS, have distinct baud rates, so the µC needs to act as a buffer. To control the activation and deactivation processes, an Arduino 4 Relay Board is included to allow the LD control over the BMS. Furthermore, one of the relays is applied to control an external high voltage (HV) contactor (Omron G9EC1, 400 V, and 200 A DC). This HV contactor is able to interrupt the current flow if an alarm or error message occurs. For safety reasons, this protection system is based on two physically decoupled circuits. One is formed by the relay and controlled by Arduino through a power supply that does not exceed 24 V (located in the BP), and the other is formed by the HV contactor and controlled by the protective relay. To avoid noise coupling and false triggers, the HV contactor is located outside the BP. It is important to mention that the electrical sensing is not made by additional sensors or another experimental setup, and the electrical measurements are obtained via the BMS data. This is mentioned due to the fact that other studies, such as [12][13][14][15][16], have required external devices to obtain the electrical variables.
Graphical User Interface-GUI
This section describes the GUI and its functionalities. The GUI is able to receive the data sent by the hardware interface and the battery through a LabVIEW ® virtual instrument. The user can control and monitor the activation, deactivation and battery charge/discharge processes through the GUI. The two-way communication from the hardware interface to the GUI relies on a USB cable. Figure 8 shows the GUI. Controls and indicators are grouped according to the message received. For reasons of confidentiality, it is not possible for us to mention the message names. For practical purposes these are named sections A, B, C, D, and E. The GUI is divided into the following sections: 1.
The serial port control allows the user to select the communication port through which the µC is connected to the PC, while the stop button reestablishes the connection and stops data acquisition and display.
2.
The interface control section allows the user to switch the hardware-interface ON/OFF. This section also shows the BMS and inverter connection status and if they are currently communicating with the µC.
3.
The serial port data section shows the messages received and its sole function is to display current activity.
4.
Message section A displays real-time battery data such as voltage and current flow, relay cut requests, main relay on, full charge, interlock, discharge power status.
5.
Message section B monitors real-time battery data such as remaining capacity, new full capacity, remaining capacity segment, remaining capacity segment switch, SOC, average temperature, output power limit reason, and remaining charge time.
6.
Message section C displays real-time battery data such as switch flag, high/low voltage times, temperature, wakeup phase, integrated current, cell voltage, state of health, and DTC, which is a variable with battery diagnosis information.
7.
Message section D displays real-time battery data such as SOC, IR sensor wave voltage, ALU answer (a diagnosis register for the CAN communication), IR sensor Malf (an alarm triggered if the insulation resistance sensor is malfunctioning), capacity empty, and refuse to sleep.
8.
Message section E monitors the real-time battery charge/discharge process data such as discharge power limit, charge power limit, charge power status, maximum power charge, and battery pack maximum UPRATE. 9.
The flags section uses virtual LED indicators to monitor the charge and discharge status such as overcharge, high voltage, high current, stop requests, over discharge, low voltage, and high current as well as the general battery status, as follows: high temperature, insulation resistance, CAN communication error, and unavailable values. Finally, the flags section mentions the current status of the relays.
are activated, and the system is disactivated. When the voltage is between 215V and 274V, this represents a warning state and the low voltage and stop request alert indicators are activated. When at the recommended voltage operating range from 275V to 400V, there are no alerts or errors. When the voltage is over 400V to 407V the high voltage and stop request alert indicators are activated. Finally, when the voltage is higher than 408V, the overcharge and stop request are activated, so the system is disactivated. Figure 8 depicts the GUI for real-time monitoring and control of BMS inverters, focusing on a battery discharge test considering a current-flow constant. The serial port control section visualizes the USB acquisition data by the PC Port COM4. The interface control section ensures that the BMS and inverter are connected to the LD; i.e., both devices are activated. The serial port data section shows the messages received by the BMS interpreted by the following sections. The message section A displays the operational battery voltage 371 V meaning that, this variable is inside from the recommended voltage operating range. The discharge battery current flow is -20 A. Besides, the message section D visualizes the SOC at 96%. Therefore, there are no alerts or fails in the flags section. The failsafe status and discharge power status equal the relay cut request, and the main RLY manifests the BMS inverter GUI's correct performance.
Results
In this section, we present the real-time experimental tests performed on our ES applied to manage the energy storage for a residential microgrid. These tests were conducted at the Laboratory for Energy Storage and Microgrids of the Public University of Navarre. The ES was tested to establish the communication between the Nissan ® EV First Generation Pack and the Ingeteam ® Ingecon Storage 1 Play inverter. To assess the robustness and reliability of the device described above, the battery and the ES were integrated into a real microgrid, and the implementation is shown in Figure 9. A collective self-consumption scenario was emulated in the microgrid of the Public University of Navarre in northern It is important to deploy the flags section in the GUI as this allows the operator to monitor the battery voltage range during the charge and discharge processes. When the battery voltage is below 124 V the low voltage, over-discharge, and stop request indicators are activated, and the system is disactivated. When the voltage is between 215 V and 274 V, this represents a warning state and the low voltage and stop request alert indicators are activated. When at the recommended voltage operating range from 275 V to 400 V, there are no alerts or errors. When the voltage is over 400 V to 407 V the high voltage and stop request alert indicators are activated. Finally, when the voltage is higher than 408 V, the overcharge and stop request are activated, so the system is disactivated. Figure 8 depicts the GUI for real-time monitoring and control of BMS inverters, focusing on a battery discharge test considering a current-flow constant. The serial port control section visualizes the USB acquisition data by the PC Port COM4. The interface control section ensures that the BMS and inverter are connected to the LD; i.e., both devices are activated. The serial port data section shows the messages received by the BMS interpreted by the following sections. The message section A displays the operational battery voltage 371 V meaning that, this variable is inside from the recommended voltage operating range. The discharge battery current flow is −20 A. Besides, the message section D visualizes the SOC at 96%. Therefore, there are no alerts or fails in the flags section. The failsafe status and discharge power status equal the relay cut request, and the main RLY manifests the BMS inverter GUI's correct performance.
Results
In this section, we present the real-time experimental tests performed on our ES applied to manage the energy storage for a residential microgrid. These tests were conducted at the Laboratory for Energy Storage and Microgrids of the Public University of Navarre. The ES was tested to establish the communication between the Nissan ® EV First Generation Pack and the Ingeteam ® Ingecon Storage 1 Play inverter. To assess the robustness and reliability of the device described above, the battery and the ES were integrated into a real microgrid, and the implementation is shown in Figure 9. A collective self-consumption scenario was emulated in the microgrid of the Public University of Navarre in northern Spain. To do so, the power consumption of four houses located in the vicinity of the university was monitored. Apart from the inverter, the microgrid includes a photovoltaic array of 11.5 kWp, and a power management system (PMS). Spain. To do so, the power consumption of four houses located in the vicinity of the university was monitored. Apart from the inverter, the microgrid includes a photovoltaic array of 11.5 kWp, and a power management system (PMS). The PMS is responsible for monitoring all the variables and setting the inverter power setpoint, which depends on the strategy implemented and the variables measured. In addition to the variables monitored by the microgrid, a Yokogawa WT-1800 power analyzer was included for the accurate measurement of the battery current and voltage. In the case of the results presented in this paper, the strategy aimed to maximize self-consumption. The energy is banked in the battery when the PV generation exceeds the consumption. Afterward, the battery is discharged when the PV generation does not meet the consumption. More information concerning the microgrid, boundary conditions, and the strategy can be found in a previous publication [34]. As an example to demonstrate the correct performance of the ES constituted by its hardware and software, Figure 10a shows the energy balance of the microgrid over 5 days, with a 5min resolution. The yellow area corresponds to the energy consumed directly from the PV generation, whereas the orange area is the one used to charge the battery. The green area represents the energy delivered by the battery. Finally, the grey area is the energy consumed from the public grid. The energy consumption of the houses is represented by a solid black line. The integration of the ESS into the microgrid made it possible, on the one hand, to reduce the peak power absorbed from the grid and, on the other hand, to reduce the energy consumption from the grid, maximizing self-consumption. Table 3 compares the most relevant factors for a self-consumption installation: (1) peak power required from the grid, (2) energy consumed from the grid, and (3) self-consumption ratio. From Table 3, some of the ad- The PMS is responsible for monitoring all the variables and setting the inverter power setpoint, which depends on the strategy implemented and the variables measured. In addition to the variables monitored by the microgrid, a Yokogawa WT-1800 power analyzer was included for the accurate measurement of the battery current and voltage. In the case of the results presented in this paper, the strategy aimed to maximize self-consumption. The energy is banked in the battery when the PV generation exceeds the consumption. Afterward, the battery is discharged when the PV generation does not meet the consumption. More information concerning the microgrid, boundary conditions, and the strategy can be found in a previous publication [34]. As an example to demonstrate the correct performance of the ES constituted by its hardware and software, Figure 10a shows the energy balance of the microgrid over 5 days, with a 5 min resolution. The yellow area corresponds to the energy consumed directly from the PV generation, whereas the orange area is the one used to charge the battery. The green area represents the energy delivered by the battery. Finally, the grey area is the energy consumed from the public grid. The energy consumption of the houses is represented by a solid black line. The integration of the ESS into the microgrid made it possible, on the one hand, to reduce the peak power absorbed from the grid and, on the other hand, to reduce the energy consumption from the grid, maximizing self-consumption. Table 3 compares the most relevant factors for a self-consumption installation: (1) peak power required from the grid, (2) energy consumed from the grid, and (3) self-consumption ratio. From Table 3, some of the advantages of including an ESS into a microgrid can be drawn. However, it is not the objective of this article to conduct a techno-economical analysis of the benefits of integrating a BP but to prove the feasibility of the developed ES. Figure 10b shows the Li-ion battery current and voltage throughout the 5 days of testing. A current positive sign implies battery charging, whereas a negative sign is set for discharge. The ES operated as expected for several weeks, demonstrating its suitability for stationary applications. The on-field validation was intentionally interrupted since a new strategy was uploaded into the PMS and the system was rebooted.
vantages of including an ESS into a microgrid can be drawn. However, it is not the objective of this article to conduct a techno-economical analysis of the benefits of integrating a BP but to prove the feasibility of the developed ES. Figure 10b shows the Li-ion battery current and voltage throughout the 5 days of testing. A current positive sign implies battery charging, whereas a negative sign is set for discharge. The ES operated as expected for several weeks, demonstrating its suitability for stationary applications. The on-field validation was intentionally interrupted since a new strategy was uploaded into the PMS and the system was rebooted. The data acquired and transferred by the ES concerning battery SOC, voltage, and current is used by the energy management algorithm to make the real-time calculation of the required battery power. Figure 10c depicts the SOC estimated and sent by the ES during the 5 days of operation. The graph shows how the battery performs approximately one cycle of charge and discharge per day. The good performance of the ES in terms of data reliability and transmission speed is demonstrated by the good performance of the microgrid. Moreover, the variables regarding alarms and warnings are required by the inverter to allow the safe operation of the system. The lack of undesired power limitations or interruptions demonstrate the good performance of the designed ES in the management of alarms and warning events. Finally, the designed GUI that facilitates the interaction between humans, and the battery makes it easy to turn the system on and off; there is also a screen that provides the user with the relevant information about battery status, with variables such as voltage and SOC, as well as warning and alarm events that may be The data acquired and transferred by the ES concerning battery SOC, voltage, and current is used by the energy management algorithm to make the real-time calculation of the required battery power. Figure 10c depicts the SOC estimated and sent by the ES during the 5 days of operation. The graph shows how the battery performs approximately one cycle of charge and discharge per day. The good performance of the ES in terms of data reliability and transmission speed is demonstrated by the good performance of the microgrid. Moreover, the variables regarding alarms and warnings are required by the inverter to allow the safe operation of the system. The lack of undesired power limitations or interruptions demonstrate the good performance of the designed ES in the management of alarms and warning events. Finally, the designed GUI that facilitates the interaction between humans, and the battery makes it easy to turn the system on and off; there is also a screen that provides the user with the relevant information about battery status, with variables such as voltage and SOC, as well as warning and alarm events that may be activated due to a malfunctioning of the system. Not only was the performance of the ES suitable for a single second-life battery, but it also operated correctly with an additional first and second generation battery coming from different EVs. Thus, the system presented in this paper is valid for different batteries coming from the same manufacturer and could be easily replicated by other research centers or companies. It should be noted that all the components used are standard commercial products.
Conclusions
The paper presents an ES architecture that allows continuous data monitoring and control of a repurposed BP and an inverter. To establish the activation process, a number of tests were performed to demonstrate this, starting at the time of the communication between the µC and the BMS to identify the operating parameters, messages, flags, and alarms. Therefore, the aforementioned comments and results demonstrate that our ES is able to ensure battery control and monitoring. This makes it possible to obtain the information regarding its own characterization and, in this respect, to determine whether these BPs can be repurposed without disassembly and rebuilding for use with the INGECON SUN ® Storage 1Play inverter in stationary energy storage applications. This reduces ESS investment costs. It has therefore been corroborated that our proposed programming methodology is useful for the development of interfaces focused on second-life ESS management based on Li-ion batteries. For the implementation of our proposal in a second-life self-consumption installation, this study focused its efforts on its improvement as a plug-and-play system based on two aspects. Firstly, the proposed system is able to provide the operators with information on the state of safety, usage, performance, and battery longevity. Secondly, the interface makes it possible to implement a SOC estimator for second-life batteries, described in A. Soto et al. [34]., which is a significant issue for the energy management of a second-life battery.
Finally, it is relevant to mention that our ES proposal can be applied to any other EV Li-ion battery different from the Nissan Leaf ® BP to a second-life implementation as stationary ESS, such as Ford Focus ® , Mini E ® , Mitsubishi i-Miev ® , Smart ED ® , Tesla Roadster ® , Renault Zoe ® , BYD F3DM ® , Opel Ampera ® , and Toyota Prius ® [35]. This is possible due our ES communication protocol is the same used by all the Li-ion battery automotive manufacturers: CAN-BUS. Additionally, to realize this, it is necessary to define in software the message libraries that are established by each BP manufacturer because these messages are not the same from another BP model.
Acknowledgments:
The European Commission is committed to supporting initiatives that represent progress in sustainability in general terms. Within the context of the STARDUST project of Smart Cities and Communities Lighthouse, technical solutions and innovative business models are being tested and validated to promote the sustainable development of cities. The results presented in this work are directed in this respect.
Conflicts of Interest:
The authors declare no conflict of interest.
|
2022-08-27T15:13:19.802Z
|
2022-08-24T00:00:00.000
|
{
"year": 2022,
"sha1": "298a3604bac8a7114e470003a54e2f289efd3754",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/22/17/6376/pdf?version=1661349508",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "33e533e5f19054744e5f442651e947384ec415bd",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
259268826
|
pes2o/s2orc
|
v3-fos-license
|
Clinical Impact and Safety of Non-Target Punctures (NTP) during Portal Vein Access in TIPS Procedure
Background: Although non-target puncture (NPT)-related complications are well known to clinicians performing TIPS, there is no NTP-focused study to assess the true clinical sequalae of NTP-related complications. In this study, the aim was to evaluate the incidence, safety, clinical outcomes and complications related to NTPs during the portal access of TIPS procedures. Methods: A retrospective review of 369 TIPS procedures from October 2007 to September 2019 was performed. We identified inadvertent NTPs, including biliary, hepatic artery, lymphatic and capsular punctures. Next, the medical records and images were reviewed and analyzed to assess the safety and clinical outcomes of these cohorts. Results: A total of 71 NTPs were identified in 56 patients (15.18% of 369 patients). Of 369 TIPS patients, there were (1) 28 biliary punctures (7.6%), (2) 16 extracapsular punctures (4.3%), (3) 15 lymphatic punctures (4.1%) and (4) 12 hepatic artery punctures (3.3%). The overall complication rate was 2.2% (8/369). Based on the Clavien–Dindo classification, three patients (0.8%) had a minor complication. In addition, five patients (1.4%) experienced grade II–V major complications, such as symptomatic hemoperitoneum, arterio-biliary fistula or hemorrhagic shock leading to death. Mortality (0.5%) was only caused by extracapsular puncture combined with other NTP. Conclusions: NTPs during the portal access of TIPS procedures are associated with low complication risk. However, when extracapsular punctures are combined with other NTPs, a more severe complication, including mortality, can occur. Nevertheless, all patients with NTP should be closely monitored at a higher level of care after TIPS placement.
Introduction
A transjugular intrahepatic portosystemic shunt (TIPS) is a widely accepted procedure to treat the complications of portal hypertension. It relieves portal hypertension by creating a shunt between portal and systemic circulations. Two most common indications are for the secondary treatment of variceal bleeding and refractory ascites [1]. Although TIPS is considered minimally invasive and shows high clinical efficacy, it may cause complications, such as intraperitoneal bleeding, hemobilia, TIPS dysfunction, hepatic encephalopathy and liver failure. Some of these complications can be resolved and improved with technical advancements, such as use of ePTFE stent graft to improve TIPS patency. However, some complications continue to occur (e.g., non-target puncture (NTP) Biomedicines 2023, 11, 1630 2 of 9 during portal access, causing intraperitoneal bleeding) and are reported persistently despite these technical improvements [1][2][3][4].
NTP is considered one of the most serious complications of the TIPS procedure, occurring in 0.6-4.3% of the patients undergoing TIPS [3,5]. It is caused by inadvertent needle puncture through non-target structures, such as the hepatic artery, bile duct, intrahepatic lymphatic or liver capsule. Due to the technical challenges of performing intrahepatic access of the portal vein, many image-guided techniques were previously proposed to help identify the portal vein, such as direct portography [6,7], wedge hepatic venography, balloon occlusion venography [8][9][10] or imaging guidance [11][12][13][14][15]. However, these imageguided techniques caused additional complications [16][17][18][19][20]. Furthermore, many of the techniques did not improve clinical outcomes of patients undergoing the TIPS procedure or reduce complications from NTP [11,21,22].
Although NTP-related complications are well known to clinicians performing TIPS, there is no NTP-focused study that assesses the true clinical sequalae of NTP-related complications. Therefore, in this study, we aimed to evaluate the incidence, safety and clinical outcomes of complications caused by NTPs during TIPS procedures.
Materials and Methods
After obtaining approval from our Institutional Review Board (IRB) (IRB# 10-000464), 369 angiograms and medical records were retrospectively reviewed, and clinical data were collected from patients who underwent TIPS from December 2007 to September 2019 at a single institution. TIPS procedures were performed by interventional radiologists with varying degrees of experience, ranging from 1 year to 35 years.
TIPS Procedure
The standard TIPS technique was used [23,24]. In brief, with the right internal jugular venous access, the Rosch-Uchida transjugular access set (Cook Medical, Bloomington, IN, USA) was placed into the hepatic vein. After the catheter was placed into a desirable location in the hepatic vein, the needle was advanced intra-hepatically through parenchyma into the portal vein. This step was where the NTP of other structures could have occurred. Once the needle access of the portal vein was confirmed, the portal pressure and systemic pressure in the right atrium were measured to calculate the portosystemic pressure gradient (PSPG). A porto-venogram was performed to evaluate the anatomy and flow dynamics. Next, the portal vein was accessed further with a sheath, and the Viatorr endoprosthesis stent-graft (Gore Medical Inc., Flagstaff, AZ, USA) was placed through the sheath, connecting the portal vein to the HV/IVC confluent. An appropriately sized balloon (6-10 mm) was then used to dilate the stent-graft. Through the post-stented and ballooned TIPS, post-TIPS PSPG was measured, and a final porto-venogram was obtained. Variceal embolization was performed in selected cases.
Data Collection and Statistical Analysis
Each angiographic image of the TIPS procedures was collected and evaluated for inadvertent non-target puncture of other structures. Demographic data, MELD scores, Child-Pugh scores, times from procedure to discharge, readmission rates, liver transplant rates, outcome parameters, relevant clinical data, and pre-and post-imaging analyses were collected. Continuous data were summarized as means with standard deviation or median with inter-quartile ranges, depending on the distribution of the data. Qualitative variables, including gender, TIPS indication, emergent case, hemodynamic success, readmission rate and Child-Pugh score, were shown as raw numbers and percentages. Quantitative variables, including age, MELD score, creatinine, INR, total bilirubin, sodium, albumin and fluoroscopic times and length of hospitalization, are reported as means with standard deviation (SD). The Student's t-test was used to compare the differences between continuous variables, and either Pearson's chi-square test or Fisher's exact test was used to compare categorical variables between the groups. P values < 0.05 were regarded as statistically significant. All statistical analyses were performed using SPSS software version 22.0 (IBM, Chicago, IL, USA).
Results
All of the 369 angiographic TIPS images were retrospectively reviewed for inadvertent NTP of vital structures, including bile ducts, hepatic artery, intrahepatic lymphatic and capsular punctures ( Figure 1). All NTP data were compared to the entire TIPS cohort (Table 1). A total of 71 non-target punctures (with 11 combinations of different NTPs, Figure 2) were identified among 56 patients (15.2% of all patients). Among 56 patients, 39 were male and 17 were female. Of 56 patients, 51 (91.2%) achieved hemodynamic success. A total of 13 patients (23.2%) underwent emergent TIPS. The number of each type of NTPs ( Table 2) were 28 biliary punctures (7.6%), 16 extra-capsular punctures (4.3%), 15 lymphatic punctures (4.1%) and 12 hepatic artery punctures (3.3%). The mean (SD) length of stay of the patients was 10.2 ± 16.8 days, which was not statistically different to non-NTP group (12.34 ± 42.5, p = 0.498). The readmission rate within one month of discharge was 48.2%, and the liver transplant rate was 21.4%. The average Pre-TIPS MELD-Na score was 17.6 ± 9.1. Child-Pugh scores were A (4, 7.1%), B (28, 50.0%) and C (21, 37.5%). Three of the patients were not able to be scored due to missing data. Notably, in comparison, one-month readmission rates for NTP patients were statistically higher than for the entire TIPS cohorts, i.e., 48.2% vs. 18.2% (p < 0.0001). Otherwise, all clinical parameters were not significantly different. Table 3) [25]. Grade I complications were found in three cases, namely one case of focal segmental biliary ductal dilatation, one case of intra-operative mild hypotension and one case of transient elevation of AST/ALT. These three cases did not require any additional treatment. Grade II complications were found in two patients who had cases of hemoperitoneum that required blood transfusion. No Grade III complications were noted. A Grade IVB complication was found in one patient who experienced hemorrhagic shock following TIPS procedure, which led to multi-organ failure and death within 2 months of TIPS insertion. Grade V complications were found in two cases. One case was caused by intraperitoneal bleeding and hypovolemic shock, which led to multi-organ failure and death within a week. In the other case, the patient suffered from bleeding and hypovolemic shock immediately after TIPS placement. The patient was immediately brought back to IR, and the hepatic angiogram was performed, which demonstrated an arterio-biliary fistula that was successfully embolized. However, the patient did not recover from multiorgan failure and died about 1 month later. Complications were observed in eight patients (2.2% of all TIPS patients, or 14.3% of NTP patients), all of whom were graded according to the Clavien-Dindo classification (Table 3) [25]. Grade I complications were found in three cases, namely one case of focal segmental biliary ductal dilatation, one case of intra-operative mild hypotension and one case of transient elevation of AST/ALT. These three cases did not require any additional treatment. Grade II complications were found in two patients who had cases of hemoperitoneum that required blood transfusion. No Grade III complications were noted. A Grade IVB complication was found in one patient who experienced hemorrhagic shock following TIPS procedure, which led to multi-organ failure and death within 2 months of TIPS insertion. Grade V complications were found in two cases. One case was caused by intraperitoneal bleeding and hypovolemic shock, which led to multi-organ failure and death within a week. In the other case, the patient suffered from bleeding and hypovolemic shock immediately after TIPS placement. The patient was immediately brought back to IR, and the hepatic angiogram was performed, which demonstrated an arterio-biliary fistula that was successfully embolized. However, the patient did not recover from multi-organ failure and died about 1 month later.
Discussion
Multiple studies showed that non-target puncture (NTP) is difficult to avoid in the TIPS procedure, as the artery, lymphatics and bile ducts are in close proximity to the portal vein [26][27][28][29][30]. Uflacker et al. published a pathological study demonstrating a co-existence of the portal vein, bile ducts or hepatic artery along the path of portal puncture, extending from RHV to portal vein bifurcation. Uflacker noted that 96% of hepatic arteries and bile ducts from segments VII and VIII are above portal vein bifurcation when the right portal vein is targeted as an access [31]. A small cirrhotic liver causes these structures to be cramped in a confined space and forced into closer proximity. Therefore, TIPS in a small liver may cause more NTPs, as well as biliary and vascular injury, compared to TIPS in a liver of normal size. Moreover, in a microscopic view of the portal triads, the portal veins were surrounded by arterial branches, bile ducts and lymphatics. Therefore, NTP is inevitable and common during TIPS procedures. In addition, biliary and hepatic arterial anatomic variants are very common, occurring in up to 45% and 42% of patients, respectively [32]. These anatomic variants can also contribute to NTP during TIPS. Pathophysiologically, a portal vein puncture could be more challenging in a cirrhotic liver due to portal vein thrombosis or narrowing of the portal system, which may also increase the chance of NTP.
Several studies reported NTP-induced complications during TIPS procedures and identified capsular puncture as a cause of morbidity and mortality from intraperitoneal bleeding [5,17,33,34]. Freedman et al. found capsular puncture in 30% of all TIPS cases, though only one case was found to cause a serious hemoperitoneum. Moreover, Loffroy et al. found that capsular puncture may occur in up to 33% of cases, and 1-2% of those transcapsular punctures result in intraperitoneal hemorrhage. In addition, Haskal et al. reported a case of hepatic artery injury from a TIPS procedure [35]. Although arterial injury can lead to a lethal complication, they concluded that arterial injury is uncommon. Several studies reported fistulous connections between the bile ducts and hepatic artery, as well as between the portal vein and hepatic vein. For example, Willner et al. reported a case in which a patient had recurrent infection after TIPS placement and found a porto-biliary fistula in the explant after liver transplantation [36]. In another case, Menzel et al. found a hepatic arterio-biliary fistula that caused a massive hemobilia post-TIPS [37]. Lastly, Mallery et al. found a hepatic veno-biliary fistula in a patient who developed persistent sepsis after the TIPS procedure [27]. However, none of these reports or other studies conducted a comprehensive analysis of complications and outcomes of NTP in TIPS patients.
David et al. assessed peri-procedural complications caused by NTP in trans-abdominal ultrasound-guided portal vein access in TIPS [22]. This study suggested that an ultrasoundguided portal vein puncture may be a safer method, with a rate of 5.4% for overall puncturerelated complications in 224 TIPS. The study found a lower rate of complication compared to prior studies [26,34]. However, this study found a higher complication rate than we found, which was 2.2%. In addition, this study failed to describe NTP without complications and did not compare its outcomes to any control group (non-US guided TIPS). Therefore, it is not necessarily convincing that US-guided TIPS is safer.
The 1-month mortality rate due to NTP during the TIPS procedure was 0.5% (2/369). None of the NTP patients experienced biliary tract infection, biliary obstruction, hepatic artery aneurysm or lymphatic leakage within 30 days of TIPS placement. The mortality rate is relatively low compared to the larger rate found by Barton et al., which was a 1.7% procedural mortality rate from hemoperitoneum, retroperitoneum, mediastinum, laceration of hepatic artery, portal vein, liver capsule and right heart failure [38].
Our study systematically collected data from all NTP cases that occurred during TIPS procedures to identify the types of punctures that lead to morbidity or mortality. We found NTP-related complications in eight patients. Notably, four out of five patients with clinically significant complications had a combination of capsular puncture and NTP of other structures. Accordingly, all mortalities were associated with a combination of capsular puncture and other NTP. The remaining patient with a clinically significant complication from NTP had markedly decreased hemoglobin levels (dropping from 8 to 5.9), thus requiring transfusion. This patient had hepatic artery NTP without capsular puncture or evidence of intraperitoneal hemorrhage. Therefore, the drop in hemoglobin may be contributed to other causes, such as hemodilution or hemolysis. After the transfusion, the patient was stabilized without further intervention/treatment. None of the inadvertent isolated capsular punctures led to clinically significant symptoms. There is no statistically significant difference in NTP complications, post-TIPS complications, readmission rates, lengths of stays, or mortality rates in either the combination of other non-capsular puncture NTPs or the isolated capsular puncture cohort. Similarly, no clinically significant outcomes were solely caused by arteries, lymphatics, bile ducts or combined multiple NTPs without capsular puncture, including NTP complications, post-TIPS complications, readmission rates and lengths of stays. None of the NTP patients in these groups died. Therefore, we recommend even closer clinical observation and, if necessary, additional imaging in patients who had an inadvertent capsular puncture combined with other NTPs. This approach may help clinicians to avoid delayed diagnosis of complications, such as peritoneal hemorrhage, as well as minimize prolonged hypovolemic shock from bleeding.
Our study has a number of limitations that warrant further discussion. Firstly, it was a retrospective single-center study. There was a selection bias for patient inclusion, given that patients were only included if they had imaging evidence of NTP during their TIPS procedure. Therefore, some patients who had NTPs without images could have been excluded. Future prospective studies are needed to eliminate this selection bias. Another limitation is that the follow-up data are based on the medical records within our system, and it may or may not include hospitalizations at outside facilities. Given that those outside hospital records were not available, some data may be missing. Again, a larger prospective study may address this issue and potentially eliminate this limitation. Lastly, although data are from a single-center experience, several interventional radiologists performed TIPS at our institution with a range of experiences and skill sets. This fact may have affected the outcomes of this study. However, in evaluation, it was noted that the NTP-related complications were not associated with the length of experience or skill sets.
Conclusions
In conclusion, non-target puncture injury during TIPS is not uncommon. An isolated NTP of any major intrahepatic structures, including arteries, bile ducts, lymphatics or liver capsule, does not appear to cause any clinically significant complications. However, morbidity and mortality increase if a NTP of the liver capsule is combined with other nontarget punctures. Therefore, closer observation and a higher level of monitoring, including additional imaging, may be warranted to prevent unexpected clinical outcomes.
|
2023-06-29T05:15:37.562Z
|
2023-06-01T00:00:00.000
|
{
"year": 2023,
"sha1": "9e1dc1d070a5368c4fb33f9f4a01c0d4d6480fee",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/biomedicines11061630",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9e1dc1d070a5368c4fb33f9f4a01c0d4d6480fee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
1885437
|
pes2o/s2orc
|
v3-fos-license
|
Prenatal Rosiglitazone Administration to Neonatal Rat Pups Does Not Alter the Adult Metabolic Phenotype
Prenatally administered rosiglitazone (RGZ) is effective in enhancing lung maturity; however, its long-term safety remains unknown. This study aimed to determine the effects of prenatally administered RGZ on the metabolic phenotype of adult rats. Methods. Pregnant Sprague-Dawley rat dams were administered either placebo or RGZ at embryonic days 18 and 19. Between 12 and 20 weeks of age, the rats underwent glucose and insulin tolerance tests and de novo fatty acid synthesis assays. The lungs, liver, skeletal muscle, and fat tissue were processed by Western hybridization for peroxisome proliferator-activated receptor (PPAR)γ, adipose differentiation-related protein (ADRP), and surfactant proteins B (SPB) and C (SPC). Plasma was assayed for triglycerides, cholesterol, insulin, glucagon, and troponin-I levels. Lungs were also morphometrically analyzed. Results. Insulin and glucose challenges, de novo fatty acid synthesis, and all serum assays revealed no differences among all groups. Western hybridization for PPARγ, ADRP, SPB, and SPC in lung, liver, muscle, and fat tissue showed equal levels. Histologic analyses showed a similar number of alveoli and septal thickness in all experimental groups. Conclusions. When administered prenatally, RGZ does not affect long-term fetal programming and may be safe for enhancing fetal lung maturation.
Introduction
Peroxisome proliferator-activated receptor (PPAR)γ is a ligand-activated transcription factor that belongs to the superfamily of nuclear hormone receptors [1]. Several studies have evaluated the role of PPARγ in lung maturation, demonstrating its critical significance in stimulating the alveolar epithelial-mesenchymal paracrine signaling pathway [2][3][4][5]. Recent studies have also shown that PPARγ agonists such as rosiglitazone (RGZ) significantly enhance lung maturation when administered antenatally. Its efficacy in enhancing pulmonary maturation and neonatal and longterm safety following postnatal administration has also been demonstrated recently [5,6]. In those studies, lack of any significant impact on the neonatal and long-term metabolic profile of the exposed offspring was demonstrated [5,6]. However, data on the long-term effects of RGZ are sparse, and to date no study has examined the effects of RGZ on the metabolic profile of adult rats when administered prenatally.
Despite the morbidity and mortality associated with bronchopulmonary dysplasia (BPD), there are no effective pharmacologic preventive or therapeutic options available. Antenatal steroid administration is the standard of care for augmenting pulmonary maturity in the presence of imminent premature labor [7,8]. However, steroids have both limitations and concerning side effects [9]. Given that antenatal PPARγ administration enhances lung maturation and may be an alternative to antenatal steroids, it is critically important to determine its long-term safety before this treatment modality can be considered for human use. Therefore, we wanted to determine the adult metabolic profile and lung structure of adult rats exposed to RGZ antenatally and compare these to the metabolic profile and lung structure of rats exposed to dexamethasone antenatally. To accomplish this, we utilized a previously described animal model to study the effects of prenatally administered RGZ on markers of lung maturation and the metabolic programming [5,6].
Based on previous studies, we hypothesized that a PPARγ agonist given prenatally to accelerate lung development would not significantly alter the metabolic profile or phenotype [10]. Given the known effects of PPARγ agonists on the regulation of insulin and lipid metabolism, we examined the effects of antenatal RGZ on the basic metabolic profile by measuring body weight, glucose and insulin tolerance tests, de novo fatty acid synthesis, plasma troponin-I, cholesterol, triglycerides, insulin, and glucagon levels [11][12][13]. Lung maturation in adult animals was assessed by examining the expression of surfactant proteins B (SPB) and C (SPC), PPARγ and ADRP, key alveolar epithelial, and mesenchymal molecular markers [5,14]. Lung morphometry was assessed by determining radial alveolar counts and septal thickness.
Methods
Pathogen-free, time-mated, first-time pregnant Sprague-Dawley rats (285-295 g) were obtained at day 16 of gestation (day 21= term). They were allowed food and water ad libitum in a humidity-and temperature-controlled room on a 12-h : 12-h-light : dark cycle. Rats were assigned to each of the 4 treatment groups, receiving either diluent, (cottonseed oil), 0.3 mg/kg of RGZ (Cayman Chemicals, Ann Arbor, MI), 3 mg/kg of RGZ, or 0.25 mg/kg of dexamethasone (Dexa) intraperitoneally (i.p.). The diluent, RGZ or Dexa, was administered using a microsyringe in 100 μL volumes injected i.p. once daily on gestational days 18 and 19, 24 hours apart, for a total of two doses each. On day 22 of pregnancy, the dams delivered spontaneously. A total of 33 pups from 4 litters (for each study group), with a minimum of 2 males and 3 females in each group were studied. Pups were breast-fed ad libitum and then weaned to rat chow on postnatal day 21. Glucose tolerance and insulin tolerance tests were performed at 12 weeks of age. To perform these studies, either glucose or insulin was administered after an overnight fast. At 20 weeks, the left lungs were collected and flash-frozen for later Western hybridization to determine the expression of PPARγ, ADRP, SPB, and SPC. Right lungs were inflated with saline at a pressure of 20 cm H 2 O, after which the trachea was immediately ligated to maintain inflation. Subsequently, the lungs were stored in 30% dextrose for 2 weeks after which they were embedded in paraffin for further sectioning, H&E staining, and light microscopy. Liver, muscle, and perinephric fat were also collected and flash-frozen to determine the effects of prenatal RGZ on the expression of PPARγ and ADRP, a downstream target of PPARγ. Peripheral blood was collected and stored at −80 • C for later determination of cholesterol, triglyceride, glucagon, insulin, cardiac troponin, and fatty acids. In a subset of animals (n = 6 for each group; 3M : 3F), at 19 weeks de novo fatty acid synthesis and incorporation into tissues were analyzed by deuterium (D 2 O) labeling and mass spectrometry, as previously described [15]. Briefly, animals received deuterated water (99.9%) prepared in normal saline in an amount equal to ∼4% of body weight and administered intraperitoneally and then were given free access to drinking water containing 6% D 2 O for 7 days. At the end of the experimental period (∼20 weeks of age), the animals were sacrificed using 0.1 mL euthasol (= 39 mg pentobarbital sodium, Virbac AH, Ft. Worth, TX) per rat. All animal procedures were performed following the guidelines of the National Institutes of Health for the care and use of laboratory animals and approved by the Los Angeles Biomedical Research Institute, Animal Care and Use Committee.
Glucose Tolerance Test and Insulin Tolerance Test.
Either glucose (1 g/kg body wt, intraperitoneal) or insulin (1 unit/kg, subcutaneous) was administered after an overnight fast. Serum glucose levels were assayed at different time points (0, 15, 30, 60, 120, and 180 minutes) using a glucometer (Home Diagnostics, Fort Lauderdale, FL), according to the manufacturer's protocol.
Cholesterol and Triglyceride
Assays. Cholesterol and triglyceride levels were determined using the RAICHEM kit (Cliniqa Corporation, San Marcos, CA, with a dynamic range of 0-600 mg/dl, an intra-assay coefficient of variation of 1.7%), and the Cayman kit (Caymen Chemical Company, Ann Arbor, MI, dynamic range of 0-200 mg/dl, and intraassay coefficient of variation of 1.34%), respectively, following the manufacturer's protocol.
Plasma Insulin and Glucagon.
Plasma insulin was measured using an ELISA kit (detection limit of 0.2 ng/mL and 100% specificity) and glucagon was measured via an RIA kit (detection limit of 20 pg/mL and cross-reactivity with oxyntomodulin : <0.1%) purchased from Linco (Linco Research, St. Charles, MO).
Measurement of Plasma Cardiac Troponin-I Levels.
Determination of cardiac troponin-I levels was done based on a rat cardiac Troponin-I ELISA kit as per the manufacturer's protocol (Cat. no. 2010-2-HSP, Life Diagnostics, Weight (grams) Week of life Figure 2: Effect of RGZ on glucose tolerance test (GTT). At 12 weeks of age, glucose was administered at 1 g/kg body weight i.p. after an overnight fast. Glucose was assayed at time = 0 (baseline), 15, 30, 60, 120, and 180 minutes following glucose administration. There were no significant differences (P > 0.05) in timed serum glucose values in the treated groups compared with controls during the GTT.
2.6. Fatty Acid Analysis. De novo fatty acid synthesis was analyzed by deuterium labeling, followed by mass spectrometry, as previously described [5,15].
Histologic Analysis.
Lung morphometry was performed following previously described methods [10].
Statistical Analysis.
Analysis of variance and two-tailed Student's t-test with Bonferroni correction for multiple comparisons were used to analyze the experimental data. P values < 0.05 were considered to be statistically significant.
Effect of RGZ on Body Weight.
A total of 33 pups (8-9/group) were studied in each group. There were no significant differences in birth weight of pups from each experimental group. Body weight was determined every 2 weeks as an overall measure of growth and metabolism starting on day 30 of life until week 14 and at sacrifice. We found no significant differences (P > 0.05) in body weight among the treatment groups at all time-points examined ( Figure 1).
Effect of RGZ on Glucose and Insulin Tolerance.
Glucose and insulin tolerance tests showed no significant differences in glucose values among the different groups at all timepoints examined (Figures 2 and 3).
Effect of RGZ on Insulin, Glucagon, and Cardiac
Troponin Levels. Table 1 shows that there were no significant differences in insulin, glucagon, or cardiac troponin-I levels among any of the groups, (P > 0.05 for all). Table 1 shows no significant differences in plasma cholesterol and triglyceride levels in the control group versus the RGZ or Dexa-treated group, (P > 0.05 for all).
Effect of RGZ on Fatty Acid
Synthesis. Analyses of de novo fatty acid synthesis and their incorporation into tissues at 19 weeks showed that the fraction of de novo synthesized palmitate molecules in the RGZ-and Dexa-treated groups were comparable to the control group (Table 2). Table 1: Insulin, glucagon, lipids, and troponin measurements. Plasma samples taken at 20 weeks for metabolic analyses showed there were no significant differences (P > 0.05) in insulin, glucagon, lipids, and troponin measurements in RGZ-and dexamethasone-treated groups compared with controls. Values are mean ± SD. N = 24 (6 in each group).
Effect of RGZ on Alveolar Differentiation.
Western blot analysis for SPB, SPC, PPARγ, and ADRP on protein lysates from whole lung samples from different groups showed that when compared to control, Dexa-and RGZ-treated groups had no significant effect on the expression of all the molecular markers probed (P > 0.05 for all, Figure 4).
Effect of RGZ on PPARγ and ADRP Expression in
Liver, Muscle, and Perinephric Fat. Figure 5 shows Western Blot results for the extrapulmonary PPARγ-and ADRPexpressing tissues (liver, muscle, and perinephric fat) examined. There were no significant differences in PPARγ and ADRP protein levels among the different treatment groups when compared with controls (P > 0.05 for all).
Lung Histology.
Morphometric analysis showed no significant differences in septal thickness and alveolar count between the control, Dexa, and RGZ-treated groups (P > 0.05, Figure 6).
Discussion
In view of the increasing survival of extremely low birth weight infants and the accompanying increased prevalence of BPD, it is imperative that we find optimal preventive and therapeutic interventions to decrease the morbidities and mortality associated with this condition [16]. At present, the standard of care to augment lung maturity during imminent premature delivery is antenatal steroid administration; however, evidence suggests steroids may increase the risk for significant adverse effects like altered neuronal development [17]. Despite the necessity to find an optimal treatment for lung immaturity, extensive research in the field has not succeeded in finding such an alternative to antenatal steroids. In the last decade, the possibility of using PPARγ agonists to enhance lung maturation and promote lung injury repair has been explored [1][2][3]. In addition, our laboratory has shown that in the developing lung PPARγ agonists can prevent lung injury induced by infection, nicotine, or hyperoxia [18,19]. Similarly, a recent study by Garg et al. has provided evidence that early postnatal administration of PPARγ agonists can reverse the effects of growth restriction [20]. Regardless of its evident efficacy, the long-term safety of prenatally administered PPARγ agonists is unknown. Our present study is the first to examine the long-lasting molecular effects of prenatally administered RGZ, a potent PPARγ agonist. Our results demonstrate that all of the metabolic parameters examined did not change, and RGZ did not alter the adult phenotype of our experimental groups compared with controls. Given RGZ's known effects on insulin and fat metabolism, we determined the body weight patterns across all study groups and observed no significant differences in growth rate and adult weight at 20 weeks of age [12,19]. Since PPARγ activation regulates the transcription of insulin-responsive genes involved in the metabolism of glucose, we also studied the effects of RGZ on glucose and insulin tolerance as well as glucagon and insulin levels in adults following prenatal RGZ administration [21]. We found that RGZ did not affect either the glucose or insulin tolerance tests, nor the serum insulin or glucagon levels in any of the experimental groups.
Given that PPARγ-related genes are involved in the regulation of lipid metabolism and have effects on the lipid profile, we also determined serum cholesterol and triglyceride levels, as well as de novo fatty acid synthesis, among the experimental groups and found no alterations in either serum triglyceride or cholesterol levels when compared to non-treated animals [11]. Results of mass spectrometric analyses did not show alterations in the rate of de novo fatty acid synthesis in the experimental groups.
Rosiglitazone is widely used in the adult population for the treatment of hyperglycemia in diabetes [21,22]. Recent reports have associated RGZ at a dose of 4 mg twice daily for a period of 20 weeks with an elevated risk of cardiovascular events in this population [23]. We measured cardiac troponin due to its well-established validity as a marker for cardiac injury and to allow for comparison with previous data on rat cardiac function studies [24]. Our study did not reflect any differences in troponin-I levels among the study groups. In contrast to the human data, the absence of cardiotoxicity of RGZ is probably due to much shorter and lower doses used in our animal study compared to much greater exposure in adults (2 doses in our study versus 280 doses in the adult studies).
In addition, we measured the effect of antenatally administered RGZ on the expression of SBP, SPC, PPARγ, and ADRP (the downstream target of PPARγ) in the lung, and in selected extrapulmonary PPARγ-expressing tissues such as the liver, adipose tissue, and muscle. Our results show that when compared to controls, there were no significant differences in the expression of SPB, SPC, PPARγ, or ADRP in either the pulmonary or extra-pulmonary tissues examined. Lastly, morphologic studies did not show any differences in the septal thickness and number of alveoli between the experimental and control groups.
In summary, long-term followup after prenatal administration of RGZ showed no effects on body weight, insulin and glucagon tolerance tests as well as on insulin, glucagon, triglyceride, cholesterol, or troponin-I levels. In addition, RGZ did not have any effects on fatty acid synthesis or lung morphology, suggesting absence of any long-term metabolic or pulmonary effects following antenatal exposure.
Among the various thiazolidinediones, RGZ was selected for this study based on extensive clinical experience of others and our studies on its role in perinatal lung maturation [5,[25][26][27]. The results of this study should be interpreted with caution since given the small sample size, the possibility Figure 5: Effect of RGZ on PPARγ and ADRP expression in liver, skeletal muscle, and perinephric fat. Utilizing Western blot assay, PPARγ (a) and ADRP (b) protein levels were examined in the whole tissue lysates of liver, skeletal muscle, and perinephric fat. There were no significant differences (P > 0.05) in the protein levels of PPARγ and ADRP in the liver, skeletal muscle, or perinephric fat, normalized to GAPDH, in the treated groups compared with the control. Representative Western blots and the corresponding density histograms are shown (n = 4 in each group). of a type II error canot be ruled out. However, the promising benefits of thiazolidinediones at the doses used in our studies and the favorable long-term results in the present study strengthen the argument for the use of PPARγ agonists as an effective and safe alternative for the prevention of BPD.
Conclusions
RGZ is an effective intervention in the enhancement of lung maturity and the promotion of lung injury repair. Longterm followup of antenatally treated subjects in our study did not show any changes in their metabolic profile or in their phenotype, suggesting that PPARγ agonists are a safe alternative for the prevention and treatment of BPD. Though human studies have shown increased cardiovascular risk associated with RGZ, such adverse effects were not seen in this study, probably due to very different dosing regimens [23,28]. RGZ is a prototype for the thiazolidinedione group of drugs and our results possibly demonstrate a beneficial class effect suggesting the need for pharmacokinetic and pharmacodynamic studies in humans with the goal of developing this class of drugs as an effective and safe alternative to enhance fetal lung maturation.
|
2016-05-12T22:15:10.714Z
|
2010-04-01T00:00:00.000
|
{
"year": 2012,
"sha1": "2cd7b01c2387abe4a79264ada841b3f1cd322461",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ppar/2012/604216.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "79405dde02b5ce38cd705ed8294f2f89bacb09e6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
236090707
|
pes2o/s2orc
|
v3-fos-license
|
Effect of door-to-door distribution of HIV self-testing kits on HIV testing and antiretroviral therapy initiation: a cluster randomised trial in Malawi
Introduction Reaching high coverage of HIV testing remains essential for HIV diagnosis, treatment and prevention. We evaluated the effectiveness and safety of door-to-door distribution of HIV self-testing (HIVST) kits in rural Malawi. Methods This cluster randomised trial, conducted between September 2016 and January 2018, used restricted 1:1 randomisation to allocate 22 health facilities and their defined areas to door-to-door HIVST alongside the standard of care (SOC) or the SOC alone. The study population included residents (≥16 years). HIVST kits were provided door-to-door by community-based distribution agents (CBDAs) for at least 12 months. The primary outcome was recent HIV testing (in the last 12 months) measured through an endline survey. Secondary outcomes were lifetime HIV testing and cumulative 16-month antiretroviral therapy (ART) initiations, which were captured at health facilities. Social harms were reported through community reporting systems. Analysis compared cluster-level outcomes by arm. Results Overall, 203 CBDAs distributed 273 729 HIVST kits. The endline survey included 2582 participants in 11 HIVST clusters and 2908 participants in 11 SOC clusters. Recent testing was higher in the HIVST arm (68.5%, 1768/2582) than the SOC arm (48.9%, 1422/2908), with adjusted risk difference (RD) of 16.1% (95% CI 6.5% to 25.7%). Lifetime testing was also higher in the HIVST arm (86.9%, 2243/2582) compared with the SOC arm (78.5%, 2283/2908; adjusted RD 6.3%, 95% CI 2.3% to 10.3%). Differences were most pronounced for adolescents aged 16–19 years (adjusted RD 18.6%, 95% CI 7.3% to 29.9%) and men (adjusted RD 10.2%, 95% CI 3.1% to 17.2%). Cumulative incidence of ART initiation was 1187.2 and 909.0 per 100 000 population in the HIVST and SOC arms, respectively (adjusted RD 309.1, 95% CI −95.5 to 713.7). Self-reported HIVST use was 42.5% (1097/2582), with minimal social harms reported. Conclusion Door-to-door HIVST increased recent and lifetime testing at population level and showed high safety, underscoring potential for HIVST to contribute to HIV elimination goals in priority settings. Trial registration number NCT02718274.
INTRODUCTION
In 2016, an estimated 19.4 million people were living with HIV in southern and eastern Africa. 1 Despite expansion of HIV testing and treatment programmes, one-quarter of people living with HIV remained unaware of their HIV status. HIV testing gaps were highest in adolescents and men, including in Malawi. 1 In 2015-2016, the proportion of undiagnosed HIV was 46% among HIV-positive adolescents and young adults aged 15-24 years, the highest across age groups. 2 Of men with HIV, 28% were unaware of their status compared with 20% of women with HIV. 2 Reaching high coverage of HIV testing remains essential for
WHAT IS ALREADY KNOWN?
⇒ HIV self-testing (HIVST) can further extend coverage of HIV testing among underserved population subgroups. ⇒ Limited data were previously available on the effectiveness and safety of HIVST from rural, underserved populations in high HIV prevalence settings.
WHAT ARE THE NEW FINDINGS?
⇒ Door-to-door distribution of HIVST kits by community-based distribution agents increased recent HIV testing and lifetime HIV testing, with differences most pronounced among adolescents aged 16-19 years and men. ⇒ Cumulative incidence of antiretroviral therapy initiations was not shown to increase for the overall 16-month intervention period. ⇒ Self-reported HIVST use was 42.5%, with minimal social harms reported.
WHAT DO THE NEW FINDINGS IMPLY?
⇒ Door-to-door HIVST demonstrates significant potential to contribute to HIV elimination goals in priority settings.
BMJ Global Health
HIV diagnosis, treatment and prevention, 3 but access of facility-based HIV services can be limited by social, economic and health system barriers. [4][5][6] Community-based HIV testing strategies can identify HIV-positive persons at earlier stages of infection and improve antiretroviral therapy (ART) initiation and retention when provided with universal treatment services. 7 8 Provision of HIV self-testing (HIVST) through community-based approaches can further extend coverage of HIV testing among underserved population subgroups. 9 In Malawi, urban communitybased distribution of HIVST kits achieved high uptake, with offer of home-based HIV care further increasing demand for ART. 10 11 Introducing HIVST with door-todoor HIV testing services (HTS) by community health workers increased knowledge of HIV status among urban Zambians. 12 Community-based HIVST is therefore a promising approach for providing HIV testing, though lower literacy and healthcare access among rural populations could influence uptake of self-care technologies. 13 Limited data were previously available on the effectiveness and safety of HIVST from rural, underserved populations in high HIV prevalence settings.
In this study, we used a cluster randomised trial to evaluate the effectiveness and safety of door-to-door distribution of HIVST kits in rural Malawi. Specifically, we aimed to assess whether distribution of HIVST kits through community-based distribution agents (CBDAs) increased the proportion of the population who tested for HIV and were initiated on ART at cluster level. Our study is part of a multicountry evaluation of community-based distribution of HIVST kits under the Unitaid/Population Services International (PSI) HIV Self-Testing Africa (STAR) Initiative.
METHODS Design, setting and participants
We conducted a parallel cluster randomised trial of door-to-door distribution of HIVST kits. 14 The study was based in 22 government primary health centres and their defined areas in four high HIV prevalence districts (Blantyre, Machinga, Mwanza, Neno). A cluster randomised design was adopted since the intervention was implemented at the health facility level. The study team enrolled health facilities providing HIV testing and ART services to rural communities in their catchment areas, with verbal consent obtained from facility representatives. Boundaries were drawn for: (1) the facility catchment area, and (2) the evaluation area within the facility catchment area. The intervention was delivered throughout the facility catchment area, while primary and secondary outcomes were measured among residents from the evaluation area. Specifically, the study population included residents aged 16 years and older from the evaluation area.
Randomisation
The 22 health facilities were randomised 1:1 to the HIVST intervention alongside the standard of care (SOC) or the SOC alone, which primarily consisted of facility-based HTS (figure 1). A computer-generated random sample was drawn by MN from 150 855 unique combinations of allocating health facilities to one of the two study arms, restricted by district, catchment population size, number of HTS clients and the proportion of clients testing HIV positive. 15 The final allocation was assigned at a public ceremony on 21 March 2016. Numbered balls were selected by community and government representatives from an opaque bag that corresponded to a unique allocation. Blinding of the implementation team and residents was not feasible due to the nature of the intervention, but masking was maintained where possible, including data collection, management and analysis without reference to the study arms.
A planned second randomisation of home-based HIV care in the HIVST arm was not implemented due to delays in initiating the intervention, leaving an insufficient interval for assessment. 14
Procedures
The HIVST intervention was delivered for at least 12 months within the evaluation area of eligible health facilities before expanding to the rest of the facility catchment area. HIVST kits were distributed by existing CBDAs, who provided reproductive health products prior to HIVST distribution, and newly recruited CBDAs selected in consultation with village heads.
PSI Malawi conducted 1-week trainings based on an HIVST training curriculum developed in collaboration with the Ministry of Health. The training included basic information on HIV diagnosis and treatment; promoting
BMJ Global Health
HIVST using social marketing; using kits and interpreting results; providing pretest and post-test information and support, including referral for confirmatory HIV testing and ART following a positive self-test; anticipating and managing social harms; storing kits; and collecting data. National HIV testing and counselling practices and principles on voluntariness, consent and protection of client privacy and confidentiality were also covered in the training.
CBDAs then provided the OraQuick HIV Self-Test (OraSure Technologies, Thailand), along with locally adapted instructions for use, 16 an opaque envelope for disposal and a self-referral card to facilitate linkage to routine HIV services at health facilities. In their respective areas, CBDAs distributed HIVST kits door-to-door or on request to residents aged 16 years and older, with their sociodemographic characteristics recorded in registers. Residents could self-test with CBDAs or in private. If residents elected to self-test privately, CBDAs followed up within 7 days of distribution to provide optional post-test support. Disclosure was not required, and HIVST results were not recorded in registers. Residents were also asked to place their used kits in envelopes to be returned to CBDAs or deposited in locked boxes located centrally in each village. PSI provided monthly supervision to verify data in CBDA registers, collect used kits and restock supplies. CBDAs were remunerated for each kit distributed (MWK100/US$0.15) and each kit distributed with linkage to HIV care (MWK150/USD$0.23).
The SOC in both arms included HIV testing and ART services under the Ministry of Health, offered primarily at health facilities. Standard HIV testing used bloodbased rapid diagnostic testing algorithms, with ART initiated immediately following a confirmed HIV diagnosis.
Outcomes and measurement
The primary outcome compared between arms the proportion of individuals aged 16 years and older who self-reported recent testing for HIV (in the last 12 months), measured at cluster level using an endline survey. Secondary outcomes compared (1) self-reported lifetime HIV testing, and (2) cumulative 16-month incidence of ART initiations per 100 000 population, which was ascertained using ART clinic records during the intervention period.
HIV testing outcomes were measured through a crosssectional survey administered at the end of the intervention period. In each evaluation area, two villages with a minimum of 250 residents aged 16 years and older were randomly selected, with one village surveyed at endline and one village surveyed at baseline. The baseline survey was conducted prior to the intervention to adjust for imbalance between arms in the primary outcome.
Households in the evaluation villages were enumerated and randomly selected to provide a sample of at least 250 participants per village. All individuals aged 16 years and older in selected households were eligible for the survey, with multiple visits for interviews attempted to maximise the response rate. Informed verbal consent or assent was obtained. Participants were then interviewed on household and sociodemographic characteristics and prior use of HIV testing, treatment and prevention services.
ART initiation data were extracted from registers at each of the health facilities for the 16-month intervention period and the 12-month period preceding the intervention. Eligibility criteria included ART patients aged 16 years and older from the evaluation area. Population estimates for the evaluation area, which were used as the denominator for the ART outcome, were obtained from facility and village registers.
The proportion of lifetime HIVST use and the number of HIVST kits distributed were evaluated using the endline survey and CBDA registers. Adverse events related to HIVST were also measured using the endline survey in addition to a community reporting system established in evaluation villages to identify and manage potential adverse events. 17 Community stakeholders, including village heads, community health workers, religious leaders and police officers, documented, investigated and managed social harms related to HIV testing and self-testing. Adverse events were reported to the study team and assessed, categorised by severity and followed up as appropriate. 17 Sample size With 11 clusters per arm and 250 participants per cluster, we had at least 80% power at a 5% significance level to detect a 30% relative increase in the primary outcome of recent HIV testing in the HIVST arm, assuming 25%-40% coverage in the SOC arm. 18 The study was also powered to identify a 45% relative increase in lifetime HIV testing in the HIVST arm, assuming 42%-60% coverage in the SOC arm. The sample size was calculated using a coefficient of variation (k) in clusters of 0.25. 15
Statistical analysis
We conducted an intention-to-treat analysis based on cluster assignment to study arms and used methods appropriate for cluster randomised trials. 15 The risk difference (RD) and risk ratio were calculated respectively from cluster-level risks and log risks, which were compared by arm using a t-test. For HIV testing outcomes, we adjusted for imbalances in individual-level covariates based on a two-stage approach. 15 The first stage used logistic regression with individual-level covariates to obtain predicted values, which were summed at the cluster level and applied to calculate the difference and ratio of observed and predicted values. The second stage used linear regression of covariate-adjusted residuals obtained from the first stage and included the study arm. To adjust for imbalance in the primary outcome prior to the intervention, the cluster-level baseline covariate of recent HIV testing was also included in the regression model. The ART initiation outcome adjusted for ART uptake in the 12-month preintervention period.
BMJ Global Health
For recent HIV testing, a priori subgroup analyses were specified by sex, age group (16-19 years, 20 years and older) and socioeconomic status (lowest, middle, highest strata). Post hoc analysis used alternative categories of age group (16-19 years, 20-39 years, 40 years and older). Further, subgroup analyses were conducted for lifetime HIV testing by sex, age group and socioeconomic status, and for ART initiations by intervention period (0-5, 6-11, 12-16 months). Statistical analysis used Stata V.14.0. Population characteristics for the endline survey are summarised in table 1. The proportion of men was 42.6% (2339/5490) and the median age was 31 years old. The majority of participants did not have beyond primarylevel education (84.9%, 4661/5490). Most characteristics were well balanced by arm. Differences were observed for marital status, with 69.5% (1795/2582) married in the HIVST arm and 63.2% (1838/2908) married in the SOC arm.
Implementation
The baseline survey was administered between May and August 2016. Of listed individuals, 78.5% (2809/3577) and 74.7% (2664/3567) were surveyed in the HIVST and SOC arms, respectively (online supplemental tables 2 and 3). Baseline coverage of HIV testing in the last 12 months was higher in the HIVST arm (56.0%, 1574/2809) than the SOC arm (48.4%, 1289/2664; table 1). We therefore adjusted for baseline differences in analysis of primary and secondary outcomes. Self-reported lifetime use of HIVST at baseline was limited (7/5473).
Process outcomes
Consistent with the high number of HIVST kits distributed, there were large differences between arms in awareness and use of HIVST at endline. The proportion of participants who had heard of HIVST at endline was 88.8% (2294/2582) in the HIVST arm and 31.5% (917/2908) in the SOC arm (table 3). Self-reported lifetime HIVST use was respectively 42.5% (1097/2582) in the HIVST arm, with uptake highest in young men aged 20-24 years (58%) and adolescent boys (49.0%; online supplemental figure 2). Similar coverage was reported for HIVST use in the last 12 months. HIVST use was 8.3% (240/2908) in the SOC arm, with one cluster exposed to an external community-based HIVST programme in 2017. Among participants who recently self-tested in the HIVST arm (n=794), most received HIVST kits from the CBDA (97.9%, n=777) and collected their kits at home (76.7%, n=609). Further, 0.8% (n=6) reported a new HIV-positive result and 2.0% (n=16) reported a §Denominator for ART initiations is the estimated population of adults ≥16 years in the evaluation area, which was estimated using village and health facility registers.
BMJ Global Health
ART, antiretroviral therapy; GM, geometric mean (of cluster-level proportions); HIVST, HIV self-testing; k, coefficient of variation in health facility-defined clusters; RD, risk difference; SOC, standard of care.
Safety outcomes
At endline, 0.5% (4/794) of participants reported being forced to self-test or disclose their self-test results (table 3). Three events of social harm related to HIVST were reported, managed and resolved through the community reporting system: one case involved discrimination from household members for collecting an HIVST kit; two cases involved temporary separation between couples, with one event from self-testing and one event due to newly identified serodiscordancy within the couple. In an additional event reported to implementers in nonevaluation areas, a perinatally infected adolescent under the eligible age acquired an HIVST kit and suffered a highly stigmatising response following self-testing with her friends. These events have been described in detail elsewhere. 17
DISCUSSION
The main findings from this cluster randomised trial were that door-to-door distribution of HIVST kits by CBDAs increased recent and lifetime HIV testing at population level in rural Malawi. Our primary outcome of recent testing increased by 16.1%. Lifetime testing increased by 6.3%, with differences between arms most pronounced among priority subgroups: adolescents aged 16-19 years and men. The HIVST intervention did not show an effect on cumulative incidence of ART initiations at health facilities for the overall 16-month intervention period. HIVST use was reported by 42.5% of participants in the HIVST arm, with uptake highest among young men aged 20-24 years and adolescent boys. Few serious adverse events were reported. Our results therefore support door-to-door HIVST as an effective and safe strategy that can be used to meet HIV testing needs in underserved rural populations.
Our study is one of three community-based randomised trials from rural settings in southern Africa that were implemented as part of STAR. 14 19 20 Affordable, convenient and safe HIV testing strategies are important for rural populations, who often have more pronounced barriers to accessing healthcare. 13 The STAR trials had critical differences that can be used to guide policy and future research priorities. Our results showed increased recent and lifetime HIV testing from door-to-door HIVST, consistent with a separate Zambian trial, which added HIVST to an intensive community-based HIV programme. 12 High coverage of lifetime testing (88.7%) and lifetime HIVST use (50.2%) was reported for both arms of the STAR Zimbabwe trial, which compared the impact of remuneration strategies under campaign-style distribution by CBDAs on linkage to HIV care. 19 Our study, along with the Zimbabwe trial, implemented doorto-door distribution. In contrast, provision of HIVST kits at home, high-density community sites and health facilities under the STAR Zambia trial resulted in lower HIVST use (26.3%) and no measurable increase in lifetime or recent testing. 20 Process outcomes, such as HIVST awareness, were also lower in Zambia than in Malawi and Zimbabwe, suggesting that door-to-door distribution can lead to higher penetration than broader communitybased models. 19 20 We showed encouraging uptake of HIVST, with minimal social harms reported. Uptake was highest among young men aged 20-24 years followed by adolescent boys aged 16-19 years. Our study also reported increased lifetime HIV testing among adolescents and men. HIVST can bypass barriers that prevent uptake of standard HTS by these priority subgroups, 10 21 with HIVST valued for the convenience and confidentiality afforded. 5 22 However, our results demonstrated lower uptake compared with the STAR Zimbabwe trial, which evaluated more intensive distribution across a shorter period of time. 19 Similarly, a previous study in urban Malawi reported 84% uptake from distribution of HIVST kits by community volunteers, which may indicate higher acceptability BMJ Global Health among urban counterparts. 10 Understanding remaining demand-side barriers may allow for further optimisation of community-based HIVST strategies to maximise coverage and impact among underserved subgroups. Alternative HIVST strategies should also be considered. In Malawi, facility-based provision of HIVST kits among outpatients increased coverage of HIV testing, especially among adolescents. 23 Another study in Malawi found that secondary distribution to male partners of pregnant women extended testing coverage. 21 Our study did not observe an increase in ART initiations for the overall 16-month intervention period but in subgroup analysis for the 6-11 months' postintervention period. Further, 0.8% of participants reported a new positive result from HIVST, with frequent repeat testing among participants already known to be HIV positive. Impact on ART uptake varied across STAR trials. 24 A nonrandomised evaluation accompanying the Zimbabwe trial estimated a 27% increase in ART initiation rates, 19 while no difference was observed in Zambia. 20 Linkage to HIV care is practically difficult to capture, with potential for measurement errors. 24 True impact on ART demand from HIVST will depend on the prevalence of untreated HIV, which has been declining in southern and eastern Africa. 2 The intensity and reach of HIVST distribution strategies will also influence population-level impact. Additionally, interventions to encourage timely linkage to health facilities may be required, such as provision of home-based HIV care or more substantial financial incentives. 11 21 The benefits of community-based HTS are well established, with the main barrier to implementation including high cost per test and cost per new diagnosis, especially as countries reach the 'First 90' targets. 25 Economic analysis of our intervention is reported separately. 26 CBDA distribution showed average cost of 2017 US$8.15 per HIVST kit distributed, with the main cost contributors including personnel and HIVST kits. 26 Unit cost of communitybased HIVST was higher than the average cost of facilitybased HTS (2016 US$4.92) and facility-based HIVST (US$4.99) in Malawi. 23 27 While community-based HIVST is likely to maintain higher levels of knowledge of recent HIV status than standard HTS alone, sustainable provision will require further reductions in costs and optimisation of linkage to HIV treatment and prevention. For example, providing periodic campaigns is likely to be less costly than maintaining a continuous programme, especially if targeted to high-prevalence populations or underserved subgroups with ongoing HIV risk. Alternatively, a community-led approach for delivering HIVST has potential to further reduce costs. 28 The main strength of this study is the use of a robust cluster randomised design to report on the effectiveness and safety of large-scale implementation of communitybased HIVST. Further, CBDAs are commonly used to distribute health commodities in Malawi, with our findings potentially generalisable to settings in sub-Saharan Africa with similar community health cadres. We also add to the body of evidence on effective strategies for expanding HIV testing coverage in rural, HIV-prevalent populations and among population subgroups with substantial undiagnosed HIV.
Limitations included HIV testing outcomes that were self-reported and therefore susceptible to misreporting. ART initiations may be underestimated if study residents accessed non-study health facilities, which we aimed to minimise with our inclusion criteria of health facilities. Non-participation in the endline survey could result in ascertainment bias, with response rates lower among men than women. We did not account for householdlevel clustering, though this was unlikely to have altered our findings. 29 We discontinued second randomisation of home-based HIV care in the HIVST arm; however, the outcomes reported in this study were not affected. Data on social harms were passively collected through community reporting systems, potentially under-reporting the number of adverse events. Finally, our findings are limited to our intervention design, which included doorto-door implementation through remunerated CBDAs.
CONCLUSION
Door-to-door distribution of HIVST kits by CBDAs increased recent and lifetime HIV testing in rural, underserved populations, including among adolescents 16-19 years and men. ART initiations showed no differences between arms for the overall 16-month intervention period. HIVST was very acceptable and safe, with uptake highest among young men and adolescent boys. Doorto-door HIVST demonstrates significant potential to contribute to HIV elimination goals in priority settings. Further, as countries approach the 'First 90' targets, this approach could be adapted for periodic implementation to meet the ongoing need for HTS in settings with high undiagnosed HIV.
BMJ Global Health
Research Programme and Population Services International Malawi teams; and the technical advisory group.
Contributors ELC conceptualised the study. PPI, KF, RC, ND, KH and ELC contributed to the study design. PPI, MK, RC and RNz supervised the implementation of the study, including the intervention and data collection. PPI, MN and KF conducted the statistical analysis. PI, KF, RNy, CCJ, MT, ND, KH and ELC critically interpreted the results. PPI wrote the first draft of the manuscript. All authors reviewed and approved the final manuscript.
Funding The study was funded by Unitaid (grant number: PO 8477-0-600). ELC is also funded by the Wellcome Trust (WT091769).
Competing interests None declared.
Patient consent for publication Not required. Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available in a public repository. Data are available upon request through datacompass. lshtm. ac. uk. The protocol, statistical analysis plan, and CONSORT checklist are available as online supplementary files.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Open access This is an open access article distributed under the terms of the Creative Commons Attribution IGO License (CC BY NC 3.0 IGO), which permits use, distribution,and reproduction in any medium, provided the original work is properly cited. In any reproduction of this article there should not be any suggestion that WHO or this article endorse any specific organization or products. The use of the WHO logo is not permitted. This notice should be preserved along with the article's original URL. Disclaimer: The author is a staff member of the World Health Organization. The author alone is responsible for the views expressed in this publication and they do not necessarily represent the views, decisions or policies of the World Health Organization.
|
2021-07-20T06:22:49.334Z
|
2021-07-01T00:00:00.000
|
{
"year": 2021,
"sha1": "360185916025c80bbbb306e118eed1bf39207a62",
"oa_license": "CCBYNC",
"oa_url": "https://gh.bmj.com/content/bmjgh/6/Suppl_4/e004269.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "87cd45667dac4e9b5ba3094f10a8750f917e58ef",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
253520926
|
pes2o/s2orc
|
v3-fos-license
|
Income differences in partial life expectancy between ages 35 and 64 from 1988 to 2017: the contribution of living arrangements
Abstract Background Socioeconomic differences in mortality among the working-age population have increased in several high-income countries. The aim of this study was to assess whether changes in the living arrangement composition of income groups have contributed to changing income differences in life expectancy during the past 30 years. Methods We used Finnish register data covering the total population to calculate partial life expectancies between ages 35 and 64 by income quartile in 1988–2017. The contribution of living arrangements to these differences was assessed by direct standardization. Decomposition methods were used to determine the extent of life expectancy differences due to external (accidental, violent and alcohol-related) causes of death. Results The life expectancy gap between the highest and lowest income quartile increased until 2003–07, but decreased thereafter. The contribution of living arrangements to these differences remained mostly stable: 36–39% among men and 15–23% among women. Those living without children consistently showed the greatest life expectancy differences by income. External causes of death significantly contributed to income differences in life expectancy. Conclusions The living arrangement composition of income groups explained part of the differences in life expectancy, but not their changes. Our results on the contribution of external causes of death imply that both the persistent income gradient in mortality as well as the mortality disparities by living arrangements are at least partially related to similar selection or causal mechanisms.
Introduction
I ncreasing socioeconomic differences in mortality among the working-age population have been documented in several highincome countries in recent decades. [1][2][3][4] Inequalities may be particularly stark between income groups, 5 since the indicator allows for the identification of the most economically disadvantaged. Besides economic disadvantage, health-related selection, health behaviours and sociodemographic factors have been suggested to explain income differences in mortality. Previous research suggests that part, but not all, of the income-mortality association may be explained by pre-existing health status. [6][7][8] Behavioural arguments are supported by findings indicating a significant contribution of alcohol-and smokingrelated mortality to socioeconomic life expectancy differences, as well as changes therein. [9][10][11] The contribution of sociodemographic factors, such as living arrangements and family characteristics, has been less studied, although these factors seem to account for some of the observed socioeconomic differences in mortality. 3,12,13 Simultaneously, a related but largely separate line of research has documented increasing mortality differences by marital status, mainly stemming from a sharp and unparalleled decline in mortality among the married in the late 20th century. 14, 15 These disparities have been attributed to selection into and out of marriage based on health status [16][17][18][19] or other, e.g. socioeconomic, characteristics 18 as well as health-promoting material [19][20][21][22] and psychosocial resources shared between spouses, such as social support and control of health behaviours. 19,[22][23][24][25] Increases in accidental, violent and alcohol-related causes of death among the non-married suggest that changes in health-related behaviours may have played a role in growing marital status differences in mortality in 1976-2000. 14 Mortality differences by marital status have also been found to attenuate after controlling for socioeconomic status, 14,26,27 but their increase over time does not seem to be explained by changes in socioeconomic factors, such as education or occupation. 14,28 Since marital status alone does not capture the relationships and resources within families and households, a more comprehensive classification of living arrangements has been proposed to examine differences in health and mortality. 26,27 The relevance of household composition is further demonstrated by findings suggesting that living alone and the presence of a partner may in fact be stronger independent predictors of mortality differences than marital status per se, 29 and that childlessness and non-residential parenthood seem to be related to excess mortality among the working-age population. 27,30,31 Research on the joint contribution of socioeconomic status and living arrangements to mortality differences has been scarce. As controlling for household characteristics has been found to attenuate mortality differences by socioeconomic status, 3,12,13 and vice versa, 14,26,27 there is reason to believe that similar mechanisms, such as health-related selection and health behaviours, may induce inequalities along both axes. Persons with poor health are more likely to become unemployed, 32,33 leading to lower income, and less likely to marry, 16,17 which may be reflected in mortality differences across income and marital status groups. Also, mortality trends and differences by income, marital status and living arrangements tend to be especially pronounced for behaviour-related i.e. accidental, violent and alcohol-and smoking-related causes of death. 9,11,14,27 Furthermore, a causal link between socioeconomic and family characteristics cannot be excluded: a lack of socioeconomic resources may reduce the chances of union formation, 34 while certain family transitions, such as divorce, may entail a loss of income. 35 In recent decades, unmarried cohabitation, single parenting and living alone have become increasingly common, while the proportion of married couples and families with children has declined in many countries. [36][37][38] It is noteworthy that increases have taken place in living arrangements associated with poorer health and higher mortality, and in some cases, lower socioeconomic status. 27,39 Whether recent changes in living arrangements have implications for socioeconomic mortality differences depends on whether selection into different living arrangements or the strength of the association between household characteristics and mortality have changed. If, e.g. living alone is increasingly concentrated among those with low income, this 'double burden' may increase mortality differences by income. Given the interrelations between living arrangements, socioeconomic status and health, it is important to consider how changes in living arrangements may be reflected in trends in socioeconomic mortality differences.
To our knowledge, there are no previous studies examining the contribution of living arrangements to income differences in life expectancy in the working-age population. This study had four specific aims: (i) to quantify changes in income differences in partial life expectancy between ages 35 and 64 during the past 30 years, (ii) to assess how the contribution of living arrangements to the differences between the highest and lowest income group has changed, (iii) to examine how life expectancy differences by income differ across living arrangement groups and (iv) to determine to what extent these differences were attributable to accidental, violent and alcoholrelated causes of death.
Methods
The data used in this study cover all persons aged 35-64 years residing in Finland between 1988 and 2017. Statistics Finland linked data from various registers to death records using personal identification codes. Register linkage was authorized by data protection authorities in Statistics Finland (permission TK-53-1490-18). The study population was followed for mortality in 1988-2017, stratifying personyears and deaths by sex, 5-year age group, income quartile and living arrangements measured at the end of the year prior to follow-up. Person-years and deaths were aggregated for six 5-year periods (1988-92 through 2013-17). Those reaching the age of 65 years during follow-up were censored at the end of the year, and those turning 35 years were included in the data from the next year onwards. Individuals who emigrated or were institutionalized were censored at the end of the year. Altogether, the dataset included 63.1 million person-years and 288 185 deaths.
Information on income was derived from the registers of the Finnish Tax Administration and the Social Insurance Institution. We used the annual sum of taxable income of all household members, consisting of wages, capital income and taxable income transfers. Household composition was taken into account by dividing the total household income by the square root of the number of household members. 40 Cut-off points for income quartiles were calculated within ages 35-64 separately for each year and sex.
Individuals' living arrangements were determined according to information on their marital and cohabitation status, family composition and household size. Seven groups were formed: married with/without children, cohabiting with/without children, single parent, living alone and other. Statistics Finland defined cohabiters as persons who were living in the same dwelling, unmarried, at least 18 years of age, of different sex, not siblings and had an age difference of no more than 15 years. The group of others covered adults living with their parents or someone else than their partner or children, and those whose living arrangements were unknown.
Directly age-standardized mortality rates were calculated using the total male and female population aged 35-64 years in 1988-92 and 2013-17 as the standard population. Age-specific mortality rates by 5-year age group, sex, income quartile and living arrangements were used to obtain abridged period life tables with partial life expectancies between ages 35 and 64, and their confidence intervals, for each income quartile, 5-year period and living arrangement group. Life expectancy differences between the highest and lowest income quartile were decomposed by cause of death in each living arrangement group using Arriaga's method. 41,42 Causes of death were classified as either internal or external using the harmonized cause-ofdeath classification of Statistics Finland, based on the International Classification of Diseases (ICD). The external category encompassed accidents, violence and suicides, and alcohol-attributable diseases and poisoning (see table 2 for specific ICD-10 codes). All other causes of death, as well as unknown causes (0.33%), were included in the internal category. To assess the contribution of living arrangements to differences in partial life expectancy between the highest and lowest income quartile, we calculated adjusted mortality rates for the lowest income quartile. This was done by weighting the agespecific mortality rates of the lowest income quartile so that the living arrangement distribution of each 5-year age group corresponded to that of the highest quartile.
Results
A clear income gradient in mortality was present for men and women in both the first and last period of follow-up, with particularly high death rates in the lowest income quartile (table 1). High mortality was also observed among those living alone and in 'other' living arrangements. There were notable changes in living arrangement distributions between periods, with the share of married persons living with children decreasing and the proportion of cohabiters and those living alone increasing.
Partial life expectancy between ages 35 and 64 increased in all income quartiles during the study period (figure 1). For men, the increase was largest in the lowest income quartile (1.6 years, vs. 0.9 years in the highest quartile), whereas for women, gains in life expectancy were similar in all income quartiles (0.4-0.5 years). Among both sexes, the gap between the highest and lowest income quartile widened until 2003-07, and narrowed thereafter. As a result, the 1-year gap among women remained, while the difference decreased from 3.0 to 2.3 years among men during the 30-year follow-up. The contribution of living arrangements to the life expectancy gap was mostly stable throughout the study period. Adjusting for living arrangements explained 36-39% and 15-23% of the difference between the top and bottom quartiles among men and women, respectively (figure 2).
Partial life expectancy increased in all living arrangement groups in both the highest and lowest income quartile during the study period (table 2). Income differences in partial life expectancy were smaller among women than men in each living arrangement group. Differences between the highest and lowest quartile were greatest among those living alone and cohabiting without children. The gap in life expectancy narrowed the most in these living arrangement groups due to substantial life expectancy increases in the lowest income quartile. By 2013-17, life expectancy differences among men living alone and cohabiting without children had decreased to 3.2 and 2.1 years, respectively. The corresponding differences among women were 1.6 and 1.7 years. There were only minor changes among women in other living arrangement groups. Among men, increases in life expectancy were greater in the lowest income quartile in all living arrangement groups. Those married with children showed the smallest income differences in life expectancy among men in both periods, matched by cohabiting men with children in 2013-17, with respective differences of 0.5 and 0.6 years. Among women, the gap in life expectancy was smallest among those living with children, whether married, cohabiting or single parents, at 0.3, 0.4 and 0.4 years, respectively, in 2013-17. Among both sexes, life expectancy differences between living arrangement groups were much smaller within the highest income group, further converging during the study period.
The contribution of accidental, violent and alcohol-related causes of death to income differences in life expectancy remained stable between the first and last period at 47-48% among men and 34% among women (table 2). However, the contribution of external causes varied between 21% and 63% across living arrangement groups, so that these causes tended to contribute more among groups with larger life expectancy differences by income. Living with children was associated with smaller contributions of external causes of death.
Principal findings
Income differences in partial life expectancy between ages 35 and 64 increased from 1988-92 to 2003-07, followed by a decrease from 2003-07 to 2013-17. For men, these developments resulted in a net decrease in life expectancy differences, while differences remained largely unchanged for women. Despite changes in the distribution of living arrangements, their contribution to life expectancy differences by income remained stable throughout the study period. Living arrangements explained 36-39% and 15-23% of the life expectancy gap between the highest and lowest income quartile among men and women, respectively.
The life expectancy gap between the highest and lowest income quartile was largest among cohabiters without children and those living alone, but also narrowed the most in these groups. The gap was smallest among those living with children, whether married, cohabiting or single parents, especially among women. By 2013-17, life expectancy differences between living arrangement groups had almost disappeared in the highest income quartile. The contribution of external causes of death to income differences in life expectancy remained stable during the study period at 47-48% among men and 34% among women. This contribution was greater among those living without children.
Interpretation of the results
Increasing income differences in life expectancy have been documented in several high-income countries in recent decades.
Substantial increases have been reported in the USA, 1,43 while more moderate ones have been observed in Canada, 44 Denmark 43 and Norway. 2 These results mostly apply to differences in life expectancy at all adult ages, although increasing inequalities have been observed especially among the working-age population. 9 In contrast to these prior findings, we observed a decrease in life expectancy differences by income from 2003-07 to 2013-17. Existing evidence suggests that this decline originates from a reduction in alcoholrelated mortality in the lowest income group, particularly among men. 45 Previous studies on socioeconomic mortality differences have rarely considered the contribution of living arrangements, although family and household characteristics are known to be socially patterned and to predict mortality outcomes especially in the workingage population. 14,46,47 Our analyses revealed that life expectancy differences by income tended to be greatest in living arrangement groups associated with lower life expectancy, such as living alone or cohabiting without children. The magnitude of these differences implied a significant life expectancy disadvantage, a 'double burden', for persons living alone or cohabiting without children in the lowest income quartile. Despite the largest relative disadvantage, life expectancy differences by income were reduced the most in these living arrangement groups during the study period. These changes may be associated with relative improvements in the status of these groups. Living alone and cohabiting without children increased notably during the study period and, with the exception of women living alone, these groups also experienced the greatest absolute gains in life expectancy.
We observed substantial attenuations in life expectancy differences by income after adjusting for living arrangements. These attenuations reflect the fact that persons with lower income tend to live in less favourable living arrangements more often, e.g. are more likely to be unmarried and live alone (Supplementary table S1). Our results suggest that living with children may have mortality-protective effects among the working-age population, which is in line with previous findings. 27,29,30 Furthermore, this advantage seems to extend to single parents, especially single mothers, despite previous studies indicating poorer health among them. 40,48,49 Life expectancy differences by income were smaller among those living with children, as was the contribution of accidental, violent and alcohol-related Table 1 Age-adjusted mortality rates and distribution of person-years by income and living arrangements among men and women aged 35-64 in 1988- The 95% confidence intervals were calculated using Chiang's I method. For some of the highest quartile groups with no deaths within 5-year age group, the variance of the conditional probability of death was calculated assuming one death in the age group. These 95% CIs are in italics. a: ICD-10 codes F10, G312, G4051, G621, G721, I426, K292, K70, K860, K8600, O354, P043, X45, V01-X44, X46-X59, X85-Y89, X60-X84 and Y870.
causes of death to these disparities. These findings suggest that parenthood may be particularly protective among those with low income. Life expectancy differences by income, as well as the contribution of living arrangements to these differences, were greater among men than women. Men in the lowest income quartile are more likely than their higher-income or female counterparts to live in unfavourable arrangements with respect to mortality, as living without a partner is more common among those with lower income, and child custody mainly remains a female responsibility. In 2013-17, 42% of men and 33% of women in the lowest income quartile were living alone, whereas 24% of women and only 4% of men in the lowest income quartile were living in single parent households (Supplementary table S1). The notion that similar causal or selection mechanisms may underlie mortality differences by income and living arrangements is supported by our results on the contribution of accidental, violent and alcohol-related causes of death to income differences in life expectancy. The excess mortality of the lowest income group relative to the highest one was explained by external causes of death to a greater extent in living arrangements associated with a lower life expectancy and greater life expectancy differences by income. However, these results do not allow us to disentangle causal and selection effects to determine to which extent mortality differences are due to the influence of socioeconomic and household characteristics on health behaviours, or behaviour-related selection into income and living arrangement groups. In addition, more upstream social determinants of mortality, such as education and occupational class, as well as relationship trajectories preceding current living arrangements are likely underlying factors behind our findings. We encourage future studies to investigate the contribution of these determinants to income and living arrangement differences in mortality across the life course.
Methodological considerations
The register data used in the study provide reliable measurement free of self-report bias, covering the entire Finnish population over three decades with virtually no loss to follow-up and avoiding problems related to self-reported income. Individuals with zero income comprised only 0.3% of the study population in the first and last 5-year period, as we measured income at the household level, including taxable social security benefits. As an indicator of socioeconomic status, income quantiles allow for keeping the proportions of groups constant over time and are therefore well suited for the study of temporal changes in mortality. To test the sensitivity of our results to the categorization of income, alternative analyses were performed using income quintiles (Supplementary table S2). The results were highly consistent with those obtained with income quartiles, with life Statistics Finland's classification of cohabitation used in the study inevitably results in the misclassification of some individuals, since the definition does not cover couples with a large age difference or of the same sex, but might classify persons not living in a relationship, such as flatmates, as cohabiting partners. However, this is unlikely to considerably affect the results, as the share of persons classified as cohabiters resembles that obtained from nationally representative survey data in 2017, although survey estimates were somewhat higher among women aged 30-39 and men aged 40-49. 50
Conclusions
Income differences in partial life expectancy in working age (35-64 years) increased until 2003-07, but decreased towards 2013-17. The contribution of living arrangements to differences between the top and bottom income quartiles was substantial and remained stable throughout the study period. Differences by income were greater among those living in households without children, particularly men living alone, and were largely related to accidental, violent and alcohol-related causes of death. Our results on the importance of these causes of death imply that both the persistent income gradient in mortality as well as the mortality disparities by living arrangements are at least partially related to similar selection or causal mechanisms.
Supplementary data
Supplementary data are available at EURPUB online.
Funding
This work was supported by grants 308247 and 345219 from the Academy of Finland and grant 101019329 from the European Research Council.
|
2022-11-16T06:16:51.056Z
|
2022-11-15T00:00:00.000
|
{
"year": 2022,
"sha1": "ab98b961a56cbec62559f9ecbfc86fec26224175",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/eurpub/article-pdf/33/1/13/49088920/ckac159.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "79e5a2ce067aad8933f2f20f071806dd6c0745ab",
"s2fieldsofstudy": [
"Medicine",
"Economics",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
208384181
|
pes2o/s2orc
|
v3-fos-license
|
A meta-analysis of the safety and efficacy of bosentan therapy combined with prostacyclin analogues or phosphodiesterase type-5 inhibitors for pulmonary arterial hypertension
Bosentan is an effective drug for the treatment of pulmonary arterial hypertension (PAH). The aim of the present meta-analysis was to examine the evidence concerning the efficacy and safety of bosentan therapy combined with prostacyclin analogues or phosphodiesterase type 5 (PDE-5) inhibitors for treating PAH. Eligible published studies were collected from Embase, PubMed, The Cochrane Library and the www.clinicaltrials.gov website. Heterogeneity was assessed using the Cochran Q-statistic test. Results were presented as risk ratios or mean differences with 95% confidence intervals (CI). A total of five studies, comprising 310 patients were included for analysis. No significant improvements in six-minute walk distance (6MWD; mean difference, 16.43 m), clinical worsening (risk ratio, 0.54) and the World Health Organization functional classification (class I: risk ratio, 1.17; class II: risk ratio, 1.18) were observed in patients treated with bosentan in combination with prostacyclin analogues or PDE-5 inhibitors. However, a significant reduction in the mean pulmonary artery pressure (mPAP; 95% CI: −17.06, −6.83; P<0.0001) following bosentan combination therapy was observed. Comparisons of adverse event rates in the bosentan combination therapy (55.6%) and monotherapy (51.8%) suggested that there is no reduction in adverse events (risk ratio, 1.10). The results indicated that bosentan combined with prostacyclin analogues or PDE-5 inhibitors may not improve 6MWD, cardiac function, clinical worsening and adverse events. However, bosentan combined with prostacyclin analogues or PDE-5 inhibitor therapy was able to significantly reduce mPAP compared with the effect of bosentan monotherapy.
Introduction
Pulmonary arterial hypertension (PAH) is a progressive disease associated with a massive increase in pulmonary vascular resistance and pulmonary artery pressure (PAP). PAH is a rare disease with an incidence of approximately 2.4-7.6 cases per million (1), which may lead to fatal right heart failure in the absence of appropriate treatment. The pathogenesis of PAH has not been fully elucidated; however, dysfunction of three metabolic/physiological pathways, including the endothelin pathway, the prostacyclin pathway and the nitric oxide pathway, have been attributed to the pathogenesis of PAH (2,3). Targeting of these pathways had been rationally exploited for the discovery of chemotherapeutics against PAH. For example, prostacyclin analogues, phosphodiesterase type 5 (PDE-5) inhibitors and endothelin receptor antagonists (ERAs) are drugs that are commonly used for the treatment of PAH (4). These drugs relieve symptoms, raise exercise capacity and improve hemodynamics. However, the efficacies of these commonly used drugs in delaying the progression of the disease are limited. Owing to these limitations, the current treatment options for PAH are not satisfactory. Combination therapies exist, where two or three drugs aimed at different pathways, such as ERAs, prostacyclin analogues and PDE-5 inhibitors are simultaneously used (5). Previous studies had indicated that combination therapy significantly improved activity tolerance, hemodynamic parameters, clinical deterioration time and quality of life for patients with PAH (6)(7)(8).
Results from a previous meta-analysis suggest that combination therapies only offered a modest increase in exercise ability (9). The evidence to support these treatment options is limited. Bosentan, an ERA, serves a crucial role in proliferation inhibition, improvement of endothelial function and expansion of pulmonary vessels (10). ERA treatment significantly improved the activity tolerance and exercise capacity of PAH A meta-analysis of the safety and efficacy of bosentan therapy combined with prostacyclin analogues or phosphodiesterase type-5 inhibitors for pulmonary arterial hypertension patients as well as prolonging the survival time (11); however, PAH remains a progressive disease with a high mortality rate (12)(13)(14)(15). The mortality rate of pulmonary hypertension in the United States was about 4.5-12.3/10 million in 2015 (16). In order to achieve long-term efficacy, combined therapy has been widely used in clinical practice. However, only a few randomized controlled trials are available regarding the efficacy and safety of combining bosentan with prostacyclin analogues or PDE-5 inhibitors, and there is limited evidence to support the superior effects of bosentan combination therapy over monotherapy (17). The present meta-analysis focused on providing an improved analysis of bosentan combination for PAH treatment, and laying a theoretical foundation for the development of other treatment strategies in the future. Bosentan was the first oral PAH targeted drug in 2002 (18). Subsequently, a number of multi-center, randomized controlled clinical trials have been published to confirm its efficacy in controlling pulmonary hypertension (19,20). Bosentan was approved in China for the treatment of PAH in 2006 and was permitted for use as class I drug in 2015, according to European Society of Cardiology-European Respiratory Society Guidelines on Pulmonary Hypertension (21). The evidence-based medicine of bosentan combined with prostacyclin analogues or PDE-5 inhibitors is lacking. Therefore, it is necessary to conduct meta-analyses of randomized controlled trials to evaluate the effects of bosentan combined with prostacyclin analogues or PDE-5 inhibitors for the treatment of PAH. The present meta-analysis may provide evidence of the efficacy and safety of bosentan therapy combined with prostacyclin analogues or PDE-5 inhibitors.
Materials and methods
Study inclusion and exclusion criteria. The following criteria was used for selecting previous studies to analyse: i) Only randomized controlled trials (RCT) that combined bosentan with prostacyclin analogues or PDE-5 inhibitors for the treatment of PAH were included; ii) studies in which the control group was treated with bosentan or placebo were included (bosentan monotherapy), and follow-up time in the study was ≥12 weeks.; iii) studies of the bosentan treatment within 3 months prior to randomization were included. Efficacy indicator of primary endpoint was indicated as six-minute walk distance (6MWD), and adverse events were examined to evaluate safety.
Literature search. RCT of bosentan combination therapy vs. bosentan monotherapy for treatment of PAH were searched from PubMed (www.ncbi. nlm.nih.gov/pubmed), Embase (www.embase.com), and the Cochrane library (www.cochrane library. com). The following keywords were used for searching the relevant trial studies included in this meta-analysis: ʻPhosphodiesterase type 5 inhibitor̓ or ʻPDE-5 inhibitor̓ or ʻsildenafil̓ or ʻtadalafil̓ or ʻvardenafil̓ or ʻprostacyclin analogs̓ or ʻepoprostenol̓ or ʻiloprost̓ or ʻtreprostinil̓ paired with ʻpulmonary arterial hypertension̓ or ʻPAH̓ and ʻbosentan̓.
Quality assessment. The selected studies were also assessed for the quality of trials using the Cochrane Collaboration recommended tool for assessing risk of bias (22). This tool included the domains of selection bias (random sequence generation and allocation concealment), performance bias (blinding of participants and personnel), detection bias (blinding of outcome assessment), attrition bias (incomplete outcome data), reporting bias (selective reporting) and other sources of bias. The ʻrisk of biasʼ assessment tool was used to further review bias among individual studies (22).
Statistical analysis. Results are represented as risk ratios for dichotomous data and mean differences for continuous data with 95% confidence intervals (CI). Statistical heterogeneity across studies was tested using Cochran's Q test. The fixed effects model was selected for analysis when no significant heterogeneity between the studies was found (P>0.10; I 2 ≤50%). Alternatively, the random effects model was used in case heterogeneity among studies. The RevMan software package (version 5.2; Cochrane) was used for all statistical analyses.
Results
Study characteristics. In the present meta-analysis, one RCT (PHIRST-1: Tadalafil in the Treatment of Pulmonary Arterial Hypertension) was reported twice (23,24). The trials consisted of patients with congenital heart disease-associated PAH (25) and Eisenmenger's syndrome (26). The subjects of six studies were treated with ERA or PDE-5 inhibitor as background treatment (27-32); therefore, these were excluded from the analysis.
Overall, a total of 310 subjects in 5 RCTs were included in this analysis ( Fig. 1 and Table I).
Data quality. The quality of the five studies was assessed and the risk of bias was estimated by Cochrane Collaboration's tool. The results were shown in Fig. 2. The majority of the included studies had a low risk of bias according to the following criteria: Selection bias, performance bias, detection bias, attrition bias, reporting bias and other sources of bias.
Meta-analysis results. 6MWD was used as an indicator of exercise ability in all the five trials included in the present study. Compared with bosentan monotherapy, four of the five studies in bosentan combination therapy reported a significant improvement in walking distance (Fig. 3). The mean difference of 6MWD in bosentan combination therapy was 16.43 m (95% CI: -4.91, 37.76), but there was no statistical significance between bosentan combination and bosentan monotherapy (P= 0.13). No significant heterogeneity (I 2 =0%; P=0.81) was detected in bosentan combination therapy compared with bosentan monotherapy. Cardiac functional improvement was also one of the efficacy indicators of this meta-analysis. The New York Heart Association (NYHA) and the World Health Organization (WHO) functional classification systems were used to identify functional impairment in PAH. McLaughlin et al (33) and Hoeper et al (34) performed their studies using the NYHA functional classification, the remaining three studies were performed using the WHO functional classification (23,35,36). After meta-analysis, the result showed that there was significant heterogeneity (I 2 =73%; P=0.02) in WHO functional class improvement I between bosentan combination therapy and bosentan monotherapy (Fig. 4). The random effects model was used for the analysis. Functional class improvement I from baseline to endpoint of study was indicated to be 18% (18/100) in bosentan combination therapy and 17% (18/105) in bosentan monotherapy (Fig. 4A). The WHO functional class improvement II from baseline to endpoint of study was 4% (4/100) in bosentan combination therapy and 2.9% (3/105) in bosentan monotherapy, without significant heterogeneity (I 2 =0%; P=0.44) (Fig. 4B). Therefore, functional class improvements I and II exhibited no significant difference between the bosentan combination and monotherapy groups (P>0.05).
Two of the five trial studies reported the effects of bosentan combination therapy on mean PAP (mPAP; Fig. 5) (33,35).
The difference of mPAP demonstrated an average of only 11.95 mmHg (95% CI: -17.06, -6.83; P<0.00001) between bosentan combination therapy and monotherapy, and there was no heterogeneity between the groups (I 2 =6%; P=0.30). These data suggested that combination therapy may significantly reduce mPAP.
One study did not include any data of clinical worsening (35) The clinical worsening rate in combination therapy was 5.5% (8/145) compared with that of monotherapy of 10.5% (16/152). The heterogeneity between the groups was found to be non-significant (I 2 =13%; P=0.33). Clinical worsening incidence in the combination therapy was below that of monotherapy (risk ratio, 0.54; 95% CI: 0.25, 1.20), but without statistical significance (P=0.13; Fig. 6). 6MWD, 6-min walk distance; BIPH, bosentan with iloprost in the treatment of pulmonary hypertension patients; COMBI, combination therapy of bosentan and aerosolised iloprost in idiopathic pulmonary arterial hypertension; CT, bosentan combination therapy; MT, bosentan monotherapy; PHIRST, pulmonary arterial hypertension and response to tadalafil; STEP, safety and pilot efficacy trial in combination with bosentan for evaluation in pulmonary arterial hypertension. All of the five trials described adverse events, but in one study, detailed data on adverse events was not provided (23). These adverse events mainly included headaches, coughing, flushing, chest pains, nausea, dizziness and diarrhea. A total of 71 events (51.8%; n=137) were reported in the monotherapy group, whereas 75 adverse events (55.6%, n=135) were reported in the combination therapy group (Fig. 7). The risk ratio of adverse events between combination and monotherapy was 1.1 (95% CI: 0.91, 1.32). However, the difference between the groups was not statistically significant (P= 0.33). Thus, the incidence of adverse events was not significantly different between the bosentan combination therapy and the monotherapy groups.
Discussion
For the present meta-analysis, rigorous selection criteria were applied. Studies of the bosentan treatment within 3 months prior to randomization, and studies in which the control group was treated with bosentan or placebo were included. These criteria resulted in only five studies that were included in this analysis, which comprised a total of 310 subjects. The present meta-analysis referred to the outcomes of previous studies of combination therapy, and also formed the basis on the safety and efficacy of combining bosentan with prostacyclin analogues or PDE-5 inhibitors.
The results from the present meta-analysis demonstrated that bosentan combined with prostacyclin analogues or PDE-5 inhibitors was superior to the bosentan monotherapy in reducing mPAP by 11.95 mmHg. However, compared with bosentan monotherapy, bosentan combined with prostacyclin analogues or PDE-5 inhibitors did not improve exercise capacity, cardiac function or clinical worsening in PAH. Notably 5.5% of the patients in the combination therapy developed clinical worsening compared with 10.5% in monotherapy. The clinical worsening rate was significantly Figure 6. Effect of bosentan combined with prostacyclin analogues or phosphodiesterase type 5 inhibitors vs. bosentan monotherapy on clinical worsening. The heterogeneity between the groups was found to be non-significant. Clinical worsening incidence in the combination therapy was below that of monotherapy, but without statistical significance (P>0.05). CI, confidence intervals; CT, combination therapy; M-H, Mantel-Haenszel; MT, monotherapy. Figure 7. Effect of bosentan combined with prostacyclin analogues or phosphodiesterase type 5 inhibitors vs. bosentan monotherapy on adverse events. The incidence of adverse events was not significantly different between the bosentan combination therapy and the monotherapy groups (P>0.05). CI, confidence intervals; CT, combination therapy; M-H, Mantel-Haenszel; MT, monotherapy. Figure 5. Effect of bosentan combined with prostacyclin analogues or phosphodiesterase type 5 inhibitors vs. bosentan monotherapy on mean pulmonary artery pressure. Compared with bosentan monotherapy, combination therapy may significantly reduce mPAP (P<0.05). CI, confidence intervals; CT, combination therapy; IV, inverse variance; MT, monotherapy; SD, standard deviation; mPAP, mean pulmonary artery pressure. reduced in the bosentan combination therapy group although the curative effect of treatment was not significant. These data indicated that although bosentan therapy relieved the patient of the symptoms and clinical worsening, it still failed to prevent and slow the progression of PAH. The incidence of adverse events in bosentan combination therapy was similar to that of monotherapy, which suggested that bosentan combination therapy was safe for PAH patients.
Drug interaction is a problem that cannot be ignored in combination drug therapies (37). Although the combination of bosentan with sildenafil or tadalafil could lead to a decrease in plasma concentration of sildenafil and tadalafil, bosentan concentration was increased by inducing cytochrome P450 3A4 isoenzyme. However, the clinical significance of this interaction has not been well established. Currently, there is no evidence that interactions between bosentan and sildenafil decrease drug safety.
Since the treatment regimens among the five studies were different, the effect of bosentan combination therapy could not be harmonized. In some studies, treatment with prostacyclin analogues or PDE-5 inhibitors was initiated prior to treatment with bosentan, whereas in other studies patients were treated with bosentan for some time and then given prostacyclin analogues or PDE-5 inhibitors (27-32). It was difficult to determine which combination therapy regimen was most effective, and current published guidelines do not offer a specific recommended regimen. As the current findings may be limited by the relatively short duration (12 weeks) of the trials, it was not possible to determine the long-term efficacy and safety of bosentan combination therapy. Therefore, the true clinical features and progression of the disease in patients could not be determined. Given the limited number of studies included in the present analysis, the results should be confirmed through future research. A larger randomized controlled trial should be designed for future studies to adequately assess the efficacy and safety of bosentan combination therapy.
In conclusion, results from the present meta-analysis suggested that bosentan combined with prostacyclin analogues or PDE-5 inhibitors do not impart additional advantages for the improvement of the 6MWD, cardiac function, clinical worsening, and incidence of adverse events. However, bosentan combined with prostacyclin analogues or PDE-5 inhibitors may significantly reduce mPAP compared with bosentan monotherapy.
|
2019-10-31T08:56:23.700Z
|
2019-10-29T00:00:00.000
|
{
"year": 2019,
"sha1": "89d601cc5282045534cf67e63981debcbeb23e32",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/etm.2019.8142/download",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1ee78cbfcabbfda00f3ba381a6ca7f3a380b1b25",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
3304249
|
pes2o/s2orc
|
v3-fos-license
|
Matrix degradability controls multicellularity of 3D cell migration
A major challenge in tissue engineering is the development of materials that can support angiogenesis, wherein endothelial cells from existing vasculature invade the surrounding matrix to form new vascular structures. To identify material properties that impact angiogenesis, here we have developed an in vitro model whereby molded tubular channels inside a synthetic hydrogel are seeded with endothelial cells and subjected to chemokine gradients within a microfluidic device. To accomplish precision molding of hydrogels and successful integration with microfluidics, we developed a class of hydrogels that could be macromolded and micromolded with high shape and size fidelity by eliminating swelling after polymerization. Using this material, we demonstrate that matrix degradability switches three-dimensional endothelial cell invasion between two distinct modes: single-cell migration and the multicellular, strand-like invasion required for angiogenesis. The ability to incorporate these tunable hydrogels into geometrically constrained settings will enable a wide range of previously inaccessible biomedical applications.
S ynthetic hydrogels are widely used as models of the extracellular matrix (ECM) and have been instrumental in developing our understanding of how physical and adhesive properties of the ECM can regulate cell function [1][2][3] . Within organs, the architecture of the ECM organizes and segregates cell populations, and defines fluid vs. solid domains (e.g., vascular vs. stromal spaces). To recapitulate such features, whether for "onchip" models of organs or for engineered tissues for replacement therapies, synthetic hydrogels amenable to precision molding could aid in accurately defining architectures found in biological tissues. However, most existing synthetic hydrogel systems are difficult to precision mold due to the swelling of the material that occurs upon equilibrium hydration 4 . Such swelling not only alters the gross geometry and dimensions of the initially defined structure, but also changes the mechanical properties 5 and nanoscale structure of these materials.
Equilibrium swelling of hydrogels occurs when elastic retractive forces of the polymer network are balanced by attractive forces between the polymer chains and water, which increase with polymer hydrophilicity 4 . We hypothesized that increasing the hydrophobicity of the polymer backbone through the attachment of hydrophobic pendant side chains could modulate and possibly eliminate swelling, thereby generating hydrogels amenable to precision molding. We illustrate the utility of this system in several settings, and through integration with a microfluidic device, reveal that endothelial cell migration into the surrounding 3D matrix adopts two distinct modes as a function of matrix degradability.
Results
Design of non-swelling hydrogels via tuning hydrophobicity.
To develop a cytocompatible hydrogel with controlled swelling upon equilibrium hydration, we chose methacrylated dextran (DexMA) 6-8 as a base material that is biologically inert, in that it resists protein adsorption and has no known cell surface receptor binding activity (Fig. 1a). DexMA macromers were crosslinked through Michael-type addition with matrix metalloproteinase (MMP) labile dicysteine peptide sequences (Fig. 1a), a strategy that has been demonstrated with other backbones 9,10 . We speculated that increasing the hydrophobicity of the dextran polymer chains would result in a reproducible reduction in swelling of the resulting hydrogels, and that we could achieve this tuning through the attachment of increasing amounts of intrinsically hydrophobic methacrylates to the dextran backbone prior to crosslinking ( Supplementary Fig. 1). Indeed, when we gradually increased the level of methacrylate functionalization from 3 to 70% (at higher methacrylations, DexMA was no longer water soluble), hydrogel swelling (final gel volume normalized to initial volume) decreased from 55 to 0% (Fig. 1b, c and Supplementary Fig. 2). In addition to individual macromer hydrophobicity, we hypothesized that the aggregate hydrophobicity defined by mixtures of dextrans of varying hydrophobicities would also dictate swelling. Indeed, mixing DexMA with higher (11%) and lower (3%) levels of methacrylation at different ratios ( Fig. 1d) produced hydrogels that swelled according to the average methacrylation levels of the macromers. Thus, hydrogel swelling behavior appears to be defined by the overall hydrophobicity of the component mixture, even if the individual monomer units are distinct. An important outcome of the approach is that the degree of swelling is not affected by the amount of crosslinking introduced to the polymer networks, allowing swelling to be controlled independently of hydrogel stiffness (Fig. 1e).
Multi-scale molding applications for non-swelling hydrogels. Next, we demonstrated the ability of this non-swelling hydrogel to define and maintain precise geometric structures. Injection molding of DexMA recreated the complex macroscopic anatomic features of a femur (Fig. 2a). To examine whether geometric features even at the micrometer scale could be prescribed, we micromolded DexMA hydrogels into photolithographically defined molds of various geometries. The resultant microgels matched the micrometer scale dimensions and shapes of the molds from which they were cast (Fig. 2b). In addition to the generation of free-standing constructs with well-defined shapes and dimensions, non-swelling materials are critical for integrating hydrogels into multi-component devices, such as lab-on-chip systems. To illustrate, we show how non-swelling DexMA a DM18 DM11 DexMA hydrogels with different degrees of methacrylation (label indicates % methacrylation), corresponding to the magneta-labelled data points in c after equilibrium swelling. Initial dimensions immediately following crosslinking are outlined by dotted lines (scale bar, 500 μm). c Swelling (% change in gel diameter relative to initial diameter before equilibrium hydration) as a function of dextran backbone methacrylation. d Swelling of UV-crosslinked hydrogels made of DexMA with 3 and 11% methacrylation mixed at various ratios. e Swelling as a function of the mechanical properties (Young's modulus in kPa) of the gels. All data is presented as a mean ± s.d hydrogels can be cast to form well-defined microfluidic channels within a PDMS casing that integrates microfluidic ports (Fig. 2d).
To examine the impact of swelling on the fidelity of precision molding, we generated channel structures inside dextran and non-dextran-based hydrogels of varying hydrophobicities. Results show that the channel diameters shrink upon equilibrium hydration as a function of hydrogel swelling, and only nonswelling DexMA gels are able to maintain the predetermined channel diameter (Fig. 2e, f). While the impact of swelling on simple cylindrical channels is merely changes in channel diameter, faithful fabrication of more complex features such as channels with sharp turns requires non-swelling materials (Fig. 2c).
Integrating hydrogels with microfluidics to study angiogenesis. This ability to integrate non-swelling DexMA hydrogels into microfluidic devices, together with the ability to incorporate other bioactive features essential to controlling cell behavior (e.g., cell adhesiveness and tunable mechanical properties ( Supplementary Fig. 3a, b)), provides an ideal system for investigating how matrix properties regulate cellular behaviors observed most readily in microfluidic systems. To explore this utility, we incorporated these gels into a previously developed microfluidic platform used to recapitulate angiogenesis, as understanding how biomaterial properties can be tuned to promote angiogenesis would enable numerous regenerative medicine applications 11 . In this system, angiogenic sprouting occurs from a perfused endothelial celllined channel triggered by pro-angiogenic soluble gradients across a collagen matrix 12 (Fig. 3a); earlier attempts to incorporate synthetic hydrogels were thwarted by hydrogel swelling and subsequent channel collapse (Fig. 2e, f). To generate the endothelial cell-lined channel, one cylindrical channel passing through a non-swelling MMP-degradable DexMA gel (Fig. 3a, b) was seeded with endothelial cells to form a confluent endothelium serving as a parent vessel (Fig. 3c). Introducing a cocktail of angiogenic growth factors in a second parallel channel under continuous flow created a chemoattractive gradient (Supplementary Fig. 4) and triggered multicellular sprouting. We then used this system to ask whether the sprouting response was affected by changing the stiffness of the matrix, as matrix stiffness has previously been described to modulate a number of cell functions including proliferation, differentiation, and migration [13][14][15] . Holding the degree of dextran methacrylation constant (70% of repeat units contained a methacrylate group), we tuned hydrogel stiffness via the concentration of MMP labile dicysteine peptides, thereby modulating the number of resulting crosslinks within the hydrogel network (Fig. 3d). Despite increased hydrophobicity via methacrylation, these gels did not exhibit nonspecific cell adhesion over the full range of crosslinking explored in these studies, as serum-exposed gels did not support cell adhesion without RGD coupling. Increased stiffness not only decreased sprout length (Fig. 3e, f), but also altered the morphology of the leading tip cell, from very open, branched structures with long filopodia in lightly crosslinked matrices, to narrow branches with short and spiky filopodia in highly crosslinked DexMA gels (Fig. 3g).
Matrix crosslinking influences angiogenic sprouting. While increasing crosslinking density would be expected to limit the extent of invasion, an unexpected change occurred in the critical a Replica-molding of DexMA to duplicate the gross anatomical features of a bone. A negative mold of the bone was generated in PDMS, filled with DexMA gel precursor solution, and UV crosslinked to yield the bone replicate (scale bar, 1 cm). b DexMA micro-gels were similarly fabricated by replica-molding using PDMS molds generated by traditional SU8 photolithography (scale bars, 100 μm). c Patent micron-scale fluidic channels embedded within non-swelling DexMA gels. Sacrificial channel structures were micromolded in gelatin as in ref. 32 , DexMA gels were cast on top, and gelatin was dissolved at 37°C to yield open channels. Brightfield and fluorescence image demonstrating channel perfusion by red and green fluorescent beads (scale bar, 500 μm). d Schematic depicting fabrication of tubular channels embedded within 3D hydrogels. e Images of channels immediately after needle removal, and following equilibrium hydration. Swelling hydrogels (DexMA3, DexMA5, MeHA, PEG) all result in channel closure in contrast to non-swelling DexMA11. Scale bar, 100 µm. f Degree of channel closure (% of initial channel width) as a function of hydrogel swelling (defined as % of initial gel diameter) NATURE COMMUNICATIONS | DOI: 10.1038/s41467-017-00418-6 ARTICLE NATURE COMMUNICATIONS | 8: 371 | DOI: 10.1038/s41467-017-00418-6 | www.nature.com/naturecommunications ability of cells to sprout collectively as multicellular strands. We observed a switch from highly collective multicellular sprouting in gels with intermediate crosslinking, to less coordinated migration involving single or small groups of cells invading in lightly crosslinked gels. However, given the substantially deeper invasion of cells in lightly crosslinked gels that may lead to variations in chemokine concentrations experienced by the tip cells, it was difficult to directly attribute changes in sprout multicellularity to changes in matrix properties per se. That is, the loss of connectivity among endothelial cells was most dramatic in the samples where cells invaded to the greatest extent, suggesting the possibility that the slower invading cells in stiffer gels would dissociate when given additional time to invade further. To address this possibility, we repeated the study but fixed samples at different time points when cells and sprouts reached the same invasion depth (Fig. 4a). Even when controlling for invasion depth, cells primarily invaded alone into matrices of low , 50 μm). All data are presented as a mean ± s.d., *P < 0.05 crosslinking density, whereas intermediate crosslinking densities gave rise to multicellular sprouts and a higher cell density (Fig. 4b-d and Supplementary Figs. 5 and 6). Interestingly, allowing sufficient time for cells to invade substantially into highly crosslinked hydrogels again revealed single-cell migration, showing that the collective mode of invasion appears to be biphasic with respect to matrix crosslinking density (Fig. 4b-d and Supplementary Figs. 5 and 6). To exclude the possibility that varying crosslinking density alters the chemokine gradient profile, which could directly alter sprout morphology, we characterized the diffusivity and hydraulic permeability of hydrogels with low and intermediate crosslinking and found comparable values (Supplementary Table 1). In addition, we eliminated the possibility that cell proliferation contributed to multicellular sprout formation, as proliferation determined by EdU incorporation was equivalently negligible in all hydrogel conditions ( Supplementary Fig. 8).
ECM degradability dictates angiogenic sprout multicellularity. In the context of 3D culture, changes in crosslinking density not only impact matrix stiffness, but also how rapidly cell-mediated degradation of the hydrogel yields open space required for cell spreading, migration, and angiogenic sprouting. Hydrogel degradability, or the rate at which cells solubilize a given volume of hydrogel, is a function of the susceptibility of crosslinks to proteolytic cleavage and the number of crosslinks present. Thus, although one possibility is that the stiffness of the surrounding matrix influenced cell-cell adhesion 16 triggering the switch between collective and single cell migration observed in our studies, another explanation is that the higher degradation rate of gels at lower crosslinking densities accelerated cell migration and enabled cells to break cell-cell junctions during migration. To tease apart the relative contributions of matrix stiffness vs. gel degradation rate in the observed differences in migration mode, we prepared hydrogels with a stiffness of 1000 Pa (a value correlated with single-cell migration in earlier studies) while modulating the susceptibility of the crosslinker sequence to MMP cleavage in order to modulate the degradation rate of the gel. This was achieved by replacing the standard sequence taken from the cleavage site of natural collagen 17 (termed NCD for native collagen degradability) used in the above studies with a similar sequence containing a single amino acid mismatch that lowers MMP binding affinity 17 (termed LD for lower degradability sequence) (Supplementary Fig. 3c). As anticipated, lowering crosslink susceptibility to MMP cleavage reduced the invasion speed of the sprouts (Supplementary Fig. 7). Strikingly, the transition to non-collective migration that occurs in lightly crosslinked matrices using NCD was reversed when using the LD crosslinker sequence. That is, collective invasion was rescued by lowering the rate of degradation of the hydrogel, suggesting that the mechanism by which collective migration is lost or maintained is due to matrix degradability rather than matrix stiffness (Fig. 5a, b).
To provide further support for this mechanism, we confirmed that rapid MMP-mediated degradation of the surrounding hydrogel matrix caused cells to invade as single cells. Rather than vary the crosslinker sequence, which could potentially introduce unintended biological activity in cellular interactions, we directly curbed enzymatic activity of cellular MMPs through pharmacologic means. Exposure to Marimastat, a broad spectrum MMP inhibitor, slowed cell invasion, and again rescued multicellular sprout formation in soft gels crosslinked with the NCD sequence, confirming the important interaction between cell-generated proteases and hydrogel degradation rate in multicellular invasion and sprout morphogenesis (Fig. 5c, d). Lastly, to determine whether invasion speed itself was a critical determinant in this response, we lowered the concentration of chemoattractant in the angiogenic growth factor cocktail to slow cell invasion. Halving the concentration of sphingosine-1phosphate (S1P, 125 nM) directly slowed invasion speed without altering the hydrogel matrix or cellular MMP activity, and led to an enhancement in the number of multicellular sprouts invading into NCD-crosslinked gels as compared to the standard S1P concentration (250 nM) (Fig. 5e, f). Taken together, these studies provide evidence for a relationship between matrix crosslinking density and degradability on the one hand and the degradative activity of cell-produced MMPs on the other, the balance of which regulates 3D invasion speed and toggles cells between single and multicellular modes of migration.
Discussion
Numerous studies using natural matrices such as fibrin or collagen have suggested that the physical properties of the ECM can regulate angiogenic sprouting. Specifically, matrix density 18,19 , ligand density, 20 and matrix stiffness 21,22 have been suggested to be important parameters influencing angiogenic invasion. However, because these matrix properties are intrinsically coupled in natural ECMs, it is difficult to isolate the relative contribution of any one of these factors. Using a synthetic hydrogel to tune these properties orthogonally, we found that matrix crosslinking plays a critical role in modulating the extent, morphology, and even multicellularity of cell invasion. Crosslinking of synthetic gels has classically been used to vary the stiffness of a 2D substrate on which cells are seeded; this stiffness has been shown to dramatically impact cell spreading, proliferation, migration and differentiation 13,14 . However, in 3D settings such as those investigated here, the degree of crosslinking alters not only matrix stiffness but also its degradability-the rate at which cells can carve space out of the matrix. Degradability has previously been shown to affect the ability of fully encapsulated single cells to spread into the matrix 23,24 . By tuning degradability independently from matrix stiffness, we reveal here that ECM degradability is a key regulator of the collective nature of multicellular invasion. Multicellular strand-like cell migration is critical to not only the formation of functional blood vessels, but also to the creation of other numerous developmental structures. In addition, it is a fundamental process co-opted by disease processes, for example during cancer metastasis 25,26 .
A hydrogel system where shape and dimensions can be defined upon initial crosslinking without subsequent swelling should have broad utility. While defined feature sizes and geometries have many potential uses, we have provided some examples in geometrically defined microgels, anatomically shaped tissue engineered constructs, and the incorporation of hydrogels into microfluidic devices. Some other classes of gels such as those based on natural components like collagen and fibrin are also non-swelling and could potentially be used for molding applications. However, their fibrous nature renders them structurally and mechanically more complex and further, their properties are more difficult to modulate orthogonally. Thus, such gels are difficult to use in studies that require control over individual matrix properties. Recently, several alternative strategies designing non-swelling hydrogels have been introduced including the use of lower polymer content 27 or hydrophilic and thermoresponsive polymer building blocks exhibiting swelling and shrinking properties that oppose each other 5 . However, since these systems are based on tetra-armed poly(ethylene glycol) units with no additional reactive groups along the polymer backbone, it is more challenging to modify these hydrogels with cell adhesive functionalities without having to adjust conditions to maintain constant crosslinking. In our approach, we make use of a sugar-based polymer with ample reactive groups per repeat unit to independently tune hydrogel swelling, matrix mechanics, degradability, and ligand density. The concept of tuning hydrophobicity to modulate swelling could in principle be extended to many other polymer systems, enabling a wide range of applications where swelling has historically been a limitation.
Methods
Reagents. All reagents were purchased from Sigma Aldrich and used as received, unless otherwise stated.
Synthesis of DexMA. Dextran (MP Biomedicals, MW 86,000 Da) was modified with methacrylate groups, as previously described 6 . In brief, dextran (20 g) and 4-dimethylaminopyridine (2 g) were dissolved in 100 mL anhydrous dimethyl sulfoxide, and varying amounts of glycidyl methacrylate (GMA) were added under vigorous stirring. The mixture was heated to 45°C and allowed to react for 24 h. The solution was then cooled on ice and precipitated into 1 L ice-cold 2-propanol. The crude product was recovered by centrifugation, re-dissolved in milli-Q water and dialyzed against milli-Q water for 3 days with two solvent exchanges daily. Finally, the solution was lyophilized to obtain the pure product, which was characterized by 1 H NMR spectroscopy in D 2 O. The degree of functionalization was calculated as the ratio of the proton integral (6.174 and 5.713 ppm) and the anomeric proton of the glycopyranosyl ring (5.166 and 4.923 ppm). As the signal of the anomeric proton of α-1,3 linkages (5.166 ppm) partially overlaps with other protons, a pre-determined ratio of 4% α-1,3 linkages was assumed and the total anomeric proton integral was calculated solely based on the integral at 4.923 ppm. A methacrylate/dextran repeat unit ratio of 0.7 was determined.
The input amount of GMA determined the resultant degree of dextran methacrylation. The formulation used to fabricate hydrogels for all cell studies was functionalized with 1.5 molar equivalents (relative to dextran) GMA, resulting in an average of one methacrylate on 71% of all repeat units.
Preparation of non-cleavable DexMA hydrogels. For swelling studies, DexMA was dissolved at 10% w/v in PBS. 100 mg/mL Irgacure 2959 in ethanol was added to a final concentration of 0.2%. Solutions were mixed and photo-polymerized in poly(dimethylsiloxane) (PDMS, Sylgard 184, Dow Corning) molds using an Omnicure S2000 UV lamp (Exfo, Ontario, Canada) at 100 mW/cm 2 (measured at 365 nm). The molds were removed, and the gels were allowed to swell in PBS for at least 24 h. Food coloring was added to the solution to better visualize the gel outlines for quantification of swelling.
For cell studies, a mixture of DexMA (18% methacrylation, 6.3% w/v) and cyclo [RGDfK(C)] (cRGD, Peptides International) (0.55 mM) was prepared in M199 media (Gibco) containing sodium bicarbonate (3.5% w/v) and HEPES (10 mM). The pH was adjusted to 8 with 1 M sodium hydroxide (NaOH) solution, initiating the Michael addition reaction between methacrylate and cRGD cysteine functionalities. After 30 min, the solution was neutralized with 1 M HCl and Irgacure 2959 in ethanol was added to a final concentration of 0.02%. Precursor solutions were spread onto glass coverslips and photo-polymerized at 20 mW/cm 2 under argon for varying durations. Exposure times of 30, 40, 50, and 60 s yielded hydrogels with Young's moduli of 0.5, 1.5, 4.4, and 8.9 kPa, respectively.
Preparation of PEG and HA hydrogels. Poly(ethylene glycol) diacrylate (PEGDA, MW 6000 Da) was synthesized and hydrogels were photopolymerized following a previously published procedure 28 . Hyaluronic acid (HA) hydrogels were prepared from methacrylated HA with a degree of functionalization of 40% (relative to the number of repeat units) following a literature protocol 29 .
A solution of DexMA (71% methacrylation, 4.4% w/v) and CGRGDS (3 mM) was prepared in M199 media containing sodium bicarbonate (3.5% w/v) and HEPES (10 mM). The pH was adjusted to 8 with 1 M NaOH to couple CGRGDS to DexMA. After 30 min, varying amounts (17-44 mM) of crosslinker peptide NCD or LD were added, the pH was re-adjusted to 8, initiating the hydrogel formation. Concentrations of crosslinkers determined the stiffness of the hydrogel (Fig. 3d). Hydrogels were allowed to polymerize for 1 h.
For 3D cell encapsulation, cells were added to the gel precursor solution at 0.5 × 10 6 directly after the second pH neutralization step. Drops of the solution were added onto glass coverslips and the samples were polymerized under ambient conditions for 1 h. Gels were cultured in media, as described below.
Mechanical testing. To determine the Young's modulus of DexMA hydrogels, nanoindentation testing was performed with an atomic force microscope (MFP-3D, Asylum Research, Santa Barbara, CA) using a silicon-nitride tip (0.06 N/m) loaded with a 25 µm diameter polystyrene microsphere. Young's modulus was determined by fitting force-indentation curves to established models for Hertzian contact of a spherical indentor on an elastic half space, assuming a Poisson ratio of 0.5.
Angiogenic device fabrication. Angiogenic devices were fabricated according to a previously published procedure 12 . In brief, two patterned layers of PDMS, molded from photolithographically generated silicon masters, were bonded to each other and sealed against a glass coverslip to form the device housing. Two 400 μm diameter acupuncture needles (Hwato) were coated with 5 wt/vol% gelatin solution, cooled to 4°C for 5 min, sterilized using UV and inserted into the device. An MMP-cleavable DexMA gel was cast inside the device, allowed to polymerize for 60 min and hydrated in PBS overnight. Devices were warmed to 37°C for 1 h to melt away the gelatin coating prior to needle extraction. The gel was washed thoroughly with PBS and EGM-2 prior to cell seeding.
Human umbilical cord vein endothelial cells (HUVECs, Lonza) were cultured in fully supplemented EGM-2 media (Lonza) and expanded to passage 4 prior to use in experiments. For angiogenic device experiments, HUVECs were seeded into one channel at 10 7 /mL and allowed to adhere to the bottom surface for 30 min. The device was inverted, cells were seeded to the top surface at 10 7 /mL, and allowed to adhere and spread for 2 h. Unattached cells were thoroughly washed out with EGM-2 and the devices were placed on a platform rocker (BenchRocker BR2000) to generate gravity-driven flow through both channels. Eight hours after seeding, an angiogenic growth factor cocktail consisting of 75 ng/mL VEGF (R&D Systems), 75 ng/mL MCP-1 (R&D Systems), 150 ng/mL PMA (Sigma), and 250 nM S1P (Cayman Chemical) was introduced to the second channel to induce angiogenic sprouting. MMP inhibitor Marimastat (Tocris Bioscience) was administered into both channels at 500 nM.
Fluorescent staining and microscopy. HFFs on hydrogel samples were fixed with 4% PFA for 15 min (2D) or 1 h (3D) at room temperature. To visualize the organization of the actin cytoskeleton, cells were permeabilized with Triton X-100 for 5 min and stained with phalloidin-Alexa Fluor 488 (Life Technologies) for 1 h at room temperature. Nuclei were counterstained with DAPI.
HUVECs in devices were fixed with 3.7% glutaraldehyde for 30 min at room temperature. Samples were stained with phalloidin-Alexa Fluor 488 and DAPI overnight.
All samples were imaged at ×10 or ×40 on a Zeiss 200 M with a spinning disk head (Yokogawa CSU-10 with Borealis), environmental chamber, four laser lines, and photometric Evolve EMCCD camera. Images are presented as maximum intensity projections. Cell area was determined with a custom Matlab script. Sprout multicellularity was analyzed manually by counting the number of nuclei per sprout structure. Sprouts with more than six nuclei were defined as multicellular, and presented relative to the total number of sprouts. Cell density was determined from the total number of nuclei and the volume of gel containing sprout structures within each confocal stack.
Hydraulic permeability. The hydraulic permeability of dextran gels was measured as previously described 30 . Briefly, gels were formed in microfluidic channels and a hydrostatic pressure gradient was established across the gel. By measuring the volumetric flow rate through the gel, the hydraulic permeability was computed using Darcy's Law.
Diffusivity. Diffusion coefficients were determined by introducing fluorescently tagged 3, 10, and 70 kDa dextran into 160 μm channels formed through dextran gels as previously described 12 . Time-lapse fluorescence microscopy was used to image the labeled dextran as it diffused into the hydrogels. The fluorescence intensity in the hydrogel was measured as a function of time, and the resulting profile was fit to the 1D unsteady solution of Fick's second law of diffusion of dilute species 31 .
Poisson's ratio. To determine the Poisson ratio, cylindrical DexMA gels (6 mm diameter, 5 mm height) were cast in a PDMS mold and allowed to swell in PBS overnight. A micrometer-driven indenter was used to apply compressive axial strains (E z ) while imaging the gel from the side to quantify transverse strain (E xy ). The Poisson ratio was estimated for small strains (<5%) as −E xy /E z .
Rheology. Shear (Gʹ) and loss (Gʺ) moduli of DexMA gels were measured using an AR-G2 rheometer (TA Instruments, New Castle, DE), equipped with a solvent trap and a 20 mm stainless steel plate geometry. Gel samples were prepared using identical reagents and methods as in the 3D angiogenic sprouting experiments. Once the geometry made firm contact with the samples, frequency sweeps from 0.1 to 10 Hz at 1% strain were performed, followed by strain sweeps between 0.1 and 50% at 1 Hz. Data were collected from multiple measurements of three independent samples.
Statistics. Statistical differences were determined by ANOVA or Student's t-test where appropriate, with significance indicated by P < 0.05. Sample size is indicated within corresponding figure legends. All data are presented as a mean ± standard deviation.
Each study was repeated three times. For experiments involving single-cell analysis, n ≥ 50 cells, for sprouting experiments, n ≥ 4 fields of view and for mechanical characterization, n ≥ 8 positions were analyzed.
Data availability. The data sets generated and analyzed during the current study are available from the corresponding authors upon reasonable request.
|
2017-11-15T01:56:20.385Z
|
2017-08-29T00:00:00.000
|
{
"year": 2017,
"sha1": "10a4c73513fef8194978ee0f705f4a122f75acd9",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-017-00418-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3c612123ad7e047d089f9052f12de03272d685dd",
"s2fieldsofstudy": [
"Biology",
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
147705087
|
pes2o/s2orc
|
v3-fos-license
|
Nonmuscle Myosin Heavy Chain IIA Recognizes Sialic Acids on Sialylated RNA Viruses To Suppress Proinflammatory Responses via the DAP12-Syk Pathway
NMHC-IIA, a subunit of nonmuscle myosin IIA (NM-IIA), takes part in diverse physiological processes, including cell movement, cell shape maintenance, and signal transduction. Recently, NMHC-IIA has been demonstrated to be a receptor or factor contributing to viral infections. Here, we identified that NMHC-IIA recognizes sialic acids on sialylated RNA viruses, vesicular stomatitis virus (VSV) and porcine reproductive and respiratory syndrome virus (PRRSV). Upon recognition, NMHC-IIA associates with the transmembrane region of DAP12 to recruit Syk. Activation of the DAP12-Syk pathway impairs the host antiviral proinflammatory cytokine production and signaling cascades. More importantly, sialic acid mimics and sialylated RNA viruses enable the antagonism of LPS-triggered proinflammatory responses through engaging the NMHC-IIA–DAP12-Syk pathway. These results actually support that NMHC-IIA is involved in negative modulation of the host innate immune system, which provides a molecular basis for prevention and control of the sialylated RNA viruses and treatment of inflammatory diseases.
IL-8, IL-1, and other effectors (2). All these responses contribute to restraining viral infections, promoting infected cell clearance, and activating the host adaptive immune system. However, excessive host immune responses usually lead to individuals' dysfunctions and disorders, thereby requiring a fine-tuned modulation by various negative regulators.
In particular, an appropriate production of proinflammatory cytokines induces acute inflammation to suppress viral replication and prevent other opportunistic pathogens (3). Once viral triggers disappear, proinflammatory responses are immediately terminated by different mechanisms (4). However, if viruses are not eliminated during acute inflammation, the chronic inflammatory state will be established with continuous production of inflammatory cytokines, which may be detrimental to hosts (5). For example, AIDS, with the symptom of sustained inflammation, is caused by HIV persistent infection (6). In some cases, overproduction of proinflammatory cytokines, namely, cytokine storm, can be life-threatening, e.g., lethal viral septic shock (7). Consequently, antiviral proinflammatory responses are normally under precise control to achieve an efficient clearance of invading viruses and avoid immune damage.
Proinflammatory signaling cascades converge on the activation of nuclear factor kappa-light-chain-enhancer of activated B cells (NF-B) as well as the mitogen-activated protein kinase (MAPK) family members, p38 MAPK, c-Jun N-terminal kinase (JNK), and extracellular signal-regulated kinase 1/2 (ERK1/2). NF-B activation is characterized by degradation of the phosphorylated inhibitory protein IB␣ and translocation of phosphorylated p65/p50 dimers to the nuclei, whereas MAPKs are activated with phosphorylation (8,9). Their activation eventually increases the transcription of various proinflammatory cytokines. These proinflammatory signaling pathways are therefore often hijacked by viruses to establish infections or targeted by anti-inflammation modulators to prevent uncontrolled proinflammatory responses (10,11).
Nonmuscle myosin heavy chain IIA (NMHC-IIA) participates in a variety of cellular physiological processes, such as cell contractility, shape maintenance, and signal transduction (12). In addition, cell surface NMHC-IIA has been reported to facilitate viral infections (13,14). Here, we revealed a novel mechanism of negative regulation of host proinflammatory responses, where NMHC-IIA recognizes sialic acids on the sialylated RNA viruses or sialic acid mimics to suppress proinflammatory responses through the DAP12-Syk pathway.
We subsequently sought to determine the inhibitory mechanism of DAP12. Viral infections have been shown to induce tyrosine (Y) phosphorylation within the DAP12 immunoreceptor tyrosine-based activation motif (ITAM) and then recruit phosphorylated Syk (17). In this study, obvious phosphorylation of DAP12 and Syk was observed during PRRSV early infection (Fig. 1E). Immunoprecipitation (IP) assays indicated that DAP12 was constitutively associated with Syk, and the association was enhanced upon PRRSV infection in PAMs (Fig. 1F) and CRL-2843-CD163 cells (Fig. S3A). The interaction of these two proteins was also verified in the overexpression system of human embryonic kidney 293T (HEK-293T) cells ( Fig. S3B and C). Y86 and Y97 within the DAP12 ITAM were further shown to be indispensable for its interaction with Syk, while aspartic acid (D) 50 within the DAP12 transmembrane domain (TMD) was dispensable (Fig. S3D).
Syk knockdown ( Fig. S4A and B) or noncytotoxic Syk inhibitor R406 (Fig. S4D) in PAMs significantly promoted PRRSV-induced proinflammatory cytokine transcription and restrained PRRSV infection as DAP12 knockdown did ( Fig. 1G and Fig. S4C, E, and F). Additionally, DAP12 overexpression failed to inhibit the transcription of proinflammatory cytokines in Syk knockdown CRL-2843-CD163 cells, resulting in the decreased PRRSV replication (Fig. S4G to I). These results illustrated that the DAP12-Syk pathway was involved in antagonism of PRRSV-triggered proinflammatory cytokine production.
To determine which proinflammatory cascades were targeted by the DAP12-Syk pathway, we used noncytotoxic inhibitors of NF-B and MAPKs to treat PAMs and then inoculated PAMs with PRRSV (Fig. S5A). We observed that PRRSV-induced proinflammatory cytokine transcription was mediated by activation of NF-B and MAPK (p38 and ERK1/2) pathways ( Fig. S5B and C). DAP12 or Syk knockdown enforced these activations in response to PRRSV (Fig. 1H). In CRL-2843-CD163 cells, DAP12 overexpression repressed PRRSV-triggered NF-B activation and proinflammatory cytokine transcription, while Syk knockdown restored the proinflammatory responses ( Fig. S5D to F). Phosphorylated p38 and ERK1/2 were hardly detected in CRL-2843-CD163 cells (data not shown). The results demonstrated that the DAP12-Syk pathway suppressed MAPK (ERK1/2, p38)-and NF-B-mediated proinflammatory responses triggered by PRRSV.
NMHC-IIA is identified to interact with DAP12. In general, association of a receptor with DAP12 is responsible for activating the DAP12-Syk pathway (18). Here, we carried out IP assays followed by mass spectrometry (MS) analysis to screen potential DAP12-associated receptors. Among the MS-identified proteins, NMHC-IIA was one of the most prominent proteins binding to DAP12 ( Fig. 2A). The binding of NMHC-IIA to DAP12 was also confirmed by immunoblotting (IB) analysis (Fig. 2B). We further found that the association of endogenous NMHC-IIA and DAP12 was augmented upon PRRSV infection (Fig. 2C). Moreover, localization of NMHC-IIA and DAP12 was visualized in CRL-2843-CD163 cells cotransfected with enhanced green fluorescent protein (EGFP)-NMHC-IIA and DAP12-monomeric red fluorescent protein (mRFP). Though NMHC-IIA and DAP12 sparsely colocalized in the cell membranes, their colocalization was in-FIG 1 DAP12-Syk axis is activated to inhibit PRRSV-triggered proinflammatory responses. (A to C) DAP12 knockdown enhances PRRSV-triggered production of proinflammatory cytokines and restricts PRRSV infection. PAMs were transfected with siDAP12-433# or siRNA-NC for 36 h and infected with PPRSV (MOI ϭ 0.1) for indicated time periods (0, 3, 6, 9, and 12 h). "0 h" indicated that PRRSV was added at this time point and washed off immediately. qRT-PCR was used to measure proinflammatory cytokine (TNF-␣, IL-6, IL-8, and IL-1) mRNAs (A) and PRRSV ORF7 (C). TNF-␣ production was measured by ELISA (B). (D) DAP12 knockdown promotes PRRSV-induced transcription of proinflammatory cytokines at different MOIs. PAMs with DAP12 knockdown were infected by PRRSV (MOI ϭ 0.1 or 1) for 6 h. qRT-PCR was performed to detect mRNA abundance of TNF-␣, IL-6, IL-8, and IL-1. (E) PRRSV early infection induces phosphorylation of DAP12 and Syk. PAMs were infected with PRRSV (MOI ϭ 1) for indicated time periods (0, 0.5, 1, 2, and 3 h). DAP12 was immunoprecipitated by anti-DAP12 antibody. Phosphorylated DAP12 and Syk were detected by IB. (F) PRRSV infection enhances the association of DAP12 and Syk. PAMs were infected with PRRSV for 1 h. IB was conducted for DAP12-immunoprecipitated protein detection. (G) Syk knockdown promotes PRRSV-induced proinflammatory cytokine transcription. PAMs were transfected with siSyk-443# or siRNA-NC for 36 h and then infected with PRRSV (MOI ϭ 0.1) for indicated time periods (0, 6, 9, and 12 h). TNFA, IL-6, and IL1B transcription was detected by qRT-PCR. (H) DAP12 or Syk knockdown promotes PRRSV-induced phosphorylation of NF-B, p38, and ERK1/2. DAP12 or Syk knockdown PAMs were infected with PRRSV (MOI ϭ 1) for 1 h. IB analysis was performed with indicated antibodies. Experiments in all panels were repeated at least three times, and similar results were obtained. Quantitation data were shown as mean Ϯ SD from three replicates. Statistical analysis used in qRT-PCR was determined by Student's t test: *, P Ͻ 0.05; **, P Ͻ 0.01; ***, P Ͻ 0.001; ns, not significant.
Liu et al.
® creased after PRRSV infection (Fig. S6A). We also immunoprecipitated NMHC-IIA from membrane proteins of PRRSV-infected PAMs. The results of immunoblotting (IB) analysis showed that PRRSV infection intensified the association of NMHC-IIA and DAP12 on the cell surface (Fig. 2D). We further observed that NMHC-IIA specifically interacted with DAP12 but not with Syk in CRL-2843-CD163 cells transfected with 3ϫFlag-DAP12 and Syk-myc-His (Fig. 2E). Additional evidence from immunofluorescence assay (IFA) indicated that punctately distributed NMHC-IIA was located at the cell membranes where DAP12 colocalized with Syk in mock-infected cells, while PRRSV infection increased their colocalization (Fig. S6B). Collectively, NMHC-IIA was identified to be a novel binding protein for DAP12.
Next, we explored how NMHC-IIA interacted with DAP12. On one hand, NMHC-IIA was divided into three fragments from N to C terminus, denominated IIA-A (residues 1 to 742; the numbering is according to UniProt entry F1SKJ1), containing myosin N-terminal SRC homology 3 domain (SH3)-like domain and actin-binding domain; IIA-B (residues 743 to 1560), containing the coiled-coil rod domain; and IIA-C (residues 1561 to 1957), with the nonhelical tail. Glutathione S-transferase (GST) pulldown assays showed that IIA-B was responsible for the interaction with DAP12 (Fig. 2F). IP assay further indicated that the interaction was independent of D50 and the five Y's in DAP12 (Fig. S7A). On the other hand, DAP12 was separated into two parts, ΔICD with deletion of the intracellular domain (ICD) and ΔECD with deletion of the extracellular domain (ECD). Pulldown assays determined that both ΔICD and ΔECD bound to IIA-B, suggesting that the DAP12 TMD might be the binding region ( Fig. 2G and Fig. S7B). Furthermore, a short stretch (residues 51 to 57; the numbering is according to UniProt entry Q9TU45) within the DAP12 TMD was proven to be indispensable for the interaction ( Fig. S7C and D).
NMHC-IIA recognizes the sialic acids on PRRSV to inhibit proinflammatory responses via the DAP12-Syk pathway. To identify the ligands required for NMHC-IIA-DAP12-Syk pathway activation, we inoculated PAMs with the same amounts of naive, UV-inactivated, and heat-inactivated PRRSV virions, respectively. Almost identical to naive ones, both UV-and heat-inactivated virions induced phosphorylation of DAP12 and Syk, as well as the interaction of NMHC-IIA or Syk with DAP12 (Fig. 4A). Various amounts of heat-inactivated virions all enhanced DAP12 phosphorylation and Syk binding to DAP12 (Fig. S9A). MYH9 knockdown significantly inhibited phosphorylation of DAP12 and Syk induced by heat-inactivated PRRSV (Fig. S9B). The results suggested that neither PRRSV structural proteins nor viral RNA genome affected activation of the DAP12-Syk pathway.
NMHC-IIA Suppresses Antiviral Responses
® acids, respectively (20). The virions where ␣2-6-linked sialic acids remained induced the activation of the DAP12-Syk pathway as the naive ones did, while the virions with removal of ␣2-3,6-linked sialic acids failed to activate the DAP12-Syk pathway (Fig. 4B). These results demonstrated that sialic acids on PRRSV were required for activation of the DAP12-Syk pathway.
To further elucidate whether sialic acids were the ligands of NMHC-IIA, we conducted Fc pulldown assays with Fc-fused NMHC-IIA and purified PRRSV virions treated with the indicated enzymes (21). The results indicated that PRRSV interacted with NMHC-IIA partially dependent on the ␣2-3,6-linked sialic acids, because the removal of N-glycans or ␣2-3,6-linked sialic acids decreased by about 50% or 60% the amount of PRRSV bound to NMHC-IIA, respectively (Fig. 4C). In addition, competitive Fc pulldown showed that the mixtures of sialic acid mimics (3=-sialyllactose or 6=-sialyllactose mimicking ␣2-3or ␣2-6-linked sialic acids) competed against PRRSV to interact with NMHC-IIA (Fig. S9C). Moreover, NMHC-IIA nonhelical tail IIA-C was demonstrated to strongly interact with PRRSV in GST pulldown assays (Fig. 4D). PRRSV glycoprotein 5 (GP5) modified with sialic acids was shown to be important for PRRSV infection in vitro and in vivo (22,23). We conducted pulldown assays with naive or PNGase F-or ␣2-3,6,8 neuraminidase-treated GP5 and IIA-C (21) and observed that GP5 interacted with IIA-C partially dependent on the sialic acids ( Fig. 4E and F). The above results identified that the sialic acids on PRRSV were the ligands of NMHC-IIA.
We also evaluated the effects of heat-inactivated PRRSV virions on LPS-stimulated proinflammatory responses. The virions attenuated NF-B activation ( Fig. 4G and H) as well as TNFA, IL-6, and IL1B transcription in response to LPS (Fig. S9D). In contrast, virions with removal of N-glycans or ␣2-3,6-linked sialic acids lost the inhibitory function ( Fig. 4G and H and Fig. S9D). Considering that sialic acids on the cell surface might cis-interact with NMHC-IIA, we removed sialic acids on PAMs with ␣2-3,6,8 neuraminidase treatment before LPS stimulation. Sialic acid mimics still inhibit LPSinduced transcription of proinflammatory cytokines in trans (Fig. S9E).
Taken together, these results showed that NMHC-IIA recognized sialic acids on PRRSV to repress the proinflammatory responses via the DAP12-Syk pathway.
NMHC-IIA recognizes the sialic acids on VSV to inhibit proinflammatory responses by activating the DAP12-Syk pathway. To determine whether other sialylated RNA viruses were recognized by the NMHC-IIA-DAP12-Syk pathway for suppressing proinflammatory responses, we used vesicular stomatitis virus (VSV) with abundant ␣2-3-linked sialic acids (24) to inoculate murine peritoneal macrophage-like RAW 264.7 cells. Noncytotoxic knockdown of NMHC-IIA, DAP12, or Syk enhanced the TNFA and IL1B transcription in response to VSV (Fig. 5A). However, knockdown of NMHC-IIA decreased the amounts of VSV virions after 1 h of incubation at 37°C (Fig. 5A), suggesting that NMHC-IIA might be involved in VSV invasion. Therefore, we used heat-inactivated VSV virions instead of naive ones in the following experiments. IP analysis showed that heat-inactivated VSV (MOI ϭ 20) enhanced phosphorylation of DAP12 and binding of NMHC-IIA or Syk to DAP12 at indicated time points (Fig. 5B). The virions also augmented the association of NMHC-IIA with DAP12 on cell membranes (Fig. 5C). Next, we investigated their effects on LPS-triggered proinflammatory responses. The virions inhibited the LPS-triggered proinflammatory responses in control RAW 264.7 cells but not in NMHC-IIA inhibitor-treated or NMHC-IIA, DAP12, or Syk knockdown cells (Fig. 5D to F). Differently, MAPK activation was not influenced (Fig. 5F), and IL-8 transcription was not detected (data not shown). Intriguingly, we found that sialic acid mimics exerted a similar function as the virions in response to LPS, and 3=-sialyllactose had a stronger inhibition of NF-B activation and IL-6 transcription than 6=-sialyllactose ( Fig. 5G and H). Furthermore, PNGase F-or ␣2-3,6,8 neuraminidasetreated virions lost the inhibitory function ( Fig. 5G and H). All these findings demonstrated that NMHC-IIA recognized sialic acids on VSV to suppress the proinflammatory responses via the DAP12-Syk pathway.
Sialic acid mimics inhibit LPS-induced proinflammatory responses through the NMHC-IIA-DAP12-Syk pathway. Based on the above results, we wondered whether sialic acids repressed LPS-induced proinflammatory responses through the NMHC-IIA-DAP12-Syk pathway independent of sialylated RNA viruses. Sialic acid mimics were shown to induce the activation of the NMHC-IIA-DAP12-Syk pathway and antagonize NF-B activation ( Fig. 6A and B). Interestingly, after analyzing the relative level of phosphorylated DAP12 (pDAP12) induced by sialic acid mimics, we found that pDAP12 levels in 3=-sialyllactose-treated RAW 264.7 cells were about twice those of 6=sialyllactose-treated ones. In contrast, pDAP12 levels elevated by 6=-sialyllactose were about 2.4-fold higher than those elevated by 3=-sialyllactose in PAMs (Fig. 6C). We also observed that sialic acid mimics inhibited LPS-induced transcription of proinflammatory cytokines, and the inhibitory function of 3=-sialyllactose was stronger than 6=sialyllactose in RAW 264.7 cells, while 6=-sialyllactose was the stronger one in PAMs ( Fig. 6D and E). Additionally, sialic acid mimics suppressed LPS-triggered proinflammatory responses in RAW 264.7 cells (Fig. 7A to D) or PAMs (Fig. 8A to D). As expected, MYH9, DAP12, and Syk knockdown all terminated the inhibitory function of sialic acid mimics ( Fig. 7 and 8) at different time points. Collectively, these results demonstrated that NMHC-IIA recognized sialic acids to inhibit LPS-induced proinflammatory responses by activating the NMHC-IIA-DAP12-Syk pathway.
DISCUSSION
To eradicate invading viruses, various host signaling cascades are activated to induce effective proinflammatory responses. However, excessive host antiviral proinflammatory responses may develop into acute or chronic inflammatory disorders, and therefore, multiple negative regulatory mechanisms are needed to maintain homeostasis (25). In particular, Siglecs are a family of type I membrane proteins which specifically recognize sialic acid-modified glycans. It is known that several pathogens have evolved the capacity to gain sialic acids from their hosts and produce their own sialylated glycoconjugates (26). This capacity seems to be crucial for their survival in mammalian hosts, possibly by mimicking the host cell surface molecules to negatively regulate the innate immune responses and avoid the immune attack. For example, Siglec 1, Siglec H, and Siglec G were reported to be exploited by viruses to antagonize antiviral immune responses (27)(28)(29). Siglec 1 also played a role in establishing an immunosuppressive state of inflammation (30).
NMHC-IIA plays important roles in cell adhesion, cell migration, and cell division (31). Increasing evidence indicates that NMHC-IIA is required for entry of viruses such as herpes simplex virus 1 (13), Epstein-Barr virus (14,32), severe fever with thrombocytopenia syndrome virus (33), and PRRSV (34). Here, we identified that NMHC-IIA functioned as a Siglec to negatively regulate NF-Band MAPK-mediated proinflammatory responses. GST pulldown assays demonstrated that NMHC-IIA directly interacted with sialylated RNA viruses partially dependent on sialic acids ( Fig. 4; see also Fig. S9 in the supplemental material). Furthermore, knockdown of NMHC-IIA led to augmented transcription of proinflammatory cytokines and activation of NF-B, p38, and ERK1/2 in response to sialylated RNA viruses ( Fig. 3; Fig. S5 and S8). To distinguish the immunoregulation function from its role in viral invasion, we used heat-inactivated virions or monovalent sialic acid mimics instead of naive viruses to inoculate LPS-stimulated cells, which still resulted in decreased proinflammatory cytokine production and attenuated NF-B activation ( Fig. 4 and 8 and Fig. S9). In contrast, MYH9 knockdown recovered the proinflammatory responses ( Fig. 5 and 8). Interestingly, we found that the suppression of LPS-triggered inflammation by sialic acid mimics in RAW 264.7 cells was different from that in PAMs. 3=-Sialyllactose showed higher inhibitory potency than 6=sialyllactose in RAW 264.7 cells, while 6=-sialyllactose was the more potent one in PAMs. . Experiments in all panels were repeated at least three times, and similar results were obtained. Quantitation shown was from three independent experiments (mean Ϯ SD from three replicates). Statistical analysis used in qRT-PCR was performed using Student's t test: *, P Ͻ 0.05; **, P Ͻ 0.01; ***, P Ͻ 0.001; ns, not significant. . IB was performed to detect p65 phosphorylation. Experiments in all panels were repeated at least three times, and similar results were obtained. Quantitation data are mean Ϯ SD (n ϭ 3). Statistical analysis used in qRT-PCR was performed using Student's t test: *, P Ͻ 0.05; **, P Ͻ 0.01; ***, P Ͻ 0.001; ns, not significant.
NMHC-IIA Suppresses Antiviral Responses
Liu et al.
®
The difference might be due to cell species. We did not obtain the appropriate multivalent sialic acid mimics to investigate their effects on LPS-triggered proinflammatory responses, which needed a further exploration. These findings revealed a novel role of NMHC-IIA in negative modulation of innate immune responses and may be applied to design of anti-inflammatory drugs. DAP12, also called TYROBP (tyrosine kinase binding protein) and KARAP (killer cell activating receptor-associated protein), functions as an adaptor in various immune cells, including macrophages, microglia, monocytes, DCs, and natural killer (NK) cells (35). DAP12 contains a small ECD, a TMD, and an ICD. The ECD possesses an essential cysteine required for the homodimer formation, the TMD possesses an aspartic acid indispensable for interaction with DAP12-associated receptors, and the ICD possesses an ITAM responsible for interaction with Syk and delivering signals (18). The receptor-DAP12-Syk axis is a classic pathway involved in immune responses such as synergistic activation of proinflammatory cytokine production (36) and IFN augmentation (37). Interestingly, the axis is also involved in negative regulation of proinflammatory responses. DAP12-and Syk-deficient macrophages from the corresponding knockout mice displayed an increased secretion of proinflammatory cytokines in response to LPS, CpG DNA, and synthetic lipopeptide (38). In our study, we observed activation of the DAP12-Syk pathway during viral early infection ( Fig. 1 and 5). Knockdown of DAP12 or Syk enhanced both virus-triggered production of proinflammatory cytokines and NF-B/MAPK activation ( Fig. 1 and 5; Fig. S1 and S4). In contrast, DAP12 overexpression antagonized these responses (Fig. S2). These results suggested that the DAP12-Syk pathway was hijacked to suppress the antiviral proinflammatory responses.
So far, only the triggering receptor expressed on myeloid cells 2 (TREM-2) was identified to be a DAP12-associated receptor mediating inhibition of Toll-like receptor (TLR) and FcR signaling (39). DAP12-associated receptors in negative regulation of proinflammatory responses were not fully illustrated. Here, we identified a new DAP12associated receptor, NMHC-IIA, through IP-MS, IFA, and co-IP assays ( Fig. 2 and Fig. S6). The results showed a direct interaction between NMHC-IIA and DAP12, with a requirement of NMHC-IIA domain B (mainly containing ␣-helices) and a short stretch (residues 51 to 57) within the DAP12 TMD ( Fig. 2 and Fig. S7). In addition, we identified the sialic acids on RNA viruses (VSV and PRRSV) as the ligands of NMHC-IIA to activate the DAP12-Syk pathway (Fig. 4 and 5 and Fig. S9).
PRRSV, a sialylated RNA virus, has been shown to inhibit NF-B-mediated inflammatory responses during early infection while triggering the responses in the late infection (40). VSV, another sialylated RNA virus (41), has also been reported to induce a delayed NF-B activation (42). Consequently, the two sialylated RNA viruses are utilized to explore the interaction between sialic acids and host innate immunity during viral infections. Knockdown of NMHC-IIA, DAP12, and Syk all promoted virus-triggered proinflammatory responses ( Fig. 1 and 3 and Fig. S8). We also found that DAP12 knockdown promotes proinflammatory cytokine transcription at time zero (Fig. 1), which might be explained by DAP12 being involved in negative regulation of immune responses for maintaining homeostasis (16,43). However, IL-10 transcription was not induced during PRRSV infection, a finding which was different from previous reports where PRRSV upregulated IL-10 expression (44,45). This divergence might be due to the PRRSV strains and PAMs used and needs to be further elucidated. Next, we identified that sialic acids on viruses were the stimuli and exerted the anti-inflammatory effects (Fig. 4 and 5 and Fig. S9). This may be a common mechanism in negative regulation of antiviral proinflammatory responses (Fig. 9), which needs further demonstration in other sialylated RNA viruses, and even DNA viruses or bacteria.
Indeed, there are some issues left unsolved in the current work. First, we performed knockdown instead of knockout of NMHC-IIA in vitro since MYH9 knockdown with more than 70% efficiency was detrimental to the cells (Fig. S8). We failed to get the MYH9 knockout mice and perform in vivo experiments because the loss of NMHC-IIA was lethal for mice, which is consistent with a previous study (46). Second, it might be better to generate a stable cell line with DAP12 or Syk knockdown to determine their inhibitory functions. However, it is hard to obtain such stable cell lines for primary PAMs. Third, factors downstream of Syk delivering the inhibitory signals have not been discovered, and the related work is being carried out in our laboratory. Fourth, DAP10 is similar to DAP12 in its TMDs. There are certain receptors associated with DAP12 which are also identified to pair with DAP10 (43). Whether DAP10 is associated with NMHC-IIA needs to be further studied. Our initial work showed that DAP12 knockdown increased type I IFN production. Whether NMHC-IIA-DAP12-Syk has effects on IFN responses is our next issue to be explored.
In conclusion, we identify that NMHC-IIA recognizes sialic acids on sialylated RNA viruses and activates the DAP12-Syk pathway to suppress virus-triggered proinflammatory responses. More importantly, the NMHC-IIA-DAP12-Syk pathway is shown to inhibit LPS-induced proinflammatory responses on recognition of heat-inactivated sialylated RNA viruses and sialic acid mimics. Taken together, we have revealed a novel negative regulation mechanism of proinflammatory responses, which expands our knowledge of the host innate immune system and provides clues for struggling with the sialylated RNA viruses and inflammatory diseases. HEK-293T, and RAW 264.7 cell lines were purchased from Cellbio (Shanghai, China). CRL-2843-CD163, a cell line stably expressing CD163 in CRL-2843, was constructed in our laboratory. PAMs and CRL-2843-CD163 cells were routinely maintained in Roswell Park Memorial Institute 1640 medium (RPMI 1640) supplemented with 10% heat-inactivated fetal bovine serum (FBS; Gibco), penicillin (100 U/ml), and streptomycin (100 mg/ml) in a humidified 37°C, 5% CO 2 incubator. MARC-145, HEK-293T, and RAW 264.7 cells were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% FBS and antibiotics.
MATERIALS AND METHODS
Highly pathogenic (HP) PRRSV strain HN07-1 (isolated during an HP-PRRSV outbreak in the Henan province of China in 2007 by our laboratory) and VSV (Indiana serotype kept in our laboratory) were prepared according to our laboratory's previous study (47). UV-inactivated viruses were generated by irradiation with short-wave UV light (254 nm) for 1 h, and heat-inactivated ones were gained by a water bath at 65°C for 15 min. The viral infectivity was confirmed to be completely lost. The viruses were purified through tangential flow filtration and Sepharose 4 Fast Flow gel chromatography, and their infectivity was comparable to that of naive ones. Table 1.
qRT-PCR. Total RNAs were extracted with TRIzol reagent (Invitrogen), and the reverse transcription cDNAs were prepared from total RNAs using the PrimeScript RT reagent kit with gDNA Eraser (TaKaRa). The cDNAs from different samples were amplified by quantitative real-time PCR (qRT-PCR) to measure mRNA abundance on a 7500 Fast RT-PCR system (Applied Biosystems, Foster City, CA, USA). Relative mRNA level was evaluated by the 2 ϪΔΔCT method with glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA as an endogenous control (48). The primers for qRT-PCR analysis are listed in Table S1 in the supplemental material.
IB and IP. Cells were harvested and lysed in radioimmunoprecipitation assay (RIPA) lysis buffer (Beyotime Biotechnology), supplemented with protease and phosphatase inhibitors. Whole-cell lysates (WCLs) were normalized to equal amounts of -actin, separated by 8 to 15% gradient sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), and electrotransferred onto Immobilon-P membranes (Merck Millipore). The membranes were blocked in 5% skimmed milk for 1 h and probed with the indicated primary antibodies. After incubation with horseradish peroxidase (HRP)-labeled goat antimouse or rabbit IgG antibody as secondary antibodies, the indicated proteins were visualized with enhanced chemiluminescence (ECL) reagent (Solarbio). For IP, the indicated primary antibodies were first bound to protein A/G beads at 4°C for 4 h. Samples were subsequently incubated with the beads at 4°C overnight, and potential associated proteins were tested by IB as stated above. The relative levels of target proteins were analyzed using Image J software, and the ratio was displayed as fold change below the images.
ELISA. PAMs with DAP12 knockdown were inoculated with PRRSV at an MOI of 0.1 for indicated time periods (3, 6, 9, 12, 24, and 36 h). The cell supernatants were collected for measurement of TNF-␣ using ELISA kits according to the manufacturer's instructions.
Virus titration assay. The transfected cells were inoculated with PRRSV at an MOI of 0.1, 1, or 5. At indicated time points (24,36, and 48 h) postinfection, the progeny virus titers were measured by the 50% tissue culture infective dose (TCID 50 ) assay in MARC-145 cells.
Eukaryotic expression was conducted by transfection of each expression vector into HEK-293T or CRL-2843-CD163 cells for 36 h using Lipofectamine 2000 or Lipofectamine LTX with Plus reagent according to the manufacturer's instructions (Thermo Fisher Scientific). The transfected cells were lysed in RIPA lysis buffer supplemented with protease and phosphatase inhibitors and clarified by centrifugation at 12,000 rpm at 4°C for 15 min to collect supernatants. Protein A/G beads were incubated with the indicated antibodies and WCLs or cell supernatants at 4°C and eluted by 0.05 M glycine-HCl buffer, pH 2.2 (0.2 M glycine, 0.2 M HCl).
Inhibitor treatments. PAMs were seeded onto 24-well plates and treated with specific inhibitors of MAPKs (ERK1/2, p38, and JNK) and NF-B at different concentrations (5, 10, 15, 20, 25, and 30 M), Syk inhibitor at 5 M for 12 h, or NMHC-IIA inhibitor at 20 M for 1 h. Cell viability measurement was performed as stated above. Then, PAMs were inoculated with PRRSV at an MOI of 0.1 for 1 h, and transcription of proinflammatory cytokines was tested by qRT-PCR. Phosphorylation of indicated proteins was detected by IB.
TABLE 1 siRNAs in this study
Mass spectrometry analysis. PAMs were inoculated with PRRSV for 1 h and lysed for IP with DAP12 primary antibody or isotype control IgG. DAP12-associated proteins were eluted and subjected to SDS-PAGE and silver staining. Discrepant bands with intensive signal compared with isotype control IgG were cut and digested, followed by analysis using matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) by Shanghai Sangon Biotech Co. Ltd.
IFA. CRL-2843-CD163 cells were grown on coverslips in 6-well plates at 30 to 50% confluence. For visualization of the distribution of NMHC-IIA, DAP12, or Syk, cells were transfected with EGFP-NMHC-IIA or/and DAP12-mRFP for 36 h and then mock infected or infected with PRRSV (MOI ϭ 20) for 1 h. The cells were fixed with 4% paraformaldehyde for 15 min at room temperature and then blocked with 5% BSA-PBST for 30 min. Next, cells were incubated with anti-Syk MAb at 4°C for 2 h, followed by incubation with DyLight 350 (blue)-conjugated anti-mouse IgG for an additional 45 min. The localization of NMHC-IIA, DAP12, or Syk was observed under an inverted fluorescence and phase-contrast microscope (Carl Zeiss AG, Oberkochen, Germany). Images were taken at a ϫ400 magnification.
In vitro pulldown assays. GST resins or protein A/G beads were incubated with purified GST-or Fc-tagged proteins at 4°C for 2 h and then with the indicated purified Flag-tagged proteins or differently treated virions at 4°C for another 30 min or overnight. After extensive washing with TBS 4 times, proteins were eluted and subjected to IB with the indicated antibodies.
Deglycosylation treatments. Heat-inactivated virions were treated with PNGase F in PBS at 37°C for 90 min to remove all N-glycans (high-mannose and complex, sialic acid-containing glycans). To remove sialic acids, the virions were incubated with specific neuraminidases in PBS at 37°C for 90 min or only the neuraminidase buffer (0 unit of neuraminidase) as a negative control. PAMs or RAW 264.7 cells were stimulated with LPS (10 g/ml) for indicated time periods (0, 15, and 30 min or 0, 30, and 120 min) in the presence of deglycosylation virions for 1 h. After extensive washing, cells were collected to conduct protein or mRNA extraction. The removal of sialic acids on the purified PRRSV GP5 was performed in parallel. The desialylated protein was further applied for pulldown assays. To remove sialic acids on the cells before infection, PAMs or RAW 264.7 cells were incubated with ␣2-3,6,8 neuraminidase at 37°C for 1 h and washed extensively to remove the enzyme (20,49).
Competition experiments with monovalent sialic acid mimics. The protein A/G beads with Fc-tagged NMHC-IIA were incubated with the purified PRRSV virions in the presence or absence of mixtures of 20 M 3=-sialyllactose and 6=-sialyllactose sodium salt at 4°C for 3 h. The virions were eluted and measured by IB.
Protein extraction. The nuclear and cytoplasmic proteins were separately extracted using a nuclear and cytoplasmic protein extraction kit and applied for IB detection. The membrane and cytoplasmic proteins were harvested using the ProteoExtract native membrane protein extraction kit.
Treatment with virions and sialic acid mimics. PAMs were incubated with the same amount (MOI ϭ 10 or 20) of heat-inactivated virions, desialylatd virions, or 10 M sialic acid mimics (soluble 3=-sialyllactose and 6=-sialyllactose sodium salt) for 1 h and later stimulated with LPS (10 g/ml) for indicated time periods (0, 15, and 30 min). RAW 264.7 cells were stimulated with LPS (10 g/ml) at indicated time points (30 and 120 min) in the presence of the specifically treated virions or the sialic acid mimics for 1 h. The indicated proteins and proinflammatory cytokines were analyzed by IB and qRT-PCR, respectively.
Statistical analysis. Three replicates were included in all experiments, and each experiment was independently repeated at least three times. The experimental data were presented as group mean and standard deviation (SD) and analyzed by the unpaired two-tailed Student t test with GraphPad (Graph-Pad Software, San Diego, CA, USA). Asterisks indicate statistical significance as follows: ns, not significant; *, P Ͻ 0.05; **, P Ͻ 0.01; ***, P Ͻ 0.001.
SUPPLEMENTAL MATERIAL
Supplemental material for this article may be found at https://doi.org/10.1128/mBio .00574-19. funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
The authors declare that there is no conflict of interest.
|
2019-05-09T13:09:54.722Z
|
2019-05-07T00:00:00.000
|
{
"year": 2019,
"sha1": "ba8bcebc148fea7a36d03373b47c7e77a7d82d59",
"oa_license": "CCBY",
"oa_url": "https://mbio.asm.org/content/mbio/10/3/e00574-19.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "edc93afe3259d2600f9f5cfbe07bb08e749e1221",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
}
|
245763610
|
pes2o/s2orc
|
v3-fos-license
|
Combining indicators for better decisions – Algorithms vs experts on lakes ecological status assessment
The results of ecological condition assessments of ecosystems are related to key decisions taken for the purpose of remedial measures or maintaining their current state. In the assessment process, experts come across extensive datasets, the quality, and completeness of which do not always allow for a reliable evaluation, especially if a single empirical approach is used. In this paper, results of machine learning algorithms are presented, with a focus on Self-Organizing Maps. In this context, measurements of component parameters for the assessment of the ecological state of lake ecosystems were subjected to the process of unsupervised machine learning with the aim to create an alternative assessment approach based on the capabilities of neural networks. Results are mapped and compared with expert evaluations, allowing to extend knowledge about sub-clusters present in the data. The primary target of this paper is the ecological assessment expert. At an early stage, information was obtained about the presence of ecological outliers that may be subject to separate monitoring or verification of environmental activities and objectives. In the back-mapping process, the presented technique of map construction and clustering with various versions of the division was referenced to a set of expert classification findings, revealing the underlying structure of the results when addressed with an unsupervised data comprehension. The approach introduced here does not intend to interfere with the format of an original assessment methodology. Rather it aims at obtaining useful additional information which may help in making better decisions.
Introduction
Maintaining and enhancing the ecological status of water ecosystems is a particularly difficult issue faced by policymakers today (Derakhshannia et al., 2020;Fattahi Nafchi et al., 2021;Geist and Hawkins, 2016;Ostad-Ali-Askari and Shayannejad, 2021;Saarikoski et al., 2018). One of the main concerns of applied ecology is of an empirical nature. At the same time, sufficiently large datasets are becoming increasingly available in many cases (Peters et al., 2014;Zhang and Chen, 2017). Researchers and government agencies need access to a variety of tools that are best suited for analyzing these critical datasets. Many of the difficulties related to lake water quality may be solved using complicated, time-consuming and costly physical models of the environment (Cosgrove and Loucks, 2015;Ostad-Ali-Askari et al., 2017;Salehi-Hafshejani et al., 2019). As data scientists become more involved in the study of environmental concerns, machine learning models may increasingly have a role to play not just in ecological state modelling, but also in the development of density and point projections. This is very important when engaging in a process of knowledge based decisionmaking (Blair et al., 2019;Hadjimichael et al., 2016;Kanevski et al., 2004). Whilst successful ecological modelling does not require the use of structural models, it necessitates the completion of data pre-processing that underpins reduced-form, robust methods (Alamelu Mangai and Gulyani, 2020;Chrobak et al., 2021).
The European Union's Water Framework Directive (WFD) establishes operational definitions for assessing ecological conditions for defining management goals, and for harmonizing the ecological assessment systems of EU Member States (Hendry, 2017). The WFD aims at all European Union's lakes being in 'good' ecological conditions shortly. In this context, evaluation of particular aquatic assemblages referred to as biological quality components, is used to determine 'good' ecological status (Nõges et al., 2009). Phytoplankton, aquatic flora (including macrophytes, macroalgae, and phytobenthos), benthic invertebrates, and fish are all parts of these components (Kolada et al., 2016). Moreover, especially in the case of lacustrine ecosystems, the measured parameters include levels of phosphorous, nitrogen, chlorophyll a or water conductivity (Bhateria and Jain, 2016;Javadinejad et al., 2019;Ostad-Ali-Askar et al., 2018;Pirnazar et al., 2018). The WFD deems biological quality elements (BQE) to be of good ecological quality if they deviate from near-natural reference conditions only minimally (Europe Environment Agency, 2018). Member States are responsible for periodically analyzing certain BQEs in order to categorize the quality of their water bodies into one of the five WFD classes: high, good, moderate, poor, or bad. In many cases, bioindication technologies are used to minimize the costs of water quality monitoring (Golian et al., 2020;Holt and Miller, 2011;Kohlmann et al., 2018;Navabpour et al., 2018;Vanani et al., 2017). Assessments are used to highlight not only the current level of water contamination but also to categorize water quality as a foundation for environmental and economic considerations (Hu et al., 2018;Javadinejad et al., 2021;Kishimoto and Ichise, 2013;Liu et al., 2016;Talebmorad et al., 2020). Thus, they are useful not only in practical terms e.g. water management and establishing environmental solutions, but also in terms of revealing how various factors impact ecosystem characteristics such as population composition, biological productivity, and overall ecological fitness. Environmental flows are increasingly being included into freshwater ecosystem conservation management methods. The impacts of various hydrologically based flows on fish, macroinvertebrates, and macrophytes were investigated by adding artificially modified flow and discharge variations (Kuriqi et al., 2021(Kuriqi et al., , 2019Pander et al., 2019). Various models for estimating the ecosystem status of lakes coexist in different parts of Europe, using different methodologies (Frassl et al., 2019;Menshutkin et al., 2014;Mooij et al., 2010;Vinçon-Leite and Casenave, 2019). Some of them are directly reliant on the WFD's criteria being implemented. In the current monitoring of aquatic environments, two types of data are available: (1) chemical, physical, morphometric, and climatically relevant variables labelled generally as environmental data, and biotic data such as the number, abundance, and biomass of algal, invertebrate, and aquatic plant species; as well as (2) bioindication groups represented by the number of species in each, and indices calculated for aquatic communities and their environments such as PMPL, ESMI, IOJ, and many others (Alizamir et al., 2021;Legendre, 2018;Lepš and Š milauer, 2006;Zhang et al., 2021). Due to the comprehensive approach to assessment, the popularity of methods based on environmental mapping is also growing, especially in the basin approach. However, this belongs to the solutions that are time-consuming and hard to interpret. The multitude of approaches and available indicators that often correlate through the use of similar (or the same) variables generates a number of problems, e.g. during data assimilation procedures (Beven, 2018;Mclaughlin et al., 2006;Wu et al., 2014).
According to existing research, there are over 300 aquatic ecological evaluation methodologies in use across Europe (Poikane et al., 2014). One of the main tasks today is to provide a methodology for intercalibrating existing models (Birk et al., 2013;Lyche Solheim et al., 2019;Mooij et al., 2010). Despite progress in standardizing methodologies, decisions in different countries are made on the basis of locally developed solutions (Arhonditsis et al., 2019;Liu et al., 2008;Moss, 2008;Paruch et al., 2017;Sojka et al., 2019). Some of them, based on the framework defined in ecological status classes, do not sufficiently cope with the presence of outliers. Here, activities should be conducted that take into account their specific conditions (Cristóbal et al., 2014;Díaz Muñiz et al., 2012;Jackson and Chen, 2004;Lin et al., 2020). Differentiation within classes, low sensitivity to outliers, as well as deficiencies in the sets of measurements negatively impact decisionmaking for remedial action. This is also a problem when monitoring the achievement of environmental goals of lake ecosystems (Liu et al., 2019;Park and Hwang, 2016). Shortcomings like these make it difficult to reach a compromise between science and practice. As will be discussed further below, the rapidly developing areas of machine learning and artificial intelligence can address some of these difficulties.
Machine learning techniques are used in applied water ecology, related to the assessment of the quality of ecosystems. This applies primarily to case studies, where machine learning methods have been demonstrated to outperform standard statistical approaches for a range of ecological indicators (Bui et al., 2020;Ebron et al., 2020;Li et al., 2021;Liu et al., 2019;Ostad-Ali-Askari et al., 2019;Sahaya Vasant and Adish Kum, 2019;Singh et al., 2021;Vinçon-Leite and Casenave, 2019). Such techniques are robust especially in situations when measured ecological data are known to be non-linear and highly dimensional, with strong interaction effects. Whilst methods that assume linearity are unable to cope with interaction effects they are still being used. Frequently some modifications are introduced for aiding their performance. Among the methods used to date, approaches focused on forecasting the ecological state of lakes, using decision trees, random forests, support vector machines, k-nearest neighbors, and artificial neural networks, deserve particular attention (Chen et al., 2020;Hadjisolomou et al., 2021;Hrnjica and Bonacci, 2019;Mellios et al., 2020). Only a little research has been devoted to retrospective analysis, which aims at being an alternative to testing existing methods and producing algorithms for assessing the current ecological state of lakes (Gophen, 2021;Klinard et al., 2019;Moges et al., 2017). This is particularly important in the context of measures that aim at saving endangered ecosystems, where decisions are made on the basis of an already completed evaluation, or that are based on the assessment of long-term activities carried out over several ecological classification processes. In these cases, decision support based on existing data and using machine learning algorithms act as additional tools, aimed at supporting experts making decisions in situations of high uncertainty and complexity of the assessment process, is particularly useful (Uddin et al., 2021).
The main aim of this paper is to provide support for current methods of assessing the ecological status of lakes, so as to avoid changing the approach developed among experts. The proposed solution may serve as a tool supporting the work of the expert team in the selection of priority water bodies in terms of the implementation schedule of remedial measures in the next period of the water management plan. The unsupervised classifier is introduced as a parallel tool to the expert assessment process in this paper, which is a novel approach in ecological state assessment of lakes. This allows for obtaining information supporting the prioritization of objects for remedial actions in a situation when the number of ecological status classes included in the source methodology does not allow for a case study for each water body. An additional advantage of the solution is the ability to adjust the class factor to the internal (relative) range of the analyzed data, which places the prioritization result in the context of the dataset, while maintaining the higher-order division resulting from the WFD provisions. In this context, the information capacity of previous measurement campaigns will be kept. Our approach was constructed using available measurement data from four measuring campaigns. Based on the outcomes of experts' work following WFD standards, an ecological status classifier for lake ecosystems was built in the study underlying this paper. The Self Organizing Map model used in this approach functions as an unsupervised data reinterpreter, creating subgroups of ecologically similar objects without disturbing the original class structure.
Subsequently, a dataset and pre-processing description is provided, this is followed by the introduction of methods, along with an explanation of the general analytical scope and a description of the specific machine learning-based method. The supporting classifier that has been created is introduced next, along with a description of the different model options. Results are summarized in the discussion section, including the difficulties encountered as well as the relevance of the study findings. Finally, the usefulness of the findings is explained and suggestions are made for future study.
Data and pre-processing
The Chief Inspectorate of Environmental Protection in Poland provided the input data for the study underlying this paper. The same repository is used as the one which is the basis for reporting to the European Commission concerning environmental monitoring (GIOŚ, 2015). Due to the fulfillment of the acquis communautaire obligations in the field of surface water monitoring and evaluation, measurements are carried out in accordance with the requirements set out in national as well as European Union legislation (Mantzouki et al., 2018). The data collected cover the ecological state assessment results supplied with data on 496 lakes in Poland from 2010 to 2015. Ecological State Macrophyte Index (ESMI), chlorophyll a, conductivity, Diatom Index for Lakes (IOJ), nitrogen, phosphorus, phytoplankton, Phytoplankton Method for Polish Lakes (PMPL), and visibility were measures used to determine the ecological state of lakes during an assessment conducted with expert analyses.
The dataset was pre-processed during correlation analysis and missing data imputation (Chrobak et al., 2021). These procedures were performed to construct a model reproducing the course of expert assessment, so that in the future it could be an auxiliary tool, especially in terms of re-evaluation of ecosystems when specific remedial actions are to be implemented. Research published contains formulas and a formal record of the applied calculations, as well as the definitions employed (Chrobak et al., 2021). The pre-processed dataset, used as input to this paper is available in raw format as Appendix A. Moreover, Appendix B contains an R language script that converts all of the analysis procedures in this paper into an executable, repeatable code. The research was conducted with the use of software, providing: data visualization (Tableau 2019.1.19, https://www.tableau.com/), data modelling (R 3.6.1 via RStudio 1.2.5033 "Orange Blossom", https:// www.r-project.org/, https://rstudio.com/), and diagram development (draw.io 14.1.8, https://www.diagrams.net/). The list of all acronyms used in this work is presented below in an alphabetical order: ANN -Artificial Neural Network BMU -Best Matching Unit BQE -Biological Quality Elements ESMI -Ecological State Macrophyte Index EU -European Union IOJ -Diatom Index for Lakes SOM -Self Organizing Map WFD -Water Framework Directive
Self-Organizing maps (SOM)
Self-Organizing Maps are a type of artificial neural network (ANN) algorithm that, in this study, was trained using an unsupervised learning process to produce a two-dimensional, discretized representation of the input space of training samples obtained from measured ecological parameters of lake ecosystems (Kohonen, 1998). Thus, here, the SOM acts as a dimensionality reduction method in the first place (Thrun and Ultsch, 2020). Moreover, by grouping related features together, SOM also represented the clustering notion (Vesanto and Alhoniemi, 2000). As a result, an algorithm was used to employ dimensionality reduction to cluster a nine-dimensional dataset. This is depicted by using feature maps. By competing for representation, each object in the dataset was recognized in the context of inter-set similarity and placed adequately on the map (Kohonen, 2013). The initialization of the weight vectors for neurons was the first stage in the mapping process. Then, at random, a sample vector was chosen and the map of weight vectors was searched for the weight which is able to best describe that sample. Each vector had neighborhood weights in its immediate vicinity. The chosen weight was rewarded by the ability to become increasingly similar to the randomly picked sample vector. The neighbors of that weight were rewarded as well, as they were able to become more like the sample vector chosen. This allowed the map to expand and take on new forms during the training process (Kohonen, 1990). The algorithm used in this study consisted of the following steps, which are detailed further in the following subsections (following Ali Hameed et al., 2019): Step 1. The node weight vectors of the map were distributed randomly.
Step 2. An input target vector D(t) was selected at random.
Step 3. Each node in the map was iterated: Step 3.1. The Euclidean distance formula was used to find the similarity between the input vector and the map's node's weight vector.
Step 3.2. The node that generated the shortest distance was found and tagged as the best matching unit (BMU).
Step 4. By drawing the weight vectors of the nodes in the BMU's neighborhood (with BMU included) closer to the input vector, they were updated as: where: W is the current weight vector of the node, t is the current time step, L is the learning rate in the form of exponential decay function: for t being the current time step and λ acting as the constant depending on the selected number of iterations, λ is the influence rate which decides the size (radius) of the neighborhoodas the number of iterations grows, the size gradually shrinks until reaching 0 at the end of the training: ) with E dist being Euclidean distance between each pair of the neurons and σ representing radius size.
To measure the quality of the developed map, the node counts metric was used, which provides the count of objects (lakes) mapped to each node (Fig. 1). In order to facilitate the next steps, and to avoid the occurrence of 'empty nodes', the sample distribution should be relatively uniform. Moreover, the grid was checked in search of the presence of high values (counts) in the map area. This implies that a larger map might be advantageous (Tatoian and Hamel, 2018). The grid was created with 10x10 hexagonal nodes, allowed to achieve the goal of obtaining an average of close to 5 observations in the node. The adopted mesh did not contain 'empty nodes'. However, in the case of nodes: 1, 16, and 29, they contained 1 element each. The maximum number of objects in the node (12) was assigned to node 26. At the stage of constructing the map, the neighbor distance analysis was performed, the illustration of which is the so-called 'U-Matrix', showing the distances between each node and its neighbors (Fig. 2) (Stefanovič and Kurasova, 2011). The detected low neighbor distance (<5) pointed to groups of similar nodes. The observed distance with a value of above 5 indicates dissimilarities between object groups.
Variables distribution in SOM
In the initial phase of the analysis of the results of the parameters measured for the lakes, their distributions across the grid were traced on a previously defined map (Fig. 3). It was feasible to trace distribution patterns and hence determine the nature of the relationship between the indicators as a result of this. In some circumstances (e.g., IOJ -visibility, phosphorus -PMPL), the observed results provide a basis for the data set's dimensionality reduction. Similarly, some variables show a noticeable inverse relationship, which in the case of PMPL -ESMI or phosphorus -IOJ pairs may affect the formation of the resulting map. It is possible to illustrate locations where lakes with a particularly low (upper left corner) or high (lower right corner) ecological status will be concentrated at this level of the investigation. Because the default algorithm (which depicts the normalized version of the dataset) was reaggregated to be able to represent the variables from the original measurement set, this is intuitive at this early stage (Qian et al., 2019). As a result, the output is scaled to the true values of the eco-state variables. Both, the size of the clusters and the class distribution for lakes that could fall into any of the categories in between the extreme classes are unknown at this stage. As a result, it was decided to develop a weight vector map prior to the model training in order to make pattern identification easier (Fig. 4) (Chaudhary et al., 2014). The findings of the map's weighted juxtaposition of variables were afterwards used to support the explanation skill of a specific object (or group of objects) that acted as an outlier in the final SOM (Stefanovic and Kurasovay, 2018).
The heatmaps' patterns can be seen again, with relatively large proportions of chlorophyll a, nitrogen, phosphorus, and PMPL readings influencing the list's eventual left end. On the opposite side of the map, the IOJ, ESMI, and visibility indicators dominate (higher values indicate lower water turbidity in lakes). ESMI also has a scale that spans from 0 to 1, with 1 signifying a high level of ecological status. The interaction of conductivity-IOJ-ESMI (upper right-hand corner) and conductivity-PMPL-IOJ (lower left-hand corner) has also shaped some distinguishable locations. Across the whole cross-section of measurements, there were also lakes with relatively low values, with single markers critical for a shift in one of the major directions. The shape of data segmentations was determined in the following phases so that the model's results could be compared to the categorization presented in the original dataset (Wehrens and Kruisselbrink, 2018).
Model training results
The training dataset consisted of a collection of normalized measurements that were one of the outcomes of a prior study's approach (see 2.1 'Data and pre-processing'). The data has been preserved in the form of a matrix with the result of the original classification hidden from the algorithm. Thanks to this, an unsupervised version of SOM could be implemented. The aim of subsequent iterations of the training stage was to reduce the distance from each node's weights to the samples represented by that node (Stefanovič and Kurasova, 2011). Ideally, this distance should reach a minimum plateau (Nuhoǧlu and Yildirim, 2018). The mean value developed around consecutive noticeable drops. The series reached a turning point at iteration 8 364, when the mean distance steadied at around 0.013 on average, with the lowest value being 0.0012. The training process was terminated after an iteration of 10 000 due to detection of local stabilization of the variable waveform when increasing repetitions (See Figs. 5-8).
Clustering and segmentation
The clustering was performed with the use of the k-means algorithm (Yang et al., 2012). The total within mean of squares was considered in the context of 1 to 15 possible cluster counts. In order to detect the optimal number of clusters for the considered set of observations, the classic elbow method was used (Ghayekhloo et al., 2015). The number of three clusters was found to be the most optimal. In addition, the result of an alternative way of determining the number of clusters -the average silhouette, which revealed two clusters -was examined (Susilowati et al., 2020). In this research, the results of the techniques for determining the appropriate number of clusters played a supportive role. They were utilized as anchoring points for the segmentation method, ranging from two to five clusters (as recommended by the elbow technique), allowing the set's unsupervised segmentation to be traced back to the number of classes corresponding to the initial categorization of lake ecological state (five classes). Thus, it is possible to assign a value in the selected range (e.g. 1 to 3 or 1 to 5) to each of the objects as a measure accompanying the expert result.
In the case of segmentation with the use of two clusters, nodes clearly stand out from lakes with a particularly bad ecological condition (original class: very bad) (Fig. 7a) However, the cluster that separates these objects from the rest is not homogeneous. The objects were Fig. 1. The count of objects mapped to a given node. The average node has ca. 5 objects. The main goal was to prevent the occurrence of empty nodes and also to avoid creating nodes with large values comparing to those existing in the entire grid. Fig. 2. The unified distance matrix shows the Euclidean distance between each node and its neighbours. The dissimilarities are visible in the upper left part of the matrix, pointing to the potential boundaries within the studied set of lakes.
assigned to a common group on the basis of the results of the measurements of nitrogen, PMPL and phosphorus. In this way, 16 objects were separated from the original 'very poor' class, creating a separate group of lakes. The three-class option proposed by the elbow algorithm produced a cluster separating a subset of lakes initially included in the 'good' and 'very good' classes -a total of 188 items (Fig. 7b). They are primarily distinguished by high visibility, IOJ, and ESMI measurements. The four-cluster version is the first to come out of the framework imposed by segmentation shaping methods (Fig. 7c). In the early stages of modeling (when k = 2), the class of lakes with poor ecological condition was split. The three lakes therefore formed their own category, although they retained the original categorization, which was based mostly on the 'very bad' category (except for one case). The fourth splitsequence, which resulted in a total of five clusters, was used to refer to the initial set of scores (Fig. 7d). Hence, the results of unsupervised SOM on the dataset served as notes indicating the position of the object in one of the meta classification options. This stage allowed for the identification of lakes with higher shares of results for PMPL, conductivity and IOJ, thus separating a valuable subset for the classification results inheriting features from the original 'very poor' class.
An interesting case is the early separation (when k = 2) of an object originally belonging to the 'poor class' and an assignment to a cluster grouping lakes with a particularly poor ecological condition, visible in the resulting diagram of the process of back-mapping the model results to the original set of classes. Case study analysis appears to be an acceptable strategy in situations like these to detect the source of the original expert choice at an early stage of future analyses, where the given assessment could be supported by conditions outside the scope of the measured indicators. Also, in such cases, human error or shortcomings in the evaluation methodology cannot be excluded. Nevertheless, even if the in-depth analysis of the case study does not show significant arguments in favor of a correction in the primary classification, the authors propose, in accordance with the approach used in this work, to leave the object in its original class, and assign to it the attribute of high risk of transition to a lower class in the absence of remedial actions taken until the next evaluation.
Obtaining solid information regarding the need to prioritize remedial measures in lakes that were identified at an early stage as a subgroup with an exceptionally low ecological state is one of the outcomes of the SOM development process. The issue of selecting lakes in terms of recommended pro-ecological practices also applies to cases with the potential for promotion from the moderate to a good group, where, based on the classification of similarity in the cluster, there are 8 such sites. Thus, the direction of corrective actions is to be indicated by: a) analysis of the results contained in heatmaps -reduction of uncertainty related to the lack of measurement results by recognizing the patterns of shaping the results in the entire dataset, b) detection of cumulative impacts in the dimensionally reduced dataset (vector maps) along with the definition of groups of factors influencing the ecological state of the lake, c) identification of ecologically similar lakes within clusters, which enables the selection of specific remedial methods, if they were used (or function in the current water management program), and also helps in assessing their potential effectiveness, d) prioritization supported by classification methods in the field of machine learning to obtain additional information resulting from the analysis of variables in n-dimensional terms.
Discussion
Based on measurements supplied for 497 lakes in Poland, the decision support potential of a machine learning-based unsupervised ecological state classifier for lake ecosystems is investigated. We proposed the use of the Self Organizing Maps algorithm as a tool complementing the results of the methodology used in expert research. Classification support is important in cases where the used capacity of the original class, resulting from the provisions of the WFD, does not allow for selecting subgroups of ecologically similar lakes. As shown by the example from this study, it was possible to separate from the area of "very poor" and "poor" classes sixteen objects forming a subgroup of lakes with a particularly bad ecological condition, with similar measurement results identified in the space reduced to two leading dimensions. In the subsequent stages of class segmentation, substructures of the set of observations were revealed. This gave a premise for the use of the solution as decision support in the process of prioritizing the use of remedial action groups in ecosystems. This means that before the lake reaching its target ecological status the logical goal is a qualitative transfer to the subgroup that has the best chance of doing so in the future evaluation. A program of evaluation of actions taken on lakes that, retrospectively, were ecologically similar, may be helpful in effectively proposing specific actions aimed at the identified problems, and then, in 6. The evaluation of distance between objects assigned to each cluster plotted against the number of clusters. An inflection points to three clusters as the optimal solution (k = 3). Taking into account that the selection of the number of clusters depends on the number of nodes maps, the versions with the number of clusters 2,4 and 5 were also taken into account. the next campaign, change the subgroup to "more favorable".
The detection of ecological outliers at an early stage (k = 1) clearly indicates a possible approach to prioritizing ecosystems in the context of remedial actions within water management programs. The use of the knumber of cluster parameters at the selection stages makes it possible to separate groups of outliers in the initial phase, and then repeat the function on the set reduced by these objects. Therefore, the given number of clusters is constant in the process and results from the adopted segmentation methodology. Lakes are classified until the sample is exhausted. Individual groups indicated in subsequent iterations can then be assigned a prioritization level corresponding to the n th iteration in which the classifier indicated the outlier subset. The grouping conditions at each stage can be checked and interpreted on the basis of heatmaps or weight vector maps.
One of the main issues connected with the presented method concerns the quality of the input data (Flexer, 1997). In order to build a map, there is a need to value each dimension of each sample member. This is a limiting aspect of the usage of SOMs, sometimes referred to as missing data problems, because it is not always possible and frequently particularly difficult to obtain all of the required data. Another issue is that each SOM is unique and discovers distinct patterns in the sample vectors (Kohonen, 2013). SOMs organize sample data so that comparable samples are generally surrounded by similar samples in the end result; nevertheless, similar samples are not necessarily close to each other. For instance, if there are hues of a given color in the map, occasionally, the clusters may divide, resulting in two (or more) groupings with the same color (Halgamuge and Wang, 2005). The final issue with SOMs is that they are computationally costly, which is a substantial disadvantage since as the dimensions of the data become larger, dimension reduction visualization techniques become more essential, but the time required to calculate them grows as well (Liu et al., 2006).
Despite the indicated limitations, the solution has the advantage of high information capability, which makes the results easy to understand and thus, interpret (Qu et al., 2021;Wehrens and Kruisselbrink, 2018). Moreover, algorithms efficiently categorize data and then assess their own quality, allowing for the estimation of how good a map is and how strong the similarities between items are (Oprea et al., 2020;Yotova et al., 2021). Thanks to this, the solution meets the requirements of process transparency and, with conscious use, the results can be consulted between experts and policymakers on the basis of intuitive visualizations presented in this paper (Chung et al., 2018;Khamassi et al., 2006;Paini et al., 2010). The presented method highlights the need for more effective and transparent visual communication between experts, society and decision makers. This facilitates the process of public consultation for the results of updated water management plans (Tokarczyk-Dorociak et al., 2019). The positive impact of conscious Fig. 7. a) result map with two clusters (k = 2) showing a group of lakes with a particularly low ecological status (upper left corner), also separating similar objects (3 lakes), creating a cluster that is not contiguous on the map surface; b) the map version with three clusters (k = 3), selected as optimal by the "elbow method" does not affect the structure of the cluster created in the previous example, but the main area of the map has been divided, creating two new subclasses, the continuity of which is disturbed by three objects in the upper right-hand corner of the map; c) in the case of a solution with four clusters (k = 4), the cluster created in the first approach (k = 2) was separated by dividing a subset of lakes with a low ecological status into two separate subgroups. On the other hand, the map fragment created in version 1 remains unchanged, when k = 3; d) the five-cluster version (k = 5) was created to reflect the original ecological status class division. The newly created division included a group of lakes intermediate between the ecologically worst and the average subgroup, including in this cluster three objects disturbing the continuity of the division in versions k = 3 and k = 4.
increasing of communication skills on the course of the environmental assessment process has been widely discussed in the literature (Few, 2006;Murchie and Diomede, 2020;Ståhl and Kaihovirta, 2019;Xiong et al., 2020). In addition, from the analytical point of view, the results obtained from the SOM analysis can be used to predict the ecological status of lakes by combining non-linear map representation with linear statistical forecasting methods for each homogenous sub-group to improve prediction accuracy.
Conclusions
The solution suggested in the paper aims at supporting existing expert systems for assessing the ecological condition of lakes, using an unsupervised machine learning algorithm taking a format of a Self-Organizing Map. Presenting the set of measurements, using heatmaps, makes it possible to intuitively trace the shaping of variables of their mutual relations. Moreover, the use of a weight vector map contributes to an increase in the interpretative ability of measurements in a map reduced to two dimensions.
In this paper, the introduced procedure of map design and clustering with different versions of the division was referred to the set of expert classification results in the back-mapping process, showing the underlying structure of the results when treated with unsupervised data comprehension approach. The requirement to optimize the selection of groups of pro-ecological actions based on a mix of empirical facts and advantages supplied by latent knowledge led to the extension of the classic method with an additional supportive priority inside decision making. Despite taking a step toward increasing the informative skill of lake ecological condition evaluation, we recognize the necessity to give answers to the problem of data efficiency, such as in instances of data shortage, which results in significant, and frequently uncontrollable, uncertainty of generated findings. With this in mind, the next step is to research the assimilation of measurement data of lake ecosystems in order to minimize the impact of identified quality deficiencies in the data on the application of further stages of the evaluation procedure.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 8.
Results of back-mapping the segmentation obtained with Self Organizing Maps to original ecological classes. Already in the case of the map with k = 2, there is a visible separation within the "very bad" class, which persists in subsequent runs of the model (only deepening the division). Thanks to the division into successive segments, it is possible to assign lakes (groups of lakes) to individual subtypes within one primary class.
|
2022-01-06T20:27:30.512Z
|
2021-12-01T00:00:00.000
|
{
"year": 2021,
"sha1": "31acb7544714bdea0e1b4213bcdf0a16cd0b5e08",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.ecolind.2021.108318",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c8b4fb0391894abf7300cb764d2684045f4ad4df",
"s2fieldsofstudy": [
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
249832353
|
pes2o/s2orc
|
v3-fos-license
|
Effects of Inoculation With Acinetobacter on Fermentation of Cigar Tobacco Leaves
Metabolic activity of the microbial community greatly affects the quality of cigar tobacco leaves (CTLs). To improve the quality of CTLs, two extrinsic microbes (Acinetobacter sp. 1H8 and Acinetobacter indicus 3B2) were inoculated into CTLs. The quality of CTLs were significantly improved after fermentation. The content of solanone, 6-methyl-5-hepten-2-one, benzeneacetic acid, ethyl ester, cyclohexanone, octanal, acetophenone, and 3,5,5-trimethyl-2-cyclohexen-1-one were significantly increased after inoculated Acinetobacter sp. 1H8. The inoculation of Acinetobacter sp. 1H8 enhanced the normal evolutionary trend of bacterial community. The content of trimethyl-pyrazine, 2,6-dimethyl-pyrazine, and megastigmatrienone were significantly increased after inoculated Acinetobacter indicus 3B2. The inoculation of Acinetobacter indicus 3B2 completely changed the original bacterial community. Network analysis revealed that Acinetobacter was negatively correlated with Aquabacterium, positively correlated with Bacillus, and had significant correlations with many volatile flavor compounds. This work may be helpful for improving fermentation product quality by regulating microbial community, and gain insight into the microbial ecosystem.
INTRODUCTION
Traditional fermentation processes, such as Chinese liquor, cigar, and fermented vegetable, are mainly driven by complex microbial communities (Song et al., 2017;Liu et al., 2019;Smyth et al., 2019). However, these processes are long and uncontrolled (Jin et al., 2017;Wu et al., 2021). Modern artificial fermentation has been trying to control these processes by adjusting the fermentation conditions or adding specially formulated starters to the natural fermentation system (Shilei et al., 2019). These methods were widely used in the traditional multi-species fermentation industry because of their various advantages, including shortening fermentation times, improving product qualities, and improving safety. For example, the pH condition considerably affected hydrogen fermentation, hydrogen gas was efficiently produced with unconditioned anaerobic sludge when the pH was adjusted to 6.0 throughout the culture period (Kawagoshi et al., 2005). When Debaryomyces hansenii and Yarrowia lipolytica were incorporated June 2022 | Volume 13 | Article 911791 Zheng et al.
Inoculation of Acinetobacter to Tobacco into the cheese as part of the starter, the cheese could significantly shorten the ripening time while maintaining a good strong flavor (Ferreira and Viljoen, 2003). The inoculation of Bacillus licheniformis could affect the microbial community structure and enzyme activity of Daqu, and increased the content of pyrazines and aromatic compounds (Wang et al., 2017). Additionally, the inoculation of Saccharomyces uvarum and Saccharomyces servazzii significantly improved the quality of Chinese liquor (Wu et al., 2016). However, adjusting fermentation conditions is sometimes not effective, because some traditional fermentation processes have been adjusted thousands of times in their long history. Exogenous addition of microorganisms does not always produce positive results, because inoculated microorganisms cannot promote the evolution of the community towards favorable product fermentation (Wu et al., 2016). The main reason for the failure of externally added microbes is that microbial interactions between the inoculated microbes (extrinsic) and the native microbes (intrinsic) lead to incomplete structure and function of microbial communities. Microbial interactions, such as symbiotic, synergistic, predation, parasitic, and competitive, can greatly affect the metabolic activity of the microbial communities (Ghosh et al., 2016;Nawaz et al., 2022;Pierce and Dutton, 2022). Rational addition of exogenous microorganisms to activate interactions between microorganisms could strengthen the metabolic capacity of the microbial community. However, it is difficult to determine whether the addition of microbes can promote the fermentation of product unless the microbial interaction is deeply studied. Therefore, it is very important to select suitable microorganisms and study the effects of externally added microbes on the structure and function of microbial communities.
Cigars are one of the oldest traditional tobacco fermented products (Reid et al., 1937). Flavor-producing microbes play an important role in tobacco fermentation (Liu et al., 2021). In our previous work, we found that Acinetobacter were the important producers of the aldehydes and ketones. And we isolated two flavor producers, Acinetobacter sp. 1H8 and Acinetobacter indicus 3B2 from high-quality cigar tobacco leaves (CTLs). We speculated that their addition could improve the quality of some ordinary CTLs. In this study, we investigated whether these two extrinsic stains could improve the quality and flavor of the CTLs and their effects on the structure and function of bacterial communities.
Strains
The strains used in this study were isolated from the CTLs and deposited in the China General Microbiological Culture Collection Center (CGMCC): Acinetobacter sp. 1H8 CGMCC NO.23678 and Acinetobacter indicus 3B2 CGMCC NO.23679.
Inoculation Experiment
The fermentation medium for culturing strains was prepared with sucrose 20 g/l, peptone 20 g/l, yeast powder 15 g/l, K 2 HPO 4 1.5 g/l NaH 2 PO 4 3 g/l, and autoclaved at 121°C for 15 min. Strains were inoculated by sterile loops in 250-ml flasks with 50 ml fermentation medium, and cultured at 220 rpm 30°C for 36 h. Then the fermentation broth was inoculated into CTLs with 30% inoculation amount (v/g) and fermented in a biochemical incubator. All fermentations were conducted without agitation at 30°C for 15 days. The unfermented CTLs and uninoculated CTLs were prepared as Control 1 and Control 2, respectively. All experiments were performed in triplicate. The quality of CTLs was blindly assessed by three professional tasters. With 10-20 years of testing experience, these tobacco tasters have conducted sensory evaluation on more than 2,000 cigar samples, and can accurately, consistently, and repeatedly evaluate cigars.
Volatile Flavor Compound Analysis
Volatile flavor compounds (VFCs) in CTLs were analyzed by headspace solid phase microextraction-gas chromatography-mass spectrometry (HS-SPME-GC-MS). CTLs were dried at 40°C and pulverized by a grinder. 1.5-g powder was placed in a 10 ml glass vial and extracted by headspace solid-phase microextraction (50/30 μm DVB/CAR/PDMS fibre, Supelco, Bellefonte, PA, United States) at 60°C for 30 min. After extraction, volatile flavor compounds (VCFs) were identified using a Pegasus BT GC-TOFMS (LECO Co., St. Joseph, MI, United States), with a DB-5MS column (60 m × 0.25 mm id × 0.25 μm film thickness). Helium C-60 was used as a carrier gas with a flow rate of 1 ml/min, and the injector port was heated to 250°C. The oven temperature was fixed at 40°C for 2 min, increased to 250°C at a rate 10°C/min, and then held for 5 min. Meanwhile, the transfer line and ion source temperatures were maintained at 280°C and 210°C, respectively. Electron impact (EI) was used as the ionization mode, with an EI voltage of 70 eV, and a mass scan range of 33-400 m/z was used for full-scan mode with an acquisition rate of 10 scans/s. The acquired GC-MS raw data from the QC sample was analyzed with the Automatic Mass Spectral Deconvolution and Identification System (AMDIS) to verify individual analytes' presence and deconvolute the co-eluting peaks as previously described. The parameters for the deconvolution were set as follows: component width = 20, adjacent peaks subtraction = 2, resolution = high, sensitivity = low, and shape requirements = medium. The parameters for peak detection were set at the default values. The detected compounds with an abundance of <1,000 and a signal-to-noise value of <50 were removed to discard identification errors or duplication. Compounds were identified by comparing the mass spectra and retention index with those in the NIST08 Mass Spectral Database, the Agilent Fiehn Metabolomics Retention Time Locked (RTL) Library, and an in-house library focused on tobacco metabolites. Standards for constructing the in-house library were prepared as described above. A SCAN + SIM method and an in-house automatic integration method were then established in the Agilent MSD Chem Station and applied to quantify the selective ion traces as previously described. Manual corrections were performed to guarantee the accuracy of the integration. A three-dimensional matrix was generated, including the sample information, peak retention time, and relative peak intensities. Internal standards and any known artificial peaks, such as peaks caused by noise, column bleed, and derivatization procedures, were removed from the matrix.
The 16S rRNA gene sequences were processed using QIIME 2 (Bolyen et al., 2019). Briefly, raw sequencing reads with exact matches to the barcodes were assigned to respective samples and identified as valid sequences. The low-quality sequences were filtered through the following criteria (Chen and Jiang, 2014): sequences that had a length of <150 bp, sequences that had average Phred scores of <20, sequences that contained ambiguous bases, and sequences that contained mononucleotide repeats of >8 bp. Paired-end reads were assembled using FLASH (Magoc and Salzberg, 2011). After chimera detection, the remaining high-quality sequences were clustered into amplicon sequence variants (ASVs). Taxonomic classification was performed using the plugin q2-feature-classifier using the classify-sklearn method (Pedregosa et al., 2011) and the pre-trained SILVA version 132 database (Quast et al., 2013) with 99% similarity. Alpha diversity indexes were calculated by using the command "qiime diversity alpha-rarefaction. " PICRUSt (Phylogenetic Investigation of Communities by Reconstruction of Unobserved States) function prediction was performed using bacterial 16S rRNA sequencing data to determine the abundance of microbial functional genes in the KEGG metabolic pathway (Langille et al., 2013).
Statistical Analysis
Heat maps and cluster analyses were performed in the statistical environment R v. 4.0.0. The Galaxy 1 was used for linear discriminant analysis effect size (LEfSe) analysis to assess significant differences of CTLs with different treatments. Additionally, the correlation between the representative bacteria and core VFCs based on Spearman's correlation coefficients (p < 0.05, |r| > 0.3), network analysis was performed by using Gephi software.
Bacterial Composition Varied After Inoculation and Fermentation
To explore the influence of the inoculation of Acinetobacter on original bacterial community, bacteria in CTLs with different treatments were sequenced. The high-throughput sequencing generated 642,151 sequence reads from sequence reads from 12 CTLs. After the quality control processing, including filtered, denoised, merged, and nonchimeric, there were 483,897 highquality sequences, with an average of 40,324 sequences, and obtained 1,914 ASVs. The abundance of bacterial taxa was shown in Figure 1, Proteobacteria, Firmicutes, and Bacteroidetes were predominant phyla, Aquabacterium, Bacillus, Acinetobacter, Muribaculaceae, and Pseudomonas were predominant genera. There was no doubt that fermentation changed the composition of bacterial communities. At phyla level, the abundance of Proteobacteria and Firmicutes increased from 59.9%5 and 9.98% to 77.59% and 15.58% after fermentation, respectively, while the abundance of Bacteroidetes decreased from 23.10% to 2.16%. At genera level, the abundance of Aquabacterium increased from 14.26% to 41.08%, while the abundance of Muribaculaceae and Pseudomonas decreased from 17% and 14.04% to <0.01%. Similarly, external microorganisms also changed the composition of bacterial communities. At phyla level, the abundance of Proteobacteria increased from 59.95% to 95.61% after inoculation with Acinetobacter sp. 1H8 and decreased to 37.63% after inoculation with Acinetobacter indicus 3B2, while the abundance of Bacteroidetes decreased from 15.58% to 3.1% after inoculation with Acinetobacter sp. 1H8 and increased to 37.63% after inoculation with Acinetobacter indicus 3B2. At genera level, the abundance of Aquabacterium increased from 41.08% to 59.01% after inoculation with Acinetobacter sp. 1H8 and decreased to 27.61% after inoculation with Acinetobacter indicus 3B2. The abundance of Bacillus and Acinetobacter increased from 1.83% and 0.62% to 33.68% and 22.43% after inoculation with Acinetobacter indicus 3B2, respectively.
The alpha diversity of the bacterial community was evaluated by indices including Chao1 index, Shannon index, and Simpson index, in which the first and the latter two represent richness and diversity, respectively (Figure 2A). They all showed external microorganisms significantly reduced the richness and evenness of original bacterial community. Unconstrained principal coordinate analysis (PCoA) of Weighted-unifrac distance revealed that the microbiota of CTLs with different treatment formed three distinct clusters (Figure 2B), which separated along the second coordinate axis. As expected, the inoculation of Acinetobacter indisputably altered the structure of bacterial community.
Significant Different Microbes After Inoculation and Fermentation
To explore the different bacteria among CTLs with different treatments, LEfSe analysis was conducted to reveal the significant differences below the level of phylum (Figure 3). The circles from inner to outer represents bacteria classification from phylum to genus levels, and corresponding colors in every group denotes bacteria taxa with a significant difference. Notably, 81 different bacteria appeared in the LDA threshold of 2 judging by statistically significant differences (p < 0.05), which consist of 5 phyla, 4 classes, 18 orders, 25 families, and 29 genera. In detail, 17 genera were significantly enriched in uninoculated CTLs, such as Rhodococcus, Blastococcus, Promicromonospora, and Sanguibacter. Eight genera were significantly enriched in unfermented CTLs, such as Lactobacillus, Sphingobium, and Rhizopus. Two genera were significantly enriched in CTLs inoculated Acinetobacter indicus 3B2, such as Bacillus and Acinetobacter. Only Aquabacterium was significantly enriched in CTLs inoculated Acinetobacter sp. 1H8.
Changes in Bacterial Metabolic Pathways After Inoculation and Fermentation
PICRUSt analysis results showed that there were more functional genes annotated to biosynthesis, degradation/utilization/ assimilation, generation of precursor metabolite and energy, glycan pathways, macromolecule modification, and metabolic clusters (Figure 4). The cluster analysis identified the CTLs inoculated Acinetobacter sp. IH8 and uninoculated CTLs were classified into one cluster with the similar abundance of bacterial metabolic pathway, this may be because they have similar bacterial community. The addition of external microorganisms not only changed the structure of bacterial community, but also changed the function of bacterial community. The abundance of a large number of bacterial metabolic pathways decreased significantly after inoculation. LEfSe analysis was conducted to identify the significant different bacterial metabolic pathways in CTLs with different treatments (Figure 5). Metabolic pathways involved in biosynthesis, including fatty acid and lipid biosynthesis, amino acid biosynthesis, nucleoside and nucleotide biosynthesis, aromatic compound biosynthesis, and carbohydrate biosynthesis, were significantly enriched in unfermented CTLs. Metabolic pathways involved in degradation, including aromatic compound degradation, fatty acid and lipid degradation, nucleoside and nucleotide degradation, were significantly enriched in CTLs inoculated Acinetobacter sp. IH8. Metabolic pathways involved in cell growth, including cell structure biosynthesis, inorganic nutrient metabolism, amino acid biosynthesis, and TCA cycle, were significantly enriched in CTLs inoculated Acinetobacter indicus 3B2.
Interactions Between Extrinsic and the Intrinsic Microbes
Microbial interactions are the main factors shaping community structure (Mould and Hogan, 2021). To elucidate interactions between the inoculated strains and the native bacteria, an association network was established based on bacterial abundance (Kumar et al., 2019). As showed in Figure 6, Acinetobacter was negative correlative with Aquabacterium, and was positive correlative with Bacillus. There must be a non-cooperative relationship between Acinetobacter and Aquabacterium, such as competition, parasitism, predation, and antagonism. There must also be a cooperative relationship of mutualism and symbiosis between Acinetobacter and Bacillus. The inoculated Acinetobacter 1H8 was inhibited by Aquabacterium, while inoculated Acinetobacter indicus 3B2 inhibited Aquabacterium and promoted Bacillus. They in turn affected other microbiotas, such as Staphylococcus, Aerococcus, Lutispora, and Zoogloea. In addition to the negative correlation between Acinetobacter and Aquabacterium, other interactions were positive.
Changes in the Profiles of Volatile Flavor Compounds After Inoculation and Fermentation
Acinetobacter were found to be the main producer of produce aldehydes and ketones in CTLs in our previous studies. They could produce flavor-related aldehydes and ketones in a simple synthetic medium, such as benzaldehyde, phenylacetaldehyde, 4-hydroxy-3-ethoxy-benzaldehyde, and 3,5,5-trimethyl-2cyclohexene-1-one. There is no doubt that they are able to produce more products in CTLs because they have more nutrients available.
Correlation Analysis of the Predominant Bacteria and VFCs
The changes of volatile flavor compounds were mainly caused by microbial metabolism, so it was very important to determine the relationship between bacteria and VFCs. The relationships between VFCs (n:37) and representative bacteria (n: 20) were analyzed by Spearman's correlation coefficients and visualized by Gephi. As shown in Figure 8, Acinetobacter were positively related to acetic acid, ethyl ester, 2-ethyl-1-hexanol, and megastigmatrienone, and negatively related to 6-methyl-5-hepten-2-one, hexanal, 3-methyl−butanal, acetophenone, benzeneacetic acid, ethyl ester, octanal, 3,5-octadien-2-one, and 2,6-dimethyl-2,6-octadiene. Aquabacterium and bacillus, which were significantly associated with Acinetobacter, were also related to many volatile flavor compounds. Aquabacterium was positively related to most volatile flavor compounds, while Bacillus was negatively related to most volatile flavor compounds.
Changes in Flavor of Cigar Tobacco Leaves After Inoculation and Fermentation
Not surprising, the quality of CTLs was significantly changed after inoculation and fermentation. Surprisingly, however, the tasters gave the same evaluation to CTLs inoculated different strains. They felt the bean fragrance, mellow, smooth, aftertaste, sweetness, cleanliness, and aftertaste were significantly improved, and the impurities and irritation were reduced. By analyzing the function of VFCs, it was found that solanone may be the main force to improve the quality of CTLs, because it could reduce impurities and irritation, and increase the mellow, fluency, lingering, sweetness, cleanliness, and aftertaste of CTLs. Bean flavor might be composed of different combinations and proportions of VFCs.
DISCUSSION
This work revealed that the quality of CTLs were improved by the addition of extrinsic microbes. Changes in the quality of CTLs were not only related to the flavor production ability of the extrinsic microbes, but also to the interaction between the external and internal microbes. Microbial interactions were important forces in reconstructing the microbial community (Romdhane et al., 2022). The interactions between the external and internal microbes changed the original microbial community. Meanwhile, changes in microbial community led to variations in VFCs and quality of CTLs. Therefore, it is important to gain deep insight into these changes and the underlying reasons for these changes. In this work, two extrinsic microbes (Acinetobacter sp. 1H8 and Acinetobacter indicus 3B2) were inoculated into CTLs, they exerted different effects on the original microbial community. The abundance of Acinetobacter remained stable in uninoculated CTLs after fermentation. When Acinetobacter sp. 1H8 was inoculated in CTLs, this strain was completely inhibited by the native microbiotas, the abundance of Acinetobacter was significantly reduced after fermentation. The inoculation of Acinetobacter sp. 1H8 caused the endogenous microbes to turn from competition to cooperation to compete together against the extrinsic microbes to suppress the extrinsic perturbation. Normally, the microbial community remains stable due to the complete suppression of extrinsic microbes by endogenous microbes (Wu et al., 2016). However, although Acinetobacter sp. 1H8 grew poorly, it still influenced the overall metabolic activity of the intrinsic microbes. Aquabacterium proliferated wildly upon stimulation by exogenous microorganisms, their abundance was significantly increased. Its influence has had a positive effect. From the perspective of the overall change of microbial community, the inoculation of Acinetobacter sp. 1H8 enhanced the normal evolutionary trend of bacterial community. Similar result was also found in in the cocultures of Metschnikowia pulcherrima and S. cerevisiae, the positive effect leads to increases in the types and amounts of metabolites, such as fatty acids, ethyl esters and acetates, and terpinol (Sadoudi et al., 2012). It may be helpful to promote the succession of microbial community, accelerated the fermentation process, and shortened the fermentation time . When Acinetobacter indicus 3B2 was inoculated in CTLs, they were successfully colonized CTLs, the abundance of Acinetobacter was significantly increased after fermentation, meanwhile, they also caused an increase in the abundance of Bacillus and a decrease in the abundance of aquabacterium. The inoculation of Acinetobacter indicus 3B2 greatly changed structure of the microbial community in CTLs. Acinetobacter indicus 3B2 might be considered keystone specie in CTLs, which has an extremely high impact on a particular ecosystem (Rottjers and Faust, 2019). It was also critical for the overall structure and function of an ecosystem (Banerjee et al., 2018). The inoculation of Acinetobacter also significantly changed the metabolic functions in bacterial communities. Metabolic pathways involved in degradation were significantly enriched in CTLs inoculated Acinetobacter sp. IH8. The inoculation of Acinetobacter sp. IH8 significantly increased the degradation ability of macromolecular substances in microbial community, it would increase precursors of VFCs. The inoculation of Acinetobacter indicus 3B2 significantly promoted the growth of some functional microbiotas, such as bacillus and Acinetobacter, which were found to be the main producers of aldehyde and ketones in CTLs in our previous study. Microbial interaction analysis found Acinetobacter was negative correlative with Aquabacterium, and was positive correlative with Bacillus. Acinetobacter were found to be the main producer of produce aldehydes and ketones in CTLs in our previous studies. They could produce flavor-related aldehydes and ketones in a simple synthetic medium, such as benzaldehyde, phenylacetaldehyde, 4-hydroxy-3-ethoxy-benzaldehyde, and 3,5,5-trimethyl-2cyclohexene-1-one. There is no doubt that they are able to produce more products in CTLs because they have more nutrients available. Their inoculation also greatly changed volatile flavor compound profile of CTLs. The inoculation of Acinetobacter sp. 1H8 significantly increased the content of solanone, 6-methyl-5-hepten-2-one, benzeneacetic acid, ethyl ester, cyclohexanone, octanal, acetophenone, and 3,5,5-trimethyl-2-cyclohexen-1-one. The inoculation of Acinetobacter indicus 3B2 significantly increased the content of trimethyl-pyrazine, 2,6-dimethyl-pyrazine, and megastigmatrienone. These VFCs have an important contribution to the flavor of CTLs. For example, solanone may be the main force to reduce impurities and irritation, and increase the mellow, fluency, lingering, sweetness, cleanliness, and aftertaste of CTLs. Its content was greatly increased in CTLs inoculated both microorganisms. The increase of pyrazine would enhance baked, roasted, rosy, and honey-like aroma. However, due to the increase and decrease of a variety of VFCs, the mixed flavor compounds produce a new flavor. Bean flavor might be composed of a variety of VFCs, and there were differences in the composition of the two CTLs. This may be the main reason why CTLs inoculated different microorganisms showed a flavor characteristic.
CONCLUSION
When some traditional fermented products that are produced by spontaneous fermentation unable to meet the demands of consumers, inoculating extrinsic microbes may improve these products. In this work, we demonstrate that the inoculation of two extrinsic microbes (Acinetobacter sp. 1H8 and Acinetobacter indicus 3B2) improved the quality and flavor of CTLs. Inoculated microbes can not only exert their own metabolic ability, but also affect the structure and function of native microbial community. We revealed that the interaction between exogenous microorganisms and native microbes, the different effects of exogenous microorganisms on original microbial community, the association between microbes and VFCs, and the formation mechanism of tobacco flavor. Collectively, our present work has proved the effect of inoculated microorganisms in traditional fermentation industry and elucidated changes in the overall structure and function of microbial communities after inoculation. These results suggest that controlling the microbial community could significantly improve the quality and safety of fermentation products, and this work may provide a good way to gain insight into the microbial ecosystem of traditional fermentation.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: NCBI SRA BioProject, accession no: PRJNA813020.
AUTHOR CONTRIBUTIONS
TZ: conceptualization, data curation, formal analysis, methodology, software, and writing-original draft. QiZ, QW, and PL: investigation, methodology, and resources. XW, QuZ, and WC: methodology, resources, and project administration. JZ, GD, and DL: funding acquisition, supervision, and writingreview and editing. All authors contributed to the article and approved the submitted version.
|
2022-06-19T15:09:30.391Z
|
2022-06-17T00:00:00.000
|
{
"year": 2022,
"sha1": "83803a8d1fd5803085c5049c9f0e3c612fa0a13f",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2022.911791/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "60f4a2410321efb8756beba625da962cf7aca95c",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
249584592
|
pes2o/s2orc
|
v3-fos-license
|
Effect of surgical simulation training on the complication rate of resident-performed phacoemulsification
Objective To study the effect of additional training with ophthalmic surgical simulation on the intraoperative complication rates of phacoemulsification performed by residents. Methods and materials This was a retrospective study of phacoemulsification surgeries performed by third-year residents at Siriraj Hospital. The operations were classified into two groups according to the experience of the surgeon in simulation training, that is, trained vs untrained. The main outcome was the total rate of complications. Other outcomes, including posterior capsule rupture, anterior capsulorhexis tearing, zonular dehiscence, retaining of lens material and intraocular lens (IOL) implantation methods, were also analysed. Results In total, 2971 operations were performed, comprising 1656 operations by 21 residents in the trained group, and 1315 by 20 residents in the untrained group. The total rate of complications in the simulator-trained group was lower than in the untrained group (13.6% vs 17.3%, p=0.005). Only the rate of retaining lens material showed a statistically significantly reduction (p<0.001); however, the rates of posterior capsule rupture, anterior capsulorhexis tearing and zonular dehiscence were not significantly different (p=0.08, 0.17, 0.23, respectively). The IOL implantation methods and surgical aphakia rate between the two groups were similar (p=0.44). In the subgroup analysis, the posterior capsule rupture rate in the first half of all cases performed by the residents was lower in the trained group (8.8% vs 12.4%, p=0.02). Conclusion Ophthalmic simulation training reduces the total rate of complications of resident-performed phacoemulsification. It also shortens the learning curve for cataract surgery training, as indicated by the decreased posterior capsule rupture rate in the initial cases of cataract surgery.
INTRODUCTION
Phacoemulsification is a kind of cataract surgery, which is the most commonly performed surgery in the ophthalmology field. The complication rate in this surgery is higher among surgical trainees because becoming an expert in cataract surgery requires learning experience and the development of complex visuospatial skills. Many studies on resident-performed phacoemulsification have reported varying complications, and it has been reported that resident-performed surgery is an independent risk factor of posterior capsule rupture, 1 with the posterior capsule rupture rate ranging from 1.8% to 14.7%. 2 3 The ophthalmic surgical simulator Eyesi (VRmagic, Mannheim, Germany) was first introduced to the market in 2001 and later used in cataract surgery training in 2003. It simulates cataract surgery through the use of 3D virtual reality technology and enables residents to practice each step in phacoemulsification. Many studies have demonstrated the construct validity of the simulator. [4][5][6][7][8] Recent research has aimed to evaluate the correlation between ophthalmic simulation training and real-life surgery outcomes and complications. A few studies have reported that simulation training can reduce complications and the posterior capsule rupture rate WHAT IS ALREADY KNOWN ON THIS TOPIC ⇒ Ophthalmic surgical simulation training could reduce complications and the posterior capsule rupture rate of phacoemulsification surgery performed by residents.
WHAT THIS STUDY ADDS
⇒ Simulation training reduced the overall intraoperative complications and posterior capsule rupture in the first half of cases of phacoemulsification surgeries performed by residents. ⇒ Simulation training improved residents' surgical performance by shifting the learning curve of trainee surgeons.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE AND/OR POLICY ⇒ The supplementation of simulation training to conventional training is beneficial for residents and can improve patient safety. ⇒ Further studies need to be done to evaluate the effect of the duration of training and the simulator tasks scores on the actual proficiency of residents in performing real surgeries.
Open access of phacoemulsification surgery performed by trainee surgeons. [9][10][11][12] In contrast, one study found that simulation training did not reduce the complication rate but did shorten the learning curve in real-life surgeries. 13 The aim of this study was to demonstrate the effect of surgical simulation training on the complication rate of phacoemulsification surgery performed by third-year residents.
METHODS AND MATERIALS
The electronic medical records of Siriraj Hospital were reviewed to identify all the patients who had underwent phacoemulsification by third-year residents from 2010 to 2017.
In the Siriraj Hospital residency programme, secondyear residents begin by performing extracapsular cataract extraction surgery and then take a phacoemulsification wet LAB instruction course using pig eyes in the middle of their second academic year. In their third year, the residents begin performing phacoemulsification cataract surgery. The ophthalmic surgical simulator was introduced to Siriraj Hospital in 2012 and has been used since as a supplement to traditional training. Initially, from 2012 to 2014, simulation training was not a formal requirement before real surgery. In 2015, the simulation training curriculum was developed and completion of the standard simulation training course, comprising three categories (categories A, B and C), became mandatory before residents are allowed to perform real surgeries. Details of the curriculum are shown in figure 1.
Because of the variation in the simulation training of residents in 2012-2014, we excluded all resident operations performed during this period. We separated the remaining cases into two groups. The first group was the trained group and included cases performed during the 2015-2016 academic year by 21 residents who had additional training with the simulator. The second group was the untrained group and included cases performed by 20 residents who had no experience with simulation training during the 2010-2011 academic year, that is, before the simulator was introduced to Siriraj Hospital.
Between 2010 and 2017, there were no major changes in the wet lab curriculum, surgical equipment and surgical technique. The wet lab course was instructed by the same instructor and followed the core curriculum. Surgeons in both groups performed the operation by creating a 3 mm temporal limbal corneal incision. A peristaltic-pumpsystem phaco machine, 30° straight phaco tip and coaxial I/A tip were used in the surgeries.
The patients' demographics, including age, gender and systemic disease, were recorded. The LogMAR visual acuity, ocular comorbidity, cataract grading and ocular biometry measured by the IOLMaster instrument or ultrasonography were also collected. The operation information was retrospectively reviewed for the date of the operation, laterality of the eye, anaesthetic method, phaco time and intraoperative complications.
The main outcome sought was the total rate of complications. Other outcomes, including the posterior capsule rupture rate, anterior capsulorhexis tearing, zonular dehiscence, retaining of lens material and intraocular lens (IOL) implantation methods, were also studied.
Statistical analysis was performed using SPSS statistics software. The total rate of complications and specific complications in each group were compared using χ 2 test. ORs were reported as OR (95% CI). Statistical significance was defined as p<0.05. The nominal data were also compared by χ 2 test and the mean interval scale data were compared using independent t-test. Logistic regression analysis was performed and multivariable regression analysis was done using the forward stepwise method.
RESULTS
In this study, 41 residents performed 2971 operations at Siriraj Hospital, Mahidol University. Of these, 21 residents in the trained group performed 1656 phacoemulsification surgeries and 20 residents in the untrained group performed 1315 operations. The number of surgical cases per resident was higher in the former group. The laterality was similar in both groups but the anaesthetic method was different. The number of subconjunctival and topical anaesthesia cases were higher in the trained group, while the number of retrobulbar anaesthesia cases was lower. There were no differences in terms of age, gender, preoperative best-corrected visual acuity (BCVA), ocular pathology, axial length and anterior chamber Figure 1 Surgical simulation training curriculum. The standard courseware consists of three categories. In each category, regardless of the time taken, the trainee is required to achieve the minimum score three consecutive times in each step before proceeding to the next task (minimum score of 50 for category A, 70 for category B and 85 for category C). After completion of all the tasks in each category, the learner must pass the examination before they are permitted to start training in the next category. The exam consists of multiple tasks that are randomly selected by the instructor and a time limitation is set. To pass the exam, the trainees are required to finish all the tasks in the time limit with a higher score than the minimum criteria. IOL, intraocular lens.
Open access depth between the groups of patients. The number of patients with diabetes in the trained group was greater, but the number of patients with hypertension and heart disease in the trained group were lower than in the untrained group. The degree of nuclear cataract grading was also different; whereby the number of patients with nuclear sclerosis grade 1 in the trained group was greater, while nuclear sclerosis grade 3 was lower in the trained group than in the untrained group (table 1).
In the total 2971 cataract surgeries included in this study, there were 453 cases of complications: 225 in the trained group and 228 in the untrained group. The total rate of complications in the trained group was lower compared with in the untrained group (13.6% vs 17.3% OR=0.75; 95% CI 0.61 to 0.92, p=0.005) (table 2). Although the number and rate of posterior capsule rupture, anterior capsule tear, zonular dialysis, retaining of lens material and other complications seemed to be lower in the trained group, only the rate of retaining lens material was statistically significantly reduced (p<0.001). Most cases achieved IOL implantation in the bag. The IOL implantation methods were not different (p=0.44) and the rate of surgical aphakia was similar (0.4% vs 0.8%) (table 2).
The operations were split for each resident into two proportions: the first half of cases performed by the residents and then the second half. Subgroup analysis of the first and second halves of the total operations was then performed. Among the first halves of the operations, the posterior capsule rupture rate in the trained group was significantly lower than in the untrained group (8.8% vs 12.4%, p=0.02). In contrast, the posterior capsule rupture rate in the latter halves of the total cases was not significantly different (8.1% vs 8.2%, p=0.93) (table 3).
Logistic regression analysis was conducted to determine how several factors affected the total rate of complications of the resident-performed phacoemulsification. Preoperative BCVA, anaesthetic methods, and simulation training were factors that affected the complication rate in the univariable analysis. However, only preoperative BCVA and simulation training were significantly correlated with the complication rate in the multivariable analysis. Patients with a poorer preoperative BCVA had higher odds of developing complications (OR=1.32; 95% CI 1.10 to 1.59, p=0.003), while simulation training was associated with lower odds of complications (OR=0.75; 95% CI 0.61 to 0.92, p=0.005) (table 4).
DISCUSSION
The main goal of this research was to demonstrate the effectiveness of ophthalmic surgical simulation training in resident-performed cataract surgery. The results supported the belief that simulation training is related to a lower total rate of complications. Although the rate of posterior capsule ruptures was not significantly different, it was lower in the first half of the surgeries performed. To the best of our knowledge, there are five studies that have compared the effect of simulation training on the rate of complications of resident-performed cataract surgery. [9][10][11][12][13] Lucas et al found that simulation training significantly reduced the total complications and posterior capsule ruptures in the first 10 cataract surgeries performed by residents. 11 In a larger sample size study, Staropoli et al showed a significant reduction in total complications and posterior capsule ruptures after simulator training had been added as a supplement to traditional cataract surgery training. 10 Ferris et al also found that the posterior capsule rupture rate was reduced by 38% following the implementation of a simulator. 12 In contrast, Pokroy et al did not demonstrate a difference in the total complication and posterior capsule rupture rates after simulation training. 13 Simulation training allows residents to gain hands-on and situational experience. Pokroy et al found that it could shorten the learning curve of residents for performing phacoemulsification. 13 This supports the reason why simulation training reduced the total complications in our study. Although our study did not demonstrate a difference in the overall posterior capsule rupture rate, it was found to be significantly lower in the trained group for the first half of the residents' total cases but was not different in the latter half of the cases according to the subgroup analysis (table 4). This may be explained by the learning curve effect. Residents with experience of simulation training will have built up some skills and experience before performing real surgery, which shifted the learning curve in the first half of cases. In later operations, both groups of residents had now gained more experience and become more proficient in surgery and thus their proficiency reached a plateau. Moreover, this study involved a greater number of surgical cases than other single-centre studies and included all phacoemulsification surgeries performed by third-year residents in both a trained and untrained group. Therefore, the untrained residents could gain more experience by performing real surgery and they could catch up with simulation-trained residents in the later performed surgeries. This relationship between increasing surgical experience and lower complications in later cataract surgery performed by residents was also demonstrated in previous studies. 3 14 We reviewed the baseline clinical characteristic data (table 1) that could confound the total rate of complications. We also performed multivariable analysis in order to minimise the limitations of this retrospective study. Most the characteristics of the cases in the trained and untrained group were similar except for the anaesthetic method, and incidence of diabetes mellitus, hypertension, coronary artery disease, and degree of nuclear sclerosis. The higher number of subconjunctival and topical anaesthesia cases in the trained group may imply that simulation training could increase confidence and surgical skills. However, in previous studies, the anaesthetic factor was not reported to be associated with the complication and posterior capsule rupture *Data were unavailable in some cases due to incomplete medical records. †Data were unavailable in some cases due to incomplete medical records and some patients had other types of cataract. BCVA, best-corrected visual acuity.
Open access rates in resident-performed phacoemulsification. [15][16][17][18] Despite diabetes being related with a higher incidence of complications in a previous study, the patients in the trained group who had a higher incidence of diabetes had lower complications. 19 White mature and dense nuclear cataracts were risk factors of posterior capsule rupture, there was no difference between the trained and untrained groups for these types of cataract. 20 In the multivariable regression analysis, only the simulation training and preoperative BCVA were found to be related to the total rate of complications in resident-performed phacoemulsification. Blomquist et al also found that a worse preoperative BCVA was correlated with intraoperative vitreous complications in resident-performed phacoemulsification. 21 In contrast, Rutar et al did not demonstrate any correlation between preoperative BCVA and total complications. 15 There were some limitations of this study to note related to it being conducted as a retrospective cohort. We compared different groups of residents from different times, which may have caused a confounding effect and some bias. However, there were no major changes in the surgical equipment and techniques used between the two groups. We reviewed the literature on the risk factors of complications in resident-performed phacoemulsification to identify the known risk factors. In our study, we collected data on the previously known risk factors and conducted multivariable analysis to mitigate the limitation. We considered a prospective study design but decided this may not be suitable for this study because some previous studies have demonstrated the construct validities and benefits of the simulation training. Research on the curriculum design, assessment of training and relationship between performance in simulation and real-life surgery should be further conducted.
In our hospital context, the availability of a simulation training curriculum supplemental to conventional training is highly valuable for residents. The benefits include shortening the learning curve of the residents, allowing the residents to practice their surgical skills in a safe setting, and helping them build their confidence for
Open access
real-life surgery. The patients also benefit from a reduction in the complication rate when undergoing cataract surgery performed by a simulation-trained resident. In spite of the advantages, the high cost of setting up the equipment and course can be a significant obstacle to the adoption of simulation training. The initial cost of the machine was £130 000 in 2012 and the maintenance with insurance cost is about £30 000 per year. Sharing the simulator and its cost with other teaching centres could be an option to maximise resource utilisation. This idea would allow several residents to access the simulation training and would add benefits to the conventional training. On the other hand, the effects of such training in shortening the residents' learning curve, lowering the complication rate, and improving safety are invaluable.
They should be weighed with the cost in implementation of such simulation training.
CONCLUSION
This study demonstrates that ophthalmic surgical simulation training significantly reduced overall intraoperative complications and posterior capsule rupture in the first half of cases of resident-performed phacoemulsification surgeries. The training improved residents' surgical performance by shifting the learning curve of trainee surgeons.
Acknowledgements The authors would like to thank Suthipol Udompunthurak for advising on appropriate data collection and on the statistical analysis. Bold values indicate statistical significance. *OR and 95% CI could not be calculated due to the insufficient number of cases. BCVA, best-corrected visual acuity.
|
2022-06-12T15:19:42.051Z
|
2022-06-01T00:00:00.000
|
{
"year": 2022,
"sha1": "a7b5a1ce76356f557ac3382a5db14385dc6fa3b4",
"oa_license": "CCBYNC",
"oa_url": "https://bmjophth.bmj.com/content/bmjophth/7/1/e000958.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e702e2b3cf2a04f43220175ef1694bc42c63f752",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
20384648
|
pes2o/s2orc
|
v3-fos-license
|
Recovery from Bell Palsy after Transplantation of Peripheral Blood Mononuclear Cells and Platelet-Rich Plasma
Summary: Peripheral blood mononuclear cells (PBMCs) are multipotent, and plasma contains growth factors involving tissue regeneration. We hypothesized that transplantation of PBMC-plasma will promote the recovery of paralyzed facial muscles in Bell palsy. This case report describes the effects of PBMC-plasma transplantations in a 27-year-old female patient with right side Bell palsy. On the affected side of the face, the treatment resulted in both morphological and functional recovery including voluntary facial movements. These findings suggest that PBMC-plasma has the capacity of facial muscle regeneration and provides a promising treatment strategy for patients suffering from Bell palsy or other neuromuscular disorders.
However, an expanded latency of the early monosynaptic (R1) of the blink reflex by right-sided stimulation of the face was recorded, whereas the latency of the late reflex response (R2) was normal at both sides. By left-sided stimulation, latency of R1 and R2 was normal ipsilaterally, but R2 was found to be expanded contralaterally.
Brainstem auditory evoked potential revealed normal peak and interpeak latencies without pathological signs. Based on these findings, abnormalities in trigemino-facial reflex at the right side were diagnosed.
From the age of 11 years, she was treated with therapeutic stimulus current treatments using periodical transcutaneous electric nerve stimulation (Corposano, KS-1/ A1) 2 and therapeutic active face exercises. These, however, did not improve her conditions. At the age of 15 years, otorhinolaryngology examination was performed revealing intact outer ears and tympanic membranes at both sides. To assess cerebellar function, Barany test, Romberg test, Babinski-Weil test, finger-to-nose test, rebound test, and dysdiadochokinesia test 3-7 were performed. These tests confirmed normal cerebellar functions.
Audiology examination did not detect reduction in hearing. In addition, audiometry revealed intact hearing at both sides. Electronystagmography test revealed no nystagmus. Further examinations confirmed abnormalities in central vestibular function and intact peripheral vestibular function.
In December 2013, the patient was admitted and examined in our clinic with symptoms as previously documented. Thus, with the permission of the patient,
Reconstructive
Case RepoRt autologous blood cell transplantation therapy was applied. 8 The local Ethical Committee approved this therapy, and written informed consent was obtained before the therapy. Following that, she received repeated autologous suspension of peripheral blood mononuclear cell (PBMC)-platelet (PLT)plasma transplantation 9 times in 1 year (Table 1).
TECHNIQUE DESCRIPTION
A total of 25 mL peripheral blood was harvested from the median cubital vein into 50 mL heparinized vacutainer tubes. Plasma of 20 mL blood was separated by centrifugation (10 minutes at 630 ×g). PBMCs of 5 mL blood were isolated using density gradient centrifugation as previously described by Nilsson et al. 9 Briefly, 1:1 dilution of blood with Dulbecco's Phosphate-Buffered Saline (Gibco, Invitrogen, Budapest, Hungary) was pelleted (20 minutes at 1,020 ×g) with Ficoll-Paque PREMIUM 1.077 g/mL density gradient media (GE Healthcare, CTS, Life Sciences, Budapest, Hungary). The interphase consisting of PBMCs was collected and washed 2 times with Dulbecco's Phosphate-Buffered Saline. The resulting pellet was resuspended with 9.0 ± 0.1 mL of autologous plasma.
A total of 9.9 ± 0.1 mL of PBMC-PLT-plasma was locally injected in even proportions on the right side of the face in the areas of facial nerve (FN, CN VII) innervations. Injec-tions were given subcutaneously and muscularly approximately 1.2 mL to each region (temporal, orbicularis oculi, buccinator, levator anguli oris, orbicularis oris, zygomaticus major and minor, risorius, and levator labii superioris) using 3 mL syringes connected to 0.30 × 1/2 needle. This treatment was repeated 9 times within a year (Table 1).
TREATMENT RESULTS
Posttreatment anamnesis revealed significant improvement in the voluntary motion of the facial muscles. There was a remarkable improvement in facial contouring, and the facial asymmetry was significantly reduced (Fig. 1). Nasolabial fold and tear trough were noticeably developed on the right side (Fig. 1). Cheek augmentation was slightly reduced on the left side, whereas it emerged on the right side (Fig. 1). Contours of the asymmetrically drooping corner of left lips were slightly improved (Fig. 1). Following treatments, the patient was able to close her eyelid completely on left and by 80.7% on the right side (Figs. 2, 3). The drooping of angle of the mouth was remarkably reduced (42.2%) on the right side as compared with that of before treatment (Fig. 4). Taste sensation was maintained, and there was no pain in or behind the ear, and no numbness in the affected side of her face. . after treatment, symmetry of the face remarkably improved, the atrophied areas were significantly reduced, the right corner of the lips was significantly elevated, and a nasolabial fold appeared on the right side of the face.
CONCLUSIONS
According to the significant recovery that we observed after transplantation of autologous PBMCs and PLT-plasma therapy, 10 this therapy has the potential to restore neuronal-muscular atrophy and provides a promising future strategy to cure facial atrophy.
ACKNOWLEDGMENTS
We would like to thank Kalman Wilhelm and Attila Schneider for their assistance in the treatments and Zoltan Szabo for performing measurements on the patient's photographs. after treatment (B). after treatment, the patient was able to close her eyelid completely on the left side and partially on the right side. Fig. 3. area of sclera during eyelid closure after pBMCs and pLt-plasma therapy. pretreatment (a) and posttreatment (B) periocular regions after forced eyelid closure. area of sclera on the right side before treatment: 201.7 mm 2 ; after treatment: 38.9 mm 2 . Fig. 4. Development of a reduction in drooping corner of the right lip after pBMCs and pLt-plasma therapy. after treatment, the right corner was significantly elevated as compared with one before treatment. Black angles: pre-and posttreatment angles between horizontal and lower lines representing the pre-and posttreatment contours of the lips. Red angle shows the difference between preand posttreatment contours of the drooping of the corners of the lips. angle of the mouth on the right side as compared with horizontal lip level before treatment: 16.1°; after treatment: 9.3°.
|
2018-04-03T06:16:04.733Z
|
2017-06-01T00:00:00.000
|
{
"year": 2017,
"sha1": "f5abaac25ccf88ac7d5162a8b395dfaf73edc350",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/gox.0000000000001376",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f5abaac25ccf88ac7d5162a8b395dfaf73edc350",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
233354535
|
pes2o/s2orc
|
v3-fos-license
|
Surface Circulation and Vertical Structure of Upper Ocean Variability Around Fernando de Noronha Archipelago and Rocas Atoll During Spring 2015 and Fall 2017
Using current, hydrographic and satellite observations collected off Northeast Brazil around the Fernando de Noronha Archipelago and Rocas Atoll during two oceanographic cruises (spring 2015 and fall 2017), we investigated the general oceanic circulation and its modifications induced by the islands. In spring 2015, the area was characterized by lower SST (26.6°C) and deep mixed-layer (∼90 m). At this depth, a strong current shear was observed between the central branch of the eastward flowing near-surface South Equatorial Current and the westward flowing South Equatorial Undercurrent. In contrast, in fall 2017, SST was higher (∼28.8°C) and the mixed-layer shallower (∼50 m). The shear between the central South Equatorial Current and the South Equatorial Undercurrent was weaker during this period. Interestingly, no oxygen-rich water from the south (retroflection of the North Brazil undercurrent) was observed in the region in fall 2017. In contrast, we revealed the presence of an oxygen-rich water entrained by the South Equatorial Undercurrent reaching Rocas Atoll in spring 2015. Beside these global patterns, island wake effects were noted. The presence of islands, in particular Fernando de Noronha, strongly perturbs central South Equatorial Current and South Equatorial Undercurrent features, with an upstream core splitting and a reorganization of single current core structures downstream of the islands. Near islands, flow disturbances impact the thermohaline structure and biogeochemistry, with a negative anomaly in temperature (−1.3°C) and salinity (−0.15) between 200 and 400 m depth in the southeast side of Fernando Noronha (station 5), where the fluorescence peak (>1.0 mg m–3) was shallower than at other stations located around Fernando de Noronha, reinforcing the influence of flow-topography. Satellite maps of sea-surface temperature and chlorophyll-a confirmed the presence of several submesoscale features in the study region. Altimetry data suggested the presence of a cyclonic mesoscale eddy around Rocas Atoll in spring 2015. A cyclonic vortex (radius of 28 km) was actually observed in subsurface (150–350 m depth) southeast of Rocas Atoll. This vortex was associated with topographically induced South Equatorial Undercurrent flow separation. These features are likely key processes providing an enrichment from the subsurface to the euphotic layer near islands, supplying local productivity.
Using current, hydrographic and satellite observations collected off Northeast Brazil around the Fernando de Noronha Archipelago and Rocas Atoll during two oceanographic cruises (spring 2015 and fall 2017), we investigated the general oceanic circulation and its modifications induced by the islands. In spring 2015, the area was characterized by lower SST (26.6 • C) and deep mixed-layer (∼90 m). At this depth, a strong current shear was observed between the central branch of the eastward flowing near-surface South Equatorial Current and the westward flowing South Equatorial Undercurrent. In contrast, in fall 2017, SST was higher (∼28.8 • C) and the mixed-layer shallower (∼50 m). The shear between the central South Equatorial Current and the South Equatorial Undercurrent was weaker during this period. Interestingly, no oxygenrich water from the south (retroflection of the North Brazil undercurrent) was observed in the region in fall 2017. In contrast, we revealed the presence of an oxygen-rich water entrained by the South Equatorial Undercurrent reaching Rocas Atoll in spring 2015. Beside these global patterns, island wake effects were noted. The presence of islands, in particular Fernando de Noronha, strongly perturbs central South Equatorial Current and South Equatorial Undercurrent features, with an upstream core splitting and a reorganization of single current core structures downstream of the islands. Near islands, flow disturbances impact the thermohaline structure and biogeochemistry, with a negative anomaly in temperature (−1.3 • C) and salinity (−0.15) between 200 and 400 m depth in the southeast side of Fernando Noronha (station 5), where the fluorescence peak (>1.0 mg m −3 ) was shallower than at other stations located around Fernando de Noronha, reinforcing the influence of flow-topography. Satellite maps of seasurface temperature and chlorophyll-a confirmed the presence of several submesoscale features in the study region. Altimetry data suggested the presence of a cyclonic
INTRODUCTION
The tropical Atlantic presents a relatively strong static stability with a well-marked thermocline, which is seasonally modulated by the meridional displacement of the Intertropical Convergence Zone (ITCZ), controlling the regime of precipitation and trade winds (Araujo et al., 2011;Nogueira Neto et al., 2018;Assunção et al., 2020). In such regions, vertical mixing, and upwelling are usually restricted to local mechanisms such as divergence of currents, winds and interactions between ocean currents and topography. Interactions between currents and topography, such as oceanic islands and seamounts, can lead to the generation of (sub)mesoscale eddies, changes in current intensity and directions, disturbances of the thermohaline structure, or orographically-induced upwelling [e.g., the special issues dealing with flow encountering abrupt topography (Oceanography, 2019;Vol. 32, No 4) and bio-physical coupling around seamounts (Deep-Sea Research, 2020;Vol. 176)]. This kind of processes is observed in the western tropical Atlantic off northeast Brazil around the oceanic islands of Fernando de Noronha Archipelago (FN) and the Rocas Atoll (RA) (Lessa et al., 1999;Travassos et al., 1999;Chaves et al., 2006;Tchamabi et al., 2017Tchamabi et al., , 2018. FN and RA, located ∼350 km from the mainland (Figure 1) encompass oceanic ecosystems classified as "Ecologically or Biologically significant Marine Areas (EBSAs)" 1 . Oceanic areas nearby FN and RA are energetic regions subjected to strong seasonally driven features, such as the complex system of zonal equatorial currents and countercurrents, the confluence of water masses, or the trade winds systems (Araujo et al., 2011;Tchamabi et al., 2017Tchamabi et al., , 2018Foltz et al., 2019). The principal currents of the region are the central branch of the South Equatorial Current (cSEC) located north of the South Equatorial Countercurrent, and the South Equatorial Undercurrent (SEUC) centered at about 4 • S (Figure 1). These zonal currents flow in opposite direction, with cSEC flowing westwards and SEUC flowing eastwards (Silveira et al., 1994;Stramma and Schott, 1999;Lumpkin and Garzoli, 2005). The near-surface circulation in the region is mostly driven by the meridional migration of the ITCZ. In austral winter (June to August), the ITCZ is located north of the equator and trade winds are stronger. Conversely, in austral fall (March-May), the ITCZ is located close to the equator and the winds are relaxed (Servain et al., 2014;Hounsou-Gbo et al., 2015. The seasonal ITCZ displacement also influences precipitation regime at FN and RA, with a rainy season extending from March 1 http://www.cbd.int/marine/doc/azores-brochure-en.pdf to July, and a dry season extending from August to January (Assunção et al., 2016).
In the western tropical Atlantic, many complex physical processes around oceanic island wake are not well described because most of survey efforts have focused on broader processes. The main large-scale currents were well identified by several historical programs developed along the western edge of the tropical Atlantic (e.g., the Global Atmospheric Research Program-GARP; Atlantic Tropical Experiment-GATE; Francais Océan Climat Atlantique Equatorial-FOCAL; Prediction and Research moored Array in the Tropical Atlantic-PIRATA programs, and the ETAMBOT and CITHER projects). Although numerous, all those previous initiatives focused on a largescale picture of the tropical Atlantic circulation and knowledge about the interaction of the large-scale currents with RA and FN is still scarce. To provide a detailed picture of the currentisland interactions, here we use current, hydrographic, and satellite data collected during the Acoustic along the BRAzilian COaSt (ABRACOS) cruises in Austral spring 2015 and fall 2017 (Bertrand et al., 2015;. These two periods were found to be representative of canonical spring and fall conditions in the area (Assunção et al., 2020;Dossa et al., 2021). More specifically, we describe the upper-ocean circulation around FN and RA and highlight some mesoscale features observed on currents, thermohaline structure and primary productivity.
In situ Observations
In situ data were collected during the two ABRACOS surveys carried out onboard the French R/V Antea in austral spring 2015 (ABRACOS 1, 30 September-08 October 2015) and fall 2017 (ABRACOS 2, 26 April-03 May 2017) (Figure 2). The in situ datasets described below are publically available (Bertrand et al., 2015(Bertrand et al., , 2017. Vertical profiles of physical and biogeochemical parameters were collected from the surface to 1,000 m depth using a Seabird SBE911+ conductivity temperature depth (CTD) probe equipped with dissolved oxygen (SBE43) and fluorescence (Wetlabs R ECO) sensors. Data were acquired at a frequency of 24 Hz and averaged every 0.1 dbar. All the sensors were laboratory-calibrated before and after each cruise. In the study area, a total of 20 (15, respectively) CTD profiles were acquired during ABRACOS 1 (ABRACOS 2) (Figure 2). Conductivity, temperature, pressure, and dissolved oxygen accuracies are of 3 mS/m, 0.001 • C, 0.7 dbar, and 0.09 ml l −1 , respectively. The fluorescence sensor measures chlorophyll concentration in the range 0-125 mg m −3 with a sensitivity of 0.02 mg m −3 . A total of 30 water samples (15 for each survey) were collected using Niskin bottles to determine dissolved oxygen (DO) concentrations using the Winkler titration method (Grasshoff et al., 1983).
In order to investigate the stability of the water column, the Brunt-Väisälä (N) frequency was computed from temperature and salinity profiles, using the equation: where z is depth (in m), g is gravity, ρ is density and ρ 0 = 1,025 kg m −3 is the reference density. Seawater density was based on the atmosphere International Equation of State of Seawater (UNESCO, 1981). We use vertical profiles of current velocity and hydrographic data to calculate the Richardson number. The Richardson number (Ri) is a measure of the dynamic stability associated with the competing effects of stratification and shear in the flow. It is expressed as the ratio of the vertical gradient of buoyancy over the vertical shear of horizontal velocity: The vortex Rossby Number (Ro) is used to compare the local relative vorticity of the eddy to the planetary vorticity (Eq. 3). Rossby number and Burger number were used to determine the significance of Coriolis acceleration and stratification, respectively, and their impacts on flow dynamics.
Where f is the Coriolis parameter, U is the maximum velocity and L is the horizontal length scale of the vortex. The Burger number (Eq. 4) (Pedlosky, 1987) is a dimensionless parameter related to the aspect ratio (H/L), where H is the vertical length scale of the vortex. The Burger number can also be defined as the square of the ratio of the deformation radius to the horizontal scale of the vortex. The Rossby deformation radius is given in Eq. 5.
Vertical profiles of current velocity were continuously acquired along the ship track using a ship-mounted Acoustic Doppler Current Profiler (SADCP) from Teledyne-RDI (OS75 instrument). Raw SADCP data were collected every 3 s in deep water (water-depth < 150 m), and every 1 s in shallow water (water-depth < 150 m) using a vertical bin length of 8 m, and averaged into 10 min profiles. SADCP data were processed and edited using the Common Ocean Data Access System (CODAS) software package developed by the SOEST, University of Hawaii 2 . The relative velocities were rotated from the transducer to the Earth reference frame using the ship gyrocompass. The global positioning system (GPS) was used to retrieve the absolute current velocities. The orientation of the transducer relative to the gyroscopic compass and an amplitude correction factor for the SADCP were determined by standard calibration procedures (Joyce, 1989;Pollard and Read, 1989). Finally, velocity profiles were averaged hourly, providing profiles in the 19-600 m range.
To better describe the main currents, we separated the data into 2 layers: the surface layer (0-100 m depth), which includes the cSEC, and the subsurface layer (100-400 m depth), where mainly the SEUC is present.
Satellite Data
To provide a more comprehensive image of dynamical processes acting around FN and RA and help the interpretation of 2 http://currents.soest.hawaii.edu results obtained from in situ data, several satellite products distributed by the Copernicus Marine Environment Monitoring Service (CMEMS) 3 were extracted in the study region for the periods corresponding to the ABRACOS cruises. The sea surface temperature (SST) product is the so-called OSTIA (Operational SST and Ice Analysis) that combines satellite and in situ data. This SST product is daily available from 1 October 1981 to 31 December 2018 on a regular grid of 0.05 • resolution (Donlon et al., 2012). Sea-surface chlorophyll-a (Chl-a) concentration maps were from the Copernicus-GlobColour product provided by the ACRI-ST company. Chl-a was obtained from the merging of multi sensors such as SeaWiFS, MODIS, and MERIS. This product is daily available from 2007 to the present with a spatial resolution of 4 km (Garnesson, 2013). Wind-stress data were produced by CERSAT/IFREMER and consist on a blended wind dataset based on remotely sensed surface winds derived from scatterometters and radiometers. It spans from 1992 to present and is daily available on a regular grid of 0.25 • (Bentamy and Croizé-Fillon, 2011). Altimetry data is the Salto/Duacs gridded product of sea-surface height (SSH) and derived geostrophic currents. This product, available from January 1993 to the present, was computed from several multimissions altimeter measurements of SSH, interpolated daily onto a 0.25 • × 0.25 • longitude/latitude grid (Ducet et al., 2000;Pujol et al., 2016;Taburet et al., 2019). (Schott et al., 1998;Stramma and Schott, 1999) four main water masses were identified in the 0-1,000 m depth-range: the Tropical Surface Water (TSW), the Subtropical Underwater (SUW), the South Atlantic Central Water (SACW), and the Antarctic Intermediate Water (AAIW). The TSW is located on the surface layer above the σ θ = 24.5 kg m −3 isopycnal, located at about 100 m depth. During spring 2015 (Figures 3A,C) the TSW was characterized by relatively high values of temperature (>26 • C), dissolved oxygen (4.3-5.0 ml l −1 ), and relatively low values of fluorescence (<1 mg m −3 ) and salinity (<36.5) in the mixed layer (limited to ∼90 m depth). During fall 2017, the mixed layer was shallower (∼50 m depth), and TSW was characterized by temperature higher than 27 • C, salinity lower than 36.5, dissolved oxygen concentrations between 4.3 and 4.7 ml l −1 and fluorescence in the range 0-0.9 mg m −3 (Figures 3B,D). At both seasons, just below the TSW between σ θ = 24.5 kg m −3 and σ θ = 25.5 kg m −3 , lied the SUW characterized by a local maximum in salinity (>36.5) (Figure 3). This water-mass, which was more clearly observed in spring 2015 than in fall 2017, is also characterized by relatively high oxygen (>4 ml l −1 ) and fluorescence (>0.5 mg m −3 ) values.
Water Masses and Thermohaline Structure
Below the SUW, takes place the SACW characterized by a nearly linear temperature-salinity relationship covering wide temperature (10-20 • C) and salinity (34.9-36.2) ranges. This water-mass was associated with a relative oxygen minimum of 3.3-3.5 ml l −1 (2.3-2.7 ml l −1 , respectively) during spring 2015 (fall 2017) at 150-500 m depth. At these depths, fluorescence values were weak due to light limitation. Finally, the isopycnal σ θ = 27.1 kg m −3 (about 500 m) marks the transition between SACW and AAIW. AAIW is characterized by a local salinity minimum of ∼34.5 and a local oxygen maximum of ∼3-3.5 ml l −1 . Fluorescence is almost zero at the depth of the AAIW.
In the surface layer, DO concentrations were similar and of ∼4.5 ml l −1 during both cruises (Figures 4D, 5D, 6D). During spring 2015, CTD stations located north and south of RA also presented higher DO concentrations (≥3.5 ml l −1 ) below the thermocline layer. During fall 2017 low DO concentrations (∼2.5 ml l −1 ) were observed in subsurface around FN and RA (Figures 5D, 6D). More specifically, during spring 2015, temperature profiles at stations 11 and 12, close to RA, presented positive temperature (+1 • C) and salinity (+0.16) anomalies in the depth range 200-350 m ( Figure 5A). Corresponding DO concentration were lower than 3.5 ml l −1 (Figure 5D). Conversely, at station 05 close to FN, a negative anomaly in temperature (−1.3 • C) and salinity (−0.15) was observed in the depth range 200-400 m (Figure 4A). At this station, the fluorescence peak was shallower than at other stations located around FN ( Figure 4C). In addition, a shallower thermocline and halocline was observed at stations 18, 19, and 20, located north of RA, when compared to the other stations around RA ( Figure 5A). In those stations, surface fluorescence concentration was higher (0.18-0.24 mg m −3 ) and the peaks of maximum fluorescence were shallower (above 100 m) ( Figure 5C). Finally, stations 14-22 located north and south of RA did not show a marked DO minimum ( Figure 5D).
During fall 2017, at stations 48 and 54, the thermocline and halocline were deeper than at other stations ( Figure 6A). In addition, station 42 located north of FN presented a positive temperature (+1 • C) and salinity (+0.1) anomaly between 350 and 450 m depth. At this station, in the depth range 425-600 m, DO concentration was higher (>3.2 ml l −1 ) than at other stations.
Finally, the upper thermocline was deeper in spring 2015 (∼90 m) than fall 2017 (∼50 m). Specifically, in spring 2015, stations 05, 09, 12, 20, and 22 (Figures 7A-E) presented deeper halocline and thermocline when compared to stations 53 in fall 2017 (Figures 7E,F). The area north of FN and RA (stations 09 and 20; Figures 7B,D) presented stronger vertical gradients at the lower limit of the mixed layer depth than in the southern area (stations 05 and 12, Figures 7A,C). Station 20 located north of RA presented a low Ri between 150 and 250 m depth.
Below the surface layer (near 250 m depth), stations 12 and 22 show low Ri values (<0.5 and <0.3, respectively) in the southeast side of RA (Figures 7C,E).
During spring 2015, the upper thermocline and maximum salinity depth (between 50 and 100 m depth) presented a higher peak of Brunt-Väisälä frequency near in surface than in fall 2017 (Figure 7F), confirming a stronger vertical stratification and static stability.
Circulation Patterns cSEC and SEUC Volume Transports Around FN
Although several SADCP sections were performed around FN and RA, we selected a meridional section (crossing FN from North to South (transect Ta in Figure 2) to represent the ocean circulation in the region. In spring 2015, the cSEC flowed westward above 100 m depth on both sides of FN (Figure 8) but was more intense on the northern side (U ∼ −50 cm s −1 ) than on the southern side (U ∼ −20 cm s −1 ) ( Figure 8A). The cSEC westward transport was estimated to be of 1.2 ± 0.1 Sv across this meridional section in spring 2015. In fall 2017, the cSEC was restricted above 70 m ( Figure 8B) and its zonal velocity component was higher than in spring 2015, with an average of ∼−60 cm s −1 in the surface layer (0-100 m) in both sides of the island (Figure 8B). Corresponding westward transport was estimated to be of 4 ± 0.2 Sv.
In the subsurface layer, between 100 and 400 m depth, the zonal component of the flow reversed and the current was oriented eastward, as is typical of the SEUC. Therefore, a strong current shear occurred between the near-surface cSEC and the subsurface SEUC. In spring 2015, the SEUC presented a maximum zonal velocity of 70 cm s −1 south of FN at ∼200 m depth (Figure 8A). North of FN, the SEUC zonal velocity was weaker (≤40 cm s −1 ). The SEUC eastward transport, integrated between 100 and 400 m depth along the meridional section, was estimated to 5.6 ± 0.5 Sv. In fall 2017, the SEUC was weaker than in spring 2015, having a maximum zonal velocity of 30 cm s −1 south of FN around 150 m depth ( Figure 8B). On the northern side of FN, the SEUC velocity was much weaker (<10 cm s −1 ) ( Figure 8B). In fall 2017, the integrated SEUC transport was estimated to 3.8 ± 0.7 Sv. Finally, below ∼450 m, the circulation was weak and not further investigated in this study.
Near-Surface and Subsurface Circulation Around FN and RA
In order to depict the regional circulation around FN and RA, we divided the circulation into two distinct layers: the near-surface layer (0-100 m depth) that includes the cSEC and the subsurface layer (100-400 m depth) mainly associated with the SEUC.
In spring 2015, the near-surface circulation was dominated by the cSEC (Figures 8A, 9A). Around FN, surface currents varied in direction and intensity on both sides (north and south) of the archipelago. On the northern side of the archipelago, surface current was more intense (50 cm s −1 ) with a prevailing northwestward flow (Figure 9A). A similar northwestward flow was also observed on the northern side of RA. In fall 2017, the cSEC had a prevailing westward direction with mean velocities of 50 cm s −1 (Figure 9C). During this period, no clear currentislands interaction was observed.
In the subsurface layer (100-400 m depth), during spring 2015, the SEUC dominated around FN with maximum velocity (30 cm s −1 ) south of the island. Near RA, currents flowed predominantly south/southeastwards, being more intense on the western side of the atoll. Between FN and RA (about 33 • W), the eastward flow was weaker (<10 cm s −1 ). Interestingly, in spring 2015, a small-scale subsurface cyclonic circulation was observed southeast of RA (illustrated by a black circle in Figure 9B). This subsurface cyclonic eddy had an estimated radius of ∼28 km and was probably driven by the topographically induced flow separation ( Figure 9B). This eddy-like structure, centered at 33.38 • W and 4 • S and having a typical swirl velocity of 20 cm s −1 , did not have a signature in the near-surface layer dominated by the cSEC. In fall 2017 the influence of the SEUC was observed in the 100-400 m depth layer around FN, with an average current velocity of 20 cm s −1 flowing eastward south of FN. Northwest of FN, the northeast flow was weaker (≤10 cm s −1 ). Finally, west of RA currents moved southwards ( Figure 9D). Figure 2). The contour interval is 10 cm s −1 . Figure 8B indicates the cyclonic mesoscale circulation.
Regional Surface Characteristics From Satellite Observations
SST was lower in spring 2015 (<27 • C) than in fall 2017 (>28 • C) (Figures 10A,B), in agreement with the SST values observed from CTD data. In spring 2015, the SST showed a large-scale northwestward gradient, varying from 26 • C southeast of FN to 27 • C northwest of RA. This SST distribution suggests that the cSEC tended to cool the downstream regions. In fall 2017, the SST was much more homogeneous in the study region and varied by less than 0.4 • C. Several mesoscale temperature structures were observed during both seasons (Figures 10A,B).
Surface Chl-a concentration was lower in spring 2015 (about 0.1 mg m −3 ) than in fall 2017 (0.2 mg m −3 ) (Figures 10C,D). Higher surface fluorescence values were also observed in fall 2017 from CTD data. In spring 2015, satellite Chl-a concentrations were higher in the vicinity of FN and RA, suggesting a topographic influence on the primary productivity. In fall 2017, a large-scale northeastward Chl-a gradient was observed, with mean Chl-a concentrations varying from 0.10 to 0.23 mg m −3 .
Surface winds blew northwestwards and intensified westwards (Figures 10E,F and Supplementary Figures 1E,F). It might be related with coastal effects associated with the proximity of land. This intensification was more pronounced in spring 2015 than in fall 2017. Overall surface winds were higher in spring 2015 (≥0.1 N m −2 ) than fall 2017 (≤0.1 N m −2 ).
In spring 2015, some changes were observed in the predominant directions of the surface currents (0-100 m depth) around FN and RA, which was not observed in fall 2017. In spring 2015, SSH and associated geostrophic currents depicted a surface current flowing northward on the northern and eastern sides of FN. On the northern side of RA, a northeastward flow was observed (Figure 10G). The presence of a mesoscale cyclonic eddy around RA was depicted. In fall 2017, a clear geostrophic westward flow was observed ( Figure 10H) around FN and RA and no eddy was observed (Supplementary Figure 1H).
DISCUSSION
In situ and satellite data are used to discuss the temporal variability of the oceanic characteristics around FN and RA in spring 2015 and fall 2017 that have been shown to be representative of canonical spring and fall conditions (Assunção et al., 2020;Dossa et al., 2021). We also discuss the impacts of FN and RA on the thermohaline structure, local circulation, including mesoscale features, primary productivity and dissolved oxygen distribution.
Spatiotemporal Variability of Physical and Biogeochemical Parameters
In the southwestern tropical Atlantic, negative ocean-atmosphere heat fluxes and stronger winds are observed in spring, leading to a lower SST and deeper mixed-layer. In contrast, in fall, positive buoyancy due to positive heat fluxes and the relaxation of southeast trade winds lead to a higher SST and shallower mixedlayer (Araujo et al., 2011;Servain et al., 2014;Nogueira Neto et al., 2018;Assunção et al., 2020). Similarly, in situ ABRACOS measurements showed higher SST values in fall 2017 (28.8 • C) than in spring 2015 (26.7 • C). In fall 2017, during the rainy season, relatively low surface salinity and low wind stress were associated with warmer SST in the western tropical Atlantic (Supplementary Figure 1).
Previous studies (Schott et al., 1998(Schott et al., , 2003Stramma and Schott, 1999) described the SEUC characteristics across two meridional sections (35 and 31 • W) between 2 and 5 • S. These studies underlined the relatively small variability of the SEUC vertical position at 35 • W in a depth range of 200-500 m between 2.5 and 4 • S. However, using the ABRACOS mesoscale cruises, we showed that the SEUC can exhibit important temporal variability. The SEUC flowed eastward with greater intensity in the southern part of FN during spring 2015, with a weakening in fall 2017. In the northern part of FN, the influence of the SEUC was not observed in fall 2017.
We observed an influence of the cSEC on the southern and northern sides of the island in the surface layer. This strong surface current was also associated with a variation of the mixed layer depth, which was shallower around FN and RA during fall 2017.
In situ and satellite data revealed a relatively strong variability of near surface fluorescence/Chl-a concentrations in the oceanic area around FN and RA among cruise periods with high concentration in fall 2017 when the mixed layer was shallower. In spring 2015, Chl-a was slightly higher in the northern part of the study area ( Figure 10C and Supplementary Figure 1C), along the equatorial region, which may be associated to equatorial wind-driven upwelling. In fall 2017, a clear surface zonal "tongue" of maximum Chl-a was observed at 3 • S, involving FN and RA areas, reaching a maximum of about 0.2 mg m −3 . This high productivity may be associated to nutrient rich waters transported westward by the cSEC, which is stronger during fall 2017 reaching a maximum of about 0.2 mg m −3 (Figures 10G,H and Supplementary Figures 1G,H). CTD profiles confirm the increase in fluorescence concentrations around FN and RA during fall 2017, with two to fourfold higher concentrations when compared to spring 2015 (Figures 4C, 5C, 6C). Note that the Chl-a variability in the equatorial Atlantic can be influenced by several mechanisms, such as the seasonal variations of upwelling driven by the meridional displacement of the ITCZ, the westward advection by the SEC, or the perturbation of the equatorial upwelling by eastward propagating Kelvin waves (Servain et al., 1982;Grodsky et al., 2008). We here highlighted that FN and RA can also locally impact the Chl-a distribution.
Around FN and RA, DO concentration was lower than ∼3 ml l −1 in the depth range 150-350 m in fall 2017 ( Figure 6D). In spring 2015, this pattern was observed around FN but 5 out of 8 profiles around AR presented DO > 3 ml l −1 in this depth range (Figures 4D, 5D). These oxygen maxima have been reported in the SEUC farther east (Tsuchiya, 1986;Schott et al., 1998). These oxygen signatures are used to diagnose the origin of the SEUC. Our results can provide further insight in a current debate. Indeed, according to Bourlès et al. (1999), in the region where the NBC forms, the NBUC weakens and retroflects to feed the SEUC. On the contrary, Schott et al. (1998) indicated that the SEUC is not supplied by the oxygenrich and high-salinity NBUC waters, but is mostly made up of low-oxygen interior recirculation waters out of the SEC. This was also supported by Goes et al. (2005), that stated that apart from the gyre recirculation, there is a minor contribution from the NBUC to the SEUC. Finally, recently, Dossa et al. (2021) also showed that the NBUC retroflection does not feed the SEUC, which instead originates from the SEC retroflection, at least in fall. Still, the presence of profiles with higher DO concentration in spring 2015 raise again the question about a potential contribution of the NBUC to the SEUC. The orientation of the subsurface currents in the area of RA in spring 2015 may also indicate the presence of a retroflection. In the same sense, using float trajectories (around 200 m depth), Fischer et al. (2008) reported that, in austral spring, a float deployed south of the SEUC followed its eastward flow but then drifted westwards. The float was then entrained by the NBUC and reentered the SEUC northwest of RA. At 23 • W, Brandt et al. (2008) observed an oxygen maximum in the deepest part of the SEUC (∼400 m), indicating either a direct connection to the western boundary flow or a recirculation of oxygen-rich water from the south. Around RA, we observed a south/southeastwards flow (100-400 m depth) (Figure 9B). It seems therefore that in spring, the NBUC can retroflect to reach RA. However, the contribution of this retroflection to the SEUC remains unclear since the NBUC signature was lost close to RA and no more observable around FN. Therefore, it seems that as proposed by Goes et al. (2005), the NBUC can retroflect, at least in spring, but the contribution of the NBUC to the SEUC is likely negligible. Further studies are needed to quantify such effects and fully explain the mechanisms potentially involved in the NBUC-SEUC connection.
Island Wake Observed From CTDO and SADCP Data
To study possible effects of local flow-topography interactions over the large-scale circulation patterns, in Figures 11, 12, we examined the vertical zonal velocity profiles (0-600 m depth) in the western and eastern side of FN (transects Tb and Tc in Figure 2) and in the northwestern and southeastern side of RA (transect Td in Figure 2). The westward near-surface cSEC and eastward subsurface SEUC dominated at both periods, with stronger cSEC velocities in fall 2017 than spring 2015. On the opposite and, the subsurface SEUC transport was stronger in spring 2015 (Figure 11). In addition, a quite different circulation patterns was observed west and east of FN for both periods. In spring 2015, near-surface (0-100 m depth) zonal currents undergo important changes in their direction and intensity between upstream and downstream areas near FN, with maximal values higher in the western (U ∼ 20 cm s −1 ) than the eastern (U ∼ 10 cm s −1 ) side of FN (Figures 11A,B). The presence of the archipelago also induced strong perturbations of the cSEC, with a splitting of the cSEC core upstream of the archipelago in both periods ( Figure 11A versus Figure 11B, and Figure 11C versus Figure 11D), although higher currents were observed in fall 2017, with maxima intensity (∼80 cm s −1 ) measured at the western side of FN ( Figure 11C).
Below the surface layer, between 100 and 400 m depth, the effects of island wake on the SEUC was also visible. A current core splitting was indeed observed in the eastern side of FN, with maximum velocities occurring north of 3.9 • S (U ∼30 cm s −1 ) and a core of 20 cm s −1 centered at 3.7 • S in spring 2015 ( Figure 11A). A stronger and single SEUC core was observed downstream of FN, suggesting a reorganization of the eastward subsurface flow in the eastern portion of the archipelago, with eastward velocities higher than 50 cm s −1 at 3.9-4.2 • S ( Figure 11B). A similar scenario was observed in fall 2017 west of FN and north of 3.9 • S, although with a much less intense and even a reversed transport north of the archipelago (U ∼ −10 cm s −1 at 3.7 • S). East of FN, the SEUC presented Figure 2). The contour interval is 10 cm.s −1 . In each panel, the bold black solid line contours represents null value of the velocity. a maximum intensity of 40 cm s −1 at ∼4.5 • S ( Figure 11C). In the subsurface layer (100-400 m) SEUC intensity was maximal of 40 cm s −1 in the east side of FN at ∼200 m depth ( Figure 11D).
These changes in the intensity and direction of the currents around the islands can be related to local topography. Cross-sectional vertical profiles of the temperature, salinity, fluorescence/Chl-a and dissolved oxygen concentrations are presented in Supplementary Figure 2. Transects were constructed from CTDO stations 05, 04, 01, and 08 (spring 2015), and 45, 44, 50, and 51 (fall 2017), represented by black triangles (see Figure 2 for the stations position).
Besides currents (intensity and direction) differences, vertical distributions of temperature and salinity are not the same in both sides of FN (Hydrographic transects in Supplementary Figures 2C,D). During spring 2015, we notice below 100 m depth a deepening of isotherms (see for example 10 • C, Supplementary Figure 2A) and isohalines (for example 35.2 and 34.9, Supplementary Figure 2C) in the west side of FN, which is associated to observed splitting in SEUC structure due to the presence of the island (Figure 11A). Although less intense, this scenario is also observed during fall 2017 for the same isotherms and isohalines (Supplementary Figures 2B,D), which seems to be also associated to perturbations in SEUC structure imposed by the island wake ( Figure 11C).
Indeed numerical simulations (e.g., Tchamabi et al., 2017), showed a subsurface cooling around the FN and RA, which was mainly driven by the interruption of cSEC by the bathymetry, enhancing vertical mixing and mesoscale eddy activity in the thermocline. It suggests that the island wake leads to an enrichment from the subsurface to the euphotic layer near FN and RA, supplying the productivity in these regions. This locally enhanced productivity is also visible on the satellite Chl-a observations ( Figure 10C).
Another section is carried out on the northwestern and southeastern sides of the atoll (transects Td in Figure 2). In the surface layer (0-100 m), there is a predominantly eastward flow (U > 0 and V < 0) and below the surface layer there is an eastward flow centered at 250 meters in the northwestern part of RA (Figure 12). As shown later, in the subsurface layer (around 150-350 m depth), a cyclonic vortex structure was observed downstream (southeast side) of RA (red circle in Figures 9B, 12). However, the application of existing Richardson number (station 12 and 22) shows low Ri (<0.3) values around 200-400 m depth indicating the vortex (Figures 7D,E).
The maximal vortex velocity Umax ≈ 20 cm.s −1 is reached for the characteristic radius Rmax ≈ 28 km, resulting in a vortex Rossby number Ro ≈ 0.7 of this cyclonic vortex. The Rossby radius of deformation (Rd) is 23 km, a value very close to the vortex radius. In the equatorial region, Rd is one order of magnitude larger (Houry et al., 1987;Simoes-Sousa et al., 2021). Strong surface Eddies close to the equator present very small values of the Coriolis parameter f, leading to high Rossby number. Although some of these eddies are also highly circular, most of them have very small amplitudes (Douglass and Richman, 2015). The Burger number of the vortex is about 3. These values are consistent with the processes associated with a cyclonic eddy shedding observed in the Gulf of Guinea (Djakouré et al., 2014).
Corresponding temperature and salinity signatures were observed between 150 and 350 m depth at the same position (station 12 and 22, Figures 7C,E). At this depth range, an anomalous (and almost stepwise) increase in temperature and salinity was observed, associated with subsurface mesoscale changes in currents (Figure 12) and small Richardson numbers (Figures 7C,E). This subsurface vortex structure was also probably generated by the island wake (e.g., Arístegui et al., 1997;Chérubin and Garavelli, 2016). The maximum of subsurface fluorescence was also observed at Station 22, in the edge of the eddy-like feature (Figure 5A). Cyclonic and anticyclonic eddies are known to strongly modulate primary production in oligotrophic waters around islands (e.g., Arístegui et al., 1997). Maximum primary productivity is often observed near eddy edges, where numerous filaments are observed due to enhanced lateral straining, stretching and stirring (e.g., Mahadevan, 2016;Lévy et al., 2018).
CONCLUSION
Based on two regional mesoscale cruises realized off the Northeast Brazilian coast, we described the upper-ocean circulation and how island wake impact the main features around Fernando de Noronha island and Rocas Atoll in two contrasted periods, Austral spring 2015 and fall 2017, considered as representative of the mean spring and fall conditions. In spring, the area was characterized by a lower SST (26.6 • C) and deeper mixed-layer (∼90 m). At this depth, a strong vertical shear was observed between the surface cSEC and the subsurface SEUC. In contrast, in fall, SST was higher (∼28.8 • C), the mixed-layer shallower (∼50 m), and the vertical shear between the cSEC and the SEUC weaker. Our study suggested that SEUC was fed (not fed, respectively) by the NBUC in spring 2015 (fall 2017). However, from the available datasets, it was not possible to quantify how much this retroflection fed the SEUC in spring 2015. To unravel the scientific debate on the NBUC-SEUC connection, dedicated oceanographic cruises and numerical modeling approaches would be needed. Beside these global patterns, the physical processes in the wakes of islands were clear with the splitting of the large-scale currents, the presence of mesoscale meanders and a subsurface eddy-like structure. These features are likely key processes providing an enrichment from the subsurface to the euphotic layer near FN and RA, supplying the local productivity. Enhancement of primary production around the archipelago was also observed from satellite data in spring 2015. In addition to the new information described above, this work enables the planning of future cruises to be carried out for a better understanding of mesoscale vortex processes, water mass transport around the islands and in the tropical Atlantic.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: ABRACOS Data (doi: 10.17600/15005600 and doi: 10.17600/17004100).
AUTHOR CONTRIBUTIONS
ACo and AB planned and organized oceanographic surveys. GE, AB, AD, and ACo worked in data processing and QC. All authors wrote and reviewed the manuscript.
|
2021-04-23T13:18:05.583Z
|
2021-04-23T00:00:00.000
|
{
"year": 2021,
"sha1": "80bd0a6e03f06c316043a2bb62ecc70f4eba3265",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmars.2021.598101/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "80bd0a6e03f06c316043a2bb62ecc70f4eba3265",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
}
|
56631870
|
pes2o/s2orc
|
v3-fos-license
|
Improvement of Diabetic Macular Edema in the Fellow Eye After Monocular Intravitreal Bevacizumab Injection
Copyright © 2017 The Hospital Practices and Research. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Introduction
Bevacizumab is a recombinant humanized monoclonal antibody directed against human vascular endothelial growth factor (VEGF). 1 Recent studies have demonstrated the beneficial effects and safety of anti-VEGFs in the treatment of diabetic macular edema (DME). 2,3Bilateral injection of anti-VEGFs is usually performed, because DME frequently presents bilaterally. 4There are reports of therapeutic effects in the fellow eye after the monocular injection of anti-VEGFs, including decreased fluorescein leakage in the fellow eye after intravitreal injection of 1.25 mg of bevacizumab in patients with DME. 5 The current study describes an interesting case of significant improvement of DME in the contralateral eye after the monocular intravitreal injection of bevacizumab.2A & B).The patient was not conferring for further follow up.
Discussion
Today, the potential effect of unilateral injection of anti-VEGFs in the fellow eye is controversial.In a prospective study, Gamulescu and Helbig showed no effects in the fellow eye of 26 eyes with bilateral neovascular age-related macular degeneration treated with ranibizumab. 6Another study by Velez-Montoya et al showed that unilateral bevacizumab injection in 23 patients with bilateral DME had no effect on the fellow eye. 7everal studies have demonstrated the potential effects of unilateral anti-VEGF injection in fellow eye; for example, Hanhart et al reported that unilateral bevacizumab injection in patients with bilateral DME is often associated with bilateral response. 4Zlatcavitch et al reported a patient with progression to macula-off tractional retinal detachment in a contralateral eye within 7 days after intravitreal injection of bevacizumab. 8Wu et al described a patient with macular edema due to branch retinal vein occlusion (BRVO) in the right eye and choroidal neovascularization in the left eye.They showed that intravitreal injection of bevacizumab in the right eye resulted in a reduction in macular edema in the left eye and vise versa. 9ecent studies have revealed that the anti-VEGF concentration in the fellow eye increased after unilateral injection.The peak concentration of bevacizumab in the aqueous humor of the fellow eye after the unilateral injection of 1.25 mg bevacizumab occurred 7 days after injection 10,11 ; about 4 weeks after injection, peak concentration of bevacizumab occurred in the vitreous of the fellow eye. 10 In this study, we report a patient with bilateral DME in which significant improvement in macular edema was seen after the intravitreal injection of 1.25 mg of bevacizumab in the contralateral eye.Therefore, therapeutic effects of bevacizumab in the fellow eye are possible, and further research is required to document such fellow eye effects.Unilateral injection of anti-VEGFs for bilateral disease, if probable, can lead to a reduction of costs and complications.
Conclusion
The unilateral injection of intravitreal bevacizumab may have therapeutic effects in the fellow eye and reduce the http://www.jhpr.irHosp Pract Res.2017 May;2(2):54-55 doi 10.15171/hpr.2017.13Case Report HPR Hospital Practices and Research was performed in the right eye.Two months later the patient presented for examination, and OCT was done.Surprisingly, significant improvement of DME had occurred in both eyes, including the left eye that had not received intravitreal injection.Central macular thickness was 245 µ in the right eye and 250 µ in the left eye (Figures
Figure 1 .
Figure 1.Optical Coherence Tomography of the Right Eye (a) and Left Eye (b) Before Treatment.
Figure 2 .
Figure 2. Optical Coherence Tomography of the Right Eye (A) and Left Eye (B) After Intravitreal Injection of 1.25 mg Bevacizumab in the Right Eye.
|
2017-11-02T02:11:04.734Z
|
2017-05-01T00:00:00.000
|
{
"year": 2017,
"sha1": "7d55789571e27f099fb691b67b8b908c5d7422ca",
"oa_license": "CCBY",
"oa_url": "http://www.jhpr.ir/article_46539_48199310e48d25f3ebc314284bf068ba.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "7d55789571e27f099fb691b67b8b908c5d7422ca",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
199506999
|
pes2o/s2orc
|
v3-fos-license
|
Do Relatives With Greater Reproductive Potential Get Help First?: A Test of the Inclusive Fitness Explanation of Kin Altruism
According to inclusive fitness theory, people are more willing to help those they are genetically related to because relatives share a kin altruism gene and are able to pass it along. We tested this theory by examining the effect of reproductive potential on altruism. Participants read hypothetical scenarios and chose between cousins (Studies 1 and 2) and cousins and friends (Study 3) to help with mundane chores or a life-or-death rescue. In life-or-death situations, participants were more willing to help a cousin preparing to conceive rather than adopt a child (Study 1) and a cousin with high rather than low chance of reproducing (Studies 2 and 3). Patterns in the mundane condition were less consistent. Emotional closeness also contributed to helping intentions (Studies 1 and 2). By experimentally manipulating reproductive potential while controlling for genetic relatedness and emotional closeness, we provide a demonstration of the direct causal effects of reproductive potential on helping intentions, supporting the inclusive fitness explanation of kin altruism.
Perhaps nowhere is helping behavior in greater supply than in family relationships. Relatives are often present when we enter this world, and we usually want to have them near as we get ready to leave it. In the memories which populate those intervening years, family members often play an important role in major life events and can usually be counted on for comfort, love, and support. It is then not surprising that a great deal of evidence indicates that people are more willing to help relatives than nonrelatives and to endure greater hardships to deliver that help. This tendency toward kin altruism, or preferential helping behavior directed toward organisms to whom one is genetically related, is well-documented in animal species (e.g., Baglione, Canestrari, Marcos, & Ekman, 2003;Clutton-Brock, 2009;Silk, Brosnan, Henrich, Lambeth, & Shapiro, 2013).
In research with humans, the majority of studies on kin altruism employ hypothetical scenarios to assess respondents' intentions to help. For example, Gesselman and Webster (2012) asked respondents to imagine a scenario in which someone insulted their sibling, their cousin, or a stranger, and the researchers observed that respondents were more likely to express aggressive intentions to someone insulting their sibling or cousin than a stranger. Similarly, Jonason, Izzo, and Webster (2007) observed that when respondents were asked to imagine helping someone find a romantic partner, they reported greater willingness to help a family member find a long-term romantic partner than to help a nonfamily member to do the same. Preferential helping intentions to family members have also been demonstrated in higher risk scenarios. For example, Kruger (2003) observed that respondents reported a greater willingness to help a sibling than a friend escape from a hypothetical situation involving life-threatening fire. Similarly, participants in Curry, Roberts, and Dunbar's (2013) study reported a greater willingness to donate their kidney to a family member than to a friend. When assessing actual helping behavior, rather than willingness to help in hypothetical situations, Madsen and colleagues (2007) observed that participants spent significantly more time enduring physical pain for the benefit of their parent or sibling than of a nonrelative (Study 2) and someone with whom they were more distantly related (Studies 1-3).
Although evidence cited so far suggests a greater willingness to help family than nonfamily, not all family members are helped equally. Humans, like other animals, have evolved mechanisms for identifying genetic relatives and use that kin detection mechanism to make helping decisions (Lieberman, Tooby, & Cosmides, 2007;Mateo, 2015). Evidence linking genetic relatedness to helping has been found in research examining people's willingness to help different family members. For example, Tifferet, Pollet, Bar, and Efrati (2016) asked their participants with a younger sibling to indicate the perceived similarity between themselves and this younger sibling and their routine investment in this sibling (e.g., care for wellbeing, amount of communication, time and money spent). The researchers observed that perceived similarity positively predicted investment toward their younger sibling and interpreted this observation as perceived similarity cueing a stronger sense of kinship. Although Tiffert et al.'s findings are consistent with the notion that genetic relatedness may foster greater helping behavior, the actual genetic similarity was not assessed in the study. Webster (2003) observed that shared genetic relatedness positively predicted a greater allocation of lottery money when he asked participants to allocate hypothetical lottery money to various blood relatives and to specify each relative's relationship to them (e.g., "father's sister" for "aunt"). Although genetic relatedness was assessed in Webster's study, it was not experimentally manipulated. Hence, it may be the case that factors related to genetic relatednesses (e.g., frequency of interaction, emotional closeness) rather than genetic relatedness per se that contributed to the willingness to help (assuming that people generally interact more with, and feel emotionally closer to, genetically close relatives than genetically distant relative). Fitzgerald and Whitaker (2009) experimentally varied genetic relatedness by randomly assigning participants to indicate their willingness to help a sibling, a cousin, or a friend in a hypothetical violent (e.g., burning house) or nonviolent (e.g., chased by an attacker) life-threatening situation; participants in this study reported higher likelihood of helping in the sibling scenario in comparison to the cousin scenario, which did not differ from the friend scenario, regardless of the violence of the situations.
Preference to invest in genetically related family members has also been observed outside of the lab in real-world giving and wealth distribution situation. For example, Mysterud, Devron, and Slagsvold (2006) found that the amount of money Norwegian graduate students spent on presents for family members during the preceding Christmas was significantly associated with their degree of relatedness to the recipient; the closer a graduate student was to a family member genetically, the more money they would spend on that person's Christmas gift. These preferences are more clearly demonstrated in research on bequests. In their analysis of 1,000 probated wills from Vancouver, British Columbia, for example, Smith, Kish, and Crawford (1987) found that the deceased typically bequeath more to close relatives than distant relatives. In another study, Judge and Hrdy (1992) examined wills from Sacramento, CA, and found that women, but not men, who are survived by their spouse leave a greater share of their wealth to their children than to their spouse; and the researchers attributed this sex difference to women being more certain of their children's genetic relatedness to themselves than are men. Bossong (2000) later replicated these findings in the laboratory demonstrating that women, but not men, who imagined being survived by their spouse reported greater willingness to leave their wealth to their children than to their spouse. Thus, when it comes to making helping decisions between family members, the degree of genetic similarity between the helper and the recipient appears to play an important role.
In summary, existing research evidence suggests that humans are more willing to help family than nonfamily members, and when choosing between family members, preference is given to family members with closer genetic relatedness.
Emotional Closeness and Family Helping Behavior
Although genetic relatedness appears to play a role in family helping decisions, there is more to being a family than biological commonalities. There is some evidence pointing to emotional closeness as a factor that plays a role in family helping. Emotional closeness refers to one's concern for, trust in, and enjoyment of their relationship with another person (Korchmaros & Kenny, 2006). Having a need to belong with other people and feel emotional closeness with them would have given our ancestors a survival advantage over others without this need, and so emotional closeness may itself have evolutionary origins (Miller, 2015). When people feel a sense of connection to a group, regardless of their genetic relationship, they are more inclined to give help (Pavey, Greitemeyer, & Sparks, 2011). For example, Rachlin and Jones (2008) asked participants to imagine dividing a pot of money between themselves and different members of their social network and found that genetic relatives, the people whom participants are most willing to help, also tend to be the people that participants report feeling closest to within their social networks. In another example demonstrating the effect of emotional closeness on family helping behavior, Korchmaros and Kenny (2001) asked participants to pick, in round-robin style, relatives varying in emotional closeness to hypothetically save from a burning building; they found that the association between genetic relatedness and help intentions was partially mediated by emotional closeness. Furthermore, emotional closeness along with neediness and familial obligation accounted for over 60% of the variance in helping intention attributed to genetic relatedness (Korchmaros & Kenny, 2006), although Curry et al. (2013) found that genetic relatedness still predicted help intentions for family members when controlling for emotional closeness.
In summary, previous research has shown that people tend to help family over friends and genetically close family members over genetically distant family members, and the association between genetic relatedness and helping intention has been at least partially attributed to emotional closeness.
Inclusive Fitness, Kin Altruism, and Reproductive Potential
Inclusive fitness theory outlines an evolutionary account for the origins of family helping behavior or kin altruism (Hamilton, 1964). For any gene to increase in frequency in a population, it must either do so directly, by replicating itself, or indirectly, by promoting the survival and reproduction of other organisms who also have a copy of the gene. This second option is most likely to occur when the cost of promoting the other organism's survival (c) is less than the product of the degree of relatedness between the two organisms (r) and the potential benefit of helping to the reproduction of the gene (B). Thus, according to Hamilton's rule, kin altruism will occur when rB > c. According to the theory of inclusive fitness, family helping behavior developed as a way of promoting the indirect replication of a kin altruism gene through the survival and reproduction of related others (who are presumed to also be carrying the gene). As a consequence, humans have evolved to make choices about helping that are sensitive to the parameters mentioned above (the cost to helper, relatedness of the recipient, and reproductive benefit); and in the right set of circumstances, these choices can have a powerful effect on help intentions (Korchmaros & Kenny, 2006).
The principle of inclusive fitness may seem fairly innocuous, but it has at least one somewhat disturbing implication. If people are motivated to help their family members based on the probability that they will help reproduce the kin altruism gene (B) and some relatives have a very low probability of being able to reproduce, then people should be less willing to help those family members than others with a greater chance of producing biological offspring. There are some preliminary findings suggesting that this may be the case. For example, in research by Fitzgerald and Colarelli (2009), participants indicated their intentions to help fictional relatives with a variety of circumstances that limit their reproductive potential such as functional limitations (e.g., mental illness), physical limitations (i.e., malnutrition), and sexual limitations (e.g., homosexuality). Participants reported less willing to provide help to people who had one of these limitations but only when the cost of helping was high (e.g., helping someone escape a burning house). When the cost of helping was low (e.g., helping someone pick up a few items from the store), the limitations had little or no effect on help intentions. This finding makes evolutionary sense: If people are motivated to help relatives based on their likelihood of passing on genes, then recipients' reproductive potential should have a greater influence on helping decisions in situations where their ability to reproduce is threatened than when it is not (Korchmaros & Kenny, 2006). Furthermore, according to Hamilton's rule, kin altruism is most likely to occur when the cost of helping (c) is less than the product of the reproductive benefit to helping the organism (B) and the degree of relatedness (r). When the cost of helping is low, helping may still occur when the degree of relatedness or reproductive benefit are also low (as long as the product of the two is still higher than the cost). As the cost of helping increases, however, it becomes more important that the product of reproductive benefit and relatedness also increase to compensate. Thus, a family members' potential to reproduce should have the greatest influence on helping behavior when the decision has direct consequences for the recipients' potential to reproduce and the cost of helping is high. Indeed, Burnstein, Crandall, and Kitayama (1994) randomly assigned participants to imagine providing help to family members in either mundane or life-or-death situations, and found that although participants gave priority to family members based on who they perceived as needing help the most (such as the poor, the sick, the very old, or the very young) in mundane situations; in lifeor-death situations, they gave priority to those who were most likely to successfully produce genetic offspring (such as the young, the healthy, the wealthy, and the premenopausal). Similarly, Stewart-Williams (2007) found that people were more willing to hypothetically donate a kidney to a sibling than to a friend but were equally likely to help friends and siblings who were ill and were more willing to give emotional support to friends than siblings. These patterns were again observed in Chuang and Wu's (2017) study in which they asked participants to choose one family member from a triad to give help in either mundane or life-or-death hypothetical scenarios. In the hypothetical life-or-death scenarios, participants were most willing to provide help to relatives with the highest reproductive value or the greatest expectation of producing future offspring based on their relative age, even if they had indicated a preference to help another member of the triad in the hypothetical mundane help condition.
In summary, previous psychological research on kin altruism in support of inclusive fitness shows that people are more willing to help relatives than nonrelatives and genetically close relatives than genetically distant relatives. Additionally, when the cost of helping is high, relatives with distinct reproductive limitations or characteristics that make them less likely to reproduce are less likely to be helped than relatives without these limitations or characteristics. The logic of these studies is built on the assumption that people are less likely to help relatives who are less likely to reproduce and therefore pass on the kin altruism gene, and research findings so far are consistent with that assumption. To our knowledge, however, the precise effect of explicit reproductive potential on a person's likelihood of receiving kin help has not yet been tested. To rigorously examine whether people's perception of a family member's reproductive potential can influence their intention to help this family member, experimental research in which the reproductive potential of a family member is manipulated is necessary.
Current Research
The present research uses an experimental design to directly and explicitly test the effect of a kin's reproductive potential on their likelihood of receiving help. Based on the logic of inclusive fitness theory, we expect to find that the potential of reproducing (and passing on the kin altruism gene) will have a causal effect on kin help intention, particularly when the cost of helping is high. Although previous research suggests that this is the case by using proxies for reproductive potential (e.g., age, health, wealth, reproductive value, and the presence or absence of reproductive limitations), manipulating explicit reproductive potential information is essential for rigorously addressing the issue of reproductivity.
Additionally, the current research seeks to disentangle emotional closeness from family relationships to clearly examine the causal effect of emotional closeness on intentions to help while holding family relationship constant. Furthermore, the current research makes a unique contribution to the understanding of whether a kin's reproductive potential drives helping for the kin independent of emotional closeness to the kin. By conducting the study with these parameters, we are able to strictly test the applicability of inclusive fitness theory as an explanation for helping intentions, as well as the unique effect of emotional closeness apart from genetic relatedness, and the unique effect of the kin's reproductive potential apart from emotional closeness.
We used a series of decision-making tasks in a fully experimental design to examine the causal effects of reproductive potential, emotional closeness, and helping cost on intentions to engage in kin altruism. Because genetic relatedness and emotional closeness can covary with familial relationship, we held the relationship of the kin constant for Studies 1 and 2 (always a female cousin 1 ) and manipulated it for Study 3 (either a female cousin or a female friend). We hypothesized that cousins more likely to reproduce would receive help earlier, particularly if the cost of helping was high (in a life-ordeath situation) rather than low (in a mundane situation). We also manipulated emotional closeness in Studies 1 and 2 and hypothesized that relatives who were high in emotional closeness would receive help earlier than relatives who were low in emotional closeness, regardless of their reproductive potential or the type of help required.
In Study 1, we manipulated female cousins' potential to pass on genes by presenting them as getting ready to bear a biological child or to adopt a child as well as emotional closeness. In Study 2, we manipulated reproductive potential, the capability to reproduce offspring, by varying each cousin's ability to conceive a child as well as emotional closeness. In Study 3, holding emotional closeness constant, we compared intentions to help a cousin high in reproductive potential with intentions to help three other targets. In all three studies, participants were randomly assigned to make helping decisions regarding the targets in either a mundane or a life-or-death situation. All studies were approved by the University of New Brunswick Research Ethics Board.
Participants
Participants were 203 residents of the United States and of the legal age of majority. Data on gender, ethnic background, age, and religion were not available for 15 participants who did not complete the demographic questionnaire due to a programing error. One hundred and eighty-eight participants completed the demographic questionnaire (103 men and 85 women; mean age ¼ 34.77 years, SD ¼ 11.80, range ¼ 19-70). Of those, 68.7% were White, 6.2% Asian, 6.6% Hispanic/Latino, 5.7% African American, 1.9% American Indian (10.9% did not provide their ethnicity); 44.5% were Christian, 3.8% Buddhist, 1.4% Jewish, 1.4% Muslim, 0.5% Baha'i, 0.5% Hindu, 0.5% Zorostrian, and 36.0% indicated "other" with the majority of these identifying as atheist or not religious (11.4% did not report their religion). Among our participants, 86.3% reported having at least one cousin.
Procedure
Participants were recruited from the population of registered users on the Amazon M-Turk website and received US$3 in credits on Amazon.com for completing the study. M-Turk allows Amazon members ("workers") to perform human intelligence tasks in exchange for monetary compensation. Data collected from M-Turk meet acceptable psychometric standards in terms of internal consistency and test-retest reliability and are more diverse than samples collected from introductory psychology course pools (Buhrmester, Kwang, & Gosling, 2011). The study was listed on the M-Turk site among thousands of other human intelligence tasks available for participation. Participants completed an informed consent form and were then directed to the scenario task and a demographic questionnaire.
Help type manipulation. Participants were randomly assigned to either the mundane helping condition or the life-or-death helping condition. Participants were asked to imagine a scenario where four female cousins either required their help to complete important errands (mundane help condition) or were asleep in a burning building and required their help to escape (life-or-death help condition). Participants were told that if they did not help, the cousins would not be able to complete their errands (in the mundane condition) or would die in the burning building (life-or-death condition), only one cousin could be helped at a time, and those helped later would be less likely to complete their errands (mundane condition) or live (life-ordeath condition). Thus, only the cousin helped first was guaranteed to be helped (in the mundane condition) or saved (in the life-or-death condition). The four cousins were presented in a randomized order.
Reproductive potential manipulation. To manipulate the potential of passing on kin altruism genes, we varied the cousins' childbearing status. Two cousins in each scenario were described as "preparing to conceive a biological child"; the other two were described as "preparing to adopt a child." Emotional closeness manipulation. Two cousins in each scenario were described as someone with whom the participant is "emotionally close"; the other two were described as someone with whom the participant is "not emotionally close." Participants' task was to indicate the order in which they would help the four cousins described in the scenario, to which they were randomly assigned. Participants then completed the demographics questionnaire and were directed to a debriefing form.
Results and Discussion
We conducted a 2 (help type: mundane, life-or-death) Â 2 (reproductive potential: conceive, adopt) Â 2 (emotional closeness: high, low) mixed analysis of variance (ANOVA) on the order of helping, 2 with reproductive potential and emotional closeness as within-subjects variables and help type as a between-subjects variable. 3 See Table 1 for the descriptive statistics.
A significant main effect of reproductive potential revealed that across the two types of helping scenarios, cousins preparing to conceive a child were generally helped earlier (M ¼ 2.33, SE ¼ .04) than cousins preparing to adopt a child (M ¼ 2.68, SE ¼ .04), F(1, 201) ¼ 17.13, p < .001, Z 2 p ¼ .08. However, we had predicted that the effect of reproductive potential would be stronger in the life-and-death condition than in the mundane condition. As predicted, the Reproductive Potential  Help Type interaction was significant, F(1, 201) ¼ 7.19, p < .01, Z 2 p ¼ .04. In the life-or-death condition, participants helped cousins preparing to conceive a child (M ¼ 2.22, SE ¼ .06) earlier than cousins preparing to adopt a child (M ¼ 2.78, SE ¼ .06), F(1, 103) ¼ 25.44, p < .001, Z 2 p ¼ .2. However, in the mundane condition, cousins who were preparing to conceive a child (M ¼ 2.44, SE ¼ .06) were helped no earlier than cousins preparing to adopt a child (M ¼ 2.56, SE ¼ .06), F(1, 98) ¼ 0.97, p > .05, Z 2 p ¼ .01 (see Figure 1). The cousins preparing to conceive stand to transmit a partial copy of a participant's genes, but the cousins preparing to adopt do not. Consistent with inclusive fitness theory, it seems that people are willing to improve their overall fitness by being more willing to save the lives of those who can transmit shared genetic material.
Not surprisingly, we also found a main effect for emotional closeness, with cousins high in emotional closeness (M ¼ 1.74, SE ¼ .04) receiving help significantly earlier than cousins low in emotional closeness (M ¼ 3.25, SE ¼ .04), F(1, 201) ¼ 474.01, p < .001, Z 2 p ¼ .70. No other effects were statistically significant (all p ! .50). As predicted, across the two help conditions, emotional closeness led to helping earlier. Also as predicted, the main effect of emotional closeness was not qualified by the type of help required, nor the reproductive potential of the target, but had an overall effect above and beyond these two variables. Previous research showed that the relationship between kinship and help intentions is at least partially mediated by perceived emotional closeness (Korchmaros & Kenny, 2001. By experimentally manipulating emotional closeness, we demonstrated a causal relationship between emotional closeness and helping intentions when holding kinship constant.
Study 2 Introduction
In Study 1, some cousins were described as "preparing to conceive a biological child," but no mention was made of Note. SDs are given in parentheses. whether they were actually capable of doing so. In Study 2, therefore, we made a small but important adjustment to our design and manipulated cousins' actual reproductive potential-their chance to reproduce-rather than the intention to adopt or conceive. A second addition to the design of Study 2 was the inclusion of an open-ended question to examine participants' rationale behind their decisions. Because one's reproductive potential has a direct effect on their ability to pass on the kin altruism gene and thus their reproductive benefit (B), we hypothesized that a cousin's higher reproductive potential would lead to being helped earlier in the life-ordeath condition, but not necessarily in the mundane condition. Additionally, we hypothesized that emotional closeness would again have an unqualified effect on the order in which the cousins were helped.
Participants
Two hundred and seven participants were recruited for the second study (100 women, 102 men, 1 transgender person, and 4 declined to give their gender; mean age ¼ 34.42, SD ¼ 11.67, range ¼ 18-70, and 3 participants did not report their age). All participants were residents of the United States and of the legal age of majority. Ethnic backgrounds were 80.2% White, 7.2% Asian, 5.8% African American, 3.4% Hispanic/Latino, and 1.4% American Indian or Alaska Native; 2.0% declined to give their ethnic background. Participants were 36.7% Christian, 2.4% Hindu, 1.0% Buddhist, 2.4% Jewish, 1.0% Muslim, and 0.5% Baha'i. Of the remaining sample, 51.2% reported "other" which was again predominantly atheist or agnostic, 2.9% reported "no religion," and 1.9% gave no information about their religion. Among our participants, 92.3% reported having at least one cousin.
Procedure
As in Study 1, participants were recruited on the Amazon M-Turk website and received US$3 in credits on Amazon.com for completing the study. After completing an informed consent form, they were directed to the scenario. The type of help and emotional closeness manipulations were identical to those of Study 1, as was the task. The four cousins were again presented in a randomized order.
Reproductive potential manipulation. Two cousins were described as having "a 95% chance to conceive a child"; the other two were described as having "a 95% chance to not conceive a child." Thus, two cousins had a high potential to reproduce and thus pass on the kin altruism genes and two cousins had a low potential of doing so. After completing the ordering task, participants answered an open-ended question for the cousin helped first only, asking them to "tell us a little about what you were thinking when deciding to help this cousin." Participants then completed a demographics questionnaire and were directed to a debriefing form.
Result and Discussion
We had hypothesized that high reproductive potential would lead to getting help earlier in the life-or-death scenario but not in the mundane scenario. To test this hypothesis, we conducted a 2 (help type: mundane, life-or-death) Â 2 (emotional closeness: high, low) Â 2 (reproductive potential: high, low) mixed ANOVA on the ordering data, with emotional closeness and reproductive potential as within-subjects variables and help type as a between-subjects variable. 4 See Table 2 for the descriptive statistics. As predicted, and consistent with the findings of Study 1, there was a significant interaction effect between a cousin's reproductive potential and type of help, F(1, 203) ¼ 68.69, p < .001, Z 2 p ¼ .25. Replicating Study 1, in the life-or-death scenario, cousins high in reproductive potential (M ¼ 2.11, SE ¼ .04) were helped significantly earlier than cousins low in reproductive potential (M ¼ 2.89, SE ¼ .04), F(1, 98) ¼ 99.29, p < .001, Z 2 p ¼ .50. In the mundane scenario, however, cousins high in reproductive potential (M ¼ 2.67, SE ¼ .05) were helped significantly later than cousins low in reproductive potential (M ¼ 2.33, SE ¼ .05), F(1, 105) ¼ 9.89, p < .01, Z 2 p ¼ .09 (see Figure 2). In Study 1, there was no effect of reproductive potential in the mundane condition; however, the effect is significant in Study 2, with the cousins who are most likely to reproduce being the last to receive help. According to inclusive fitness theory, when the cost of helping is high, the ability to reproduce and pass on genes has a strong influence on helping intentions. As the cost of help decreases, however, the ability to pass on genes becomes less important, allowing other factors to have a greater relative influence on helping intentions. This might explain the finding from Burnstein et al. (1994) that, in mundane situations, participants gave priority to family members based on perceived need, not their ability to pass on genes. It might also explain why, even though people are more willing to give a kidney to a family member than to a friend, the latter is much more likely to be the recipient of emotional support than the former (Stewart-Williams, 2007). Similarly, in our studies, in the life-or-death conditions, the cost of helping was high and so the ability to pass on genes had a strong influence on helping intentions and hence cousins who were likely to conceive were helped earlier than cousins unlikely to conceive. In the mundane conditions, however, the cost of helping was low and so the ability to pass on genes had less importance. We speculate that participants might have felt sorry for the cousins with a low chance of conceiving and hence were willing to help them first with a mundane task. That is, the decisions of participants in the mundane condition might have been influenced by sympathy-a common motivator of helping behavior (Dovidio, Piliavin, Gaertner, Schroeder, & Clark, 1991). To assess whether sympathy played a role in helping the low reproductive potential cousin in the mundane condition, we had two independent coders analyze the content of open-ended responses for mentions of feelings of sympathy, empathy, or pity (specifically, statements that mentioned feeling bad for, feeling pity for, feeling sorry for, empathy and sympathy or sympathize); agreement between the two coders was 100%. We found that 19 of 106 participants in the mundane condition mentioned sympathy for cousins low in reproductive potential (e.g., "The fact that there is a 95% chance she is unable to have a child makes me feel sympathetic toward her and want to help her"), but only 1 participant of 99 in the life-or-death condition mentioned sympathy for cousins low in reproductive potential. None of the participants reported feeling sympathy, empathy, or pity for cousins high in reproductive potential. It appears, then, that sympathy had motivated participants to help the cousin with low reproductive potential when it comes to a mundane task, but not when it comes to saving their life.
We had also hypothesized that emotional closeness would again have an unqualified effect on one's likelihood of receiving help. Consistent with the findings in Study 1, there was a main effect of emotional closeness, with cousins high in emotional closeness (M ¼ 1.75, SE ¼ .04) helped earlier than cousins low in emotional closeness (M ¼ 3.25, SE ¼ .04), F(1, 203) ¼ 385.70, p < .001, Z 2 p ¼ .66. Again, this main effect was not qualified by the reproductive potential of the targets, nor the type of help required, demonstrating the unqualified causal effect of emotional closeness on family helping intentions. No other effects were significant (ps > .50).
In summary, the results from Study 2 are consistent with the results from Study 1 in suggesting that both emotional closeness and reproductive potential influence helping. In general, people give preference to helping cousins to whom they feel emotionally close. Also, when the cost of help is high, cousins with a higher chance of having a genetically related offspring receive help earlier. When help cost is low, however, the effect is reversed: Cousins with a low chance of having genetically related offspring were more likely to be helped first, likely due to participants' sympathy toward them. 5
Study 3 Introduction
In Studies 1 and 2, we held genetic relatedness constant so that it would be possible to demonstrate the causal effect of emotional closeness on family helping intentions. Because genetic relatedness has been held constant so far, we cannot know if the effect of reproductive potential demonstrated in Studies 1 and 2 was motivated by a desire to pass on one's kin altruism gene or simply by participants' inclination to save the lives of women who can become pregnant-regardless of whether or not they are relatives. In Study 3, we manipulated both reproductive potential and relatedness. In line with the premises of the inclusive fitness theory, we predicted that reproductive potential should affect helping only relatives, not friends, in the lifeor-death condition. In this study, participants selected the order in which they would help four targets: a high-fecundity cousin, a low-fecundity cousin, a high-fecundity friend, and a lowfecundity friend, all equal in emotional closeness. Because the high-fecundity cousin is the only target with a high likelihood of passing on one's kin altruism genes and thus a high reproductive benefit (B) to compensate for the increase in cost (c), we hypothesized the high-fecundity cousin in life-or-death scenario would be helped earlier than any of the other targets and the other targets would not differ in order of receiving help.
Procedure
As in previous studies, participants were recruited from the Amazon M-Turk website. They received a US$1 credit on Amazon.com. Participants completed an informed consent form and then were randomly assigned to imagine either a life-or-death or mundane helping scenario. The type of help manipulation was identical to the previous two studies. Participants were presented with four targets and asked to list the order of helping. Emotional closeness was held constant by describing all targets as someone "you are close to emotionally." Participants then completed a demographics questionnaire and were directed to a debriefing form.
Target manipulation. As in Study 2, to manipulate reproductive potential, two targets were described as having "a 95% chance of being able to conceive a child," while the other two targets were described as having "a 95% chance of not being able to conceive a child." To manipulate relationship, two targets were presented as a "female cousin" and two targets were presented as a "female friend." Finally, to manipulate helping condition, half of the participants had to make helping decisions in a lifeor-death scenario and half in a mundane scenario. All targets were described as a person "you are very close to emotionally." Targets were presented in a randomized order.
Results and Discussion
We conducted a 2 (reproductive potential: high or low)  2 (relationship: cousin or friend)  2 (condition: life-or-death or mundane) mixed ANOVA. 6 See Table 3 We specifically predicted that the high reproductive potential cousin in the life-or-death condition would be helped earlier than any other targets, which would not differ from each other. This means that we expected one cell to be different from the seven other cells that do not differ from one another. Although our Reproductive Potential  Relationship  Condition interaction did not reach significance (p > .05), a threeway ANOVA is not the proper way of assessing our prediction because the F-ratio compares variance in a model due to the manipulation against variance in a model due to chance. A large F-ratio requires the manipulation to be responsible for more differences between individual groups than would be expected at random (Field, 2009). Hence, the distinct variance of one cell (the one with the high-fecundity cousin in the lifeor-death scenario) would be likely drowned out by the similarities between the other seven cells. In an attempt to adjust for this, we conducted a 4 (target: high-fecundity cousin, highfecundity friend, low-fecundity cousin, low-fecundity friend)  2 (condition: life-or-death, mundane) ANOVA with planned comparisons to assess the prediction that only in the life-ordeath condition the high-fecundity cousin would be helped earlier than the other targets, but not in the mundane condition. These preliminary results demonstrated an overall significant interaction, F(3, 187) ¼ 8.12, p < .001, Z 2 p ¼ .12. The simple effect of target was not significant in the mundane condition, F(3, 93) ¼ 2.48, p > .05, Z 2 p ¼ .07, but it was significant in the life-or-death condition, F(3, 92) ¼ 17.27, p < .001, Z 2 p ¼ .36. In the life-or-death condition, planned comparisons revealed a statistically significant difference in intention to help from cousins who were likely to conceive (M ¼ 1.80, SD ¼ 1.14) to the average of cousins unlikely to conceive (M ¼ 2.60, SD ¼ 0.83), friends likely to conceive (M ¼ 2.4, SD ¼ 0.94), and friends unlikely to conceive (M ¼ 3.20, SD ¼ 1.09) with a mean difference of À.93, 95% confidence interval (CI) [À1.23, À0.63], p < .001. In the mundane condition, however, there was not a significant difference, with cousins likely to conceive (M ¼ 2.32, SD ¼ 1.13) no more likely than the average of cousins unlikely to conceive (M ¼ 2.26, SD ¼ 1.03),
General Discussion
Inclusive fitness theory holds that people help family members because they are likely to pass on a shared kin altruism gene. Previous psychological research on kin altruism in support of this theory shows that people with limitations or characteristics that may make it less likely that they will reproduce are also less likely to receive help from family members than people without those limitations or characteristics (Curry, Roberts, & Dunbar, 2013;Fitzgerald & Colarelli, 2009). No research to date, however, has directly and experimentally tested the effect of targets' reproductive potential on their likelihood of receiving help.
The results of our studies demonstrate that, when helping affects relatives' chances of survival and the cost of helping is high, people give priority to the relatives with the greatest likelihood of passing on the kin altruism gene: a cousin who is preparing to conceive rather than adopt a child (Study 1) and cousins with a high likelihood of conceiving rather than a low likelihood (Studies 2 and 3). When the situation does not directly affect survival and the cost of helping is low, a relative's likelihood of passing on the shared kin altruism gene has less of an effect on helping. Across three studies, we found that cousins who were less likely to reproduce were helped later with mundane tasks (Study 1), earlier (Study 2), or at the same time (in Study 3) as cousins who were more likely to reproduce. When the cost of helping is high, it is important that the reproductive benefit of helping is also high, but when the cost is low, having a high reproductive benefit becomes less important. In these situations, reasons for helping associated with social norms or other practical and social considerations, such as reciprocity, proximity, or sympathy (as was found in Study 2), may have more of an effect. Hence, when help does not affect a target's likelihood of survival, the effect of reproductive potential on kin altruism becomes less certain.
The role of emotional closeness in family helping intentions has been well-documented and is demonstrated in this research as well. In both Studies 1 and 2, emotional closeness led to receiving help earlier regardless of target reproductive potential or type of help required. Although previous research on helping intentions and behavior in support of inclusive fitness theory has shown preference for helping family over friends and close family members over distant family members (Curry et al., 2013;Kruger, 2003;Madsen et al., 2007;Smith, Kish, & Crawford, 1987;Webster, 2003), it is unclear if this helping is due to emotional closeness or familial relationship. In the current research, we disentangle emotional closeness from family relationships, by manipulating the former while holding the latter constant. We thereby contribute to the literature by showing the causal effect of emotional closeness on helping intentions in family relationships. Thus, future research that assumes that the effect of emotional closeness on kin altruism is bound inexorably to familial closeness should take note of these findings.
More importantly, however, by manipulating reproductive potential while controlling for genetic relatedness and emotional closeness, we demonstrate that reproductive potential has a causal effect on helping intentions when helping directly affects the survival of the person being helped, strongly supporting the inclusive fitness explanation of kin altruism. Previous studies have used proxies for reproductive potential, such as age, health, wealth, reproductive value, and the presence or absence of reproductive limitations to infer the effect of reproductive potential, but none have assessed the effect directly. By experimentally manipulating this variable directly, the present research adds to the literature by demonstrating its causal effect on the intention to provide help, corroborating the assumptions on which these previous studies and inclusive fitness theory itself are founded. To our knowledge, this is the most direct and precise incorporation of an inclusive fitness account of kin altruism into a fully experimental design to date.
There are several limitations to what can be said about family helping intentions based solely on the results of this research. Although our data suggest that reproductive potential plays a role in how people decide which family members to help, it is beyond the scope of that data to determine how that process takes place. One possibility is that the process is largely unconscious-knowledge about the reproductive potential of family members is accumulated over time and can affect the way we treat relatives, even if we seldom reflect on that information consciously. In our study, the only information participants had about the targets was their degree of reproductive potential and level of emotional closeness, and so participants were restricted to making decisions based only on that information. Another possibility is that people begin with the assumption that their family members are willing and able to reproduce, and that assumption is qualified by evidence to the contrary. In our study, then, the knowledge that family members were unlikely to reproduce gave them a special status that made them receive help later in the life-ordeath condition but sooner in the everyday condition. Future research should look for ways to follow up on the comparative plausibility of each explanation.
Although our experiments allowed us to make causal inferences, hypothetical scenarios set limitations on the external validity of the research. The cousins that participants were asked to help were imagined, unidimensional characters, and so there might have been less information processing required in making helping decisions for those characters than there would be for actual family members. There is also the possibility that our participants might think and behave differently if they were facing down a real burning building rather than a hypothetical one. The results may also have been different if we had only used participants who had cousins and used the actual names of those cousins in the scenarios. Thus, although the use of real burning buildings with real cousins in them will be clearly frowned upon by most ethics committees, clever and safe designs should test these effects in less tightly controlled environments and should seek to replicate our results in more real-world contexts to examine the generalizability of the phenomenon.
Additionally, whereas this study focused exclusively on female cousins (with the addition of female friends in Study 3), investigating the relationship between male reproductive potential and intentions to help noncousin family members (e.g., siblings, aunts) would serve to further extend this research. Finally, most of our participants were from a modern Western society, which is characterized by an emphasis on independence and self-actualization (Markus & Kitayama, 1991). How might inclusive fitness look in a collectivist culture, where social groups are given greater emphasis, or in more traditional cultures that may place different values on childbearing? Future research should investigate the degree to which these observed effects hold across cultures.
Conclusion
Taken together, the present research demonstrates that people prefer to help family members who are more likely to reproduce biologically. This set of responses may be due to evolved decision-making rules which are sensitive to the parameters outlined by inclusive fitness theory, however, there are two important caveats to these findings. The first caveat is that it is only when one's life is in danger that the ability to reproduce has a clear bearing on who receives help. In the majority of situations, a person's ability to bear children may have no relation to or even increase their likelihood of receiving help. The second caveat is that the effect of emotional closeness still has a clear, significant effect on helping intentions. It is comforting, then, to know that even if our helping decisions are clearly and directly affected by the survival strategies of our ancient ancestors, how we feel about our family still plays an undeniably important role in kin altruism.
|
2019-08-10T13:03:56.904Z
|
2019-07-01T00:00:00.000
|
{
"year": 2019,
"sha1": "a12097b85d5b354b68db3a061818b0da0cbf36ad",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1474704919867094",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "15b8bd877dca3d0f4e55f9f9ad8151b51e866e45",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
}
|
218504146
|
pes2o/s2orc
|
v3-fos-license
|
Longitudinal Progression of Estimated GFR in HIV-1-Infected Patients with Normal Renal Function on Tenofovir-Based Therapy in China
Purpose Estimated glomerular filtration rate (eGFR) decline in HIV-1-infected patients exposure to tenofovir disoproxil fumarate (TDF) has been widely assessed using linear models, but nonlinear assumption is not well validated. We constructed a retrospective cohort study to assess whether eGFR decline follows nonlinearity during antiviral therapy. Patients and Methods We examined 823 (299 of TDF users and 524 of non-TDF users) treatment-naïve HIV-1-infected participants (age ≥ 17 years, initial eGFR ≥ 90 mL/min/1.73m2). Estimated GFR trajectories were compared by one-linear and piecewise-linear mixed effects models, before and after propensity score matching, respectively. Whether the incidence of renal dysfunction (reduced renal function [RRF], eGFR < 90 mL/min/1.73 m2 and rapid kidney function decline [RKFD], eGFR > −3 mL/min/1.73 m2/year) follows nonlinearity was assessed by logistic regression. Results The median follow-up time of this study was 10 (interquartile range, 2–20) months, during which 178 (21.6%) experienced RRF, and 451 (54.8%) experienced RKFD. The slopes (mL/min/1.73 m2/year) of eGFR were −5.31 (95% CI: −6.57, −4.06) before 1.40 years, 4.83 (95% CI: 1.38, 8.28) from years 1.40 to 2.30 and −3.71 (95% CI: −5.97, −1.45) after 2.30 years among TDF users. Within years 1.40–2.30, each year of TDF exposure was associated with a 78% decreased risk of RKFD (95% CI: −91%, −49%). In comparison, eGFR increased slightly at the initiation of antiviral therapy, declined after 2.15 years (−4.96; 95% CI: −5.76, −4.17) among non-TDF users. Such a progression nonlinear trajectory was missed on the assumption of one-linearity, whether in TDF or non-TDF users. Conclusion Over the piecewise mixed-effects analyses with the advantage of revealing the true nature of the exposure outcome relationships, an interesting reverse S-shaped relationship was observed. A routine screen based on nonlinearity could be more helpful for patient management.
Introduction
The widespread use of combination antiretroviral therapy (cART) has essentially improved the life expectancy of human immunodeficiency virus (HIV)-positive individuals. 1 Tenofovir disoproxil fumarate (TDF), an inhibitor of nucleotide analogue reverse transcriptase, which is widely used in most countries around the world as a conventional component of cART for HIV treatment and is considered as the most cost-effective drug against HIV. 2,3 In addition, TDF has been approved as part of a pre-exposure prophylaxis (PreP) to prevent the spread of HIV in those who are at high risk for contracting this virus. 4 However, TDF is similar to adefovir and cidofovir, which possess potential nephrotoxicity, lifelong use of TDF can cause or exacerbate renal impairment, 5,6 and more and more concerns have been raised on renal toxicity of TDF to improve patients' quality of life during this drug exposure. Thus, accurate predictive analyses of renal function overtime will be helpful for the management of these patients.
Estimated glomerular filtration rate (eGFR) is a common indicator of renal function. 7,8 Studies have consistently demonstrated that TDF is associated with a decline of eGFR and renal dysfunction in a subpopulation. [9][10][11][12][13][14] Delineating exactly the eGFR progression trajectories on TDF therapy through routine screening is undoubtedly helpful in this scenario. Since a linear figure seems convenient to interpret, most of the relevant studies so far considered the decline of eGFR to be approximately linear. The real trajectory of eGFR over time is however missed in these simplified models, thus hinders the optimization of TDF therapy based on renal function progression. In chronic kidney disease (CKD) population, several groups have reported nonlinear trajectories of eGFR in the past few years, its implications on risk estimation have gained interest and encouraged researchers to identify time-dependent factors associated with this phenomenon in CKD with different origins. [15][16][17] However, no studies from HIV-1-infected patients have yet rigorously assessed the nonlinear changes of eGFR over time, especially in patients with normal eGFR on initiation of TDF-based antiviral therapy.
The objective of this study was to comprehensively analyze the trajectory of eGFR over time, and to compare the impact of regimens with or without TDF on this trajectory, in a Chinese cohort of treatment-naïve HIV-1-positive individuals. We also assessed the incidence of renal dysfunction based on nonlinear changes in eGFR, by using a two-piecewise logistic regression model.
Study Population
This is a retrospective, observational cohort study conducted at the infectious diseases department at Xixi Hospital of Hangzhou (Zhejiang, Southeast China). All treatment-naïve HIV-1-positive patients with records of cART initiation between January 26, 2010 and December 31, 2015 were screened for eligibility. All data were anonymized to comply with the provisions of personal data protection legislation. Due to the retrospective nature of this study and all data were collected anonymously, written informed consent was not required. This study was approved by the Institutional Review Board of Xixi Hospital.
Data Collection and Inclusion Criteria
Data extracted from the medical records included demographic parameters, date of cART initiation, details of the cART regimens, route of HIV-1 transmission, comorbidities, laboratory variables (HIV-1 RNA viral load, CD4+ lymphocyte cell count, and serum creatinine [SCr]) at baseline, and SCr at 2 weeks, 1 month, 2 months, 3 months, and every 3 months thereafter until January 2017. Isotope dilution mass spectrometry traceable calibration method was used to standardize the measurement of SCr. Baseline was defined as the date of starting cART. Each enrolled patient was 17 years old or more, had a normal baseline eGFR, and had at least one additional eGFR measurement since January 2010. The flowchart is detailed in Figure 1.
Quantitative Variables
The three-variable Modification of Diet in Renal Disease (MDRD) formula adjusted for Chinese populations was used to calculate the eGFR values, as the Chinese eGFR investigation collaboration recommend the use of MDRD equation for Chinese, rather than CKD-EPI. [18][19][20] Combination ART was defined as the combined use of three or more ARVs from any drug class. Patients who took TDF alone or any TDF-containing regimen (TDF + lamivudine [3TC], or emtricitabine [FTC] + nevirapine [NVP], or efavirenz [EFV], or zidovudine [AZT]) were classified as TDF users. Patients exposed to any ARVs except TDF (AZT, or stavudine [d4T] + 3TC + NVP, or EFV) were classified as non-TDF users.
Statistical Analyses
Baseline characteristics were compared between TDF users and non-TDF users. Three models were used to analyze eGFR progression over time since ART initiation in each group (Table 1). Model 1, the crude one, was not adjusted for any covariates. Model 2 was adjusted for age, sex, weight, height, body mass index (BMI), CD4 count, eGFR, dyslipidemia, HIV/AIDS risk factors (sexual orientation and intravenous drug use), WHO stage (III/IV HIV/AIDS), hepatitis B positivity, hepatitis C positivity, anemia, diabetes, and HIV-1 RNA viral load at baseline. Model 3 used propensity score matching (PSM) to reduce preexisting imbalances in the covariates and potential confounding, 23,24 and a covariate was considered well balanced when the P value was more than 0.05 (Table 2), more technical details were as in additional Table S1.
The nonlinear trajectories of eGFR were determined by smooth curve fitting using a generalized additive model (GAM). Two methods were used to identify significant time points (inflection points on the smooth curves): one determined whether the difference of segmented slopes was equal to zero by the Wald test; the other applied a log likelihood ratio test to compare a nonlinear regression model with a one-linear regression model (Table 1). Eventually, the time points were determined by constructing a maximum likelihood model using a recursion method. A twopiecewise linear mixed effects model, with random intercepts, was applied to quantify the average change per year of eGFR during different periods on cART (Table 3). In addition, a two-piecewise logistic regression model based on Generalized Estimating Equation (GEE) was used to estimate the relationship of cART duration with RRF and with RKFD (Table 4). All multivariate regression models were adjusted for the covariates used in Model 2.
Data on HIV-1 RNA viral load were not available in up to 50% of patients, so a missing value category was used in the main analyses. 25,26 In addition, to reduce bias caused by exclusion of individuals with any missing data at baseline, five imputed datasets (established by multiple imputation with chained equations) were developed and run separately, and the results were combined using Rubin's method (Supplementary file: Tables S2 and S3). 27,28 Another sensitivity analysis was conducted to exclude patients receiving protease inhibitors (PIs), because of the possible association of these drugs with nephrotoxicity and impaired renal function (Supplementary file: Tables S4 and S5). [29][30][31] All analyses were performed using the R software, version 3.3.1 (http://www.R-project.org). A result was considered statistically significant when the two-tailed P value was below 0.05.
Patient Selection and Propensity Score Matching
As shown in the flowchart (Figure 1), a total of 1065 patients were screened and 823 patients were eligible for participation, 299 of whom (36.3%) started a TDFcontaining cART. Table 2 shows the baseline characteristics of TDF users and non-TDF users before and after PSM. After matching, there were 130 (33.3%) patients in the TDF group, and all baseline variables were well balanced (P > 0.05 for all).
Comparison of One-Linear and Piecewise-Linear Mixed Effects Models
We compared eGFR trajectories by one-linear and piecewiselinear models (Table 1), with the piecewise model allowing a change of the eGFR slope at a given time point. Log likelihood ratio test between the two models indicated that the One-linear model <0.001 Non-linear model Notes: a Exp(β) represents the difference of segmented slopes (mL/min/1.73 m 2 /year), along with a p value from Wald test. b Log likelihood ratio test was used to compare one-linear regression model with two-piecewise regression model, below 0.05 indicates two-piecewise regression model was a better fit to the data than the one-linear model that assumed a single slope across the entire period of observation. Model 1: unadjusted for any variables at baseline. Model 2: adjusted for age, sex, weight, height, body mass index (BMI), CD4 count, eGFR, dyslipidemia, HIV/AIDS risk factors (sexual orientation and intravenous drug use), WHO stage III/IV HIV/AIDS, hepatitis B positivity, hepatitis C positivity, anemia, diabetes, and HIV-1 RNA viral load at baseline. Model 3: propensity score-matched sample.
The Relationship Between eGFR and Duration of cART
The eGFR changed over time in both groups ( Figure 2, Supplementary file: Figures S1 and S2). There was a reverse S-shaped relationship between eGFR and duration of cART for TDF users, but a different temporal trajectory for non-TDF users, in all three models. The S-shaped trajectory was observed markedly in model 1 (Supplementary file: Figure S1B) and model 2 ( Figure 2B). (Table 3). For non-TDF users, before the time points, a longer duration of cART was associated with a slight increased eGFR in all three models; after the time points, there was an inverse association between eGFR and duration of cART (Table 3).
Nonlinear Progression of Renal Function Over Time
Two outcome definitions, RRF and RKFD, were used to assess whether renal dysfunction progression consists of the nonlinear trajectory of eGFR (Table 4). For patients without TDF exposure, use of cART for 2.15 years or more, the risk of RRF increased steadily to 2.05 per year (95% CI: 1.54, There was no increased risk of RKFD among non-TDF users who received cART for 2.15 years or more, nor among TDF users who received cART for less than 1.40 years. But, each additional 1 year of TDF exposure was associated with a 78% (95% CI: −91%, −49%) decreased risk of RKFD from 1.40 to 2.30 years, and a nearly threefold (95% CI: 1.08, 7.27) increased risk of RKFD for
Sensitivity Analyses
Two sensitivity analyses, one conducted with imputed datasets and the other with patients not using PIs, indicated these results were robust (Supplementary file: Tables S2-S5).
Discussion
This was the first study, to our knowledge, to investigate whether eGFR progression follows a nonlinear trajectory in HIV-1-infected patients initiating cART with normal eGFR. We present evidence from two analyses (the piecewise-linear and logistic regression model) that the traditional assumption of a steady, linear decline does not apply to HIV-1 infected patients on treatment, especially those on TDF-based therapies. Our results showed that these patients experienced periods of acceleration or deceleration of kidney function decline. Analyses over nonlinear patterns seemly speak to the true nature of the exposure-outcome relationships.
The comparison of one-linear and piecewise-linear models suggested that the nonlinear trajectory of eGFR was more accurate than a single linear process (log likelihood ratio test: P < 0.001 for all). When a single slope was fitted to the data, eGFR decline was either over-or under-estimated during the partial period of cART. Intriguingly, nonlinear trajectories accurately depicted the periods of acceleration or deceleration of renal function decline, especially in TDF users who had an obvious heterogeneity in eGFR over time. This acceleration or deceleration, which was quantified by the piecewiselinear mixed effects model, could be clearly identified from the data and smooth curves (Table 3 and Figure 2). As illustrated for TDF users in model 2 (Table 3), there was an increase of eGFR for intermediate cART durations (1.40-2.30 years), comparing markedly with the significant decline of eGFR either for short (<1.40 years) or long cART durations (>2.30 years). Certainly, these findings were similar in model 1 and model 3.
As expected, the effects of nonlinearity of eGFR on renal dysfunction progression were well supported by the results of RRF and RKFD. In particular, the trends over time of RRF were completely consistent with nonlinear changes of eGFR (Table 4). This finding was also robust enough based on a range of sensitivity analyses. This phenomenon can not be explained explicitly thus far. 32 A speculation of far from mature is that TDF, as a wellknown nephrotoxic antiretroviral, causes a rapid stress in renal tubular at the beginning exposure followed by a transient recovery possibly from the self-repairing mechanisms of kidney; then, an inevitable damage occurs if beyond the ability of self repairment over time. 33 Among TDF users, during the increasing period (1.40-2.30 years) of eGFR, the incidences of both outcomes, especially RKFD definitely declined (suggesting a recovery of renal function), even though TDF continued. This is consistent with previous studies suggested an overall limited effect of TDF on renal function decline. 10,21 A meta-analysis that compared ART regimens with or without TDF demonstrated a mean difference in eGFR of only 3.92 mL/min/1.73 m 2 on a short-term follow-up. 10 Interestingly, a cohort study reported the cumulative decline of eGFR attributable to TDF was 3.05, 4.05 and 2.42 (mL/min/1.73 m 2 ) at year 1, 2, 3, respectively; this indicates that the eGFR decline attributable to TDF was lower 3 years after than that of before, suggesting a partial eGFR recovery from years 2 to 3. 21 However, specific time points for renal function recovery are difficult to obtain by their one-linear analysis of eGFR.
We also found that continuous TDF exposure inevitably led to renal impairment in a substantial population. TDFinduced nephrotoxicity was reported in 0.5-45% of HIVpositive patients. 6 The wide range of prevalence is attributed to different populations and definitions of TDF-induced nephrotoxicity and duration of follow-up. Renal function assessment and monitoring at baseline and during TDF treatment is the main approach of prevention of TDF-induced nephrotoxicity. But how to monitor appropriately is a challenging issue in daily practice. The incidence of RRFbut not the severe RKFDincreased during the initial use of TDF, incidences of both outcomes increased significantly later, suggesting that persistent TDF exposure can lead to cumulative and irreversible renal impairment, even in those with a normal baseline renal function. This was in agreement with that of the prospective international cohort study published recently, the increased incidence of CKD per year of exposure to TDF was initially small (14%; 95% CI: 10%, 19%), yet doubled for a treatment period of 5 years. 5 Regrettably, the authors used also the conventional linear analysis to address this issue, thereby the nonlinear trajectories of eGFR progression, if exist, remain unknown. As suggested by studies from CKD cohorts, linear regression methods do not exactly estimate kidney function trajectories, 17 considering the big heterogeneity with respect to kidney function, dropout and number of kidney function estimates. 34 Nonlinear statistical methods, such as piecewiselinear mixed effects model, 16 are able to better characterize the different profiles of renal function progression, as well as to investigate specific risk factors associated with each profile. 15,17 Therefore, our study provides a new avenue for this difficult task, at least in HIV patients with normal renal function. Future external validation with prospective international cohort like D:A:D Study would benefit a lot to characterize the real trajectories of eGFR progression, as well as the potential time window to salvage renal function and to investigate the underlying mechanisms of TDF related nephrotoxicity.
This present study has several implications for our understanding of renal dysfunction progression in HIV-1 infected patients during cART with initial normal renal function. First, periods of slight increasing eGFR followed by periods of eGFR decline and increasing risk of adverse events in non-TDF users suggesting that irrespective of the cART regimen (with or without TDF), loss of renal function to some extent seems inevitable following prolonged use of these drugs, especially after 2 years exposure or more. Screening frequencies on renal function should be planned according to this finding. Second, for TDF users, periods of rapid eGFR decline followed by periods of eGFR improvement, indicating that eGFR decline may sometimes be ameliorated over a given extended period. One should be aware of early loss of renal function may not reflect permanent loss of renal function. The S-shaped nonlinear trajectory of eGFR may also open new avenues of diagnostic and treatment options so as to delay the progression of renal impairment among these long-term users of TDF.
This study has several strengths. First, the research has longitudinal data for up to 7 years of follow-up and regular eGFR assessments every 3 months for characterizing nonlinear trajectories of eGFR during cART. Second, by using PSM, we were able to reduce confounding bias and balance the baseline characteristics of TDF exposure and non-exposure group. The results of this emulation of a randomized controlled trial were similar with model 1 and model 2, suggesting that our findings were robust. Third, the time points suggested by our study were determined by a range of powerful statistical analyses (Wald test, piecewise-linear mixed effects model along with maximum likelihood model and recursion method), together with two robust sensitivity analyses, thus is more accurate and powerful than the traditional paradigm based on clinical experience. 5,14,21 Our study has several limitations. First, the inherent shortcomings belong to retrospective observational singlecenter study, small sample size and short-term follow-up make it difficult to address the causality between TDF and CKD and reach a firm conclusion, the powerful statistical analysis thus is a trade-off to minimize these biases and confounding. Second, the patients in this study came exclusively from China and mainly with no history of drug abusing which is a risk factor for HIV, the findings may not simply apply to other populations and thus further validations from different races are warranted. Third, nonlinear trajectory of eGFR progression in patients complicated with CKD at baseline needs further investigation, after all, an interesting curve has already been identified by our population characterized by normal renal function. Fourth, this study did not investigate the predictive factors that may contribute to nonlinearity patterns of renal function, as well as TDF induced nephrotoxicity other than glomerular filtration function. All above limitations require further study to be overcome, nonetheless, our primary results provided moderate yet important illumination for this topic.
Conclusion
The present study suggests that renal function progression exists heterogeneity in HIV-infected patients with a normal eGFR initiating ART in Chinese. There are significant differences in renal function trajectories between TDF and non-TDF therapy. Continuous TDF exposure inevitably led to renal impairment in a substantial population, but the changes in eGFR were inconsistent over time. Analyses assuming nonlinear patterns over piecewise mixed effects models speak to the true nature of the exposure-outcome relationships in this scenario. An interesting reverse S-shaped nonlinear trajectory, the transient yet definitely recovery of renal impairment about 1.4 years after TDF initiation, do exist and could be helpful for the management of HIV-1-infected patients on TDF.
Data Sharing Statement
The data set used for this manuscript will be available from the corresponding author upon reasonable request.
Ethics and Consent Statement
This study was approved by the Institutional Review Board of Xixi Hospital. All data were anonymized to comply with the provisions of personal data protection legislation. Due to the retrospective nature of this study and due to the fact that only historical medical data were collected, written informed consent was not required.
|
2020-04-23T09:07:03.709Z
|
2020-04-01T00:00:00.000
|
{
"year": 2020,
"sha1": "71e55d6268ad08b07cab1afcacbb7da665d22dd1",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2147/tcrm.s243913",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "421bc4221481f10930906efa121220e24c6350ad",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
6949333
|
pes2o/s2orc
|
v3-fos-license
|
Methylmercury intoxication and histochemical demonstration of NADPH-diaphorase activity in the striate cortex of adult cats
The effects of methylmercury (MeHg) on histochemical demonstration of the NADPH-diaphorase (NADPH-d) activity in the striate cortex were studied in 4 adult cats. Two animals were used as control. The contaminated animals received 50 ml milk containing 0.42 μg MeHg and 100 g fish containing 0.03 μg MeHg daily for 2 months. The level of MeHg in area 17 of intoxicated animals was 3.2 μg/g wet weight brain tissue. Two cats were perfused 24 h after the last dose (group 1) and the other animals were perfused 6 months later (group 2). After microtomy, sections were processed for NADPHd histochemistry procedures using the malic enzyme method. Dendritic branch counts were performed from camera lucida drawings for control and intoxicated animals (N = 80). Average, standard deviation and Student t-test were calculated for each data group. The concentrations of mercury (Hg) in milk, fish and brain tissue were measured by acid digestion of samples, followed by reduction of total Hg in the digested sample to metallic Hg using stannous chloride followed by atomic fluorescence analysis. Only group 2 revealed a reduction of the neuropil enzyme activity and morphometric analysis showed a reduction in dendritic field area and in the number of distal dendrite branches of the NADPHd neurons in the white matter (P<0.05). These results suggest that NADPHd neurons in the white matter are more vulnerable to the long-term effects of MeHg than NADPHd neurons in the gray matter. Correspondence
• NADPH-dehydrogenase Mercury (Hg) in both organic and inorganic forms is a potent neurotoxic chemical.Methylmercury (MeHg), the organic form, is a poison which can affect different organs and systems.However, in all species the main target is the nervous system.In the central nervous system (CNS), the visual cortex and granular layer of the cerebellum are the major targets for the effects of MeHg.
These effects depend on factors such as the dose and time of exposure.Adult individuals contaminated with MeHg present neuronal lesions and the earliest clinical symptoms include dysarthria, ataxia and tunnel vision (1).The specific mechanism by which MeHg damages neurons is not fully understood.It was recently proposed that MeHg may impair astrocyte function with subsequent neu-ronal lesion (2)(3)(4).Indeed, a number of papers suggest that glial cells have an important role in neurotoxicity (4).Nevertheless, MeHg inhibits several mitochondrial enzymes, interferes with protein synthesis and alters the activity and transport of enzymes such as glutathione peroxidase, adenyl cyclase and dehydrogenases (1,5,6).Inhibition of protein synthesis is observed after in vivo or in vitro exposure to MeHg and may be an early effect of MeHg (5).Thus, MeHg may alter the synthesis and activity of key enzymes in cell metabolism.These effects may extend to neurons that contain NADPH-diaphorase (NADPH-d), an NADPH-dependent enzyme that has been identified to be resistant to several neuropathological conditions such as NMDA neurotoxicity, stroke and some neurodegenerative disorders, such as Huntington, Parkinson and Alzheimer diseases (7,8).
Mapping of NADPH-d throughout the nervous system became possible due to the pioneering work of Thomas and Pearse (7) who described "active solitary neurons" in non-fixed matter in the cerebral cortex and basal ganglia of various species.These cells were shown to be resistant to metabolic poisoning by substances such as carbon monoxide, sulfanylamide, thalidomide and tetrachloromethane vapor.Studies on these neurons were intensified after the discovery by Scherer-Singler et al. ( 9) that the enzyme activity was preserved in matter fixed with aldehydes.The work of Hope et al. (10) demonstrated that neuronal NADPH-d is a nitric oxide synthase (NOS), the enzyme that synthesizes nitric oxide (NO), a novel and unusual gaseous messenger molecule in mammalian tissues (11).Throughout the brain and peripheral paraformaldehyde-fixed tissues, all NOS-staining neurons also stain for NADPH-d (12).Paraformaldehyde fixation presumably inactivates virtually all NADPH-dependent oxidative enzymes except NOS, supporting the idea that the NADPH-d stain labels NOS neurons selec-tively (11).Therefore, NADPH-d histochemistry provides a simple method for the localization of this novel messenger system in the CNS.NO has been implicated in several physiological as well as pathological roles in the CNS which have been recently reviewed (13).
Despite the growing literature on the participation of NADPH-d/NOS in pathological conditions, at this time no studies have investigated the effects of MeHg on NADPHd-positive neurons (NADPHdn) of the striate cortex.Our study describes the histochemical activity pattern of this cell subgroup in the striate cortex of adult cats intoxicated with methylmercury chloride (MeHgCl).Four cats were fed contaminated fish and milk daily for 2 months.Two animals were used as control.The contaminated animals received 50 ml milk containing 0.42 µg MeHg and 100 g fish containing 0.03 µg MeHg.Two intoxicated and 1 control cats were perfused 24 h after the last dose (group 1).The other 3 cats were perfused 6 months later (group 2) to compare the short-and long-term effects of MeHg.Perfusion was performed transcardially under deep anesthesia with intramuscular injection of ketamine (11 mg/kg body weight) and xylazine (0.55 mg/kg body weight) with 0.9% NaCl, 4% paraformaldehyde in 0.1 M sodium phosphate buffer, pH 7.2-7.4,followed by sucrose/glycerol cryoprotective solutions (10, 20 and 40%).After perfusion, sections of 200 µm were prepared with a freezing microtome.All sections from the different animals were processed under the same incubation conditions for diaphorase histochemistry procedures using the malic enzyme method (9).Qualitative and quantitative analyses were done using photomicrographs and camera lucida drawings.A microcomputer program (Autocad) was used to measure soma and dendritic field area.Dendritic branch counts were performed from camera lucida drawings for control and intoxicated animals (N = 80).Average, stan-dard deviation and Student t-test were calculated for each data group.The concentrations of mercury in milk, fish and brain tissue were measured by acid digestion of samples, followed by reduction of total Hg in the digested sample to metallic Hg using stannous chloride followed by atomic fluorescence analysis.
The morphologic pattern of NADPHdn in the control animals was similar to that already described elsewhere (14).The final concentration of MeHg in the visual cortex was 3.2 µg/g wet weight brain tissue.No differences in NADPHdn in gray matter were found between groups 1 and 2 for all parameters analyzed (P>0.05).Analysis of NADPHdn in the white matter showed a reduction of the number of distal branches (Figure 1) in group 2 (P<0.05) and of the dendritic area field (Figure 2).However, morphological alterations in NADPHdn in the white and gray matter were virtually impossible to distinguish by qualitative light microscopy analysis.The enzyme activity in the neuropil was considerably less intense in the brains of intoxicated animals belonging to group 2 when compared to control (not illustrated).No alterations were observed in group 1.
Different results for NADPHdn were observed in adult cat striate cortex after a short and long time of severe intoxication.The long-term effects of MeHg seem to affect mainly NADPH-diaphorase neurons in the white matter and the neuropil activity in the gray matter.Neurons in the gray matter did not seem to be altered by the schedule of treatment with MeHg.Our results for the gray matter are similar to those reported by other authors who suggest a resistance of NADPH-positive neurons in gray matter to pathologic conditions like neurotoxicity mediated by NMDA receptors underlying stroke and neurodegenerative disorders, such as Huntington, Parkinson and Alzheimer diseases (7)(8).It is unknown why these cells are resistant.This phenomenon may be a char-acteristic of gabaergic neurons, which have been shown to be resistant to toxic levels of NMDA in mouse cell culture (15).Cortical NADPHdn seem to synthesize GABA in the cerebral cortex (16).These inhibitory cells may present some protective "intrinsic safety factor" due to their important role in regulating cerebral excitability (15).Teleologically, this may denote that critical neuronal systems may be less vulnerable to pathological conditions in the CNS.Inhibitory cells constitute a critical neuronal system in cerebral cortex in that, despite their small number (only 20 to 25% of the total neurons in the cerebral cortex), they control the excitability of the entire brain.The resistance of NADPHdn to pathological insults is likely to be due to a number of functional adaptations including -perhaps the major characteristican unusual metabolism.
The fact that the effects of MeHg were observed only after a long survival time of 6 months (group 2) may indicate that a long silent period is necessary for the effects of MeHg to appear.Indeed, it has been reported that some patients with Minamata disease developed the clinical features of severe poisoning after they had stopped eating the The decreased number of more distal branches in the white matter NADPH-diaphorase neurons matched the decreased dendritic field area.However, extensive quantification was necessary to show these results, indicating that NADPH-diaphorase neurons are affected only by high MeHg doses.Other authors have demonstrated a similar fact for NMDA excitoxicity in culture (15).Differences in neuronal vulnerability to pathological insults between white and gray matter may be related to glial function.A number of recent papers have suggested that astrocytes may play a major role in MeHg neurotoxicity (2)(3)(4).Astrocytes show basal levels of metallothioneins (MTs), metal-binding proteins whose biosynthesis is greatly enhanced by various factors including heavy metals such as Zn, Co, Cd and Hg (2).By virtue of their high thiol group (-SH) content, MTs have a very high affinity for MeHg.Indeed, wherever an MeHg compound was identified in biological fluids, it was complexed with SH-containing ligands (2-3).The MeHg-MT complex may keep MeHg in a relatively nontoxic form in astrocytes, thereby protecting both astrocytes and juxtaposed neurons from the cytotoxic effects of the metal (3).Thus, MT induction may cause CNS tolerance to MeHg, at least during the early stages of intoxication.A recent report showed that in the human brain MT immunoreactivity was limited to a subpopulation that probably represented protoplasmic astrocytes (17).This astrocyte type is more characteristic in the gray matter, a possible reason for its lower vulnerability.
We found a striking decrease in diaphorase neuropil reactivity in group 2, six months after MeHg intake.The anatomical substrate for diaphorase neuropil reactivity has not been fully determined.There is evidence from electron microscopy studies that the NADPHd reactivity in lamina 4C of pri-mates is mainly due to presynaptic axon terminals both from intra-and extracortical projections (18).The alterations of the intensity in NADPH-d activity in neuropil could suggest changes in transport processes of NADPH-diaphorase/NOS enzymes to distal branch regions of dendrites and axon terminals.This possibility agrees with the evidence that MeHg acts by altering both protein synthesis and transport (1,5,6).The specific mechanism for these effects is unknown.Alterations in the integrity of microtubules has been reported in a variety of experimental systems (1).The decreased NADPHd neuropil reactivity may also be related to some type of astrocyte dysfunction.The astrocyte-mediated tolerance to MeHg (discussed above) may be an important factor to be overcome during the silent period of MeHg intoxication.Direct lesion of NADPHd axonal terminals may then occur.This possibility is under investigation by our group.Axonal terminals stained by iontophoretic injection of biocytin into the striate of the cats belonging to group 1 did not display qualitatively perceptible morphological alterations.However, we cannot rule out some subtle alterations such as a reduced number of axonal boutons (Gomes-Leal W, Jesus-Silva SG, Oliveira RB and Picanço-Diniz CW, unpublished results).
Further studies are necessary to elucidate how MeHg damages the CNS as well as the specific dehydrogenase responses to MeHg neurotoxicity.However, for a proper estimate of the chronic effects of MeHg, lower doses of the compound should be administered for prolonged periods of time.This approach would be closer to the actual situation occurring in the food chain around the contaminated environment.The fact that MeHg has great affinity for -SH groups may be both an important physiological and pathological finding in the CNS.Vulnerable -SH groups in both glial cells and neuron receptors could be potential targets for the effects of high MeHg levels during chronic expo-sure.Astroglial glutamate transporters carry out most of the functional glutamate transport and are essential for maintaining low extracellular glutamate (19).Higher doses and longer periods of exposure to MeHg may impair astrocyte function in terms of excitatory transmitter uptake (5).Thus, MeHg may damage the CNS through excitotoxic mechanisms like those mediated by NMDA.Indeed, ligand and voltage-gated ion channels represent a plausible early target for the action of MeHg (20).Since a large battery of events are mediated by ion channels, it follows that their disruption by mercurials could lead to potentially deleterious consequences for the cell.Thus, studies on mercury and other metal effects on the ion channels of neurons (including the NADPHdn subpopulation) and astrocytes represent a more sensitive method than histochemistry to study the chronic neurotoxic effects of metals on the nervous system.Such studies are now required to reveal the mechanisms that ensure protection of NADPHdn in the gray matter described in the present study and to determine why NADPHdn in the white matter and neuropil seem to be more vulnerable.
Figure 1 -
Figure 1 -Number of dendritic branches of white matter NADPHd-positive neurons in the striate cortex of the adult cat 6 months after intoxication.Note that the number of more distal dendritic branches is significantly decreased in the treated animal compared to the control one (*P<0.05).
Figure 2 -
Figure 2 -Dendritic field area of white matter NADPHd-positive neurons in the striate cortex of the adult cat, 6 months after intoxication.Note the significant reduction of the values in the treated animal compared to the control one (*P<0.05).Dendritic field area (mm 2 ) 350 300 250 200 150 100 50 0
|
2017-08-15T12:49:01.487Z
|
1998-09-01T00:00:00.000
|
{
"year": 1998,
"sha1": "5152aa16a9934a8005252b7e81f49da848144902",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/bjmbr/a/ww7FJMYS9gm7Lch93ZsDjYN/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5152aa16a9934a8005252b7e81f49da848144902",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
264488495
|
pes2o/s2orc
|
v3-fos-license
|
Low birth weight and reduced postnatal nutrition lead to cardiac dysfunction in piglets
Abstract Heart disease is the leading cause of death in humans and evidence suggests early life growth-restriction increases heart disease risk in adulthood. Therefore, this study sought to investigate the effects of low birth weight (LBW) and postnatal restricted nutrition (RN) on cardiac function in neonatal pigs. We hypothesized that LBW and RN would reduce cardiac function in pigs but this effect would be reversed with refeeding. To investigate this hypothesis, pigs born weighing <1.5 kg were assigned LBW, and pigs born >1.5 kg were assigned normal birth weight (NBW). Half the LBW and NBW pigs underwent ~25% total nutrient restriction via intermittent suckling (assigned RN) for the first 4 wk post-farrowing. The other half of piglets were allowed unrestricted suckling access to the sow (assigned NN). At 28 d of age (weaning), pigs were weaned and provided ad libitum access to a standard diet. Echocardiographic, vascular ultrasound, and blood pressure (BP) measurements were performed on day 28 and again on day 56 to assess cardiovascular structure and function. A full factorial three-way ANOVA (NN vs. RN, LBW vs. NBW, male vs. female) was performed. Key findings include reduced diastolic BP (P = 0.0401) and passive ventricular filling (P = 0.0062) in RN pigs at 28 d but this was reversed after refeeding. LBW piglets have reduced cardiac output index (P = 0.0037) and diastolic and systolic wall thickness (P = 0.0293 and P = 0.0472) at 56 d. Therefore, cardiac dysfunction from RN is recovered with adequate refeeding while LBW programs irreversible cardiac dysfunction despite proper refeeding in neonatal pigs.
Introduction
Heart disease is the leading cause of death in adults, with 17 million deaths occurring per year (WHO, 2020).There are several known modifiable risk factors linked to the development of heart disease including sedentary lifestyle, smoking, and poor dietary nutrient intake (Hajar, 2017).However, more recently, studies have confirmed that restricted nutrition (RN) during either intrauterine growth or postnatal growth are risk factor for impaired cardiac development in human and animal models (Katsumata et al., 2000;Kaijser et al., 2008;Fouzas et al., 2014;Fabiansen et al., 2015;Ferguson et al., 2019Ferguson et al., , 2021;;Visker et al., 2020).Globally, 160 million children under the age of five experience a poor nutritive environment leading to growth restriction (Onis and Branca, 2016;Blencowe et al., 2019).Retrospective cohort studies in humans have shown that low birth weight (LBW) is associated with an increased risk of cardiovascular disease in adulthood, which may be exacerbated by poor postnatal nutrition (Osmond and Barker, 2000;Lumey et al., 2007;Kaijser et al., 2008;Thornburg, 2015).Due to the ethical concerns and logistical difficulties in conducting controlled human/infant studies on growth restriction, translational animal models are necessary to determine physiological mechanisms and develop therapeutic countermeasures (Baker, 2008;Odle et al., 2014).
Rodents are commonly used to model human growth-restriction due to their accelerated lifespans, similar physiology, and genetic structure (Clarke, 2002;Bryda, 2013;Yue et al., 2014;Mitchell et al., 2015;Kõks et al., 2016).The literature has shown that growth-restricted mice have smaller hearts, impaired cardiac excitation-contraction mechanics, and reduced cardiac function which manifests as reduced functional capacity (Murça et al., 2012;Visker and Ferguson, 2018;Ferguson et al., 2019;Pendergrast et al., 2020).While this information is important for human health care, a mouse heart continues to develop postnatally while human hearts achieve terminal differentiation during gestation, which could limit translatability of mouse studies to humans.However, pigs are thought to be the most closely related to humans of any animal model other than primates (Baker, 2008) A pig's heart shares more similarities in anatomy and physiology to a human heart and a pig's cardiac morphogenesis is mostly completed by gestational day 42, which is similar to human development (Gabriel et al., 2021).These similarities make pigs an ideal preclinical model for translational relevance for human cardiovascular health.Therefore, the next step is to determine how early life growth restriction influences cardiac function as the pig ages and if refeeding an adequate diet could mitigate cardiac dysfunction.(Baker, 2008;Devaskar and Chu, 2016;Huting et al., 2019).
The aim of the present investigation was to determine the influence of birth weight and postnatal nutrient intake on cardiac function and structure (as measured by echocardiography) in neonatal pigs.The specific objectives were to (1) determine baseline cardiac function and vascular health of weanling piglets that were both naturally occurring low and normal birth weight (NBW)s at 28 d old after adequate or inadequate postnatal nutrition and (2) determine the same cardiac and vascular parameters in LBW and NBW piglets (post-weaning), who have been re-fed with proper nutrition for 4 wk (56 d old).We hypothesized that LBW and RN during the immediate postnatal period would result in poor cardiac development that would lead to cardiac dysfunction in piglets but would be reversed by proper refeeding post-weaning.
Methods
All procedures contributing to this work complied with the ethical standards of the Canadian Council of Animal Care guidelines for the care and use of farm animals in research and were approved by the Animal Research Ethics Board of the University of Saskatchewan (AUP# 20190042).
Animal housing and experimental design
The present investigation is a subset of data collected from a larger investigation by Rodrigues et al. (Rodrigues et al., 2020).Briefly, a nutritive model for LBW pigs was developed as follows: within 48 h after farrowing, piglets were cross-fostered, if required, between sows to standardize litter size to 12-14 piglets/litter.Piglets that were randomly born less than 1.5 kg at birth were considered LBW and greater than 1.5 kg were considered NBW, an identifying convention accepted in the established peer-reviewed literature (Beaulieu et al., 2010).All sows were treated identically and were provided the same commercial swine diet during both gestation and lactation (Rodrigues et al., 2020).Postnatal RN was induced in four piglets per litter (two LBW and two NBW) via intermittent suckling (Berkeveld et al., 2007;Kuller et al., 2007).Intermittent suckling was induced by isolating piglets from the sow for 6 h/d from 08:00 to 14:00 h from day 3 post-farrow until weaning at day 28.This results in a ~25% reduction in feed intake as published previously and estimated in the original study (Berkeveld et al., 2007;Rodrigues et al., 2020).(Berkeveld et al., 2007;Rodrigues et al., 2020).All other piglets were allowed unrestricted suckling access to the sow (normal nutrition [NN]).At the end of the suckling period (day 28), 32 piglets, eight per treatment group were randomly chosen for cardiac assessment (NBW NN n = 3 males, five females; LBW NN n = 4 males, four females; NBW RN n = 4 males, four females; LBW RN n = 4 males, four females).The remaining piglets were weaned onto a commercial nursery diet (Masterfeeds) that was formulated to meet nutrient requirements (Council, 2012) and provided ad libitum until 56 d old.Weaned piglets were housed in groups of 3-6/pen within their treatment group.After 4 wk (day 56), an additional 32 piglets were randomly chosen for cardiac assessment (NBW NN n = 4 males, 4 females; LBW NN n = 4 males, four females; NBW RN n = 4 males, four females; LBW RN n = 3 males, five females).
Echocardiography and vascular ultrasonography
During the nutrient restriction and refeeding phases of the study, piglets were housed at the Prairie Swine Centre (Saskatoon, SK).At days 27 and 55, four piglets from each group were randomly selected, weighed, and transported to the Western College of Veterinary Medicine's Animal Care Unit (Saskatoon, SK) where they were fasted for 24 h prior to echocardiography and ultrasonography measurements.Piglets were anesthetized with 5% isoflurane and maintained under 2% isoflurane during cardiac measurements.No premedication was used, and the palpebral reflex was assessed to ensure piglets were fully under anesthesia.As such, anesthesia maintenance was adjusted as needed for each individual pig based on real-time physiological parameters.Blood pressure (BP), heart rate (HR), and temperature were recorded throughout the procedure.BP was measured at least every 5 min using a high-definition oscillometric BP machine (VET HDO High-Definition Oscillometer, Babenhausen, Germany) on the right forelimb mid-length over the radial and ulnar bones.An average of readings taken during the 20-min sonographic exam for each pig was used to determine the systolic and diastolic pressures.HR was recorded from electrocardiography (ECG) and values reported were an average of all values throughout the procedure.
A SonoSite M Turbo Ultrasound System (Fujifilm Sonosite, Markham ON Canada) was used to obtain all echocardiography and Doppler images.Piglets were placed in left lateral recumbency to obtain two-chamber and four-chamber apical views.The Doppler cursor was placed directly above mitral valve.Using Color mode and Doppler mode in the five-chamber view Isovolumetric Relaxation Time (IVRT) and IVRT as a fraction of cardiac cycles was calculated.E and A Waves were obtained in a two-chamber view (Schwarz et al., 2019).Finally, piglets were flipped to right lateral recumbency to obtain a short axis view of the left ventricle (LV) at the level of the papillary muscles.M-mode in short axis view was used to obtain ventricular wall movement, calculate stroke volume, cardiac output, ejection fraction, and fractional shortening.M-mode was also used to determine ventricular wall thickness and left ventricular volumes (Geva and Velde, 2006).Free wall and interventricular wall movement was qualitatively evaluated based on a five-point scale, with lower scores representing little to no wall movement indicative of wall stiffness.All measurements were calculated and analyzed during the procedure.
The same software (Fujifilm SonoSite M Turbo Ultrasound System) was used to obtain all vascular ultrasonography images.However, there is no doppler analysis available on the Fujifilm SonoSite vascular probe, so a separate program was used for analysis of vascular images (Adobe Premiere Elements 2019).Piglets were placed back into left lateral recumbency, and the left forelimb was used for vascular analysis.The Doppler cursor was placed on the brachial artery proximal to branching.Vessel diameter was measured at baseline (VDt0) and 30 seconds post a 60-second occlusion (VDt90).Images of brachial artery at maximum dilation were pulled from ultrasound video clips using Adobe Premiere Elements 2019 and V max for Doppler waves and average vessel diameter were analyzed with Image Pro 10.
BP was obtained using a high-definition small animal oscillometer BP cuff (VET HDO High-Definition Oscillometer, Babenhausen, Germany) placed over the brachial artery in the right forelimb and recorded every 5 min during the ultrasound examination.Readings over the course of the examination were averaged, with a minimum of three readings with good agreement utilized to establish diastolic and systolic pressures.
Statistics
Statistical analysis was performed using JMP Pro v14.0 (SAS, Cary, NC).Data were evaluated for normality with Shapiro-Wilk's Normality test and outliers were determined from Grubb's Outlier test.Outliers were removed and if variables failed normality, a Log-transformation was used before running a full factorial three-way ANOVA with feed (NN vs. RN), birth weight (LBW vs. NBW), and sex (male vs female) as the variables at 28 and 56 d.Variables that were log-transformed included: cardiac output, cardiac output index, E-wave time integral, LV wall movement, free wall movement, VDt0 velocity, VDt90 velocity, flow-mediated dilation, A-wave velocity and time, IVRT as a fraction, VDT0, corrected EDV and corrected ESV.Echocardiography structural components were run with a covariate of body surface area (BSA) and functional components were run with a covariate of HR.BSA was calculated by 734 × (Body Weight^0.656)(Kelley et al., 1973).No BSA covariate was used on variables already corrected by echocardiogram software.All significant, multiple comparisons were then assessed with a Tukey's HSD post hoc test.The alpha level was set at P < 0.05 for two-way interactions and P < 0.10 for three-way interactions.All data is expressed as mean ± SEM.Figures are condensed for clarity by showing only the significant main or interaction effect.
Cardiac structure and function
All measured cardiac variables were significantly different between 28 and 56 d old, in line with typical growth and maturation data growth data published by Rodrigues et al., (2020).There were no significant differences in end-systolic volume (ESV) or end-diastolic volume (EDV) between birth weight or diet groups at 28 d (Table 1).ESV and EDV were not different between birth weights at 56 d when normalized by body mass, but a sex effect showed females had smaller ESV and EDV than males (Table 2, P = 0.0467, P = 0.0109).LBW pigs had reduced wall thickness during diastole at 28 d but differences in wall thickness during systole (Figure 1, P = 0.0492).At 56 d, LBW pigs had reduced wall thickness during diastole and systole as compared to NBW pigs (Figure 2, P = 0.0293 and P = 0.0472, respectively).
There was no difference in resting HR measures between groups.Stroke volume (SV) was lower in RN pigs when compared to NN pigs at 28 d old (Figure 3, P = 0.0495), but this effect was no longer present when indexed to BSA (Figure 3).At 56 d, SV was not smaller in RN pigs but instead, SV was lower in LBW when compared to NBW pigs (Figure 4, P = 0.0397).When indexed to BSA, differences in SV were not significant between groups (Svi, Figure 4).Absolute cardiac output (Q) was lower in RN pigs when compared to NN pigs at 28 d (Figure 5, P = 0.0451), but when Q was indexed to BSA, Q was not smaller in RN pigs (Qi, Figure 5) Absolute Q was not smaller in RN pigs as compared to NN pigs at 56 d, but was smaller in LBW pigs compared to NBW pigs at 56 d (Figure P = 0.0037).When Q was indexed to BSA (Qi), LBW pigs still had lower Q than NBW (Figure 6, P = 0.0337).Ejection fraction (EF) and fractional shortening (FS) were not affected by birthweight or diet at 28 or 56 d; however, female pigs had greater EF than males at 28 and 56 d (Tables 1 and 2, P = 0.0400 and P = 0.0007, respectively).FS was larger in females at 56 d (P = 0.0009, Table 2).There were no diet or birthweight effects on isovolumic relaxation time (IVRT) (Tables 1 and 2).Female pigs displayed a trend for greater free wall movement than males at both time points (P = 0.0677) but no differences in intraventricular (IV) wall movement were observed (Supplementary Table S1 and S2).
Vascular structure and function
Systolic and diastolic BP were not different between the twotime points.At 28 d of age, all RN pigs had reduced diastolic BP (P = 0.0290), while the male RN pigs also had reduced systolic BP as compared to all other pigs (P = 0.0401, Figure 7).There were no diet or birth weight effects on systolic or diastolic BP at 56 d (Table 2).
There were no age effects on vascular function variables between the two-time points.At 28 d old, there were no birth weight or diet effects on E-wave or A-wave peak velocity (Figure 8).RN had a decreased E/A ratio compared to NN at 28 d (Figure 8, P = 0.0062).RN pigs no longer had a reduced E/A ratio as compared to NN pigs at 56 d (Table 2).E-wave and A-wave velocity time integrals were not different between diet, birth weight, or sex groups at either time point (Supplementary Table S1 and S2).
No diet or birth weight effects were observed on flow-mediated dilation at either 28 or 56 d (Supplementary Table S1 and S2).At 28 d LBW pigs had smaller brachial artery diameter and vessel peak blood velocity time integral at baseline but no differences at t90 (Supplementary Table S1).At 56 d LBW pigs had smaller brachial artery vessel diameter at baseline and t90 than NBWpigs (Supplementary Table S2).
due to altered cardiac morphology (Fouzas et al., 2014).Additionally, poor cardiac health in livestock could hinder pork production if the pigs die before reaching the market.Thus, the issue of poor cardiac health is detrimental to human health and longevity, as well as agricultural production.The present investigation examined the effects of LBW and RN on cardiac structure and function in preweaning and re-fed piglets.The most important findings were (1) RN pigs had transient reduced cardiac function (E/A ratio and DBP) at 28 d old that was reversed once refeeding occurred and (2) LBW pigs presented with permanent cardiac dysfunction at 56 d (reduced LV volume, wall thickness, and Q) that were not recovered by refeeding.
From the original study by Rodrigues et al. (Rodrigues et al., 2020), LBW pigs maintained a smaller body mass than NBW pigs throughout the study, indicating a permanent stunting of growth in LBW pigs, which is associated with increased disease risk (Rodrigues et al., 2020).In contrast, the RN pigs were smaller than NN at 28 d old but were not different at 56 d, likely due to catch-up growth, defined as a compensatory accelerated growth after a period of growth inhibition, during the refeeding phase.Furthermore, Rodrigues et al. (Rodrigues et al., 2020) demonstrated reduced heart weight in both LBW and RN pigs at 28 d.However, at 56 d the RN pigs' heart mass increased, indicating cardiac-specific growth, while the LBW pigs still had a smaller heart weight than NBW (Rodrigues et al., 2020).
The lack of differences when LV volumes were normalized to body mass highlights the size of the heart was appropriate for the smaller LBW pigs.However, diastolic wall thickness was reduced in LBW pigs which agrees with earlier growth-restricted mice studies demonstrating reduced wall thickness and chamber volumes due to decreases in cardiomyocyte size and nucleation (Cohen et al., 2016;Ferguson et al., 2019).The reduction in wall thickness in LBW pigs is likely from smaller cardiomyocytes and is indicative of LV dilation to compensate for weaker cardiac muscles and to prevent a decrease in systolic function (Murça et al., 2012;Japp et al., 2016).LV dilation is a thinning of the ventricular walls and enlargement of the LV chamber, which is a well-recognized precursor to cardiac dysfunction and heart failure (Vasan et al., 1997).Since the pigs in this study are still juvenile, it is possible the LBW pigs will experience worsening cardiac function with an increased risk of heart failure in adulthood, due to morphological changes in the heart from early life.Further evidence for worsening cardiac function with aging comes from a recent study by Wellington et al. that reported impaired glucose tolerance in LBW piglets (Wellington et al., 2022).Wellington's study suggests LBW pigs rely on fatty acid oxidation for energy due to an inability to metabolically switch to glucose metabolism due to growth restriction (Wellington et al., 2022).If true, this would support the current study where the heart is able to function seemingly normal in early life (as the heart relies primarily on fatty acid oxidation) but, with aging, glucose intolerance would disrupt cardiac function (Karwi et al., 2018).
There was no difference in resting HR between groups ensuring differences were due to interventions and not due to differences in contraction rates or anesthesia effects (Conrad et al., 1982;Lang et al., 2015;Wang et al., 2016).LBW pigs had reduced Qi at 56 d old, indicating reduced cardiac function as they aged.Previous literature is divided on what changes early life growth restriction have on systolic function and when they occur.Several studies have found LBW increases IVCT and FS but reduces SV to maintain a normal Q with a higher HR (Cohen et al., 2016;Ferguson et al., 2019), while others have reported no change at all in systolic parameters at rest (Shoukry et al., 1986;Fabiansen et al., 2015).Interestingly, the LBW pigs in this study did not show any changes in IVRT, FS, or SV and as such presented only with a decrease in Qi.Considering an anesthetic effect on HR, the reduction in Qi is likely related to the signs of LV dilation in LBW pigs, as a weaker dilated LV will not be able to pump efficiently and can get worse with aging (Akasheva et al., 2015).Thus, it is important to note the early age (56 d) at which the pigs in the current study were examined and the possibility of worsening cardiac dysfunction with maturity.
In contrast to the growth-restricted pigs in the current study, other animal models and human studies of LBW and RN have reported increased IVRT indicating ventricular stiffness and reduced diastolic function in adulthood (Fouzas et al., 2014;Fabiansen et al., 2015;Ferguson et al., 2019;Pendergrast et al., 2020;Visker et al., 2020).Since our study did not allow the pigs to age into adulthood, this may be one reason why IVRT was preserved in our pigs.Additionally, the use of different models to induce LBW and the difference in nutrition restriction methodologies (total vs. macronutrient restriction) can induce different physiological responses (Cohen et al., 2016;Dai et al., 2021).It is also possible that IVRT is not altered at rest, but only when the heart is under exertional stress (i.e., physical stress, pharmaceutical stress) is IVRT prolonged (Schmitz et al., 2004).Since our study only investigated parameters at rest, we cannot exclude the possibility that IVRT would be prolonged during exertion.Although IVRT can indicate cardiac stiffness and dysfunction, the E/A ratio is also an important indicator of impaired filling and diastolic dysfunction (Kossaify and Nasr, 2019).Thus, the overall decrease in the E/A ratio in all RN pigs, indicated poor passive ventricular filling and diastolic dysfunction (E/A < 0.8) (Kuznetsova et al., 2009;Cohen et al., 2016).Importantly, this did not persist at 56-dold indicating adequate refeeding during a growth period was able to reverse cardiac dysfunction in RN pigs.It is also possible aging itself leads to growth in the pigs as 28 to 56 d is a post-weaning growth period.Despite not having a group continually RN, evidence in other animals suggests groups that are restricted longer (throughout life) than the current study do not recover from cardiac dysfunction as a result of aging (Marshall et al., 2022).It is likely during the refeeding period, RN pigs experienced normal developmental growth as the RN pigs' body mass and heart mass were no longer smaller than the NN pigs at 56.Thus, refeeding during a period of growth was able to improve the E/A ratio through increased body mass and heart development and maturation (Harada et al., 1996).
In contrast to the growth-restricted pigs in the current study, other animal models and human studies of LBW and RN have reported increased IVRT indicating ventricular stiffness and reduced diastolic function in adulthood (Fouzas et al., 2014;Fabiansen et al., 2015;Ferguson et al., 2019;Pendergrast et al., 2020;Visker et al., 2020).Since our study did not allow the pigs to age into adulthood, this may be one reason why IVRT was preserved in our pigs.Additionally, the use of different models to induce LBW and the difference in nutrition restriction methodologies (total versus macronutrient restriction) can induce different physiological responses (Cohen et al., 2016;Dai et al., 2021).It is also possible that IVRT is not altered at rest, but only when the heart is under exertional stress (i.e., physical stress, pharmaceutical stress) is IVRT prolonged (Schmitz et al., 2004).Since our study only investigated parameters at rest, we cannot exclude the possibility that IVRT would be prolonged during exertion.Although IVRT can indicate cardiac stiffness and dysfunction, the E/A ratio is also an important indicator of impaired filling and diastolic dysfunction (Kossaify and Nasr, 2019).Thus, the overall decrease in the E/A ratio in all RN pigs, indicated poor passive ventricular filling and diastolic dysfunction (E/A < 0.8) (Kuznetsova et al., 2009;Cohen et al., 2016).Importantly, this did not persist at 56 d indicating adequate refeeding during a growth period was able to reverse cardiac dysfunction in RN pigs.During refeeding RN pigs experienced catch-up normal developmental growth as the RN pigs' body mass and heart mass were the same size as NN pigs at 56 d.Thus, refeeding during a period of growth was able to improve the E/A ratio through catch-up growth, increased body mass, and heart development (Harada et al., 1996).
It is necessary to point out the value of the pig model in cardiovascular research.Pigs share many characteristics with humans not only in anatomy but also in physiology related to excitation-contraction coupling, myofilament composition, and contractile and relaxation kinetics.Pigs even respond to stressors (i.e., exercise) similarly to humans.Thus, the appropriate use of pig models has the potential to accelerate translation of data obtained to human infants and adults.Additionally, the pig is an agriculturally important livestock species, of which the pork industry has found a leading cause of deficits in pork production (i.e., pig deaths) are due to preexisting cardiac abnormalities.LBW has increased in pig offspring due to larger litter sizes and it is important to understand how nutrition effects each aspect of pig development.The current investigation not only adds evidence that pigs are a good model for translational studies but also contributes novel information for the pork production industry to focus on early feeding strategies and interventions (Shen et al., 2012;Heo et al., 2013).
Limitations
As with many translational animal model studies, there are limitations to the study execution.Although a pig's cardiovascular system is very similar to humans, it is still not a human system.Particularly, a pig's cardiac growth does not allometrically scale with overall growth which leads to sudden cardiac death in pigs (Essen et al., 2018).However, this is an issue in larger pigs and is unlikely to effect early-life studies.This study was designed to have medium effect size with beta of 0.80 and alpha of 0.05, which is adequate for agricultural body mass studies (Rodrigues et al., 2020).Since this is a post hoc analysis of the previous study, we were adequately powered to achieve a medium effect size (Faul et al., 2007(Faul et al., , 2009)).To achieve a small effect size, 22 pigs per group would be needed; however, because of the expense of raising pigs, the current investigation adequately provides evidence for trends in cardiac function and warrants further research to continue.Due to the invasive nature of the larger study (Rodrigues et al., 2020), the pigs measured on day 28 are not the same pigs measured on day 56.BP measurements could be under or over-estimated in this study based on cuff size as cuff was fit based on closest cuff to fit 40% of limb circumference.Although some of our results conflict with previous literature, these differences are likely due to the methods of undernutrition and the LBW cutoff.Our study included pigs weighing ≤1.5 kg as LBW while other studies used ≤1.2 kg for LBW, but this was based on characteristics of the offspring population and previously established birth weight categories (Beaulieu et al., 2010).It is important to note that all our cardiac measures were taken under anesthesia and at rest, and some growth-restricted literature has shown normal cardiac function at rest but pathological dysfunction under stress (Drenckhahn et al., 2015;Visker and Ferguson, 2018).Thus, future studies should aim to examine cardiac function under stress as well.
Conclusion
Our findings support the hypothesis that both LBW and RN offspring are linked to poor cardiac development, albeit with different effects for each insult.Poor cardiac development linked to LBW appears to be permanent as refeeding was not able to reverse cardiac dysfunction in the pigs at 56 d.Meanwhile, poor cardiac development is transient in RN pigs as refeeding was able to reverse cardiac dysfunction by 56 d.More research is needed to understand why diastolic BP and passive filling are most affected by LBW in early life by investigating cardiac structural (i.e., fibrosis) and functional changes (i.e., excessive sympathetic tone).
Figure 1 .
Figure 1.Left ventricle wall thickness at 28 d old.(A) low birth weight (LBW) pigs had thinner diastolic walls (P = 0.0492), (B) No differences existed in systolic wall thickness (P > 0.05).The echocardiography measures were analyzed with a 3-way full factorial ANOVA.Figures are condensed for clarity by showing only the significant main or interaction effect.Data are presented as mean ± SEM.Differences between groups (P < 0.05) are denoted by '*'.normal birth weight (NBW) NN n = 4 males, four females, LBW NN n = 4 males, four females NBW restricted nutrition (RN) n = 4 males, four females, and LBW RN n = 3 males, five females.
Figure 2 .
Figure 2. Left ventricle wall thickness at 56 d old.(A) low birth weight (LBW) pigs had thinner diastolic walls (P = 0.0293) and (B) thinner systolic walls as compared to normal birth weight (NBW) pigs (P = 0.0472).The echocardiography measures were analyzed with a 3-way full factorial ANOVA.Figures are condensed for clarity by showing only the significant main or interaction effect.Data are presented as mean ± SEM.Differences between groups (P < 0.05) are denoted by '*'.NBW NN n = 4 males, four females, LBW NN n = 4 males, four females NBW restricted nutrition (RN) n = 4 males, four females and LBW RN n = 3 males, five females.
Figure 3 .
Figure 3. Stroke volume at 28 d old.(A) restricted nutrition (RN) pigs had smaller stroke volumes than NN pigs (P = 0.0495) (B) When normalized to body surface area, there were no differences in stroke volume (P > 0.05).The echocardiography measures were analyzed with a 3-way full factorial ANOVA.Figures are condensed for clarity by showing only the significant main or interaction effect.Data are presented as mean ± SEM.Differences between groups (P < 0.05) are denoted by '*'.normal birth weight (NBW) NN n = 4 males, four females, low birth weight (LBW) NN n = 4 males, 4 females NBW RN n = 4 males, four females and LBW RN n = 3 males, five females.
Figure 4 .
Figure 4. Stroke volume at 56 d old.(A) low birth weight (LBW) pigs had smaller stroke volumes than normal birth weight (NBW) pigs (P = 0.0397) (B) When normalized to body surface area, there were no differences in stroke volume (P > 0.05).The echocardiography measures were analyzed with a three-way full factorial ANOVA.Figures are condensed for clarity by showing only the significant main or interaction effect.Data are presented as mean ± SEM.Differences between groups (P < 0.05) are denoted by '*'.NBW NN n = 4 males, four females, LBW NN n = 4 males, four females NBW restricted nutrition (RN) n = 4 males, four females and LBW RN n = 3 males, five females.
Figure 5 .
Figure 5. Cardiac output at 28 d old.(A) restricted nutrition (RN) pigs had smaller cardiac output than NN pigs (P = 0.0451) (B) When normalized to body surface area, there were no differences in cardiac output (P > 0.05).The echocardiography measures were analyzed with a 3-way full factorial ANOVA.Figures are condensed for clarity by showing only the significant main or interaction effect.Data is presented as mean ± SEM.Differences between groups (P < 0.05) are denoted by '*'.normal birth weight (NBW) NN n = 4 males, four females, low birth weight (LBW) NN n = 4 males, four females NBW RN n = 4 males, four females and LBW RN n = 3 males, five females.
Figure 6 .
Figure 6.Cardiac output (Q) at 56 d old.(A) Absolute Q was smaller in low birth weight (LBW) pigs compared to normal birth weight (NBW) pigs (P = 0.0037), (B) LBW pigs still had lower Q than NBW when Q was indexed to body surface area (P = 0.0337).The echocardiography measures were analyzed with a 3-way full factorial ANOVA.Figures are condensed for clarity by showing only the significant main or interaction effect.Data is presented as mean ± SEM.Differences between groups (P < 0.05) are denoted by '*'.NBW NN n = 4 males, four females, LBW NN n = 4 males, four females NBW restricted nutrition (RN) n = 4 males, four females and LBW RN n = 3 males, five females.
Figure 7 .
Figure 7. Blood pressure (BP) at 28 d old.(A) Male restricted nutrition (RN) pigs also had reduced systolic BP as compared to NN pigs (P = 0.0401) (B) All RN pigs had reduced diastolic BP (P = 0.0290).The vascular ultrasonography measures were analyzed with a 3-way full factorial ANOVA.Figures are condensed for clarity by showing only the significant main or interaction effect.Data are presented as mean ± SEM.Differences between groups (P < 0.05) are denoted by '*'.normal birth weight (NBW) NN n = 4 males, four females, low birth weight (LBW) NN n = 4 males, four females NBW RN n = 4 males, four females and low birth weight (LBW) RN n = 3 males, five females.
Figure 8 .
Figure 8. Diastolic function at 28 d old.(A) There were no birth weight or diet effects on E-wave (P > 0.05), (B) There were no birth weight or diet effects on A-wave peak velocity (P > 0.05).(C) restricted nutrition (RN) pigs had a decreased E/A ratio compared to NN at 28 d (P = 0.0062).The vascular ultrasonography measures were analyzed with a 3-way full factorial ANOVA.Figures are condensed for clarity by showing only the significant main or interaction effect.Data are presented as mean ± SEM.Differences between groups (P < 0.05) are denoted by '*'.normal birth weight (NBW) NN n = 4 males, four females, low birth weight (LBW) NN n = 4 males, four females NBW RN n = 4 males, four females and LBW RN n = 3 males, five females.
Table 1 .
Echocardiography and sonography measures at age 28 d 1
Table 2 .
Echocardiography and sonography measures at age 56 d 1
|
2023-10-27T06:17:29.490Z
|
2023-10-25T00:00:00.000
|
{
"year": 2023,
"sha1": "afa85fd2b86989766ae0e64ff293154a5689035b",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jas/advance-article-pdf/doi/10.1093/jas/skad364/52556153/skad364.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c72530062a8694363bb4be8ad72dc1194fd36c1a",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
266854939
|
pes2o/s2orc
|
v3-fos-license
|
Development of Basic Tissue Histology Atlas to Improve Student Motivation and Learning Outcomes on Animal Tissue Structure Materials in High School
Abstract
INTRODUCTION
Learning media is a tool that facilitates teachers and students in the teaching and learning process.Learning media can also be an intermediary for teachers in conveying information or learning materials.The existence of learning media today, especially biology learning media, is needed by both teachers and students.If you look at the characteristics of biological material, you can find many materials that are on average verbalistic in nature and very few presentations of images that can support the explanation of the material.
Based on the results of the interviews, the teacher found it difficult to carry out observations in the laboratory because of the lack of facilities such as microscopes and the absence of preserved preparations provided by the school, as well as the lack of time to make observations due to the large number of materials that students had to learn, especially biology material in class XI.This situation not only makes it difficult for teachers, but students who cannot understand properly how the actual shape of animal tissue that composes the organs of the animal and human bodies.
Based on the curriculum mandate in KD (Kompetensi Dasar/Basic Competences) 3.4, previous students were able to observe the structure of animal cells using a microscope and observation materials that can be easily found and presented, such as the epithelium on the human cheek and in plant cells by using observation material in the form of a thin membrane on red onions.Meanwhile, in observing animal tissue, the teacher can only facilitate students with pictures of animal tissue which are actually less representative according to the teacher from Google or the results of the workshop that are displayed through power points to be shown to students.
The results of filling out the questionnaire on the aspect of the level of learning motivation, the average number of students answered was still low.This is supported by the results of observations made by researchers, there are some students who are still lazy and busy themselves when the teacher is giving the material.The attitude of these students due to low motivation to learn finally has an impact on the results of student test scores which on average are still lacking and some students do not reach the KKM (Kriteria Ketuntasan Minimum/Minimum Criteria of Mastery Learning).The existence of motivation in learning certainly affects student learning outcomes.
The Basic Tissue Histology Atlas deserves to be developed as a learning media because it can overcome the limited time for teachers to make observations or identify.The use of Histology Atlas in learning biology is able to overcome the problems caused by the lack of learning facilities such as microscopes and preserved preparations.Based on the facts and existing problems, it is necessary to develop a Basic Tissue Histology Atlas that can be used as a learning medium so that it can overcome the limitations of teachers in learning material on animal tissue structures in high school.
RESEARCH METHODS
This research was conducted at SMA Negeri Gondangrejo in the even semester of the 2020/2021 school year.The design of this research is Research and Development (R&D) using the Sugiyono development model.The research subject was students of grade XI SMA Negeri Gondangrejo which included 30 students as subjects of small-scale trials and 35 students as subjects of large-scale trials.The method of data collection was carried out by questionnaires and tests.The research instrument used was a teacher and student response questionnaire and a learning motivation questionnaire, with 20 multiple choice questions to measure the improvement in student learning outcomes before and after using the atlas.Data analysis includes analysis of atlas validity, teacher and student responses, learning motivation using the percentage descriptive method and improving student learning outcomes with the N-gain test.
RESULTS AND DISCUSSION
The purpose of this study was to analyze the characteristics, validity, applicability, and effectiveness of the Basic Tissue Histology Atlas to improve students' motivation and learning outcomes on the material of animal tissue structure.Atlas characteristics can be seen from each component in the atlas which includes atlas cover design, foreword, Table of contents, atlas content design, bibliography, and atlas index.The success of measuring validity, applicability, and effectiveness can be seen from the achievement based on predetermined indicators, namely, valid if obtaining a total percentage of at least 71% with valid criteria, can be applied if obtaining a minimum percentage of teachers and students 81% criteria can be applied, effectively increasing motivation study with a minimum percentage of 75% classically with motivated and highly motivated criteria, and student learning outcomes obtained classical completeness, namely 75% of students reached KKM (Kriteria Ketuntasan Minimum/Minimum Criteria of Mastery Learning) 75 and learning outcomes increased with moderate criteria.
Characteristics of Basic Tissue Histology Atlas
Characteristics of the Basic Tissue Histology Atlas developed can be seen from several components contained in this atlas including the atlas title located on the cover page, foreword, Table of contents, atlas contents section, bibliography, and index.Atlas cover design is an important part of the atlas, because it contains an overview of the contents of the atlas.Therefore, the illustrations used must also reflect the contents of the atlas.The illustration used on the cover of the atlas by taking one of the pictures of the preparations, namely skin preparations as an illustration.
The foreword in this atlas is structured like the foreword in general in a written work.Presentation of the foreword is based on the aspect of the purpose of writing the atlas, for whom the atlas was prepared or written, and expressions of gratitude.The Table of contents is a section that contains a collection of titles for chapters written in a work, so that it can be used as a guide or map to make it easier for readers to find information based on the title and the intended page number.
The design of the content of the atlas displays presentations that are able to attract learning motivation for students.This can be seen from the images that are displayed in full color, neatly and systematically arranged, the presentation of sequential material from the basic epithelial tissue, then connective tissue, muscle tissue and nervous tissue.This systematic presentation can help students process information in a coherent manner so that they can make connections between networks with one another.
The bibliography is important to include in the writing because it contains reference sources used by the author, so that if the reader wants to know more, they can cite the sources listed and look for them on online search sites or google.There is no special design in the index, it is written and compiled using the Aristotelian style with a modern format.The writing of the index in this atlas contains scientific words or terms and important terms that characterize the material of animal tissue structure.
Validity of Basic Tissue Histology Atlas
Product validation was carried out by filling out a Basic Tissue Histology Atlas validation questionnaire by media and material experts.The result of the final validity of the product that has been developed is the average percentage value of the validation value by media experts and material experts.The following presents the results of the atlas validity assessment by media experts in Table 1. 1, it is known that media experts assess the validity of Basic Tissue Histology Atlas based on two aspects, obtaining a score of 83 on the graphic aspect and a score of 18 on the presentation aspect.Each aspect gets a percentage of 94.3% on the graphic aspect and 90% on the presentation aspect.The percentage of total media validity obtained from the validator is 93.5% with very valid criteria.
The graphic aspect is divided into several indicators, namely atlas size, atlas cover design and atlas content design.A high percentage of validity indicates that the atlas made in accordance with the ISO standard size (The International Organization for Standardization) is B5 with a size of 176 x 250 mm.Atlas cover design pays attention to the appearance of layout elements with harmonious colors that can clarify the material.The use of illustrations on the cover is able to reflect the content and characteristics of the atlas material.In addition, the atlas cover design also pays attention to the balance or proportionality of the shape, font size, illustration size, and the color of the letters used.
The design of the content of the atlas includes the title of the chapter, the title of the picture of the preparation, the description of the picture of the preparation, and the complete page number.The description of the images presented is able to clarify the presentation of the material supported by a color composition that attracts reading interest.Placement of image captions adjacent to the image so that it is easier for students to understand.The addition and placement of illustrations in the atlas does not interfere with important information in the material such as titles, image descriptions, and page numbers.The ease of use of the developed atlas is also seen in the consistent use of symbols or icons and the variety of letters used is not excessive, so that it does not interfere with students' reading focus.Based on the presentation aspect, the atlas displays the right center of view for the title so that it can become the initial attraction for the reader.The presentation of the index on the atlas received a positive response from the validator, because the index included was very helpful for students in finding a list of words or terms that had been completed with page numbers and arranged alphabetically.2, it is known that the material validity assessment by material experts is based on three aspects, obtaining a score of 44 on the aspect of material feasibility, a score of 33 on the language feasibility aspect, and a score of 3 on the contextualization aspect.Each percentage in sequence on the three aspects above are 91.7%,91.7% and 75%.The percentage of the total validity of the material obtained from the validator is 90.9% with very valid criteria.A high percentage indicates that the atlas developed has met the aspects of material feasibility, language, and contextualization.
The first aspect that is assessed is the feasibility of the material.The aspect of material feasibility obtained a percentage value of 91.7%.This shows that the material presented is in accordance with the KD (Kompetensi Dasar/Basic Competences) achievement determined by the curriculum.The presentation of the atlas material is adjusted to the level of needs of high school students.The material presented does not cause many interpretations and is in accordance with the concepts that apply in the biological sciences.The accuracy of the images in the atlas can be seen in the presentation of images that can visualize the structure of the tissue making up the organs of the animal and human organs in a concrete way according to the concept of biological science.This is in accordance with the opinion of Mustika (2015), image or photo-based media has advantages because it is visually concrete so that it can display objects according to their original form and not verbalistic.The presentation of the material in the atlas is able to stimulate curiosity and encourage students to study and seek information thoroughly.
The second aspect in assessing the validity of the material is linguistic feasibility.The linguistic aspect obtained a percentage of 91.7%.The indicators of linguistic feasibility assessed are straightforward, communicative, dialogical and interactive, conformity to the level of student development, and the use of terms or symbols.The use of sentences in the atlas is able to represent the content of information in accordance with good and correct Indonesian rules.The language used in explaining the concept is in accordance with the level of cognitive development of high school students, using simple sentences, easy to understand and not convoluted so that it can evoke a sense of pleasure when students read it.The term language does not only refer to written or spoken forms, but also in the form of symbols (Wicaksono, 2016).Therefore, the use of symbols in this atlas can also facilitate learning communication between teachers and students.
The third or last aspect is the aspect of contextuality.The percentage on the aspect of contextuality gets a score of 75%.This shows that the use of atlas is able to encourage students to make connections between the knowledge they have and their application in everyday life related to the material structure of animal tissues.The contextuality of the material in learning can help students combine knowledge with action and application in life (Parhan, 2018).
After the assessment, the experts also provided suggestions for improvement of the Basic Tissue Histology Atlasthat had been developed.After the basic tissue-based histological atlas media were repaired and assessed by media experts and material experts, the results of the two experts' assessments were then averaged.The following presents the average results of the assessment of the validity of the Basic Tissue Histology Atlasby the two experts in Table 3.Based on Table 3 it is known that the percentage of validity values by media experts and material experts get different values.The percentage value of validity by media experts is 93.5% and by material experts is 90.9%.After being averaged, the percentage of the final validity value of the Basic Tissue Histology Atlas learning media was 92.2% with very valid criteria.
Applicability of Basic Tissue Histology Atlas
Assessment of the applicability of the Basic Tissue Histology Atlas as a learning medium was obtained through filling out questionnaire responses by biology subject teachers and students.Data on teacher and student responses were obtained during small-scale trials.The teacher response questionnaire was filled out by one biology subject teacher and the student response questionnaire was filled out by students in one class consisting of 30 students.
1) Teacher's Response
The teacher's response questionnaire sheet contains several questions related to the teacher's response to the basic network-based histology atlas media which consists of three aspects, namely, material aspects, aspects of applicability and ease of use, and language aspects.Following are the results of the assessment of the applicability of the Basic Tissue Histology Atlas through the teacher's response questionnaire in Table 4. Based on the results in Table 4, it is obtained a score of 49 on the material aspect, a score of 23 on the aspect of applicability and ease of use, and a score of 23 on the language aspect.The percentages obtained sequentially in each aspect are 94.2%, 95.8%, and 95.8%.The total percentage obtained from the teacher's response assessment is 95% with the criteria that can be applied.
The first aspect is that the material gets a percentage value of 94.2%.The high score percentage shows that in terms of material, Basic Tissue Histology Atlas can be applied in learning.The material component that received a positive response from the teacher was the suitability of the material with the standard of achievement of KD (Kompetensi Dasar/Basic Competences) and the level of needs of high school students.The presentation of images in the atlas is able to support students' understanding of the material structure of animal tissues.This is supported by the results of Karyati's research (2017), concluding that image-based print media as an alternative learning media that is effective in increasing students' understanding so that they can improve their learning outcomes.Pictures and compositions of writing with attractive color arrangements are able to foster students' reading interest in Basic Tissue Histology Atlas.
The second assessment aspect is applicability and ease of use.The percentage value obtained in the second aspect is 95.8%.Applicability and ease of use are shown by the positive response of teachers to the atlas which is easy and practical to use in learning biology.The use of Basic Tissue Histology Atlas is able to overcome the limitations of time and senses in learning, especially material on animal tissue structures.Maulida (2013) states that atlases have advantages, namely they are visual, can overcome the limitations of space, time, and senses, and provide easy access for both teachers and students.In addition, the use of the atlas is also able to overcome the limitations of facilities such as microscopes and preserved preparations so that if it is not possible to conduct observations, then as an alternative, teachers can do learning with this atlas.
The third aspect is language.The teacher's response to the aspects of the language used in this atlas is very important.This is because later teachers and students will use this atlas as a learning medium, so that the ease of communication used in writing on the language aspect can streamline the learning process.The assessment of the language aspect by the teacher obtained a percentage value of 95.8%.The high percentage is indicated by the positive response of the teacher with the use of simple, uncomplicated, and easy to understand language, and the use of language in the atlas able to explain the delivery of concepts according to the level of students' cognitive development.
2) Student's Responses
Besides being obtained from the teacher's responses, the applicability value of the Basic Tissue Histology Atlas was also obtained from the results of student responses.The student response questionnaire sheet contains several questions with four aspects of assessment, namely aspects of interest, aspects of applicability and ease of use, aspects of material, and aspects of language.Following are the results of the assessment of the applicability of the Basic Tissue Histology Atlas through student response questionnaires in Table 5. Completing student responses through questionnaires obtained a classical percentage of 100%.The percentage of grades is classically obtained from the number of students who provide responses with good and very good criteria.
Overall, the Basic Tissue Histology Atlas that has been developed has several advantages, including being easily accessible by teachers and students because it is a print media so it does not require energy in its use such as electricity and internet networks.Easy to learn anytime and anywhere and under any circumstances because it does not require other means.This is in accordance with the theory stated by Indriana (2011), that print media has advantages in terms of ease of access and without the help of other means so as not to worry as online media if there is a power outage or internet network.
1) Learning Motivation
Assessment of students' learning motivation is done by filling out questionnaires by students.The assessment questionnaire used is based on indicators of learning motivation adapted from Uno (2016) and Sardiman (2012).The frequency distribution of student learning motivation is presented in Table 6.Based on the results of Table 6, it is known that classical student learning motivation is 82.8% with motivated and highly motivated criteria.The percentage of classical learning motivation has met the indicators achieved in this study, namely 75% in the learning process with motivated and highly motivated criteria.Total of 4 students out of 35 students were in the highly motivated category, 25 students out of 35 students were in the motivated category, and 6 students out of 35 students were in the moderately motivated category.The learning motivation that is expected in this study is the student's interest in the Basic Tissue Histology Atlas that has been developed, so that it can increase student enthusiasm in learning and linearly improve student learning outcomes.
Interest in the learning process is the initial mover of students in learning to achieve the desired learning objectives.This is in accordance with the results of Fauziah's research (2017) which states that an interest in learning is a picture of a student who wants to achieve his goals.Students' interest in learning is needed because when students have an interest in the material, they will try to learn it thoroughly.Motivation is one of the important factors to foster enthusiasm for learning in students.The existence of motivation will be followed by the desire or ideals, so that students who have high learning motivation will understand what will be the goal in learning.
Learning motivation is influenced by intrinsic and extrinsic factors.Intrinsic factors can be in the form of desire and desire to achieve success, encouragement of learning needs, and desired hopes or ideals.While external factors are in the form of awards, a conducive environment, and interesting learning (Uno, 2016).Interesting learning can be created and facilitated by teachers, one of which is by using learning media.The use of learning media is able to create a pleasant learning climate so that it can affect learning motivation.
Motivation and learning are two words that are integrated with each other, both influence each other and cannot be separated.Learning with a strong motivation will certainly have a positive impact on learning outcomes, and vice versa (Maryam, 2016).Motivation arises because of the desire to fulfill needs.The needs of students in learning is a strong motivation for students to achieve maximum achievement.The need for clear and directed learning has an effect in the form of a strong urge to study each material seriously so that the learning process becomes more active and effective.
2) Learning Outcomes
Student learning outcomes were obtained by giving a written test in the form of 20 multiple choice questions.The data obtained are in the form of pretest and posttest scores which are done individually by students, to measure students' cognitive abilities before and after using Basic Tissue Histology Atlas as a medium for learning biology on animal tissue structure material.Data on student learning outcomes are presented in Table 7. 7, it is known that student learning outcomes before and after using Basic Tissue Histology Atlas media have increased.The average percentage of students' classical completeness increased from 20% before using the atlas to 100% after using the atlas.How much increase in student learning outcomes is further analyzed by the Normalized Gain (N-gain) formula.Data on improving learning outcomes is presented in Table 8 below.The use of Basic Tissue Histology Atlas as a learning medium is not only able to increase learning motivation, but also to improve student learning outcomes.This can be seen in Table 7, classically the average percentage of student learning completeness increased from 20% before using the atlas to 100% after using the atlas.Total of 7 students from 35 students completed the initial assessment (pretest) and the remaining 28 students did not complete.After learning using a Basic Tissue Histology Atlas, as many as 35 students or all students completed the final assessment (posttest).The increase in student learning outcomes after the N-gain test obtained a value of 0.67 was in the medium category.Based on these results, the Basic Tissue Histology Atlas is effectively used as a learning medium because it has reached the feasibility indicator specified in this study, namely the student classical posttest results (≥75% of the total number of students achieved a Minimum Criteria of Mastery Learning score of 75) and student learning outcomes increased at least are in the moderate criteria.
The use of learning media is able to facilitate the process of delivering material from the teacher to students, because it can clarify abstract and verbalistic material to be more real and not verbalistic.The use of media in learning is one way for teachers to increase student motivation and learning outcomes.Based on Wahyu's research (2014) stated that learning using media will significantly improve learning outcomes when compared to learning that does not use learning media.The teaching and learning process using the media really attracts students' attention, because learning is done not only monotonously by listening to lectures from Learning outcome criteria Medium the teacher.The use of media greatly helps the effectiveness of the learning process and the delivery of lesson information at that time.
The increase in student learning outcomes as a result of using learning media is shown in the increase in student learning motivation which can indirectly have a positive effect on increasing student learning outcomes.This is because an attractive visualization on a Basic Tissue Histology Atlas with colored images will increase the attractiveness or motivation of students towards learning and the complete arrangement of network images will enrich the histology reference sources so as to make it easier for students to understand and have a positive impact on improving student learning outcomes.The role of motivation in learning is a psychic driving force that is able to encourage students to learn, provide enthusiasm and a sense of pleasure that creates energy for learning.The energy generated by the motivation can influence students to be active in their studies so that it has a positive impact on high learning outcomes and vice versa (Palittin, 2019).
CONCLUSION
Based on the analysis of the results and discussion, it can be concluded that the characteristics of Basic Tissue Histology Atlas are composed of simple to complex tissue and full color display.The Basic Tissue Histology Atlas valid and can be applied as a learning medium, effectively used to improve student motivation and learning outcomes.
Table 1 .
Recapitulation of Validation Results by Media Experts
Table 2 .
Recapitulation of Validation Results by Material Expert
Table 3 .
Recapitulation of the Average Assessment Results by Media and Material Experts
Table 4 .
Recapitulation of Teacher Response Assessment Results
Table 5 .
Recapitulation of Student Response Assessment Results
Table 6 .
Frequency distribution of student learning motivation
Table 7 .
Students Learning OutcomesBased on Table
Table 8 .
Improved Learning Outcomes with N-gain test
|
2024-01-09T16:24:23.808Z
|
2023-08-17T00:00:00.000
|
{
"year": 2023,
"sha1": "af32a0b3c775293101ba2feb530d31f4b0342a53",
"oa_license": "CCBY",
"oa_url": "https://journal.unnes.ac.id/sju/index.php/ujbe/article/download/49119/25001",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d8538bb3b47bef777478b05f018244dd8d7513dc",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
}
|
251427810
|
pes2o/s2orc
|
v3-fos-license
|
Thermal Death Kinetics of Three Representative Salmonella enterica Strains in Toasted Oats Cereal
Several reports have indicated that the thermal tolerance of Salmonella at low-water activity increases significantly, but information on the impact of diverse food matrices is still scarce. The goal of this research was to determine the kinetic parameters (decimal reduction time, D; time required for the first decimal reduction, δ) of thermal resistance of Salmonella in a previously cooked low water activity food. Commercial toasted oats cereal (TOC) was used as the food model, with or without sucrose (25%) addition. TOC samples were inoculated with 108 CFU/mL of a single strain of one of three Salmonella serovars (Agona, Tennessee, Typhimurium). TOC samples were ground and equilibrated to aw values of 0.11, 0.33 and 0.53, respectively. Ground TOC was heated at temperatures between 65 °C and 105 °C and viable counts were determined over time (depending on the temperature for up to 6 h). Death kinetic parameters were determined using linear and Weibull regression models. More than 70% of Weibull’s adjusted regression coefficients (Radj2) and only 38% of the linear model’s Radj2 had values greater than 0.8. For all serovars, both D and δ values increased consistently at a 0.11 aw compared to 0.33 and 0.53. At 0.33 aw, the δ values for Typhimurium, Tennessee and Agona were 0.55, 1.01 and 2.87, respectively, at 85 °C, but these values increased to 65, 105 and 64 min, respectively, at 0.11 aw. At 100 °C, δ values were 0.9, 5.5 and 2.3 min, respectively, at 0.11 aw. The addition of sucrose resulted in a consistent reduction of eight out of nine δ values determined at 0.11 aw at 85, 95 and 100 °C, but this trend was not consistent at 0.33 and 0.53 aw. The Z values (increase of temperature required to decrease δ-value one log) were determined with modified δ values for a fixed β (a fitting parameter that describes the shape of the curve), and ranged between 8.9 °C and 13.4 °C; they were not influenced by aw, strain or sugar content. These findings indicated that in TOC, high thermal tolerance was consistent among serovars and thermal tolerance was inversely dependent on aw.
Introduction
Due to its pervasive presence and its tolerance to different stresses, Salmonella can easily contaminate food products and processing plants. Salmonella is a frequent contaminant of animal origin foods, such as beef, poultry, pork and eggs [1][2][3][4]; it also contaminates fish, shrimp and dairy products [5]. Most of these foods are high-water activity (a w ) products. In recent years, however, foodborne disease caused by Salmonella has also been linked to low a w foods such as grain flours, raw and processed nuts, dried milk, black and red pepper, peanut butter, rice and animal feeds [6][7][8][9][10][11][12][13][14].
Epidemiological reports confirmed that in 2015, Salmonella was the leading bacterial cause of foodborne disease in the United States; it was responsible for 7728 cases, 2074 hospitalizations and 32 deaths [15]. In Europe, the latest EFSA report points out that there were 52,702 confirmed cases of human salmonellosis for the year 2020 [16]. Salmonella causes more than 100 outbreaks of foodborne disease in the US annually, and most of these are caused by poultry and eggs [17]. The increasing number of salmonellosis outbreaks associated with multiple low a w foods has prompted the industry to adopt proactive measures and good practices, including enhanced surveillance and food testing [18].
The reduction of available water, especially though drying, has been a well-established strategy to control bacteria [19], but the increased incidence of low-moisture food-related outbreaks suggests that Salmonella is capable of surviving for long periods of time in dry foods and in low-water activity matrices [20,21]. Many of the foods associated with Salmonella outbreaks are often subjected to heat treatments, such as baking or roasting. This is the case, for example, for puffed cereals, peanut butter or chocolate; however, these foods have been associated with outbreaks [9,10,22], suggesting that thermal treatment of these food products may not be sufficient to kill Salmonella [13,[23][24][25]. Different factors that influence Salmonella heat tolerance in low water activity foods might include the intrinsic structural property of the food matrix, the presence of different microenvironments [26], and the specific composition of the food, such as its salt, sugar and fat content [27][28][29][30]. In the case of toasted oats cereals, manufacturers often include commercial variants that contain sugar.
Some of the studies that determined kinetic parameters of Salmonella inactivation in low a w foods included those concerning dry corn flour, almonds, peanut butter, flour, cocoa, hazelnut and whey protein powder shells [23][24][25]29,31,32]. On dry corn flour, decimal reduction times (D-value) at 49 • C of eight different serovars of Salmonella varied from 0.3 to 9.9 h [24], while D-values at 100 • C for S. Oranienburg and Enteritidis were around 2.5 min in cocoa and between 7 and 11 min in hazelnuts (both at 4% moisture). An increase in moisture (up to 7% moisture) markedly affected their thermal tolerance to D-values at 80 • C for between 5.4 and 7.7 in cocoa and between 2.5 and 4.5 min in hazelnuts. These findings clearly indicated that this pathogen is especially difficult to kill in low a w foods.
In order to study Salmonella's thermal response under these conditions, the traditional linear inactivation model [33] as well as non-linear models such as the Weibull model [34] are often applied. The development of predictive models often requires a comparison of the statistical parameters associated with these models [35]. For low-water-activity foods, recent studies have indicated that a single predictive model does not always fit every condition tested [36]. These studies reported that the Weibull distribution generally provides a better fit than the linear model, but its correct use depends on the experimental design [37].
Despite the occurrence of two salmonellosis outbreaks in toasted oats cereal products in 1998 and 2008 [6,9], this food matrix has never been investigated as a food matrix to determine inactivation kinetics of Salmonella. This study was undertaken to determine the effect of low-water activity on a toasted oats cereal (TOC) matrix, and to evaluate the impact of sugar addition on the thermal inactivation kinetics of three Salmonella enterica subsp. enterica strains of different serovars.
Strains and Culture Preparation
The strains of Salmonella enterica subsp. enterica serovars used in this research were S. Typhimurium E2009005811, S. Tennessee E200700502 and S. Agona. The first two strains were provided by the Minnesota Department of Health; they were isolated from patients linked to the S. Typhimurium E2009005811 and S. Tennessee E200700502 peanut butter outbreaks of 2009 and 2007, respectively [8,10]. The S. Agona strain was originally isolated from an outbreak related to toasted oats cereal from 1998 [6]. The stock cultures of the three serovars were stored in a 1:1 ratio of glycerol and tryptic soy broth (TSB; Neogen, Inc., East Lansing, MI, USA) at −55 • C. The working cultures of each serovar were prepared from frozen stocks and inoculated into TSB, grown overnight at 37 • C and then stored at 4 • C. In order to test the working stocks, they were re-transferred once a week and streaked onto tryptic soy agar (TSA; Neogen, Inc.) containing 0.8 g/L ferric ammonium citrate (Sigma-Aldrich™, St. Louis, MO, USA) and 6.8 g/L sodium thiosulfate. (Acros Organics, Morris Plains, NJ, USA). This medium was formulated to provide a differential but non-selective agar. Periodically, the serovars were also streaked onto bismuth sulfate agar (Neogen, Inc.) and xylose lysine deoxycholate agar (Neogen, Inc.).
Inoculation and Drying Procedure
From working stocks, inoculation cultures were incubated overnight in 40 mL of TSB at 37 • C, and added to bottles containing 360 mL of sterile water; they were gently shaken until a final count of approximately 10 8 CFU/mL was obtained. Twenty grams of a commercially available TOC brand were added to the bottles and mixed by repeated inversion for one minute. TOC samples had an initial (immediately after opening the package) a w value that ranged from 0.18 to 0.30. The inoculated cereal samples were separated with sterile kitchen strainers and spread out on sterile perforated kitchen baking sheets. The baking sheets with cereal were placed in an incubator at 40 • C for 12 to 18 h, in order to facilitate drying. The final weights of TOC samples were verified to be 20.0 ± 0.5 g. The cereal samples were then ground using a sterilized mortar and pestle in a biosafety cabinet, and then placed on aluminum foil trays before water activity equilibration. The particle size of ground TOC was less than 1 mm.
For the treatments involving added sucrose, samples of 7.5 g of commercial food grade powdered sucrose (Domino, Yonkers, NY, USA) were added to cereals that were spread onto baking sheets. Sucrose was mixed into the cereal by folding in small increments, and then mixing the sucrose into the wet cereal with sterile spoons. Once trays were removed from the incubator at the end of the drying period, they were weighed to ensure that TOC samples had a total weight of 26.5 ± 0.5 g, or approximately 25% by weight of sucrose. Sucrose-containing cereal was also ground using the same procedure as described above.
Preparation of Samples for Thermal Inactivation
A total of 26.5 g of TOC or 20 g TOC without sucrose was separated and placed onto separate foil trays. TOC samples were equilibrated to specific water activities by storage for periods between 7 to 12 d in desiccators that contained saturated solutions of lithium chloride (Acros Organics), magnesium chloride (Sigma-Aldrich™) and magnesium nitrate (Acros Organics), in order to attain equilibrium at 0.11, 0.33 and 0.53 a w , respectively. After the storage period, trays containing the cereal were removed and the water activity was measured in order to ensure that the samples were within 0.02 a w of the target value of the specific desiccator that the samples were put into. Water activity of the TOC samples was measured using a water activity meter (Pawkit Model, Decagon Devices, Inc., Pullman, WA, USA) that was calibrated every other day according to the manufacturer's procedure. If the cereal was not within 0.02 a w of the target water activity within 7 to 12 days, the sample was not used.
Sterile 12-cubic-centimeter syringes were used to fill capillary tubes (1.5-1.8 × 90 mm borosilicate glass) with ground TOC by inserting the tubes through the Luer-lock tip of the syringes. Sterile ram rods (118 mm × 1 mm stainless steel) were used to completely fill the capillary tubes when necessary. The TOC-filled tubes were heat sealed using a propane hand torch and placed in a solution of 10% commercial chlorine bleach (5.25% sodium hypochlorite concentration) for at least one minute, in order to sterilize the exterior of the tubes. The average weight of TOC in the capillary tubes was 0.05 g. In order to assure that the water activity of the cereals had not changed, after all capillary tubes had been sterilized, a w was measured using the remaining cereal in the syringe. If the remaining cereal's water activity varied by more than 0.02 a w , none of the capillary tubes were used. For all tested conditions, three strains, three temperatures, three water activities, with and without sucrose were used, and at least two independent experiments were performed.
Thermal Inactivation
All sealed capillary tubes were placed into either an oil bath (High Temp Bath 160 A, Fisher Scientific, Inc., Waltham, MA, USA) or water bath (Isotemp 205, Fisher Scientific, Inc.) depending on the temperature tested, and were calibrated once a month. The water baths were set to the testing temperatures of between 60 to 95 • C, while the oil bath temperatures were set to between 85 and 105 • C. Two capillary tubes were removed at predetermined time intervals and immediately placed in an ice bath for one minute. From the ice bath, the tubes were placed in a solution of 10% bleach and rinsed with sterile water. Individual capillary tubes were then placed in separate, sterilized 24 × 150 mm screw cap test tubes, each containing a magnetic stir bar (25 × 5 mm); they were vortexed until the capillary tubes were ground to release their contents. Ten milliliters of phosphate buffer (PB) were added to the test tubes and mixed for 10 s. These buffer suspensions were serially diluted by transferring 1 mL serially into 9-milliliter PB tubes. Volumes of 0.1 mL from each dilution were spread plated in duplicate, on TSA containing 0.8 g/L ferric ammonium citrate and 6.8 g/L sodium thiosulfate. This growth medium was intended to address the possibility of cell injury caused by heat, instead of using a standard selective Salmonella medium. The plates were incubated for 24 h at 37 • C before counting colonies was performed. Noninoculated TOC samples were routinely tested to determine the count of naturally present organisms capable of producing black precipitate on modified TSA. None of those control samples was positive for hydrogen sulfide-producing microorganisms (detection limit of 100 CFU/g).
The counts of surviving cells were calculated using the aerobic plate count formula from the Food and Drug Administration's Bacteriological Analytical Manual, and adjusted for a 0.1-milliliter plating volume (Maturin and Peeler, 1998). Each capillary tube sample testing was calculated individually and averaged with its replicates.
The log-linear model [33] used is shown below: where N t is the population at time t (CFU/g), N 0 is the population at time 0 (CFU/g), k max is the maximum specific inactivation rate (min −1 ) and the D value = ln10 k max . The Weibull model [34] used is shown below: where N t and, N 0 are as previously described, δ is the time required for the first decimal reduction (min) and β is a fitting parameter that describes the shape of the curve (β > 1 convex, β < 1 concave). In order to evaluate the goodness-of-fit of the two models to the inactivation data, the ad- (6) and (7)), the root mean square error (RMSE) (Equation (13)) and the corrected Akaike information criterion (AIC c ) (Equation (15)) were calculated according to the formulas shown below, for which n is the total number of observations at all time points, m is the number of time points, p is the number of parameters in the model and k = p + 1. where and where where The Pearson correlation coefficient was calculated for each strain using Microsoft Excel 2016, in order to evaluate the correlation between temperatures, a w and sugar on the inactivation parameters δ and β. As a result of the strong correlation between the δ and β values (Couvert, O., Gaillard, S., Savy, N., Mafart, P. and Leguérinel, 2005) the β value was fixed, which allowed us to compare the δ-values and further calculate Z-values for the first decimal reduction (the increase in temperature required to decrease δ-value by one log). For a given strain at each a w , the mean of the β values that was obtained at each temperature for which the data passed the F test (95% confidence interval) was used to obtain a value for the fixed β. The δ-value was then re-estimated using this value.
The Z value was then calculated according to Equation (16) shown below, where δ * is the first decimal reduction for temperature T * : Differences between means of δ-values were determined using Student's t-test with a p < 0.05 significance.
Results
When inoculated samples of TOC were thermally treated, the D-and δ-values obtained for each of the three strains decreased as the temperature increased; however, the extent of the decline was affected by water activity (Figure 1). At an a w of 0.11 and 85 • C, the D-and δ-values for the strains ranged from 148 to 201 min and from 64 to 105 min, respectively. The same thermal treatment using the samples that were pre-incubated at a w 0.33 resulted in a significantly lower range of D-and δ-values of 6-22 min and 0.5-2.9 min, respectively. The effect of low-water activity increasing thermal inactivation rates was not consistently observed at 0.33 a w as compared to 0.53. At 75 • C, δ-values ranged from 16 to 29 min at a w 0.33, and from 5 to 18 min at a w 0.53. The extent of this overlap was greater at 80 • C, with calculated δ-values of 2, 2.3 and 9 min at a w 0.33, and 0.2, 3.2 and 3.6 min at a w 0.53. These values were comparable to δ-values obtained at 100 • C of 0.9, 5.4 and 5.5 min, but at an a w of 0.11 (Figure 1, Tables S1-S3).
In plain TOC, out of 36 individual treatments (3 strains × 3 a w × 4 temperatures), 33 treatments had R 2 adj values that were greater for the Weibull model than for the linear model (Figure 1, Tables S1-S3). When the linear and Weibull models were fitted to the thermal inactivation data, it was possible to observe that the Weibull model also fit the data better, with a smaller root mean square error (RMSE) and smaller corrected Akaike information criteria (AICc) values for all three strains (Table 1). However, for some conditions, the data did not pass the F test (95% confidence interval) as shown in Supplementary Material Tables S1-S3, where the parameters used to generate Figure 1 are shown; those conditions are marked with an asterisk. Overall, the δ-values of the three strains obtained by fitting the Weibull model increased substantially as the water activity declined. This effect is, however, much more evident from a w 0.33 to a w 0.11 than it is from a w 0.53 to a w 0.33. Table 1. Evaluation of the goodness-of-fit of the linear and Weibull models used to describe the inactivation of three Salmonella enterica serovars by means of the root mean square error (RMSE) and the corrected Akaike information criterion (AIC c ) calculated for each water activity and temperature in toasted oats cereal (TOC). When inoculated samples of TOC were supplemented with 25% sucrose, the D-and δ-values obtained for each of the three strains decreased as the temperature increased (Figure 2). At an a w of 0.11 and 85 • C, the D-and δ-values for the strains ranged from 45 to 65 min and from 12 to 22 min, respectively. These values were significantly lower (p < 0.05) than the equivalent values obtained without sucrose at the same temperature-a w combination. The same trend was observed at 95 • C and 100 • C at 0.11 a w . At higher water activity levels, the addition of sucrose did not consistently affect the D-and δ-values for the three strains.
In the samples that contained sucrose and were incubated at an a w value of 0.33 and heated at 85 • C, the D-and δ-values were no more than 10% and 5%, respectively, of the same values determined at 0.11 a w (Figure 2, Tables S4-S6). In contrast to plain TOC samples, however, for all three strains the δ-values were consistently greater at 0.33 a w in comparison to 0.53 a w (p < 0.05). At 0.33 a w , the average δ-values were 32.3, 11.9 and 3.6 min at 75, 80 and 85 • C, respectively, while at 0.53 a w , corresponding δ-values were 8.3, 2.9 and 0.39 min. The Weibull fitting for all TOC treatment samples with 25% sucrose resulted in R 2 adj values greater than 0.84 (Tables S4-S6). For all treatments, R 2 adj values for the Weibull model were consistently higher than for the linear model. The better fit of the Weibull model was also corroborated with smaller RMSE and smaller AICc values for all three strains (Table 2). Table 2. Evaluation of the goodness-of-fit of the linear and Weibull models used to describe the inactivation of three Salmonella enterica serovars by means of the root mean square error (RMSE) and the corrected Akaike information criterion (AIC c ) calculated for each water activity and temperature in toasted oats cereal (TOC) containing 25% sucrose. Table 3 shows the Pearson correlation coefficients for the relationship of a w and sugar, and their effect on the inactivation parameters δ and β. Table 3. Pearson (ρ) coefficients of the correlation between temperatures, a w and sucrose content, with the inactivation parameters δ and β of toasted oats cereal (TOC) with and without sucrose. When the change in temperature required to reduce the δ-value by one log (Z value) for the first log reduction was calculated using a log-linear model for most of the strains in all conditions, the adjusted R 2 values were high (greater than 0.96) for 16 out of 18 conditions. However, the obtained Z values did not change much either with the water activity, the strain or with the addition of sucrose to the TOC. The values ranged from 8.86 • C to 12.04 • C in the absence of sucrose, and from 9.47 • C to 13.45 • C in the presence of sucrose (Table 4). Table 4. Change in temperature needed to obtain a 90% reduction in δ-values (Z-value, • C) for the first log reduction of Salmonella in toasted oats cereal (TOC), as affected by water activity (a w ) and sucrose addition.
Discussion
Although Salmonella enterica is one of the top causative agents of foodborne diseases, the mechanisms by which this pathogen enters and survives through the food production chain and survives food processing are still unclear, especially for low-moisture foods. Under low-water activity conditions, the food matrix, storage conditions as well as duration of storage have been reported to influence Salmonella's ability to survive [38]. Several reports have shown that the low a w typical of dry foods such as peanut butter, flour, cocoa, almonds, hazelnut and spices can also enhance the thermal tolerance of Salmonella cells [23][24][25]31,32,39]. The current study assessed the heat tolerance of three different Salmonella serovars that were previously isolated from outbreaks linked to low water activity foods, at three different low-water activities, with and without the presence of sucrose, for four temperature treatments, using a commercially available toasted oats breakfast cereal matrix which is similar to one of the products associated with the outbreaks.
Our study identified an inverse relationship between heat resistance of Salmonella and low-water activity in a food matrix. The inverse correlation between heat resistance and water activity appears to be greater in Salmonella than in other organisms reported in the literature [24,40,41]. We observed very similar thermal inactivation kinetic parameters for three Salmonella strains that were isolated from different foods. Although not significantly different, S. Agona seemed to be more heat tolerant among the strains analyzed, since it had the highest δ-values in at least 10 out of 24 temperature-a w combinations compared to the other serovars. This observation is in agreement with a previous study performed by Santillana Farakos et al. [29]. That study also reported a greater tolerance of S. Agona along with S. Tennessee during a two-day storage challenge at 70 • C using a cocktail of serovars as the inoculum. Even in this case, there was no significant difference found between the two strains. However, VanCauwenberge et al. [24] observed the largest D-value for S. Tennessee in 15% moisture flour at 49 • C when it was compared to nine other Salmonella serovars.
As indicated above, several researchers have reported that survival rates increase as a w values decrease, but the mechanisms involved in this response have yet to be fully elucidated. It can be hypothesized that possible mechanisms such as the increased stabilization of ribosomes, the influence of small amounts of osmoprotectants, the induction of viable but nonculturable states of bacterial cells, and the coagulation or oxidation of proteins in microbial cells may be determinants of survival [42]. Specifically, a global gene regulator such as RpoS has also been linked to increased survival rates at lower a w [43]. The role of noncoding DNA and RNA has also been proposed as a protective mechanism by a few authors [44,45].
Thermal inactivation kinetics of Salmonella in high-moisture foods have been extensively studied, but research intended to determine inactivation kinetics in extruded cereal foods is relatively limited. Those reports of Salmonella inactivation in low a w foods have investigated different matrices that included peanut butter, flour, pet food, cocoa, almonds and hazelnut shells. Our study is unique for using TOC because a commercial TOC product was previously involved in an outbreak. This study is also one of the few studies that have investigated thermal tolerance at one of the lowest a w values. Other studies have measured the thermal resistance of Salmonella serovars below 0.5 a w , but on different matrices such as peanut butter (approx. 0.45 a w ) [23,46,47]. The heat resistance of Salmonella has also been determined in dry carbohydrate-based foods in two separate studies that used corn or wheat flour [24,31].
Some reports have investigated the thermal inactivation of Salmonella at temperatures of 90 • C or higher. D-values at 90 • C from 9 to 13 min at 0.45 a w [23] and from 4 to 7 min at 0.2 a w were determined in peanut butter [46]. In TOC with sucrose, the D-values ranged from 15 to 24 min at 0.11 a w , and from 3.6 to 4.4 min at 0.33 a w at the same temperature. At 100 • C, D-values of 2.5 and 7 to 11 min were determined in cocoa bean shells and hazelnut shells, respectively, using S. Enteritidis, S. Montevideo, S. Napoli, S. Oranienburg, S. Poona, S. Senftenberg and S. Typhimurium [32]. At the same temperature, we measured D-values in plain TOC that ranged from 13 to 22 min at 0.11 a w .
Two studies that evaluated heat tolerance in flours utilized methods that were different than the approach used in this investigation. Both studies used dry air as a heat source while the inoculated flour was spread into thin layers on foil trays. Only one of those studies measured the correlation between Salmonella thermal tolerance and changes in water activity; the authors of that study could not find a clear trend [31]. In contrast, we found that δ-values increased as the water activity was lowered to 0.11 a w .
In our study, the sensitivity of Salmonella to temperature changes, defined by the Z-value for the first log reduction, did not change under the conditions tested. In highmoisture foods, such as ground beef and chicken, several studies have reported Z-values of 10 ± 4 • C [48,49]. Studies that are focused on estimating Z-values for low-water activity food matrices are relatively rare. However, two such studies reported large Z-values of 39 to 56 • C in peanut butter, and of 15 to 54 • C in corn flour [23,24]. In our study, the average Z-values for the three serovars was consistently between 8.9 and 13.4 • C for TOC with and without sucrose. These values agree with previously published research that reported a Z-value of 8.3 • C in almonds [25].
It has been hypothesized that the food matrix composition can affect the thermal resistance of Salmonella, and some studies have investigated the impact of salt on thermal tolerance [29]. However, studies on the impact of sugar are more limited, and are generally focused on the long-term survival of Salmonella during storage rather than on its thermal resistance [50,51]. The observation that sucrose enhanced heat resistance of the bacterial proteins in different solutions has led to the hypothesis that this carbohydrate could also stabilize proteins in the cell, especially as water activity is lowered [52][53][54]. In hyperosmotic environments, microorganisms can produce or uptake certain molecules, referred to as 'osmolytes,' that counteract the effects of high osmotic pressure [55][56][57]. While glycerol and other polyols have been shown to increase heat resistance, sucrose has been reported to be the most effective [53].
The metabolic and genetic components of Salmonella's ability to survive desiccation, in addition to its thermal tolerance abilities, have yet to be fully elucidated, but recent publications have identified that global regulators and specific components are involved. A study in chicken litter reported significant up-regulation of rpoS in Salmonella cells adapted to desiccation upon 3, 12, and 24 h [58]. All rpoS mutants exhibited a decreased tolerance to heat, compared to desiccated and non-desiccated wild types, suggesting that mutations in rpoS could lead to the loss of thermal tolerance in Salmonella. Maserati et al. showed that two genes involved in type III secretion systems and previously identified as virulence factors, sopD and sseD, were overexpressed in cells that were subjected to very low water activity [59]. Those genes were necessary for survival during desiccation, as the viability in low a w of knockout mutants was markedly reduced compared to wild-type strains. These recent advances suggest that the response of Salmonella to such adverse conditions may be quite complex.
This study measured the effects that sucrose has on the heat resistance of Salmonella in low-water activity environments. Despite the lack of a clear trend on the δ-value observed for the overall water activities with or without sucrose, at a water activity of 0.11 we measured consistently shorter inactivation times than what we observed in the controls. This observation suggests that the protective effect of sucrose might only be relevant at higher water activities. In fact, our results corroborate the trends observed in previous reports that observed that sucrose could have a protective effect on cells' viability for highwater activity. In TSB media with 35% (w/w) sucrose (a w = 0.95), Peña-Meléndez et al. [60] observed a protective effect on three Salmonella serovars under both adaptation and osmotic shock, rendering the highest D 55 • C -values when compared to the other humectants tested (glycerol and NaCl).
The findings of the present study support the validation and the development of more effective processing of dry foods, in order to improve commercial high-temperature processes and ultimately assure the food safety of low-moisture foods.
Supplementary Materials: The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/microorganisms10081570/s1, Table S1: Thermal inactivation rates of Salmonella enterica serovar Typhimurium in toasted oats cereal (TOC) affected by water activity (a w ) and temperature; Table S2: Thermal inactivation rates of Salmonella enterica serovar Tennessee in toasted oats cereal (TOC affected by water activity (a w ) and temperature; Table S3: Thermal inactivation rates of Salmonella enterica serovar Agona in toasted oats cereal (TOC) affected by water activity (a w ) and temperature; Table S4: Thermal inactivation rates of Salmonella enterica serovar Typhimurium in toasted oats cereal (TOC) affected by water activity (a w ) and temperature in the presence of 25% sucrose; Table S5: Thermal inactivation rates of Salmonella enterica serovar Tennessee in toasted oats cereal (TOC) affected by water activity (a w ) and temperature in the presence of 25% sucrose; Table S6 Thermal inactivation rates of Salmonella enterica serovar Agona in toasted oats cereal (TOC) affected by water activity (a w ) and temperature in the presence of 25% sucrose.
|
2022-08-09T15:25:54.886Z
|
2022-08-01T00:00:00.000
|
{
"year": 2022,
"sha1": "9db66ae47b8a8ddb8a67bd7f9fea19a17980fd41",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2607/10/8/1570/pdf?version=1659604878",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "69983838a2381c11b717662460cde54595fdebc2",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
43208698
|
pes2o/s2orc
|
v3-fos-license
|
Second-order optimality and duality in vector optimization over cones
In this paper, we introduce the notion of a second-order coneconvex function involving second-order directional derivative. Also, second-order cone-pseudoconvex, second-order cone-quasiconvex and other related functions are defined. Second-order optimality and Mond-Weir type duality results are derived for a vector optimization problem over cones using the introduced classes of functions.
Introduction
Generalized convexity notions have always been a significant aid in the progress of optimization theory.In 1981 Hanson [6] generalized convexity to invexity.Kaul and Kaur [11] named differentiable invex functions as η-convex and further generalized them to η-pseudoconvex and η-quasiconvex functions.Craven [5] extended the concept of convex functions to cone-convex functions.Recently Ivanov [7,8,9] illuminated the following definition of a second-order directional derivative obtained by solving the Taylor expansion formula of a function with respect to the second-order term.Definition 1.1 ([7, 8, 9]) Let S be a nonempty open subset of R n and f : S → R be a differentiable function.The second-order directional derivative f ′′ (x, d) of f at the point x ∈ S in the direction d ∈ R n is defined as an element of R given by If f ′′ (x, d) exists and is finite, then f is said to be second-order directionally differentiable at the point x ∈ S in the direction d ∈ R n and f ′′ (x, d) is called its second-order directional derivative.The function f is said to be second-order directionally differentiable on S if the derivative f ′′ (x, d) exists for each x ∈ S and every direction d ∈ R n .
SECOND-ORDER OPTIMALITY AND DUALITY IN VECTOR OPTIMIZATION OVER
Ivanov [8] used the above definition of second-order directional derivative to introduce a second-order invex function in the following manner: The function f is said to be second-order invex (or 2-invex) at x ∈ S if there exist vector-valued functions η, ξ : S × S → R n such that for all y ∈ S, the second-order directional derivative f ′′ (x, ξ(x, y)) exists and If the above inequality holds for all x, y ∈ S, then f is called second-order invex on S.
In this paper, we unify the notions of η-convex functions [11] and second-order invex functions [8] to define a new class of second-order cone-(η, ξ)-convex functions.We also define related classes of second-order conepseudoconvex and second-order cone-quasiconvex functions.
Second-order optimality conditions for vector optimization problems have been widely studied in the past, mainly due to their usefulness in sensitivity analysis of optimal solutions and convergence analysis of various algorithms.Several researchers, like Andreani [1], Ben-Tal [2], Burke [4], Kawasaki [12], have considered secondorder optimality conditions in terms of Hessians of the involved functions.However, various kinds of second-order directional derivatives have also been introduced to enable the development of second-order optimality conditions in the absence of second-order differentiability (see for example Ben-Tal, Zowe [3], Ivanov [7,8,9], Studniarski [15], Yang [16] and the references therein).
We employ the introduced classes of functions to obtain second-order necessary and sufficient Karush-Kuhn-Tucker (KKT) type conditions for a vector optimization problem over cones in terms of the second-order directional derivatives of the functions involved.Furthermore, a second-order Mond-Weir type dual is associated to the considered problem and weak and strong duality results are established.
Second-order cone-convexity and related concepts
In this section, we introduce the following new classes of second-order cone-(η, ξ)-convex functions.
Let S be a nonempty open subset of R n and f = (f 1 , . . ., f m ) t : S → R m be a differentiable vector-valued function.
Definition 2.1
The function f is said to be second-order K-(η, ξ)-convex at x ∈ S on S if there exist vector-valued functions η, ξ : S × S → R n such that for all x ∈ S, the second-order directional derivative f ′′ (x, ξ(x, x)) exists and (ii) If ξ ≡ 0 then Definition 2.1 becomes the definition of K-invexity given by Yen and Sach [17].
(iii) If f is a scalar valued function, K = R + and ξ ≡ 0 then Definition 2.1 becomes the definition of η-convexity given by Kaul and Kaur [11].
(iv) If ξ ≡ 0 and η(x, y) = x − y, x, y ∈ S, then the above definition reduces to the definition of cone-convex functions introduced in [5].
To justify the introduction of second-order K-(η, ξ)-convex functions, we give an example of a function which is second-order and η is an ) / ∈ K .
Definition 2.4
The function f is said to be second-order K-(η, ξ)-pseudoconvex at x ∈ S on S if there exist vector-valued functions η, ξ : S × S → R n such that for all x ∈ S, the second-order directional derivative f ′′ (x, ξ(x, x)) exists and Remark 2.5 (i) If f is twice differentiable and ξ = η, then the above definition becomes the definition of a Ksecond order pseudoinvex function with respect to η at x for p = η(x, x) ∈ R n introduced by Mishra and Lai [14].
(iii) If f is a scalar valued function, K = R + and ξ ≡ 0 then Definition 2.4 reduces to the definition of η-pseudo convexity given by Kaul and Kaur [11].It is clear that every second-order K-(η, ξ)-convex function is K-(η, ξ)-pseudoconvex.However the converse may not be true as shown by the following example.
SECOND-ORDER OPTIMALITY AND DUALITY IN VECTOR OPTIMIZATION OVER CONES
Remark 2.7 In the above example, f is a vector-valued function which is not twice differentiable.Hence, f is not K-second order pseudoinvex as defined by Mishra and Lai [14].Further, if ξ ≡ 0, then f is not K-pseudoinvex in the sense of Khurana [13].Therefore Definition 2.4 widens the field of applications of generalized convex functions.
Definition 2.8
The function f is said to be second-order K-(η, ξ)-strictly pseudoconvex at x ∈ S on S if there exist vector-valued functions η, ξ : S × S → R n such that for all x ∈ S, f ′′ (x, ξ(x, x)) exists and Definition 2.9 The function f is said to be second-order K-(η, ξ)-quasiconvex at x ∈ S on S if there exist vector-valued functions η, ξ : S × S → R n such that for all x ∈ S, f ′′ (x, ξ(x, x)) exists and If f is second-order K-(η, ξ)-convex (pseudoconvex, strictly pseudoconvex, quasiconvex) at every x ∈ S on S then f is said to be second-order K-(η, ξ)-convex (pseudoconvex, strictly pseudoconvex, quasiconvex) on S.
We shall study the following vector optimization problem (VOP) over cones: where f : S → R m and g : S → R p are differentiable functions and K ⊆ R m , Q ⊆ R p are closed convex cones with nonempty interior.Let S 0 = {x ∈ S : −g(x) ∈ Q} denote the set of feasible solutions to (VOP).
Definition 2.10
Let K ⊆ R m be a closed convex pointed cone with nonempty interior and let int K denote the interior of K.The positive dual cone K * and the strict positive dual cone K s * of K, are respectively defined as
Definition 2.11
A point x ∈ S 0 is said to be (i) a weak minimum of (VOP) if for every x ∈ S 0 ,
Second-order necessary conditions over cones
We now prove second-order necessary optimality conditions for the problem (VOP) in terms of second-order directional derivatives.
Theorem 3.1
Let x be a weak minimum of (VOP).If We assert that the system If possible, let there be a solution ( d1 , d2 ) ∈ R n × R n of (3).Then, and and −[∇g(x) d1 + lim and where lim Since S is a nonempty open set, we can find s 0 > 0 such that for all s ∈ (0, s 0 ), x + s d2 ∈ S, and which is a contradiction as x is a weak minimum of (VOP).Hence the system (3) has no solution Therefore by the Alternative Theorem given in Jeyakumar [10], there exist λ ∈ K * , µ ∈ Q * not both zero such that Taking Hence, from (4) we obtain Now we give an example to illustrate the result obtained in Theorem 3.1.Clearly x = (0, 0) is a weak minimum for the problem, ) ) , then for all We now introduce the following second-order Slater-type constraint qualification over cones.
Definition 3.3
The problem (VOP) is said to satisfy second-order Slater-type cone-constraint qualification at x if g is Q-(η, ξ)convex at x and there exists x ∈ S such that −g(x) ∈ int Q.
Theorem 3.4
Let x be a weak minimum of (VOP) at which second-order Slater-type cone-constraint qualification holds.If 1) and (2) hold.
Since second-order Slater-type cone-constraint qualification holds at x, g is Q-(η, ξ)-convex at x and there exists x ∈ S such that −g(x) ∈ int Q.We have to prove that λ ̸ = 0.
Let if possible λ = 0, then µ ̸ = 0 and from (1) we get Also since g is Q-(η, ξ)-convex at x there exist vector-valued functions η, ξ : S × S → R n such that for all x ∈ S, the second-order directional derivative g ′′ (x, ξ(x, x)) exists and Using (2) and ( 5) Theorem 3.5 Let x be a weak minimum of (VOP) at which second-order Slater-type cone-constraint qualification holds.If Proof Since all the conditions of Theorem 3.4 hold, there exist λ ∈ K * \ {0} and µ ∈ Q * such that (1) holds along with µ t g(x) = 0.
Taking d 2 = 0 in (1), we have As the above inequality holds for all d 1 ∈ R n , we get Again, taking d 1 = 0 in (1), we obtain Or, This completes the proof.
Second-order sufficient optimality conditions over cones
We now provide several second-order sufficient conditions for the existence of a weak minimum or minimum for (VOP).
a scalar valued function and K = R + , the above definition reduces to second-order invexity introduced by Ivanov[8].
|
2017-09-06T09:55:23.058Z
|
2016-06-01T00:00:00.000
|
{
"year": 2016,
"sha1": "1326d6b1a869dd61a481bef05df7a306656f297b",
"oa_license": "CCBY",
"oa_url": "http://www.iapress.org/index.php/soic/article/download/20160607/246",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "1326d6b1a869dd61a481bef05df7a306656f297b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
218995702
|
pes2o/s2orc
|
v3-fos-license
|
Enhancing supplier integration through e-design and e-negotiation in small and medium enterprises
The effects of globalisation, e-procurement and supply chain integration have become paramount to the success of supply chain management especially to small and medium enterprises (SMEs) in a developing country context such as South Africa. To initiate the growth of SMEs, infrastructures such as e-procurement systems, together with supply chain integration, have been increasingly embedded in most firms (Vaast & Walsham 2017:547). Therefore, the usage of various e-procurement systems and integration with suppliers timeously is considered as ‘an innovation strategy action’ (Mishra & Agarwal 2010:249) and a firm’s competitive advantage (Boehmke & Hazen 2017:163). Most studies on e-procurement and supplier integration have focused on large firms (Chang, Tsai & Hsu 2013:38).
Introduction
The effects of globalisation, e-procurement and supply chain integration have become paramount to the success of supply chain management especially to small and medium enterprises (SMEs) in a developing country context such as South Africa. To initiate the growth of SMEs, infrastructures such as e-procurement systems, together with supply chain integration, have been increasingly embedded in most firms (Vaast & Walsham 2017:547). Therefore, the usage of various e-procurement systems and integration with suppliers timeously is considered as 'an innovation strategy action' (Mishra & Agarwal 2010:249) and a firm's competitive advantage (Boehmke & Hazen 2017:163). Most studies on e-procurement and supplier integration have focused on large firms (Chang, Tsai & Hsu 2013:38).
Current knowledge involving SMEs in relation to e-procurement and supplier integration in developing countries such as South Africa is still limited, which creates a need for further research to occupy this research gap (Boehmke & Hazen 2017:163). Furthermore, the South African government is increasingly adopting and encouraging e-procurement in SMEs. This is in line with the objectives of the National Development Plan (NDP) vision 2030, which include innovation, employment creation and the adoption of technology as mechanisms for the economic development of the country (Zarenda 2013:5). The South African government is eager to develop and streamline SME operations because SMEs make an important contribution to the economy. The relationship between e-procurement and supplier integration in SMEs in South Africa has not been fully investigated (Zheng et al. 2016:290).
These challenges inhibit collaboration among supply chain partners and consequently affect SME performance. Lack of appropriate technology has been cited as impediment to SME collaboration, innovation and growth. Therefore, the most persistent challenge to greater supplier integration is lack of adequate information systems. Insufficient information system support is a barrier because collaboration is essentially information-based. Therefore, in the current climate of global supply chain competition, supplier integration is regarded as a prerequisite for winning performance (Njagi & Ogutu 2014:191).
The aim of this study was to investigate how SMEs can enhance their supplier integration through e-design and e-negotiation in the Gauteng Province of South Africa. The literature review of the research constructs (e-procurement, SMEs in South Africa and supplier integration) is discussed in the next section. Thereafter, the conceptual framework and hypothesis development follow. The research methodology, as well as results and discussion, is elaborated. Finally, conclusion, managerial implications, limitations of the study and directions for future research are explored in this article.
Theoretical framework
The theoretical rationale underpinning this study is the Configuration Theory (Miller 1986:233). According to Miller (1986) and Sinha et al. (2005:389), the Configuration Theory allows for detailed examination of the dimension of supply chain integration and information technology (IT). This theory is appropriate because it can handle complicated organisational phenomena from a holistic perspective.
The configuration approach involves dominant gestalts or configurations of observable characteristics or behaviours that may lead to an outcome (Ward, Bickford & Leong 1996:599). The Configuration Theory indicates the need to consider organisational arrangements, that is, configurations, to obtain high performance. Therefore, this study considers the combination of e-procurement systems as the configuration of organisational resources to obtain better supplier integration.
E-procurement
E-procurement is one of the developments in the contemporary supply chain management (Chirchir, Ngeno & Chepkwony 2015:26). E-procurement refers to 'an information technology (IT) based business model that facilitates the necessary processes conducted between business parties in a procurement transaction' (Smart 2010:423;Tai 2011:5398). Similarly, McCue and Roma (2012:58) define e-procurement as 'the use of information technology to facilitate business-tobusiness purchase transactions for materials and services'. It is clear from these two definitions that e-procurement is not merely a system for making purchases online but a link between customer and supplier.
According to McCue and Roma (2012:62), tools such as 'e-notice, e-auction, e-catalogue, e-dossier, e-submission and e-signatures' are components of e-procurement. In this study, e-design and e-negotiation are considered as the e-procurement systems. For this study, e-design refers to the 'setting of purchasing requirements on an electronic procurement system' (Chang et al. 2013:35). E-negotiation is defined as 'the process of conducting negotiations between business partners using electronic means' (Rinderle-Ma 2005:2). Thus, e-negotiation is used to make significant savings in the purchase of goods and services through the Internet (Scot & Morrison 2007:332). Therefore, e-procurement if maintained properly will allow the company to establish and maintain competitive advantages and reduce staff time and paperwork (Tai 2011:5397).
Small and medium enterprises in South Africa
The National Small Business Act No. 26 of South Africa 1996, as amended in 2003, defines an SME as: [A] separate and distinct entity including co-operative enterprises and non-governmental organisations managed by one owner or more, including its branches or subsidiaries if any, is predominantly carried out in any sector or subsector of the economy mentioned in the schedule of size standards and can be classified as an SME by satisfying the criteria mentioned in the schedule of size standards (Government Gazette 2003:8 [A] small enterprise in South Africa is one that employs 50 people or less and has a total turnover of up to R19m, with a total asset value of R3m. A medium enterprise employs from 50 up to 200 people and has a total turnover of R39m with a total asset value of R6m. (p. 8)
Supplier integration
Supplier integration refers to the process of interaction and collaboration between the firm and its suppliers to ensure effective flow of supplies (Flynn, Hou & Zhao 2010:58;Zhao et al. 2011:372). Zhao et al. (2008:371) state that many organisations across the globe are creating co-operative, mutually beneficial partnerships with supply chain partners because of increasing global competition (Zhao et al. 2008:371). These authors further state that 'companies need to implement supply chain integration to meet the new challenges of the global competitive environment'.
Small and medium enterprises constantly face the problem of on-time delivery (Zhao, Feng & Wang 2015:166). Through integration with suppliers, SMEs can share order and inventory information with suppliers. Furthermore, supplier integration, which includes proper communication, sharing information and working together with suppliers, can reduce upstream complexity (Zhao et al. 2015:167-168). The benefits of supplier integration are that it enhances responsiveness, flexibility and time-saving. Supplier integration also plays a role in reducing transaction costs through the reduction of uncertainties and reducing of production costs (Flynn et al. 2010:58). Therefore, supplier integration has a positive impact on operational performance (Yu et al. 2014:683). In supplier integration, opportunistic behaviours are greatly reduced under shared visions and cooperative goals (Prajogo, Oke & Olhanger 2015:102;Wong, Tjosvold & Yu 2005:782).
The conceptual framework and hypotheses development
The conceptual framework is provided in Figure 1. This highlights the proposed linkage between the constructs under investigation in this study.
E-design and supplier integration
E-design refers 'to the setting of purchasing requirements on an online procurement system' (Chang et al. 2013:35). E-design 'facilitates supplier involvement in the specification development process of a product. It also facilitates reduced time-to-market cycles by overcoming the silo effect of the traditionally sequential design activities' (Presutti 2003:220). This means that suppliers are involved in the design process. Thus, e-design is an important function in the e-procurement system as it enables collaboration of suppliers as well as enabling the purchasing process to be quick and efficient.
On the basis of the above, the first hypothesis is derived: H1: E-design positively influences supplier integration in the SME sector.
E-negotiation and supplier integration
E-negotiation refers to business partners negotiating over Internet or IT platforms. E-negotiation is a key tool in e-procurement because it enables the collaboration of various key stakeholders particularly the suppliers. Thus, for e-negotiation to be successful, supplier integration is key in the process.
Therefore, on the basis of the above, the second and last hypothesis is derived: H2: E-negotiation positively influences supplier integration in the SME sector.
Research methodology
In this study, a quantitative research methodology was adopted and considered more appropriate because addressing the research problem depended on the analysis of quantitative data collected on many survey questions around e-procurement and supplier integration in SMEs. Data were collected through a survey questionnaire. A total of 350 questionnaires were distributed to respondents, 294 were returned and 11 were discarded because of incomplete responses to different parts of the questionnaire. A total of 283 questionnaires were finally used in the study. Data were analysed with Statistical Package for the Social Sciences (SPSS) version 24 and Analysis of a Moment Structures (AMOS) software version 24 for the structural equation modelling (SEM).
Measuring instrument and operationalisation
The questionnaire used in this study consisted out of three sections. Section A consists of four items and sought general demographic information about the respondents and SME profile. Section B consisted out of eight items, which covered all the two e-procurement systems (e-design and e-negotiation), adapted from Chang et al. (2013:39) and Ombat (2015:718). Section C seeked respondents' views on supplier integration, using eight items adapted from Zhao et al. (2013:130). All measurement scales are measured using 5-point Likerttype scales, where 1 = strongly disagree and 5 = strongly agree. The Likert-type scale was adopted mainly because it was easy to analyse the quantitative information and draw conclusions.
Reliability tests
Reliability in this study was ascertained using Cronbach's alpha coefficient, the average variance extracted (AVE), item-to-total values and composite reliability (CR). Table 1 presents the results of the reliability tests.
In this study, the results of the CR range from 0.77 to 0.94 as shown in Table 1 and thus confirm the existence of internal reliability for all constructs of the study. The measurement items used in this study were reliable because all the Cronbach's alpha coefficients were above the recommended 0.7 threshold.
Validity tests
Validity is the extent to which the instrument that was selected reflected the reality of the constructs that were being measured (Newsome 2016:58 research study used three experts in supply chain management to judge the questions independently (Schindler 2010:65). Also, an extensive literature review was conducted to ensure that the instrument is related to previous studies. Thereafter, previous studies were consulted to construct the research instrument. To further ascertain validity in this study, a pilot study was conducted with a conveniently selected sample of 42 SME owners and managers in the Vaal Triangle region of Gauteng Province, South Africa. Input from the pilot sample was used to improve the questionnaire in terms of its wording and technical layout.
Ethical consideration
Participants were not forced to participate. Participants were also told that participation might be terminated at any given time with no adverse consequences should they wish not to continue with the study (completing the questionnaire). Information provided by participants or respondents was treated with utmost confidentiality and anonymity. The data were securely stored by the researcher and no one else had access to the data.
Results and discussion
The results of gender and experience of SME owners and managers are shown in Table 2 and Figure 2. This section covers the sample characteristics, testing for the unidimensionality of scales and hypothesis testing results. Table 2 presents a graphical representation of the gender distribution of the sample. Males constitute 54.0% (n = 153) and females constitute 46.0% (n = 130) of the sample. Nieman and Nieuwenhuizen (2009:143) found that there are few women owning or managing businesses because of start-up capital problems.
Sample characteristics
The study revealed that 2.8% (n = 8) of respondents had served their organisations for less than 1 year. Approximately 9.2% (n = 26) of respondents served their organisation between 1 and 5 years, while 22.3% (n = 63) served the organisation between 5 and 10 years. Approximately 28.3% (n = 80) of the sample had served their organisation between 10 and 15 years, while 37.5% (n = 106) of the respondents served their organisation for more than 15 years.
Regarding the race or ethnicity, the African race constituted 201 (72%), while coloureds were 18 (6%). The white people were 62, representing 21% of the sample, with other category with two respondents reflecting a 1%. The qualifications of SME owners and managers were as follows: matric/no qualifications were 13, degree/diploma were 259, while those who hold master's degree were 8 and finally with PhD/D-Tech were 3.
Testing for the unidimensionality of scales
The different scales used in the study were tested for unidimensionality through exploratory factor analysis. Prior to the factor analysis, the Bartlett's test of sphericity and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy were computed to establish whether the data were suitable for factor analysis (see Table 3).
According to Chinomona (2012:341), the sampling is adequate if the value of KMO test is greater than 0.5.
For this study, as indicated in Table 2, the KMO value is 0.753, exceeding the recommended minimum of 0.5 (Cerny & Kaiser 1977:43;Kaiser 1974:35). This shows that the feasibility of factor analysis is fulfilled. The factor extraction through principal component analysis for each construct is reported in Table 4, indicating that only one factor was extracted for each variable. The principal component analysis was used to: [C]ompress the maximum amount of information into first two columns of the transformed matrix known as the principal components by neglecting the other vectors that carries the negligible information or redundant data. (Pallant 2007:153;Pooe & Mahlangu 2017:112) Table 4 shows that the factor loadings for e-design were all were close to 0.7 except one statement which reads as follows: 'there is a design of the purchase requirement' and has a factor loading of 0.605. E-negotiation factor loadings were all above the 0.7 threshold. In supplier integration, only one statement was below 0.7.
The hypotheses testing stage and results
Structural equation modelling was used in this study to estimate the relationship between the constructs. Therefore, SEM seeks to understand the relationships between latent variables and the observed variables, which form the structural framework from which they are derived. Path analysis allows path coefficients (the relationship between variables) to be determined. In addition, path analysis requires recursivity (that the path direction is one way with no feedback loops) (Chinomona 2012:10 135). The advantage of path analysis is that the researcher can see which variables exert effects on others. As shown in Table 5, this study reports a chi-square/degree of freedom value of 1.46 as indicative of a good model fit. Table 5 further shows IFI, CFI and TLI values (0.94, 0.92 and 0.95, respectively) that are above the recommended threshold of 0.9 or above (Chinomona 2011(Chinomona :302, 2013. These results further confirm that the estimated model fits the sample data in this study well, which provides a good model fit. Table 4 also depicts a root mean square error of approximation (RMSEA) value of 0.04, which provides a very good model fit (Chinomona et al. 2010:47;Pallant 2007). Overall, the model fit indices provide a good fit.
E-design positively influences supplier integration in the small and medium enterprises sector (Hypothesis 1)
A linear relationship (positive and significant) was hypothesised between e-design and supplier integration.
This hypothesis was formulated from the objective that aimed to investigate the influence of e-design on supplier integration. Results are shown in Table 6.
As shown in Table 6, e-design has a positive and significant linear relationship with supplier integration. H1 is therefore supported. This study posited a positive influence of e-design on supplier integration and the results of this study confirmed it. A positive path coefficient ( β = 0.33; p < 0.05) validates the hypothesised positive influence that e-design has on supplier integration. These findings mean that SMEs that effectively implement e-design systems, integrating with their suppliers, increase their chances of improving performance and cutting supply chain costs.
These findings are consistent with those of Chang and Wong (2012:342), who posit that e-design is the infrastructure aspect
Measures Values
Contrast media-induced nephropathy (CMIN) 1038.61 Chi-square/df 1.46 The incremental fit index (IFI) 0.94 The Tucker-Lewis index (TLI) 0.92 The comparative-fit-index (CFI) 0.95 The root mean square error of approximation (RMSEA) 0.04 df, degree of freedom. There is a design of the purchase requirement 0.605 The design of the purchase requirement or the standardised purchasing norm between the organisation and the supplier will be communicated or negotiated via the internet 0.698 Our company designs the format of marketing demands using the information system 0.679
E-negotiation
Our company negotiates the general procedures of purchasing with the supplier through the internet 0.769 The use of the internet for negotiations results in significant savings for this company 0.866 The use of the internet for negotiations results in lower purchase costs 0.793
Supplier integration
There is extensive participation with our major supplier in the design stage 0.617 Our major suppliers share their production schedule with our company 0.733 Our major suppliers share their production capacity with our company 0.775 Our major suppliers share available inventory with our company 0.801 Our company shares production plans with its major suppliers 0.787 Our company shares demand forecasts with its major suppliers 0.777 Our company shares inventory levels with its major suppliers 0.784 Our company helps its major suppliers to improve their processes to better meet the needs of our company 0.747 that brings in higher levels of partnerships and improved supply chain performance. This notion is also supported by Shank and Brown (2007:190), who found that successful companies or firms using e-design systems effectively ultimately lead to greater supplier integration. Therefore, superior e-design systems are associated with greater supplier integration.
Thus, validation of a positive influence of e-design on supplier integration means that if SMEs effectively implement e-design systems they increase their chances of collaborating with their key supply chain members and this may result in minimisation of costs such as supply chain costs, thus consequently improving supply chain performance. These findings further suggest that supply chain member firms that invest in and use e-design tools for their buying and selling with each other can learn collectively and create a strong supplier integration.
E-negotiation positively influences supplier integration in the small and medium enterprises sector (Hypothesis 2)
A positive and significant influence of e-negotiation on supplier integration was posited. The SEM results that validate or invalidate this hypothesis are shown in Table 7.
As shown in Table 7, e-negotiation has a positive and significant relationship with supplier integration. H2 is therefore supported. This study posited a positive influence of e-negotiation on supplier integration. A positive path coefficient ( β = 0.175; p < 0.1) validates the hypothesised positive influence that e-negotiation has on supplier integration.
A positive path coefficient may be because suppliers collaborate more often when they do their contract agreements electronically. These contract agreements will, in turn, improve their relations in business, and thus contribute to higher levels of engagement and consequently improve supply chain performance. As posited in H2, the findings of this study suggest that in the firms surveyed there are some contract negotiations taking place with suppliers through technology.
Conclusion
To achieve the first hypothesis, the SEM test was conducted to examine the effect of e-design on supplier integration. The test revealed a statistically positive and significant relationship. These findings mean that firms that effectively implement e-design systems, integrating with their suppliers, increase their chances of improving performance as well as cutting supply chain costs.
To achieve the second hypothesis, the SEM test was conducted to examine the effect of e-negotiation on supplier integration. The test revealed a statistically positive and significant relationship. Thus, the findings of this study suggest that in the firms surveyed there are some contract negotiations taking place with suppliers through technology. Therefore, this study concludes that firms can use e-design and e-negotiation systems to enhance supplier integration.
Managerial implications
The results of this study showed that e-design has a positive influence on supplier integration. This serves as an implication that SME owners and managers should begin to work towards developing a deeper understanding of e-design tools and systems -so that they can develop strategies that will contribute to the improvement of supplier integration, which will, in turn, positively influence supply chain performance.
Therefore, it means that the SME owners and managers should invest more in e-design systems for their buying and selling as this will create further collaborations.
The results also showed that e-negotiation has a positive influence on supplier integration. Therefore, it is recommended that SME owners and managers recognise e-negotiation as an important e-procurement element to foster ongoing relationships with supply chain member firms. SME owners and managers must also enrol for e-procurement training workshops or courses. This training should emphasise the importance of e-procurement functions such as e-design and e-negotiation as the key drivers of supplier integration.
Contribution of the study
Firstly, a contribution is made to the existing literature on SMEs in South Africa, which was noted to be scant.
The study developed an integrative model, which might be used by SME practitioners in South Africa, thus contributing to the existing literature. Because the model paid attention to e-procurement (e-design and e-negotiation) and supplier integration, possible strategies such as investing in supplier collaborations through e-procurement could be derived from the model. Thus, SME owners and managers will be in a better position to increase the levels of supply chain performance within their firms. Overall, the findings reveal that by investing in e-procurement functions such as e-design and e-negotiation, SMEs can improve their own performance through good supplier collaborations. The findings of this study are also important to other SMEs in South Africa. They may use these findings as a benchmark for the best practices in supply chain management (SCM) and e-procurement practices.
Limitations and directions for future research
The use of only SME owners and managers as chief informants in the survey could be a limitation. Data were collected only from SME owners and managers; future research could broaden the scope to include customers, manufacturers (suppliers) and low-level subordinates.
E-procurement is a multidimensional concept and the study investigated only two important dimensions, namely, e-negotiation and e-design. There are many other e-procurement functions, such as e-sourcing, e-evaluation, e-informing, e-payment, e-catalogue, e-tendering, e-tailing, e-purchasing and e-transportation, among others. Future research should investigate the relationship between other e-procurement functions, supplier integration and supply chain performance.
Because this study adopted only the quantitative approach, another study involving a qualitative approach or a mixedmethod approach is recommended as this will provide an indepth analysis.
|
2020-04-30T09:07:47.367Z
|
2020-04-23T00:00:00.000
|
{
"year": 2020,
"sha1": "06b15e538f1206242c8be9d308a6792d8d67c96a",
"oa_license": "CCBY",
"oa_url": "https://sajesbm.co.za/index.php/sajesbm/article/download/300/402",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bef7e2d81e0ddb524c96c6609dd84bd1c3ce6a70",
"s2fieldsofstudy": [
"Business",
"Computer Science"
],
"extfieldsofstudy": [
"Business"
]
}
|
248027592
|
pes2o/s2orc
|
v3-fos-license
|
Voronoi-Based Discrete Element Analyses to Assess the Influence of the Grain Size and Its Uniformity on the Apparent Fracture Toughness of Notched Rock Specimens
The fracture toughness reflects the rock resistance to crack propagation, and therefore represents an important parameter for rock fracture assessments. From a strict point of view, the real fracture toughness (KIC\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\it {K}_{\mathrm{\it IC}}$$\end{document}) corresponds to a cracked situation in which the notch radius is theoretically equal to zero. However, most of the defects in rocks have a finite radius and, therefore, should be studied as notch-type defects. Here, the notch effect is numerically studied together with the influence of the grain size and the sorting coefficient (grain size uniformity) on the apparent fracture toughness (KIN\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\it {K}_{\mathrm{\it IN}}$$\end{document}). To this end, several four-point bending tests with different U-shaped notch radii, mean grain sizes and degrees of uniformity in grain size and shape have been simulated using the Discrete Element Method. In order to represent the grains of the rocks, the Voronoi tessellation is used to create randomly sized and distributed polygonal blocks. These Voronoi polygons have been defined, on the one hand, by an average edge length of 1, 2 and 3 mm, and, on the other hand, by a different number of iterations (n\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$n$$\end{document}) in the relaxation process during the generation of the polygons, which defines the grain size uniformity. The numerical analyses performed and the interpretation of the results show a clear notch effect in all the studied cases, as the apparent fracture toughness (KIN\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\it {K}_{\mathrm{\it IN}}$$\end{document}) increases with notch radius. Finally, the obtained stress fields at the notch tip have been compared to those obtained from the traditional finite element method. Four-point bending tests with U-shaped notches simulated using the Discrete Element Method. Different notch radii, mean grain sizes and degrees of uniformity are studied. Results show a clear notch effect, increase of apparent fracture toughness with notch radius. Interpretation of the results using the Theory of Critical Distances. A linear relation between the critical distance of the rock and the grain size is observed. The critical distance slightly increases when less uniform grains are studied. Four-point bending tests with U-shaped notches simulated using the Discrete Element Method. Different notch radii, mean grain sizes and degrees of uniformity are studied. Results show a clear notch effect, increase of apparent fracture toughness with notch radius. Interpretation of the results using the Theory of Critical Distances. A linear relation between the critical distance of the rock and the grain size is observed. The critical distance slightly increases when less uniform grains are studied.
Introduction
A comprehensive understanding of rock fracture processes is a major issue of interest in many engineering fields such as civil engineering (e.g., slopes, foundations), underground engineering (e.g., tunneling, mining) or energy engineering (e.g., gas-oil extractions, coal gasification, geothermal energy). The fracture initiation is affected to a great extent by the boundary conditions that are not linked to the rock 1 3 mass or rock matrix itself (e.g., external loads, presence of water). However, other microstructural and macrostructural aspects such as rock composition and mineralogy (e.g., grain size, grain bonding) or the presence of different scale defects also play a key role in the fracture processes (e.g., Hoek 1968;Palmström 1995;Hudson and Harrison 1997;Jaeger et al. 2007).
Defects are generally classified as crack-type defects, those with a theoretical vanishing (or at least negligible) root radius, or as notch-type defects, those with a finite and non-negligible root radius. Rock fracture mechanics (e.g., Whittaker et al. 1992;Aliabadi 1999;Jaeger et al. 2007) traditionally addresses different practical applications such as rock cutting, hydraulic fracturing or underground excavations from a conservative perspective, assuming that the analysed stress risers behave as crack-type defects, based on the use of the conventional Stress Intensity Factor (SIF). However, that approach might be too conservative in many practical situations, since notch-type defects generate less demanding stress fields than crack-like defects and, therefore, develop a higher load-bearing capacity (e.g. , Neuber 1958;Peterson 1959;Pluvinage 1998;Taylor 2007). This is what is generally called the notch effect, which is numerically evaluated in this work. This paper is based on two previous works of the authors (Justo et al. 2017(Justo et al. , 2020b. In the first one, the notch effect of four different rocks was experimentally studied based on the application of the Theory of Critical Distances (TCD) through a parameter called the critical distance ( L ), which is assumed by the TCD as an intrinsic material parameter with length units. The physical meaning of L is still a fundamental challenge among researchers (Taylor 2017), but it is independent of the geometrical features of the stress concentrator and is related to the size of the dominant source of microstructural heterogeneity in the material (e.g., Askes and Susmel 2015). A common source of microstructural heterogeneity in the case of rocks is the grain size (e.g., Taylor 2017). In fact, Justo et al. (2017) confirmed that there is a linear relation between the critical distance of rocks and their mean grain size. Aiming to go deeper into those results, the second work was focused on numerically simulating the influence of grain size on the fracture behaviour of rocks, namely on the main mechanical properties (tensile strength, fracture toughness, Young's modulus and Poisson's ratio) and on the notch effect (through the analysis of the apparent fracture toughness). To do so, several discrete numerical analyses were performed to simulate unconfined compression tests, Brazilian tests, and four-point bending tests (as those carried out in the laboratory by Justo et al. 2017) with variable mean grain sizes. Particles were defined by means of Voronoi tessellations with relatively uniform shape in all the cases (with a sorting coefficient very close to unity), representing polygonal grains and ideal non-porous, crystalline and isotropic rocks. The modelled materials were ideal zero porosity crystalline rocks, whose parameters have the same order of magnitude as those of the Macael marble tested in Justo et al. (2017). To make the computational cost feasible, the modelled grains (1-3 mm) were much larger than those of the Macael marble (average grain size of 335 µm), and consequently, the intention was not to reproduce the same behaviour observed in the laboratory but to represent models of rock-like materials within a realistic order of magnitude.
Here, it is intended to extend those works and consider not only the influence of the mean grain size but also the grain size distribution (i.e., sorting coefficient) as a variable. Following the methodology used by Justo et al. (2020b), additional discrete numerical models have been constructed in this work to simulate the same type of Single Edge Notched Bend (SENB) specimens with variable notch radii and subjected to four-point bending conditions (i.e., mode I loading conditions), keeping the same mean grain sizes constant but varying the degree of uniformity of the grains (i.e., increasing the sorting coefficient). These models represent ideal non-porous, crystalline and isotropic rocks with different grain sizes and sorting coefficients, and allow to obtain, for each case, the stress state in the surrounding of the notches at the onset of crack initiation and propagation. The stresses derived from the Discrete Element Method (DEM) are interpreted according to the TCD, which is used to analyse the variation of the apparent fracture toughness with the notch radius. Finally, those DEM stress fields used for the analyses of the notch effect are compared to those obtained by the finite element method (FEM), to compare discrete and continuum approaches and to validate the considered methodology.
A limitation of this study is that only intergranular fracture is considered. This may be valid or representative for some specific rocks or materials, where intergranular fracture is the only fracture micromechanism (e.g., Ortiz and Suresh 1993). However, in many other rocks, such as in the marble used here as a reference, intergranular fracture coexists with transgranular fracture. If transgranular fracture was simulated (e.g., Hofmann et al. 2015;Peng and Wong 2017), the fracture toughness would be reduced, because in some cases, the fracture would occur earlier through the grains. Besides, the notch effect would also be reduced, because the presence of a grain at the notch tip would not restrict fracture as much as if the grains were unbreakable, as occurs in this paper. Finally, Li et al. (2019Li et al. ( , 2021) present a detailed discussion of the methods to consider transgranular fractures and their influence. For example, finite-discrete element methods (FDEM) allow for transgranular fracturing and also for advanced contact algorithms (e.g., Zhao et al. 2018;Wang et al. 2021a). Likewise, in DEM, multi-scale tessellations can be applied to simulate transgranular fracturing (e.g., Wang and Cai 2018). A novel technique, such as the grain-based finite-discrete element methods (GB-FDEM) used by Abdelaziz et al. (2018) or Li et al. (2020), would be suitable to consider transgranular fractures in this problem and extend the present work.
Background
The notch effect in rocks, especially in crystalline rocks, depends on the grain size as a consequence of its relationship with L . For example, to evaluate when the notch root radius is negligible, this is compared with L , since it has been observed for different materials that the notch effect is generally negligible when the notch radius is smaller than L . Using laboratory tests to study the influence of the grain size on the notch effect is extremely difficult, because rocks have many different features, such as inhomogeneities, complex grain size distributions, grain aspect ratio, porosity, etc. Numerical analyses, and in particular the DEM, are available suitable tools at present for an in-depth analysis of this problem.
Block-based DEMs allow the rocks to be modelled as an assemblage of blocks (grains) with different boundary conditions, which makes them an appropriate tool for the problem addressed in this work. In the literature, four methods are typically used to simulate grain structure, namely disk-shaped grains (e.g., Potyondy and Cundall 2004), square-shaped grains (e.g., Li and Konietzky 2015), triangular grains (e.g., Kazerani 2013) and polygonal grains (e.g., Kazerani and Zhao 2010). The polygonal grain structure appears to be a more realistic representation of the microstructure of rock-type materials (especially crystalline rocks). The conventional polygonal structure is usually generated using the Voronoi tessellation technique (e.g., Gao et al. 2016). Here, the grains are generated using the Voronoi tessellation, which allows a random distribution of the grains to be defined with a controlled mean size. This technique provides blocks with similar shapes to the grains of zero porosity crystalline rocks, similar to those observed by the authors for different marbles (e.g., Justo et al. 2017). Voronoi-based discrete models have been successfully used to simulate rock-like materials under different considerations, for instance, for compression, Brazilian and fracture toughness tests (Chen et al. 2015). The influence of grain size, pore size and mineral composition have also been studied using this technique (e.g., Li et al. 2017a;Chung et al. 2019;Liang et al. 2021). Voronoi tessellations were first extended to 3D models by Ghazvinian et al. (2014), who simulated crack damage development in brittle rocks. Thus, this technique has proven to be appropriate for the problem under study. Finally, Lisjak and Grasselli (2014) and Zhang and Wong (2018) present reviews of these numerical techniques and the introductions of Li et al. (2019) and Wang et al. (2021b) summarise some of the latest numerical advances.
Different criteria can be found in the literature to predict fracture loads of notched components subjected to mode I loading (e.g., Kipp and Sih 1975;Carpinteri 1987;Seweryn 1994;Gómez et al. 2000;Lazzarin and Zambardi 2001;Yosibash et al. 2004;Taylor 2004) or even mixed mode loading conditions (e.g., Papadopoulos and Paniridis 1988;Seweryn and Mróz 1995;Yosibash et al. 2006;Berto et al. 2007). Among them, the most widely used criteria at the moment are probably the following ones: • The Cohesive Zone Model (CZM), first proposed by Barenblatt (1959) and Dugdale (1960) to describe the stress fields and fracture processes near the defect tip. • Finite Fracture Mechanics (FFM), based on the assumption that the crack grows by finite steps determined by a condition of consistency of both energy and stress requirements (Carpinteri et al. 2008). • The Strain Energy Density (SED) criterion, an energybased approach that combines the elementary volume proposed by Neuber (1958) and the local Mode I concept of Erdogan and Sih (1963). • The Theory of Critical Distances (TCD), first proposed by Neuber (1958) and Peterson (1959), although it has not been until the last decades with the development of finite element analyses that this methodology has been scientifically applied and further developed (e.g., Taylor and Wang 2000;Taylor 2001Taylor , 2007Susmel and Taylor 2003;Cicero et al. 2012).
One of the greatest advantages of the TCD consists of the possibility of obtaining (semi-) analytical results to correctly perform static assessments without any loss of accuracy (Cornetti et al. 2016), and without the need for significant computational efforts as commonly required by the SED criterion or by the CZM, for example. However, despite its simplicity and potential for the analysis of fracture processes as demonstrated for a wide range of materials (e.g., metals, ceramics, polymers and composites), scarce work can be found on the application of the TCD in the particular field of rock mechanics. Some examples of the application of the TCD on rock mechanics are, for instance, the works of Lajtai (1972), Ito and Hayasi (1991), Ito (2008) or Schwartzkopff et al. (2017). The authors have recently applied the TCD for the fracture assessment of several rocks under different temperature and loading conditions (Justo et al. 2017(Justo et al. , 2020a(Justo et al. , 2021, and given the successful results, the evaluation of the notch effect in this work is also performed using this local failure criterion. All the failure criteria indicated above, including the TCD, use the fracture toughness ( K IC ) for the fracture assessment of structural components. Indeed, this parameter represents the residual strength of a cracked component to crack propagation. By definition, the fracture toughness is an intrinsic property of the material. It addresses crack problems where the notch radius can be assumed to be equal to zero and, therefore, where no notch effect is deployed. However, when notch-type defects are analysed, an apparent fracture toughness ( K IN ) should be considered instead of the strictly real value of K IC , as demonstrated, for example, by the authors in previous works (Cicero et al. 2014;Justo et al. 2017). With this, the interpretation of the notch effect is performed in this work by analysing the variation of the apparent fracture toughness with the notch radius.
For the correct application of the TCD, the stresses around the notch tip must be assessed. In the particular cases of mode I loading conditions, as those considered in this work, the stresses are evaluated along the bisector plane of the notch tip and, to do so, numerical methods provide a suitable tool. The different geological processes that rocks have undergone over millennia makes them highly heterogeneous materials. However, despite this generally accepted condition, it is a common practice to consider the rock as a continuum in order to simplify the rock mechanics analyses. In fact, applications of the TCD found in the literature are generally based on continuum stress assessments regardless of the analysed material, even for rock-type materials. This continuum approach might be a suitable option when global responses are of interest (e.g., Eberhardt et al. 2004), but it is less appropriate for applications where detailed information is required for more accurate assessments. For example, Gui et al. (2016), investigated the grain size effect simulating Brazilian disks and unconfined compression tests using distinct element analyses, and they reported that larger particle sizes produce a higher stiffness and strength of the intact rock. Li et al. (2017b) also used distinct element analyses to simulate crack initiation and propagation of a granular rock. Liu et al. (2018) numerically studied the so-called Fracture Process Zone (FPZ) by means of the DEM, addressing size effect and particle size. Likewise, Wang et al. (2019) studied the influence of mineral heterogeneity on the thermomechanical behaviour of rocks also using the DEM. Considering all of the above, it is clear that detailed microstructural analyses require discrete numerical approaches.
Numerical Analyses
Two types of numerical analyses have been considered in this work. The first one uses the DEM to simulate the rock specimens as an assemblage of grains. Idealized crystalline and non-porous rocks are modelled, in which grain size and uniformity are the analysed variables. This microstructure could correspond to a marble, such as the Macael marble analysed by Justo et al. (2017). In fact, this work takes as a reference the experimental results obtained in Justo et al. (2017), for the definition of the parameters of the models. However, it is not the purpose of this research to simulate the exact behaviour of the previously analysed Macael marble (which has a relatively smaller grain size than the studied models), but to model ideal rock-like materials with comparable and realistic properties.
The proposed ideal rock simulations allow the effect of the grain size and the degree of uniformity to be studied in isolation, without the influence of other possible variables. Thus, the interpretation of the results becomes straightforward and generalizable conclusions can be reported. The second approach is based on the use of the FEM and aims to compare the stress fields around the notches and, in particular, at the bisector plane, where stresses are assessed according to the TCD.
Discontinuum Approach
In this work, the Universal Distinct Element Code, UDEC v6.00 (Itasca 2010), is used to numerically study the influence of the grain size and its uniformity on the apparent fracture toughness of notched rock specimens. This code refers to a particular DEM scheme that uses deformable contacts and an explicit, time-domain solution. It consists of a blockbased method that models the rock masses as an assembly of blocks with interfaces (or contacts), allowing the simulations of rock fracturing. The rock behaviour is described by the interaction between these blocks. Thus, the contact laws govern the macroscopic response of the material according to normal and tangential behaviour at the interfaces. The DEM allows finite displacements and rotations of discrete bodies (including complete detachment), and considers the interaction of the fractured rock fragments.
To represent the grains of the rock, the Voronoi tessellation is used to generate randomly sized polygonal blocks. These polygons are defined by an average edge length ( l ), which has been varied in this work to consider the effect of the grain size. Likewise, the blocks (grains) are discretised into constant-strain finite-difference triangular zones defined by a maximum edge length ( e ). These zones make the blocks deformable. Here, e = l has been considered to keep the proportion of zones within the blocks roughly constant and, on average, each grain is approximately subdivided into seven zones in all the cases. The authors have not performed mesh sensitivity analyses of the influence of the parameter e , information about the influence of the l∕e ratio may be found in Fabjan et al. (2015). At the same time, the size and shape uniformity of the Voronoi polygons is defined by the number of iterations ( n ) in the relaxation process during the generation of the mesh (Itasca 2010), in such a way that the increasing number of iterations leads to Voronoi polygons of more uniform size and shape. n has been set to 30, to provide a relatively uniform distribution of the grain size, and also to 1 to provide less uniform grains, as observed in Fig. 1. This figure represents the considered Voronoi tessellations with average edge lengths ( l ) of 1, 2 and 3 mm, as well as with different degrees of uniformity by considering n = 30 and n = 1.
For a detailed description of the modelled grain conditions, Fig. 2 provides the size distribution curves for all the considered combinations of l and n , in which the grain diameter corresponds to the diameter of ideal circular grains with the same area as the actual modelled grains. Both frequency and cumulative frequency are represented in percentage.
It is observed that those grains modelled with n = 30 present a very narrow variation of the grain size, i.e. nearly uniform distribution. In contrast, those curves in Fig. 2 corresponding to n = 1 keep the same mean grain size as in the models with n = 30 but present a larger variation of the grain size. With this, Table 1 summarises some statistical values of the modelled grains, namely the mean grain diameters (corresponding to the diameters of ideal circular grains with an equivalent area), their standard deviations, the first quartile (Q1), the third quartile (Q3) and the sorting coefficients defined as the ratio between the first and third quartiles (Q1/Q3). This latter parameter indicates that the size distribution is relatively uniform for n = 30 as the coefficient is close to 1. In contrast, as the sorting coefficient increases for n = 1, the size distribution of the grains is less uniform.
Regarding the definition of the constitutive models, the grain behaviour has been defined in this work by a linearly elastic isotropic model, using the parameters indicated in Table 2.
On the other hand, the grain boundaries or contacts have been defined by the Coulomb slip model with residual strength. This model provides a linear representation of joint stiffness and yield limit, and is based upon elastic stiffness, frictional, cohesive and tensile strength properties, and dilation characteristics common to rock joints (Itasca 2010). This residual strength simulates the displacement-weakening of the joint by loss of frictional, cohesive and/or tensile strength at the onset of shear or tensile failure. That is, when a joint is fractured, the joint tensile strength, the joint friction angle and the joint cohesion are set to residual values. Table 3 gathers the used parameters (Justo et al. 2020b) for the Coulomb slip constitutive model and the residual values, as well as the joint normal and shear stiffnesses, which are zone size dependent (Itasca 2010). It should be highlighted that the ratio between the cohesion and the tensile strength values indicated in Table 3 looks quite unusual and would be wrong considering classical constitutive laws in continuum mechanics, where cohesion comprises both adhesion between particles and interlocking. However, since polyhedral Voronoi elements and only intergranular fracture are being considered, the interlocking is very strong. Thus, the contact cohesion has been reduced to get realistic strength values. The tensile strength test models reported in Justo et al. (2020b) show only tensile failure along the grain boundaries, leading to a single vertical macro-fracture. In contrast, the uniaxial compression test models showed tensile fracturing, but also some shear fracturing in the Fig. 1 Representation of the Voronoi tessellations for an average edge length ( l ) equal to 1, 2 and 3 mm and for n equal to 30 and 1 boundaries, which indicates that the parameters used lead to realistic failure modes.
The definition of these parameters is clarified by the authors in a previous work (Justo et al. 2020b). They correspond to an ideal zero porosity crystalline rock that loosely resembles a Macael marble tested by the authors (Justo et al. 2017). Since the modelled grains (1-3 mm) are much larger than those of the Macael marble (average grain size of 335 µm), a detailed calibration was not considered necessary and only a reasonable macroscopic response of the same order of magnitude as that of the marble was searched. Table 4 shows, for comparison purposes, the uniaxial compression strength ( c ) and tensile strength ( u ) values of the Macael marble together with those obtained from numerical models. In particular, the results of the uniaxial compression test, Brazilian test and direct tensile test models with an average edge length ( l ) of 1, 2 and 3 mm (and n = 30) are included in this table. The emergent values of c are slightly lower than those of the Macael marble, while the emergent values u of the Brazilian test models are slightly higher than those of the laboratory tests. No direct tensile tests were performed in the laboratory for the Macael marble, so the emergent values of u obtained from the direct tensile test models cannot be directly compared. However, the tensile strength of the Macael marble was also characterised by the authors (Justo 2020) by means of four-point bending and three point bending tests, obtaining u values of 17.53 MPa and 13.94 MPa, respectively. Thus, although the tensile strength values derived from the direct tensile test models are relatively high (possibly because of the selected contact tensile strength and cohesion values indicated in Table 3), they are still within a realistic order of magnitude.
Based on the aforementioned meshing criterion and constitutive models, several four-point bending tests have been numerically simulated. These models consist of 180 × 30 mm size specimens under plane strain conditions, with variable U-shaped notch radii ( ) of 1, 2, 3, 4, 7, 10 and 15 mm to assess the notch effect, all of them having a notch length equal to half of the height (i.e., 15 mm). The geometry of the models is depicted in Fig. 3. As observed in the figure, the Voronoi tessellation has only been generated within a specified range around the notch, where the fracture is assumed to start. The rest of the model is analysed by means of the Finite Difference Method (FDM) using the same material constitutive model for the whole geometry. For the sake of simplicity, the same values of the bulk ( K ) and shear ( G ) moduli have been used in the continuum domain as for the grain-based domain. From a strict point of view, the elastic parameters ( K and G ) of the continuum domain should be the emergent properties of the Grain-Based Models (GBM). Since these are not initially known, the same values as those of the grain-based domain were used. The models were not recalculated with the emergent values of K and G, because the differences were not large and it was assumed that, for isostatic problems as those studied in this work, their influence is small. Besides, different sizes have been checked for the DEM region, providing similar stress fields around the notch tip. For this reason, the Voronoi tessellation has only been applied within a 30 × 30 mm size region as shown in Fig. 3. The objective of simplifying the construction of the models by only introducing the Voronoi tessellation at the center of the specimen models was to reduce the computational cost. In this manner, the models are computationally efficient without a significant influence on the results.
The random arrangement of the Voronoi polygons (grains) produces a scatter of the results, as in reality. For this reason, a repetitiveness of six models has been considered for each mean grain size, degree of uniformity and notch radius combination, only varying the randomly generated Voronoi blocks. The simulations have been executed under displacement control, specifying a constant vertical velocity at the upper loading points (Fig. 3) sufficiently small to minimise shocks to the system, and zero vertical velocity conditions at the supports. Thus, quasi-static calculations have been performed under mode I loading. Considering these boundary conditions, the failure load F (with force/depth length units) has been calculated for each of the individual numerical models (see the scheme in Fig. 4). Considering the analysed specimens as beams, the bending moment ( M ) between the loading points is constant in a four-point bending configuration as depicted in Fig. 4: where s is the span between the outer supports. In parallel, the horizontal stress law along the bisector of the notch can be equated to a pair of forces ( f ) with a lever arm ( z ), or in other words, to a bending moment ( M ). This bending moment is related to that in Eq. (1) as follows: Thus, the failure load ( F ) is derived from the bending moment generated at the bisector of the notch tip just in the calculation step prior to the appearance of the first crack (Justo et al. 2020b): The obtained F values are collected in Table 5 for the different notch radii ( ), grain sizes ( l ) and degrees of uniformity ( n).
These models assume that the cracks will propagate only along the grain boundaries. Thus, only intergranular failure is being considered. As a consequence, the minimum radius of the aforementioned notches has been limited to the grain size in each case ( ≥ l ). This restriction is intended to avoid the possibility of a notch not having at least one grain boundary at the tip, which could cause the breakdown of the calculation model. Figure 5 shows, as an example, some representative numerical models for each grain size after the appearance of the cracks at the notch tip, once the failure load has been exceeded.
Continuum Approach
The influence of the grain size cannot be directly assessed using FEM analyses, since the materials are modelled as a continuum instead of as an assemblage of blocks. As a consequence, the failure criterion established for the discrete numerical models (crack initiation when shear or tensile strength of the grain contacts is reached) cannot be imposed in a continuum approach. However, in this work, the stress fields of both methods are compared, aiming to analyse the extent of the differences between the two types of analyses.
In this case, a finite element code called PLAXIS 2D (2017), has been used to model the test specimens. The simulated rock samples correspond to the same geometry as that shown in Fig. 3, including the same notch radii. The finite element mesh has been refined in the region surrounding the notches as depicted in Fig. 6, with a view to avoiding any possible influence of the mesh. For this reason, no repetitiveness of the models has been considered in this case, as the influence of the mesh is considered to be negligible.
On the other hand, a linear elastic constitutive model has been used to simulate the rock and, therefore, only two parameters are required: the Young's modulus ( E ) and the Poisson's ratio ( v ). These deformational parameters correspond to the macro-scale behaviour of the rocks. Thus, they cannot be derived from the parameters in Table 2 that define the behaviour of the grains. The parameters used in the FEM models have been obtained from the unconfined compression test simulated by the authors in a previous work (Justo et al. 2020b) by means of DEM, considering the same idealised non-porous crystalline rock material with different grain sizes. These parameters are collected in Table 6 and correspond to the emergent elastic properties for each of the analysed grain sizes.
With regards to the boundary conditions of the FEM models: all the contours have been set free except for the supporting points and loading points. In this case, the simulations are not performed under displacement control. Instead, to obtain comparable stress fields to those of the DEM approach, the failure loads derived from the DEM models (Table 5) have been directly introduced in the
Analytical Interpretation
This research studies the fracture of U-notched rock specimens. To this end, the fracture analysis is equated to a situation in a cracked component where the apparent fracture Srawley and Gross (1976), for Single Edge Notched Bend (SENB) specimens, as those modelled in this work and shown in Fig. 3: where F is the failure load of each of the simulated models (Table 5), h is the specimen height ( h = 30 mm) and Y is a compliance non-dimensional factor that only depends on the geometry of the specimen and is given by the following expression: Fig. 5 Examples of formed cracks in DEM models for an average edge length ( l ) equal to 1, 2 and 3 mm and for n equal to 30 and 1 Fig. 6 Representation of the simulated four-point bending test finite numerical models s o and s i represent the spans between the outer supporting rollers (150 mm) and the inner loading points (50 mm), respectively, and 0 is the relative crack length defined as the ratio between the initial notch length (15 mm) and the total height (30 mm) of the specimen ( 0 = 0.5 ). With all this, given the four-point bending test configuration and geometry depicted in Fig. 3, the Y factor is equal to 10.16. The analytical interpretation of these numerically obtained results is performed using the TCD, and more specifically the Line Method (LM). A detailed description of this methodology is presented by Taylor (2007). The LM is a local failure criterion based on the stress law over a certain distance ( d ) from the notch tip. It states that failure occurs when the average stress over d is equal to the inherent strength ( 0 ) of the material, as represented in Fig. 7.
The distance d is related to a material characteristic parameter called the critical distance ( L ) which, in the case of rocks, is in the order of a few millimeters (e.g., Cicero et al. 2014). It can easily be demonstrated that d = 2L (e.g., Taylor 2007). Thus, the failure criterion can be written as follows when the LM is considered: Based on this expression and considering the stress distribution function along the bisector of the notch tip proposed in the past by Creager and Paris (1967), the LM of the TCD provides the following analytical solution for the assessment of the apparent fracture toughness: This expression is used to assess the notch effect (i.e. the variation of K IN with the notch radius) and the influence of (6) the grain size and uniformity on it, using as a reference the numerical results derived from Eq. (4). Variation of the apparent fracture toughness with the notch radius for different grain sizes ( l ) of 1 mm, 2 mm and 3 mm, for the particular case of n = 1 1 3
Notch Effect
The notch effect is graphically shown in Figs. 8 and 9 documenting the variation of the apparent fracture toughness with the notch radius for n = 30 (Fig. 8), representing a relatively uniform grain structure, and n = 1 (Fig. 9), for less uniform grain structures. Each of the dots in the graphs corresponds to the individual results of the apparent fracture toughness ( K IN ) calculated with Eq. (4), while the dashed (Fig. 8) and solid ( Fig. 9) lines stand for the best-fit curves according to Eq. (8), which corresponds to the LM of the TCD. Three different curves are depicted in both graphs, for grain sizes of 1 mm (A), 2 mm (B) and 3 mm (C). The statistical method considered to fit the curves to the data points is the least squares method in all the cases, leaving fracture toughness ( K IC ) and the critical distance ( L ) as free variables.
In all the cases, the notch effect looks clear, since the apparent fracture toughness shows a continuous increment with the notch radius. This increment seems to lessen as the notch radius increases. However, the notch size after which the notch effect is negligible has not been captured with the analysed range of notches. In parallel, the notch effect is also supposed to be negligible below a certain notch radius as stated by Taylor (2017). However, this is neither observed in Figs. 8 and 9. The physical reason explaining this phenomenon is probably related to the stress concentration region around the notch tip. The smallest notches develop the highest stress concentration at the notch tip, probably affecting only a small number of grains, while the stresses in the case of the largest notches are not so concentrated just at the tip and are somehow distributed among more grains in the surrounding of the notch tip. Consequently, if the notch radius is sufficiently small, differences in stress concentrations would probably be insignificant compared to the scatter of the numerical or experimental results and, in the opposite situation of a theoretically infinite notch radius, the notch concentration would be null and would behave as a section reduction rather than as a stress riser.
Likewise, a clear displacement of the three curves is directly observed when analysing both Figs. 8 and 9. Similar trends are observed in both cases. The curves move upwards as the grain size increases, which implies an increment of the fracture toughness. On the other hand, the curves slightly flatten when the grain size increases, which seems to indicate that the notch effect is less significant when the grain size is larger.
With this, Table 7 gathers the corresponding results of both K IC and L variables derived from the adjustment of Eq. (8) for each of the analysed combinations of l and n . The coefficient of determination ( R 2 ) is also given in Table 7 to specify the goodness of fit of each of the curves with respect to the mean numerical values. It is observed that, in general terms, the value of the fracture toughness ( K IC ) is roughly constant for both values of the parameter n . Thus, K IC seems to be influenced only by the average grain size in statistical terms. On the other hand, the value of the critical distance ( L ) presents a certain decrease when moving from n = 30 to n = 1, or in other words, it reduces when grain size is less uniform. These trends are consistent for the three analysed grain sizes with l equal to 1, 2 and 3 mm. This is easily observed in Fig. 10, which gathers in a single plot the best-fit curves of the LM of the TCD represented in Figs. 8 and 9. It is observed that the three curves A, B and C tend to move downwards and to slightly flatten for more uniform grains . 10 Comparison between the best-fit curves of the variation of the apparent fracture toughness with the notch radius represented in Figs. 8 and 9 for different grain sizes ( l ) of 1 mm, 2 mm and 3 mm, for both n = 30 and n = 1 (i.e., n = 30), which gives as a result the values of K IC and L collected in Table 7. Similar responses have been observed in previous experimental studies as those performed by Justo et al. (2017), in four different rocks. Figure 11 represents the case of the Macael marble as an example. The dots correspond to the individual test results while the dashed line indicates the best-fit curve according to Eq. (8), the same as in Figs. 8, 9 and 10. Although it is not the purpose of this work to reproduce the same behaviour, both the experimental and the numerical results offer similar trends within a relatively similar order of magnitude.
Finally, comparing in Fig. 12, the values of the critical distances and the mean grain size, there seems to be a linear relation among them. This relation should be considered in qualitative terms, since the performed numerical simulations are an idealisation of the real problem. For example, 3D effects are neglected here, as plane strain conditions are being considered. Besides, only intergranular fractures are considered. Taylor (2017) related the critical distance of different materials with clearly distinguishable microstructural distances such as the grain size in the case of rocks. He reported that in most of the cases, L lies between 1 and 10 times this distance. Although an accurate definition of the relation between L and the grain size still requires further research, this research shows that the correlation exists for zero porosity crystalline rocks. This relation could allow simplified finite element analyses to be performed for the fracture assessment of rocks considering the influence the grain size through the critical distance.
Comparison of the Stress Fields
According to the TCD, the stresses are evaluated along the bisector plane of the notch tip in the case of mode I loading conditions, as those analysed in this work. For this reason, the stresses normal to this plane corresponding to both the DEM and FEM analyses are compared here. Figure 13 gathers some representative curves for both approaches, including the stress laws for = 3 mm (Fig. 13a), = 7 mm (Fig. 13b) and = 15 mm (Fig. 13c), all of them for a mean grain size of 1 mm and 3 mm and for the particular case of n = 30. In general terms, good agreement is observed between the solid curves corresponding to the continuum approach and the dotted curves corresponding to the discrete approach. The illustrated DEM curves correspond to a particular random case and can vary in each model depending on the actual position of the grains. Besides, 20 history points were considered along the bisector plane to represent the latter DEM curves in all the cases.
If the stresses of FEM and DEM results are compared for r = 0 mm, in general, the maximum stress at the notch tip seems to be slightly higher when FEM is used, although the difference is practically negligible when the grain size is sufficiently small. Besides, the stepped appearance of the dotted curves softens as the grain size decreases. In fact, in those curves corresponding to the models with a grain size of 1 mm (Fig. 13a1, b1, c1), the differences between the DEM and FEM curves are insignificant.
The observed influence of the grain size close to the notch is somehow absorbed by the LM of the TCD, which evaluates the stresses along a certain distance ( 2L ) from the notch tip instead of considering the maximum stress at the tip. This distance is assumed to be sufficiently small not to be affected by the boundary of the model. According to the critical distance values obtained from the numerical models and gathered in Table 7, this hypothesis is valid.
The consequences of the notch effect can be directly observed in the plots of Fig. 13. The stress concentrations around the notch tip are relatively higher for the smallest notch radii, as expected. When the notch radius is sufficiently large (e.g., 15 mm), the notch effect tends to vanish and the Fig. 14 Comparison of the numerically obtained DEM and FEM stress contours ( xx ) around the notch tips for ρ = 3 mm, ρ = 7 mm and ρ = 15 mm, both for grain sizes of 1 mm and 3 mm and for the particular case of n = 30 stresses approximate to those corresponding to a simple section reduction with no appreciable stress intensification.
Finally, aiming to support the results observed in Fig. 13, Fig. 14 represents the horizontal stress contours near the notch tip considering both the FEM and the DEM models. These stresses correspond to the moment just prior to failure and, as in Fig. 13, they stand for the same representative cases, i.e., for = 3 mm, = 7 mm and = 15 mm. All of them are for a mean grain size of 1 mm and 3 mm and for the particular case of n = 30. In consistence with Fig. 13, good agreement is observed between the stresses of the continuum and discontinuum approaches, not only along the bisector planes but also throughout the surroundings of the notches. The influence of the presence of grains on the stress contours is more pronounced for the largest grains (i.e., 3 mm), which confirms the staggered shape of the curves (see Fig. 13a2, b2 and c2). In contrast, for the smallest analysed grains (i.e., 1 mm), the stress variation is appreciably smoother and very close to that obtained with the continuum approach.
Conclusions
This work studies the influence of the grain size and its uniformity on the apparent fracture toughness ( K IN ) of an idealised non-porous crystalline rock. To this end, different four-point bending tests with variable notch radii, grain sizes and degrees of uniformity have been numerically studied, using block-based numerical models based on DEM. The main limitation of these models is the fact that only intergranular fracture is considered, not accounting for the possible coexistence between intergranular and transgranular fracturing. The interpretation of the results is based on the use of the LM of the TCD, which analyses the stress fields along the bisector of the notch tip. Finally, these stresses have been compared with those obtained from a traditional continuum approach.
Based on the obtained results, the following conclusions should be highlighted: • The notch effect is clear for the range of analysed notch radii regardless of the grain size or uniformity, as the apparent fracture toughness increases with the notch radius in all the cases. • The best-fit curves of K IN corresponding to the LM of the TCD show that the fracture toughness increases with the grain size. Additionally, it seems that the notch effect is relatively softened with the grain size as well, as the curves slightly flatten when the grain size increases. • The fracture toughness does not vary with the degree of uniformity, if the mean grain size is the same. On the other hand, the critical distance slightly decreases when less uniform grains are analysed. • There seems to be a linear relation between the critical distance of the rock and the grain size. However, the obtained relation should be understood in qualitative terms, as the problem studied corresponds to an idealised situation in which the 3D effect is neglected and only intergranular fracture is considered. • DEM and FEM provide similar stress fields at the notch tip when the grain size is sufficiently small. The staggered stress curves associated to the discrete approach soften as the grain size decreases. In general terms, good agreement between the two types of models have been obtained for the studied range of notch radii and grain sizes.
With all this, the TCD provides a suitable tool for the fracture assessment of rocks with notch-type defects. Likewise, discrete numerical approaches allow detailed analyses to be performed when precise information is required, as the influence of the grain size and its uniformity for example. However, in general terms, finite element analyses provide accurate approximations when the global behaviour is studied and the influence of microstructural aspects (e.g., grain size) can be neglected.
|
2022-04-09T13:30:52.887Z
|
2022-04-09T00:00:00.000
|
{
"year": 2022,
"sha1": "654735e97df9eb04d462738580a502499cc38cd2",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00603-022-02852-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "a363f2ade722968de42aa8526673b4e5e2f03635",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
}
|
8687941
|
pes2o/s2orc
|
v3-fos-license
|
Characterization of a Cyanobacterial Chloride-pumping Rhodopsin and Its Conversion into a Proton Pump*
Light-driven ion-pumping rhodopsins are widely distributed in microorganisms and are now classified into the categories of outward H+ and Na+ pumps and an inward Cl− pump. These different types share a common protein architecture and utilize the photoisomerization of the same chromophore, retinal, to evoke photoreactions. Despite these similarities, successful pump-to-pump conversion had been confined to only the H+ pump bacteriorhodopsin, which was converted to a Cl− pump in 1995 by a single amino acid replacement. In this study we report the first success of the reverse conversion from a Cl− pump to a H+ pump. A novel microbial rhodopsin (MrHR) from the cyanobacterium Mastigocladopsis repens functions as a Cl− pump and belongs to a cluster that is far distant from the known Cl− pumps. With a single amino acid replacement, MrHR is converted to a H+ pump in which dissociable residues function almost completely in the H+ relay reactions. MrHR most likely evolved from a H+ pump, but it has not yet been highly optimized into a mature Cl− pump.
represented by visual pigments in the eyes. The other is microbial rhodopsin, which shows divergent functions in unicellular microorganisms, such as light-driven ion pumps and light sensors for phototaxis and the regulation of gene expression. Different from animal rhodopsins, the microbial sensors represent a minority of the microbial rhodopsins. However, microbial sensors have individualities in their signaltransduction modes, which include interactions with other membrane proteins, interactions with cytoplasmic components, and light-gated ion channel activity.
Microbial rhodopsins were originally discovered in highly halophilic archaea in the early 1970's -1980's (3). Since 1999, their homologues have begun to be identified in various microorganisms inhabiting a broad range of environments (4)(5)(6). It is now clear that in the microbial world, ion-pumping rhodopsins are abundant and widely distributed. In microorganisms, these ion pumps are probably the most convenient system for light energy utilization. The first and second ion pumps to be discovered were bacteriorhodopsin (BR) (7) and halorhodopsin (HR) (8) from halophilic archaea, which are an outward H + pump and an inward Clpump, respectively. In 2000, an outward H + pump named proteorhodopsin (PR) was discovered in a marine proteobacterium (9). Later, the PR homologues were identified in eubacteria living in the oceans worldwide (10). Recently, two groups of ion pumps, the outward Na + pump (NaR) and inward Clpump (ClR), were also discovered in eubacteria (11,12). Their representatives are Krokinobacter eikastus rhodopsin 2 (KR2) (13) and Fulvimarina pelagi rhodopsin (FR) (14), respectively. The phylogenetic position of these ion pumps is shown in Fig 1A. In contrast to their functional diversity, these ion pumps have essentially the same structural folds composed of seven helices and the chromophore retinal, which binds to a conserved lysine residue via a protonated Schiff base (PSB; deprotonated Schiff base is abbreviated as SB). In the dark, most ionpumping rhodopsins dominantly contain retinal with the all-trans configuration, while some minorities such as BR and HR from archaeon Halobacterium salinarum (HsHR) can also accommodate 13-cis retinal (2). Regardless of this difference in the dark states, only the photoisomerization from all-trans to 13-cis can trigger the conformational changes for respective ion pumping functions. Thus, ion-pumping rhodopsins appear to share a common transport machinery, where the transportable ions are probably determined by essential residues at appropriate positions. This scenario was partially demonstrated in 1995, when the H + pump BR was converted to an inward Clpump by the single amino acid replacement of Asp85 with Thr, the corresponding amino acid in HR (15). However, the success of pump-to-pump conversion was confined to this BR case. The reverse conversion, that is, from a Clpump to a H + pump, has not been achieved (16)(17)(18) even after ten mutations of HR (18). Here, we report a new class composed of an inward Clpump and its conversion to an outward H + pump. Functional characterization was performed with a microbial rhodopsin from the cyanobacterium Mastigocladopsis repens, which was isolated from soil at Punta de la Mora, Tarragona in Spain. This microbial rhodopsin is designated as M. repens HR (MrHR) due to its similarities with HR. By a single amino acid replacement, MrHR begins to pump H + outwardly. Thus, this is the first successful conversion from an inward Clpump to an outward H + pump.
EXPERIMENTAL PROCEDURES
Phylogenetic Analysis-The 57 amino acid sequences were aligned using MUSCLE. All sequence data were obtained from the NCBI database. The phylogenetic tree was constructed by using the maximum likelihood method, with bootstrap percentages based on 1,000 replications. Initial trees for the heuristic search were obtained by applying the neighbor-joining method (19). Evolutionary distances were calculated with the JTT matrix-based method (20). The branch lengths denote the number of amino acid substitutions. All analyses were performed using MEGA6.
Gene preparation, protein expression and purification-Escherichia coli strain DH5α was used for DNA manipulation. An MrHR gene (NCBI accession ID: WP_017314391) with codons optimized for E. coli expression was chemically synthesized (Funakoshi, Japan) and inserted into the NdeI/XhoI site of the pET-21c(+) vector (Merck Japan, Tokyo, Japan). This plasmid results in MrHR with additional amino acids in the C-terminus (-LEHHHHHH). The T74D mutation was introduced using the QuikChange Site-Directed Mutagenesis Kit (Stratagene, La Jolla, CA). The DNA sequences were confirmed by a standard method using an automated DNA sequencer (model 3100, Applied Biosystems, Foster City, CA). MrHR and the T74D mutant were expressed and purified from E. coli BL21(DE3) cells. The procedures were essentially the same as those previously described (21). The cells were grown at 37 ºC in 2×YT medium supplemented with 50 μg/mL ampicillin. At the late exponential growth phase, the expression was induced by the addition of 1 mM isopropyl-β-D-thiogalactpyranoside in the presence of 10 μM all-trans retinal. After 3 h of induction, pink-colored cells were harvested by centrifugation (6400 × g, 8 min at 4 °C) and washed once with buffer (50 mM Tris-HCl, pH 8.0) containing 5 mM MgCl 2 . Then, the cells were broken with a French press (Ohtake, Tokyo, Japan) (100 MPa × 4 times). After removing the undisrupted cells by centrifugation (5600 × g, 10 min at 4 °C), the supernatant was ultracentrifuged (178,000 × g, 90 min at 4 °C). The collected cell membrane fraction was suspended with the same buffer containing 300 mM NaCl and 5 mM imidazole and was then solubilized with 1.5% ndodecyl β-D-maltopyranoside (DDM) (Dojindo Lab, Kumamoto, Japan) at 4 °C overnight. After removal of the insoluble fraction by ultracentrifugation (178,000 × g, 60 min at 4 °C), the solubilized MrHR proteins were subjected to Ni-NTA agarose (Qiagen, Hilden, Germany). The unbound proteins were removed by washing the column with 10 column volumes of wash buffer (50 mM sodium phosphate, pH 7.5) containing 400 mM NaCl, 50 mM imidazole and 0.05% DDM. The bound protein was eluted with buffer (50 mM Tris-HCl, pH 7.0) containing 300 mM NaCl, 500 mM imidazole and 0.1% DDM. The yield of MrHR was 15 mg from 1 L culture. The concentration was determined from the absorbance at 537 nm under an assumed extinction coefficient of 40,000 M -1 cm -1 . The purified samples were replaced with an appropriate buffer solution by passage over Sephadex G-25 in a PD-10 column (Amersham Bioscience, Uppsala, Sweden).
Ion-pump activity measurements-The MrHR activity was measured in E. coli suspensions using a conventional pH electrode method (22), which detects the pH changes by the pump activity of H + itself or passive H + transfer in response to the membrane potential created by the pump activity for another ion. The E. coli suspensions were prepared as follows. The cells expressing MrHR were harvested at 3,600 g for 5 min at 4 ºC and washed twice with an unbuffered solution containing 200 mM salt (NaCl, Choline-Cl, NaBr, NaI or NaNO 3 ). They were resuspended in the same salt solution and gently shaken overnight at 4 ºC in the presence of 10 μM carbonyl cyanide m-chlorophenylhydrazone (CCCP). Then, the cells were washed twice with the same salt solution without CCCP and finally suspended at an A 660 of 0.5. This cell density was approximately 5% of the corresponding value for the original culture medium. As deduced from the purification yield, the original culture medium contains at least 15 μg/mL MrHR. Thus, the cell suspensions for the activity measurements contain at least 0.75 μg/mL of MrHR. For the activation of MrHR, 530 ± 17.5 nm green LED (LXHL-LM5C, Philips Lumileds Lighting Co., San Jose, CA) was used.
HPLC analysis-The retinal configurations of MrHR were examined in both of the dark/light-adapted states. For dark adaptation, the MrHR samples were kept in the dark for 1 week in 10 mM MOPS, pH 6.5, containing 100 mM NaCl and 0.05% DDM. For light adaptation, the samples were irradiated for 2 min by green LED light as described above. MrHR undergoes a prolonged photocycle as described below. To avoid the contamination of retinal in the photolyzed state, the retinal oxime extraction was carried out after 1 min incubation in the dark. The extraction and the following HPLC analysis was performed as previously described (22).
Absorption spectra measurements and Flash photolysis-UV-visible spectra of MrHR samples were measured with a UV-1800 spectrometer (Shimadzu, Kyoto, Japan). Flashinduced absorbance changes were obtained in the 5 μs to 10 s time range on a single wavelength kinetic system. For the photoexcitation, the second harmonic (7 ns, 532 nm) of a Q-switched Nd:YAG laser (Surelite I-10, Continuum, Santa Clara, CA) was used. The details have been previously described (21). To improve the S/N ratio, 30 laser pulses were used at each measurement wavelength. All measurements were performed at 25 ºC.
RESULTS AND DISCUSSION
Mastigocladopsis repens, isolated from the surface of soft powdery calcareous soil, is a blue-green alga belonging to the cyanobacterial order Stigonematales and was morphologically characterized according to the filaments it forms, which have many branches (23). In 2013, the existence of a gene encoding a microbial rhodopsin was revealed by whole genome sequencing of M. repens (24). We named this microbial rhodopsin "M. repens halorhodopsin (MrHR)" after its photochemical properties, described below. At present, six other homologues of MrHR are found in the NCBI database. Figure 1A shows the phylogenetic tree of microbial rhodopsins, which revealed that MrHR homologues constitute a new clade that is distinct from the known microbial rhodopsins. The six homologues are also encoded in cyanobacteria and share 63-89% amino acid identities with MrHR. Meanwhile, the host strains belong to different cyanobacterial orders (Chroococcales and Nostocales) from M. repens, reflecting morphological differences. The phylogenetic relationships of MrHR homologues are shown in Fig 1B together with the names, orders and sources of their host strains. In addition to M. repens, all other genomes were sequenced (24)(25)(26)(27)(28). Cyanobacteria are a genetically diverse group and are widespread in fresh water, marine, terrestrial and extreme environments such as hot springs and salt lakes. However, out of the seven strains of MrHR homologues, six were isolated from terrestrial environments, such as soil and rock scrapings, and biofilms on stone monuments and building facades ( Fig 1B) (23,29,30). Correspondingly, the desiccation resistance was experimentally confirmed for several strains (31). Two strains also contain the gene for Anabaena sensory rhodopsin (ASR), a putative sensor for chromatic adaptation that was originally discovered in the cyanobacterium Anabaena sp. PCC7120 (32). The other five strains do not contain other microbial rhodopsin genes.
Microbial rhodopsins are largely categorized into ion pumps and sensors. All ion pumps are solely encoded in their respective operons in the genomes, whereas all sensors are encoded to be adjacent to the genes encoding the cognate transducers within the same operons. For MrHR homologues, their genes are solely encoded, implying that they function as ion pumps. Previous studies on ion pumps (12, 13) have indicated that three amino acids are responsible for determining the transportable ions (Fig 2A and B). For H + pumps, a H + from PSB is translocated during the photoreaction. This H + is initially transferred to D85 in BR (D97 in PR), and SB subsequently accepts a H + from D96 in BR (E108 in PR) (Fig 2A). These residues are often referred to as the H + acceptor and donor, respectively. For Clpumps, the acceptor is replaced with a 'T' in HR and 'N' in FR (Fig 2B), which enables the binding of substrate Clas the counter ion of the PSB. Furthermore, for HR and KR2, another residue corresponding to T89 in BR is known to play a crucial role (13,21): These residues are 'S' in HR and 'D' in KR2, respectively (Fig 2B). Thus, three residues (D85, T89, and D96 in BR) are now designated as the "motif": These are DTD and DTE for BR and PR, TSA for HR, and NTQ and NDQ for FR and KR2 (Fig 2B). For MrHR, the motif is TSD (Fig 2A and B), which is close to TSA in HR. This implies that MrHR may pump Clinwardly, despite a low amino acid identity of 22% for HR and 12% for FR. The identities for the H + pumps are also low: 29% for BR and 17% for PR. However, the donor in the H + pumps is conserved as 'D' in MrHR, similar to BR(D) or PR(E). Furthermore, MrHR conserves E194 and E204 in BR, which consists of the H + releasing complex to the extracellular medium ( Fig 2B). Thus, MrHR conserves the residues that are characteristic of both the Clpump (HR) and the H + pump (BR and PR).
In this study, we expressed MrHR in the cell membrane of Escherichia coli and investigated the ion-pumping activity using the light-induced pH changes of the cell suspensions (Fig 3). In a NaCl solution, light-induced alkalization was observed. This pH change was not abolished by the addition of the protonophore CCCP, which eliminates the electrochemical gradient of the proton (33). This means that the alkalization was not caused by the active proton transport but was caused by the passive proton influx in response to the interior negative membrane potential, which should be created by outward Na + or inward Cltranslocation. In Choline-Cl, alkalization was also observed. Because choline is a large organic cation, microbial rhodopsin cannot transport it. This indicates that MrHR functions as a light-driven inward Clpump. The pump activity decreased in NaBr and disappeared in NaI and NaNO 3 . Thus, MrHR does not pump Na + but does pump smaller anions. The transportable anions are severely restricted to only Cland Br -, different from HR and FR, which can even transport NO 3 -(14,33). The retinal isomer composition of MrHR was examined by HPLC analysis (Fig. 4). As described above, BR and HsHR in the dark states can accommodate both all-trans and 13-cis retinals and show so-called "light-dark adaptation" (2). In their light-adapted states after continuous illumination, the unphotolyzed states predominantly contain all-trans retinal. Upon dark adaptation, their 13-cis contents increase approximately 50%. For MrHR (Fig 4), the isomer composition does not depend upon the dark/light adaptation and is predominantly alltrans, similar to most ion-pumping rhodopsins. Thus, the all-trans retinal is responsible for the ion-pumping function of MrHR.
Anion binding near the PSB was monitored by the color change of retinal using purified MrHR (Fig 5A). For other Clpumps, Clbinding causes a blueshift (14,34), except for HsHR (22). In contrast, a large redshift occurs for MrHR by its binding of Cl -, suggesting the differences in the Clbinding site from other Clpumps. Similar redshifts were also observed for Brand even for Iand NO 3 -. The dissociation constants (K d ) were determined from the absorbance changes at 550 nm (Fig 5B), and the results are summarized in Table 1. The K d values are close to those of HR, except for NO 3 -(171 mM): HR binds NO 3 more strongly (K d ~ 16 mM) (35), whereas FR binds these ions more weakly (K d = 40 -130 mM) (14). These results indicate that MrHR can bind Iand NO 3 but cannot transport them. These larger ions may be impossible to move toward a cytoplasmic half channel over the PSB region. Compared with HR and FR, the ion translocation pathway might be narrow and act as an ion-selective filter.
We also characterized the photocycle of MrHR by a flash photolysis method. Timedependent absorbance changes at selected wavelengths are shown in Fig 5C, The slow photocycle of MrHR might reflect some differences in the physiological role from other Cl --pumping rhodopsins. Although the details are not fully resolved, the Clpumps are believed to play roles in light-driven ATP production and/or maintaining the cellular osmotic balance during the volume increase of the growing cell (38)(39)(40)(41). All host strains of MrHR homologues contain a photosynthetic apparatus. Thus, the slow photocycle of MrHR seems to contribute little to cellular ATP production under illumination. Instead, MrHR might relate to a regulation of osmotic pressure. Unlike aquatic cells harboring other Clpumps, most host strains of MrHR homologues inhabit terrestrial and non-aquatic environments, where the cells are occasionally exposed to drought stress. Thus, the cells might utilize MrHR homologues to survive in the non-aquatic habitats. Under desiccated conditions, the internal salt concentration should be increased to preserve the intracellular water content. Due to the interior negative membrane potential, cations could be passively transported inside. However, anions need an active transport system, such as MrHR. Thus, a simple scenario might be that MrHR elevates the intracellular osmotic pressure, which in turn preserves the intracellular water content under drought conditions. For further discussion, we must await future investigations.
MrHR conserves the H + donor residue in the H + pump, which is an essential residue in the H + pump. Thus, we attempted the conversion of the MrHR to a H + pump; a mutant T74D was made in which the Asp residue was introduced as a possible proton acceptor. The λ max of T74D is located at 522 nm ( Fig 6A) and did not change upon the addition of Cl -. This indicates that there is no chloride binding near the PSB. The HPLC analyses showed that the retinal isomer composition is predominantly all-trans ( > 95%) in both dark and light-adapted states, similar to wild-type MrHR. Next, we examined the ionpumping activity using the pH change of the E. coli suspension. The results are shown in Fig 6B, which indicates the opposite pH change to the wild-type MrHR. Even in the NaCl solution, light-induced acidification was observed, showing proton extrusion. This pH change was decreased by the addition of CCCP, indicating that T74D actively pumps protons outward. Thus, a successful conversion from a Clpump to a H + pump was effected by the single amino acid replacement. Corresponding to this conversion, T74D undergoes a totally different photocycle from that of wild-type MrHR (Fig 6C). Specifically, the M-formation (400 nm) was observed, indicating that the introduced Asp residue acts as a H + acceptor from PSB. For HR, a T-to-D mutation was never observed to induce M-formation (16)(17)(18). For natural H + pumps in the dark, SB is protonated while the acceptor is deprotonated because the SB has a larger pKa than the acceptor. M-formation requires the inversion of the pKa values: The pKa of the acceptor must become larger than that of SB. T74D accomplishes these pKa changes. Following M decay, two intermediates appear sequentially, corresponding to N (470 nm) and O (590 nm) in BR. For H + pumps, the donor facilitates both M-N and N-O transitions because the M-N transition reflects proton movement from the donor to SB, and the subsequent N-O transition reflects proton uptake from the cytoplasmic medium by the donor. Thus, the donor-lacking mutants of H + pumps undergo substantially slower transitions after M formation (42,43). Interestingly, both transitions in T74D are fast compared with natural H + pumps, indicating that the donor functions very well. Moreover, M, N and O appear sequentially without quasi-equilibrium. This reflects the strict "accessibility switch" of the donor, which communicates with SB during the M-N transition; then, the accessibility switches to the cytoplasmic medium to facilitate the N-O transition. In a H + pump, this switch contributes to the one-way (irreversible) H + transport (42,43). These facts indicate that T74D contains the structural factors as well as the essential residues for an effective H + pump.
The overall photocycle of T74D resembles that of natural H + pumps but is prolonged due to the last intermediate (520 nm), which resembles MrHR' in the wild-type photocycle. A similar intermediate was also observed in PR (44), but MrHR' in T74D has a substantially long lifetime (~3 s). Another difference from a natural H + pump is the formation of an unknown intermediate (hereafter called X) at approximately 440 nm. X is formed almost simultaneously with the M-decay, but it decays independently of N and O. Thus, X probably forms from M by a branching process. SB reprotonates during the M-decay. For the M-N transition, the donor provides this proton. For the M-X transition, the proton is probably provided through another pathway, implying no contribution of X to the H + pumping activity. In natural H + pumps, the antecedent deprotonation of SB triggers the accessibility switching of SB from the acceptor to donor, which enables the exclusive H + transfer from the donor to SB (42,43). In T74D, this switching might not be stringently controlled.
MrHR functions as an inward Clpump but undergoes a prolonged photocycle. The donor residue is not essential for Cl --pumping activity, which was confirmed by the alanine replacement mutant (data not shown). In contrast, in T74D, both the introduced acceptor and the conserved donor function are comparable with those in natural H + pumps, which suggests that MrHR evolved from a H + pump, but the residues and the structure have not been optimized into a mature Clpump. The mutant T74D has reduced H + pumping activity as compared to natural H + pumps because of the slow decay of the last intermediate and the formation of the X intermediate. These weaknesses are probably related to the lost mechanisms, which are important for a H + pump but not for a Clpump. Thus, comparable studies of MrHR, Clpumps and H + pumps might provide insight into how sophisticated Clpumps evolved from H + pumps. Their NCBI accession IDs are indicated in the parentheses. The strains labeled with closed circles also contain the ASR gene. Other symbols indicate the cyanobacterial orders and the sources of the host strains. In both trees, nodes with less than 50% bootstrap percentages are collapsed. Dissociation constants (K d ), absorption maxima (λ max ) and their shifts (Δλ max ) by the anion bindings a K d s were determined by fitting analyses using Hill equation with n = 1. The best-fit curves are shown in Fig 5B. b λ max s were the determined values at the following anion concentrations: Cl -, 0.5 M; Br -, 0.5 M; I -, 0.5 M; NO 3 -, 2 M. At these concentrations, the λ max shifts were almost saturated. c Δλ max s denote the λ max shifts from the anion-free state (506 nm).
|
2018-04-03T00:50:21.606Z
|
2015-11-17T00:00:00.000
|
{
"year": 2015,
"sha1": "c2cfc041112d5f92865469f0cc48d08738f7913d",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/291/1/355.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "979ec62c9329eebb3109011ab47601b6ccb2c556",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
16818730
|
pes2o/s2orc
|
v3-fos-license
|
Collective Multi-Vortex States in Periodic Arrays of Traps
We examine the vortex states in a 2D superconductor interacting with a square array of pinning sites. As a function of pinning size or strength we find a series of novel phases including multi-vortex and composite superlattice states such as aligned dimer and trimer configurations at individual pinning sites. Interactions of the vortices give rise to an orientational ordering of the internal vortex structures in each pinning site. We also show that these vortex states can give rise to a multi-stage melting behavior.
Such states include orientationaly ordered dimer, trimer and composite states. Transitions between different vortex states can be observed as jumps in the critical depinning force as a function of pinning size. We also show that in these systems multi-stage melting processes can occur where the orientational degrees of freedom of the internal vortex lattice structure in the pinning sites melt at a much lower temperature than the whole vortex lattice does.
Particular systems in which these states can be realized include thin-film superconductors with arrays of blind holes, disks or weak magnetic dots. Other possible physical realizations include charged colloidal particles or Wigner crystals in 2D trap arrays.
We consider a 2D system with periodic boundary conditions We numerically integrate the overdamped equation of motion for a vortex i The vortex-vortex interaction potential is chosen to be logarithmic U v = − ln(r). The force on vortex i from the other vortices is f vv where r ij = |r i − r j | is the distance between vortices i, j, and η is the Bardeen-Stephen friction. We evaluate the periodic long-range logarithmic interaction with the resummations given in Ref [14]. The pinning is modeled as a square array of attractive parabolic wells with f vp ik . Here Θ is the step function, r (p) k is the location of pinning site k, f p is the maximum pinning force, and r p is the radius of the pinning site. To obtain vortex configurations we start from a high temperature where the vortices are in a molten state and gradually cool until T = 0. We have verified that our cooling rate is sufficiently slow so that the final state no longer depends on the cooling rate. In this work we will focus on the case of B/B φ = 2, 3, and 4, where B φ is the field at which there is one vortex per pinning site. This is adequate to capture the general features of the vortex states for higher matching fields. Results for higher B/B φ and incommensurate filling fractions will be presented elsewhere. The results for this work are for 8 × 8 pinning arrays. We have also conducted simulations with larger systems and for different pinning lattice constants and have found the same features as seen for the 8 × 8 systems. When the pinning sites are sufficiently large, multiple vortices can be captured per pinning site. We note that it is possible that vortices in the pinning sites can for certain parameters form individual giant vortex states [10] which we do not consider here.
In Fig. 1 we show the four vortex states that are possible for B/B φ = 3 for varying pinning strength and size. In Fig. 1(a) for the weak f p = 0.25 and small pinning r p = 0.1 the vortexvortex interactions dominate and the vortex lattice forms a nearly triangular lattice. The vortex lattice still takes advantage of the pinning; however, only half the pinning sites can be occupied in order for the vortex lattice to have triangular ordering. We label this phase the commensurate elastic lattice. In Fig. 1(b), for stronger pinning, the pinning sites each capture one vortex and the vortex lattice is no longer triangular. The overall vortex lattice is still ordered with pairs of interstitial vortices alternating in positions. The state in Fig. 1 is identical to the state observed experimentally by Harada et al. [2], for the third matching field. For increased pinning radius r p = 0.5, and f p = 1.25 in Fig where only a fraction of the vortices are filled, as in region I for B/B φ = 3.0, which can be understood by considering that in Fig. 2(a) the vortex lattice is already triangular. The state in Fig. 2(a) was also observed in experiments [2]. In Fig. 2(b) where two vortices can be captured we do not find a completely ordered overall lattice. Here the lattice breaks up into domains with two separate orientations. This can be a finite size effect where the 8 × 8 system is incommensurate with the unit cell of order. In Fig. 2(c) three vortices are captured per pinning site where the trimers are oriented with respect to one another and the interstitial vortices form a square sub-lattice. In Fig. 2(d) where each pinning site captures four vortices, the vortices in the pinning sites form a square lattice with the same orientation as the pinning lattice.
We have also conducted simulations for B/B φ > 4.0 and observe the same general features of the vortex states as outlined above, in particular the orientational ordered multivortex lattice states and ordered interstitial sublattices. The number of different kinds of states increases with the field. In Fig. 3(a) we show the evolution of the phases for the B/B φ = 3.0 case for varied r p and f p with region I through IV corresponding to the phases in Fig. 1(a-d). As r p and f p are increased region II slowly decreases, region III maintains a roughly constant width, and region IV grows. Region I disappears for f p > 1.0. In Fig. 4(b) we plot the evolution of the phases for B/B φ = 4.0 with regions I ′ through IV ′ corresponding to the phases in Fig. 2(a-d). Here region I ′ is considerably larger than the other phases. For increasing f p region IV ′ grows while regions II and III keep roughly the same width.
The onset of the different vortex states as a function of r p at constant f p can also be observed as discrete jumps in the critical depinning force. The depinning is determined by adding an increasing driving force term to Eq. (1) in the symmetry direction of the pinning lattice and monitoring when the vortex velocities become non-zero.
In Fig. 4 we show that the collective multi-vortex states can give rise to a novel multi-stage melting behavior. We focus here on the simplest collective multi-vortex state at B/B φ = 2.0, when there are two vortex states. In the first state, every pinning site captures one vortex while interstial vortices form a square sub-lattice as previously observed in simulations [6] and experiments [2]. The second state, shown in Fig. 4(a), is an aligned dimer state with all the vortex dimers being aligned at 45 degrees. We apply a temperature by adding Gaussian noise to Eq. (1). The dimers stay aligned until T = 0.004 at which point they begin to freely rotate inside the wells as shown in the vortex trajectories in Fig 4(c). This destroys the orientational order, as seen in Fig. 4(b), and results in a liquid dimer state. As T is increased the vortices remain confined in the pins until T = 0.01 when the overall lattice melts with vortices diffusing randomly in the sample as shown in Fig. 4(d). To measure the melting transitions quantitatively in Fig. 4(e) we plot the angular correlation of the dimers C θ = nm exp(2i(θ n − θ m )) /N. We also plot a measure of the vortex displacements from their initial position at T = 0 and compared to the displacements at a higher T : Here τ is the time interval between temperature increments. For low T , C θ is near unity and d(T ) ≈ 0 as the dimers remain aligned. At T = 0.004, the dimers begin to freely rotate, as seen in the drop in C θ to ≈ 0.3 (notice that C θ would be close to zero for a large system simulated over a long time) as well as the finite jump in in d(T ).
Since the dimers are still confined to the pinning wells d(T ) stays at a constant value in the molten dimer state. For T > 0.09 d(T ) rises rapidly as the vortices begin to jump out of the wells and diffuse randomly in the liquid state. The melting temperature of the dimer states is reduced as the pinning lattice constant is increased or the pinning radius is reduced. For higher matching fields a similar multi-stage melting behavior is observed where the loss of orientational ordering of the vortices in the pinning sites occurs before the loss of order in the overall lattice. More work is needed to determine the nature of the melting of the dimer or trimer states, such as whether it is continuous and similar to the melting in XY type models. It may also be possible that additional melting stages occur when the vortices are still inside the pinning site similar to the melting behaviors of vortices [8] or colloids [13] inside individual traps.
We point out that in addition to simulations with logarithmically interacting vortices we have also conducted 2D simulations with a finite range Bessel function interaction and find (provided the interaction range is sufficiently large) similar features in the vortex structures and multi-stage melting indicating that many of the phases we observed here are general features of systems of 2D repulsive particles in periodic arrays.
In conclusion we have studied the vortex states in thin-film superconductors interacting with periodic pinning arrays in which multiple vortices can be trapped at individual sites.
We find that a rich variety of novel vortex states are possible as a function of pinning strength and pinning size. These states include collective dimer, trimer and composite states in which the vortex structures in the pinning sites exhibit an orientational ordering with each other.
Transitions between the different states can be observed as a series of discrete jumps in the critical depinning force for varied pin radius. We also show that these systems exhibit a multi-stage melting where the structures internal to the vortex lattice melt before the overall vortex lattice melts. Besides vortices in superconductors these states may be observable for charged colliods in multi-trap arrays.
We f p = 1.25, r p = 0.5. Every pinning site captures two vortices forming a dimer state with the dimers being orientationally ordered with respect to one another as well as with the interstitial vortices.
(d) f p = 1.25, r p = 0.7. Every pinning site captures three vortices forming a trimer state with the trimers orientationally ordered as seen in the unit cell.
|
2014-10-01T00:00:00.000Z
|
2000-04-12T00:00:00.000
|
{
"year": 2000,
"sha1": "2f8c0d28997ee57970f49554049ae6b37b1ad069",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0004203",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f114d07768028b73732aa0ccc599d654320a2808",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
}
|
219702547
|
pes2o/s2orc
|
v3-fos-license
|
Risk Factors for Mortality and Respiratory Support in Elderly Patients Hospitalized with COVID-19 in Korea
Background The mortality risk of coronavirus disease 2019 (COVID-19) is higher in patients with older age, and many elderly patients are reported to require advanced respiratory support. Methods We reviewed medical records of 98 patients aged ≥ 65 years who were hospitalized with COVID-19 during a regional outbreak in Daegu/Gyeongsangbuk-do province of Korea. The outcome measures were in-hospital mortality and the treatment with mechanical ventilation (MV) or high-flow nasal cannula (HFNC). Results The median age of the patients was 72 years; 55.1% were female. Most (74.5%) had at least one underlying condition. Overall case fatality rate (CFR) was 20.4%, and median time to death after admission was 8 days. The CFR was 6.1% among patients aged 65–69 years, 22.7% among those aged 70–79 years, and 38.1% among those aged ≥ 80 years. The CFR among patients who required MV was 43.8%, and the proportion of patients received MV/HFNC was 28.6%. Nosocomial acquisition, diabetes, chronic lung diseases, and chronic neurologic diseases were significant risk factors for both death and MV/HFNC. Hypotension, hypoxia, and altered mental status on admission were also associated with poor outcome. CRP > 8.0 mg/dL was strongly associated with MV/HFNC (odds ratio, 26.31; 95% confidence interval, 7.78–88.92; P < 0.001), and showed better diagnostic characteristics compared to commonly used clinical scores. Conclusion Patients aged ≥ 80 years had a high risk of requiring MV/HFNC, and mortality among those severe patients was very high. Severe initial presentation and laboratory abnormalities, especially high CRP, were identified as risk factors for mortality and severe hospital course.
INTRODUCTION
Coronavirus disease is an infectious disease caused by a novel coronavirus, severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2). Approximately 3 months after the first report in Wuhan, China, the number of cases exceeded 400,000 by late March. 1 High viral shedding early in the disease course and slow progression make the effort for containment extremely difficult, and a large surge of cases has been observed in Europe and North America. 2, 3 Its clinical course ranges from asymptomatic infection to acute respiratory distress syndrome (ARDS) and death. 4 Although most patients undergo mild febrile illness, a relatively large proportion of patients need hospitalization and respiratory support such as high-flow nasal cannula (HFNC) or mechanical ventilation (MV). 5 Case fatality rates (CFRs) vary significantly by country, as the magnitude and velocity of surge greatly affect the care of patients. However, severe cases and mortality are consistently reported among the elderly, and patients aged ≥ 60 years comprise the majority of fatal cases in both China and Italy. 6,7 Despite the importance of old age with respect to outcome, there have been no reports specifically aimed at examining the clinical characteristics and treatment outcomes in elderly patients with COVID-19 who require hospitalization. Information on the outcomes in this population, especially the need for MV/HFNC that requires specialized machines and considerable resources, is necessary for the public health response and planning.
Since February 19, 2020, a regional outbreak of COVID-19 occurred in the Daegu/Gyeongsangbukdo province of Korea (Fig. 1). Owing to an expanded testing capacity and rapid public health response, most patients are considered diagnosed and monitored. Although the healthcare capacity has been overstretched during the outbreak, the CFR observed in the area is substantially lower than the CFRs reported in China and Italy, suggesting that the healthcare system has been largely capable of providing adequate care for patients. 8, 9 The clinical data from Daegu/ Gyeongsangbuk-do province would provide useful information regarding the characteristics of COVID-19 in a situation that is different from the other two gravely affected countries.
Thus, we conducted a retrospective study to elucidate the clinical characteristics and risk factors for mortality and the need for MV/HFNC in elderly patients hospitalized with COVID-19.
Study population and data sources
We obtained medical records of patients aged ≥ 65 years who were admitted with laboratoryconfirmed COVID-19 in four hospitals between February 18 and March 4, 2020. The end date was set to ensure that all patients were observed for at least 14 days after admission, as the median time from onset to MV was 10.5 days in a previous report and 75th percentile of time to death after symptom onset was reported to be 14 days. 9,10 All patients were residents in the Daegu/Gyeongsangbuk-do province, and diagnosis of COVID-19 was made using a real-time reverse-transcriptase polymerase chain reaction (RT-PCR) assay of a nasopharyngeal swab or sputum according to the national guidelines.
Electronic medical records were reviewed to extract demographic characteristics, comorbidities, clinical features and laboratory findings on the day of admission, clinical course, treatment, and outcome. Patients were followed until death or discharge from hospital, whichever came first.
Study outcomes and definitions
The outcome measures were all-cause in-hospital death and MV/HFNC. We did not include care in intensive care units (ICUs) as an outcome measure as many mechanically ventilated patients were treated outside an ICU due to a shortage of ICU beds. The severity of the clinical course was evaluated through the highest respiratory support required during the hospital stay. They were categorized into none, supplementary oxygen (via nasal prong or facial mask), HFNC, MV, and extracorporeal membrane oxygenation. Noninvasive positive pressure ventilation was not administered to any of our study patients. Nosocomial acquisition was defined as a diagnosis of COVID-19 during admission in an acute-care hospital or a long-term care facility for other unrelated illnesses. The modified early warning score (MEWS) and national early warning score 2 (NEWS2) were calculated as previously described. 11,12
Statistical analysis
Patient characteristics were summarized and compared among outcomes using Student's t-test or Mann-Whitney U test for continuous variables and χ 2 or Fisher's exact test for categorical variables, as appropriate. In-hospital mortality of the two groups was compared using the Kaplan-Meier curve. A receiver operating characteristic curve was used to evaluate the accuracy of the prognostic factors. All tests were two-tailed, and significance was assessed at P < 0.05. R version 3.6.1 (R Foundation for Statistical Computing, Vienna, Austria) was used for the analyses.
Ethics statement
The study was approved by the Institutional Review Board of the Samsung Medical Center (SMC 2020-03-116-001) with waived informed consent.
Risk factors for mortality
Twenty patients died during their hospital stay, and the overall CFR was 20.4% ( Table 1). The median time to death after admission was 8 (IQR, 5-11) days. The CFR among male patients was significantly higher than that among female patients (31.8% vs. 11.1%, P = 0.023). Age was a significant predictor of mortality ( Fig. 2A); the CFR was 6.1% among patients aged 65-69 years, 22.7% among those aged 70-79 years, and 38.1% among those aged ≥ 80 years died. The substantial effect of age on outcome was observed consistently when CFR was examined according to severity. The CFR among patients ≥ 80 years of age who required HFNC or MV was 87.5%, substantially higher than that of lower age groups (Fig. 2B). In addition, the time to death was longer in patients aged ≥ 80 years (median, 11 days; IQR, 6-15 days) than in patients aged 70-79 years (median, 7 days; IQR, 4-8 days), although the difference was not statistically significant (P = 0.141). The overall CFR among patients who required MV was 43.8% (n = 7/16). All 12 patients whose highest respiratory support was HFNC, who declined to be intubated, died.
Nosocomial acquisition was also a significant risk factor for mortality (
Risk factors for advanced respiratory support
The overall proportion of patients received MV/HFNC was 28.6%. Older patients were more likely to need MV or HFNC; among patients aged 65-69 years, 12.1% received MV/HFNC, but 36.9% of patients aged ≥ 70 years needed MV/HFNC (Fig. 2C). Similarly, the proportion of patients who required supplementary oxygen was also higher in the older age groups.
A high CRP level (> 8.0 mg/dL) showed the highest risk for a severe clinical course in our patients, so its diagnostic characteristics was compared with those of two commonly used prognostication scores ( Table 3). High CRP levels showed higher sensitivity, specificity, and positive predictive value in predicting the need for MV/HFNC. The negative predictive values were comparable. The area under the receiver operating characteristic curve was also larger for high CRP level.
DISCUSSION
In our study of patients aged ≥ 65 years with COVID-19, a high mortality rate and severe clinical course frequently requiring advanced respiratory support were observed. The CFR (20.4%) in our patients was markedly higher than the overall mortality of COVID-19 in Korea (approximately 1.4%). 13 Approximately 29% needed MV or HFNC, and the CFR among that subgroup was very high (67.9%). Most patients had at least one underlying condition, which complicated the clinical course.
Age was the most important preexisting risk factor for mortality and MV/HFNC. In particular, patients aged ≥ 80 years had a 38.1% chance of receiving MV/HFNC. Among them, only one patient survived but was still on a mechanical ventilator at the time of data entry. The effect of older age on mortality has also been reported in China and Italy, which is consistent with our findings. 6,7 Furthermore, our data demonstrated the resources required to manage elderly patients with COVID-19. Combined with the relative risk of infection by age group and population distribution, our results provide critical information needed by healthcare facilities and public health authorities to prepare ventilators and HFNC machines to meet the expected demand. However, it should also be noted that no patients who used HFNC without further planning for MV survived in our study. The interpretation of our results is limited by the small number, but the limited role of HFNC alone may be taken into consideration when resources are extremely overwhelmed. As previous studies from China reported a lower mortality rate of patients treated with HFNC, there exists the possibility that our observation is specific to elderly patients. 14,15 Nosocomial acquisition and the presence of comorbidities were identified as important risk factors for mortality. Our results suggest that outbreaks in hospitals and long-term care facilities would result in grave consequences, which has been observed in the United States. 16 One interesting finding in our study is the lack of association between hypertension and mortality. Previous large-scale epidemiological data from the Chinese Center for Disease Control and Prevention reported that patients with hypertension had a high risk of death, similar to those in patients with chronic lung disease. 6 Other studies also showed that hypertension is associated with mortality or ICU care, 5,15,17 but conflicting reports also exist. 10, 18 In our study, hypertension was not a statistically significant risk factor in elderly patients, of whom about a half had hypertension. Isolated hypertension is generally not regarded as an important prognostic factor in infectious diseases; thus, the possibility of confounding should be examined in future studies.
A severe initial presentation, namely hypotension, tachypnea, hypoxia, or altered mental status, was indeed associated with a poor outcome. Two commonly used prognostication scores (MEWS and NEWS2) also correlated well with mortality. Among laboratory findings, leukocytosis, lymphopenia, and high CRP levels were associated with mortality and the need for MV/HFNC. Such an association has been reported in previous studies on the overall population and in critically ill patients. 10,14,15 Furthermore, we observed a very high degree of association with CRP; its OR for mortality was 25.33, and the OR for MV/HFNC was 25.08. When the cutoff was set at 8.0 mg/dL, elevated CRP had better diagnostic characteristics than those of MEWS and NEWS2. These two scores measure vital signs only, so a high CRP could be a useful addition for initial triage. Neutrophilia, lymphopenia, and elevated lactate dehydrogenase or D-dimer have been associated with severe course and mortality in previous reports, but a strong association of CRP was also reported in one study that specifically examined the risk factors for ARDS and death. 5 However, it is unclear whether this association reflects the degree of cytokine storm that leads to ARDS or the severity of viral infection. 19 Nonetheless, this study suggests that respiratory support might be prepared in advance for patients with high CRP as well as with MEWS ≥ 3 or NEWS ≥ 2.
Our study has several limitations. First, it was a retrospective study with a relatively small number of patients. The possibility of confounding cannot be excluded. Multivariable analysis using logistic regression was attempted, but an adequate model could not be constructed because of the small number and high collinearity between variables. Second, our study subjects consisted of hospitalized patients; thus, those deemed to be sufficiently fit for home isolation were not included. This explains the high mortality and severity observed in our cohort. Therefore, our results are not generalizable to mild cases. Finally, although we limited our study to patients with a follow-up duration of ≥ 14 days, a substantial proportion of patients were still hospitalized at the time of data entry. Although the risk of death was low after 14 days of admission in patients ( Fig. 2A), further follow-up is necessary. Despite these limitations, we believe that our results provide valuable information on the clinical outcomes and resource requirements of care for elderly patients with COVID-19 who have been shown to be the most vulnerable. We thought that waiting for the negative conversion of RT-PCR and subsequent discharge of patients would add little value to our results and delay the delivery of these important data.
In a retrospective study on elderly patients hospitalized with COVID-19, a high need for MV/HFNC and poor outcomes were observed. Patients aged ≥ 80 years had a high risk of requiring MV/HFNC, and mortality among those patients with severe disease was extremely high. A severe initial presentation and laboratory abnormalities were identified as risk factors for mortality and severe hospital course. In addition to high MEWS or NEWS2
|
2020-06-16T20:06:44.970Z
|
2020-06-02T00:00:00.000
|
{
"year": 2020,
"sha1": "2aceed5ef4100bb04d7901a1b2bdc3ac5eb43bdc",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3346/jkms.2020.35.e223",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f66990ad006a3cce472bc8ef5a618704eba7b5af",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
777086
|
pes2o/s2orc
|
v3-fos-license
|
Employ a mobile agent for making a payment
. The mobile agent paradigm offers flexibility and autonomy to e-commerce applications. But it is challenging to employ a mobile agent to make a payment due to the security consideration. In this paper, we propose a new agent-assisted secure payment protocol, which is based on SET payment protocol and aims at enabling the dispatched consumer-agent to autonomously sign contracts and make the payment on behalf of the cardholder after having found the best merchant, without the possibility of disclosing any secret to any participant. This is realized by adopting the Signature-Share scheme, and employing a Trusted Third Party (TTP). In the proposed protocol, the principle that each participant knows what is strictly necessary for his/her role is followed as in SET. In addition, mechanisms have been devised for preventing and detecting double payment, overspending and overpayment attacks. Finally the security properties of the proposed protocol are studied analytically. In comparison with other existing models, the proposed protocol is more efficient and can detect more attacks.
Introduction
Autonomous agents, stationary or mobile, offer new paradigms with autonomy, intelligence and flexibility.Autonomous agent based e-commerce technologies have drawn attentions from both the research community [7,11,15] and applications (e.g., Amazon [1] and eBay [2]).The introduction of autonomous agents acting on behalf of end-consumers could reduce the effort required from users to conduct e-commerce transactions by automating a variety of activities, such as, looking for and filtering out online shops selling the specified products, requesting offers, negotiating with shops and even completing payments [4,7,22].Due to the feature of mobile agent paradigm, an agent can be taken as a special service integrated as part of web services, namely mobile service, that is more suitable for handheld devices (e.g.PDA) with the interface of wireless communication [8,9].In the mobile agent paradigm, the network connection is required only when the agent is dispatched to remote servers and the result collected by the dispatched agent is sent back to the agent server/owner.For handheld devices, they are powered by batteries.Each battery can supply power for a few hours.In addition, the capacity of the CPU and RAM is limited too.Due to these features, it is a good choice to leave the computation task to other servers, during which the network connection is not necessary.The mobile agent paradigm is the right choice to bridge the handheld devices and remote servers.The latter can be the platforms of merchants in an e-commerce environment.
However, with respect to security, the introduction of mobile agents increases the risk as each agent is exposed to the visited servers [6,17,19].Some studies have been done for protecting the offers carried by a roaming buyer agent [5,19].But employing a mobile agent for making payment is always a challenging issue as it is not feasible to ask the agent to carry critical/confidential information (e.g.credit card information) even secret keys when visiting a set of remote hosts as this will expose sensitive data to potentially hostile environments [10].
In applications, most online payment protocols are based on SSL or S-HTTP.But as credit card information is stored on the merchant server, they are not considered secure enough.SET (Secure Electronic Transaction) protocol developed by VISA and MasterCard is regarded as a better protocol [3] aiming at protecting users' credit card information with important properties, such as authentication of the participants, data integrity and confidentiality.In SET, the credit card information is encrypted by the public key of the payment gateway.Therefore it is protected against the merchant and other parties.
In the literature, some protocols have been proposed aiming at employing one mobile agent to fulfill the payment task, such as SET/A [16], SET/A+ [25], LITESET/A+ [14] and LITESET/A++ [20].In the best case, one mobile agent is expected to be employed for searching shops/offers, negotiate with shops and complete the transaction including payment with the best seller with the best offer.
However, as we analyzed in Section 5 in this paper, the above agent-based payment protocols have various problems.For example, all protocols lack the mechanism preventing overspending and over payment problems.Most protocols except LITESET/A++ have no mechanism preventing double payment.
In this paper, we proposed a new protocol based on SET aiming at enabling a mobile agent to automatically and autonomously make final transactions and payment with the "best" merchant with the best offer without interacting with the end-consumer after having performed all kinds of tasks including asking for offers, and negotiating with merchants.This requires the capability of the agent in the protocol to dynamically sign with the "best" merchant, which is not determined in advance.Then the payment instruction is dynamically passed to the payment gateway (P G), which can be determined only after the interaction with the merchant according to the brand of the credit card.Hence encrypting everything in advance is impossible while asking the agent to carry any key for encryption is certainly a risk.The proposed protocol is based on Signcryption algorithm [26], which alleviates the burden for encryption and signature generation consuming less resource and hence is more suitable to mobile agent environments.Meanwhile, by adopting the Signature-Share scheme, the agent can sign contracts and pass the payment instruction to the P G in cooperation with independent TTP without the possibility of disclosing any secret to other participants.In the proposed protocol, a symmetric key is used to encrypt the payment instruction (P I), which is more efficient than the public key scheme used in LITESET/A++.In addition, in the proposed protocol, we have designed mechanisms preventing overspending and over payment problems.
The rest of this paper is organized as follows.Section 2 briefly reviews SET and Signature-Share scheme.The proposed protocol is presented in Section 3 and its features and security properties are analyzed in Section 4. In Section 5, we analyze existing agent-based secure payment protocols and compare them with the proposed protocol.Section 6 finally concludes our work in this paper.
Background
In this section, we will briefly review SET [3] and Signature-Share scheme [14].The notations and symbols used in this paper are listed in Table 1.
SET
The SET protocol [3] is composed of several kinds of transactions, ranging from registration of participants, to purchase request and payment processing.There are different roles in SET.They are cardholder (C), credit card issuer, merchant, acquirer and payment gateway (P G) [3].P G is a device of acquirer where the merchant has an account.As requested by the P G, successful payment should be finally authorized by the card issuer whereafter the issuer will pay on behalf of the cardholder and the money will be deposited to the merchant's account at the acquirer.
SET uses two distinct asymmetric key pairs for each party, one for key-exchange.The corresponding public key y K A is contained in public key certificate C K (A) of participant A. The key pairs (y K A , x K A ) are used for encrypting and decrypting messages.Another key pair is used for the creation and verification of signatures.The signature public key of participant A is included in the signature certificate C S (A). Figure 1 In SET, the key issue is to pass the payment instruction (P I) including card number, cardholder's name and expiry date to the payment gateway (P G) determined according to the brand of the cardholder's credit card that is included in purchase request (in step 1 in Fig. 1).P I is encrypted by a symmetric session key K that is contained in a digital envelope E P G {K, P I} passed to P G via merchant M .Finally the payment can be completed by P G without the possibility of disclosing P I to M .Due to the limited space, readers can refer to [3] for more details.
Signature-Share scheme
The Signature-Share scheme [14] (see Fig. 2) is based on Signcryption public key algorithm (see Appendix 1).In this scheme, the sender A wants to send a message m to recipient B through t sharing parties, say A i (i = 1, . . ., t).The signature key of A is shared by t parties, namely, x A = x A 1 + x A 2 + . . .+ x At .Each party generates the shared signature s i on the hash value r of message m, and all shared signatures are sent to B. With all (r, s i ), B can verify the signature and hence check the data integrity of m.
Details of the Signature-Share scheme are described in Appendix 2.
The proposed protocol
In the proposed protocol, Signature-Share scheme is adopted for passing securely the order information to the merchant.The cardholder's signature secret key is divided into two parts.The first part is kept by the cardholder.The second part is encrypted using the public key of the TTP and will be passed to the TTP for generating shared signatures.The dispatched agent does not carry any shared signature secret key.Instead it only carries one half shared signature signed on the order information (OI) and payment instruction (P I) respectively by the cardholder that should be sent to the merchant M .The other half shared signature is generated with the assistance of the TTP.On obtaining the two shared signatures (i.e.s 1 →M and s 2 →M ), the merchant M can verify the order information (OI) and check the data integrity.Meanwhile the payment instruction (P I) is encrypted by a symmetric session key, It can be passed to P G via the merchant and can be decrypted by P G only.Additionally, in the proposed protocol, mechanisms are also provided for preventing and detecting double payment, overspending and overpayment attacks.
Secret-sharing of cardholder's signature secret key x S C
In the proposed protocol, the cardholder and TTP share the cardholder's signature secret key x S C based on Shamir-threshold scheme [12].
According to the two share schemes presented in Section 2.2, A 1 = C and A 2 = T T P .x S C 1 is kept by C as a secret key always while x S T T P can be carried by the agent after being encrypted using the TTP's public key and will be passed to the TTP for generating the second shared signature that will be passed to M .
Description of the protocol
Step 1: Cardholder C (i.e.C's software) generates a temporary session key K →P G for the payment gateway.
1)
Then C uses K →P G to encrypt the payment instruction (P I): and generate ciphertext where -R is a random number chosen from [1, . . ., q]; -I C is the transaction identifier assigned by cardholder C; -T C is the timestamp at C when to complete the encryption and shared signature generation; -T e (T e > T C ) is the timestamp when the purchase request expires.It is unique to each purchase order.2) Meanwhile, C generates the first half shared signature s 1 →M on the hash value that will be passed to merchant M : where -OI is the description and constraint for the order of Product, namely, 3) Then C dispatches the consumer agent CA encapsulating the following arguments C S (C), The dispatched agent will visit a set of merchants asking offers and negotiating with them.An offer evaluation model and negotiation model can be found in [22].
Step 2: After completing the negotiation with some merchants, the agent chooses the best one -M -with the best offer to make the deal and send M the purchase request (see Fig. 3).The request contains the brand of the credit card that will be used for payment.
CA → M : C S (C), purchase request, T e Step 3: After receiving the request, M verifies C S (C) and reply CA.
where -I M is a unique transaction number issued by M and is the signature generated by M ; -P G is the payment gateway that is determined according to the brand of C's credit card, which is contained in the purchase request.
Step 4: From M 's reply, CA obtains the public key certificate of the payment gateway.Then CA sends TTP a message so that s 2 →M can be generated by TTP.
where -Amount = Price.Price is the price of the Product, which is determined by CA and M after the negotiation between them.Amount is the variable sent by CA.Price is the variable sent by M in Step 7.Here we distinguish Amount and Price as both of them will be passed to the P G where a consistency verification will be performed (in Step 8).
Step 5: On receiving the message, TTP verifies the validation of C S (C), C S (M ), C K (M ) and C K (P G), checks whether the current time T < T e and Amount PriceLimit.If all are correct, TTP decrypts the ciphertext from CA obtaining z and x S T T P , generates the 2nd half shared signature on hash value r →M , and generates Hereafter TTP keeps as a transaction record and sends a message to CA.
T T T P , SIG T T P
where -T T T P is the timestamp at TTP when to generate the shared signature (i.e.s 2 →M ); is the signature generated by TTP that can be kept by the cardholder as a non-repudiation receipt.
Step 6: Once receiving the message from TTP, CA sends a message to the merchant.
Step 7: After having received the message, M computes v by applying the Signature-Share scheme Then M sends a message to P G.
Step 8: From the message, P G obtains After decryption, it obtains K →P G and Amount.Hereafter P G can decrypt c →P G and thus obtain P I: If current time T < T e and Amount = Price, P G will send M an authorization response.
Step 9: After processing the order, the merchant generates and signs a purchase response, and sends it to the agent.
is the signature generated by M at time T 2 M .It will be finally passed to the cardholder as a non-repudiation receipt by the agent.
If the payment is authorized, the merchant will fulfill the order by delivering the product bought by the cardholder.
Step 10: The agent verifies the merchant signature certificate, checks the digital signature of the response, and then returns back to its owner carrying The owner takes appropriate actions based on the obtained contents.
Security analysis
From the above description, we could observe that in cooperation with TTP, the agent can sign contracts with the dynamically chosen merchant M and make payment.In this section, we will analyze the security properties of the proposed protocol focusing on the following possible issues.
-whether it is possible for any participant to re-generate the secret signature key of the cardholder (ATK1); -whether it is possible for any participant except P G to obtain the payment instruction (ATK2); -whether it is possible for any participant to re-perform the payment (double payment, ATK3); -whether it is possible for the agent to pay more than required by the cardholder (overspend, ATK4), and
CA TTP M
Step 4: E y K TTP {x S TTP //z//(K x PG +R+I C +T C +T e )}, r ®M Step 5: E y K M {s 2 ®M } Step 6: r ®M , s 1 ®M , E y K M {s 2 ®M }, OI, H(PI) Step -whether it is possible for the merchant to pass a wrong price to the P G (overpayment, ATK5).
1.In this protocol, the dispatched agent CA does not have any task for encryption, decryption or signing.So it is not necessary for it to carry any key.Therefore, the agent in the transaction is more of a messenger.Most of the encryption and signing works are done by TTP.What the agent should do is to communicate with different participants sending relevant messages to them.2. CA carries one shared half signature -s 1 →M .But it is generated by cardholder C and the shared secret key x S C 1 is kept by C. No irrelevant party could obtain both of the two shared signatures (i.e.s 1 →M and s 2 →M ) together with some arguments (i.e.r and z); so it is not possible for any party to obtain two shared secret keys so as to generate the secret signature key of the cardholder (i.e.x S C ).For instance, for the merchant, it can obtain the r →M , s 1 →M , s 2 →M , c →P G and H(P I), but cannot obtain P I. Argument z is also protected against the merchant.So it is not possible for M to obtain x S C (ATK1) (see Fig. 4).Likewise, TTP knows is not passed to TTP.As s 1 →M is not passed to TTP, TTP cannot generate x S C 1 so as to re-generate x S C .In the proposed protocol, the cardholder's secret signature key can be re-generated only if M and TTP collude.But it is impossible regarding the nature of TTP.
||Amount}, P G can decrypt it and obtain
K →P G .Thus P G can decrypt E →P G (P I) and obtain the payment instruction P I. M knows The property of non-repudiation has been improved.In terms of non-repudiation, timestamps are important in many electronic transactions indicating the time when a particular event or action has taken place [27].T e makes each purchase request from C unique preventing the re-payment attack.In addition, more timestamps are added in different stages, such as T C , T T T P , T 1 M and T 2 M .In the message from TTP to CA (in Step 5) and the message from M to CA (in Step 9), signatures are added including timestamps.These signatures adopt nested structure that can show message exchange processes among CA, TTP and M .These signatures prevent the replay attack (double payment) (ATK3).Meanwhile the generation of signatures will not significantly increase the burden of the agent to migrate back to the cardholder since the signatures are generated on the hash value, which has a fixed length. 5.In the proposed protocol, Amount, the amount of the transaction that will be charged to the cardholder's account is first passed to the TTP by CA, which checks it with the limit of current transaction (i.e.PriceLimit) (ATK4).Moreover, the amount is included in the ciphertext by the TTP that will be passed to the P G where the comparison will be conducted with the price (i.e. Price) from M .This can prevent the overspending and overpayment attacks (ATK5).
SET/A
SET/A protocol [16] is proposed to make SET adaptable to the mobile computing environments.Based on the principles used in purchase phase of SET, SET/A improves its performance only by adding a mobile agent for the cardholder to fulfill payment transaction since the cardholder need not frequently connect to Internet during the whole transaction phase.SET/A performs the same function of transaction as that in SET except that the mobile agent of SET/A replaces the cardholder of SET in the purchase phase.
For the protocol to be secure, aspects, such as, where the agent should execute securely, how to decrypt the encrypted information on OI and P I (i.e.E K ?{OI P I data} in Step 1) and where to generate the symmetric key to encrypt the P I (Step 4), are critical to a secure protocol.SET/A suggests running the agent in a tamper-proof environment [23] or a secure coprocessor [24] to protect the agent against malicious merchants.However, from the point of view the cardholder, it is insecure to expose some confidential information, such as the credit card information in the P I, to any merchant environment.
An alternative approach based on software using hidden computations [18] is also suggested in [16] without the cost of additional investment in hardware from each merchant.Another solution is given in [13], where an encrypted signature function is used so that a signature can be generated by the agent without the risk carrying and disclosing the secret signature key.But in this scheme, the function cannot be prevented from being abused and hence it is not secure and the non-repudiation property can hardly be ensured.If the cardholder gets P G's public key by sending a request to the merchant, the protocol is essentially the same as SET losing the autonomy and flexibility of mobile agents, which are the motivations of SET/A.
SET/A+
SET/A+ [25] is a more complex protocol, which adds a Trust Verification Center (T V C) in the payment system.The T V C keeps the sensitive information and charges cardholders or merchants by providing verification service.
The focus of SET/A+ is on how to pass securely the symmetric key K generated by the cardholder to P G so that P G can obtain P I that is encrypted by K (i.e.E K {P I}).
In SET/A+, T V C is not only used for verification but also used for encrypting information.However, the agents are limited in their functionalities.For example, an agent cannot sign dynamically and perform encryption for the owner during trading (since it requires the secret key of the owner).The agent carries the cardholder's signature -x S C (H(H(OI)||H(P I)))-generated in advance and passes it to the merchant to make a final deal.This is not flexible.In a malicious merchant environment, the signature can be abused easily and no sufficient non-repudiation mechanism is provided preventing the replay attack.This may cause the disputes between participants and result in the loss of the cardholder.
LITESET/A+
LITESET/A+ is based on Signcryption public-key algorithm [26] and Signature-Share scheme (called Signature-Threshold scheme in [14]).It employs a mobile agent and a Trusted Third Party (TTP).The role of TTP is the same as the T V C in SET/A+.The TTP can do both verification and encryption when necessary.
In this scheme, the most significant difference with SET/A+ is that it uses a Signature-Share scheme based on Shamir-threshold scheme [12,14].The signature secret key of cardholder-x S C -is divided into two shared parts, say x S A for agent and x S T T P for TTP.
x S A = x , x S T T P = x S C − x where x is a random number chosen from [1, . . ., q].By carrying x S A , the agent A can sign over the order information OI and the hash value of payment instruction P I, yielding x S A (H(H(OI)||H(P I))) Another shared signature x S T T P (H(H(OI)||H(P I))) can be generated by TTP after it obtains its shared signature secret key x S T T P .Once obtaining the two shared signatures, by applying Signature-Share scheme, the merchant can verify the dual hash value H(H(OI)||H(P I)) and hence check the validity of the order and the payment data.It is equivalent to obtaining the cardholder's signature x S C (H(H(OI)||H(P I))) as in SET/A+.But without the involvement of TTP, the merchant cannot obtain both shared signatures.
The aim of LITESET/A+ is to make the agent sign flexibly on behalf of the cardholder in cooperation with the TTP.This can be done after the process of negotiation with a set of merchants.So one consumer agent works well instead of dispatching a payment agent after the negotiation agent completes its tasks.By using Signature-Share scheme, the dispatched agent does not need to carry the cardholder's secret signature key.Instead, it carries its shared signature key x S A .
Thereafter the problem comes as the agent carries the shared secret key x S A and executes on a server provided by the chosen merchant where to make the final deal.For generating a shared signature s A on a hash value r, the agent should also carry r and argument z (q is public), so as to compute This means x S A , r and z will be exposed to the merchant.Once the merchant obtains the shared signatures s T T P from TTP (Step 8 in LITESET/A+), it can easily compute x S T T P because s T T P = z/(r + x S T T P ) mod q Hence the merchant can obtain the cardholder's signature secret key Moreover, the non-repudiation mechanism is weak in LITESET/A+.In addition to the obtained messages, a participant keeps I C or I M only as non-repudiation receipts.Though these identifiers are unique, no receipt can show when a transaction is completed.So I C and I M are not strictly non-repudiation receipts.This problem exists in SET/A and SET/A+ too.
LITESET/A++
LITESET/A++ [20] is based on Signcryption public-key algorithm [26], Signature-Share scheme and Signcryption-Share scheme (called Signature-Threshold scheme and Signcryption-Share scheme proposed in [14]).It solves the problem in LITESET/A+ that a shared secret key is carried by the dispatched agent.Instead, it is kept by the cardholder.LITESET/A++ keeps using Signature-Share scheme so that the dispatched agent has the capability to sign with the merchant chosen after dispatch in corporation with TTP.In addition, LITESET/A++ uses Signcryption-Share scheme to encrypt the payment instruction P I.In Signcryption-Share scheme, the secret signature key of sender A is shared by t parties.Each party generates the shared signature s i on hash value r obtained from A, and all the shared signatures are sent to recipient B with the ciphertext c.With c and all (r, s i ), B can decrypt c, obtain the plaintext m and verify the signature.
In LITESET/A++, P I is encrypted by a session public key.The session secret key is encrypted by the cardholder.In corporation with TTP, P G can obtain the two shared signatures, hash value and ciphertext and thus decrypt the ciphertext and obtain the session secret key, with which P G can obtain the P I.
In terms of security, LITESET/A++ is a good protocol.But it has a few problems.
1.As Signcryption-Share scheme is a public key algorithm, it is less efficient than a symmetric key algorithm; 2. No mechanism is devised for preventing double payment, overspending attack and over-payment attack.This problem exists in all the above protocols.
Comparison of protocols
In some sense, the agent in SET/A+ has the same flexibility since the agent carries the cardholder's signature x S C (H(H(OI)||H(P I))) signed on the order information and payment instruction.The signature can be passed to the merchant where the agent wants to make the deal.But the signature may be reused by any malicious merchant where a valid deal has been made.The merchant can mount a replay attack successfully by transferring the copy of a used digital envelope to the payment gateway causing the loss of the cardholder.This is because that its non-repudiation mechanism is weak.In addition, a visited merchant may abuse the pre-generated signature.
In the proposed protocol, the Signature-Share scheme avoids using a pre-generated signature.Meanwhile timestamps from different participants appear in the signatures of message senders.Also a unique expiry timestamp T e appears in the signature from the cardholder and it can be sent to each participant.Hence a replay attack can be detected easily.
As we analyzed in Section 4 and Section 5, both LITESET/A++ and the proposed protocol correct the security flaw in LITESET/A+ so that it is not possible for the merchant to re-generate the signature secret key of the cardholder.Meanwhile the flexibility for the agent to "sign" on behalf of the cardholder and make a deal with the merchant remains unchanged.Moreover, with the involvement of TTP, the agent in the proposed protocol need not to do any encryption and decryption.In contrast, in SET/A+ and LITESET/A+, the agent executes at the merchant's server and completes the encryption operations.
In the proposed protocol, similar to LITESET/A+, Signature-Share scheme is adopted to give the capability of the dispatched agent to sign contract dynamically with M in corporation with TTP.
The proposed protocol inherits good features of LITESET/A++ in terms of the properties listed in Tables 2 and 3 while the focus of LITESET/A++ is how to pass P I securely to P G which is determined afterwards by M according to the brand of the credit card.That means C doesn't know P G beforehand so as to encrypt P I using P G's public key.Regarding the protocol overhead, as analyzed in Section 5.1.4,based on Signcryption -a public key algorithm, the Signcryption-Share scheme is not as efficient as a symmetric key algorithm.In contrast, the proposed protocol adopts a symmetric key algorithm to encrypt PI.This inherits the efficiency property of SET and reduces the overhead at the side of the cardholder.In addition, the Signature-Share scheme is as efficient as a process of signature verification, which is carried out by the PG, not the agent.The proposed protocol needs to employ a TTP and thus leads to some communication with the TTP.But this is essential to an agent based payment protocol and it is the same as SET/A+, LITESET/A+ and LITESET/A++.The proposed protocol does not occur extra overhead than other protocols.
Furthermore LITESET/A++ has the drawback without any mechanism for preventing overspending and overpayment attacks.The problem also exists in other protocols.
In the proposed protocol, Amount, which is the transaction amount that will be charged to the cardholder's account, is first passed to TTP, who checks it with PriceLimit preventing overspending.Meanwhile, Amount is included in the ciphertext generated by the TTP, which is passed to the P G where Amount and Price from M are compared.This prevents the overpayment attack.
Security properties of different protocols are compared in Table 4.
Conclusions
In a mobile computing environment the cardholder's role can be played by an agent, which is dispatched to the merchant's server with the relevant information to perform the necessary operations.Since the cardholder does not need to keep the network connection while the transaction is being completed, this solution contributes to lower cost, higher robustness and autonomy while the requirements for security become more critical.In this paper, we proposed an agent-assisted secure payment protocol adopting Signature-Share scheme and employing a Trusted Third Party (TTP).The dispatched agent can dynamically choose the merchant and sign on behalf of the cardholder in cooperation with the TTP without the possibility of disclosing any secret to the merchant and TTP.In the proposed protocol, the principle that each participant knows what is strictly necessary for his/her role is followed as in SET while the efficiency is improved.In addition, mechanisms have been devised for preventing and detecting double payment, overspending and overpayment attacks.
The proposed protocol can be applied to e-commerce environments, where an agent is employed to collect offers, negotiate with merchants, place an order, and make the payment.For instance, the cardholder/customer using a laptop or PDA with wireless network interface and Internet access can apply this protocol for transactions.This can help reduce the complexity of operations at the side of the cardholder.In addition, the long-term network connection constraint for interactions will not apply.With the development of wireless communication and agent technology, we envisage that the above scenario is becoming applicable in practice.Thus, we expect the proposed protocol can be integrated with an agent-based e-commerce system [21] to enhance the autonomy of transactions and bring more convenience to customers.
and verify signature H(v, H(P I)||H(OI)||H(C S (C)||I C ||T C ||T e )) ?=r →M If it holds and the current time T < T e , M keeps T R M = {C S (C), r →M , s 1 →M , s 2 →M , OI, H(P I), I C , T C , T e } as a transaction record.
Table 1 Notions
K→PGthe session key used for encrypting P I that should be passed to P G.
depicts the purchase request phase of SET.
Table 2
Comparison of agent-based secure payment protocols (part I) The cardholder's secret signature key can be re-generated. 2No authorized party can obtain and re-generate the cardholder's secret signature key.
Table 3
Comparison of agent-based secure payment protocols (part II)
Table 4
Security properties of agent-based secure payment protocols
|
2018-04-03T03:10:12.108Z
|
2008-01-01T00:00:00.000
|
{
"year": 2008,
"sha1": "4be7cf2d437d0f69adf473d63272dcaa3ccaaa2b",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/misy/2008/402519.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4be7cf2d437d0f69adf473d63272dcaa3ccaaa2b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
210946730
|
pes2o/s2orc
|
v3-fos-license
|
Fever in a traveler returning from Ethiopia
His symptoms began about 10 days after returning and had been going on for 11 days. What was the cause?
doi: 10.3949/ccjm.87a.19017 A 44-year-old man presented to an outpatient clinic after 11 days of fever, chills, headache, and nausea. He was a coffee roaster by trade, and his symptoms had started about 10 days after returning from a 3-week trip to buy coffee in Ethiopia. He said his fever would come and go, and the last episode was 2 days earlier. He denied any diarrhea, constipation, rash, or lymphadenopathy.
The patient appeared lethargic. Examination of his heart, lungs, and abdomen was unremarkable. His vital signs were: • Temperature 38.9°C (102.0°F) • Heart rate 80 beats per minute • Respiratory rate 14 breaths per minute • Blood pressure 142/80 mm Hg • Oxygen saturation 97% on room air.
He had been treated for malaria in Tanzania when he fell sick there a few years earlier. He said he took chloroquine to prevent malaria every time he went abroad, as directed for his earlier trips. He had received the yellow fever virus vaccine because of his frequent travel to the tropics and was up-to-date on his routine childhood and pretravel immunizations. On his last trip, he had not been exposed to local domestic or wild animals, had not had any sexual encounters, had not drunk any unclean water, and had not eaten any raw or improperly cooked food.
■ DIFFERENTIAL DIAGNOSIS OF FEVER IN A RETURNING TRAVELER
1 What is the most likely cause of this patient's fever?
□ Malaria □ Typhoid fever □ Infl uenza □ Yellow fever □ Meningococcemia □ Measles The differential diagnosis for fever with a medium to long incubation period in a returning traveler is broad. Providers should consider the infections endemic to the region where the patient traveled (wwwnc.cdc.gov/travel). Thwaites and Day 1 proposed a risk-based approach using the Quick Sepsis-Related Organ Failure Assessment (qSOFA) score, signs of severe disease (cyanosis, meningism, peritonism, digital gangrene), and possibility of a highly transmissible infection (eg, Middle East respiratory syndrome-coronavirus [MERS-CoV], Ebola) as an initial assessment to identify and treat lifethreatening causes of fever. A detailed history of exposure to unclean water, animals, insects, bites, or raw or improperly cooked food is crucial in building a robust differential diagnosis. 2
Malaria
Fever in a traveler returning from an area where malaria is endemic (see www.cdc.gov/ malaria/travelers/country_table/) is an emergency. Major clinical features of malaria are fever (present in 92% of cases in 1 study), chills (78%), headache (64%), and nausea and vomiting (35%)-and our patient had all of these. Other possible symptoms such as myalgia (53%) and diarrhea (26%) are sometimes mistaken for symptoms of infl uenza or infectious gastroenteritis. 3 In another study, 4 Plasmodium falciparum malaria was the most common cause of fever in US residents returning from sub-Saharan Africa (accounting for 12.78% of cases), followed by acute unspecifi ed diarrhea (9%), acute bacterial diarrhea (5.59%), and giardiasis (4.23%).
Malaria is transmitted by the bite of a female Anopheles mosquito. 5 Most Anopheles mosquitoes are not exclusively anthropophilic (preferring to feed on humans). However, the primary malaria vectors, A gambiae and A funestus, are strongly anthropophilic and are the two most effi cient malaria vectors worldwide.
Our patient's symptoms were consistent with malaria. Moreover, although he was taking malaria chemoprophylaxis, he was not taking the right one, as there is a high incidence of chloroquine-resistant P falciparum malaria in Africa. The prolonged incubation period also points to malaria (Table 1).
Finally, although our patient's pulse rate of 80 beats per minute seems normal, it is actually lower than expected, given his fever. Assessing vital signs for relative bradycardia is a great tool to discern several medical conditions, and malaria is one of the causes ( Table 2). However, the most common cause of relative bradycardia is the use of beta-blockers. 6,7 Typhoid fever Typhoid fever, caused by Salmonella typhi, is a common cause of travel-related fever. In 2002, an estimated 408,837 cases of typhoid fever occurred in Africa. 8 However, precise numbers are not available, since many hospitals in Africa do not have laboratories capable of performing the blood cultures essential for the di-Fever in a traveler returning from a malariaendemic area is an emergency agnosis of typhoid fever. In addition, typhoid fever is often mistaken for malaria. Typhoid fever has an incubation period of about 1 week, which makes it less likely to be the cause of this patient's illness. However, in rare cases, the incubation period can be as long as 3 weeks. 9 The patient said he had no diarrhea or constipation, which also makes typhoid fever less likely. Moreover, typhoid fever is more commonly associated with high unremitting fever, which is inconsistent with the patient's fever pattern.
Infl uenza
Infl uenza is uncommon in warm-weather months; however, the seasons are reversed in the Southern and Northern hemispheres.
Also, physicians should suspect infl uenza at any time of year in travelers returning from the tropics, where infl uenza can occur yearround. 10 However, the incubation period of infl uenza is typically 1 to 4 days, which was inconsistent with our patient's history.
Yellow fever
Yellow fever should be suspected if an unvaccinated traveler returns from sub-Saharan Africa or forested areas of Amazonia with fever, jaundice, hemorrhage, and renal failure.
The mosquito vectors of yellow fever are Aedes species in Africa and Haemogogus species in South America. Aedes mosquitoes are also vectors for dengue virus (symptoms: high fever, sudden-onset skin rash, myalgia, headache, and mild hemorrhagic manifestations), West Nile virus, Chikungunya (symptoms: high fever, headache, myalgia, and moderate to severe arthralgia), eastern equine encephalitis virus, and Zika virus (symptoms: lowgrade fever, descending rash, myalgia, conjunctivitis, headache, edema, and vomiting) ( Table 3). 11 Our patient had relative bradycardia, which can be seen in yellow fever. However, the incubation period for yellow fever is short, 3 to 6 days (median 4.3 days) after the bite of an infected mosquito. 12 Moreover, he had been vaccinated against yellow fever. manifestations include sudden onset of headache, fever, neck stiffness, and petechial or purpuric rash, which did not fi t our patient's presentation.
Measles
Measles is considered the most contagious viral disease known, and its incidence in Ethiopia is high, with 49 cases per million population in 2016. 13 The incubation period ranges from 7 to 21 days from exposure to onset of fever. A clinical diagnosis of measles can be made from the clinical features of generalized maculopapular rash lasting for 3 or more days, temperature of 38.3°C (100.9°F) or higher, and cough, coryza, and conjunctivitis. These clinical features did not fi t our patient's presentation; moreover, he had been vaccinated against measles.
All of the infections discussed above can be prevented with appropriate pretravel vaccinations and chemoprophylaxis.
■ DIAGNOSTIC TESTING FOR MALARIA 2 If a pathologist or microbiologist is not available on call, how is the diagnosis of malaria made?
□ Blood culture □ Plasmodium species polymerase chain reaction (PCR) □ Plasmodium species rapid diagnostic test, then thick and thin blood fi lms when an expert is available to look at them □ Plasmodium serologic study The best choice in this situation is Plasmodium species rapid diagnostic test, followed by thick and thin blood fi lms.
Light microscopy is the gold standard
Light microscopy of blood smears with Giemsa staining (to give parasites a distinctive appearance) remains the gold standard for malaria diagnosis if qualifi ed staff are available to do it immediately (Figure 1). The thick fi lm is used to screen for parasites using hypotonic Measles is the most contagious viral disease known saline to lyse red blood cells. The thin fi lm is then used to identify the species of Plasmodium. Blood fi lms should be prepared and read immediately by experienced personnel.
Rapid diagnostic tests
If expert personnel are not readily available to examine a blood smear, a rapid diagnostic test should be performed immediately (Figure 2). 14 There are two types of rapid diagnostic tests for malaria. The fi rst is based on detection of Plasmodium histidine-rich protein-2 (HRP-2), which is closely associated with the development and proliferation of the parasite. The only test of this type approved and available in the United States is BinaxNOW Malaria (www.alere.com/en/home/productdetails/binaxnow-malaria.html), which has a reported sensitivity of 96% and specifi city of 99% for Plasmodium infection compared with microscopy. 15 This test is approved for use by hospital and commercial laboratories, not by individual clinicians or by patients themselves.
The second type of rapid diagnostic test, which is not available in the United States, is based on detection of P falciparum-specifi c lactate dehydrogenase and pan-Plasmodium lactate dehydrogenase. It has a sensitivity of 80% and a specifi city of 98% for Plasmodium infection compared with microscopy. 15 Rapid diagnostic tests take only 2 to 15 minutes and are highly specifi c; hence, a positive result should prompt immediate treatment. However, a negative result still requires a blood smear to detect low-level parasitemia or nonfalciparum species. Therefore, regardless of the rapid diagnostic test result, microscopy must always be performed afterward (Figure 2). 14
Polymerase chain reaction
Although PCR testing for Plasmodium is available in commercial laboratories, the turn-around time may be unfavorable when an immediate medical decision is needed. It can, however, be benefi cial in identifying the Plasmodium species (eg, P vivax and P ovale), which may further guide the need for presumptive antirelapse therapy (previously known as terminal prophylaxis).
Serologic testing
Serologic Plasmodium testing only assesses past exposure and has no utility in the acute setting.
Blood culture
Malaria diagnosis cannot be established through blood culture. Hence, that is not the correct answer to the question. However, if a provider suspects a bacterial coinfection with bacteremia (eg, Salmonella species or Escherichia coli), obtaining blood culture should be considered. In a small study of 67 adults hospitalized for P falciparum, 13% (95% CI 5.3%-21.6%) were bacteremic on admission. 16
■ CASE CONTINUED: LABORATORY RESULTS
A rapid diagnostic test was ordered for our patient and was positive for P falciparum. On-call expert personnel were available to read the blood fi lm. The level of parasitemia was 4% of red blood cells infected. Results of other blood tests were as follows: The patient was immediately transferred to the emergency department to be treated and monitored. □ Chloroquine phosphate □ Hydroxychloroquine □ Primaquine □ Atovaquone-proguanil
WONG
Our patient appeared to have uncomplicated P falciparum infection from a chloroquineresistant region. A patient who presents with symptoms of malaria and a positive malaria test without features of severe malaria is considered to have uncomplicated malaria ( Table 4). Given this information, he should receive atovaquone-proguanil ( Table 5).
Most severe malaria cases are caused by P falciparum. Fortunately, our patient appeared to have uncomplicated P falciparum malaria. This could be thanks to acquired immunity from earlier infection, which does not provide sterilizing immunity against parasitemia but may inhibit the development of symptomatic and severe disease. This immunity increases with age, cumulative number of malarial infections, and time spent living in a malariaendemic area. 17 Nevertheless, acquired immunity is usually short-lived without continuous exposure. It is a misconception that prior infection causes lifelong immunity against malaria; in fact, immigrants visiting friends and relatives constitute the most signifi cant group for malaria importation in developed countries. 18 Table 6 lists other risk factors for malarial acquisition.
If chloroquine phosphate, hydroxychloroquine, quinine, atovaquone-proguanil, or mefl oquine is used to treat P vivax or P ovale infection, either primaquine or tafenoquine must be given as presumptive antirelapse therapy (also known as terminal prophylaxis) to prevent late-onset or relapsing disease due to hypnozoites (the liver stage of the parasite) of P vivax or P ovale, which can occur 17 to 255 days after the initial infection. 19 The patient was treated with atovaquoneproguanil and recovered.
Defi nition Treatment
Positive blood smear and at least one of the following criteria:
Malaria prevention
It is essential to give appropriate chemoprophylaxis, taking into account the regions where malarial organisms are resistant to chloroquine, and to instruct patients to take measures to avoid being bitten by mosquitoes. Risk assessment of travelers to malariaendemic areas is important (Table 6). 20,21 Education of travelers and physicians about chloroquine-resistant areas is essential. Failure to take appropriate precautions may result in death due to severe malaria. 22 The US Centers for Disease Control and Prevention (CDC) website provides information on areas with malaria, estimated relative risk of malaria for US travelers, drug resistance, malaria species, and recommended chemoprophylaxis ( Table 7). Some chemoprophylaxis regimens need to be started 1 to 2 weeks before travel to malaria-endemic areas.
Other measures to prevent malaria infection are use of mosquito repellent containing 20% to 35% N,N-diethyl-meta-toluamide (DEET), wearing permethrin-treated clothes, sleeping under insecticide-treated bed nets, and staying in air-conditioned buildings.
Vaccinations
The CDC provides information about vaccinations according to the destination country at wwwnc.cdc.gov/travel. For example, for a traveler going to Ethiopia, vaccinations against cholera, hepatitis A, hepatitis B, meningococcal disease, polio, rabies, typhoid, and yellow fever are recommended. Certain countries require proof of vaccination against yellow fever to enter, especially if traveling from a country where yellow fever is endemic. Due to limited availability of yellow fever vaccine in the United States, travelers may need to schedule appointments well in advance and visit a nonlocal travel clinic.
Saudi Arabia requires visitors and Hajj and Umrah pilgrims to be vaccinated against meningococcal disease.
Obtaining care abroad
Medical evacuation insurance can be helpful when traveling to a remote destination or to a place where medical care is not up to US standards. Supplemental travel health insurance is recommended as well if the current travel and medical insurance has inadequate coverage.
The US embassy in the destination country (www.usembassy.gov/) can assist in locating medical services and notifying friends and family in the event of an emergency. Other sources such as the International Association for Medical Assistance to Travelers (www.iamat.org/medicaldirectory; requires free membership login) or International Society of Travel Medicine (www. istm.org/AF_CstmClinicDirectory.asp) can also help you fi nd travel clinics around the globe.
Artesunate
Artesunate, the fi rst-line treatment for severe malaria recommended by the World Health Organization, is now the fi rst-line treatment for severe malaria in the United States. However, US clinicians must call the CDC malaria hotline (770-488-7788) to obtain intravenous artesunate.
Malaria vaccine
In 2019, public health programs in Ghana, Kenya, and Malawi began vaccinating young children against P falciparum malaria using the RTS,S/AS01 (RTS,S) vaccine, the fi rst malaria vaccine provided to young children through routine immunization. In an intention-to-treat analysis of a controlled clinical trial, children 6 weeks to 17 months old who received this vaccine had an infection rate of 1.9% compared with 2.8% in a control group that received a nonmalaria comparator vaccine (P < .001), with a number needed to treat of 111 to prevent 1 case of severe malaria. 23 In 2019, public health programs in Ghana, Kenya, and Malawi began vaccinating young children against P falciparum malaria
WONG
getmalaria.org/), is designed to spread mutations through the wild population that knock out key fertility genes or reduce the proportion of female insects that transmit the disease. Researchers released about 10,000 genetically sterilized males to observe their survivability and dispersion in the wild and to introduce the concept of genetically modifi ed mosquitoes to regulators and community members.
Tafenoquine
Tafenoquine was recently approved for treating malaria of all species. It can be used for chemoprophylaxis against all Plasmodium species and, as a single dose, for presumptive antirelapse therapy. 25,26 Patients must be tested for glucose-6-phosphate dehydrogenase deficiency before receiving tafenoquine.
■ CASE CONCLUDED
Our patient recovered from his illness and received education about the importance of malaria chemoprophylaxis when he travels to malaria-endemic areas in the future. The most recent event did not deter him from further travel to buy coffee in South America or Africa; however, he is now an advocate for malaria prevention.
■ TAKE-HOME POINTS • Fever in a traveler returning from a malaria-endemic area is an emergency. • Clinical features of malaria are nonspecific and include fever, headache, weakness, and profuse night sweats.
• P falciparum is chloroquine-sensitive in some areas of Central America and the Caribbean and resistant in all other areas. • A blood smear is the gold standard for diagnosing malaria. However, a rapid diagnostic test can be used if a microbiologist or pathologist is not readily available. • Treatment of malaria depends on the severity and the sensitivity or resistance of the organism in the malaria-endemic area. ■
Malaria resources
Treatment should be in collaboration with an infectious disease physician and an infectious disease pharmacist
US Centers for Disease Control and Prevention (CDC)
Vaccines
|
2020-01-30T09:03:23.687Z
|
2020-01-01T00:00:00.000
|
{
"year": 2020,
"sha1": "c4de85680ffefee5d7b02808cf9d252516053339",
"oa_license": null,
"oa_url": "https://doi.org/10.3949/ccjm.87a.19017",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "fb27e27c3b90763186098734d10b25b8b9df2752",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
261545781
|
pes2o/s2orc
|
v3-fos-license
|
Changes in Flavor and Volatile Composition of Meat and Meat Products Observed after Exposure to Atmospheric Pressure Cold Plasma (ACP)
Studies on the atmospheric pressure cold plasma (ACP) exposure of meat and meat products mainly determine microbial inactivation, lipid oxidation, and meat color. Some studies include sensory evaluation, but only a few determine the changes in volatile composition due to ACP treatment. The results of sensory evaluation are inconclusive and range from “improvement” to “off-odor”. This could be due to differences in the food matrix, especially in processed foods, or different experimental settings, including inadvertent effects such as sample heating. The few studies analyzing volatile composition report changes in alcohols, esters, aldehydes, and other compounds, but not necessarily changes that are novel for meat and meat products. Most studies do not actually measure the formation of reactive species, although this is needed to determine the exact reactions taking place in the meat during ACP treatment. This is a prerequisite for an adjustment of the plasma conditions to achieve antimicrobial effects without compromising sensory quality. Likewise, such knowledge is necessary to clarify if ACP-exposed meat and products thereof require regulatory approval.
Introduction
In 1928, Langmuir coined the term plasma for the region of an ionized gas near the electrodes, which contains roughly equal numbers of ions and electrons [1].Plasma is also termed the fourth state of matter, and is, in other words, a conducting gas [2].Specifically, atmospheric pressure cold plasma (ACP) is plasma generated at atmospheric or reduced pressures, a technology that requires less power input than thermal plasma [3].ACP is produced by exposing a non-toxic gas to an electric field or to electromagnetic waves [4].ACP is promising in terms of sustainable food processing [3][4][5], where it has potential as a sanitation technology for food surfaces and food contact materials [6].Briefly, ACP effectively inactivates microorganisms at low temperatures [3] via reactive species damaging cell membranes, DNA, lipids, and proteins [7][8][9].Despite the effectiveness of ACP, the technique is not yet readily available for commercial use.As previously discussed by Csadek et al. [7], it is possible that ACP-treated foods may be classified as novel foods according to EU Regulation 2015/2283 [7,10].The definition of novel foods according to this regulation includes, inter alia, "food with a new or intentionally modified molecular structure, where that structure was not used as, or in, a food within the Union before 15 May 1997" (Article 3, paragraph 2, (i)) of EU Regulation 2015/2283 [10]) or "food resulting from a production process not used for food production within the Union before 15 May 1997, which gives rise to significant changes in the composition or structure of a food, affecting its nutritional value, metabolism or level of undesirable substances" (Article 3, paragraph 2, (i)) [10].For the application of ACP generated from air, oxidative changes, and reaction of nitrites from the plasma with meat myoglobin have been reported [6], but there is no evidence for the formation of novel structures or compounds or significant quality losses.Food business operators intending to place a particular food on the market in the EU must verify if this is a "novel food".To this end, the food business operators will have to consult the member state where they first intend to place this food.The member state may contact other member states and the Commission to reach a decision (Article 4 of EU Regulation 2015/2283 [10]).Ultimately, the Commission will decide on the classification of the particular food and its authorization as a novel food.The procedural requirements are laid down in Chapter III of EU Regulation 2015/2283 [10], and will involve an assessment by the European Food Safety Authority (EFSA).In essence, such novel foods must not pose a health risk and must not mislead the consumer.Currently, no ACP-treated foods are included in the EU list of authorized novel foods, Commission Implementing Regulation (EU) 2017/2470 [11], which is periodically updated.The regulatory approval routes in the United States have been reviewed and discussed by Keener and Misra [5] and by Yepez et al. [4].In summary, due to the complex chemical reactions involved, extensive research must be performed prior to the regulatory approval of ACP as a food technology [5].
Microbial inactivation is the main purpose of ACP application to food with the technology being efficient against bacteria, spores, fungi, and viruses in addition to pesticides and mycotoxins [12].Hence, the effect of ACP on meat and meat products has mainly been studied in terms of microbial inactivation [7], but it is evident that the formation of reactive oxygen species (ROS) and reactive nitrogen species (RNS) may lead to, e.g., lipid oxidation and color changes [6].ACP-induced lipid oxidation in meat has been reviewed extensively by, for example, Gavahian et al. [13].The degree of ACP-induced lipid oxidation depends on numerous factors such as the type of plasma used, gas composition, humidity, and settings such as input power and duration of the ACP treatment.Additional factors include the lipid composition of the food, moisture, storage time post-ACP treatment, and use of antioxidants [13].It is reasonable to assume that measurable changes in the volatile composition of meat and meat products may occur as a result of ACP, certainly after ACP with oxygen as part of the plasma-generating gas as lipid oxidation forms undesirable volatile compounds [14].Minor ACP-induced protein oxidation may lead to improved myofibrillar protein functional properties, while severe oxidation should be prevented as it may lead to decreased functional properties [15,16].Furthermore, ACP can be used for curing meat, either via direct treatment of the meat with ACP or via the use of plasma-treated water (PTW), as reviewed in our previous work [6].
Although numerous papers have been published on the topic of ACP treatment of meat, research on meat quality parameters beyond microbial quality, lipid oxidation, and color is still lacking.As pointed out by Rossow et al. [17], future research should deal with other important meat quality parameters as well.This includes sensory evaluation and determination of volatile composition.
This review provides a brief overview over the most common methods for the generation of ACP for treatment of meat and meat products as well as a description of novel developments in ACP technology.Studies reporting on the effects of ACP treatment on the flavor and volatile composition of meat and meat products are reviewed and discussed.Consequently, gaps in the current knowledge regarding the effect of ACP on parameters beyond microbial quality, lipid oxidation, and meat color are identified, especially as related to flavor.
Atmospheric Pressure Cold Plasma in Meat Processing
Cold plasma can be generated by a variety of sources: Corona discharge, gliding arc discharge, dielectric barrier discharge (DBD), plasma jet, microwave plasma, inductively coupled plasma, capacitively coupled plasma, and UV photo-ionization [5].The most common methods for ACP generation in food processing are DBD and plasma jets [18] with DBD seemingly being the most commonly used plasma source when treating meat and meat products [19].A DBD plasma generation setup consist of two metal electrodes, at least one coated with a dielectric layer, with a high potential difference applied across them [18] (Figure 1).The distance between the electrodes varies in different studies, and can be up to several cm [20].Exposing the gas to the electric field at room temperature generates a small percentage of ionized gas, resulting in formation of reactive plasma species such as ROS and RNS, depending on the composition of the treatment gas [4].
with a dielectric layer, with a high potential difference applied across them [18] (Figure 1).The distance between the electrodes varies in different studies, and can be up to several cm [20].Exposing the gas to the electric field at room temperature generates a small percentage of ionized gas, resulting in formation of reactive plasma species such as ROS and RNS, depending on the composition of the treatment gas [4].
The plasma jet can be considered a modification to the other methods for plasma generation [21], though most jets are based on DBD configurations [19].The plasma jet consists of two concentric electrodes through which the gas or gas mixture flows at a high rate [18,21].The plasma jet is generally placed a few millimeters above the food, and the ionized gas is directed via the nozzle (Figure 1).[18] with permission from Elsevier.
A relatively new concept is the use of in-package DBD [18] and other in-package ACP technologies, which have been reviewed extensively by Misra et al. [12].In-package ACP works by exposing the already packaged food product to ACP, thereby ionizing the gas in the headspace of the package and causing microbial inactivation in a uniform way.Unreacted plasma-generated species recombine, thus, recreating the original gas in the package (Figure 2) [12].The future potential of in-package DBD is excellent as it is possible to treat, for example, vacuum-packaged meat products under continuous large-scale conditions [19].The plasma jet can be considered a modification to the other methods for plasma generation [21], though most jets are based on DBD configurations [19].The plasma jet consists of two concentric electrodes through which the gas or gas mixture flows at a high rate [18,21].The plasma jet is generally placed a few millimeters above the food, and the ionized gas is directed via the nozzle (Figure 1).
A relatively new concept is the use of in-package DBD [18] and other in-package ACP technologies, which have been reviewed extensively by Misra et al. [12].In-package ACP works by exposing the already packaged food product to ACP, thereby ionizing the gas in the headspace of the package and causing microbial inactivation in a uniform way.Unreacted plasma-generated species recombine, thus, recreating the original gas in the package (Figure 2) [12].The future potential of in-package DBD is excellent as it is possible to treat, for example, vacuum-packaged meat products under continuous large-scale conditions [19].2019) [12].
Another recent development within ACP technology is its use as a drying pretreatment technology, primarily for fruits and vegetables [22].One such pretreatment ACP method is cold filamentary microplasma (CFM).CFM creates a thin plasma channel by a high voltage, non-self-sustained gas discharge between two metal electrodes in a gaseous medium equipped with permanent magnets to create a concentrated discharge [23,24].With this technique, it is possible to penetrate the entire food product as opposed to treating only the surface (Figure 3) [24].Another recent development within ACP technology is its use as a drying pretreatment technology, primarily for fruits and vegetables [22].One such pretreatment ACP method is cold filamentary microplasma (CFM).CFM creates a thin plasma channel by a high voltage, non-self-sustained gas discharge between two metal electrodes in a gaseous medium equipped with permanent magnets to create a concentrated discharge [23,24].With this technique, it is possible to penetrate the entire food product as opposed to treating only the surface (Figure 3) [24].[12].
Another recent development within ACP technology is its use as a drying pretreatment technology, primarily for fruits and vegetables [22].One such pretreatment ACP method is cold filamentary microplasma (CFM).CFM creates a thin plasma channel by a high voltage, non-self-sustained gas discharge between two metal electrodes in a gaseous medium equipped with permanent magnets to create a concentrated discharge [23,24].With this technique, it is possible to penetrate the entire food product as opposed to treating only the surface (Figure 3) [24].CFM as a pretreatment technology has been employed to reduce drying time and improve drying efficiency for plant products, e.g., potato slices [23] and apple slices [24].Drying is accelerated by the creation of surface micro-holes and electrically induced channels through the material [24], resembling piercing with a needle [23].It is plausible that this technology might be used on meat products in the future, though this would require extensive studies for optimization of conditions due to the differences in the nature of the matrix such as the higher content of fat and protein in meat.It would be interesting to investigate if CFM could reduce drying time for meat products such as dry-cured hams and jerky without negatively influencing important quality characteristics such as color, texture, and flavor.
ACP Effect on the Flavor of Meat and Meat Products
Table 1 shows an overview over research studies where the effect of ACP on meat and meat products has been evaluated in terms of a sensory evaluation and/or determination of the volatile composition in the headspace.Since the main task of plasma treatment of foods is to reduce bacteria or viruses on the food surface, some settings use prolonged exposure times to achieve sufficient reductions.Arguably, this bears the risk of heating the sample surface and some of the reported effects might have been caused by a rise in the surface temperature, which could be prevented by cooling systems.With respect to the action of plasma gas species, reduction in exposure times will reduce the risk of unwanted changes, as e.g., lipid oxidation.In order to better understand which reactions take place in the meat matrix, a characterization of the composition of the plasma should be provided [25] in addition to the technical settings (e.g., voltage, frequency, amperage, geometry of the electrodes etc.).Currently, a variety of methods is available for the characterization in airgas plasmas, depending on if the food item is exposed directly or indirectly (e.g., PTW).In direct treatment settings, analytical methods must be capable of detecting (and quantifying) not only long-lived compounds, but also short-lived ones [26].A detailed description of available methods and issues of selectivity have been given by Gorbanev et al. [26].
Uncured Meat
ACP-generated ROS can react with unsaturated fatty acids, thus initiating lipid oxidation [25].Secondary lipid oxidation products, such as many aldehydes, are known to cause off-flavors in meat [14].Especially when using an O 2 -containing treatment gas, an effect on flavor might be anticipated.Nevertheless, not all studies found ACP to influence flavor (Table 1).For example, no effect was found on the flavor of either raw or cooked pork loin treated with DBD ACP using a gas mixture consisting of He and O 2 [27].Similarly, in-package DBD using atmospheric air had no effect on the off-flavor of pork butt and beef loin, though taste was negatively influenced by the treatment for both types of meat [30].On the other hand, using Corona discharge plasma jet with dried, filtered air as the treatment gas increased the off-flavor of fresh pork slices, while the sensory characteristics of frozen pork slices remained unaffected [31].A number of studies have dealt with the effect of DBD on chicken breast.A decrease in desirable flavor and an increase in off-flavor with extended in-package DBD exposure time but no effect on taste was found in one study where atmospheric air was the treatment gas [36].Another study found a decrease in the flavor score of protein-coated boiled chicken breast cubes immediately following in-package DBD treatment, but with no effect between DBD-treated and untreated samples after 3 days of refrigerated storage [38].DBD on both raw and cooked chicken breast led to improvements in odor and flavor, respectively, after DBD ACP treatment with argon, not surprisingly, performing better than oxygen as the treatment gas [37].
A few studies have employed gas chromatography-ion mobility spectrometry (GC-IMS) as a method to analyze the effect of ACP on the volatile composition of meat.When applying in-package DBD with an Ar-gas to pork meatballs, the content of certain aldehydes, alcohols, and esters increased in the headspace despite the atmosphere being O 2 -free [33].The aldehydes include nonanal, (E)-2-octenal, (E)-2-nonenal, and benzaldehyde, which have previously been detected in, for instance, reheated pork [39] and stewed goat meat (with the exception of benzaldehyde) [40].Nonanal has a fruity odor [33,40] and originates from autoxidation of oleic acid or from the degradation of its hydroperoxides or degradation of hydroperoxides from other n-9 monounsaturated fatty acids [39].(E)-2-octenal has been attributed a grilled meat aroma and is suspected to be derived from linoleic acid degradation [40] or β-scission of n-7 unsaturated fatty acids [39].(E)-2-nonenal, which has been described as nutty [40] or as having a cooked, cured ham odor [33], originates from βscission of n-7 unsaturated fatty acids [39].The odor of benzaldehyde is described as bitter almond [33].Benzaldehyde may be formed by degradation of the amino acid phenylalanine and its derivatives in the presence of lipid hydroperoxides [41].The alcohol that saw the most significant increase in concentration due to ACP treatment in the pork meatballs was 2-hexanol [33], originating from the oxidation of polyunsaturated fatty acids [42,43].The presence of 2-hexanol has previously been described in the volatile composition of, for instance, yak meat [43] and rainbow trout [42], and the odor descriptors are in the areas of green and pungent [33,42].ACP treatment of pork meatballs also resulted in an increased headspace concentration of a few esters such as hexyl acetate [33], which has a fruity, green odor and has previously been detected in unsmoked bacon [44] and unheated yeast-fermented pork hydrolysate, being one of the main metabolites of the fermentation of the corresponding alcohol [45].Generally, a larger increase in headspace concentration of volatiles was seen with increasing treatment time for DBD ACP with Ar-gas performed on pork meatballs [33].
When performing in-package DBD ACP with atmospheric air as the treatment gas on fresh beef patties made from longissimus lumborum, it was found that certain alcohols (1-octen-3-ol and 1-pentanol), aldehydes (heptanal, trans-2-heptenal, n-nonanal, hexanal, and octanal), ketones (2,3-pentanedione), and esters (ethyl butanoate and (Z)-3-hexenyl acetate) were produced or increased in concentration [34].The aroma of the alcohols was overall described as being woody, acrid, grassy, fruity, mushroom, and fatty and produced via a reaction between alkoxyl radicals formed during lipid oxidation and a second lipid molecule [34,46].The detected aldehydes were described as nutty, fruity green, Foods 2023, 12, 3295 9 of 13 and fatty (heptanal), fatty and floral (n-nonanal), green, leafy, and grassy (hexanal), and fruity, citrusy, and fatty (octanal), i.e., largely undesirable odors of these secondary lipid oxidation products, which are generally important flavor compounds due to their low odor thresholds [34,39].The ketone 2,3-pentadione has been correlated positively to the aroma of yak meat [43].The esters ethyl butanoate and (Z)-3-hexenyl acetate are described as sweet or fruity and originating from esterification of the corresponding alcohols and carboxylic acids [34].The volatile compounds formed in the beef patties as a result of DBD ACP treatment in air are mainly related to lipid oxidation and could have a negative effect on the eating quality, though no sensory evaluation was conducted to confirm this [34].
Cured Meat and Meat Curing
As reviewed in our previous work, ACP can be used as a way of curing without the addition of nitrite [6] as shown successfully, for example, for pork jerky [47], canned ground ham [29], and a meat batter [48].Similar to curing with nitrite, curing via ACP largely prevents lipid oxidation, though microbial inactivation may be hampered and the ACP treatment will need to be adjusted accordingly [48].Sensory characteristics in terms of flavor and taste were found to be either unaffected or improved by curing via DBD ACP instead of traditional curing of canned ground ham [29].Instead of curing via direct application of ACP to the meat, curing with ACP may also be carried out using PTW with only slightly worse [49] or even slightly improved [50] results in terms of cured color.The important point here is equalization of the nitrite concentration [6].
An extensive study combining the use of the electronic nose, GC-IMS, and sensory evaluation was conducted to assess smoked pork sausage cured either via ACP-treated phosphate-containing brine in combination with (reduced amounts of) nitrite or the traditional way, i.e., via nitrite alone [32].Interestingly, despite no differences being detected by the sensory panel in terms of smoked flavor and the overall acceptability of the sausages, some differences were detected in the volatile composition, and grouping of the treatments was possible with partial least squares discrimination analysis [32].For example, sausages with the ACP-treated brine had an increased content of the aldehyde 2-methylbutanal compared to the untreated control and the nitrite-only group [32].The branched chain aldehyde 2-methylbutanal has previously been positively associated with the flavor development in Jinhua dry-cured ham [51] and detected as a minor odor compound in Iberian dry-cured loin [52].The compound originates from the Strecker degradation of the amino acid isoleucine [53].
Only one study combines application of ACP treatment of cured meat products with subsequent sensory evaluation.Inactivation of mold and bacteria in beef jerky was investigated and supplemented with a sensory evaluation [35].Unfortunately, DBD with ambient air as the treatment gas led to slightly decreased scores for flavor along with an increase in off-odor [35].One might speculate that treatment with an oxygen-free gas could be advantageous in this case, as it is well-known that cured meat is susceptible to the influence of oxygen [54].
It has been shown that DBD ACP can be used to modify myofibrillar protein structure and, therefore, their ability to bind volatile flavor compounds [28].Myofibrillar proteins extracted from dry-cured bacon treated with DBD ACP saw a decrease in five different aldehydes in the headspace, especially with ACP treatment at 70 kV, but also at 60 kV compared to 50 kV and the untreated control (Table 1).The authors suggest DBD ACP as a method for modifying the functionality of myofibrillar proteins while improving the flavor of meat products [28].Optimizing ACP conditions in order to obtain desirable results for all quality parameters is important.In addition, as shown by Luo et al. [28], increasing the voltage may actually reduce the headspace concentration of certain undesirable aldehydes.
Research Gaps
Comparably few research data have been published on the effects of ACP treatment of meat and meat products on meat quality parameters beyond microbial quality, lipid oxidation, and color.
It is well-known that ACP treatment can affect the activity of endogenous food enzymes [18,55].However, the effect of ACP on metmyoglobin reductase, which is critical for maintaining color stability in fresh meat, has not yet been investigated [6], though this might help explain observed color changes following ACP treatment.As myoglobin oxidation is known to accelerate lipid oxidation (and vice versa) [56], any ACP-induced reduction in color stability could lead to an increase in lipid oxidation, and, consequently, off-flavor development.Generally, it can be concluded that ACP decreases enzyme activity due to ROS and RNS destroying chemical bonds within the enzymes, leading to folding and conformational changes causing a reduction in enzyme activity [18,22].
Another significant research gap is the sensory changes taking place because of ACP.Though some studies do include a sensory evaluation, very few studies further include analysis of changes in volatile composition due to ACP treatment.This could certainly be advantageous when trying to explain any sensory changes taking place.Many studies speculate on the reactive species formed due to ACP, which are responsible for the observed changes in quality parameters.Nonetheless, most studies do not actually measure this.One exception is Rød et al. [57], who measured the ozone concentration after DBD ACP treatment of bresaola with 70% Ar and 30% O 2 as the treatment gas, but not the concentrations of other reactive species.Knowledge of reactive species formed and changes in volatile composition is necessary to determine the exact reactions taking place resulting in ACP-induced changes in flavor as well as other meat quality parameters.This information will make it easier to adjust the plasma conditions to achieve the desired outcome.
Conclusions
Studies on the effect of ACP treatment on meat and meat products mainly determine microbial inactivation, lipid oxidation, and meat color.Some studies include sensory evaluations, but only a few determine the changes in volatile composition due to ACP treatment.Most studies lack a thorough characterization of the reactive species formed due to ACP, which effectuate changes in flavor and other quality parameters.Such knowledge would not only allow fine-tuning of ACP treatment in terms of minimum sensory alteration but will also be needed to determine if ACP-treated foods must be considered as "novel foods" requiring regulatory approval.
Figure 1 .
Figure 1.Illustration of (a) dielectric barrier discharge and (b) plasma jet.Reprinted from [18] with permission from Elsevier.
FoodsFigure 2 .
Figure 2. Illustration of the processes taking place during in-package ACP treatment.(1) The meat product in a sealed package with air or a modified gas mixture.(2) The package is subjected to a high-volage electric field causing gas breakdown and plasma generation.(3) The formed reactive oxygen species (ROS) and reactive nitrogen species (RNS) diffuse in the package and inactivate spoilage microorganisms over the span of a few hours.(4) The ROS and RNS recombine to recreate the original atmosphere in the package.Inspired by Figure 2 from Misra et al. (2019)[12].
Figure 2 .
Figure 2. Illustration of the processes taking place during in-package ACP treatment.(1) The meat product in a sealed package with air or a modified gas mixture.(2) The package is subjected to a high-volage electric field causing gas breakdown and plasma generation.(3) The formed reactive oxygen species (ROS) and reactive nitrogen species (RNS) diffuse in the package and inactivate spoilage microorganisms over the span of a few hours.(4) The ROS and RNS recombine to recreate the original atmosphere in the package.Inspired by Figure 2 from Misra et al. (2019) [12].
Foods 2023 ,Figure 2 .
Figure 2. Illustration of the processes taking place during in-package ACP treatment.(1) The meat product in a sealed package with air or a modified gas mixture.(2) The package is subjected to a high-volage electric field causing gas breakdown and plasma generation.(3) The formed reactive oxygen species (ROS) and reactive nitrogen species (RNS) diffuse in the package and inactivate spoilage microorganisms over the span of a few hours.(4) The ROS and RNS recombine to recreate the original atmosphere in the package.Inspired by Figure 2 from Misra et al. (2019)[12].
Table 1 .
Effect of atmospheric pressure cold plasma (ACP) treatment on meat and meat products in terms of flavor and volatile composition.
|
2023-09-06T15:14:14.392Z
|
2023-09-01T00:00:00.000
|
{
"year": 2023,
"sha1": "befddc2a7ff728e0e4b2bc9eb23e8ec5d685b681",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/12/17/3295/pdf?version=1693616292",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1629787a81fcc242d6e4fc558fb4d5f0f5c46d26",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
16767576
|
pes2o/s2orc
|
v3-fos-license
|
Comparison of muscle synergies for running between different foot strike patterns
It is well known that humans run with a fore-foot strike (FFS), a mid-foot strike (MFS) or a rear-foot strike (RFS). A modular neural control mechanism of human walking and running has been discussed in terms of muscle synergies. However, the neural control mechanisms for different foot strike patterns during running have been overlooked even though kinetic and kinematic differences between different foot strike patterns have been reported. Thus, we examined the differences in the neural control mechanisms of human running between FFS and RFS by comparing the muscle synergies extracted from each foot strike pattern during running. Muscle synergies were extracted using non-negative matrix factorization with electromyogram activity recorded bilaterally from 12 limb and trunk muscles in ten male subjects during FFS and RFS running at different speeds (5–15 km/h). Six muscle synergies were extracted from all conditions, and each synergy had a specific function and a single main peak of activity in a cycle. The six muscle synergies were similar between FFS and RFS as well as across subjects and speeds. However, some muscle weightings showed significant differences between FFS and RFS, especially the weightings of the tibialis anterior of the landing leg in synergies activated just before touchdown. The activation patterns of the synergies were also different for each foot strike pattern in terms of the timing, duration, and magnitude of the main peak of activity. These results suggest that the central nervous system controls running by sending a sequence of signals to six muscle synergies. Furthermore, a change in the foot strike pattern is accomplished by modulating the timing, duration and magnitude of the muscle synergy activity and by selectively activating other muscle synergies or subsets of the muscle synergies.
Introduction
Runners are broadly categorized into three groups according to their foot strike pattern [1][2][3][4][5][6][7][8]. These patterns include a fore-foot strike (FFS), in which the ball of foot lands before the heel; a mid-foot strike (MFS), in which the heel and ball of the foot land simultaneously; and a rearfoot strike (RFS), in which the heel lands first. In particular, the differences between the FFS and RFS have been studied in terms of ground reaction forces [1], knee loading [2], and running economy [3]. Yong et al. [4] showed a difference in muscle activity between habitual FFS and RFS runners. According to their findings, FFS runners showed significantly lower activity in the tibialis anterior during the terminal swing phase. In contrast, the medial and lateral gastrocnemius showed greater activity in the terminal swing phase in FFS runners. The same differences in muscle activity were also reported when habitual RFS runners ran with their natural RFS pattern and a FFS pattern [5]. However, those studies focused on the differences in the final output parameters and did not reveal differences in the neural control mechanisms that produced the different outputs.
The neural control mechanisms of human running are still unclear. Because humans and other species have a large number of muscles, motor control would be redundant if the central nervous system (CNS) individually issued commands to each muscle. To solve this redundancy problem, previous studies have suggested a modular neural control mechanism referred to as muscle synergies [9][10][11][12][13][14][15][16]. Assuming the existence of muscle synergies, the CNS issues commands to a small number of muscle synergies to achieve movements. Human running has also been discussed in terms of muscle synergies [17,18]. Cappellini et al. [17] suggested that both human walking and running might be controlled by a sequence of five temporal activation components. Hagio et al. [18] showed that the gait transition between walking and running was controlled by approximately nine muscle synergies. In those studies, the authors focused on comparing muscle synergies between walking and running. However, running includes different foot strike patterns [1][2][3][4][5][6][7][8], and there are likely changes in muscle synergies or in the activation patterns of muscle synergies that must occur to change foot strike patterns during running. Comparing the muscle synergies extracted during running with different foot strike patterns revealed the neural control mechanisms that modulate the activity of a large number of muscles to realize those running styles.
The main purpose of the present study was to compare muscle synergies for running with different foot strike patterns. To make a simple comparison, we classified running styles performed in the experiment into FFS and RFS running based on footswitch signals. We hypothesized that the results of the present study would reveal a modular neural control mechanism underlying human running and explain how the CNS controls running with different foot strike patterns. real time, foot strike patterns were confirmed by footswitches attached bilaterally to the fore and rear part of the shoe soles [18]. The front switch was located on 70% of the shoe-sole length from the rear edge and the rear switch was located on 15% of the shoe-sole length from the rear edge. We confirmed that subjects were running with FFS when front footswitches turned on first, and that they were running with RFS when rear footswitches turned on first. To verify foot strike patterns, we also calculated the foot strike angle (FSA) of the foot using kinematic data [19]. Before the recording session, subjects practiced for a few minutes running on the treadmill at different speeds with both FFS and RFS to adjust to those running styles.
Kinematic data was recorded bilaterally at 100 Hz by means of the three-dimensional optical motion capture system (OptiTrack V100, NaturalPoint Inc., Oregon, the United States) with eighteen cameras spaced around the treadmill. Infrared reflective markers were attached on each side of the subject to the skin overlying the following landmarks: temple, acromion, lateral condyle of the elbow, styloid process of the ulna, anterior superior iliac spine, posterior superior iliac spine, greater trochanter, lateral condyle of the knee, medial condyle of the knee, lateral malleolus, medial malleolus, heel, and toe. The markers were also attached to vertex, chin, and right blade bone.
The signals recorded from the EMG electrodes and foot switches were stored at the sampling frequency of 1000 Hz on the hard disk of a personal computer using a 16-bit analog-todigital converter (PowerLab/16SP; AD Instruments, Sydney, Australia). Sampling of EMG, foot switches, and kinematic data were synchronized.
Recording was started after the subjects had been running on the treadmill for about half a minute to allow their movement to settle into a regular pattern. More than 30 gait cycles were recorded for each running condition.
Data analysis
All analyses were performed with custom software written in Matlab (Mathworks, Natick, MA). The gait cycle was defined with respect to right leg movement and began when the right foot contacted the foot switch and initiated the corresponding signal. The front switch was used to indicate the start of the gait cycle in FFS running and the rear switch was used to indicate the start of the gait cycle in RFS running.
Foot strike angle. In order to verify that the subjects were in fact running with FFS or RFS, we calculated FSA [19] using kinematic data. FSA was defined as FSA ¼ ffAB footstrike À ffAB baseline where ∠AB footstrike was the angle at foot strike between vector AB, which was parallel to heel-toe segment, and horizontal axis in the sagittal plane, and ∠AB baseline was the averaged angle of the foot during the periods when both rear and front footswitches were on. A positive FSA indicated dorsiflexion.
Data preprocessing. The EMG data recorded for each condition, which included 30 consecutive gait cycles, were high-pass filtered at 40 Hz using a zero-phase-lag 4th-order Butterworth filter, full-wave rectified, and low-pass filtered at 10 Hz [17,22]. The filtered data were time-interpolated over a time base with 200 points [17,18,22,23] for individual gait cycles (cubic spline interpolation). The calculated data for each condition were arranged into a data matrix with 24 rows (24 muscles) and 6000 columns (200 points/cycle × 30 cycles). The amplitudes of the EMG waveforms in a data matrix were normalized by the maximum value in 10 data matrices (5 speeds × 2 foot strike patterns) for the corresponding subject so that all muscle scales ranged from 0 to 1 and also normalized by the standard deviation values for the corresponding muscles in the data matrix to have unit variance and thus ensure that the activity in all muscles was equally weighed [18]. After being low-pass filtered and time-interpolated, a small fraction of the EMG samples (0.01% and 0.005% of total trials, respectively) assumed negative values. However, to use the non-negative matrix factorization (NMF), all negative values were set to zero as NMF is constrained to positive values in the data matrix [16].
Extraction of muscle synergies. Muscle synergies were extracted from each data matrix of EMG recordings using the NMF [9][10][11][12][13][14][15][16]18,22,24] algorithm. NMF assumes that a muscle activation pattern M in a given time period is composed of a linear combination of a few muscle synergies W i that are each recruited by a synergy recruitment coefficient C i . Therefore, a particular muscle activation pattern M in a task would be represented by where we specify the relative contributions of the muscles involved in synergy i. Each muscle synergy has a fixed composition W i (a row vector of 24 elements) and is multiplied by a scalar recruitment coefficient C i (a column vector of 6000 elements), which changes over time. ε is residual. The detailed extraction procedure is as follows. (1) For each N syn (number of synergies), W and C matrices were initialized randomly and updated 3000 times to sufficiently minimize the residual error ε between the original data matrix and the reconstructed data matrix obtained by multiplying W and C. (2) To eliminate the influence of initial value dependence, ten runs of the NMF algorithm were performed, and the best solution that showed the highest variance accounted for (VAF) value (see the next paragraph) was selected [16]. (3) The synergy weighting and activation coefficient matrices were normalized such that the individual weighting vector was the unit vector [18].
Selection of the number of synergies. The goodness of fit of the data reconstruction using each number of muscle synergies was quantified by the VAF, which was defined as 100 × (1 − SSE/SST), with SSE representing the sum of square residuals of the data reconstruction and SST representing the sum of the squared data. The number of synergies was determined by choosing the least number of synergies that could account for greater than 90% of the overall VAF and for greater than 75% of the VAF in each muscle. This threshold is the same as that used in a previous study about human walking [15].
According to this threshold criterion, four to nine muscle synergies were extracted for all conditions. However, the EMG activity in most conditions was well accounted for by six synergies, and the variation seemed to be normally distributed (Fig 1). Hence, we set six as the conclusive number of synergies for all conditions. We again used the NMF algorithm in all conditions for the six synergies. The six synergies in an arbitrary subject were arranged in order according to the timing of the main peak of their activation patterns and then the six synergies in the other subjects were sorted based on the values of cosine similarity (see the next paragraph) with that of the arbitrary reference subject. The sorted six synergies were named in order as Syn1, Syn2, Syn3, Syn4, Syn5 and Syn6.
Muscle synergy comparison. To examine the similarity in weightings among the muscle synergies across conditions, we used a cosine similarity analysis [25]. In this analysis, when comparing two muscle synergies, the inner product of the two muscle synergy vectors was calculated. Because each weighting vector was normalized by its norm, the inner product of the two muscle synergy vectors represented the cosine of the angle between the vectors. Thus, an inner product closer to 1 indicated a greater similarity in the directions of the two vectors.
To compare muscle synergies extracted from FFS and RFS, we also examined the phase shifts and the differences in the duration and magnitude of the activation patterns. The phase shifts were quantified by comparing the timing of the main peaks of the activity. The main peak timings were transformed to radians ranging from -π to π for circular statistics in order to eliminate the influence of the peaks stepping over gait cycles. The duration of the activity was defined as the full-width at half-maximum (FWHM) of the main peak [17,22]. The magnitude of the activity was examined by calculating the root mean square (RMS) of the activity during the FWHM. Significant differences in muscle weightings, FWHM, and RMS between FFS and RFS were calculated using paired t-tests. Significant differences in phase shifts were assessed using the Watson-Williams test for circular data [22]. The significance of the differences was set at p < 0.05.
Verification of foot strike patterns
In the experiment, we confirmed foot strike patterns using footswitches in real time. However, this method has not been validated and it was insufficient to verify foot strike patterns. Thus we calculated FSA [19] using kinematic data. Following the criterion shown in the previous study [19], we classified foot strike patterns: FFS = FSA < −1.6˚, RFS = FSA > 8.0˚, and MFS = −1.6< FSA < 8.0˚. Fig 2 shows FSA in all tasks for every subject. According to the criterion, many foot strike patterns performed in FFS tasks were classified into MFS and some in RFS tasks were also classified into MFS. However, each subject showed larger FSA in RFS tasks than in FFS tasks. Furthermore, in all subjects, the front switch did turn on first in FFS tasks and the rear switch did turn on first in RFS tasks. This confirms the fact that the subjects were actually running with FFS in FFS tasks and with RFS in RFS tasks because the positions of the front and rear footswitches were within the areas in which the center of pressure at touchdown indicates FFS and RFS, respectively [19,26]. Note that we focused on revealing how the CNS coordinates muscle activities when it intends to change a foot strike pattern. Thus it is most important that subjects try to run with FFS or RFS in each task and it is not so essential that they run in fact with FFS or RFS according to the criterion shown in the previous study [19]. Therefore in this study, we determined that subjects were running with FFS or RFS based on footswitch signals.
EMG waveforms
The EMG activity for one cycle during running at various speeds in FFS and RFS are illustrated in Fig 3. The waveforms are similar to those observed in the previous study [17]. The amplitude of the EMG activity increased with an increase in speed. In almost all muscles, there is no determinant difference between FFS and RFS. However, TA showed higher activity in RFS during the terminal swing phase. On the other hand, the plantar flexors (MG, LG, and Sol) showed higher activity during terminal swing phase in FFS, indicating that their amplitudes around the touchdown were higher in FFS or that the onset of their activity was earlier in FFS.
Similarity in muscle synergies
Six muscle synergies were extracted for all conditions using NMF. We examined the similarity in weightings of muscle synergies using a cosine similarity analysis (p < 0.05 when r > 0.868). Fig 4A shows the cosine similarity values expressed as a color map between all possible combinations of muscle synergies extracted from running at 15 km/h. The six muscle synergies (Syn1-6) were quite similar across subjects in both FFS and RFS (mean ± SD across subjects and synergies: FFS, r = 0.820 ± 0.104; RFS, r = 0.844 ± 0.088). These characteristics were also observed in muscle synergies extracted from running at the other speeds. As shown in Fig 4B, the similarity in muscle synergies across speeds was also high (mean ± SD across speeds and synergies: FFS, r = 0.958 ± 0.038; RFS, r = 0.958 ± 0.036), which represents the cosine similarity values between all possible combinations of muscle synergies extracted from Sub2. The six muscle synergies were also quite similar between FFS and RFS (mean ± SD across speeds and synergies: r = 0.927 ± 0.038), but the similarity in Syn3 and Syn6 between FFS and RFS was relatively low (mean ± SD across speeds: Syn3, r = 0.873 ± 0.070; Syn6, r = 0.807 ± 0.043) compared to Syn1, Syn2, Syn4, and Syn5 (mean ± SD across speeds: Syn1, r = 0.959 ± 0.034; Syn2, r = 0.974 ± 0.016; Syn4, r = 0.963 ± 0.031; Syn5, r = 0.985 ± 0.006). These characteristics of muscle synergies across speeds were common to all subjects.
General characteristics of muscle synergies
A typical example of the six muscle synergies is shown in Fig 5. Syn1 was activated at the time of the right foot touchdown, and it mainly recruited the right VL, Gmed, and Gmax. Syn1 absorbed the touchdown impact and stabilized joints. Syn2 was activated during the time between the right foot touchdown and right foot liftoff or during the right leg stance phase, and it mainly recruited the right plantar flexors (MG, LG, and Sol) and BF. Syn2 pushed off from the ground with plantar flexion of the ankle joint and extension of the hip joint. Syn3 was activated during the time between the right foot liftoff and left foot touchdown, and it mainly recruited the right TFL and AL, along with the left BF. Syn3 lifted the right leg with flexion of the right hip joint and moved the left leg down with extension of the left hip joint. Similarly, Syn4 was activated at the time of the left foot touchdown and absorbed the touchdown impact. Syn5 was activated during the left leg stance phase and pushed off from the ground. Syn6 was activated during time between the left foot liftoff and right foot touchdown and lifted the left leg and moved the right leg down.
These characteristics were common to both FFS and RFS for all subjects. However, several muscle weightings were significantly different for each foot strike pattern (p < 0.05). In particular, the weightings of the left TA in Syn3 and right TA in Syn6 showed a large difference between FFS and RFS, and this difference was observed in all subjects except Sub7. Sub7 did not show significant difference in the weightings of TA between FFS and RFS.
Phase shifts in activation patterns
To compare the activation patterns of muscle synergies extracted from FFS and RFS, we first examined the phase shifts of muscle synergy activity. To quantify the phase shifts, the timing of the main peaks of the muscle synergy activity in one cycle was calculated and transformed to radians (Fig 6A). Eight out of ten subjects (Sub1-3 and Sub6-10) showed the tendency of a phase delay in the activity of the RFS muscle synergies (Fig 6B). In contrast, the other subjects (Sub4-5) showed the tendency of a phase delay in the activity of the FFS muscle synergies ( Fig 6C). However, when we defined the gait cycle with respect to right foot liftoff instead of right foot touchdown, all subjects showed a phase delay in the activity of the FFS muscle synergies, especially at lower speeds, and almost no phase shift between FFS and RFS at higher speeds (Fig 6D and 6E). Examining the change in the main peak timing with an increase in speed, in both FFS and RFS, the main peak of the activity tended to show a slight phase advance relative to the foot touchdown, and in contrast, the phase delay relative to foot liftoff.
Difference in duration of activation patterns
We compared the activation patterns of muscle synergies in terms of the duration of the activity. To estimate the duration, we calculated the FWHM of the main peak ( Fig 7A) [17,22]. Fig 7B shows the FWHM of both FFS and RFS muscle synergies (mean over cycles, speeds, and subjects ± SD). All of six synergies showed longer durations of activity in FFS. Syn1-5 showed significant difference (p < 0.05). Especially, Syn2 and Syn5, which mainly recruited the plantar flexors of the stance leg, showed clearly longer durations of activity in FFS (p < 0.001). We also examined the changes in the FWHM with an increase in speed. Fig 7C shows the FWHM of both FFS and RFS muscle synergies at all speeds (mean over cycles and subjects ± SD). The FWHM of Syn1-2 and Syn4-5 decreased with an increase in speed. On the other hand, the FWHM of Syn3 and Syn6 tended to increase with an increase in speed. Because the proportion of stance phase to one cycle decreased and that of the swing phase increased with an increase in speed, the changes in the FWHM seemed to correspond to the changes in the duration of the stance and swing phases.
Difference in magnitude of activation patterns
We examined the differences in magnitudes among the activation patterns. The magnitude of muscle synergy activity was defined as the RMS of the activity during the FWHM. Fig 8A Fig 6. Timing of the main peaks of muscle synergy activity. (A) The example of the circular data for the main peak timings (Sub1 at 15 km/h). (B) Phase delay in the activity of RFS muscle synergies relative to the right foot touchdown for Sub1. (C) Phase delay in the activity of FFS muscle synergies relative to the right foot touchdown for Sub5. (D) Phase shifts relative to the right foot liftoff for Sub1. (E) Phase shifts relative to the right foot liftoff for Sub5. Asterisks represent significant differences at p < 0.05. shows the RMS of the main peak activity of both FFS and RFS muscle synergies (mean over cycles, speeds, and subjects ± SD). Syn2-3 and Syn5-6 showed larger magnitudes in RFS, whereas Syn1 and Syn4 showed larger magnitudes in FFS. All of six synergies showed significant differences between FFS and RFS (p < 0.05).
We also examined the changes in the RMS with an increase in speed. Fig 8B shows the RMS of the main peak activity of muscle synergies at all speeds (mean over cycles and subjects ± SD). All six synergies in both FFS and RFS showed larger magnitudes with an increase in speed.
Discussion
We examined muscle activation over a range of speeds during running with both FFS and RFS. The EMG activity obtained in this study (Fig 3) showed characteristics similar to those observed in previous studies that compared muscle activity between FFS and RFS running [4,5]. These characteristics included increased activity in TA during the terminal swing phase in RFS and increased activity in the plantar flexors during the terminal swing phase in FFS. Using NMF, six muscle synergies were extracted in all conditions (Fig 5). The six synergies showed sequential activity in a cycle, and each synergy had a specific function. Muscle synergies that showed separate peaks in activity during human locomotion, such as walking and running, were also observed in previous studies [15,17,18,22,23]. The recruitment of muscles in each synergy was similar across subjects ( Fig 4A) and also similar across speeds (Fig 4B). These results suggest that the same six synergies are adopted across speeds and that a basic pattern of coordinated muscle activity is common to all subjects during running. The muscle synergies extracted from FFS and RFS running were also similar to each other. However, Syn3 and Syn6 showed relatively lower similarity values between FFS and RFS (Fig 4B). When comparing the weightings of muscle synergies, some muscle weightings showed significant differences between FFS and RFS ( Fig 5A). In particular, the significant differences in the weightings for TA before the touchdown in Syn3 and Syn6 were large (p < 0.003 for Sub2). These large differences in weightings may have resulted in the relatively lower similarity values of Syn3 and Syn6 between FFS and RFS. Differences between FFS and RFS were also found in the activation patterns in terms of the phase shift for the timing of the peak activity as well as the duration and magnitude of the peak activity.
Differences in muscle synergies between FFS and RFS
The main purpose of this study was to compare muscle synergies for FFS running with those for RFS running. The weightings for the muscle synergies were similar between FFS and RFS, even though some muscle weightings showed significant differences (p < 0.05). The weightings for TA before the touchdown in Syn3 and Syn6 showed especially large differences that were common to all subjects except Sub7. These differences correspond to differences in the EMG activity, which included increased activity in TA during the terminal swing phase in RFS. The differences in the weightings of TA make it possible to dorsiflex the ankle joint just before the touchdown in RFS, thus allowing the heel to land first [1,2,[4][5][6][7].
We also compared the activation patterns of muscle synergies. To examine the phase shifts of the muscle synergy activity, we focused on the timing of the main peak in activity. When the gait cycle was defined by successive right foot touchdown, eight out of ten subjects (Sub1-3 and Sub6-10) showed a phase delay in RFS muscle synergy activity (Fig 6B), whereas the other subjects (Sub4-5) showed a phase delay in FFS muscle synergy activity (Fig 6C). In contrast, all subjects showed a phase delay in FFS muscle synergy activity, especially at lower speeds and with almost no phase shift at higher speeds when the gait cycle was defined by successive right foot liftoff (Fig 6D and 6E). Cappellini et al. [17] also reported differences in phase shift patterns between the timing of the peak activity relative to the foot touchdown and the timing of the peak activity relative to the foot liftoff when comparing the temporal components extracted from walking and running. In the previous study [17], a specific temporal component showed unique phase shifts unlike the phase shifts of the other temporal components. However, in the present study, all six synergies showed similar phase shifts within the same subjects. Thus, at least in the case of controlling foot strike patterns during running, it seems that the activation timings of all muscle synergies adopted are uniformly modulated and that the timings are controlled with reference to foot liftoff rather than foot touchdown.
When comparing the duration of the muscle synergy activity assessed by the FWHM of the main peak (Fig 7A), we found that Syn2 and Syn5, which mainly recruited the plantar flexors in the stance leg, showed a highly significantly longer (p < 0.001) duration of activity in FFS ( Fig 7B). This difference in duration corresponds to the difference in the EMG activity, which included increased activity in the plantar flexors during the terminal swing phase in FFS or an earlier onset of plantar flexor activity. This allows plantar flexion of the ankle joint before touchdown [1,2,[4][5][6][7] and the ball of foot to land first in FFS and absorb the touchdown impact using the plantar flexors with larger ankle planter flexion moments [2,8,27] and Achilles tendon forces [2]. This absorption results in a relatively mild vertical ground reaction force at the touchdown in FFS [1,2,4,27]. When we looked at the FWHM in each subject, we found that only in Sub1, Syn2 and Syn5 showed significantly shorter durations of activity (p < 0.05) in FFS. However, instead of the shorter activity duration of Syn2 and Syn5 in FFS, the weightings of the plantar flexors when the leg landed in Syn1 and Syn4 were significantly larger in FFS in Sub1. However, in the other subjects, Syn1 and Syn4 did not necessarily have larger weightings for the plantar flexors in FFS. As for Sub1, this difference in weightings corresponds to the larger activity of the plantar flexors during the terminal swing phase in FFS.
We also examined the differences in magnitudes of muscle synergy activity defined by the RMS of the activity during the FWHM. We found that Syn2-3 and Syn5-6 showed significantly larger magnitudes in RFS and that Syn1 and Syn4 showed significantly larger magnitudes in FFS (p < 0.05) (Fig 8A). The increased activity of Syn1 and Syn4 in FFS may contribute to more effective absorption of the touchdown impact. The increased activity of Syn2 and Syn5 in RFS seems to originate from the necessity for plantar flexion of the ankle joint to occur after the heel strike [2,8]. The increased activity of Syn3 and Syn6 in RFS seems to be related to the necessity for dorsal flexion of the ankle joint to occur before touchdown [1,2,[4][5][6][7], as is the case for the weightings.
The present results indicated the activation patterns of muscle synergies also showed some changes with an increase in speed (Figs 6, 7C and 8B). However, there was no determinant difference between FFS and RFS in how the activation patterns change with an increase in speed.
In the present study, Sub5 and Sub10 were habitual FFS runners and the other subjects were habitual RFS runners. However, there was no distinct difference between habitual FFS and RFS runners about the composition and activation profile of muscle synergies. This result suggests the possibility that the differences we found in the present study between FFS and RFS would not come from the experiences of individuals but be inherent in individuals.
Neural control for human running
In this study, we extracted six muscle synergies during FFS and RFS running. In previous studies, five temporal components [17] or approximately nine muscle synergies [18] were extracted to explain EMG activity during human running. The differences in the number of synergies between the present study and those reported in previous studies seems to result from a difference in the number of muscles recorded, the extraction method (principal component analysis in Cappellini et al. [17]), or the VAF threshold criterion (overall VAF > 95% and each muscle VAF > 80% in Hagio et al. [18]).
The six muscle synergies extracted during FFS and RFS were similar. However, we found some differences in the muscle weightings and activation patterns. Differences in activation patterns, such as phase shifts and changes in the duration and magnitude, were also observed in previous studies with humans and cats that performed walk-to-run or run-to-walk transitions [17,18] and stepping over various obstacles [28]. The changes in activation patterns are easy to understand because they could be accomplished by controlling the timing, duration, and magnitude of the descending signals from the CNS. Therefore, the changes in the activation patterns between FFS and RFS, and also the changes with an increase in speed observed in this study, seems to reflect the changes in the timing, duration, and magnitude of the descending signals from the CNS. However, the difference in muscle weightings is difficult to understand because the origin of the muscle synergies would include the complexity in the neural circuitry.
By comparing the muscle weightings of muscle synergies between FFS and RFS, we found several significant differences in muscle weightings (p < 0.05) (Fig 5A). In particular, the differences in the weightings for TA before the touchdown in Syn3 and Syn6 were large and common to all subjects except Sub7. To explain these differences in weightings for TA, we suggest two possibilities regarding the neural control mechanisms with reference to the possible corticospinal connections proposed by Krouchev and Drew [28]. Given that a single subpopulation of corticospinal tracts connect with spinal interneurons that project to the motoneurons of all muscles in a synergy, (1) the CNS send signals to entirely different synergies for each foot strike pattern. That is, Syn3 and Syn6 in FFS and those in RFS are entirely different. Alternatively, given that there are several subpopulations of corticospinal tracts that are linked by intercortical connections and discharge simultaneously during running that connect with different populations of spinal interneurons, and each of these interneuron populations innervate only part of the total muscle synergy, that is, a synergy is composed of several subsets, (2) a subset exists in Syn3 and Syn6 that recruit only TA or TA and other muscles, and the CNS selectively activates the subsets.
We cannot say which of these two explanations are correct, but the fact that some muscle weightings, including those for TA, differ significantly for each foot strike pattern suggests that there are more than six synergies or several subsets in the six synergies. In recent studies [28][29][30][31][32], muscle synergies were extracted by focusing on the onset and offset timing of EMG activity. To more precisely reveal the neural control mechanisms underlying human running, more detailed analyses are needed.
Conclusions
Six muscle synergies were extracted from FFS and RFS, and these muscle synergies were similar between FFS and RFS, even though some muscle weightings in the synergies showed significant differences between FFS and RFS. The activation patterns of muscle synergies were also different in terms of their timing, duration and magnitude of the main peak activity. These results suggest that human running is accomplished by using six muscle synergies and that a foot strike pattern is controlled by the change in the timing, duration, and magnitude of the activation patterns of the synergies. The differences in muscle weightings between FFS and RFS suggest the existence of other muscle synergies or subsets of muscle synergies. However, the six synergies extracted in this study could account for the EMG activity during both FFS and RFS running of all subjects and captured the basic muscle coordination pattern underlying human running.
|
2018-04-03T04:31:43.123Z
|
2017-02-03T00:00:00.000
|
{
"year": 2017,
"sha1": "5aaf66ade1be74618196da6bde95da835ac21703",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0171535&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ff49530f5d679c5190209bbbdff4a4eaab8abe29",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
51718365
|
pes2o/s2orc
|
v3-fos-license
|
Evaluating the impact of the DREAMS partnership to reduce HIV incidence among adolescent girls and young women in four settings: a study protocol
Background HIV risk remains unacceptably high among adolescent girls and young women (AGYW) in southern and eastern Africa, reflecting structural and social inequities that drive new infections. In 2015, PEPFAR (the United States President’s Emergency Plan for AIDS Relief) with private-sector partners launched the DREAMS Partnership, an ambitious package of interventions in 10 sub-Saharan African countries. DREAMS aims to reduce HIV incidence by 40% among AGYW over two years by addressing multiple causes of AGYW vulnerability. This protocol outlines an impact evaluation of DREAMS in four settings. Methods To achieve an impact evaluation that is credible and timely, we describe a mix of methods that build on longitudinal data available in existing surveillance sites prior to DREAMS roll-out. In three long-running surveillance sites (in rural and urban Kenya and rural South Africa), the evaluation will measure: (1) population-level changes over time in HIV incidence and socio-economic, behavioural and health outcomes among AGYW and young men (before, during, after DREAMS); and (2) causal pathways linking uptake of DREAMS interventions to ‘mediators’ of change such as empowerment, through to behavioural and health outcomes, using nested cohort studies with samples of ~ 1000–1500 AGYW selected randomly from the general population and followed for two years. In Zimbabwe, where DREAMS includes an offer of pre-exposure HIV prophylaxis (PrEP), cohorts of young women who sell sex will be followed for two years to measure the impact of ‘DREAMS+PrEP’ on HIV incidence among young women at highest risk of HIV. In all four settings, process evaluation and qualitative studies will monitor the delivery and context of DREAMS implementation. The primary evaluation outcome is HIV incidence, and secondary outcomes include indicators of sexual behavior change, and social and biological protection. Discussion DREAMS is, to date, the most ambitious effort to scale-up combinations or ‘packages’ of multi-sectoral interventions for HIV prevention. Evidence of its effectiveness in reducing HIV incidence among AGYW, and demonstrating which aspects of the lives of AGYW were changed, will offer valuable lessons for replication. Electronic supplementary material The online version of this article (10.1186/s12889-018-5789-7) contains supplementary material, which is available to authorized users.
Background
The incidence of HIV is declining or stabilizing in many settings, yet levels of new infections remain unacceptably high among adolescent girls and young women (AGYW) [1]. In almost all countries with generalized epidemics, young women aged 15-24 years are three to five times more likely than their male counterparts to be living with HIV; and in sub-Saharan Africa, 71% of new infections in adolescents are among girls [1]. In a pattern that is consistent across most high prevalence countries, HIV incidence rates rise dramatically between the ages of 15 and 24, and more steeply among females than males [2].
As the world's population of adolescents grows, particularly in east and southern Africa, high incidence among young people will equate to rises in the absolute numbers of new infections [2,3]. The role of adolescent HIV prevention in broader epidemic control is recognized with the growing commitment at global and national levels to prioritise young people in efforts to end the AIDS epidemic. With the ' All In to End Adolescent AIDS' campaign, for example, UNICEF and global partners seek to reduce new HIV infections among adolescents (10-19 years) by 75% between 2015 and 2020, and 'end' the AIDS epidemic among adolescents by 2030 (to fewer than 200 infections per year) [3]. The complexity of this goal is not underestimated, and the multidimensional nature of AGYW vulnerability has to date proven resistant to change by single interventions, sectors or disciplines [4]. The need for combination approaches, and 'packages' of interventions, is increasingly recognised. For example, the recent issue of Disease Control Priorities recommends an essential and cost-efficient 'package' to be delivered in adolescencethrough a mixed approach involving the community, media and health systems [5]. Similarly, a 'call for action' on HIV prevention, 'HIV Prevention 2020' , specifies a combination of primary prevention interventions, to be designed comprehensively and delivered effectively and at scale among populations at greatest risk [6].
The 'DREAMS' Partnership (http://www.dreamspartnership.org/) is an ambitious programme aiming to halt AGYW infections through such an approach: a broad package of evidence-based health, educational and social interventions to be delivered with urgency, high coverage, and where the need is greatest. On World AIDS Day 2014, the United States President's Emergency Plan for AIDS Relief (PEPFAR), the Bill & Melinda Gates Foundation and the Nike Foundation announced the DREAMS investment in 10 countries in sub-Saharan Africa [7]. The goal of DREAMS is to reduce new infections by 40% after two years of intervention among AGYW in sub-national geographic units identified as 'hot-spots' with high HIV burden.
By investing in a multi-component package, DREAMS aims to address the root causes of girls' and young women's vulnerability and improve their lives more broadlytheir value in society and their own esteem, their experiences within relationships, opportunities for schooling and employment, and healthy transitions from adolescence to adulthood. The Partnership aims to ensure that AGYW have an opportunity to live Determined, Resilient, Empowered, AIDS-free, Mentored and Safe lives ('DREAMS') in high-burden settings, through interventions targeting young women, their families, community and male sexual partners [7].
Evidence of DREAMS' effectiveness can stimulate a renewed focus on HIV prevention [6]. We sought the best opportunities to independently evaluate the impact of DREAMS in selected settings, in both general and key population groups, to offer lessons to those implementing DREAMS and to inform future investments in young women's health and well-being. To maximize the potential for generating evidence around DREAMS, four diverse settings in three countries -Kenya, Zimbabwe, and South Africawere chosen for this evaluation, based on the availability of existing demographic and HIV data platforms that would enable credible and timely evaluation. The diversity of settings is an asset to the generalisability of this evaluation, with each site presenting distinct opportunities to generate evidence, but it raises challenges in terms of ensuring comparability and an appropriate level of harmonization across the different settings. This paper presents the overall protocol for the evaluation in all four sites; details of the design unique to the evaluation in Zimbabwe are published elsewhere [8] and site-specific protocols for the other three settings are available upon request.
The impact evaluation is funded independently of DREAMS' implementation and is a collaboration between the
The DREAMS core package
The DREAMS Partnership supports a core package of interventions targeted at AGYW, their families, wider communities, and men characterized to be the sexual partners of AGYW [7]. The package is comprised of evidence-based interventions shown to address HIV risk behaviours, HIV transmission, socio-economic vulnerabilities and gender-based violence (Table 1).
DREAMS investments aim to ensure that AGYW in selected DREAMS areas (sub-national units with high HIV burden) have access to all core package interventions, either through DREAMS funding or additional PEFPAR funding schemes (e.g., for VMMC) or coordination with national government programmes (e.g., for cash transfers or educational subsidies). PrEP is planned for selected countries and sites within countries, as determined by national governments. Guidance for each component of the core package has been provided to countries by PEPFAR, and coverage targets have been set for each sub-national unit by age group, area and intervention [9]. 'Primary' interventions are the priority interventions from the core package that all AGYW in an age group should receive. 'Secondary' interventions are needs-based interventions from the core package, recommended for specific sub-populations of AGYW based on additional circumstances, e.g., condom provision for AGYW who are sexually active; post-violence care for those who have experienced sexual violence. Additional file 1: Table S1 summarises the primary and secondary interventions in each country setting.
The way in which the various DREAMS components are rolled out and coordinated, and the timing of implementation will differ in each evaluation site, depending upon: the capacity and readiness of Implementing Partners (IPs) contracted by the United States Government to implement DREAMS services; the timing of contractual arrangements with IPs; negotiations with national governments; finalization of sex education curricula for schools; and other contextual factors. Given the heterogeneity in DREAMS' delivery, we will monitor how, when, by whom, and to whom, components of the DREAMS package are delivered, in the process evaluation activities described below [10].
Aims & objectives
This protocol outlines the plans to evaluate the impact of the DREAMS programme at the individual and population levels in four sub-Saharan African settings representing diverse epidemiological and social contexts. The evaluation aims to answer three main questions: In the South African and two Kenyan sites, the impact of DREAMS, including community, facility, and school-based interventions, on HIV infection rates and other key outcomes will be measured in the general population. In Zimbabwe, the impact of a combination DREAMS package which includes an offer of oral PrEP, alongside other interventions, on HIV infection rates and other key outcomes will be evaluated among young women who sell sex (YWSS) [8].
Theory of change
We hypothesize that DREAMS will reduce incidence of HIV among AGYW through three related pathways of protection ( Fig. 1): 1) Social Protection: DREAMS will reduce social and economic vulnerability of AGYW by helping them Psycho-social mediators of change, such as empowerment and self-efficacy, are hypothesised to link uptake of DREAMS interventions by AGYW to the three pathways of protection and ultimately to a reduction of HIV incidence among AGYW.
The impact of DREAMS interventions will depend on the scale and intensity at which they are delivered and whether they are accessed. Through the process evaluation, we will assess the roles of supply of, demand for, and adherence to, DREAMS interventions, as per the conceptual framework for HIV prevention cascades [11]. More specifically, we will investigate the extent to which interventions in the core package work in combination to enhance the supply of prevention products/programmes, to limit barriers to access, and to create and enhance opportunities and motivations for AGYW and young men to adopt and adhere to them [10,12].
Study settings
To maximize the potential for generating evidence on the impact of the DREAMS Partnership, four settings in three countries were chosen, each with existing demographic and/or HIV data platforms. In the South African and two Kenyan sites, this evaluation will make use of long-standing longitudinal health and demographic surveillance systems (HDSS), while in Zimbabwe a national programme to provide HIV-related services to sex workers will serve as the evaluation's starting point. The HDSS provide direct measurement of trends in HIV Fig. 1 Theory of change to guide the impact evaluation incidence as well as demographic, sexual behavior, and linked clinical data to evaluate DREAMS' impact. Data from the national programme for sex workers in Zimbabwe provide estimates of past HIV incidence and a platform from which to identify and reach AGYW at highest risk of HIV.
uMkhanyakude, KwaZulu-Natal, South Africa
The Africa Health Research Institute (AHRI; formerly the Africa Centre for Population Health) in uMkhanyakude, KwaZulu-Natal, has followed a total of~160,000 individuals from 11,000 geocoded households from 2000, in a 428 km 2 surveillance area. Demographic surveys have been conducted three times a year, with annual collection of individual socio-economic, behavioral, and HIV service uptake data alongside collection of dried blood spots for laboratory testing for HIV infection. AHRI has a memorandum of understanding with the Department of Health that enables linkage of the population surveillance data to the primary care electronic record systems in the local health care facilities (2010 onwards) and to the TIER.Net electronic record system for HIV treatment (2004 onwards), as well as to all clinical laboratory test results of patients in the sub-district through linkage with the National Health Laboratory Systems (NHLS) database (since 2004). Since 2017, AHRI has embedded clinical research assistants in all primary health care settings in the surveillance area. They electronically capture details on the reason for attendance, and these clinic attendance data are linked to the demographic surveillance data.
Gem sub-county, Siaya county, Kenya
The Kenya Medical Research Institute (KEMRI)/Centers for Disease Control and Prevention (CDC) HDSS site in Siaya County of western Kenya covers a total population of approximately 223,000 people and 55,000 households, with demographic surveys three times a year. Siaya County includes three sub-counties: Rarieda, Siaya, and Gem. For the evaluation of DREAMS, the KEMRI/CDC platform in Gem will be used, as this is where HIV surveillance has been conducted most frequently and recently, i.e., three behavioral surveys and four rounds offering HIV testing services in 2011/2012, 2013/14, 2016, and 2017, among a random sample of one-quarter of all households and all resident members of those households as an open cohort [13].
Nairobi county, Kenya
The African Population and Health Research Center (APHRC) began the first urban-based longitudinal HDSS platform in sub-Saharan Africa, known as the Nairobi Urban Health and Demographic Surveillance System (NUHDSS) in 2002 in two informal slum settlements of Nairobi: Korogocho and Viwandani [14]. The NUHDSS covers approximately 65,000 people and 24,000 households in 14 villages with quarterly sociodemographic surveys and annual surveys (2012-2016) on fertility preferences. As the last HIV serological survey was conducted in 2007, HIV incidence will not be measured in this setting. This site is conducting formative research with 10-14 year olds for the Global Early Adolescent Study (GEAS) [15], and will therefore be able to include impact evaluation analyses from age 10 (unlike the other 3 evaluation settings which will focus on AGYW from age 13).
Zimbabwe
The Zimbabwe evaluation will capitalise on a national programme that provides HIV prevention and sexual and reproductive health services to female sex workers (FSW) in Zimbabwe, known as "Sisters with a Voice". 'Sisters' began in 2009 and provides free access to HIV testing, STI treatment, family planning, HIV prevention education, condoms and legal services to over 65,000 women across 36 sites [16]. Around 40% of FSW accessing the programme are younger than 25 years. The evaluation will include six districts in which the Sisters programme is active: two in which DREAMS+PrEP are delivered (Bulawayo and Mutare) and four comparison sites in which no DREAMS interventions are planned (Karoi, Chinhoyi, Zvishavane and Kwekwe). Comparison sites were selected for their comparability to intervention sites in terms of population size, urban location, and presence of a Sisters with a Voice site with relatively high client volume [8].
Study design
The DREAMS package of interventions prioritises AGYW, but also includes 'contextual' components directed at young men, the families of AGYW, and the wider community (Table 1). Consequently, the overall impact of DREAMS interventions should be measured at community, or "population", level. For example, if HIV incidence among AGYW is reduced following DREAMS interventions then this is likely to have been achieved through increased uptake of services and behaviour change among men as well as among AGYW themselves.
In the evaluation settings in which impact is measured in the general population (Nairobi, Gem, and Umkhanyakude), the primary way in which the impact of the DREAMS programme will be measured is through comparisons of HIV incidence (Gem and Umkhanyakude only), and HIV-related outcomes (in all 3 settings) across calendar time periods before, during early roll-out, and after DREAMS programmes have been established. This has the disadvantage that changes over time may be due to factors other than DREAMS interventions, but the advantage is that comparisons are made within the same setting and population, at multiple time points. A cluster-randomised trial design was not possible because the priority of the DREAMS Partnership was for rapid roll-out of DREAMS investments to geographic areas specifically chosen for their relatively high HIV prevalence, rather than to a randomly selected sample of areas [9].
For additional evidence of plausibility and impact, changes in outcomes will be assessed by estimating dose-response relationships between DREAMS uptake and outcomes at the small-area level [17]. 'Layering' of multiple interventions or services from the DREAMS core package, through integration and referrals, will be a key way of quantifying dose, for example, as the percentage of AGYW who received multiple intervention components and/or the minimum package designed for their age.
In Zimbabwe, the main way in which impact will be measured is through a comparison between 2 districts which will receive DREAMS interventions with 4 districts that will not. This alternative study design was chosen because the study population is young women who sell sex (not the general population of AGYW), who are at high risk of HIV acquisition in all 6 study districts, and a respondent-driven sample of YWSS will be enrolled into a cohort study and followed up for two years [8].
As well as measuring the overall impact of DREAMS interventions at population-level, in the Kenyan and South African settings a random sample of AGYW will be enrolled into a "nested" (within the total population) cohort study and followed up for two years, in order to collect more detailed data on awareness and uptake of interventions, psycho-social "mediators of change", and the three hypothesized pathways of change (social protection, sexual behaviour, and biological protection), and thus enable in-depth analysis of pathways of change.
To achieve the above, the design comprises three main components: 1) Population-based surveillance systems: In uMkhanyakude, and Gem and Nairobi in Kenya, existing surveillance systems that link HIV, demographic, behavioural, and service uptake data, will be used and enhanced in order to assess the population-level effects of DREAMS over time (in relation to the timing of DREAMS roll-out) among AGYW, men who are in the age range that includes most of the partners of AGYW, and also older adults who may receive DREAMS interventions that are directed at the wider community. In uMkhanyakude and Gem, linkage to HIV clinic data is possible and geospatial data are available, and in uMkhanyakude HIV phylogenetics data are also available. We will utilise historical (for baseline) and prospective data (for comparison) from the population-based systems (see Fig. 2 for example in South Africa). 2) Cohorts of AGYW, randomly selected from the total population: For detailed study of the pathways by which DREAMS interventions influence HIV risk, we will establish cohorts of AGYW within each evaluation site. Cohort enrolment will be completed during the early roll-out of DREAMS interventions, during 2017, and those enrolled will be followed prospectively, at~12 and~24 months later. There will be more detailed and comprehensive data collection on uptake of DREAMS interventions, mediators of change, and socio-economic, behavioural and health outcomes than is possible in the total population. In the Zimbabwe setting, the same cohort used to measure the overall impact of DREAMS interventions on HIV incidence among YWSS can be used to analyse pathways of change.
In uMkhanyakude, Nairobi, and Gem, nested cohorts of AGYW aged 13-22 will be selected using the HDSS census population as the sampling frame. A random sample of AGYW stratified by age (13-17 and 18-22 years) and area of residence will be selected. The Nairobi evaluation will further recruit a sample of young girls from age 10 (building on the Global Early Adolescent Study pilot in this setting), resulting in three age groups for the cohorts: 10-14, 15-17, and 18-22 years.
In Zimbabwe, the network-based recruitment strategy used to identify and refer YWSS to the DREAMS (intervention sites) or Sisters (in comparison sites) programme, is described in detail elsewhere [8]. This recruitment strategy is appropriate in the absence of a sampling frame and when the population of interest is primarily hidden as is the case among young women who sell sex in Zimbabwe.
3) Process evaluation:
In all four DREAMS evaluation sites, a process evaluation will use both qualitative and quantitative methods to describe DREAMS implementation in context, and to challenge and interrogate causal assumptions in the theory of change [18]. To understand DREAMS' influence on supply, demand, and uptake of interventions (the 'prevention cascade') [12], we will investigate reach and coverage, views and experiences of DREAMS components, what helps and hinders successful implementation and uptake, and to what extent implementation is influenced by differing social and epidemiological contexts. Specifically, we will explore fidelity (whether all components of DREAMS were implemented on schedule and as planned), feasibility (identifying barriers and facilitators to implementation), acceptability (how staff and beneficiaries perceive and value the intervention), and quality (measured by both objective and subjective criteria). In the process, we will aim to identify unexpected pathways and consequences, and who is left out (equity).
The process evaluation will include five methodologies: a. Qualitative longitudinal study of young people's experiences. Young people's experience of and "journeys" through DREAMS, including barriers and facilitators to what works in practice will be tracked in detail for a small cohort of 20 AGYW and, in HDSS sites only, 20 young men in each site. These cohortsof DREAMS beneficiaries sampled purposively from the general population (for males) and AGYW cohorts (for females)will be followed longitudinally and offered a range of ways to share their experiences in real-time including use of diaries and informal interviews. b. Small group discussions. The experiences of AGYW's families, parents, partners, and broader communities will be explored through focus and family group discussions. Group discussions in each evaluation setting will help investigate understanding and experience of DREAMS and its components and whether social norms and attitudes are influenced by the interventions. c. Rapid participatory community mapping. This method will be used in the DREAMS areas to quickly gain a broad understanding of the social context for adolescents and young people and the reach and coverage of the AGYW services at baseline and after two years of DREAMS intervention. The mapping will use rapid appraisal methods with participant observation and short interviews. d. Interviews with key informants in delivery organizations. Up to 20 individuals responsible for implementing DREAMS activities in each setting will be interviewed to explore views and experiences of, and barriers and facilitators to, DREAMS activities each year. Local health care workers and community and youth leaders will also be interviewed. e. Observations of DREAMS interventions delivered in context. Using checklists, structured observations will record the ways in which DREAMS is delivered and received, and with what quality and intensity (using DREAMS standard operating procedures for reference). Observations will be made of DREAMS interventions in a range of settings such as schools, safe spaces, and health facilities.
Measurement and analysis of key variables
For component 1, the population-based surveillance in the general population of AGYW and men, the primary comparison is across three time periods: pre-DREAMS, during the early roll-out of DREAMS interventions, and post-DREAMS. The aim is to know whether HIV incidence among AGYW aged 15-24 years (the primary outcome, directly observed through repeat testing) and key secondary outcomes, measured among both AGYW and men, have changed over time at population-level. The primary and secondary outcomes are summarized in Table 2, with secondary outcomes lying on the three pathways of central interest that are between the interventions and HIV incidence: social, behavioural (sexual), and biological protection. The extent to which any changes can be attributed to DREAMS interventions will be assessed in the context of other secular changes, and the findings of the process evaluation. For example, given the background scale-up of universal testing and treatment for HIV, our findings on HIV incidence trends among AGYW will be placed in the context of trends in HIV incidence and the uptake of HIV testing and treatment among those who are not directly targeted for DREAMS HIV prevention interventions. For component 2, the nested cohorts of AGYW designed to measure pathways of change, the primary exposure is uptake of DREAMS interventions among individual AGYW, considering single components as well as the number and combination of components of the core package that were received. The extent to which AGYW are aware of, invited into, and participate in DREAMS interventions will be summarized, using the core package and primary/secondary interventions as frameworks to categorise interventions and standardize across the settings. (See Table 3 for proposed, a priori measures of DREAMS uptake.) Comparisons of mediators and secondary outcomes will then be made among AGYW according to their uptake of DREAMS interventions.
'Mediators of change' (Fig. 1 and Table 3) will be measured at the individual level, representing the DREAMS Partnership's commitment that the interventions will increase determination, resilience, empowerment, social assets, and personal safety among AGYW.
Analysis plan Analysis of primary outcome
In the South African and western Kenyan sites, we will analyze population level change in directly-observed HIV incidence over time, with all data available from the know they are HIV+ or have tested HIV negative in the past 12 months c do not want a child in next 2 years or ever but not using a method to prevent pregnancy d uMkhanyakude only e Used condom at last sex (in past 12 months); Any condomless sex in last 1 month / last 12 months and in the last 3 months Table 3 Outcomes, mediators of change in the outcomes, and uptake of DREAMS interventions, to be captured via nested DREAMS cohorts of adolescent girls and young women (in Kenya and South Africa) and young women who sell sex (in Zimbabwe) AGYW Our main comparison will be between the post-DREAMS time period, and the two earlier time periods.
In Zimbabwe, we will compare HIV incidence between sites where DREAMS+PrEP has and has not been implemented, over two years of follow-up. In the absence of randomization, the analysis will adjust for known individual-level determinants of HIV incidence.
Analysis of secondary outcomes
Secondary outcomes will be captured via the HDSS in the three surveillance sites ( Table 2) as well as via the cohorts of young women in all four settings (Table 3). We will analyze population and individual level change, respectively, over time in these outcomes, using the calendar time periods described above for HIV incidence.
Analysis of causal pathways
To explore whether the hypothesized "mediators of change" lie on the causal pathway between the DREAMS interventions and HIV-related (secondary) outcomes (Table 2), longitudinal data collected from the nested AGYW cohorts at three time points over two years will be used (enrolment; 12 months; 24 months). The causal analysis will involve four main steps:
1) Analysis of whether uptake of DREAMS
interventions is related to an improvement in the "mediators of change", between enrolment and follow-up at 12 and 24 months 2) Analysis of whether uptake of DREAMS interventions is related to "lower-risk" sexual behaviour, social protections, and biological protections, at follow-up at 12 and 24 months 3) Analysis of whether improved levels of the "mediators of change" are related to "lower-risk" sexual behavior, social protections, and biological protections, at enrolment and during follow-up 4) Causal mediation analysis of the effect of DREAMS interventions on secondary outcomes (biological, behavioral, social), i.e., the extent to which any effect of DREAMS interventions on secondary outcomes after 12 and 24 months of follow-up is achieved through their effect on the "mediating" variables.
These analyses will adjust for important confounding variables measured at enrolment (for example, household socio-economic position) and an AGYW's "propensity to receive" DREAMS interventions. This is because the criteria used by DREAMS implementing partners to select AGYW who will be invited to participate in the programme are also likely to influence the outcomes that are to be measured, i.e., they may be risk factors for HIV incidence and the secondary outcomes, and so may confound observed associations between uptake of interventions and these outcomes. For example, Implementing Partners are prioritising girls considered to have the highest risk of HIV infection (such as, those who are living in relatively poor households, are orphans, are out of school, or are young mothers, as identified via the 'Girl Table 3 Outcomes, mediators of change in the outcomes, and uptake of DREAMS interventions, to be captured via nested DREAMS cohorts of adolescent girls and young women (in Kenya and South Africa) and young women who sell sex (in Zimbabwe) (Continued) In South Africa and Gem, Kenya, HIV incidence will be estimated from the larger population-level studies (see Table 2), for adequate statistical power (see 'Sample Sizes' below) b Age at enrolment, to be followed over two years c 'KP_Prev' is the PEPFAR indicator used to measure DREAMS package for key populations and includes condom promotion, HIV testing services, and social asset building [22] Roster' enumeration exercise and by community-based organisations) [19]. The characteristics that predict exposure to DREAMS will be identified using HDSS data, and the information across these characteristics will be synthesized into a single "propensity to be exposed to DREAMS" score (equivalent to an estimated probability of exposure to DREAMS, and taking values between 0 and 1), and AGYW will be categorized (stratified) into 4-5 groups on the basis of their propensity score. The association between uptake of DREAMs interventions and socio-economic and behavioural outcomes will be adjusted for the propensity score (in categories) as well as the individual characteristics that are the most important confounding variables.
Qualitative and process evaluation data
Analysis of the concurrent process evaluation data will follow the UK Medical Research Council guidance for process evaluation of complex interventions [18]. Data collected using the range of different methods, detailed above, will be carefully integrated to address the following process evaluation questions: How is delivery of DREAMS achieved and what is actually delivered? (Implementation) How does the delivered intervention produce change? (Mechanisms of impact) How does context affect implementation and outcomes? (Context) The mechanisms to be scrutinised include increased demand for (awareness and acceptability), supply of (accessibility and availability), and adherence to (ongoing adoption) the interventions in the DREAMS core package, as per the HIV prevention cascade framework, to achieve coverage among the target populations [11].
Sample sizes
HIV incidence -the primary endpoint for the impact evaluation -will be measured using HDSS data in uMkhanyakude and Gem, and data from the cohorts of young women who sell sex in Zimbabwe. In uMkhanyakude, HIV incidence was~6 per 100 person-years among AGYW aged 15-24 years old during 2011-2015, and~4.6 and~7.5 per 100 person-years among those aged 15-19 and 20-24 years respectively, based on a total of 7687 person-years of follow-up. Assuming, conservatively, that there will be~3000 person-years of follow-up during 2017-2019 (40% of 7687), [20] then study power is > 90% to show an overall reduction in HIV incidence of 30, and > 90% in sub-group analysis of AGYW aged 15-19 and 20-24 years to show a 40% reduction in HIV incidence (Fig. 3a). In Gem, western Kenya, HIV incidence was~0.7 per 100 person-years among AGYW aged 15-24 years during 2011-2016, based on a total of 8236 person-years of follow-up [21]. (Whereas sero-surveys are conducted annually in uMkhanyakude, they are less frequent in Gem: data are available from three sero-surveys in Gem between 2011 and 2016. Sero-conversions observed during this period, including those estimated from the 2016 survey, will be considered 'pre-DREAMS' because they are unlikely to be influenced by DREAMS by this early stage of implementation.) During 2017-2019, all AGYW in Gem will be sought for participation in the HDSS; with a participation rate of 70% in each year, and an annual out-migration rate of 20%, it is estimated that there will be~9000 person-years of follow-up during 2016-2019. Study power is low to show a change in HIV incidence in sub-group analyses of AGYW aged 15-19 and 20-24 years, but~80% to show an overall reduction of 45, and 90% to show an overall reduction of 50% in HIV incidence (Fig. 3b).
The same sample sizes will also allow us to detect meaningful change in secondary outcomes that are more common among AGYW than HIV incidence, including knowledge of HIV status and use of condoms (Additional file 1: Table S2). Secondary outcomes will also be measured at the population level for men including the proportion of males who know their HIV status, uptake of voluntary male medical circumcision among HIV-negative males, and uptake of anti-retroviral therapy among HIV-positive males (Additional file 1: Table S3).
For the nested cohorts in the three HDSS sites, selected from the HDSS sampling frames in South Africa and Western Kenya, a minimum of 400 girls in each of the 13-17 and 18-22 year age groups will allow us to analyze causal relationships between key mediators and key outcomes (Additional file 1: Table S4), and similarly to analyze causal relationships between uptake of DREAMS interventions and key outcomes. Over-sampling by 20% will cater for non-response and loss-to-follow-up. In Nairobi, for the additional cohort of 400 younger girls aged [10][11][12][13][14], the sample size of 400 will allow us to explore pathways between uptake of DREAMS interventions, key mediators and age-appropriate outcomes like school completion.
In Zimbabwe, network-based recruitment will be used to enroll 18 to 24-year-old women who sell sex from intervention and comparison sites. Based on the assumption that 20% of YWSS identified through this process will test HIV-positive and 30% of HIV-negative YWSS will be lost to follow-up over 24 months, it is estimated that 1200 women from the intervention and comparison sites (2400 total) will be needed to detect a 40% reduction in HIV incidence [8]. This sample size is also sufficient to explore pathways linking DREAMS to secondary outcomes.
Discussion
DREAMS is a direct responseprobably the most ambitious yetto the call for combinations or 'packages' of prevention approaches to address the multidimensional nature of HIV risk. PEPFAR and its DREAMS partners have set bold targets and allocated significant resources to urgently reduce new HIV infections. It is important to learn from these efforts, but evaluating such a multi-component programme is complex.
In the first instance, a randomised design was not possible because DREAMS sites were not selected at random, but chosen for their high burden of HIV prevalence and incidence. Furthermore, interventions in the core package cannot be rolled out at random, as implementation will begin with the interventions already in place (e.g., through pre-existing PEPFAR and national government programmes) and this is context-specific. Neither was a controlled design possible, given the numerous differences (non-comparability) across sub-national geographic units, as well as the absence of existing surveillance/data platforms in other areas, to allow for comparable data collection.
We have proposed the most rigorous design feasible in the absence of randomisation. Community-wide data platforms allow us to evaluate DREAMS in large, general populations, and provide the frameworks for randomly selected, representative samples of young people for detailed, nested studies. The range of data available (HIV, demographic, social, spatial, clinical) can be linked to maximize the range and depth of inquiry. In all settings, detailed longitudinal data will allow us to investigate pathways and explore change processes in the context of DREAMS roll-out (and minimize recall and reporting bias), and to account for a range of potential confounding variables. Historical measures of HIV incidence and other outcomes will provide baseline trend data, to help distinguish the impact of DREAMS from existing trends due to other factors.
In this study, estimates of HIV incidence will be directly observed through repeat testing and can be compared to levels of newly diagnosed infections in pregnant women tested in antenatal clinics serving the study populations, since the latter is a method by which PEPFAR will assess programme impact in DREAMS sites (e.g., through monitoring of ante-natal care testing data as part of Prevention of Mother to Child Transmission programmes) [22]. Tracking new HIV diagnoses can be a helpful complement to incidence rates, offering insight into the reach and yield of HIV testing services.
A particular challenge of this evaluation is also one of its main strengths: harmonizing across diverse settings. Each setting presents unique opportunities to deepen the understanding of AGYW's experience of DREAMS, but coordinating the design and measures across settings is not always possible. For example, in Nairobi, we have an opportunity to understand DREAMS impact in an urban setting and to capitalize on the site's extensive experience with young people, to track pathways through DREAMS from a very young age. In this setting, however, we will not be able to observe change in HIV incidence (except indirectly, by monitoring antenatal clinic outcomes of HIV testing). Gem offers a rural comparison to Nairobi, where we can measure the added value of DREAMS following wide-scale roll-out of anti-retroviral therapy and VMMC. In uMkhanyakude, we have an opportunity to evaluate DREAMS in a setting where HIV risk has remained persistently high and relatively few HIV prevention interventions have targeted AGYW prior to DREAMS. In Zimbabwe, no HDSS framework exists, but we will gain insight into the DREAMS+PrEP package, and understand HIV and HIV-related outcomes among an exceptionally vulnerable group of young women.
Working with existing research platforms offers infrastructure, experience and data prior to DREAMS introduction. However, it also means that data collection cannot be conducted at the same time in each site, and this must be taken into account in analyses and interpretation. Also, community sensitisation efforts are planned in each site to avoid research fatigue and maximize data quality and validity in settings with frequent and/or concurrent surveys. Furthermore, in each setting, DREAMS is delivered through different models of collaboration and implementation, and changes in implementation will occur in each setting over time. This heterogeneity and its influence on outcomes will need to be understood through careful process evaluation.
With a portfolio of evaluations in diverse settings, the sum can be greater than its parts. Learning within and across sites, we can document the role of context and adaptation in DREAMS impact, to inform replication in a range of other diverse settings. The effectiveness of the individual interventions in the DREAMS core package have been demonstrated in previous trials and evaluations. We now need to understand how they can be combined for maximum reach, scale and impact. This evaluation will investigate this in 'real-world' , non-trial conditions, providing immediately relevant and timely lessons for future policy and programming.
Additional file
Additional file 1: Table S1. Summary of primary and secondary packages of interventions in each country setting, by age where applicable. Table S2. Estimated sample sizes to measure change over time in secondary outcomes among AGYW. Table S3. Estimated sample sizes to measure change over time in key outcomes among males. Table S4. Estimated sample sizes to assess the causal effect of key mediators of change on secondary outcomes (via cohorts of adolescent girls and young women). (DOCX 58 kb)
|
2018-07-26T11:39:39.931Z
|
2018-07-25T00:00:00.000
|
{
"year": 2018,
"sha1": "415342835761d77fe1b37206dd4aee8d42a64588",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-018-5789-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "415342835761d77fe1b37206dd4aee8d42a64588",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
210838591
|
pes2o/s2orc
|
v3-fos-license
|
Invariant density adaptive estimation for ergodic jump diffusion processes over anisotropic classes
We consider the solution X = (Xt) t$\ge$0 of a multivariate stochastic differential equation with Levy-type jumps and with unique invariant probability measure with density $\mu$. We assume that a continuous record of observations X T = (Xt) 0$\le$t$\le$T is available. In the case without jumps, Reiss and Dalalyan (2007) and Strauch (2018) have found convergence rates of invariant density estimators, under respectively isotropic and anisotropic H{\"o}lder smoothness constraints, which are considerably faster than those known from standard multivariate density estimation. We extend the previous works by obtaining, in presence of jumps, some estimators which have the same convergence rates they had in the case without jumps for d $\ge$ 2 and a rate which depends on the degree of the jumps in the one-dimensional setting. We propose moreover a data driven bandwidth selection procedure based on the Goldensh-luger and Lepski (2011) method which leads us to an adaptive non-parametric kernel estimator of the stationary density $\mu$ of the jump diffusion X. Adaptive bandwidth selection, anisotropic density estimation, ergodic diffusion with jumps, L{\'e}vy driven SDE
Introduction
Diffusion phenomena arise from a Markovian stochastic modeling and as a solution of SDEs with or without jumps in many areas of applied mathematics. Their investigation concerns different mathematical branches and therefore research interest in questions such as existence and regularity of solutions of stochastic differential equations has constantly grown over the past years. The study of the statistical properties of diffusion models has emerged since such models are widely used for applications in finance and biology. Diffusion processes with jumps, in particular, have been used in neuroscience for instance in [8] while in finance they have been introduced to model the dynamic of asset prices [13], [20], exchange rates [3], or volatility processes [2].
In this work, we aim at estimating adaptively the invariant density µ associated to the process (X t ) t≥0 , solution of the following multivariate stochastic differential equation with Levy-type jumps: where W is a d-dimensional Brownian motion andμ a compensated Poisson random measure with a possible infinite jump activity. We assume that a continuous record of observations X T = (X t ) 0≤t≤T is available. Practical concerns raise new questions such as the dependence of statistical features on the observation scheme: it is, for the applications, a subject of interest to consider basic questions in different observation scenarios. From a theoretical point of view, it is however also of substantial interest to work under the assumption that a continuous record of the diffusion considered is available. In this framework, it belongs to the folklore of the statistics for stochastic processes without jumps that the invariant density can be estimated under standard nonparametric assumptions with a parametric rate (cfr Chapter 4.2 in [15]). The proof relies on the existence of diffusion local time and its properties and so such a result is restricted to the one -dimensional setting.
Regarding the literature on statistical properties of multidimensional diffusion processes in the continuous case, an important reference is given by Reiss and Dalalyan in [7], where they show an asymptotic statistical equivalence for inference on the drift in the multidimensional diffusion case. As a by-product of the study they prove, under isotropic Hölder smoothness constraints, convergence rates of invariant density estimators for pointwise estimation which are faster than those known from standard multivariate density estimation. Their result relies on upper bounds on the variance of additive diffusion functionals, proven by an application of the spectral gap inequality in combination with a bound on the transition density of the process. Still in the continuous case, in a recent paper, Strauch [24] has extended their work by building adaptive estimators in the multidimensional diffusion case which achieve fast rates of convergence over anisotropic Hölder balls. The notion of anisotropy plays an important role. Indeed, the smoothness properties of elements of a function space may depend on the chosen direction of R d . The Russian school considered anisotropic spaces from the beginning of the theory of function spaces in 1950-1960s (in [22] the author takes account of the developments). However, results on minimax rates of convergence in classical statistical models were rare for a lot of time. The question of optimal bandwidth selection based on i. i. d. observations for density estimation with respect to sup -norm risk was not completely solved until the pretty recent developments gathered in [17]. The methodology detailed in Goldenshluger and Lepski [11] inspired the datadriven selection procedure of the bandwidth of the kernel estimator proposed by many authors such as Strauch in [24] and Comte, Prieur and Samson in [6] and provides the starting point for the study of our adaptive procedure as well.
In this paper, we provide a non-parametric estimator of the invariant density µ with a fully data-driven procedure of the bandwidth. We propose to estimate the invariant density µ by means of a kernel estimator, we therefore introduce some kernel function K : R → R. A natural estimator of µ at x ∈ R d in the anisotropic context is given bŷ where h = (h 1 , ..., h d ) is a multi -index bandwidth, which will be chosen through the data-driven selection procedure. We first prove some bounds on the transition semigroup and on the transition density that will be useful to find sharp upper bounds on the variance of integral functionals of the diffusion X. Through them, we find the following convergence rates for the pointwise estimation of the invariant density of our diffusion with jumps: where α ∈ (0, 2) is the degree of jumps activity of the Lévy process andβ is the harmonic mean smoothness of the invariant density over the d different dimensions. We remark that the rate we find for d ≥ 3 is the same Strauch found in [24] in absence of jumps, which is also the rate gathered in [7] up to replacing the mean smoothness with β, the common smoothness over the d dimensions.
The case d = 1 evidences the main difference between what happens with and without jumps. Indeed, if in the continuous case the optimal convergence rate was 1 T , now the rate we found is between log T T and (log T ) 3 2 T . It is worth noting here that such a convergence rate is not necessarily the optimal one in the jumps framework. As a matter of fact in the continuous case different approaches, as the diffusion local time, have been used to get the rate 1 T ; we do not exclude the possibility that also in presence of jumps the implementation of other methods could lead to a convergence rate faster than the one presented here above for the mono-dimensional setting. To complete the comparison to the continuous framework, we recall that in both [7] and [24] the convergence rate found in the case d = 2 was (log T ) 4 T and so the convergence of the estimator seems being faster in presence of jumps than without them. The reason why it happens is that, to find the convergence rate, the transition density (p t ) t∈R + is needed to be upper bounded. If in [7] the authors assume to have p t (x, y) ≤ c(t − d 2 + t 3d 2 ) and in [24] Nash and Poincaré inequalities lead Strauch to a bound analogous to the one presented in [7]; Lemma 1 below provides us a different bound which guides us to a different rate. However, in absence of the term t 3d 2 in the assumption before, which is the case for example considering a bounded drift, also in the continuous setting the convergence rate turns out being, as in the jump -diffusion case, equal to log T T . It is moreover worth noting here that, if in [7] and [24] they needed to assume the existence of the transition density and a bound on it, we derive them through Lemma 1: all the assumptions we need are directly on the model (1). We no longer need to assume that the drift is of the form b = −∇V (where V ∈ C 2 is referred to as potential) as it was in both [7] and [24].
After having provided the rates of convergence of the estimators we finally propose, in the case d ≥ 3, a fully data-driven selection procedure of the bandwidth of the kernel estimator, inspired by the methodology detailed in Goldenshluger and Lespki [11]. The method has the decisive advantage of being anisotropic: the bandwidths selected in each direction are in general different, which is coherent with the possibly different regularities with respect to each variable. Finally, we prove that for the selected optimal bandwidth the following estimation holds: where we have denoted as · A the L 2 norm on A, a compact subset of R d and as H T the set of candidate bandwidths; B(h) is a bias term and V (h) an estimate of the variance bound. We remark that the estimator leads to an automatic trade -off between the bias and the variance: the second term on the right hand side of (2) is indeed negligible compared to the first one. Moreover, as the rate optimal choice h(T ) belongs to the set of candidate bandwidths H T , (2) turns out being whereβ is the mean smoothness of the invariant density. The paper is organised as follows. We give in Section 2 the assumptions on the process X. In section 3 we define the anisotropic Hölder balls and we construct our estimator. Section 4 is devoted to the statements of our main results; which will be proven in the two following sections. In particular, we show how we get the convergence rates for the invariant density estimation in Section 5 while in Section 6 we prove the estimator we find through our bandwidth selection procedure is adaptive. Some technical results are moreover presented in the Appendix.
Model Assumptions
We consider the question of nonparametric estimation of the invariant density of a d-dimensional diffusion process X, assuming that a continuous record X T = {X t , 0 ≤ t ≤ T } up to time T is observed. This diffusion is given as a strong solution of the following stochastic differential equations with jumps: Brownian motion and µ is a Poisson random measure on [0, T ] × R d associated to the Lévy process L = (L t ) t∈[0,T ] , with L t := t 0 R d zμ(ds, dz). The compensated measure isμ = µ −μ; we suppose that the compensator has the following form:μ(dt, dz) := F (dz)dt, where conditions on the Levy measure F will be given later. The initial condition X 0 , W and L are independent.
In what follows, we suppose the following assumptions hold: A1: The functions b(x), γ(x) and a(x) are globally Lipschitz and, for some c ≥ 1, where I d×d denotes the d × d identity matrix. Denoting with |.| and < ., . > respectively the Euclidian norm and the scalar product in R d , we suppose moreover that there exists a constant c > 0 such that, ∀x ∈ R d , |b(x)| ≤ c.
As we will see in Lemma 2 below, Assumption 2 ensures, together with the last points of Assumption 3, the existence of a Lyapunov function. The process X admits therefore a unique invariant distribution π and the ergodic theorem holds. We assume the invariant probability measure π of X being absolutely continuous with respect to the Lebesgue measure and from now on we will denote its density as µ: dπ = µdx. For any set S ⊂ R d we define µ(S) := S µ(x)dx and, by abuse of notation, we will write For each g ∈ L 1 (µ) we denote as g L 1 (µ) := µ(|g|) the L 1 norm with respect to µ on R d . The transition semigroup of the process X on The transition density is denoted by p t and it is such that P t f (x) = R d f (y)p t (x, y)dy; we will see in Lemma 1 that it exists.
The process X is called β -mixing if β X (t) = o(1) for t → ∞ and exponentially β -mixing if there exists a constant γ > 0 such that β X (t) = O(e −γt ) for t → ∞, where β X is the β -mixing coefficient of the process X as defined in Section 1.3.2 of [9]. We recall that, for a Markov process X with transition semigroup (P t ) t∈R + and L(X 0 ) = η, the β -mixing coefficient of X is given by where ηP t = L(X t ) and λ stands for the total variation norm of a signed measure λ.
For the exponential mixing property of general multidimensional diffusions, the reader may consult Theorem 3 of Kusuoka and Yoshida [14] for the α -mixing; Meyn and Tweedie [21], Stramer and Tweedie [23] and Veretennikov [25] for the β -mixing. The mixing property for general diffusions with jumps has been investigated by Masuda in [19]. Now we mention the notion of exponential ergodicity in the sense of [21].
Definition 1. We say that X is exponentially ergodic if it admits a unique invariant distribution π and additionally if there exist positive constants c and ρ for which, for each f centered under µ, We will see in Lemma 2 that both the exponential ergodicity and the exponential β -mixing can be derived from our assumptions.
In Lemmas 2 and 1 below we will prove some bounds on the transition semigroup and on the transition density that will be useful to establish tight upper bounds on the variance of integral functionals of the diffusion X.
Bounds of this type were proven before, in [7] (cf. their Proposition 1), by combining estimates based on the spectral gap inequality and on upper bounds on the transition densities of X. Through them they prove, under isotropic Hölder smoothness constraints, convergence rates of invariant density estimators for pointwise estimation which are considerably faster than those known from standard multivariate density estimation. We replace the spectral gap inequality with a control from L 1 to L ∞ given by the exponential ergodicity. Moreover, contrary to [7], we don't need to assume that such controls hold true since we get them as consequence of Lemma 1 and 2 below, having required some assumptions only directly on the model (3).
In the next section we will construct adaptive estimators for the density in the multidimensional diffusion case with jumps, which achieve fast rates of convergence over anisotropic Hölder balls.
Construction of the estimator
In several cases, the regularity of some function g : R d → R depends on the direction in R d chosen.
We thus work under the following anisotropic smoothness constraints.
for D k i g denoting the k-th order partial derivative of g with respect to the i-th component, ⌊β i ⌋ denoting the largest integer strictly smaller than β i and e 1 , ..., e d denoting the canonical basis in R d .
From now on we deal with the estimation of the density µ belonging to the anisotropic Hölder class H d (β, L).
Given the observation X T of a diffusion X, solution of (3), we propose to estimate the invariant density µ by means of a kernel estimator. To estimate some µ ∈ H d (β, L) we therefore introduce some kernel function K : As we will see in Section 4.2, a main question concerns the choice of the multi-index bandwidth 4 Main results
Convergence rates for invariant density estimation
We want to investigate on the convergence rates for invariant density estimation. In order to determine the asymptotic behaviour of our estimator for T → ∞, we study the variance of general additive functionals of X in d dimension. To do so, we need some properties as the exponential ergodicity of the process and a bound on the transition density. Such properties will be derived from our assumptions through the following lemmas, that we will prove in the appendix.
The following bounds on the transition density and on the transition semigroup hold true.
Lemma 1. Suppose that A1 -A3 hold. Then, for T ≥ 0, there exists a transition density p t (x, y) for which for any t ∈ [0, T ] there are a c 0 > 0 and a λ 0 > 0 such that, for any pair of points x, y ∈ R d , we have Lemma 2. Suppose that A1 -A3 hold. Then the process X is exponentially ergodic and exponentially β -mixing.
On the basis of the two previous lemmas we can prove the following bound on the variance, which is the heart of the study on the convergence rate.
Proposition 1. Suppose that A1 -A3 hold and let f : R d → R be a bounded, measurable function with support S satisfying |S| < 1. Then, there exists a constant C independent of f such that From the bias -variance decomposition in the anisotropic case (see Proposition 1 in [5]) we get the following bound We want to bound the variance here above using Proposition 1 on the function f (y) := . As it will be explained in the proof of Proposition 2 in Section 5, for d ≥ 3 it is which leads us to the following convergence rate.
Proposition 2. Suppose that A1 -A3 hold. If µ ∈ H d (β, L), then the estimator given in (5) satisfies, for d ≥ 3, the following risk estimates: We underline that, in the continuous case, the convergence rate found by Strauch in [24] for the estimation of the invariant density µ belonging to the anisotropic Hölder class In Proposition 2 we estimate µ over anisotropic Hölder class H d (β, L) and we therefore extend [24] to the jumps -diffusion case: the convergence rate we obtain is the same it was in the case without jumps, which is also analogous to the rate first obtained by Reiss and Dalalyan in [7] for the estimation of the invariant density µ over isotropic Hölder class H d (β + 1, L), up to replacing the mean smoothness β + 1 with β +1, the common smoothness over the d different dimensions.
For d = 1 and d = 2, the bound on the variance changes. Therefore, the rate optimal choice h will be different as well, as explained in following two propositions.
Proposition 3. Suppose that A1 -A3 hold. If µ ∈ H d (β, L), then the estimator given in (5) satisfies, for d = 1, the following risk estimates: The rate optimal choice for h yields to the convergence rate T .
It is worth remarking that, in Proposition 3, it is stated the main difference between the case with and without jumps. Indeed, if in the continuous case the convergence rate was 1 T , now it depends on the degree of the jumps α and it is between log T T and (log T ) We need to say that the convergence rate we have found here above for the estimation of the invariant density of a stochastic differential equations with jumps in the one dimensional setting is not necessarily the optimal one. In the continuous case other methods have been explored for such an estimation when d = 1, as the use of diffusion local time to get the optimal rate 1 T . We do not rule out the possibility to get a sharper bound through the exploitation of other approaches also for the jumps case, finding therefore a convergence rate faster than the one presented in the previous proposition.
Proposition 4. Suppose that A1 -A3 hold. If µ ∈ H d (β, L), then the estimator given in (5) satisfies, for d = 2, the following risk estimates: The rate optimal choice for h yields to the convergence rate Comparing our result with the convergence rate obtained in the continuous case over isotropic Hölder class H d (β + 1, L) in [7] and anisotropic Hölder class H d (β + 1, L) in [24], which is (log T ) 4 T in both works, one can observe that the convergence rate seems being faster in presence of jumps. The reason why it happens is that in [7] they assume the transition density to be upper bounded by 2 ), which is a bound different from the one we get from Lemma 1. If the term t 3d 2 would have been absent in their assumption, e. g. for bounded drift, then the convergence rate in the continuous case could have been improved to log T T , which is also what we get in the jump-diffusion case. In [24], Nash and Poincaré inequalities lead the author to an upper bound on the transition density which is analogous to the one found in [7] (see Remark 2.4 of [24]).
From the pointwise estimation of the invariant density gathered in the three previous propositions we move to the estimation on L 2 (A), where A is a compact set of R d . In the sequel, for A ⊂ R d compact and for g ∈ L 2 (A), g 2 A := A |g(x)| 2 dx denotes the L 2 norm with respect to Lebesgue on A. As a consequence of Propositions 2, 3 and 4 and the fact that the constants which turn out in the proofs do not depend on x, the following corollary holds true: , then for the rate optimal choice for h = h(T) provided in Propositions 2, 3 and 4 we have the following risk estimates: The proof of Corollary 1 will be given in Section 5.
Adaptive procedure
The question of density estimation belongs to the canonical framework of nonparametric statistics. As detailed in Propositions 3 and 4, both the bandwidth and the upper bound on the rate of convergence appearing on the right hand side of (7) and (8) do not depend on the unknown smoothness of the invariant density µ and so there is no gain in implementing a data-driven bandwidth selection procedure for density estimation in the framework of continuous observations of a one or two dimensional diffusion process with jumps. Hence, throughout the sequel we restrict to the case d ≥ 3. It is clear from the previous section that for d ≥ 3, instead, the proposed bandwidth choice depends on the regularity of the density µ, which is unknown. This is why we study a data-driven bandwidth selection device. We emphasize that the d selected bandwidths are different, and this anisotropy property is important in our setting: the regularity in each direction can be various. The bandwidth selection procedure has to be able to provide such different choices for h 1 , h 2 , ... , h d .
To select h adequately, we propose the following method, inspired from Goldenshluger and Lespki [11]. We define the set of candidate bandwidths H T as The conditions on d l=1 h l we have just given are needed to use Talagrand inequality, on the basis of which we show our adaptive result. We suppose moreover that the growth of |H T | is at most polynomial in T , which is there exists c > 0 for which |H T | ≤ cT c . An example of H T is the following set of candidate bandwidths: (11) In correspondence of the variation of h ∈ H T , we have the following family of estimators, defined as in (5) F We aim at selecting an estimator from the family F (H T ) in a completely data-driven way, based only on the observation of the continuous trajectory of the process X solution of (3). We now turn to describing the selection procedure from F (H T ), which is based on auxiliary estimators relying on the convolution operator. According to our records, it was introduced in [18] as a device to circumvent the lack of ordering among a set of estimators in anisotropic case, where the increase of the variance of an estimator does not imply a decrease of its bias. For any bandwidths h = (h 1 , ..., h d ) T , η = (η 1 , ..., η d ) T ∈ H T and x ∈ R d , we define We moreover define the kernel estimatorŝ We remark that for how we have defined the kernel estimators, since the convolution is commutative, it isμ h,η =μ η,h .
The proposed selection procedure relies on comparing the differencesμ h,η −μ η . We define with where k is a numerical constant which is large. In particular, it is sufficient to choose it bigger than the constants 2k * 0 and 2k 0 which appear in Lemma 4. Even if k is not explicit, it can be calibrated by simulations as done for example in Section 5 of [6] through the implementation of a method inspired by Goldenshluger and Lepski [11] and rewritten most recently by Lacour, Massart and Rivoirard in [16]. Heuristically, A(h) is an estimate of the squared bias and V (h) of the variance bound. It is worth noticing that the penalty term V (h) which is used here comes from Proposition 1 for the function f being the Kernel function. Thus, the selection is done by setting We introduce the following notation: for c 1 and c 2 positive constants.
The bound stated in Theorem 1 shows that the estimator leads to an automatic trade-off between the bias µ h − µ 2Ã and the variance V (h), up to a multiplicative constant c 1 . The last term is indeed negligible. The proof of Theorem 1 is postponed to Section 6.
We recall that Proposition 2 provides us the rate optimal choice h(T ) for d ≥ 3, which is h l (T ) = ( 1 T )β β l (2β+d−2) . Using such a bandwidth we will prove in Section 6 the following theorem.
Theorem 2. Suppose that assumptions A1 -A3 hold and let H T be defined by (11). Then, we have for c 1 and c 2 positive constants.
Underlining once again that the second term in the right hand side of the equation here above is negligible compared to the first, we have that the risk estimates we get using the bandwidth provided by our selection procedure converges to zero fast. In particular, its convergence rate coincides to the optimal one provided by both [7] and [24] in the case without jumps.
Proof convergence rates for invariant density estimation
In this section we prove Propositions 2, 3 and 4, which gives us the convergence rate for the estimation of the invariant density µ ∈ H d (β, L) in the three different situation: d = 1, d = 2 and d ≥ 3. We emphasize that all the constants will appear in the proofs do not depend on the point x considered. We start showing the bound on the variance gathered in Proposition 1.
Proof of Proposition 1
Proof. We consider first of all the case d ≥ 3. We define the function f c := f − µ(f ). From the symmetry and the stationarity we have Applying the change if variable u := s − t, using Fubini and computing the integral we have that the quantity here above is equal to 2 where the specific choice of δ and D will be given later. The idea is to deal with the integral here above in different way for u which is in different intervals. For this reason we see where we have denoted as < ., . > µ the scalar product deriving by the norm with respect to the measure µ, for which < g, h > µ := R d g(x)h(x)µ(x)dx, each g, h ∈ L 2 (µ). In the last inequality we have moreover used that (µ(f )) 2 is always more than 0. Now we use Cauchy-Schwartz inequality and the fact that P u f is a contraction map in L 2 (µ) to get where in the last inequality we have used the estimation Concerning the second integral in (15), we remark that (16) still holds on [δ, D]. We then estimate it through the definition of transition semigroup. It is We want to use the bound on the transition density given in Lemma 1 which holds for t ∈ [0, T ] but it is not uniform in t big. Nevertheless, for t ≥ 1, we have where the constant c changes from line to line. The right hand side of (18) is therefore upper bounded by where we have bounded in both integrals the absolute value of f with its infinity norm. Now we want to calculate the integral with respect to the variable u. We observe that, since d ≥ 3, 1 − d 2 < 0. The exponent of the second term in the integral here above, after having integrated, is 2 − d+α 2 . It is more than zero if d < 4 − α, which is possible only if α ∈ (0, 1) and d = 3, less then zero otherwise. Therefore, we have to consider the two different possibilities, according to the fact that the exponent would be positive or negative. It follows We are now left to estimate the third integral of (15). From Lemma 2 it follows it is upper bounded by and so where in the last inequality we have used the following estimation |µ(S)| ≤ µ ∞ |S| ≤ c|S| (22) and the fact that |S| < 1. Therefore we get Replacing (17), (19) and (23) in (15) we have that We now want to choose δ and D for which the estimation here above is as sharp as possible.
Recalling that the exponent on δ are less than zero in the second and the third terms of the right hand side of (24), we have that for a small choice of δ would correspond the smallness of the first term while the second and the third would be big, the opposite would hold for a big δ. In the same way, the behaviour of the last two terms of the right hand side of (24) relies on the choice D. Aiming at balancing them, we define δ := |S| which give us the result we wanted remarking that both 2 and 1 + 4−α d are always more than 1 + 2 d for d ≥ 3 and α ∈ (0, 2). Otherwise, if T ≤ (− 2 ρ log(|S|)), by the definition of D we obtain D = T . We still have and, moreover, the last integral which we dealt with in (23) is in this case between T and T and so its contribution is null. Hence, the result still holds true.
We now consider the case d = 1. We can act exactly like we did in the case d ≥ 3, splitting the integral in three parts. Estimations (17) and (23) where we have used that now, integrating, both the exponent we get are positive. In total in the case d = 1, using also (22), we therefore have ) + e −ρD ).
Proof of Proposition 2
Proof. Estimation (6) is a straightforward consequence of the bias -variance decomposition and Proposition 1 applied to f (y) := , whose support S is such that |S| ≤ c d l=1 h l and which is by construction such that f ∞ ≤ c( d l=1 h l ) −1 . To find the optimal choice of h we define h l (T ) := ( 1 T ) a l for l ∈ {1, ..., d} and we look for a 1 , ... a d such that the upper bound of the mean-squared error in the right hand side of (6) would result as small as possible. Replacing the definition of h l (T ) in the bias -variance decomposition, it means searching for a 1 , ..., a d for which we get the balance and so we have to resolve the following system: We observe that, as a consequence of the first d − 1 equations, we can write a l as β d β l a d for each l ∈ {1, ..., d − 1}. Therefore, the last equation becomes and, replacing it in the system, we have Taking in the right hand side of (6) the rate optimal choice h l (T ) = ( 1 T )β β l (2β+(d−2)) we get the convergence rate wanted.
Proof of Proposition 3
Proof. The upper bound of the mean-squared error follows from the decomposition bias -variance and from Proposition 1, recalling that for f (X t ) := 1 h K( x−Xt h ) we have f ∞ ≤ ch −1 and its support S is such that |S| ≤ ch. Now, aiming at balancing the terms, we take h := ( 1 T ) a ; getting the mean-squared error is upper bounded by If a gets bigger clearly h gets smaller; it is enough to take a such that 2aβ > 1 to obtain the first two terms here above are negligible compared to the last ones, which gives us the convergence rate
Proof of Proposition 4
Proof. Again, (8) follows naturally from the bias -variance decomposition and Proposition 1.
Regarding the convergence rate, we take again h l := ( 1 T ) a l for l = 1, 2. It follows log( 1 h1h2 ) = a 1 log T + a 2 log T and so the mean-squared error is upper bounded by Taking a 1 and a 2 big enough to make the first two terms here above negligible compared to the third, we get the convergence rate log T T .
Proof of Corollary 1
Proof. It is a straightforward consequence of Propositions 2, 3 and 4 after having remarked that the constants which turn out in all the previous propositions do not depend on the point x considered. Indeed, such propositions yield 6 Proof of the adaptive procedure The heart of the proof of Theorem 1 consist of finding an upper bound for the expected value of A(h), which is gathered in the following proposition.
Proposition 5. Suppose that assumptions A1 -A3 hold. Then, ∀h ∈ H T , Proposition 5 will be proven after the proofs of Theorems 1 and 2.
Proof. From triangular inequality it follows ∀h ∈ H
By the definition (13) of A(h) it follows that the first and the second term of (25) are respectively upper bounded by A(h) + V (h) and A(h) + V (h), having also used on the second term that µ h,h =μh ,h . Then, sinceh has been defined in (14) as the h ∈ H T for which Hence, for any h ∈ H T , we get We want an upper bound for the expected value of the left hand side of the equation (26) and so we need to evaluate . From the standard bias variance decomposition, recalling that Now, we can upper bound the first term of the right hand side here above by enlarging the integration domain getting Moreover, as consequence of Proposition 1 in the case d ≥ 3 we obtain, as it was in Proposition 2, (28) Comparing the upper bound given in (28) with the definition of V (h) and using also (26) and (27) we get, for each h ∈ H T , Now from Proposition 5 and the arbitrariness of the bandwidth h we are considering it follows as we wanted.
As a consequence of Theorem 1 we get, considering the rate optimal choice h l (T ) = ( 1 T )β β l (2β+d−2) provided by Proposition 2, the estimation gathered in Theorem 2. Its proof relies on the fact that, for how we have found it in Proposition 2, if the rate optimal bandwidth belongs to H T then the inf h∈HT (B(h) + V (h)) is clearly realized by it.
Proof of Theorem 2
Proof. We observe that, for the rate optimal choice h(T ) of the bandwidth, the conditions gathered in the right hand side of (10), which are (log T ) 2d the upper bound condition is therefore which is true if and only if log T ≤ T d−2 3(2β+d−2) . Now we observe that , which is always true since d ≥ 3. In particular we can write d−2 3(2β+d−2) =: γ ∈ (0, 1) and, given that eventually for T going to ∞ it is log T ≤ T γ , we have d+2 . In the same way it is .
For the same reasoning as here above it is true if ( 1 3 − 1 2β+d−2 ) 1 2 =: γ is positive. Beingβ > 1 and d ≥ 3, it turns out γ > 0, as we wanted. Up to considerh l (T ) := 1 ⌊Tβ β l (2β+d−2) ⌋ instead of h l (T ), which is asymptotically equivalent and which leads to the same convergence rate, we have that the rate optimal choice belongs to the set of candidate bandwidths H T proposed in (11). Having now h(T ) ∈ H T , for how we have found the rate optimal choice in Proposition 2, the inf h∈HT (B(h) + V (h)) is clearly realized by it and so the bound stated in Theorem 1 is actually (see also Corollary 1) as we wanted.
We have showed Theorem 1 using, as main tool, the bound on E[A(h)] stated in Proposition 5. Its proof, as we will see in the next section, relies on the use of Berbee's coupling method as in Viennet [26] and on a version of Talagrand inequality given in Klein and Rio [12].
We denote byμ * h the estimator computed using X * t instead of X t and we writeμ * h ) to separate the part coming from X * .,1 (super -index (1)) and those coming from X * .,2 (super -index (2)), havingμ * (1) h In a natural way we define moreoverμ * h,η := K η * μ * h , which can be written again as 1 2 (μ * (1) h,η +μ * (2) h,η ), to separate the contribution of X * .,1 and X * .,2 . With this background we can evaluate E[A(h)]. We recall that, as defined in (13), Now we can seeμ h,η −μ η as sum of different terms which we deal with singularly: . As a consequence of the triangular inequality and of the definition of A(h) the following estimation holds true: We start considering I h,η 5 . We define the set Ω * : As a consequence of the second property of the process X * and of the β -mixing with exponential decay showed in Lemma 2 we get, recalling that 2p T q t = T (with q T and p T to be chosen), From the definition ofμ * h and Jensen inequality it is By the definition (12) we get that, ∀h ∈ H T , K h ∞ ≤ ( d l=1 h l ) −1 . We recall that, from how we have defined in (10) Replacing this bound in (30) it follows We take its expectation and we use (29), getting a term which depends on q T , a real to be chosen. From the arbitrariness of q T we get a convergence to zero as fast as we want, for T going to ∞. Indeed, taking q T := (log T ) 2 yields ∀h, η ∈ H T , Regarding sup η∈HT I h,η 1 , we estimate it through (31) and the following lemma, which will be proven in the appendix.
where we have denoted as . A the usual L 2 norm on A, . 1,R d the L 1 norm on R d and . 2,Ã the L 2 norm onÃ.
We recall thatμ h,η = K η * μ h andμ * h,η = K η * μ * h . Therefore, remarking that diam(K) ≤ 2 and so by the definition of K η it is diam(K η ) ≤ 2 √ d, we can use Lemma 3, which yields Taking the expected value, using that K η 1,R d ≤ c ∀η ∈ H T and the equation (31), remarking that the dependence on the integration set considered is hidden in the constant c in which this time will appear |Ã| instead of |A| we get We still use Lemma 3 to study sup η∈HT I h,η 3 , recalling that µ h,η = K η * µ h and µ η = K η * µ b . It yields We are left to study I h,η 2 and I h,η 4 , for which we need the following lemma that will be showed right after the proof of this proposition.
Lemma 4. For i = 1, 2, there exist some positive constants c * 1 , c * 2 , c * 3 and a constant k * 0 such that, for anyk ≥ k * 0 , Moreover there exist k 0 such that, for anyk ≥ k 0 , ). Hence, from triangular inequality and the definition of positive part function, we get From (34), for a k in the definition of V (η) big enough, for which we have k In the same way, remarking that From (31) , (32), (33), (36) and (37) we obtain, for any h ∈ H T , as we wanted.
To conclude the proof of the adaptive procedure we need to show Lemma 4, which core is the use of the Talagrand inequality. First of all, we recall the following version of the Talagrand inequality, which has been stated as Lemma 2 in [6] and which is a straightforward consequence of the Talagarand inequality given in Klein and Rio [12].
with c a universal constant and where V ar(r(T j )) ≤ v.
Proof of Lemma 4
Proof. Since the two cases i = 1 and i = 2 are similar, we study only one of them. We start proving (34), the proof of inequality (35) follows the same line. We first observe it is Our goal is now to find a bound for the right hand side of the inequality here above using the version of the Talagrand inequality gathered in Lemma 5. To do it, we need to introduce some notation.
We observe that µ η −μ * (1) η 2 A = sup r, r =1 < µ η −μ * (1) η , r > 2 , and the supremum can be considered over a countable dense set of function r such that r = 1; let us denote this set by B(1). We define is a centered empirical process with independent variables to which we want to apply Talagrand inequality (38). Therefore, we have to compute M , H and v as defined in Lemma 5. We start by the calculation of M. For any r ∈ B(1) it is, using the definition of r and Cauchy -Schwartz inequality,
Now from the definition of T and Jensen inequality it follows
where we have also used that the support of K η is on S which size is d l=1 η l . Hence, Regarding the computation of H, from the definition of v pT (r) and the fact that the random variables ψ * j,1 r are centered and independents it follows where in the last inequality we have used the estimation for the variance in the case d ≥ 3 gathered in Proposition 1, considering that taking the Kernel function as f we have that its support S is such that |S| ≤ c( d l=1 η l ). It yields We want to prove a tight upper bound for the variance of the integral functional 1 qT 2jqT (2j−1)qT f η (X * j,1 t )dt of the diffusion X * , where we have denoted f η := K η * r. Following the proof we have given of Proposition 1 we have, (H a ) There are c 1 > 0 and β ∈ (0, 1) such that for all x, y ∈ R d , |a(x) − a(y)| ≤ c 1 |x − y| β and, for some c 2 ≥ 1, c −1 2 I d×d ≤ a(x) ≤ c 2 I d×d . (H k ) The function k(x, z) := |z| d+α F ( z γ(x) ) is bounded, measurable and, if α = 1, for any 0 < r < R < ∞ it is r≤z≤R zk(x, z)|z| −d−1 dz = 0. (48) (H b ) The function b belongs to the Kato class K 2 which is, as defined in [4], |f (x + y)|η γ,γ−1 (s, y)dy ds = 0 , where we have denoted f (x + y) as an abbreviation for f (x + y) + f (x − y) and Lebesgue measure on R d for every x ∈ R d , and (x, y) → p ∆ (x, y) is bounded in y ∈ R d and in x ∈ K for every compact K ⊂ R d . Moreover, for every x ∈ R d and every open ball U ⊂ R d there exists a point z = z(x, U ) ∈ supp(F ) such that γ(x) · z ∈ U . We observe that the existence of a bounded density has already been proven in Lemma 1. Moreover, from second and third points of A3, we know that supp(F ) = R d and that γ is an invertible matrix. Hence, for every x ∈ R d and every open ball U ⊂ R d there exists a point z = z(x, U ) ∈ R d such that γ(x) · z ∈ U .
To conclude, we have to prove that Assumption 3* holds and so we have to show the existence of a Lyapunov function. We therefore want to provide a function f * which satisfies the drift condition Af * ≤ −c 1 f * + c 2 for c 1 > 0 and c 2 > 0. A denotes the generator of the diffusion, which is the sum of the continuous and discrete part and for every function f : R d → R, f ∈ C 2 (R d ).
From the fifth point of condition A3 we know there exists ǫ > 0 such that R d |z| 2 e ǫ|z| F (z)dz ≤ c.
For such an ǫ we define f * (x) := e ǫ|x| . We observe it is ∂ i f * (x) = ǫe ǫ|x| xi |x| and We therefore have, using also the drift condition gathered in assumption A2, ∀x : |x| >ρ Concerning the discrete part of the generator, from intermediate value theorem we have where H 2 f (x+sγ(x)·z) denotes the hessian matrix of the function f computed in the point x+sγ(x)·z.
|
2020-01-22T02:01:18.855Z
|
2020-01-21T00:00:00.000
|
{
"year": 2020,
"sha1": "db08b4c166c82ee11c36f27cb551dd19fb89f2e6",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "http://manuscript.elsevier.com/S037837582030121X/pdf/S037837582030121X.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "db08b4c166c82ee11c36f27cb551dd19fb89f2e6",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
240149857
|
pes2o/s2orc
|
v3-fos-license
|
Modern Problems of Historical Parks Recreational Potential Comprehensive Assessment
Among the main tasks of the effective management of a historical park there should be not only efforts aimed at preserving the park, but also the formation of its all-season attractiveness to the public and the development of special recreational programs. Each park has a certain recreational potential. The concept of “Potential” is “sources, opportunities, funds, reserves that can be used to solve any task, achieve a certain goal” (BSE). In addition, the recreational potential of the territory consists of such indicators as the maximum recreational capacity of the territory and the possible maximum recreational load. However, currently, the definition of recreational potential is mainly reduced to assessing the stability of vegetation and maintaining their attractiveness without taking into account the entire set of indicators, such as the timing of the recreation season, target categories of visitors, types of recreation by the level of organization, as well as recreational infrastructure. It is necessary to expand the concept of “Recreational potential”, which will allow to form effectively the recreational attractiveness of historical parks within the framework of the tasks of preserving the cultural heritage.
Introduction
Recently, the concept of so-called ecosystem services has become increasingly popular in the world scientific community. Among other things, scientists around the world consider cultural ecosystem services to be also ecosystem services. This concept involves all the benefits that a person gets from interacting with nature, including those in a form of organized recreation [1]. Thus, it is becoming more and more relevant to address the issues of accounting, assessment and formation of the market for recreational services provided. It is becoming important not only to search for the methods of ecosystem preserving, but also to study the possibilities of providing high quality recreation [2].
Methods and Materials
The main condition for adapting historical parks to modern use is to preserve the features of objects of historical and cultural value. Each Park, as an established ecosystem, is a three-dimensional art complex which has a unique potential for organizing public recreation and requires individual solutions. Lack of criteria for evaluating factors that determine recreational potential does not always allow to choose the right adaptation strategy for historical landscape architecture objects. The purpose of this research is to understand and analyze the problems of integrated assessment of historical parks recreational potential. The research was conducted using methods of inductive experience
Results and Discussion
The main goal of a recreation system of large cities is to improve the health of the population, i.e., to recover and develop the physical and mental strength of a person. In other words, the green oases of large urban conglomerates are charged with the responsibility for organizing such a recreation, which provides the possibility for not only for rest, but also grow "spiritually" by means of interacting with nature [2,3]. In the case of historical parks, "spiritual" growth also occurs due to the perception of the history of a city, region, or country.
Traditionally, the park is considered as a recreational structure with comfortable conditions for recreation -from contemplative to active. The historical park in the structure of old cities green spaces is a complex system that has to adapt both to the increasing density of surrounding areas and the increase in recreation visits, as well as to the greatly changed demands of visitors over the past decades and new requirements for the quality of recreational services.
In this regard, among the main tasks of effective management of historical parks for the provision of ecosystem services should be not only efforts aimed at preserving the park, but also the formation of its all-season attractiveness to the public.
Visitors' demand for active family recreation constantly leads to expansion and adjustment of the existing range of recreational activities, which often do not correspond with the functional and planning concept of the historical park, which determines its recreational potential [4]. This is especially clearly demonstrated by the experience of private parks use after the revolution of 1917 in Russiathe way how during the Soviet period historical parks were adapted to the needs of the Soviet society. At this particular time most historical parks were adapted for the recreational needs of the population. The Parks were turned into "Parks of culture" with facilities for mass activities such as rides, stadiums, outdoor theaters and dance floors (figure 1). Recreation centers were set up on the ground of palaces and estates. So, the problem with integrating historical parks into modern city life is determined by two conflicting objectives: the conservation and continued viability of historic landscape on the one hand, and the visitors' increasing social demands to expand recreational functions of a green space on the other. Identifying the priorities is the first task. Local residents consider historical sites as spaces for various outdoor activities before they appreciate their historic value [4,5].
The recreational potential of the green space includes such indicators as the maximum recreational capacity of the territory and the possible maximum recreational load. However, currently, the definition of recreational potential is mainly reduced to assessing the stability of vegetation and maintaining their attractiveness without taking into account the entire set of indicators, such as the duration of the recreation season, target categories of visitors, types of recreation by the degree of its organization, as well as recreational infrastructure [6][7][8]. Maintaining the attractiveness of vegetation, in turn, is limited to a system of prohibitions on certain types of recreational activities. At the same time, bans are presented to visitors in an ultimatum form without any alternative.
It creates a situation when the management of the historical park is aimed to increase the amount of visitors and attracting interurban and international tourists, while prohibits many traditional types of recreation, thereby reducing the number of visitors from the surrounding areas, for whom recreation in this historical park is the only available option.
An analysis of the demand for various types of recreation showed that the main forms of recreation for the overwhelming majority of park visitors are: visiting thematic exhibitions and festivals, meeting friends, organizing picnics, admiring the beauty of nature in an untouched landscape [9]. The kinds of parks use are defined by the needs of the visitors themselves, which includes four social groupsholiday-makers, children, tourists and researchers. All of them may come to the park in groups (tourists) or individually. Each visitor has his own interests. It is important to clearly define zoning and leisure activities for users of recreational resources in the first place -individual and family visitors. It is better to have appropriate facilities and a certain level of amenities, adjusting the level of the visitors' influence on the most valuable historical landscape. Children love to be active. It is difficult for them to be just "observers" in the museum park space. On the other hand, the playground equipment must be integrated into the historical atmosphere. The problem is that this equipment is required to be certified. For instance, you can see the playground in the Summer Garden after restoration. The sandbox was styled as a historic fountain. In our opinion, this was done well. Tourists and researchers are primarily audience of information. However, the specifics of the information is fundamentally different for each of them. For tourists it is necessary to build excursion routes. A group of researchers who are familiar with the general history or have detailed knowledge of highly specialized issues may include amateurs and professionals in the field of local history, architecture, landscape architecture, botany, dendrology, art and others. The structure of the recreational activities is not the same for each site and is determined by several factors such as the value of the park, its location, the type of planning, functional zoning and visiting time.
The value of these sites largely determines the demand for the functionality of the green space. A large area allows for capacious and a greater variety of activities. Small size of parks and gardens significantly narrows the range of activities. In terms of lay-out organization -formal sites are less adaptable to free recreation and visitors perceive it as a museum space.
Location of historical parks -in the city or in the country, can form an idea of the kind of potential activity. Spaces located outside the city are often perceived by the population as a place of active recreation with unorganized activities, which is not always possible and does not provide for regulations to use the territory. The result is a conflict between the needs of visitors in recreational activities, and the administration will to preserve the site in its' proper condition.
Temporary exhibitions, events and festivals keep interest in historic parks and make them more attractive to patrons. One of the most famous festivals is the Fountain Festival in Peterhof. Celebration attracts a large number of visitors (about 10 000 at once). One of the problems is to distribute spectators on the territory of the park. It is hard to see the performance from some points, so some of the people climb shaped trees in the alleys. A huge crowd goes through a parterre and bosquets after the performance. This event is loved by locals and tourists, but it is necessary to adapt it to formal park of Peterhof [5].
When the historical park is the only available place of recreation, it becomes the topical task for the park management to develop special recreational programs making it possible to avoid the humancaused degradation of plantings and at the same time provide high quality recreation according to social requirements.
Individual recreational programs or scenarios development for historical parks should start with working out the criteria for the level of permissibility of a particular type of recreation. To define these criteria following steps are required: -analyze the possibility of changing the original function of the historical park, which is based on the analysis of the specific features of the historical park, its size, stylistic and compositional concept, also various permissible types of recreation are considered. An example of the change in the function of the park is numerous estate parks, where mass recreation is almost impossible, and this determines the further choice of the recreational scenario.
-analyze the demand for various types of recreational services. To do this, it is necessary to identify the possible target groups of visitors and consider the planning features of the park in order to determine various types of seasonal use. Thus, a vivid example of providing a variety of recreational services designed for seasonal visits can be the Kirov Central Park (Elagin island)the use of ponds for boating in summer or for skiing in winter; -analyze the possible costs of maintaining the park's vegetation and planning structure while increasing the number of visitors, as well as evaluate the resulting economic effect. In other words, to compare the cost of changing the quality of park reconstructions (including widening roads and paths, installing lighting, equipping the territory with toilets, etc.) with the possible economic effect of increasing the number of visitors. An example of increasing the resilience of green spaces to recreational impact after widening roads during the restoration of the Ekaterininsky Park [10] ( figure 4).
The calculation of the recreational potential of the historical park taking into account all the above criteria, can only be carried out in the case of regulated recreation. Recreation regulation should be aimed at the creation of specialized functional areas such as children's play areas, picnic areas, bike lanes and access areas for visitors with dogs (in addition to "museum" areas), rather than at the creating prohibition system.
Traditional bans on picnics, dog walking, cycling and picking mushrooms and berries lead to a decrease in the attractiveness of parks for a significant number of potential visitors. On the contrary, Obligatory conditions for the existence of such functional zones on the territory of historical parks must be: -providing access to a large number of visitors; -planning the functional zones in such a way as to exclude violations of the historical layout of the park as well as changes of the historical park composition; -mandatory visiting rules that define the rules and regulations of visitors' behavior. This regulation should be aimed at ensuring the safety of all categories of visitors, as well as at preserving the historical sites and features of the park; -all new functional zones on historical territories must be provided with equipment that allows implementing the prescribed regulations. For example, for playgrounds, the equipment must be certified and the appearance of this equipment must correspond to the historical features of the park; bike lanes must be designated routes that do not violate the historical layout and do not interfere with the main flow of visitors.
The existence of pedestrian or transport transit through the historical park also leads to making adjustments to the system of regulations. The planning of a new functional zoning of the historical park should be carried out with the mandatory consideration of these transits and each individual case should be carried out individually (figure 5). The definition of the concept of "Potential" given by the Great Soviet Encyclopedia includes "sources, opportunities, means, reserves that can be used to solve any problem, to achieve a certain goal." [11]. In this case, the main goal is to provide various types of recreation with the mandatory condition of preserving the features of the historical park.
Conclusion
The above-mentioned approaches for assessing the recreational potential will allow to form the recreational attractiveness of historical parks effectively within the framework of the tasks of preserving the cultural heritage.
|
2021-10-29T20:12:26.873Z
|
2021-10-01T00:00:00.000
|
{
"year": 2021,
"sha1": "191bab5c0042d24e56d4bd49a4842e7eb105f387",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/876/1/012011",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "191bab5c0042d24e56d4bd49a4842e7eb105f387",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Physics"
]
}
|
244472267
|
pes2o/s2orc
|
v3-fos-license
|
Early diagnosis of focal congenital hyperinsulinism: A fluorine-18-labeled l-dihydroxyphenylalanine positron emission tomography/computed tomography study
ABSTRACT Congenital hyperinsulinism (CHI) is responsible for hyperinsulinemic hypoglycemia which needs aggressive treatment in order to prevent neurological damages. Recent advances in genetics have linked CHI to mutations in many different genes that play a key role in regulating insulin secretion from pancreatic ß-cells. Furthermore, histopathological lesions, diffuse and focal, have been associated with these different genetic alterations. This short manuscript describes how the advent of fluorine-18-labeled L-dihydroxyphenylalanine-positron emission tomography/computed tomography (18F-DOPA-PET/CT) scanning has changed the management of patients with CHI. 18F-DOPA PET/CT imaging differentiates focal from diffuse disease and is 100% accurate in localizing the focal lesion. In these patients, the lesion can be surgically removed allowing complete resolution of clinical alterations. We report a case in which clinical experience together with rapid genetic analysis and imaging with 18F-DOPA-PET/CT, were able to guide the correct clinical management of this condition. We confirm that advances in molecular genetics, imaging methods (18F-DOPA PET-CT), medical therapy, and surgical approach have completely changed the management and improved the outcome of these children.
INTRODUCTION
Congenital hyperinsulinism (CHI) is a rare but complex disorder caused by unregulated secretion from the beta-cells of the pancreas. Maintenance of euglycemia is necessary to minimize neurologic damages such as cerebral palsy, epilepsy, neurodevelopmental deficits, and even death. The term CHI refers to these inherited forms of hyperinsulinemic hypoglycemia (HH). CHI occurs due to mutations in key genes which play a role in insulin secretion from pancreatic β-cells. Currently, mutations have been identified in many different genes (ABCC8, KCNJ11, GLUD1, GCK, HADH, SLC16A1, UCP2, HK1, PGM1, PMM2, HNF4A, and HNF1A) that lead to dysregulated secretion of insulin; the most common cause for CHI are mutations in the genes ABCC8 and KCNJ11 that encode the SUR1 and Kir6.2 subunits of the pancreatic β-cell K ATP channel. Histologically, CHI is now classified into three groups: diffuse, focal, and atypical forms. [1][2][3] Focal forms are sporadic in inheritance and the lesions may occur in any part of the pancreas, although the tail and the body are the most common locations. The diffuse disease is due to recessive mutations in ABCC8 and KCNJ11 and affects the whole Early diagnosis of focal congenital hyperinsulinism: A fluorine-18-labeled l-dihydroxyphenylalanine positron emission tomography/computed tomography study pancreas. The atypical form is related to an enlargement of β-cell nuclei, localized to discrete areas of the pancreas. The diffuse form is medically unresponsive and will require a near-total (>95%) pancreatectomy; the focal form, affecting only a small region of the pancreas, requires a limited pancreatectomy. Thus, the preoperative differentiation of these two subgroups is necessary. [4] However, the routine imaging techniques, such as US, computed tomography (CT), and/or magnetic resonance imaging (MRI), are unable in distinguishing the diffuse and focal forms of CHI. Nowadays, the imaging modality of choice to diagnose CHI is fluorine-18-labeled l-dihydroxyphenylalanine positron emission tomography/CT ( 18 F DOPA PET/CT) scan. The rationale of 18 F-DOPA PET/CT is based on the utilization of 18 F-DOPA as a precursor for dopamine. Pancreatic islets take up 18 F-DOPA and are able to convert it into dopamine using the enzyme DOPA decarboxylase. As part of the amine precursor uptake and decarboxylation system, normal islets in the pancreas also take up amine precursors ( 18 F-DOPA, for example) and decarboxylate them to amines by means of the aromatic amino acid decarboxylase enzyme. In hyperfunctioning islets (as in case of primary hyperinsulinemia), the uptake is more pronounced, and 18 F-DOPA PET/CT becomes of value for evaluating such patients. Because both forms of HH have an increased activity of this enzyme, the PET/CT scan usually demonstrates a uniform uptake of 18 F L DOPA throughout the pancreas in cases of diffuse CHI, whereas in focal CHI, the uptake is located only in particular foci of disease within the pancreas. Moreover, there have been reports of ectopic pancreatic tissue causing CHI in children. An 18 F-DOPA-PET/ CT scan localized the ectopic lesions in the vicinity of the former head of the pancreas or in other districts. [1,4,5] For this reason, the preoperative evaluation of the focal lesions would have led to the removal of local and ectopic lesions and preservation of the rest of the pancreas.
CASE REPORT
An infant boy, born at 38 weeks' gestation via vaginal birth, body weight 4446 g (+3.01 SDS), length 52 cm (+1.1 SDS), head circumference 35 cm (+0.36 SDS), showed normal APGAR scores of 9 and 10 at 1 and 5 min, respectively. Due to the early detection of hypoglycemia, the infant was transferred to the neonatal intensive care unit, was on full enteral feeding, and received intravenous glucose treatment at a 1.2 g/kg/day dose to maintain blood glucose values within normal ranges. A panel of critical laboratory tests showed persistent high insulin levels (over 30 mcg/ml), negative ketonemia, low free fatty acid (213 mcmol/l, normal values 500-1600), normal insulin-like growth factor-1, cortisol, and ammonia levels in the setting of hypoglycemia, suggesting the diagnosis of CHI. Diazoxide treatment produced limited response, while subcutaneous octreotide allowed a significant decrease of intravenous glucose infusion. The diazoxide unresponsiveness suggested a potassium channel gene mutation. Therefore, genetic analysis and 18-F-DOPA PET/CT scan were organized. Genomic DNA was extracted from peripheral blood using the automated extractor Maxwell 16 (Promega). Sample enrichment and paired-end library preparation were performed using the commercial kit TruSight One (Illumina, San Diego, CA, USA), and sequencing was performed on NextSeq 500 instrument (Illumina, San Diego, CA, USA) with a flow cell high output, 300 cycles PE (150 × 2). Calling of variants was focused on genes for hyperinsulinism (ABCC8, GCK, GLUD1, HADH, HNF1A, HNF4A, INSR, KCNJ11, SCL16A1, and UCP2). Candidate variants were classified according to the ACMG-AMP criteria. [6] Identified variant was validated using Sanger Sequencing on AB3730 sequencer (Applied Biosystems), according to the manufacturers' protocols (primer and PCR conditions available on request). In the subject, we identified the mutation in the ABCC8 gene NM_000352.3:C.119T>G (p. Leu40Arg) previously described in a subject with CHI. [7] and demonstrated to prevent the export of the protein from the endoplasmic reticulum. [8] By performing segregation analysis, we demonstrated the paternal origin of the variant. The presence in the patient of a monoallelic recessive paternally transmitted ABCC8 mutation predisposes to somatic recessive condition by loss of heterozygosity and supports the diagnosis of focal CHI.
The patient received 4 MBq/kg of 18 F-DOPA intravenously. After 60 min, a whole-body scan was obtained in 3-4 bed positions. To obtain images for visual analysis, iterative reconstruction was performed and the reconstructed images were evaluated in a three-dimensional display using axial, coronal, and sagittal views to define pancreas. 18 F-DOPA PET/CT images showed intense 18 F-FDOPA uptake in the head of the pancreas, confirmed by a semi-quantitative evaluation (maximum standardized uptake value = 6.67) [ Figure 1]. Due to the small size of the baby and the location of the lesion in the head of the pancreas, the first choice was diazoxide treatment which unfortunately was unable to control the decrease in blood glucose. As a second choice, octreotide treatment administered with an insulin pump was effective in keeping blood glucose levels within the normal range and allowing the child to grow normally. The growth of auxological parameters was very good during this treatment. At the age of 14 months, the baby underwent successful abdominal surgery with complete resolution of the hypoglycemia. On exploratory laparotomy, a solid focal lesion in the head of the pancreas was visible, with a diameter of 20 mm × 10 mm × 8 mm, which was in close connection with the intestinal wall. It was excised easily including a small portion of intestinal tissue. Microscopically, the lesion contained hyperplastic islet cells separated by thin fibrovascular bands. The islets were adenoma like, and some of the β-cells within the lesion had enlarged nuclei typical of the focal form of CHI. Islets in the surrounding pancreas were normal.
DISCUSSION
At the time of surgery, the focal lesion was found exactly where the PET/CT position suggested and was removed with complete resolution of symptoms. Because focal lesions in many cases are difficult to identify during surgery and cannot be detected with conventional imaging approaches such as CT and MRI, 18 F-DOPA PET/CT scan is the preferred method of diagnosing CHI. Therefore, we confirm that 18 F-DOPA-PET/CT is a safe, noninvasive, and the investigation of choice in distinguishing between the focal and diffuse forms of CHI; the prompt and accurate localization permits the correct enucleation of the focal lesion preventing the risk of developing iatrogenic diabetes mellitus and pancreatic insufficiency.
Informed consent
According to the Italian Law, the parents of the patient signed a written informed consent for taking part in the study.
Financial support and sponsorship Nil.
Conflicts of interest
There are no conflicts of interest.
|
2021-11-22T16:04:10.216Z
|
2021-10-01T00:00:00.000
|
{
"year": 2021,
"sha1": "fae3e83b5c2302117bcd69642802de3165792ed8",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "8abe9df771030d723abb6e374f0fe210a6f18ea1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
258605977
|
pes2o/s2orc
|
v3-fos-license
|
Intraoperative Assessment of Breast Cancer Tissues after Breast-Conserving Surgery Based on Mapping the Attenuation Coefficients in 3D Cross-Polarization Optical Coherence Tomography
Simple Summary Multiple technological solutions are being explored to be used in the intraoperative assessment of resection margins in breast cancer and detection of the residual tumor cells during breast-conserving surgery (BCS) for the purpose of reducing the need for a re-resection. We applied the cross-polarization optical coherence tomography (CP OCT) method for intraoperative ex vivo human breast cancer specimens imaging and performed a qualitative and quantitative assessment of 3D CP OCT data using a depth-resolved approach of measuring the attenuation coefficient estimation in co- and cross-polarization channels. En face color-coded attenuation coefficient maps were constructed, and targeted calculations of the characteristic median value of the attenuation coefficients in both channels were performed for different breast tissue regions. As a result, highly accurate differentiation of the tumorous from non-tumorous breast tissue was achieved. This new optical technique with estimation attenuation coefficients of volumetric CP OCT data can be used as an innovative adjunct intraoperative tool to evaluate resection margins during BCS and to perform a targeted histological biopsy. Abstract Intraoperative differentiation of tumorous from non-tumorous tissue can help in the assessment of resection margins in breast cancer and its response to therapy and, potentially, reduce the incidence of tumor recurrence. In this study, the calculation of the attenuation coefficient and its color-coded 2D distribution was performed for different breast cancer subtypes using spectral-domain CP OCT. A total of 68 freshly excised human breast specimens containing tumorous and surrounding non-tumorous tissues after BCS was studied. Immediately after obtaining structural 3D CP OCT images, en face color-coded attenuation coefficient maps were built in co-(Att(co)) and cross-(Att(cross)) polarization channels using a depth-resolved approach to calculating the values in each A-scan. We determined spatially localized signal attenuation in both channels and reported ranges of attenuation coefficients to five selected breast tissue regions (adipose tissue, non-tumorous fibrous connective tissue, hyalinized tumor stroma, low-density tumor cells in the fibrotic tumor stroma and high-density clusters of tumor cells). The Att(cross) coefficient exhibited a stronger gain contrast of studied tissues compared to the Att(co) coefficient (i.e., conventional attenuation coefficient) and, therefore, allowed improved differentiation of all breast tissue types. It has been shown that color-coded attenuation coefficient maps may be used to detect inter- and intra-tumor heterogeneity of various breast cancer subtypes as well as to assess the effectiveness of therapy. For the first time, the optimal threshold values of the attenuation coefficients to differentiate tumorous from non-tumorous breast tissues were determined. Diagnostic testing values for Att(cross) coefficient were higher for differentiation of tumor cell areas and tumor stroma from non-tumorous fibrous connective tissue: diagnostic accuracy was 91–99%, sensitivity—96–98%, and specificity—87–99%. Att(co) coefficient is more suitable for the differentiation of tumor cell areas from adipose tissue: diagnostic accuracy was 83%, sensitivity—84%, and specificity—84%. Therefore, the present study provides a new diagnostic approach to the differentiation of breast cancer tissue types based on the assessment of the attenuation coefficient from real-time CP OCT data and has the potential to be used for further rapid and accurate intraoperative assessment of the resection margins during BCS.
Introduction
Breast cancer remains the most commonly diagnosed cancer (around 30% of all newly diagnosed cancers each year) among women worldwide [1,2]. Currently, advances in multimodal imaging allow its early detection. The most recommended treatment for earlystage breast cancer is breast-conserving surgery (BCS); during the process, the surgeon needs to assess the extent of the disease accurately and its margin status to reduce the likelihood of local recurrence and the need for a re-resection [3,4]. The main criterion for detection and confirmation of optimal BCS intraoperatively is the negative resection margins around the primary tumor (the margin is defined as the distance from the tumor to the cut surface of the resection specimen) [5][6][7]. Difficulties in making the diagnostic decision on where to draw the resection lines occur in patients who have been treated with neoadjuvant chemotherapy due to the possible development of a scattered response in some types of breast cancer. Several factors need to be accounted for in the evaluation of the resection margin, such as the heterogeneity of breast cancer, the prognosis of the disease course (which is dependent on the degree of malignancy or aggressiveness), and the choice of treatment tactics [8][9][10]. Additionally, the margins of the breast specimen may not be clear after primary surgery when invasive breast cancer is accompanied by ductal carcinoma in situ (DCIS) and/or lobular breast cancer. Therefore, there exists a significant clinical need for a better intraoperative margin assessment and targeted surgery in combination with targeted therapy which involves the use of cytotoxic agents, immunotherapy, and intraoperative radiation therapy [11].
At present, several methods of intraoperative margin assessment are part of standard care, but they all have significant clinical and technical limitations. Intraoperative tumor margin evaluation can be performed using frozen section analysis and imprint cytology [12]. However, these techniques have several limitations, such as resource intensity, technical difficulties in preparing adipose tissue, sampling only a small percentage of the surgical margins, and limited efficacy, especially for DCIS. Thus, there has been increased research interest in deploying new intraoperative high-resolution technologies for the delineation of tumor morphology and the precise estimation of breast cancer size and localization in real-time with the aim of achieving clear resection margins and therefore reducing the BCS re-excision rate [13]. The most promising techniques for intraoperative margin assessment are various optical methods due to their rapid acquisition of image data, label-free imaging technique, and penetration depths sufficient to meet consensus guidelines for establishing clear margins in BCS. These methods include handheld probe-based radiofrequency spectral analysis [14], quantitative diffuse reflectance imaging [15], confocal mosaicking microscopy [16], point spectroscopy [17], and optical coherence tomography (OCT) [18][19][20][21]. OCT is the most promising method for intraoperative assessment of breast cancer due to its high resolution (5-15 µm) and label-free imaging modality that yields real-time 3D images of tissue microstructure at high speed to depths of up to 2 mm in biological tissues. Furthermore, OCT can be miniaturized into a handheld or needle probe [22,23] to enable local diagnosis and assessment of resection margins with micrometer-resolution for targeted breast surgery. In earlier studies, traditional OCT was introduced as a highresolution imaging tool for differentiating between malignant tumors and fibro-adipose tissues [18,21,24]. In addition, OCT has been used for intraoperative assessment of resection margins during BCS [19][20][21][22]. At the same time, on conventional structural log-scale OCT images, dense malignant tissue, and normal connective tissue can be poorly differentiated due to their similar refraction and high scattering intensity. The interaction between parenchyma and stroma in invasive breast cancer determines its aggressive biological behavior, resistance to chemotherapeutic treatment, and various survival rates depending on the degree of the tumor malignancy [25]. Tumor stroma state assessment is fundamentally important because the collagen matrix plays a crucial role in breast cancer invasion and metastatic spreading [26,27]. It is necessary to search for new OCT criteria to increase the information content provided by differences in the optical properties of tissues in the diagnosis of breast cancer. These criteria may include quantification of tissue stiffness based on OCT-elastography [28,29], calculation of the attenuation coefficient [30], and birefringence contrast (specifically, differences in the collagen content) between non-tumorous and tumorous tissues based on polarization-sensitive (PS) OCT for intraoperative breast margin assessment [31,32]. Recently, there has been increasing interest in the automated quantitative characterization of human breast tissue types for surgical margin assessment based on machine learning segmentation of OCT [33,34] and PS OCT images [35,36]. It allows for identifying optimal threshold values of the optical and polarization criteria for classifying malignant tumors, fibro-adipose, and stromal tissue among human breast tissues.
Cross-polarization (CP) OCT is a variant of PS OCT that allows imaging of the initial polarization state changes due to both birefringence and cross-scattering in biological tissues which makes it possible to assess different states of the connective tissue [37]. Only orthogonally polarized backscattered light, which is mutually coherent with the incident wave, contributes to the cross-polarization image. A number of studies have shown that CP OCT is a promising method for differentiating tumorous from non-tumorous breast tissue [29,38] and human brain tissue [39], as well as for the diagnosis of bladder cancer [40]. CP OCT can also measure the attenuation coefficient, which can be helpful for improving and increasing the contrast between tumorous and non-tumorous breast tissues [41][42][43]. Various morphological features of breast cancer are anticipated to influence the polarization qualities of tumor tissue differently. This stimulates interest in the evaluation of the clinical potential of CP OCT techniques for breast cancer subtypes (malignancy grade) determination, detection of tumor borders, and improving the reliability of the negative resection margin assessment during BCS.
Despite technological development over the past decade, there are several challenges in performing high-level breast cancer tissue characterization using OCT data. They are caused by a wide range of morphological features exhibited by breast tissues under OCT examination, and a large number of individual 2D images are generated from volumetric OCT data requiring assessment. Therefore, there exists a need for new approaches to allow the quantitative analysis of OCT data and its automatization. In this study, we present for the first time attenuation coefficient maps in co-and cross-polarization channels in order to differentiate molecular breast cancer subtypes between each other and non-tumorous tissue. Furthermore, the determination of the thresholding of the attenuation coefficient values and their diagnostic significance in differentiating breast tissue types and detecting clusters of tumor cells was performed as the next step toward introducing OCT technology into the BCS protocol.
The main aims of this study are: (1) to apply mapping of the attenuation coefficients in 3D CP OCT for intraoperative differentiation of breast cancer tissues and resection margins assessment during BCS; (2) to determine the diagnostic accuracy of attenuation coefficients in co-and cross-polarization channels for delineation of non-tumorous and tumorous breast tissues.
Human Breast Specimens after BCS
The study was carried out on freshly excised human specimens of breast lesions obtained from 68 patients (age 30-76) with stage I or II (T1-2 N0-1 M0) breast cancer undergoing BCS (Figure 1a). Before BCS, five patients received neoadjuvant (preoperative) chemotherapy with the combined use of doxorubicin, cyclophosphamide, and paclitaxel according to the clinical guidelines (RUSSCO Clinical Practice Guidelines [44], corresponding to generally accepted guidelines [45]). All specimens contained tumorous and non-tumorous breast tissue. The size of the specimens varied from 0.5 × 1.0 × 0.5 cm to 1.0 × 2.0 × 0.5 cm (Figure 1b). morous tissue. Furthermore, the determination of the thresholding of the attenuation coefficient values and their diagnostic significance in differentiating breast tissue types and detecting clusters of tumor cells was performed as the next step toward introducing OCT technology into the BCS protocol.
The main aims of this study are: (1) to apply mapping of the attenuation coefficients in 3D CP OCT for intraoperative differentiation of breast cancer tissues and resection margins assessment during BCS; (2) to determine the diagnostic accuracy of attenuation coefficients in co-and cross-polarization channels for delineation of non-tumorous and tumorous breast tissues.
Human Breast Specimens after BCS
The study was carried out on freshly excised human specimens of breast lesions obtained from 68 patients (age 30-76) with stage I or II (T1-2 N0-1 M0) breast cancer undergoing BCS (Figure 1a). Before BCS, five patients received neoadjuvant (preoperative) chemotherapy with the combined use of doxorubicin, cyclophosphamide, and paclitaxel according to the clinical guidelines (RUSSCO Clinical Practice Guidelines [44], corresponding to generally accepted guidelines [45]). All specimens contained tumorous and non-tumorous breast tissue. The size of the specimens varied from 0.5 × 1.0 × 0.5 cm to 1.0 × 2.0 × 0.5 cm (Figure 1b). The procedure for intraoperative breast cancer assessment using CP OCT. (a) isolation of the breast cancer for the subsequent study; (b) the photo of a typical breast-tissue specimen; (с) CP OCT setup with the OCT probe; (d) special motorized table for convenient positioning of the OCT probe above the specimen and bringing it into contact with the tissue surface during scanning; (e) an example of 3D OCT data set on the computer monitor acquired from real-time tissue scanning. The dotted oval in panel (a) shows the region of the tumor to be resected in the patient's breast tissue, and a dashed rectangle in panel (b) shows the area from which CP OCT images were obtained.
The study was approved by the Institutional Review Board of the Privolzhsky Research Medical University and the Nizhny Novgorod Oncology Clinic (Protocol #10 from 28 September 2018 and Protocol #12 from 23 December 2021, Nizhny Novgorod, Russia). Informed consent was obtained from all participants enrolled in the study and/or their next of kin. The procedure for intraoperative breast cancer assessment using CP OCT. (a) isolation of the breast cancer for the subsequent study; (b) the photo of a typical breast-tissue specimen; (c) CP OCT setup with the OCT probe; (d) special motorized table for convenient positioning of the OCT probe above the specimen and bringing it into contact with the tissue surface during scanning; (e) an example of 3D OCT data set on the computer monitor acquired from real-time tissue scanning. The dotted oval in panel (a) shows the region of the tumor to be resected in the patient's breast tissue, and a dashed rectangle in panel (b) shows the area from which CP OCT images were obtained.
CP OCT Setup and Data Acquisition
The study was approved by the Institutional Review Board of the Privolzhsky Research Medical University and the Nizhny Novgorod Oncology Clinic (Protocol #10 from 28 September 2018 and Protocol #12 from 23 December 2021, Nizhny Novgorod, Russia). Informed consent was obtained from all participants enrolled in the study and/or their next of kin.
CP OCT Setup and Data Acquisition
A spectral domain CP OCT setup (Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia) was used in the study for intraoperative visualization of breast cancer tissue (Figure 1c) [37]. The setup combines traditional structural OCT and polarization modes. As a result, two images are recorded simultaneously: an image in co-polarization (reflected light with a polarization state parallel to the initial polarization state) and an image in cross-polarization (reflected light with changed polarization which is orthogonal to the initial one) channels (Figure 1e). The CP OCT setup has a common-path interferometric layout that operates at a 1.3 µm central wavelength with an axial resolution of~10 µm and a lateral resolution of~15 µm. It has a 20 kA-scans/sec scanning rate and performs 2D lateral scanning with a range of 256 × 256 B-scans to obtain a 3D distribution of backscattering light in two polarization states. The scanned volume of 2.4 × 2.4 × 1.5 mm was acquired over 26 s. A special motorized table was used to conveniently position the OCT probe above the specimen and bring it into contact with the tissue (Figure 1d). The 3D CP OCT images of the fresh breast tissue were acquired immediately after tumor resection (Figure 1e). The distribution of the optical attenuation coefficients was generated in real-time during the acquisition process. Each specimen was used to acquire 10-14 3D CP-OCT images (in 1-2 rows) depending on the size of the specimen and were taken with the image overlap to attain data from the entire surface being studied. The total scanning time along a 1-2 cm trajectory on a specimen was 3-5 minutes, depending on the number of stitched images. In total, greater than 600 3D CP OCT images with corresponding attenuation coefficient maps of breast tissue were obtained.
CP OCT Data Processing
Quantitative assessment of the 3D CP OCT images of breast tissue (characterized by high optical and morphological heterogeneity) was based on calculating two optical coefficients using a depth-resolved approach: the commonly used rate of attenuation in the co-(Att(co)) and cross-(Att(cross)) polarization channels. We expect that the additional usage of Att(cross) may provide us with more information about the morphological features of breast cancer tissue. The results were shown as en face color-coded maps to present the 2D distribution of these coefficients for different breast tissue types. The depth-resolved approach was applied for the quantitative assessment of OCT data in co-polarization. Such an approach was proposed in [46] under the assumption that the backscattering coefficient is proportional to the attenuation coefficient with the constant ratio between the two in the OCT depth range. As shown in [46], the attenuation coefficient could be estimated as follows: where I i is the noise-free OCT signal amplitude in the co-or cross-channel, µ i att is the corresponding specimen attenuation coefficient estimation, i-is the axial measurement number, i max -is the total number of pixels in the axial direction, and ∆ is the pixel size along the axial dimension.
Due to the presence of the non-zero additive noise in the experimental OCT signal amplitude distribution, the direct application of Equation (1) to the measured OCT signal will lead to the following estimation: where N i is the additive noise, µ i est is the attenuation coefficient estimation obtained from OCT data in the presence of noise. To mitigate the effect of the noise, in [42], it was proposed to reorganize Equation (2) as follows to improve the estimation: where N is the mean noise amplitude.
Such reorganization of the equation allows one to obtain the estimation of the attenuation coefficient according to Equation (1) from the estimation according to Equation (2): where SNR µ i is the local signal-to-noise ratio (SNR). One should note that according to [42], the depth-dependent sensitivity of the OCT system due to confocality and spectral roll-off will lead to the attenuation coefficient estimation bias, which will not exceed 10%; thus, these factors were not considered in the present study. One should also note that to reduce the effect of the speckle noise on the local attenuation coefficient estimation, the volume of measured OCT data was averaged in the local 3 × 3 × 3 pixel window before the attenuation coefficient estimations. Figure 2 shows the outline of the data processing used to calculate both attenuation coefficients in co-and cross-polarization channels from the OCT signal. Figure 2A shows examples of B-scans in co-and cross-polarization channels for breast tissue with corresponding random A-scans, demonstrating different speeds of attenuation of the signal with depth, which was calculated using an adaptive depth-resolved approach to the attenuation coefficient calculation. The estimated volume distributions of the optical coefficients were shown as 2D en face color-coded maps with averaging in the predefined depth range ( Figure 2B). For each collected A-scan set, both attenuation coefficients were calculated in the same predefined depth range starting from ∼105 µm below the tissue surface to a depth of ∼630 µm (Figure 2A). The choice of depths was determined by the construction of the most contrasting color-coded maps in this range, providing the best information about the morphology of breast tissue. The 2D en face color-coded maps were constructed based on the distribution of coefficient values for each OCT image in co-and cross-polarizations ( Figure 2B). Color-coded maps provide information about the tissue properties in the range of specified depths; therefore, they are easier to interpret, and they visualize the tumor in a more contrasted way than the original en face log-scale structural OCT images, for the analysis of which only one plane from the tissue surface is selected. The color settings of each map reflect the values of the corresponding attenuation coefficient, where areas with a high signal decay rate are represented by shades of yellow and red colors and areas with low attenuation by shades of blue and light blue.
Histological Study
After CP OCT imaging of the breast tissue, the scanning area was marked with histological ink for correlation with histology. En face histological sections were prepared through the marked areas so that their planes corresponded to the en face CP OCT and en face attenuation coefficient images. Multiple 7µm thick serial sections were taken from a single specimen block, with 35 µm discarded between levels and each section stained with hematoxylin and eosin (H&E) to determine the breast cancer type and additional sections stained with Van-Gieson's solution in order to assess collagen content. Histological samples were examined using the EVOS M7000 Imaging System (Thermo Fisher Scientific Inc., Waltham, MA, USA) in transmitted light. Histopathological analysis was performed
Histological Study
After CP OCT imaging of the breast tissue, the scanning area was marked with histological ink for correlation with histology. En face histological sections were prepared through the marked areas so that their planes corresponded to the en face CP OCT and en face attenuation coefficient images. Multiple 7µm thick serial sections were taken from a single specimen block, with 35 µm discarded between levels and each section stained with hematoxylin and eosin (H&E) to determine the breast cancer type and additional sections stained with Van-Gieson's solution in order to assess collagen content. Histological samples were examined using the EVOS M7000 Imaging System (Thermo Fisher Scientific Inc., Waltham, MA, USA) in transmitted light. Histopathological analysis was performed by a single experienced pathologist (S.K.).
The identified histological types of breast lesions include fibro-adenomatosis (n = 3), fibroadenoma (n = 3), micro-invasive carcinoma with ductal carcinoma in situ (DCIS) (n = 3), invasive lobular carcinoma (ILC) (n = 18), invasive carcinoma of no special type (NST) (n = 36) with high-grade or low-grade cancer cells and breast specimens after neoadjuvant chemotherapy with a partial pathological response (n = 5). All 68 breast lesions studied contained both tumor and peri-tumoral non-tumorous tissues (adipose and fibrous connective tissue). Special attention was paid to the tumor stroma of the mammary gland, which has different dystrophic changes in collagen fibers (fibrosis or hyalinosis). In addition, different molecular subtypes of breast cancer are characterized by a special structure of the connective tissue and its certain proportion with tumor (malignant) cells.
Accurate disease prognosis and optimization of individual therapy options require differentiation of molecular breast cancer subtypes. Therefore, an immunohistochemistry assessment, including the analysis of estrogen receptor (ER), progesterone receptor (PR), Her2/neu, and antigen Ki-67 expression, was carried out. All invasive and microinvasive breast cancer cases (n = 62) were divided into five molecular subtypes: To assess the tumor response to therapy, Residual Cancer Burden (RCB) histological grading system was used. RCB is an integral criterion widely used in clinical practice that allows the prediction of relapse-free survival based on the size of the residual breast tumor, cellularity, and the number and size of the affected lymph nodes [47].
Correlation of the CP OCT Data with Histology and Region of Interest Selection
The results of histopathology were compared with the corresponding CP OCT-based findings. After saving en face log-scale structural CP OCT images acquired at a depth of ∼150 µm from the tissue surface and constructing en face color-coded attenuation coefficient maps at a depth range of 105-630 µm, we selected one of the en face histological sections prepared from the same sample from a depth of~150-200 µm that best matched the attenuation coefficient maps. Despite the efforts to select the area of comparison as accurately as possible, complete correspondence between the en face CP OCT and en face attenuation coefficient maps on the one hand and the corresponding en face histological images, on the other hand, could not be achieved. We identified several reasons for this. First, the shape of the sample could be distorted or deformed during the histological preparation procedures due to the fixation in formalin, dehydration, paraffining, etc. [48]. Second, a slight difference in the histological slice positions (even~tens of microns) also affects the geometry of the breast tissue structural components and complicates the comparison with the OCT-based images of the fresh tissue. Third, although OCT images have a micrometerscale resolution (typically 10-15 µm) approaching that of one of histology [49], they do not typically reach the cellular level of histology; therefore, slight discrepancies in the size of individual structural components of the tissue are possible [50]. Given the above, only larger (~tens of microns or more) regions or components can be matched at this scale, while smaller structures (~several microns) may appear much more distorted and displaced. Therefore, in this study, to provide more accurate identification of certain breast tissue structures on color-coded attenuation coefficient maps, we scanned a large sample field and then analyzed the area consisting of 10-14 stitched en face CP OCT images and the corresponding attenuation coefficient maps. On such stitched en face OCT-based images, we analyzed the topological similarity of the mutual positions and geometrical sizes of the structural components of tumorous and non-tumorous breast tissue and the corresponding en face histology sections.
For a comparative numerical analysis of data in each color-coded attenuation coefficient map, corresponding histological regions of interest (ROIs) were selected for calculating the local attenuation coefficient. In doing so, we first selected ROIs based on the log-scale structural CP OCT images and then assessed the attenuation coefficient maps. A total of 165 ROIs for different breast tissue types in each attenuation coefficient map were selected for quantitative statistical analysis and were categorized into five types: adipose tissue (n = 33), the non-tumorous fibrous connective tissue of breast (n = 36), hyalinized tumor stroma (n = 27), fibrotic tumor stroma with low-density tumor cells (n = 34), high-density tumor cells (n = 35) (n-number of images). The typical size of each ROI was 300 × 300 µm, over which the median values of the attenuation coefficients were calculated, as shown in the color-coded maps by the colored squares ( Figure 2B). For a fairly limited number of the studied breast tissue images (n = 165), the ROI-windows on the color-coded attenuation coefficient maps were selected manually by exact matching with the histology. The manual ROI selection was used with the understanding that if a significant difference between these areas were observed, the next research stage would be moving towards more automated processing to develop an automatic procedure of breast tissue differentiation in the future.
Statistical Analysis
The variables for statistical inter-group comparison were the Att(co) and Att(cross) coefficients calculated from 3D CP OCT images. To evaluate the results of quantitative image processing, we used the mean and median values among all values of every optical coefficient calculated for each A-scan of a 3D CP OCT image. The results are expressed as Me [Q1; Q3], where Me is the median of the analyzed parameter and [Q1; Q3] are the 25th and 75th percentile values, respectively. Since this study includes a comparison of multiple groups, the Mann-Whitney U-test with Bonferroni correction was selected. In all cases, the differences were considered statistically significant when p < 0.05.
The assessment of the informative value and diagnostic capabilities of the studied method was carried out with an estimation of sensitivity (Se), specificity (Sp), and diagnostic accuracy (Ac). Based on the Se and Sp values, the receiver operating characteristic (ROC) curves were constructed, which show the dependence of the number of a true positive rate on the number of a false positive rate. For quantitative characterization of the ROC curves, we evaluated the area under the ROC curve (AUC), i.e., the area bounded by the ROC curve and the axis of the false positive rate [51]. The higher the AUC is, the better the classifier.
For statistical data processing, the program Statistical Package for Social Sciences 16.0 (SPSS, Chicago, IL, USA) was used. The ROC-related calculations of the sensitivity, specificity, diagnostic accuracy, and area under the ROC were performed with Prism 8.0.2 statistical software (GraphPad Software, San Diego, CA, USA).
Color-Coded Attenuation Coefficient Maps in Differentiation of Breast Cancer Subtypes
In the first part of this study, stitched en face color-coded attenuation coefficient maps in co-(Att(co)) and cross-(Att(cross)) polarization channels from benign (non-cancerous) fibroadenoma, four cases of invasive breast cancer with different grades of malignancy (having different molecular subtypes) and a case of invasive breast cancer after neoadjuvant chemotherapy were analyzed. First, we conducted their visual assessment vis-a-vis histological images and distinguished a characteristic pattern with the predominant color palette of the attenuation coefficient maps. Second, we discussed inter-and intra-tumor het-Cancers 2023, 15, 2663 9 of 23 erogeneity of various breast cancer subtypes and the features of their boundaries and also noted the advantages of the attenuation coefficient imaging compared to en face log-scale structural CP OCT images. Figure 3A shows the case of benign breast condition (fibroadenoma) containing both glandular tissue (ducts and lobules) and non-tumorous fibrous connective tissue. Histologically it appears quite heterogeneous: most of the fibroadenoma is characterized by the growth of fibrous connective tissue around the atypical glandular structures of the mammary gland, squeezing of the ducts, which take the form of narrow slits, and atypical ductal hyperplasia. (Figure 3(a1)). These structures have opposite scattering properties: the fibrous connective tissue enhances the OCT signal due to densely packed collagen bundles; the ducts and lobules, on the other hand, transmit light, and the corresponding areas in en face CP OCT images appear dark (with a greatly reduced signal level or its absence) (Figure 3(a2,a3)). Narrowed lumen of the ducts due to fibrosis and reduced lobules are too small to be detected in log-scale CP OCT images, but if they are distinguished among fibrous connective tissue, they look more contrasted in cross-polarization images (Figure 3(a3)). The en face color-coded attenuation coefficient maps (Figure 3(a4,a5)) seem to be more contrasted in the visualization of heterogeneity of fibroadenoma compared to log-scale CP OCT images (Figure 3(a2,a3)). For dense fibrous connective tissue, higher attenuation coefficient values are typical: in Att(co) coefficient maps more than 5.0 mm −1 (Figure 3(a4), red and orange colors), in Att(cross) coefficient maps more than 5.0 mm −1 (Figure 3(a5), red and yellow colors); regions of the ductal and lobular structures expressed with decreased values of attenuation coefficient values: in Att(co) coefficient maps less than 5.0 mm −1 (Figure 3(a4), yellow color), in Att(cross) coefficient maps less than 3.0 mm −1 (Figure 3(a5), bright blue and dark blue colors). Att(cross) coefficient maps compared to Att(co) coefficient maps distinguish non-tumorous fibrous connective tissue from ducts/lobules structures in a more contrasted way. Figure 3B, Figure 4, and Figure 5 demonstrate several examples of the invasive breast cancer subtypes, which differ in the quantitative ratio of the connective tissue and tumor cell clusters, the localization of these clusters, and the different tumor cell density. The presented examples also contain boundaries with non-tumorous adipose and fibrous connective tissue, which allow for the evaluation of the quality of OCT visualization of tumor margins using the calculation of attenuation coefficients and without them.
To generalize, normal adipose tissue in en face log-scale CP OCT images in co-and crosschannels are characterized by a special cellular structure and have a very low level of the OCT signal with bright spots ( Figure 3B, Figure 5, and Figure 6), tumor stroma has a considerably higher level of the OCT signal in cross-channel with commonly distinguished densely packed collagen bundles (Figures 3B, 4 and 5). Breast cancer tissue has a low level of the OCT signal and, as a rule, is clearly distinguished in the log-scale CP OCT images only in the presence of large foci of tumor cells (Figures 4 and 5). In other cases, it is rather poorly differentiated, especially at the low density of tumor cells in the fibrotic tumor stroma ( Figures 3B and 5). Figure 3B shows typical CP OCT findings for low-grade invasive breast cancer of no special type (Luminal A molecular subtype), which is the most common and less aggressive subtype with the best prognosis for treatment. Histologically, this tumor subtype has the most pronounced tumor stroma, consisting of densely intertwined collagen fibers, among which there are numerous foci of tumor cells in the form of chains or clusters of low density (Figure 3(b1)). Comparing en face log-scale CP OCT images (Figure 3(b2,b3)) and en face color-coded (Figure 3(b4,b5)) maps of this state, we can conclude that Att(cross) coefficient map demonstrates a more pronounced contrast between non-tumorous fibrous tissue and infiltrating tumor cells. Areas of decreased values of Att(cross) coefficient (less than 4.0 mm −1 ) corresponded to low-density tumor cells in the fibrotic tumor stroma (Figure 3 (Figure 3(a4), yellow color), in Att(cross) coefficient maps less than 3.0 mm −1 ( Figure 3(a5), bright blue and dark blue colors). Att(cross) coefficient maps compared to Att(co) coefficient maps distinguish non-tumorous fibrous connective tissue from ducts/lobules structures in a more contrasted way. Figures 3B, 4, and 5 demonstrate several examples of the invasive breast cancer subtypes, which differ in the quantitative ratio of the connective tissue and tumor cell clusters, the localization of these clusters, and the different tumor cell density. The presented examples also contain boundaries with non-tumorous adipose and fibrous connective tissue, which allow for the evaluation of the quality of OCT visualization of tumor margins using the calculation of attenuation coefficients and without them.
To generalize, normal adipose tissue in en face log-scale CP OCT images in co-and cross-channels are characterized by a special cellular structure and have a very low level It has been found that benign fibroadenoma ( Figure 3A) and invasive carcinoma of a low degree of malignancy ( Figure 3B) may have similar scattering and polarization properties, which also similarly affect the values of Att(co) and Att(cross) coefficients. Figure 4 shows representative stitched en face log-scale CP OCT images and stitched en face color-coded attenuation coefficient maps for high-grade micro-invasive carcinoma and ductal carcinoma in situ (DCIS). Histologically, this case is characterized by the presence of ducts filled with tumor cells with clear boundaries, which are surrounded by fibrosis and hyalinized stroma. (Figure 4a). The presence of hyalinized stroma indicates secondary deeper (dystrophic) changes in the tumor stroma. At the same time, collagen fibers form dense homogeneous structures characterized by a prominent accumulation of proteoglycans and glycosaminoglycans [52,53]. In this case, in the log-scale OCT images in co-and cross-channels (Figure 4b,c), a fairly heterogeneous structure is visualized. Stromal tissue has a high level of OCT signal, and clusters of tumor cells with high density have a low level of OCT signal. Comparing the two color-coded attenuation coefficient maps, Att(cross) (Figure 4e Figure 5A presents a common example of CP OCT examination of high-grade inva sive carcinoma of no special type (Luminal B(Her2Neo-) molecular subtype). This subtype of breast cancer, compared to the Luminal A subtype, is more aggressive and is associated with a worse treatment prognosis. It can be observed from the stitched en face log-scale CP OCT images that this breast cancer is characterized by a low and homogeneous level o OCT signal in the co-and cross-channels ( Figure 5(a2,a3)), which is histologically con firmed by significant disorganization of the collagen fibers' orientation and a significan increase in the tumor cell area ( Figure 5(a1)). A higher concentration of tumor cells in the tumor tissue leads to a more homogeneous distribution of lower attenuation values of the Att(co) (Figure 5(a4)) and Att(cross) (Figure 5(a5)) coefficients in color-coded maps in com parison with Luminal A subtype ( Figure 3B). Comparing en face log-scale ( Figure 5(a2,a3) and en face color-coded ( Figure 5(a4,a5)) images of this state, we can conclude that color coded Att(cross) map also makes it possible to identify the areas of infiltrating tumor cells with more contrast. It was shown that the areas of high accumulation of tumor cells are characterized by the lowest (less than 2 mm −1 ) values of the Att(cross) coefficient ( Figure 5(a5), indicated with a white line). In addition, in this case, in the en face color-coded at tenuation coefficient maps, the border between the tumorous and non-tumorous breas tissue was less clearly visualized by each coefficient (Figure 5(a4,a5)). This is due to the fact that high tumor cell density and adipose tissue are characterized by close low values of the coefficients (less than 4 mm −1 ). In this case, when looking for the resection margin it is better to focus on standard log-scale CP OCT images, on which the adipose tissue has Figure 5A presents a common example of CP OCT examination of high-grade invasive carcinoma of no special type (Luminal B(Her2Neo-) molecular subtype). This subtype of breast cancer, compared to the Luminal A subtype, is more aggressive and is associated with a worse treatment prognosis. It can be observed from the stitched en face log-scale CP OCT images that this breast cancer is characterized by a low and homogeneous level of OCT signal in the co-and cross-channels ( Figure 5(a2,a3)), which is histologically confirmed by significant disorganization of the collagen fibers' orientation and a significant increase in the tumor cell area ( Figure 5(a1)). A higher concentration of tumor cells in the tumor tissue leads to a more homogeneous distribution of lower attenuation values of the Att(co) (Figure 5(a4)) and Att(cross) (Figure 5(a5)) coefficients in color-coded maps in comparison with Luminal A subtype ( Figure 3B). Comparing en face log-scale ( Figure 5(a2,a3)) and en face color-coded ( Figure 5(a4,a5)) images of this state, we can conclude that color-coded Att(cross) map also makes it possible to identify the areas of infiltrating tumor cells with more contrast. It was shown that the areas of high accumulation of tumor cells are characterized by the lowest (less than 2 mm −1 ) values of the Att(cross) coefficient ( Figure 5(a5), indicated with a white line). In addition, in this case, in the en face color-coded attenuation coefficient maps, the border between the tumorous and non-tumorous breast tissue was less clearly visualized by each coefficient (Figure 5(a4,a5)). This is due to the fact that high tumor cell density and adipose tissue are characterized by close low values of the coefficients (less than 4 mm −1 ).
In this case, when looking for the resection margin, it is better to focus on standard log-scale CP OCT images, on which the adipose tissue has a special cellular structure, which is well distinguished from the homogeneous low level of the OCT signal in the area of tumor cells of high density ( Figure 5(a2,a3)).
Cancers 2023, 15, x 13 of 24 a special cellular structure, which is well distinguished from the homogeneous low level of the OCT signal in the area of tumor cells of high density ( Figure 5(a2,a3)). The cases of even more aggressive non-luminal and triple-negative breast cancers with high histologic grade and poor prognostic factors are characterized by similar scattering properties in log-scale CP OCT images, and the distribution of the Att(co) and Att(cross) coefficients was the same as for the Luminal B subtypes ( Figure 5A). The homogeneous distribution of low values of attenuation coefficient areas for both breast cancer subtypes is consistent with the histologically confirmed significant disorganization of the collagen fiber orientation and a significant increase in the area of tumor cells. Figure 5B presents an example of CP OCT findings for high-grade invasive lobular carcinoma (Luminal B (Her2Neo-) subtype), which is the second most common type of invasive breast cancer and which is characterized by an increased tendency to metastasize in comparison to Luminal A. Histologically, this type of breast cancer has a heterogeneous structure with an equal proportion of tumor cells of different density and tumorous stroma with a different density of collagen fibers ( Figure 5(b1)). Stitched en face CP OCT images in cross-channel were also characterized by high heterogeneity of the OCT signal distribution ( Figure 5(b3)), which similarly affected the Att(cross) coefficient values (Figure 5(b5)). It should be noted that in the stitched en face Att(co) coefficient map, it is difficult to distinguish features of the tumor structure ( Figure 5(b4)). By contrast, on the stitched en face Att(cross) coefficient map, it is easy to distinguish the different breast tissue The cases of even more aggressive non-luminal and triple-negative breast cancers with high histologic grade and poor prognostic factors are characterized by similar scattering properties in log-scale CP OCT images, and the distribution of the Att(co) and Att(cross) coefficients was the same as for the Luminal B subtypes ( Figure 5A). The homogeneous distribution of low values of attenuation coefficient areas for both breast cancer subtypes is consistent with the histologically confirmed significant disorganization of the collagen fiber orientation and a significant increase in the area of tumor cells. Figure 5B presents an example of CP OCT findings for high-grade invasive lobular carcinoma (Luminal B (Her2Neo-) subtype), which is the second most common type of invasive breast cancer and which is characterized by an increased tendency to metastasize in comparison to Luminal A. Histologically, this type of breast cancer has a heterogeneous structure with an equal proportion of tumor cells of different density and tumorous stroma with a different density of collagen fibers ( Figure 5(b1)). Stitched en face CP OCT images in cross-channel were also characterized by high heterogeneity of the OCT signal distribution ( Figure 5(b3)), which similarly affected the Att(cross) coefficient values ( Figure 5(b5)). It should be noted that in the stitched en face Att(co) coefficient map, it is difficult to distinguish features of the tumor structure ( Figure 5(b4)). By contrast, on the stitched en face Att(cross) coefficient map, it is easy to distinguish the different breast tissue types and the tumor node from the surrounding adipose tissue (Figure 5(b5)). Areas of adipose tissue and high-density clusters of tumor cells were characterized by the lowest Att(cross) coefficient (less than 2 mm −1 ) values, areas of low-density tumor cells in the fibrous tumor stroma were characterized by lower Att(cross) coefficient (about 2-3 mm −1 ) values and hyalinized tumor stroma formed by densely packed collagen fibers had the highest Att(cross) coefficient (more than 6 mm −1 ) values ( Figure 5(b5)). Adipose tissue is better visualized in the structural OCT images compared to Att(co) and Att(cross) coefficient maps.
Additionally, we present the results of the CP OCT study of breast cancer specimens (Luminal B (Her2Neo+) molecular subtype) from patients post neoadjuvant chemotherapy. In the case of incomplete tumor response to therapy, CP OCT could detect residual tumor cells in the tumor bed ( Figure 6). In the en face histological images, they corresponded to the multiple foci of fibrous stroma, adipose tissue, as well as areas of single small clusters of residual tumor cells (Figure 6a). In en face log-scale CP OCT images, a high level of OCT signal in the areas of fibrosis and small areas with low levels of OCT signal from clusters of residual tumor cells were observed (Figure 6b,c). Fibrous stroma formed by densely packed collagen fibers is characterized by the dominance of high values (more than 6 mm −1 ) of both coefficients in the en face attenuation coefficient maps (Figure 6d types and the tumor node from the surrounding adipose tissue (Figure 5(b5)). Areas of adipose tissue and high-density clusters of tumor cells were characterized by the lowest Att(cross) coefficient (less than 2 mm −1 ) values, areas of low-density tumor cells in the fibrous tumor stroma were characterized by lower Att(cross) coefficient (about 2-3 mm −1 ) values and hyalinized tumor stroma formed by densely packed collagen fibers had the highest Att(cross) coefficient (more than 6 mm −1 ) values ( Figure 5(b5)). Adipose tissue is better visualized in the structural OCT images compared to Att(co) and Att(cross) coefficient maps. Additionally, we present the results of the CP OCT study of breast cancer specimens (Luminal B (Her2Neo+) molecular subtype) from patients post neoadjuvant chemotherapy. In the case of incomplete tumor response to therapy, CP OCT could detect residual tumor cells in the tumor bed ( Figure 6). In the en face histological images, they corresponded to the multiple foci of fibrous stroma, adipose tissue, as well as areas of single small clusters of residual tumor cells (Figure 6a). In en face log-scale CP OCT images, a high level of OCT signal in the areas of fibrosis and small areas with low levels of OCT signal from clusters of residual tumor cells were observed (Figure 6b,c). Fibrous stroma formed by densely packed collagen fibers is characterized by the dominance of high values (more than 6 mm −1 ) of both coefficients in the en face attenuation coefficient maps (Figure 6d, To conclude, the main result of this part of the study is that mapping the attenuation coefficients for various breast cancer subtypes significantly increases the amount of information available from the CP OCT data compared to the unprocessed log-scale CP OCT images. Tumor cell areas and fibrous connective breast tissue in log-scale structural OCT images can have similar scattering and polarization properties and, therefore, may not be contrasted. However, the calculation of local mapping of Att(cross) coefficient values makes it possible to differentiate these tissues. The calculation of the Att(cross) coefficient in comparison with Att(co) coefficient provided better contrast for the visualization of different breast tissue types (adipose tissue, non-tumorous fibrous connective tissue, hyalinized tumor stroma, high-density tumor cells and low-density tumor cells in fibrotic tumor stroma). Based on Att(cross) coefficient mapping, a considerable difference between breast cancer subtypes was revealed. This is very valuable for the exact identification of the resection boundaries of breast cancer and ensuring the cleanliness of the resection margins as well as for assessing the efficacy of therapy.
Comparison of Attenuation Coefficients for Breast Tissue Types Differentiation
During BCS, the main task is to detect and differentiate areas of tumor cells (regardless of their density) not only from adipose tissue but also from fibrous connective tissue to ensure the negative resection margin of the resection. Therefore, in the quantitative assessment, the non-tumorous breast tissue group was divided into the adipose and fibrous connective tissue of the mammary gland. Breast cancer tissue was divided into hyalinized tumor stroma, fibrotic tumor stroma with low-density tumor cells, and areas of tumor cells of high density. Here tumorous tissue statistically differs from non-tumorous fibrous connective tissue, only accounting for Att(cross) coefficient (p < 0.0001) ( Figure 7B). The calculations also demonstrate that with high statistical significance, it is possible to differentiate (p < 0.0001) hyalinized tumor stroma from other subtypes of breast tissue using Att(cross) coefficient ( Figure 7B; light pink boxes). The increase in the Att(cross) coefficient in the hyalinized tumor stroma to 4.7 [4.6; 5.1] mm −1 is most likely related to the increase in the density of the arrangement of collagen fibers compared to non-tumorous connective tissue (see Figure 5B) which have values of 3.7 [3.4; 4.2] mm −1 ( Figure 7B). Non-tumorous connective tissue is characterized by less dense and more ordered collagen fiber location, which leads to a lower signal attenuation rate in depth in the cross-channel (see Figure 3). In the case of the defibering of collagen fibers in tumor stroma and their fragmentation with an increase in the density of tumor cells, the Att(cross) coefficient values decrease significantly (see Figure 5A).
The calculations also demonstrated that adipose tissue is characterized by the lowest Att(co) and Att(cross) coefficient values and constitute 3.9 [3.5; 4.2] mm −1 and 0.9 [0.7; 1.2] mm −1 , respectively ( Figure 7A,B; light gray boxes). A high spread of values is associated with relatively high differences between the refractive indices of the fat cell cytoplasm and membrane. This specific structure of adipose tissue is well visualized in the log-scale OCT images (see Figures 3B and 5). While there is no need to differentiate this tissue from other subtypes of breast tissue numerically, according to Att(co) coefficient values, adipose tissue statistically differs from all other four breast tissue types (p < 0.05). The calculations also demonstrated that adipose tissue is characterized by the lowe Att(co) and Att(cross) coefficient values and constitute 3.9 [3.5; 4.2] mm −1 and 0.9 [0.7; 1. mm −1 , respectively ( Figure 7A,B; light gray boxes). A high spread of values is associate with relatively high differences between the refractive indices of the fat cell cytoplasm an membrane. This specific structure of adipose tissue is well visualized in the log-scale ОC images (see Figures 3B and 5). While there is no need to differentiate this tissue from oth subtypes of breast tissue numerically, according to Att(co) coefficient values, adipose ti sue statistically differs from all other four breast tissue types (p < 0.05).
To conclude, numerical estimation of Att(co) and Att(cross) coefficients values of d ferent breast tissue types demonstrated that Att(cross) coefficient provides statistically si nificant differences for most of the analyzed tissue types, in particular for differentiatio of tumor cell areas (regardless of their density) from non-tumorous fibrous connecti tissue.
Diagnostic Accuracy of Attenuation Coefficients for Breast Tissue Type Differentiation
In the final part of this study, the diagnostic accuracy and optimal thresholds of va ues of both attenuation coefficients were identified to detect areas of tumor cells (regar less of their density) in the non-tumorous breast tissue. To identify optimal threshol (Pth) of attenuation coefficients values using quantitative ROC analysis, we considere together two subtypes of tumorous tissues containing tumor cells (high-density tum cells and low-density tumor cells in fibrotic tumor stroma) and named them tumor ce areas because, in BCS, it is important to ensure a tumor cell-free resection margin. Th choice of the CP OCT diagnostic parameter (attenuation coefficients values) for the diffe entiation of (1) tumor cell areas from adipose tissue, (2) tumor cell areas from non-tumo ous fibrous connective tissue, and (3) hyalinized tumor stroma from non-tumorous fibro connective tissue is a trade-off between the sensitivity (Se) and specificity (Sp) rates. Figure 8A shows ROC curves for the detection of tumorous tissue in non-tumoro breast tissue using the attenuation coefficient in the co-channel. For differentiation b tween tumor cell areas and adipose tissue Att(co) coefficient threshold (Pth) equal to 4 mm −1 was proposed. This threshold is applicable at Se = 84% and Sp = 84% (Ac = 83%). Th To conclude, numerical estimation of Att(co) and Att(cross) coefficients values of different breast tissue types demonstrated that Att(cross) coefficient provides statistically significant differences for most of the analyzed tissue types, in particular for differentiation of tumor cell areas (regardless of their density) from non-tumorous fibrous connective tissue.
Diagnostic Accuracy of Attenuation Coefficients for Breast Tissue Type Differentiation
In the final part of this study, the diagnostic accuracy and optimal thresholds of values of both attenuation coefficients were identified to detect areas of tumor cells (regardless of their density) in the non-tumorous breast tissue. To identify optimal thresholds (Pth) of attenuation coefficients values using quantitative ROC analysis, we considered together two subtypes of tumorous tissues containing tumor cells (high-density tumor cells and low-density tumor cells in fibrotic tumor stroma) and named them tumor cell areas because, in BCS, it is important to ensure a tumor cell-free resection margin. The choice of the CP OCT diagnostic parameter (attenuation coefficients values) for the differentiation of (1) tumor cell areas from adipose tissue, (2) tumor cell areas from non-tumorous fibrous connective tissue, and (3) hyalinized tumor stroma from non-tumorous fibrous connective tissue is a trade-off between the sensitivity (Se) and specificity (Sp) rates. Figure 8A shows ROC curves for the detection of tumorous tissue in non-tumorous breast tissue using the attenuation coefficient in the co-channel. For differentiation between tumor cell areas and adipose tissue Att(co) coefficient threshold (Pth) equal to 4.2 mm −1 was proposed. This threshold is applicable at Se = 84% and Sp = 84% (Ac = 83%). The area under the ROC curve (AUC) was equal to 0.91. For differentiation between tumor cell areas and non-tumorous fibrous connective tissue Att(co) coefficient threshold (Pth) equal to 4.8 mm −1 was proposed. This threshold provides Se = 65% and Sp = 91% (Ac = 78%). The area under the ROC curve (AUC) was equal to 0.83. Rather low values of diagnostic accuracy indicate a clear insufficiency of this coefficient in the differentiation of these types of tissues. In addition, with this coefficient, it is practically impossible to precisely differentiate between non-tumorous fibrous connective tissue and hyalinized tumor stroma. The area under the ROC curve (AUC) was equal to 0.56.
Similarly to Figure 8A,B shows the ROC curves for the detection of tumor tissue using Att(cross) coefficient. The Att(cross) coefficient threshold value (Pth) was chosen at 1.1 mm −1 for the differentiation of adipose tissue from tumor cell areas, resulting in Se = 68%, Sp = 66%, and Ac = 67%. The area under the ROC curve (AUC) was equal to 0.71. The Att(cross) coefficient threshold value (Pth) was chosen equal to 3.1 mm −1 for the differentiation of non-tumorous fibrous connective tissue from tumor cell areas, resulting in Se = 98%, Sp = 99%, and Ac = 99%. The area under the ROC-curve (AUC) was equal to 0.99. In addition, the use of this coefficient allows good differentiation of non-tumorous fibrous connective tissue from hyalinized tumor stroma of the breast with high Se = 96%, Sp = 87%, and Ac = 91%. The area under the ROC-curve (AUC) was equal to 0.97. to 4.8 mm was proposed. This threshold provides Se = 65% and Sp = 91% (Ac = 78%). The area under the ROC curve (AUC) was equal to 0.83. Rather low values of diagnostic accuracy indicate a clear insufficiency of this coefficient in the differentiation of these types of tissues. In addition, with this coefficient, it is practically impossible to precisely differentiate between non-tumorous fibrous connective tissue and hyalinized tumor stroma. The area under the ROC curve (AUC) was equal to 0.56.
Similarly to Figure 8A,B shows the ROC curves for the detection of tumor tissue using Att(cross) coefficient. The Att(cross) coefficient threshold value (Pth) was chosen at 1.1 mm −1 for the differentiation of adipose tissue from tumor cell areas, resulting in Se = 68%, Sp = 66%, and Ac = 67%. The area under the ROC curve (AUC) was equal to 0.71. The Att(cross) coefficient threshold value (Pth) was chosen equal to 3.1 mm −1 for the differentiation of non-tumorous fibrous connective tissue from tumor cell areas, resulting in Se = 98%, Sp = 99%, and Ac = 99%. The area under the ROC-curve (AUC) was equal to 0.99. In addition, the use of this coefficient allows good differentiation of non-tumorous fibrous connective tissue from hyalinized tumor stroma of the breast with high Se = 96%, Sp = 87%, and Ac = 91%. The area under the ROC-curve (AUC) was equal to 0.97. Table 1 summarizes the diagnostic performance of both co-and cross-channel attenuation coefficients for the detection of different breast tissue types. It demonstrates that at the chosen thresholds of optical coefficient values, the highest diagnostic accuracy of separation of tumor cell areas and adipose tissue is 83% when calculating the Att(co) coefficient. However, in the differentiation between tumor cell areas and non-tumorous fibrous tissue, diagnostic accuracy for Att(cross) coefficient was much higher-99% against 78% for Att(co) coefficient. In addition, the Att(cross) coefficient allows better differentiation of non-tumorous fibrous tissue from hyalinized tumor stroma-diagnostic accuracy is 91% against 58% for the Att(co) coefficient. Table 1 summarizes the diagnostic performance of both co-and cross-channel attenuation coefficients for the detection of different breast tissue types. It demonstrates that at the chosen thresholds of optical coefficient values, the highest diagnostic accuracy of separation of tumor cell areas and adipose tissue is 83% when calculating the Att(co) coefficient. However, in the differentiation between tumor cell areas and non-tumorous fibrous tissue, diagnostic accuracy for Att(cross) coefficient was much higher-99% against 78% for Att(co) coefficient. In addition, the Att(cross) coefficient allows better differentiation of non-tumorous fibrous tissue from hyalinized tumor stroma-diagnostic accuracy is 91% against 58% for the Att(co) coefficient.
Discussion
During BCS of invasive breast cancer, the benefits of novel imaging technologies are selfevident for intraoperative detection of any residual tumor tissue (without or after neoadjuvant treatment) in real-time. OCT appears to be a very promising tool for the routine surgical practice of an oncologist due to the advantages of this method, such as safety (a near-infrared light source is used), accuracy (micrometer-scale resolution~10-15 µm), label-free imaging and high speed of obtaining 2D or 3D images of the subsurface tissue structure in real-time to a depth of 2 mm. The continuous improvements of the OCT technology in the direction of imaging speed, sensitivity, the development of functional modalities, and emerging endoscopic and handheld scanning probes, as well as OCT-signal data processing, have been the cause of the increased interest in OCT [22,23,49,54]. Furthermore, some studies apply machine learning and artificial intelligence to improve breast tissue type differentiation [33][34][35][36]. All these advances allow optimization of the analysis of large amounts of OCT data and make a visual interpretation of the OCT results more convenient for medical practitioners. Overall, the OCT method, compared to other optical methods, shows the most promise because it can collect information in a short time and achieve penetration depths required to meet current consensus definitions of clean margins. Compared to routine histological analysis requiring 3-5 days, OCT has the potential to be used for rapid (within several minutes) intraoperative evaluation of tumor boundaries and negative surgical margins.
Earlier studies demonstrated that quantitative evaluation of OCT images with the determination of attenuation coefficients provides substantially improved contrast over its qualitative analysis or description, delineating nuanced features within breast cancers and potentially improving resection margin assessment [30,42]. However, variations in the attenuation coefficient calculated based on conventional structural OCT introduced by different breast tissue types within dense benign and tumorous tissue could contribute to the overlap in the attenuation coefficient values between these groups, which complicates their differentiation. CP OCT technology detects polarization-dependent changes and, therefore, represents a promising tool for the assessment of the state of connective tissue in breast cancer and its differentiation from areas of tumor cell clusters based on the registration of cross-polarization backscattering of the OCT signal [29,38]. In addition, in this study, a more recent method of measuring the attenuation coefficient, known as the depth-resolved approach [42,43,46], was used, while in other studies on the evaluation of attenuation coefficient in breast tissue linear fitting method was used [30]. The advantages of the depth-resolved approach involve avoiding axial resolution deterioration from the fitting range and inaccurate attenuation coefficient measurements due to a poor choice of fitting range. In addition, it is worth noting that averaging across the A-scans could significantly reduce the speckle noise, thus improving the estimation of the attenuation coefficient for the cost of the reduced resolution. In the present study, the trade-off between the accuracy of the attenuation coefficient estimation and the resolution of the resulting distributions was resolved by applying averaging in the local 3 × 3 × 3 pixel window before the attenuation coefficient estimations.
Overall, in this study, using a depth-resolved approach for attenuation coefficient calculation and building color-coded Att(co) and Att(cross) coefficient maps allowed detailed visualization of breast cancer specimens by providing a much higher contrast between different breast tissues within various breast cancer subtypes compared to conventional structural OCT images and linear fitting for attenuation coefficient calculation. In our study, the construction of Att(cross) coefficient maps allowed us not only to sharpen the contrast of breast lesions but also to improve correspondence to histology data. In 2014, the American Society of Surgical Oncology stated that tumor (invasive cancer or DCIS) not touching the ink at the specimen edge is acceptable to prevent local recurrence [55,56]. Based on the results of two separate meta-analyses, the current consensus states that the negative margin for invasive breast cancer is the absence of tumor cells in the inked edge of the resected specimen, and for DCIS, the appropriate margin is greater than 2 mm. The evaluation of the Att(cross) coefficient allowed for the differentiation of more precisely five breast tissue types (adipose tissue, non-tumorous fibrous connective tissue, hyalinized tumor stroma, high-density tumor cells, and low-density tumor cells in fibrotic tumor stroma) and to achieve a more contrasted border between tumorous and non-tumorous tissue as well as a correlation between the morphological structure of the tumor and the scattering/polarization properties of the tissue. Specifically, based on the calculation of attenuation coefficients, it was shown that attenuation imaging in cross-channel facilitates the identification of variations in tumor cell density from the surrounding tumor stroma with a different state. This is likely because the polarization mode used here is connective tissue-targeted; this makes it possible to assess the features of the state of the connective tissue of the breast, visualize changes in the tumor stroma and clearly separate it from clusters of tumor cells.
The observations reported in Figures 3-6 regarding the relationship between breast tissue morphology and attenuation patterns showed a wide range of breast tissue types appearing as different patterns on the attenuation coefficient maps. Figure 7 summarizes the findings of this study and demonstrates statistically significant differences between the various breast tissue types. As a result, this made it possible not only to distinguish large clusters of tumor cells from the stromal tissue but, with high statistical significance (p < 0.0001), to separate cells with low density in the fibrotic tumor stroma from the surrounding non-tumorous fibrous connective tissue. The Att(co) coefficient box plots in Figure 7A show a large overlap between non-tumorous fibrous tissue and low-density tumor cells in fibrotic tumor stroma and hyalinized tumor stroma. Hypothetically, this is due to low-density cells and dense connective tissue having a similar scattering level. At the same time, the calculation of the Att(cross) coefficient ( Figure 7B), detecting the polarization properties of the tissue, showed higher statistically significant differences between all the studied breast tissues (p < 0.0001). Measurements of tumor cell density may assist intra-operationally by ensuring a clean margin of the resection during BCS and determining the pathological response of cancer to neoadjuvant chemotherapy.
In addition, color-coded attenuation coefficient maps of these specimens demonstrated that various degenerative changes of the tumor stroma (fibrosis or hyalinosis) could also be differentiated. The presence of hyalinosis of the tumor stroma indicates its secondary deeper (degenerative) changes [52]. During the study, it was found that areas of hyalinosis of collagen fibers are characterized by a decrease in Att(co) and an increase in Att(cross) coefficient compared to the fibrous tissue. Earlier studies demonstrated that denser and highly scattering malignant tumor tissue could be difficult to differentiate from normal connective tissue due to similar optical refractive index and scattering intensity [57]. In this regard, the additional use of median values of the Att(cross) coefficient can be useful in cases where predominated colors on the optical maps reflect the values of the coefficients between adjacent tissue types (low-density tumor cells/fibrosis or hyalinized stroma). Previously, when calculating the traditional attenuation coefficient, such significant distinctions of breast tissue types were not shown. This is the first study where local attenuation coefficients of different breast tissues were identified in different molecular and morphologic breast cancer subtypes. In the case of high-grade breast cancer subtypes associated with poor treatment prognosis, the cancer zones demonstrate decreased attenuation coefficient with a fairly homogeneous spatial distribution. In these histological data, this corresponds to a decrease in the content of the stromal component and the predominance of regions of tumor cells. In the case of low-grade breast cancer subtypes and benign fibroadenoma associated with good treatment prognosis, the tumor cell areas demonstrate an increased attenuation coefficient with a fairly heterogeneous spatial distribution. In these histological data, this corresponds to predominance in the content of the stromal component and a small number of regions in tumor cells or ducts/lobules, each of which may give rise to a different Att(cross) coefficient. It is important to take into account that in low-grade invasive carcinoma of no special type on the border with normal tissue, the density of tumor cells can be considerably reduced; also, not only adipose tissue but also fibrous tissue with areas of hyalinosis can constitute the border. Therefore, characteristic values of the attenuation coefficient reflect the changes in the morphological structure of different breast cancers and can be used as a potential biomarker to predict the molecular subtype of breast cancer. This can facilitate more precise breast cancer border identification, as well as assessment of the resection margin.
For the first time, we evaluated the diagnostic ability of Att(co) and Att(cross) coefficients to differentiate tumorous from non-tumorous breast tissue. In this study, optimal threshold values of both attenuation coefficients for detecting tumor cell areas and hyalinized tumor stroma among the surrounding non-tumorous adipose or connective tissues were determined. Overall, for the studied breast cancer, diagnostic parameters for the Art(cross) coefficient demonstrated better values compared to Att(co) coefficient. The CP OCT-based Att(cross) coefficient was able to detect areas of tumor cells with 99% diagnostic accuracy, 99% sensitivity, and 99% specificity in the surrounding non-tumorous fibrous connective tissue (see Figure 8, red line). Att(co) coefficient was able to detect areas of tumor cells with 99% diagnostic accuracy, 99% sensitivity, and 99% specificity in the surrounding adipose tissue (see Figure 8, blue line). In addition, it was demonstrated that Att(cross) coefficient allows for better differentiation of non-tumorous fibrous connective tissue from hyalinized tumor stroma (91%) (see Figure 8, green line). The use of quantitative processing of OCT data with threshold values of attenuation coefficients makes it possible to objectify the data and increase the diagnostic accuracy of this method, and to the best of our knowledge, we are the first to demonstrate this result. High diagnostic accuracy of the Att(cross) coefficient, in comparison with Att(co) coefficient, allows tumor cells and tumor stroma to be better visualized in the surrounding non-tumorous fibrous connective tissue for achieving clear resection margin during BCS and evaluation of the efficacy of neoadjuvant tumor therapy. Thus, we demonstrated the high diagnostic potential of using Att(co) and Att(cross) coefficients for detecting tumor cell areas and minimizing the risk of recurrence and re-resection. Other researchers have demonstrated the dynamics of the decrease or increase in attenuation coefficient values in different breast tissues, which are arbitrarily based on the authors' clinical experience [30].
Along with the promising results, we would like to highlight several limitations of our study. On the one hand, we need to mention the limitations of the OCT method, such as the low penetration depth of the probing light inside the tissue, which is 1.5 mm. This aspect also includes the small size of selected OCT images and, consequently, the small volume of tissue scanning. In addition, the areas of coagulation and hemorrhage may also lead to changes in the nature of the received OCT signal and the corresponding optical attenuation, which is important in the case of the in vivo application during breast cancer surgery. The richness of the extracted information from OCT examination may be comparable with histology, although the resolution of OCT-scans~10-15 µm is lower than in microscopic histological studies. In addition, co-registration of the histology may be difficult in some cases because of tissue deformation and shrinkage after the formalin fixation and processing. This complicates automated correlation and one-to-one mapping between tissue type and optical attenuation. In addition, in some cases, when high-density tumor cells are present at the tumor border, in the color-coded attenuation coefficient maps, it was difficult to identify the border of adipose tissue because they had similar values. In such cases, it is preferable to use structural log-scale OCT images or elastography OCT-based imaging, as was demonstrated earlier [18,28,29].
In the future, once it becomes possible to acquire a large number of samples, we are planning to apply machine learning and deep neural networks for differentiating breast tissue types and surgical margins assessment using attenuation coefficient values based on volumetric CP OCT data. This will enable building a classifier for automated identification of the various breast tissue type and performing automatic cancer detection by combining the attenuation metrics with additional intensity parameters information on the tissues. Furthermore, studies are needed for a comprehensive evaluation in the intra-operative setting.
To summarize, the CP OCT method with quantitative measuring of the attenuation coefficients is a promising tool for intraoperative human breast tissue differentiation during surgical resection of breast cancer or for performing a targeted histological biopsy for the assessment of the resection margin as well as evaluation of the efficiency of neoadjuvant therapy. Furthermore, we believe that the results reported here represent a baseline in the use of this technique and are a first step towards establishing its use in a clinical setting. Current efforts are underway for the construction and implementation of a handheld CP OCT probe for in vivo imaging during BCS and real-time resection margins assessment.
Conclusions
In this study, we have shown qualitatively and quantitatively that color-coded attenuation coefficient maps based on CP OCT imaging may enable better visualization of detailed features in different subtypes of breast cancer. We showed that mapping of the attenuation coefficient in co-and cross-polarization channels using the depth-resolved approach presented here shows great promise for automated classification of different tissue types in human breast tissue (adipose tissue, non-tumorous fibrous connective tissue, hyalinized tumor stroma, high-density tumor cells and low-density tumor cells in fibrotic tumor stroma). Att(cross) coefficient is more suitable for differentiation of tumor cell areas of different densities from non-tumorous fibrous tissue: diagnostic accuracy was 91-99%, sensitivity-96-98%, and specificity-87-99%. Att(co) coefficient is more suitable for the differentiation of tumor cell areas from adipose tissue: diagnostic accuracy was 83%, sensitivity-84%, and specificity-84%. We believe this methodology provides an important improvement to conventional OCT imaging of different breast tissue type morphology and is a step towards in vivo assessment of the surgical margin of breast cancer resection using attenuation coefficients based on volumetric CP OCT data. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
|
2023-05-11T15:04:57.271Z
|
2023-05-01T00:00:00.000
|
{
"year": 2023,
"sha1": "da59d49876087a58a258ed0f489690976fb02bbe",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "018ac48b7f113b60807f03e8ac6ee70c70a65f03",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
54779897
|
pes2o/s2orc
|
v3-fos-license
|
OXIDATION AND CHARACTERIZATION OF ACTIVE CARBON AG-5
The surface chemistry of the commercial active carbon AG-5 has been modifi ed by oxidation with concentrated nitric acid. The structural changes caused by oxidative treatment were estimated on the basis of nitrogen adsorption-desorption isotherms and thermal analysis. Boehm titration method and infrared spectral analysis have been used in order to evaluate surface chemistry characteristics of active carbon samples. After oxidation process the amount of total acidic groups on oxidized active carbon surface (AG-5ox) increases by about 6 times in comparison with unmodifi ed sample (AG-5). The concentration of the acidic groups on the oxidized active carbon surface (AG-5ox) was in the following order: strong acidic >>> weak acidic > phenolic.
Introduction
For the treatments of drinking and waste waters, packed beds of granular activated carbon are frequently used.The type of contaminant which will be adsorbed and the adsorption/removal effi ciency of the active carbons are strongly dependent on their both porous structure and surface chemistry [1][2][3][4].Therefore, the surface chemical modifi cation of active carbons is of great interest in order to produce materials for specifi c applications.This modifi cation has been mainly carried out by oxidative methods, producing a more hydrophilic structure with a large number of oxygencontaining groups [5].Various reagents have been used as oxidants: nitric acid, hydrogen peroxide, sodium hypochlorite, permanganate, transition metals etc. [5][6][7][8][9][10].A review of the literature concerning oxidation of the active carbons shows, that using nitric acid as oxidizing agent provides strong acidic functional groups on the carbon surface [6,[11][12][13].
In order to minimize operational problems during the water treatment processes, attention must be given to the size of the granular active carbon particles.The large particles have a small external surface area and long internal diffusion path lengths [14].This reduces the mass transfer velocity, resulting in long adsorption/removal processes of pollutants from waters.
The aim of this work was to modify the surface chemistry of the commercial granular active carbon AG-5, using the concentrated nitric acid as oxidizing agent.In order to characterize unmodifi ed and oxidized active carbon samples, the standard test methods for evaluation of physical properties (particle size distribution and bulk density), physical-chemical characteristics (elemental analysis, thermal analysis, nitrogen adsorption measurements) and surface chemistry characteristics (pH of surface, Boehm titration method and IR spectral analysis) have been used.
Materials
In this study commercially available granular activated carbon AG-5 (GOST 20777-75) has been used.Activated carbon AG-5 is obtained from pit coal by steam activation representing granules of cylindrical shape [15].All the chemical reagents used in this study were of analytical grade.
Sample oxidation method
The oxidation was carried out as follows: the solution of concentrated nitric acid (63%) was added to 400g of granular active carbon AG-5 (at a ratio of solid:liquid of 1:3), which was placed in the glass fl ask on the water bath.The fl ask was connected with refl ux condenser ended with absorption bulb fi lled with the NaOH solution.The carbonnitric acid mixture was kept at the temperature of 95 ºC for 8 h.The released nitrogen oxides were absorbed in the absorption bulb.After fi nishing the oxidation process the mixture was cooled and decanted.The humic acids formed during oxidation processes have been removed with 1.0 N solution of KOH.After removal of humic acids the oxidized active carbon sample has been treated with 1.0 N solution of HCl in order to neutralize residual KOH and to obtain the H + form of oxidized active carbon.
Afterwards, the oxidized sample was extensively washed with distilled water until chlorine ions have not been detected in the washing water, dried at 110±5ºC and labelled as AG-5ox.
Characterization methods
Prior characterization measurements the active carbon samples were dried at 110±5ºC for 3 h.
Physical characteristics
Particle size is an important property infl uencing the fl ow characteristics, adsorption kinetics and catalytic behaviour of granular activated carbon layers.The granular active carbon samples were separated according to particle size using a set of standard sieves with decreasing size opening (in mm).The results are given in weight percent for those particles between different sieves expressed as particle size [16].
The bulk density is defi ned as the mass of a unit volume of the sample in air, including both the pore system and the voids between the particles.The bulk density (D b ) is given in g/cm 3 and determined by Eq.(1) [16]: where, m-mass of the dry sample in g; V-volume of the sample, measured under test condition in cm 3 .
Physical-chemical characteristics
Elemental analysis (C, H, N) was carried out by the Elemental Analysis group of the Institute of Chemistry of the Academy of Sciences of Moldova.
The content of metals was determined by atomic absorption spectroscopy (AAS-1N).
The surface area, pore volumes and pore size distributions of the carbons were performed on a surface area analyzer (Autosorb 1-MP) through N 2 adsorption-desorption at 77 K after outgassing the samples at 250 ºC to a residual vacuum 10 -5 Pa [17].Surface area (S BET ) was measured by the BET (Brunauer-Emmet-Teller equation) method.Pore size distribution was determined by the Non-Local Density Functional Theory (NLDFT) method assuming slit pore geometry.The total pore volume (V total ) was deduced from the manufacturer's software [18].Micropore volume (V micro ) was calculated using Dubinin Radushkevich (DR) method.Mesopore volume (V meso ) was determined by the subtraction of the V micro from the V total .
Thermal analysis measurements were performed using a Derivatograph Q-1000 analyzer.The samples were heated from room temperature up to 1000 °C in air at a heating rate of 10 °C/min.
Surface chemistry characteristics
The pH of the active carbon surface has been evaluated by determination of pH value of active carbon suspension (0.4 g of dried sample/20 mL of distilled water) equilibrated for 24 h [19].
The quantifi cation of the surface functional groups have been done by selective neutralization technique of Boehm [20].According to this method, carbon sample (0.5g) was equilibrated with 50 mL of each of three bases 0.05N NaHCO 3 , Na 2 CO 3 , NaOH, sealed and shaken for 72 h, and then 10 mL of each fi ltrate was back-titrated with 0.05N HCl.The surface concentrations of each acidic groups: strong-carboxyl, weak-carboxyl and phenolic; have been determined by differences between the amounts reduced by each of the bases [11].The total basic surface oxides have been determined with a similar titration technique using 0.05 N HCl and back-titrating with 0.05 N NaOH.Titrations have been done using automated titrator TitroLine ® 6000 (SI Analytics, Germany).
The concentrations of surface acidic groups (N A , meq/g) have been calculated by Eq.( 2): where, C 0 and C e are initial and equilibrium concentrations of bases NaHCO 3 , Na 2 CO 3 , NaOH; V -volume of bases added to active carbon sample, in mL; m-mass of active carbon sample, in g.The quantity of the basic functional groups (N B , meq/g) has been calculated by Eq.( 3): where, C 0 and C e are initial and equilibrium concentrations of HCl; V -volume of HCl added to active carbon sample, in mL; m-mass of active carbon sample, in g.The Fourier Transform Infrared spectra (FTIR) of the active carbons were recorded in the range 400 -4000 cm -1 using a Fourier Transform Infrared Spectrometer (PerkinElmer, Spectrum 100, USA).Prior to spectral analysis the samples were dried and the dilutions in KBr have been used (0.15wt%).
Physical characteristics
The particle size distribution evaluated by sieve analysis is presented in Figure 1.For both samples, initial granular active carbon AG-5 and modifi ed by oxidation AG-5ox, the size fraction between 1.3 and 2.0 represents about 65% of total weight.For further experiments the size fraction between 0.8 and 2.0 mm was chosen.
The bilk density has been determined only for the size fraction 0.8÷2.0mm,which was selected for further experiments.After oxidation process the bulk density of active carbon AG-5ox decreases with about 4%, it can be explained by removal of soluble inorganic (ash) with nitric acid (Table 1).
Table 1
The bulk density of size fraction 0.8÷2.0mmfor initial and oxidized active carbon samples.
Physical-chemical characteristics
The elemental analysis, the metals and ash content for studied active carbon samples, are presented in Tables 2 and 3.The initial active carbon AG-5 contains about 16.6% ash, while after oxidation with nitric acid this value decreased to ca. 6% (Table 2).Similar results have been obtained by Jaroniec et al. during oxidation of AG-5 active carbon with concentrated nitric acid [21].After oxidation process most of the metals are removed from the active carbon sample (Table 3).Nitrogen adsorption isotherms and pore size distribution curves for studied samples are shown in Figures 2 and 3.Both samples exhibit microporous structure.The porous structure parameters calculated from nitrogen adsorption isotherms are presented in Table 4.As can be seen, the values of the BET surface area (S BET ), total pore volume (V total ) and micropore volume (V micro ) slightly increase after oxidation treatment with about 14-16% (Table 4).
The impact of nitric acid on porous structure of the active carbon AG-5 is quite different described in the literature.Some authors have been reported an increase of total pore volume after oxidation of AG-5 with nitric acid, while other authors have been presented opposite results, degradation of porous structure and decrease of structural parameters values over 50% [6,12,13,21].In our case, we suggest, the slight increase of structural parameters values after oxidation process is due to dissolution of inorganic species that may block the entrance of micropores.T. Goreacioc / Chem.J. Mold. 2015, 10(1), [76][77][78][79][80][81][82][83] Thermo-gravimetric analysis (TGA) and derivative thermo-gravimetric (DTG) curves of active carbon samples are presented in Figures 4 and 5. TGA curve of the oxidized sample AG-5ox differs from that of unmodifi ed sample AG-5, which indicates that the oxidation with concentrated nitric acid has changed not only the surface properties, but also destroyed the active carbon structure making its decomposition much easier.TGA and DTG curves of the two samples show an initial weight loss around 100 ºC, which is related to thermodesorption of physically adsorbed water [13,21,22].The weight loss at 250 ºC is presented only on the DTG profi le of the oxidized sample (AG-5ox) being attributed by many researchers to the decomposition of carboxylic surface groups [23][24][25].Till around 400 ºC the decomposition of lactonic and phenolic groups takes place [23][24][25] and then the active carbon sample buns out (Figure 5).The unmodifi ed sample, AG-5, is much more thermally stable in comparison with oxidized sample (Figure 4).The DTG profi le of this sample does not present any signifi cant weight loss until 450 ºC.Both samples present ash content at 1000 ºC much high for AG-5, about 16% (Figure 4).T. Goreacioc / Chem.J. Mold. 2015, 10(1), 76-83
Surface chemistry characteristics
The surface properties of the active carbons were evaluated by pH of the carbon surface, Boehm titration method and spectral analysis under IR range.
The results of Boehm's titration method and pH value of the carbon surface are given in Table 5. Signifi cant differences have existed on the amount of acidic and basic functional groups of the active carbons.After oxidation process with nitric acid the surface of AG-5ox sample becomes acidic and pH of active carbon suspension decrease till 3.30.
The amount of total acidic groups (titrated with NaOH) on active carbon surface AG-5ox increases by about 6 times and strong acidic groups -by about 9 times in comparison with initial sample AG-5.At the same time the amount of basic groups decreases by about 4 times (Table 5).The concentration of the acidic groups on the active carbon surface (AG-5ox) was in the following order: strong acidic >>> weak acidic > phenolic.The FTIR spectral analysis is an important tool to identify some characteristic functional groups on the active carbons surface.In Figure 6 the IR spectra for active carbons AG-5 and AG-5ox are compared.For both of the activated carbons, before and after oxidation, there are a number of common bands.The absorptions around 800 cm -1 are assigned to the out of plane bending of the ring C-H bonds [25][26][27].
Bands in the 1000-1200 cm -1 region are diffi cult to assign because there is a superposition of a number of broad overlapping bands.It could be assign to C-O as in phenols/ethers/esters (1200 cm) [9,27].The shoulder at 1164 cm -1 , together with two absorptions of low intensity (1385 and 1399 cm -1 ), confi rm the presence of phenolic groups on the oxidized active carbon surface (AG-5ox, Figure 5 (2)).T. Goreacioc / Chem.J. Mold. 2015, 10(1), 76-83 The bands in the region of 1500-1600 cm -1 have been observed by many authors and have not been interpreted unequivocally.These bands, presented in the spectrum of AG-5ox at 1521, 1562 and 1625 cm -1 , and in the spectrum of AG-5 at 1490 and 1560 cm -1 , can be assigned to aromatic ring stretching (C=C) coupled to highly conjugated carbonyl groups (C=O) [24,26].
The band between 1700 and 1730 cm -1 , that can be assigned to the stretching vibration of C=O bonds, characteristic of carboxylic, ketone and aldehide groups [11,24] is much higher in AG-5ox than in AG-5 spectrum.
In the region 2860-2980 cm -1 , two bands of low intensity are presented, frequently attributed to aliphatic CH bond in the CH, CH 2 and CH 3 groups [26].
Broadband in the region 3300-3600 cm -1 is assigned to (OH stretching) vibrations of OH groups of related alcohols, phenols and carboxylic acids [11,26,27].
Generally, used methods to quantify the surface characteristic groups on active carbon samples show an increase of acidic surface groups on oxidized sample, presented as strong acidic -carboxylic groups, weak acidic -ketones and aldehide groups, and phenolic groups.
Conclusions
Evaluation of physical-chemical and surface chemistry characteristics of unmodifi ed and oxidized active carbon samples indicates that oxidation with concentrated nitric acid changed not only the surface properties but also destroyed the active carbon structure making its decomposition much easier.The unmodifi ed sample, AG-5, is much more thermally stable in comparison with oxidized sample (AG-5ox).
The slight increase of structural parameters (S BET , V total , V micro ) after oxidation treatment with about 14-16% is due to dissolution of inorganic species that may block the entrance of micropores.
After oxidation process with nitric acid, the surface of AG-5ox sample becomes acidic and pH of active carbon suspension decreases till 3.30.The amount of total acidic groups on active carbon surface AG-5ox increases by about 6 times in comparison with initial sample AG-5.The obtained results show an increase of acidic surface groups on oxidized sample, presented as strong acidic -carboxylic groups, weak acidic -ketones and aldehide groups, and phenolic groups.
Figure 1 .
Figure 1.Weight percent of different size fractions of the initial granular active carbon AG-5 and modifi ed by oxidation AG-5ox.
Table 4 Parameters of porous structure determined from nitrogen adsorption isotherms.
micro -average micropore radius; E micro -adsorption energy in micropores.
|
2018-12-05T17:24:46.777Z
|
2015-06-01T00:00:00.000
|
{
"year": 2015,
"sha1": "86dd6f35a8a21f724e58616527ade3a1da136edd",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.19261/cjm.2015.10(1).11",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "86dd6f35a8a21f724e58616527ade3a1da136edd",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
88520005
|
pes2o/s2orc
|
v3-fos-license
|
The P\'olya sum kernel and Bayes estimation
We consider a particular Cox process from a Bayesian viewpoint and show that the Bayes estimator of the intensity measure is the so-called P\'olya sum kernel, which occurred recently in the context of the construction of the so-called Papangelou processes. More precisely, if the prior, the directing measure of the Cox process, is a Poisson-Gamma random measure, then the posterior is again a Poisson-Gamma random measure and the Bayes estimator of the intensity is the P\'olya sum kernel. Moreover, we extend this result to doubly stochastic Poisson-Gamma priors and give conditions under which one can identify the Bayes estimator for the intensity.
Introduction
Given some statistical model of point processes, the interest lies in deriving statements about unknown parameters from observations, which are point configurations. For Poisson processes on an Euclidean space it is possible to determine the intensity measure among the stationary ones from a single observation [11]. In a Bayesian context, one starts with a probability distribution on the set of parameters, which may be interpreted as prior information or a degree of belief on the set of models, and is interested in firstly the law of the parameter given some observations, the posterior law, and secondly in the estimator for the parameter.
Staying in the context of Poisson processes, any choice of a prior distribution on the set of stationary intensity measures leads to degenerate posteriors in the sense that they are concentrated on a single intensity measure. For a larger parameter set, when such perfect estimates are not available, one is interested in finding suitable sets of priors, as discussed e.g. in [1]: Desirable properties are analytical tractability in the sense that it should be possible to determine the posterior law given some observation analytically, and that together with the prior, the posterior should belong to the same class of distributions. In this case the set of priors is said to be closed under sampling or conjugate.
The questions considered in [11] are strongly connected to Bayesian statistics, they were discussed in an abstract form in [4,3]: In terms of point processes one starts with a consistent family of local specifications, i.e. local laws, and aims at constructing firstly all stochastic fields which are specified by this family, and secondly integral representations of these point processes as mixtures of certain extremal elements. Once the integral representation is obtained, interpreting the mixing measure as a prior, a single observation is sufficient to determine the particular extremal element. this property was called ergodic decomposability in [5] and is equivalent to the posterior being a Dirac measure for almost every observation.
Returning to the conjugated classes of priors, one important example is the class of Gamma distributions as priors for the family of Poisson distributions: If the prior is Γ(a, r) distributed, then the posterior given the observation k is Γ(a + 1, r + k) distributed. Hence the family of Gamma distributions is closed under sampling for Poisson models. Of course, this does not touch the question whether the family of Gamma distributions is a natural choice.
We generalize this result to general random measures and point processes: Starting with a Poisson process and its intensity measure being a Gamma-Poisson random measure, the posterior is again a Gamma-Poisson random measure. More precisely, if the prior is the Gamma-Poisson random measure with parameters a ∈ R + and ρ a σ-finite measure, then the posterior given the observation µ, a point configuration with possibly multiple points, has parameters a + 1 and ρ + µ. In particular the class of Gamma-Poisson random measures is such a big class that there is no chance to get more precise information about the directing intensity measure from a single observation. Moreover we show that the Bayes estimator for the underlying intensity is z(ρ+µ), where z depends on a in a unique way.
A similar generalization from Dirichlet distributions for multinomial modells are Dirichlet processes for a point processes realizing a fixed number of points [15].
In fact, the subsequently presented ideas reverse the original considerations of Zessin [16]: Is there a point process which places its points according to a Pólya urn mechanism instead of simple urn mechanism like the Poisson process. Zessin constructed this so-called Pólya sum process as is the unique point process with Papangelou kernel z(ρ + µ), that is that the Pólya sum process is the unique solution of a particular integration-by-parts formula. Intuitively, the Papangelou kernel can be understood as a conditional intensity at which points are placed given the observation µ. In [13] this Pólya sum process was identified as a Cox process, the underlying random intensity being a Gamma-Poisson random measure. This connection is subject of section 2.
In a further step we allow the parameters being random and then consider Cox processes directed by doubly stochastic Gamma-Poisson random measures. Equivalently, we consider doubly stochastic Pólya sum processes. We put the results in [14] into the Bayesian context: Point processes with local laws given by conditioned Pólya sum processes were identified as certain mixed Pólya sum processes, hence again one has an integral representation applicable for Bayesian analysis: Firstly, for any prior which is concentrated on a certain parameter set, the posterior will be concentrated on a single point and therefore determines the parameters uniquely, hence Bayes kernel and estimator of the intensity can be identified explicitly. Secondly, using the Cox representation of the Pólya sum process, we obtain the results of the first part with a doubly stochastic Poisson-Gamma process as directing measure, a result in the spirit of [1] for mixed Dirichlet processes.
Implications of these results are that for particular cases we obtain that the doubly stochastic Pólya sum process is again a Papangelou process, and moreover that each point process satisfying the same integration-by-parts formula must be a doubly stochastic Pólya sum process. These ideas are presented in section 3, the proofs are contained in section 4.
Some random measures, point processes and results
Let X be a polish space and denote by B = B(X) its Borel sets as well as by B 0 = B 0 (X) the ring of bounded Borel sets of X. Furthermore let M(X) and M ·· (X) be the space of locally finite measures and locally finite point measures on X, respectively, each of which is vaguely polish, the σ-algebras generated by the evaluation mappings ζ B (µ) = µ(B), B ∈ B 0 . M ·· (X) is the set of observable point configurations, i.e. locally finite subsets of X with possibly multiple points, and M(X) is the set of 'mass distributions' with finite mass in each bounded set. ζ B (µ) counts the number of points of µ ∈ M ·· (X) inside B considering possible multiplicities.
A probability measure P on M(X) is a random measure, and if P is concentrated on M ·· (X), a point process. Finally denote by F (X) be the set of bounded, non-negative and measurable functions on X and F b (X) ⊂ F (X) the subset of those functions in F (X) with bounded support. Following the notation ζ B (µ) = µ(B) denote by ζ f (µ) = µ(f ) = f dµ for f ∈ F (X) and a measure µ ∈ M(X) the evaluaton mapping of f at µ.
Apart from the finite dimensional distributions, to characterize a random measure or point process P uniquely, one may either use the Laplace transform or the the Campbell measure The Campbell measure admits under suitable assumptions two disintegrations, one with respect to its intensity measure yielding Palm kernels, and one with respect to P itself yielding the Papangelou kernels. The famous example is the Poisson process P ρ with intensity measure ρ ∈ M(X), for which Moreover, there is exactly one point process which satisfies equation (2.1), known as Mecke's characterization of the Poisson process. The kernel η(µ, dx) = ρ(dx) does not depend on the configuration µ, meaning that the point process places points independent of all other points with the same distribution. In analogy to inductive rules defining a law given some observation, one should read equation (2.1) as given any observed point configuration µ, the intensity for another point is given by the measure ρ; and moreover, if some point process is given by this rule, it must be the Poisson process. In a similar way the following modification has to be understood with additional rewards to observed points. Note that neither every choice of a rule specifies a point process, nor in case of existence this point process must be unique.
Recently, Zessin considered in [16] the Papangelou kernel for a measure ρ ∈ M(X) and some z ∈ (0, 1), which replaces the urn mechanism with replacement of the Poisson process by a Pólya urn mechanism. Instead of a point being placed according to the intensity ρ independent of a present configuration, here points in the configuration µ get a reward for the intensity of the following point. Zessin answered the question of the existence and uniqueness of a point process satisfying the functional equation Again, there is exactly one solution, the Pólya sum process S z,ρ . This point process has like the Poisson process independent increments and is infinitely divisible. In contrast to the Poisson process, S z,ρ is not a simple point process even if ρ is a diffuse mesaure. Moreover, S z,ρ has a representation as a Cox process [13] with its underlying random intensity measure being a Poisson-Gamma random measure, see e.g. [10] for the latter process. More precisely, if D z,ρ is the infinitely divisible random measure with its Levy measure χ ρ ⊗ τ z being the image of the product of ρ and under the mapping χ : X × R + → M(X), (x, r) → rδ x , then the Cox representation of the Pólya sum process is The parameter a in the introduction and the parameter z are linked via the relation a = 1−z z . We interpret D z,ρ as a prior for the measurable family of Poisson processes. Following [5], let the Bayes kernel B from M ·· (X) to M(X) be or equivalently be defined via Theorem 2.1. Let z ∈ (0, 1) and ρ ∈ M(X). Then the posteriori measure of the Pólya sum process S z,ρ is a Poisson-Gamma random measure, more precisely B(µ, · ) = D z 1+z ,ρ+µ .
Note that if we write z ′ = z 1+z , then there corresponds some a ′ to z ′ and one has a ′ = a + 1. Thus the transformation of the parameters is in the same spirit as in the random variable case. Remark that B(µ, · ) is the superposition of two random measures Hence the Bayes estimator is exactly the Pólya sum kernel. The sum exactly reflects the representation of B(µ, · ) as the convolution of two random measures. In such a case the intensity measure necessarily is the sum of the two intensity measures. Thus if one ignored (2.3) and denotes the point process on the rhs. by P, then for any non-negative, measurable function h by applying firstly Mecke's characterization of the Poisson process and (2.4). Since the last equation has the unique solution S z,ρ , P must be the Pólya sum process.
1. Instead of a fixed z ∈ (0, 1) one may start with a measurable function z : X → (0, 1). As long as z is bounded away from 1, calculations do not need further justifications.
2. If X is countable, ρ is the counting measure on X and z, with the just mentioned restrictions, is a probability measure on X, then we are in the case of the Bose gas as in [2]. Given an observation of particles, the Bayes estimator of the intensity of particles occurring in certain states is the special Pólya sum kernel.
Doubly stochastic Pólya sum processes
As indicated in the introduction, the construction of H-sufficient statistics in [3] for a certain set of probability measures C fits into the context of Bayesian statistics and has implications in connection with [13]. At first we briefly describe the considered problem and the results obtained. Denote by C the set of all point processes P satisfying for each bounded set B ∈ B. E B is a σ-algebra containing few information about the events inside the bounded set B and full information about the events outside B, such that if B ′ contains B, then E B ′ is contained in E B . We will be more precise in a moment and only remark that ρ is assumed to be a diffuse and infinite measure. Such a P is called stochastic field with local characteristic given by the rhs. of (3.1). In [13] there was constructed a stochastic kernel Q from M ·· (X) to M ·· (X) satisfying P = Q(µ, · )P(dµ) (3.2) for any P ∈ C and moreover that Q µ, µ ′ : Q(µ ′ , · ) = Q(µ, · ) holds. In fact, Q is the common conditional probability of P conditioned on the asymptotic σ-algebra E ∞ = B∈B 0 E B for every P ∈ C. In the terminology of [3], Q is an H-sufficient statistic for C, in the terminology of [5], Q is a decomposing kernel and C an ergodically decomposable simplex. From [3] it follows that there exists a subset ∆ ⊆ C of extreme points and a unique probability measure V P on ∆ such that P = ∆ P V P (dP ). So far this is the essence of [3] and [5], the in [13] discussed choices of (E B ) B∈B 0 lead to the extremal points: 1. if E B admits the counts the points inside B with multiplicity, then ∆ = S z,ρ : z ∈ [0, 1) , 2. if E B admits the counts the points inside B without multiplicity, then 3. if E B admits the counts the points inside B with and without multiplicity, then ∆ = S z,wρ : z ∈ (0, 1), w ∈ (0, +∞) or z = w = 0 .
For any of these cases, Q is identified as where Z and W are uniquely determined from the densities of points with and without multiplicity (in the first two cases one of them is fixed). Note that since we assumed ρ to be an infinite measure, these densities are almost surely constant for each extremal point. We remain in the setup of the last case, i.e. start with any priori distribution V on (0, 1) × M(X) and consider the doubly stochastic Pólya sum process directed by V . Remark 3.1. For any priori distribution V on (0, 1) × M(X), the doubly stochastic Pólya sum process S V = S z,ρ V (dz, dρ) is a Cox process P V directed by the doubly stochastic Poisson-Gamma random measure D V = D z,ρ V (dz, dρ).
2. The Bayes kernel B V of the Cox process P V is given by hence the Bayes estimator for the random intensity measure is Thus under the assumptions of Theorem 3.2 we have shown that the Cox process P V is a Papangelou process: for some infinite and diffuse measure ρ 0 ∈ M(X). Then S V is a solution of the partial integration formula and Z and W are E ∞ -measurable random variables. Moreover, any solution of (3.4) with E ∞ -measurable Z and W and infinite and diffuse measure ρ 0 ∈ M(X) is a mixed Pólya sum process.
Proofs
We identify B(µ, · ) by computing the Laplace transforms of both sides of equation (2.4). This shows Theorem 2.1 Before recall, e.g. from [10], that the Laplace transform of the Poisson Gamma random measure with Levy measure χ ρ ⊗ τ z is which by the relation log 1 + y Secondly note that the Laplace transform of the Pólya sum process is Lemma 4.1. Let z ∈ (0, 1) and ρ ∈ M(X). Then for all g, h ∈ F (X), Proof. Note that the inner integral on the lhs. is the Laplace transform of the Poisson process with intensity measure κ, hence Thus what remains ist is the Laplace transform of D z,ρ at 1 − e −g +h. But this is by equation (4.1) Next compute the rhs. of equation (2.4) Lemma 4.2. Let z ∈ (0, 1) and ρ ∈ M(X). Then with z ′ = z 1+z holds for all g, h ∈ F (X) Proof. We check the Ansatz B(µ, · ) = D z ′ ,ρ+µ . By equation (4.1), Therefore we get two exponentials, where for the integration with respect to S z,ρ only the integral with respect to µ matters. But this just is the Laplace transform of S z,ρ evaluated at g + log [1 + zh], hence Finally to get Corollary 2.3, note that for any h ∈ F b (X), the intensity ν 1 The first statement of Theorem 3.2 is a reformulation of the Theorem in [14]. The second part follows from the observation that for a non-negative, measurable function g by the application of Theorem 2.1 and the ergodic decomposability g(µ, κ, z, wρ)P κ (dµ)D z,wρ (dκ)V (dz, dw) = g(µ, κ, z, wρ)D z ′ ,wρ+µ (dκ)S z,wρ (dµ)V (dz, dw) = g(µ, κ, z, wρ)D Z ′ (µ),W (µ)ρ+µ (dκ)S z,wρ (dµ)V (dz, dw).
Dropping the dependence of g on the last two arguments, we get the second part of Theorem 3.2.
Proof of Corollary 3.3. Any mixed Pólya sum process S V solves the partial integration formula (3.4). Now assume that P is any solution of (3.4), then the joint Laplace transform of Z, W and P is L Z,W,P (u, v, tf ) = P e −uZ−vW −tζ f = P e −uZ−vW P e −tζ f |E ∞ by conditioning on E ∞ . Denote by P ∞ the conditioned point process P. Differentiation wrt t yields the Campbell measure of P ∞ , which allows to identify this conditional measure, thus on the one hand Thus P ∞ satisfies P-a.s. the functional equation (2.2) and therefore is a Pólya sum process with the parameters given by Z and W ρ. But then immediatly P is a mixture of these processes.
|
2012-05-10T13:04:41.000Z
|
2012-02-21T00:00:00.000
|
{
"year": 2012,
"sha1": "e442c040b2378e42b6e7fa8d939ab8dd68967edc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e442c040b2378e42b6e7fa8d939ab8dd68967edc",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
231614675
|
pes2o/s2orc
|
v3-fos-license
|
eDNA in a bottleneck: obstacles to fish metabarcoding studies in megadiverse freshwater systems
The current capacity of environmental DNA (eDNA) to provide accurate insights into the biodiversity of megadiverse regions (e.g., the Neotropics) requires further evaluation to ensure its reliability for long-term monitoring. In this study, we first evaluated the taxonomic resolution capabilities of a short fragment from the 12S rRNA gene widely used in fish eDNA metabarcoding studies, and then compared eDNA metabarcoding data from water samples with traditional sampling using nets. For the taxonomic discriminatory power analysis, we used a specifically curated reference dataset consisting of 373 sequences from 264 neotropical fish species (including 47 newly generated sequences) to perform a genetic distance-based analysis of the amplicons targeted by the MiFish primer set. We obtained an optimum delimitation threshold value of 0.5% due to lowest cumulative errors. The barcoding gap analysis revealed only a 50.38% success rate in species recovery (133/264), highlighting a poor taxonomic resolution from the targeted amplicon. To evaluate the empirical performance of this amplicon for biomonitoring, we assessed fish biodiversity using eDNA metabarcoding from water samples collected from the Amazon (Adolpho Ducke Forest Reserve and two additional locations outside the Reserve). From a total of 84 identified Molecular Operational Taxonomic Units (MOTUs), only four could be assigned to species level using a fixed threshold. Measures of α-diversity analyses within the Reserve showed similar patterns in each site between the number of MOTUs (eDNA dataset) and species (netting data) found. However, β-diversity revealed contrasting patterns between the methods. We therefore suggest that a new approach is needed, underpinned by sound taxonomic knowledge, and a more thorough evaluation of better molecular identification procedures such as multi-marker metabarcoding approaches and tailor-made (i.e., order-specific) taxonomic delimitation thresholds.
INTRODUCTION
The need for advancing our understanding of the world's biodiversity increases in parallel with the 53 acceleration of anthropogenic impacts on the planet's ecosystems. To implement strategies to 54 minimise the effects of human impacts, understanding compositions of species assemblages within 55 ecosystems is paramount (Morris, 2010). This task is particularly difficult when investigating 56 megadiverse regions of the world such as the Neotropics, which harbour an extremely large 57 diversity of living organisms. The Amazon basin, for example, is estimated to hold the highest 58 diversity of freshwater fish found anywhere on the planet (Albert & Reis, 2011). To date, it has 59 been documented that 2,406 species belonging to 514 genera and 56 families of fish inhabit the 60 tributaries of the Amazon River, with many more not yet described (Jézequel et al., 2020). 61 Undoubtedly, this region serves as a biodiversity hotspot, with Amazonian fishes representing 62 ~15% of all freshwater fish species described worldwide (Jézequel et al., 2020; Leroy et al., 2019). 63 Due to the increase in anthropogenic impacts in Neotropical rivers (e.g., pollution, siltation, 64 mining and damming), there is a growing danger that this rich biodiversity will be lost before it 65 can be fully described ( In this study, we aim to assess the use of eDNA metabarcoding as a tool to estimate fish 128 biodiversity in a megadiverse Neotropical system, by directly comparing it to data obtained by 129 netting surveys carried out at the same time and sites. We adopted a step by step, integrative 130 approach using newly generated and existing sequence data from a wide range of neotropical fish 131 species, eDNA from water samples and netting data. We first performed a genetic distance-based 132 analysis to investigate the optimum delimitation values based on the 12S MiFish primers, followed 133 by an assessment of the performance of the delimitation values through a barcoding gap analysis. 134 We then analysed water samples collected from the Brazilian Amazon and we compared measures 135 of α-diversity (species richness) and β-diversity (change in species composition among locations) 136 generated from eDNA and traditional netting data collected in the same sampling sites. 139 In order to improve taxonomic assignment, 47 fish species (Table S1) is set to 0.01 (1%), however, this can be changed using the threshVal function, and here were 156 included values ranging from 0.001 to 0.03 (0.1% to 3%). Using the thresholds generated from the 157 threshVal function, a genetic-based delimitation analysis was performed using K2P genetic 158 distances (Kimura, 1980). The threshold estimates were applied as the best delimitation values to for each sequence in the dataset: "correct", "incorrect", "ambiguous", and "no ID". The "correct" 164 results suggest that all matches within the threshold value of the query are the same species and 165 "no ID" shows that no matches were found to any individual within the threshold.
166
SPIDER was also used as a means to investigate the presence/absence of the "barcoding gap" by 167 identifying the furthest intraspecific distance among the same species, using the maxInDist() Table S4). The eDNA extraction, amplification of the 12S 208 rRNA fragment using the MiFish primer set, and library preparation were conducted following the (Table S3). To reach this target, all non-fish reads were removed from the dataset, 234 including non-target species (e.g., human and domestic species reads) and MOTUs that were likely (Fig. 2B). The overlap between intraspecific and interspecific genetic 300 distances was particularly evident for species still lacking a full taxonomic description (e.g.,
301
Trichomycterus spp., Hypostomus spp. and Harttia spp. MOTUs; only four were identified to species level with the fixed general threshold, whereas 41 314 were assigned solely at the family level and 37 could only be attributed to the order level (Fig. 4). 315 From the MOTUs identified to species level, one is known to occur in the Ducke Reserve (Hoplias (Fig. 6B). Only sampling points B3 and B4 were slightly clustered, 346 indicating that they share a more similar composition to each other compared to B1 and B2. produce accurate data, they cannot be applied to the required spatial scale in Neotropical basins. 447 Furthermore, their success and reliability depend on the ability/expertise of the surveyor (labour 448 intensive), the accessibility of the target area (challenging environments), and selectivity based on 449 the deployed techniques' ability to capture target species. Yet, when optimal conditions are met 450 (complete reference database, and appropriate markers to name a few), eDNA will outperform 451 traditional sampling, for its ability to detect species that are missed by the fishing gear deployed.
452
In this study, the order Synbranchiformes, known to occur in the sampled area (Zuanon et al., Reserve.
475
Although eDNA metabarcoding potentially offers a means to assess biodiversity on a 476 larger, more time-efficient scale, ensuring accuracy in results is critical. Evidently, eDNA 477 metabarcoding as a biomonitoring tool in the Neotropical region is in its infancy, highlighted by 478 the lack of appropriate reference databases. Herein, we argue that the commonly adopted threshold 479 of >97% to assign MOTUs to species level is not optimal in megadiverse, understudied regions 480 due to the likelihood of false positive or negative assignments. While this study shows that limited,
|
2021-01-16T14:22:21.339Z
|
2021-01-07T00:00:00.000
|
{
"year": 2021,
"sha1": "38819c05c9542fd795c6fca64af0bba1d6584f9e",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/edn3.191",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "38819c05c9542fd795c6fca64af0bba1d6584f9e",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
}
|
117035812
|
pes2o/s2orc
|
v3-fos-license
|
Triangle Order $\leq_{\bigtriangleup}$ in Singular Categories
We prove that the triangle order $\leq_{\bigtriangleup}$ in the singular category $D_{sg}(A)$ defines a partial order on the set of isomorphism classes of objects in $D_{sg}(A)$ for a finite-dimensional $k$-algebra $A$.
Introduction
Degeneration order of modules is introduced from geometric methods of representation theory of finite dimensional algebras. More precisely, let A be a finite dimensional associative k-algebra over the algebraically closed field k. Let d be an positive integer. A d-dimensional (left) A-module M is the vector space k d together with an action by A from the left. We denote by mod d (A) the set of d dimensional A-modules. Note that mod d (A) is an affine variety (For more ample details we refer to Section 2). The general linear group GL d (k) acts on mod d (A) by conjugation. The orbits under this action are the isomorphism classes of d-dimensional A-modules. We say that an A-module N is called a degeneration of M (denote by M ≤ deg N) if N belongs to the Zariski closure of the GL d (k)-orbit of M in mod d (A). Clearly, this degeneration defines a partial order on the set of isomorphism classes of A-modules. Riedtmann and Zwara gave an algebraic description of degeneration order in [Rie] and [Zwa2]. They showed that M ≤ deg N if and only if there is an A-module Z and an exact sequence 0 → N → M ⊕ Z → Z → 0 and equivalently there exist an A-module Z ′ and an exact sequence Later in [Yosh2] Yoshino gave a scheme-theoretical definition of degenerations, so that it can be considered for modules of a Noetherian algebra. In this paper, we consider the algebraic description of degeneration order as our definition of degeneration order of A-mod. That is, let A be a finite dimensional k-algebra A over any field k (not necessary algebraically closed), we define an A-module N is a degeneration of an A-module M (still denoted by ≤ deg ) if there exists an A-module Z and an exact sequence Then from Theorem 2.2 in [Zwa1] it follows that this degeneration ≤ deg is a partial order on the set of isomorphism classes of A-modules.
Degeneration theory for triangulated categories and derived categories (cf. [JSZ1], [JSZ2], [SaZi]) has been studied. In a triangulated category we say for two objects X and Y that X ≤ ∆ Y if there is an object Z and a distinguished triangle In [JSZ1], Jensen, Su and Zimmermann showed that the triangle relation ≤ △ in bounded derived category D b (A) is a partial order. More generally, in [JSZ2], the authors showed that, under some finiteness assumptions on the triangulated category including the condition that the morphism spaces between objects are finite dimensional, ≤ △ is always a partial order.
The singular category was introduced by Ragnar-Olaf Buchweitz in an unpublished manuscript [Buch], he called it stable derived category, and Dmitri Orlov [Orl] rediscovered this notion independently in algebraic geometry and mathematical physics, under the name of singular category. We remark that in general the singular category D sg (A) of a finite-dimensional k-algebra A is not Hom-finite and is in general not a Krull-Schmidt category (cf. [Chen], [ZhZi]). Therefore we cannot use [JSZ2] to argue that the triangle relation ≤ △ in D sg (A) for any finite dimensional k-algebra A is a partial order. But in this paper, we will prove, in a different way, that ≤ △ in D sg (A) is really a partial order for any finite dimensional k-algebra A.
This paper is organised as follows, in Section 2, we define a stable degeneration ≤ st for the stable module category A-mod and prove that it is a partial order. In Section 3, we first recall some notions about the stabilization S(A-mod) of the left triangulated category A-mod and prove that in S(A-mod), the triangle relation ≤ △ coincides with the quasi-stable degeneration ≤ qst induced from the stable degeneration order ≤ st in A-mod. We prove that the quasi-stable degeneration ≤ qst in S(A-mod) is a partial order. Last we show our main result (Theorem 3.7), that is, the triangle order ≤ △ in D sg (A) is a partial order. We note that Theorem 3.7 and its proof extend without any changes to finitely generated modules over artinian algebras.
Stable degeneration order
First, let us recall the geometrical definition of degeneration order. Let A be a finite dimensional associative k-algebra over an algebraically closed field k. Let n be the dimension of A and d be an positive integer. Let λ 1 , λ 2 , · · · , λ n be a k-basis of A, λ i λ j = l a l ij λ l for i, j = 1, · · · , n with the structure constants a l ij ∈ k. Then a module M corresponds to a unique n-tuple of matrices m = (m 1 , · · · , m n ) ∈ (Mat d×d (k)) n such that m i m j = l a l ij m l for i, j = 1, · · · , n. For each 1 ≤ l ≤ n let X l denote the indeterminate matrix (X l µν ) µ,ν=1,··· ,d . Then there is a one-to-one correspondence between mod d (A) and the zero set of the ideal I ⊂ k[x l µν ], (µ, ν = 1, · · · , d; l = 1, · · · , n), where I is generated by the equations of the matrices X i X j − l a l ij X l for i, j = 1, · · · , n. The general linear group GL d (k) acts on mod d (A) by conjugation. The orbits under this action are the isomorphisms classes of d-dimensional A-modules. We say that an A-module N is called a degeneration of M (denote by M ≤ deg N) if N belongs to the Zariski closure of the GL d (k)-orbit of M in mod d (A). Clearly, this degeneration defines a partial order on the set of isomorphism classes of A-mod. From the work of Riedtmann and Zwara in [Rie] and [Zwa2], we have an algebraic description of degeneration order. In this paper, we use this algebraic description of degeneration order as our definition of degeneration order.
Definition 2.1. Let k be a field. Let A be a finite dimensional k-algebra and X, Y ∈ A-mod, then we say that X ≤ deg Y if there exist Z ∈ A-mod and an exact sequence on A-mod, Remark 2.2. From Theorem 2.3 in [Zwa1], X ≤ deg Y is equivalent to that there exists a short exact sequence for some A-module Z ′ . We remark that the degeneration order ≤ deg in A-mod defines a partial order on the set of isomorphism classes of A-modules (cf. [Yosh2], [Zwa1]).
Definition 2.3. Let A be a finite-dimensional k-algebra and X, Y ∈ A-mod. We say that X ≤ st Y if and only if there exist two projective A-modules P and Q such that X ⊕ P ≤ deg Y ⊕ Q. Clearly, we can induce a relation (called stable degeneration, still denote by ≤ st ) on isomorphism classes of objects in A-mod.
Remark 2.4. Note that P = 0 in A-mod if and only if P is a projective module, so that X ≤ st Y is well-defined for any two objects X and Y in A-mod.
Lemma 2.5. Let A be a finite dimensional k-algebra. Then ≤ st defines a partial order on the set of isomorphism classes in A-mod.
Proof. First, it's clear that the reflexivity is inherited from the degeneration order. We need to check anti-symmetry and transitivity. For anti-symmetry, let X ≤ st Y and Y ≤ st X. Therefore there exist projective A-modules P, Q, P ′ , Q ′ such that From the transitivity of ≤ deg , we have Therefore we obtain (cf. Proposition 4.4 [Yosh1]) for any simple A-module S. Note that there exists a canonical bijection between the isomorphism classes of projective indecomposable modules and the isomorphism classes of simple modules for a finite dimensional k-algebra A (cf. e.g. [Lein]), hence it follows from the inequality (1) From the anti-symmetry of the degeneration order ≤ deg , we know that Hence we have X ∼ = Y in A-mod. This shows that ≤ st is anti-symmetric on the set of isomorphism classes of A − mod. In order to prove transitivity of ≤ st , let Then there exist projective A-modules P, Q, R and S such that From the transitivity of ≤ deg it follows that Hence X ≤ st Z in A-mod. Therefore the transitivity of ≤ st holds. Next we define the triangle relation ≤ △ for a (left) triangulated category. For the concept of left triangulated categories we refer to [BeMa] and [KeVo].
Definition 2.6 ( [Yosh2], [JSZ2]). Let C be a (left) triangulated category and X, Y ∈ C, then we say that X ≤ △ Y if there exist Z ∈ C and an exact triangle in C, Remark 2.7. As well-known, A-mod has a left triangulated structure with the syzygy functor Ω A as the translation functor (cf. [BeMa]). Hence, we can consider the triangle relation ≤ △ in A-mod. Next we will prove that ≤ △ coincides with ≤ st in A-mod.
Lemma 2.8. Let A be a finite dimensional k-algebra and let X, Y ∈ A-mod.
Then there exist projective A-modules P and Q such that X ⊕ P ≤ deg Y ⊕ Q. By Definition 2.1, there is an exact sequence in A-mod, This induces the following exact triangle in A-mod, Conversely, assume that X ≤ △ Y . Then there exist an A-module Z and an exact triangle, By the construction of the left triangulated structure in A-mod, we know that there exist projective A-modules P, Q and R and an exact sequence in A-mod, Consider the following commutative diagram, Since R is projective, the bottom row sequence splits, hence ker α ∼ = Y ⊕ P ⊕ R and we obtain the exact sequence, Proof. From Lemma 2.8 it follows that X ≤ st Y if and only if X ≤ △ Y in A-mod. Hence we have the following exact triangle in A-mod, The Axioms of (left) triangulated categories imply that there is the following exact triangle, Therefore Ω A (X) ≤ △ Ω A (Y ), hence Ω A (X) ≤ st Ω A (Y ) using again Lemma 2.8.
Stable categories and stabilization
We recall some notions about the stabilization of stable categories. For details, we refer to [Bel].
Definition 3.1. Let (C, Ω, △) be a left triangulated category. The stabilization of C is a pair (ι, S(C)), where S(C) is a triangulated category and ι : C → S(C) is an exact functor, called the stabilization functor, such that for any exact functor F : C → D to a triangulated category D, there exists a unique exact functor F * : S(C) → D such that F * ι = F .
We recall the construction of S(C) (cf. [Bel], [Hel], [KeVo]). An object of S(C) is a pair (X, m) where X ∈ C and m ∈ Z.
is an exact triangle in S(C) if and only if there exist k ∈ 2Z and a triangle, which represents the above triangle, Theorem 3.2 (Corollary 3.9. [Bel]). Let A be a finite-dimensional k-algebra, then there exists a triangle equivalence S(A-mod) ∼ = D sg (A).
We say that (Y, n) is a quasi-stable degeneration of (X, m) (denote by (X, m) ≤ qst (Y, n)) if and only if there exists k ∈ N such that Ω k−m (X) ≤ st Ω k−n (Y ) in A-mod.
Remark 3.4. Since S(A-mod) is a triangulated category, there is the triangle relation ≤ △ (cf. Definition 2.6) in S(A-mod). Next we will show that these two relations ≤ qst and ≤ △ in S(A-mod) coincide.
Proposition 3.5. Let A be a finite dimensional k-algebra, Then in S(A-mod) we have that (X, m) ≤ qst (Y, n) if and only if (X, m) ≤ △ (Y, n).
Proof. If (X, m) ≤ qst (Y, n), then by Definition 3.3, there exists k ∈ N such that Ω k−m (X) ≤ st Ω k−n (Y ) in A-mod, which means that there exist an A-module Z and an exact triangle in A-mod So (X, m − k) ≤ △ (Y, n − k), hence (X, m) ≤ △ (Y, n).
Hence Ω k−m (X) ≤ st Ω k−n (Y ), so (X, m) ≤ qst (Y, n). Now let us prove the main theorem (cf. Theorem 3.7). Before we come to the proof of Theorem 3.7, we need the following lemma.
Lemma 3.6. Let A be a finite dimensional k-algebra. Then the quasi-stable degeneration relation ≤ qst in S(A-mod) is a partial order on the set of isomorphism classes of objects in S(A-mod).
|
2014-10-24T21:03:44.000Z
|
2014-10-24T00:00:00.000
|
{
"year": 2014,
"sha1": "25ca6c5b40bc0a3bd5930bbbbc251ddfd8ed6c7e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b3405ee53b1cb2e84d9dd5fcddd51b89454cad1f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
247389664
|
pes2o/s2orc
|
v3-fos-license
|
DIONYSUS BETWEEN SĀSĀNIAN IRAN AND ROMAN ALLUSIONS
lot can be said about religious notions in the late Roman Empire, but further to the east the picture is quite different. Until today even sketching the religious evolution within the Arsacid and the Sāsānian World remains problematic. A substantial amount of the most central Zoroastrian texts are incomplete and what we do find preserved is often mirrored through redaction after the Islamic conquest. About other central textual sources the only thing we know for a fact is that they existed. The situation with other cults not belonging to the Zoroastrian state church is even worse. Of course, this is also true when it comes to the interpretation of related archaeological material. Many of the themes we find depicted on toreutics, seals or stucco are hard to explain, while other representations are strongly reminiscent of cults known from the Roman World but somehow oddly adapted. In this contribution I will try to examine one of these cults — the worship of Dionysus. Since information about the Dionysian Cult in the east is quite scarce, it might prove useful to pay attention to a differentiation emphasized by Martha Carter. She stressed the difference between the term ‘Dionysian’, written with capital-D as related to the god, and ‘dionysian’ seen as a general mode to express a relation to wine or ecstatic behaviour, not necessarily connected to the cult of the god.1) For both terms we find comparable visual vocabulary, like scenes of vintage with erots or various animals between vine branches etc. In the specific case of the Late Roman beholder there likely was a connection between both perceptions, but the further east we go with an analysis the more difficult is to say what the actual content was. In Gandhāra and
Mathura, for example, we have several examples strongly resembling scenes of the thiasos -the followers of Dionysus -but somewhat oddly adapted in respect to Greco/Roman iconography. This makes it hard to judge whether what we have here is a representation of the same cult as in the Roman World, or some different expression, partly using the same vocabulary, but with no direct related meaning. Most probably it is the latter. 2) But let us turn our attention to the Sāsānian World. Here we can at the very least present several works certainly made in Sāsānian workshops, but carried out with the vocabulary best known from Hellenistic and Roman art.
The aim of this contribution is to argue for the persistence of a Late Dionysian cult within the Sāsānian Realm and its perception as mirrored in archaeological finds from the Roman World and Eastern Hellenistic models. To show how interwoven and at the same time different such relations most probably were, I will discuss a vase in the Freer Gallery of Art from the Dionysian circle. Therefore I will try to develop a sketch of its artistic and historical background and combine this with a possible perspective that a Sāsānian beholder could have had. This will be first of all an attempt to reconstruct a content for which we have almost no sources, in order to argue for the persistence of a Dionysian Cult, about which we have no information except some archaeological finds.
We know a couple of Sāsānian toreutics with unclear connections to the cult of Dionysus since it is very hard to judge whether they represent the results of a proper Dionysian aspiration or merely echo general ideas of wine and Roman taste. A connection to Late Roman/Early Byzantine works is obvious but its contextualization is still problematic. One of the latest elements in this row is a silver vase in the Freer Gallery of Art ( Fig. 1) said to be from the 6/7 th century. 3) The imagery on the vase is composed of three scenes, divided by pillar-like palm trees, each set on top of a stylized mountain landscape.
2)
Carter (1968) connected all expressions of dionysiac aspects in Kušan art with the worship of Yakṣas and other related beings.
The trunks or pillars themselves are twisted, endowed with round bases, and on top of each in the acanthus-like leaves there is seated a bird, oriented towards the left side. The movement in the scenes is directed towards the right. The central episode represents a naked person, always described as Dionysus, wearing just a cloak lined with a wavy pattern. The pubic region is emphasized by small punches, but with no actual indication of genitals. The person's face is in three-quarter profile, his cap-like haircut is divided in a curly lower part and an upper part with chased waves. He wears a diadem or a crest featuring a round element with two small loops, comparable with a knot. In his raised right hand he holds the thyrsos with the middle and ring finger -the little and index finger are extended. The left hand holds a short leash with a panther, trying to climb up the 'palm tree' to catch one of the birds sitting on its top who observes him, frightened. In the second scene to his right side, there is a woman, depicted in profile. In her right hand she also holds a leash with a panther, but this time it drinks from a wine jar. Her left hand holds two stems or an instrument. She is clad in a long-sleeved robe reaching her ankles and an over-garment which only covers her left shoulder. The upper garment is adorned with a doted pattern. Around her head there is a thin band tied with fluttering ribbons in the back part, at the front a knotlike element is added, similar to the one in the headgear of Dionysus. Two long braids cascade down her neck, while a shorter one covers her ear. The last scene is set on the left from Dionysos. A bowing man is depicted playing with a child. He is dressed in a loose scarf and a V-shaped collar but no other indications of clothing. It is therefore not easy to determine whether he is naked or not -most likely he is depicted like Dionysus with no indication of the sex. On his head he is wearing a crest made of heart-shaped leaves.
Richard Ettinghausen emphasized that it is possible to ascribe all of these scenes to Roman prototypes. 4) However a more detailed comparison gives us some reasons to doubt this view. Only the representation of Dionysus itself might be linked to some late Roman and early Byzantine works 5) -but even here its ascription is not totally convincing. It is even harder to find direct precursors for the woman attributed as being a maenad and the man playing with the child. 6) Neither do they mirror the widely used blueprints 4) Ettinghausen (1972: 3 -4).
6)
It is remarkable that the 'maenad' and 'Dionysus' wear almost the same headgear, of neoattic models, used in Roman art, nor is it possible to sketch any closer connection to later aspirations of the Dionysian entourage. We may identify them as members of the thiasos only from general comparisons. As a result Martha Carter concludes that the Freer vase is part "of a 'dionysian' folklore that persisted in Sasanian Iran […]". 7) However, the positions of Ettinghausen and Carter imply that there has never been an independent cult of Dionysus within the Sāsānian World and simultaneously that any presence of Dionysian elements can only be explained through copies of Western sources and, as a last consequence, that all odd adaptations from this sphere were caused by misunderstandings on the part of the craftsman or interpreted in a new sense. Confronted with the lack of Sāsānian sources it is hard to judge whether such an assumption is acceptable or not -but most probably it is easier to justify this position from the historical perception of the research itself and not on the basis of archaeological sources.
It should not come as a surprise that the first scholars who dealt with Western connections to the East and resulting adaptations of religious aspirations had classical education. Their research was driven by the common wish to recognize the beloved Greek culture brought to the East with Alexander the Great, and so was their corresponding knowledge. Until today, compared to the richness of material from the West, our understanding of Central Asia and Iran before the Islam is scant. This sometimes leads to extreme interpretations, supported by the whole classical knowledge celebrated between 19th until the first half of the 20th century. For example, Kurt Weitzmann saw on a Bactrian bowl the satyr play 'Syleus', written by Euripides in the 5th century B.C., 8) but a closer examination reveals that not a single scene is really comparable to any of the different versions of Euripides -parts of it are even philological reconstructions of the barely preserved play. However, the same we can observe on a series of three Iranian silver plates with Dionysus and Ariadne siting on a chariot, which we will discuss later. This feature is atypical for Roman representations but we may be able to explain it via the apotheosis of Ariadne through the wedding with Dionysus. In this course Ariadne takes part of the divineness of Dionysos, here possibly expressed with the help of the similar headgear. So the 'maenad' would be Ariadne. The depiction of the man playing with the child was convincingly explained by Martha Carter (2015: 221) as a scene from the childhood of the god.
Weitzmann's conclusion was that the craftsman misunderstood his own motive. 9) In defense of Weitzmann one can mention the lack of typologies and comparable material, problems of dating etc. Nevertheless, the tendency to compensate missing sources with positions which are hard to prove instead of admitting our lack of knowledge is problematic.
For further analysis it is important to arranges one's material connected to dionysian expressions, to provide a rough sketch of its development. A couple of Roman toreutics found in the Iranian World and neighbouring regions prove that the Western material was accessible to local craftsman. 10) But this alone provides no hint to the question how far its original meaning was understandable for the local recipient. We have reason to believe that since the 3rd century B.C. Dionysus gained a high ranking position in several Hellenistic courts as conqueror of the East, well-fitting to the self-image of the Seleucids and Bactrian Greeks. 11) It should also come as no surprise that the philhellnic Arsacids kept a close connection to classical plays with Dionysiac content, as Plutarch reports. After a disastrous failed campaign of Crassus against the Parthians in 53 B.C., his head was brought to Artavasdes of Armenia, a subject of the Arsacid king Orodes II. The decapitated head was at once used as requisite in the play Bacchae of Euripides. 12) Even if we consider this episode as a macabre elaboration, it is remarkable how self-evident the idea of the Parthians consuming a classical drama of the Dionysiac circle appeared to Plutarch. In the same sense we can argue for one of the famous rhyta from Nisa, ca. 2nd -1st century B.C. 13) The thiasos is definitely depicted, in a distinct style best known from other rhyta from Nisa, but also strongly resembling the 'stiff' appearance of later Sāsānian representations. 14) The mosaics found
9)
For a more elaborate argumentation and a proposed interpretation in context of a Zoroastrian funeral ritual see: Schulz (forthcoming).
10)
E.g. a Roman silver plate found in Beitan, province of Gansu, China, with Dionysos riding on a lion and with a Sogdian and a Bactrian inscription. Dating between 2 nd and 3 rd century: Baratte (1996); A Roman silver handle said to be from Iran with the Indian Triumph of Dionysos, roughly 2 nd century, Alexander (1955 -56); A Roman 3 rd century silver dish with Dionysiac scenes and 2 Gupta-period Brahmi inscriptions in the Al-Sabah Collection: Carter (2015: 259 -261 Cat. 72).
11)
For the position of Dionysos in relation to Alexander the Great in Middle and late Hellenistic dynastic legitimation see Bohm (1989: 125 fn. 111).
14)
For the influence of this style even in Late Antiquity cf. a 5/6th century ewer from in the iwan of the so-called palace of Šapur I, in Bīšapūr 15) provide a different perspective. The palace's inner court was framed with mosaics most probably made by Roman craftsman, and so their appearance strongly recalls Antioch mosaics from the same period. 16) This point is remarkable, because it proves the lively influx of Roman material and secondly it shows, that the early Sāsānian court adored Dionysiac themes, which would not have happened, if the idea of Dionysos' worship had been meaningless in their imagination. However, even if the Bīšapūr findings serve as evidence to the persistence of the Dionysos Cult during the beginning of the Sāsānid Dynasty, they do so for the middle of the 3rd century, while the Freer Vase is attributed to the 6th/7th century, we still have to bridge at least 300 more years.
For this purpose we might consult a series of 3 silver plates. The first, maybe from the 3 rd century, is in the British Museum, said to be from the treasure of the Mīrs of Badakšan. 17) The next two are later, probably dating to the 5/6 th century One was purchased on the art market, now in the Freer Gallery 18) and the last exemplar was found 1953 not far from the train station of Alkino, Southern Ural, preserved in the Historical Museum Moscow (Fig. 4). 19) All three plates represent the same scene but with two distinct variations. A rather stylized cart is drawn to the left by two dressed persons while respectively two winged erots, or one on the Badakšan plate are depicted at the wheel. There is a prominent half-naked person sitting on the chariot with a bowl in his right hand -filled with berries on the Freer and the Alkino plate. The second person is depicted much smaller, sitting on the rear of the cart. There is an erot with an ewer standing at its outer left edge, who holds the end of a lash of a whip held by the second flying erot. The right side of the composition is closed by a grape-wine and a naked dancing man with a small Guyuan (Ningxia, China) in a typical Roman/Sāsānian shape but with a scene depicted in a distinct Late Eastern Hellenistic mode. The naked hero on the left is realized in the manner alike to the Dionysos of the Freer-Vase: Harper (1993: 104 fig. 88). Marshak and Anazawa argued that the ewer originated from Bactria. For the present of comparable realization in Western Iran cf. some late Parthian Dionysiac stuccos from Qal'eh-i Yazdigird: Keall et al. (1980: figs. 8 -12).
19)
Smirnov (1957). tail, wearing a lion skin on his arm and a kind of a club on his shoulder. The whole composition is set on a baseline. The baseline separates a space featuring a lion, who is about to drink from a krater on the Badakšan plate, and from a bulging jar on the Alkino and the Freer-plate. On the Freer-plate he is surrounded by musicians and on the Badakšan plate by plants, while on the Alkino plate both motives are combined.
Already Dalton compared the Badakšan plate with a cameo from the former collection of Lorenzo di Medici in Florence, but he overemphasized the idea that he had discovered the original pattern of the plate and, moreover, he misunderstood some of its elements leading him to the conclusion that it "imperfectly reproduces the Triumph of Dionysos". 20) Some of these mistakes were later on quoted to argue in the same vein for all three plates. 21) For example, it was doubted that the craftsmen still understood that they were depicting a chariot since it was not obvious how it was pulled, or that the 'Heracles' has a small tail, and that Dionysos and Ariadne have almost the same appearance etc. Finally, an orthodox Dionysiac content was refuted. However, a comparable Roman glass cameo in London already features a chariot with Dionysos and Ariadne in a similar shape with no indications of fastening to the car (Fig. 5), and the 'Heracles' strongly resembles a common neoattic type of a tailed dancing faun carrying a thyrsos and a panther skin, which originated in classical models of the 4th century B.C. very common in Roman representations of the thyasos. 22) The only late addition is the separated scene with the lion drinking from a wine jar. However this element is also frequently found in Late Roman/Early Byzantine dionysiac representations. 23) Continuing this line of argument we can explain almost the whole composition based on Hellenistic material, with some later additions, which makes it hard to trace the origins of its prototype. It is equally possible that the prototype might have originated from early Roman or from com-20) Dalton (1964: 50).
23)
Cf. a mosaic from Sheikh Zouede (Sinai), most probably mid 4th / mid 5th century: Ovadiah et al. (1991: 189 -190); alike 2 mosaics from Ptolemais (Libya) very unlikely dated to the late 1st century Because of its similarity with mosaics from Syria, a dating in the 5/6th century is preferable: Kraeling (1962: 261); in Wadi Ayoun Mousa (Syria), ca. 5th century: Balty (1984: pl. 27.2). Accepting the date of the Badakshan-plate in the 3rd century or even in the 4th century (Carter 2015, 38 -39) would mean that we find here the earliest Dionysiac scene of two felinae drinking from a vine vessel. mon Hellenistic sources -maybe Arsacid or Bactrian. 24) Notwithstanding, it can be said with certainty that the content of all three plates is actually Dionysiac.
From that perspective it would be reasonable to find similar connections between the Freer Vase and contemporary western Dionysiac representations. The situation, however, is more complicated -we cannot precisely identify a single Roman scene used as a common blueprint. It is only possible to explain certain elements with help of Dionysiac narratives and some general comparisons with Roman material. If we want to keep our hypothesis that the Freer Vase is Dionysiac we need a more developed argumentation.
The only figure compared by Entinghausen with earlier or contemporary western material is the androgynous man with the panther. His body is turned to the side while the examples of Entinghausen and their neoattic forerunners are all oriented towards the viewer. 25) This observation seems to be banal but consulting a sāsānian silver plate in the Metropolitan Museum of Art dating from the approximately same period, 26) we might identify a much closer model. Here we see two men, also naked, arranged in a heraldic composition together with two winged horses. Many details of their appearance are very similar to the Freer Dionysos. In place of a panther they hold winged horses in the same position and similarly a spear looking exactly like the thyrsos of the Freer Dionysos, even the index finger being extended likewise. Additionally all figures wear the paludamentum, the Roman military cloak, nearly absent in Hellenistic and Roman representations of Dionysos. Two of the rare exceptions 27) are a 5/6th century ivory pyxis in the Metropolitan Museum 28) and a second one in Vienna (Fig. 3b), showing Dionysos in the battle against the Indians. In Sāsānian art it is worn only by Roman emperors 24) The model of the lying Dionysos on the Badakshan plate is also known from a Bactrian terracota found in Karabag, Kashkadariya region, Uzbekistan, dated 2nd /1st century B.C.: cf. Abdullaev, Radzhabov (2000)
27)
Another example is a 4th century ivory handle in the Dumbarton Oaks Collection, quoted by Ettinghausen (1972: fig. 2).
28)
Volbach (1976: Taf. 101). during their submission before the King of Kings, 29) but it is not part of a usual Sāsānian dress. In connection with the two pyxides we might interpret the paludamentum on the Freer Vase hence as an allusion to Dionysos as a commander and conqueror of India. 30) If we try to connect the idea of a 'Roman' element in connection with Sāsānian toreutics we have to translate its meaning in terms of its beholder. The pahlavi equivalent for Roman is hrōmāyān, but this term was also taken as synonymous for Byzantines as well as for the Greeks of the time of Alexander the Great. In later consequents, even Alexander was titled 'Alexander the Roman'. Coming back to the Alkino plate we can add a further observation, giving a hint for the same interpretation. There is a symbol depicted at the hip of Dionysos, recalling the shape of a Maltese cross, framed by a crest. 31) It is unlikely that its meaning is Christian in a classical sense, but it is also unlikely that a recipient of the 5/6th century made no connection to the widespread Christian communities, since exactly this type of a cross was the most typical version of the eastern cross between the 5th and 7th century 32) Concerning the history of Christian persecutions within the Sāsānian Realm, we might propose that the Sāsānid State too did not always differentiate between them and the actual Romans -though the local Christians were seen as hrōmāyān. 33) If we suggest that the cult of Dionysos was still connected to its hrōmāyān roots, or rather to Roman believers from the same period we
30)
The Indian Campaign is the most popular story of Dionysos conquering the world inebriated in order to prove the advantages of wine. Later the story became an allegory for the deeds of Alexander the Great, and Alexander himself was copied by plenty of Hellenistic rulers cf.: Bohm (1989), and later on, by the Roman elites and even emperors cf: Kühnen (2005). The theme stayed even under Christian domination popular as shown by the appearance of a new version of the life of Dionysos written by Nonnus of Panopolitanus, Dionysiaka, in the 5th Century one of the most extensive works of the whole antiquity, featuring the Indian Campaign as its culmination.
33)
It has to be emphasized that many Christians on both sides of the Roman Sāsānian border spoke either Syrian or Armenian. A differentiation between both groups is literally just a political one. So we should consider most conflicts between them and the Sāsānian State not primary as religious but as political. Christian communities were mostly a well-integrated part of the state. cf.: Gyselen (2006). easily can translate the most common Christian symbol, the cross, as cipher for hrōmāyān. 34) This is even more plausible if we take into account that Dionysiac representations were still present in the Byzantine World until the 7th century 35) The Metropolitan plate is certainly not Dionysiac, but its reading could easily breathe the same hrōmāyān connotation 36) . In the same moment its similarities with the Freer Plate seems to give sufficient reasons to believe that both figures share a common Hellenistic model, alien to Roman representations. It is perhaps possible to identify one of these models on a pair of clasps, excavated in Tillya Tepe (Fig. 2). 37) We find a heavily armed Hellenistic warrior, elaborately pictured wearing a muscle cuirass, a shield and a lance in a stance similar to the one described previously, but dated to the 2nd quarter of the 1st century 38) . His hair is long and curly and he wears an Hellenistic helmet with animal ears and horns. Michael Pfrommer saw therein an allusion to Dionysos, suitable for a Bactrian king, and concluded that the model of the clasps originates in the Hellenistic art of the Bactrian court, but was slightly barbarized since the 'king' has long hair. 39) Pfrommer might be wrong -the juvenile Dionysos also often is depicted with long curly hair. Moreover, he too wears the military cloak in the same fashion as the one we have seen on the Freer Vase. We might thus as well identify him as Dionysos the conquerer of India. His cloak is likewise lined with a wavy pattern. This detail is absent in Roman representations, but most probably it indicates that the cloak is made of a tiger pelt, which could be understood as a hint to the omnipresent Dionysian cats. Cats are also featured within the clasps frame. Here we see two winged lions depicted in Sarmatian Animal Style, 40) demonstrating yet
38)
Also cf. the attitude of Oēšo on Kušān coins: Göbl (1984: pl. 1 -3). He represents an intermediate stage between the Tillya Tepe clasps and the Freer Vase. The indicated movement to the side is present alike on the Tillya Tepe clasps but the index-finger of the hand holding the spear is stretched like on the Freer Vase.
40)
First discussed by Rostovtzeff (1929: 41 -61). again its independence from the Western representations of Dionysos. The same is also true for a second Dionysiac pair of clasps from Tillya Tepe, showing Dionysos riding with Ariadne on a fairy cat, accompanied by drunken Sylenus. 41) When we accept, that the eastern Hellenism developed its own formal language for the cult of Dionysos 42) and, as we have seen, the worship of Dionysos lasted until Parthian and Sāsānian rule, different traditions referring to the same cult should not come as a surprise.
Coming back to the 1st pair of clasps from Tillya Tepe we find another remarkable detail. Dionysos is framed by pillar-like plants, each of them featuring a bird seated on its crown 43) . This representation is very kin to the 'palm trees' of the Freer Vase, but none of the authors, who came across this detail for the vase in scrutiny, were able to propose an interpretation. Nonetheless, it is remarkable to find the same feature depicted on the much later pyxis in Vienna raised before, also together with the Indian Campaign (Fig. 3a). Dionysos is enthroned together with Ariadne, framed by pillars with birds on its crown. The same element appears on three 5/6th century ivory diptycha with no connection to Dionysos 44) and, again in a certainly Dionysiac context, we find birds inbetween the spandrels of arcades of 3rd/4th century sarcophagi. 45) Returning to the Iranian World we also come across arcades provided with birds approximately dated in the same period. 46) If 41) Sarianidi (1985: 258, Taf. 77 -79.)
43)
Michael Pfrommer (1996, 111) thought the birds on the Tillya Tepe clasps were eagles and counted them as a link to Zeus, the father of Dionysos. It is also remarkable that the birds are adorned with fluttering ribbons, they are the oldest examples showing this element, known to the author. It might be possible to trace a connection to later representations, also connected to the dynastic iconography of Central Asia, but this idea needs more argumentation.
44)
Volbach (1976 No. 36, 43, 51 -52). 46) E.g. on a Buddhist Chapel at Tapa Shotor (Afghanistan) roughly after late 4th century: Kuwayama (1987: 172 -173); an altar in Surkh Kotal dated according a Sāsānian coin alike: Schlumberger et al. (1983: Pl. 69.235, 131 -132, 146 -147), on a Sāsānian silver vase in the State Hermitage, St. Petersburg: Marshak 1986: pl. 187; or on the celebrated Bīmarān Reliquary. In this context it would be reasonable to request the Reliquary's date in the 1st century A.C. because of its strong connections to Roman pyxides of the 4 -6th century, and its composition in arcades resembling a typical Late Antique means of organizing a composition, typical for Central Asia, Iran and the Roman west, also we do not consider it as a solely decorative element it might be possible to propose a general interpretation of the birds in the sense of a parapetasma, in western Roman iconography, or the Nimbus, as a means of underlining the most central figures. Anyway its meaning is certainly not an exclusively Dionysiac one. Since we have no idea about the original significance of the birds in Eastern Hellenistic vocabulary, a general interpretation will not be made here.
The connection drawn here between the Tillya Tepe clasps, the Roman examples and the Freer Vase can help us to retrace the continuation of Eastern Hellenistic models in Central Asia bridging at least six centuries. At the same time we also have to keep in mind the assumed context of a late Sāsānian beholder. As argued, the Dionysos Cult was still alive within the Sāsānian Realm continuing older traditions, best captured with the help of the Tillya Tepe clasps, but in the same moment the repertoire of the Freer Vase is also in many details quite differed to the Tillya Tepe scene. There was obviously an interest in expressing the contemporary understanding of iconography related to the needs of the cult, likewise mirrored in Roman representations, which makes it hard to explain any element as solely decorative.
I would like therefore to propose a more speculative interpretation for the Freer Vase. From the Roman context we know another mythical bird significant for the state propaganda and depicted sitting on altars or treesthe phoenix. From the 1st century onward it became a very popular symbol of eternity and renewal, and as such it was used in the cult of the Emperor, as well as in private or Christian contexts, always with a slightly modified connotation. In the state cult the Phoenix was seen as an allusion to the everlasting renewal of the empire and the Emperor, while within Christian iconography it expressed the resurrection of Christ and the coming Kingdom of God. 47) Especially here the Bird appears, sitting on a palm tree. 48) In the Poem of the Phoenix (de ave phoenice), written by Lactantius in early 4th century, 49) the phoenix lives in a happy land in the outer most East on top cf. the arg. of Benjamin Rowland (1946). But this discussion is behind the scope of this contribution.
48)
The palm tree is a usual representation of the heavenly Paradise. For references and depictions cf.: van den Broek (1972: 183 Tabs. 24 -30). (1993: 64 -5). Lactantius (250 -320) was one of the most important apologetics transmitting classical narratives into Christian vocabulary. of the highest mountain, where it sits on a huge tree. In other versions it is directly expressed that the Phoenix lives in India. In such a context we easily can interpret it as an allusion to India or to the East in general. The same assumption might be true for the ivory pyxis in Vienna and some other Late Roman / Early Byzantine pieces. 50) Especially the pyxis' dated to the 5/6th century (Fig. 3), with the rare Dionysos wearing a military cloak, and the representation of the Indian Campaign in combination with the birds featured on pillars, clearly provides a connection with the Freer Vase. Therefore, it is very likely that in both cases Dionysos is represented, as the conquerer of India featuring the phoenix as a hint to the ambition of Dionysos -India. With that we can even explain the mountain landscape under the 'palm trees' of the Freer Vase as the mountain mentioned by Lactanitius. 51) This idea is also supported by the genre-like scene of the panther, who is about to climb one of the trees to eat the phoenix. There are many similar scenes in the thiasos. We consistently find representations of drunken, foolish followers often cruel in their demeanour -a solely decorative element would not be a part in this narrative.
Richter
From this perspective it would be reasonable to explain the scene as a typical Late Antique Dionysiac aspiration, very close to Roman expressions from the same period -and indeed it is, but similarly we have to employ in our analysis the perspective of the contemporary beholder and its art historical context. As demonstrated, we can trace two different traditions expressing the same cult. The one under scrutiny represents an Eastern Hellenistic tradition, maybe partly influential for Late Roman expressions since we can trace certain Late Dionysiac elements first in the East, like the panther drinking from a wine jar, also figured on the Freer Vase, 52) or the birds siting on pillars
50)
Cf. the Phoenix Mosaic in Daphne. The phoenix is sitting on top of a crag, framed by rams borrowed from Sāsānian sources: Lassus (1938). It might be easily interpreted as an allusion to the East. Also cf. a mosaic in the apsis of a Villa in Piazza Armerina, Sicily. Depicted is a phoenix siting at the side of a personification which was seen as being India: Settis (1975: 950 -951).
51)
van den Broek (1972: 314 -319) argued that the home of the Phoenix, as described by Lactantius, is largely inspired by the imagination of Christian paradise, additionally he stressed comparable conceptions in the Iranian cosmology of the Bundahišn. This would provide an interesting link to a common means of Sāsānian compositions to set scenes in stylized mountain landscapes. It might be possible to see therein a hint for supernatural environment.
52)
Cf. fn. 23. or trees first found on the 1 st century clasps from Tillya Tepe featuring the victorious Dionysos in India. It is even harder to encounter the perspective of the Sāsānian recipient from that period. We tried to explain the bird as an Roman allusion to India. The phoenix was a very widespread motive in the Late Roman World, and as such it was also known to the Sāsānian beholder. This is proven by Christian seals, owned by Sāsānian subjects depicting the phoenix on an altar. 53) Therefore the interpretation of the phoenix as an allusion to the East, would be most probably an adaptation from Roman sources but based on a much older Eastern Hellenistic pattern that we do not completely understand in its original content. It might be interesting to 'translate' the phoenix in the same way as we did before in case of the cross on the hip of the Dionysos on the Alkino plate in Moscow (Fig.4), as a symbol for everything hrōmāyān, connecting the cult with its Hellenistic roots and its Early Byzantine present but this idea is highly speculative. Even if our interpretation is wrong both traditions, the Roman and the one represented through the Freer Vase, were most probably not independent of each other, and so it is very likely that in Late Antiquity a Dionysiac cult continued in the Sāsānian Realm.
|
2022-03-12T16:14:37.125Z
|
2018-12-31T00:00:00.000
|
{
"year": 2018,
"sha1": "dbb7678a465650ae1f9737f7307b79b2f0d237a9",
"oa_license": null,
"oa_url": "http://czasopisma.marszalek.com.pl/images/pliki/aoto/7/aoto702.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "eef50cdf7bbfc984ee5ec0d63be2a0f4befe3017",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": []
}
|
269024862
|
pes2o/s2orc
|
v3-fos-license
|
Navigating the Caregiving Pathway: Understanding the Contextual Influences on Sense of Coherence Among Family Caregivers
Introduction: Family caregivers of patients with chronic conditions face challenges such as emotional and physical stress, which can lead to caregiver burden. A good sense of coherence (SOC) is crucial in promoting resilience, positive health outcomes, and coping. Caregivers with a high SOC are optimistic about their caregiving roles and finding meaning and purpose in their responsibilities. In this background, we looked into the contextual influences that facilitate or impede the sense of coherence of the family caregivers of patients with chronic conditions requiring home-based long-term care. Methods: We conducted telephonic interviews with 10 self-identified primary family caregivers of patients with chronic conditions. We utilized semi-structured interview guidelines, transcribed the interviews verbatim, and performed thematic analysis. Potential factors influencing caregivers' SOC were identified through inductive coding, allowing themes to emerge from the data. However, we report themes along the three components of SOC. Results: Good knowledge about the disease conditions, open communication with care recipients and providers, and past caregiving experiences all contribute to improving comprehensibility. Insufficient knowledge about the condition could be distressing. Effective management requires adapting care strategies through learning, planning, and utilizing available resources, and support networks, too, play a crucial role. However, insufficient caregiver support and neglecting one's health can result in distress and disruptions in care management. Maintaining positive perspectives and ascribed values to interpersonal connections can enhance meaningfulness among caregivers. These interpretations may not apply to caregivers with affective disorders. Conclusion: Various aspects influence the comprehensibility, manageability, and meaningfulness pertaining to the situation of family caregivers, and these in turn impact their well-being and ability to provide quality care. Understanding these factors can help create support systems with targeted interventions and strategies to reduce caregiver burden and improve quality of life.
Introduction
The aging population and rising life expectancy necessities the need for long-term care for the elderly and people with chronic conditions.Informal caregivers, often women family members, meet these care needs [1].Such caregivers face numerous challenges, including emotional and physical stress and strain that can lead to or exacerbate caregiver burden [2][3][4].Care tasks are heavily gendered, and most family caregivers are women.Traditional gendered roles and norms mean that women have much less power in decision-making than men within domestic spaces.Caregivers may thus have to negotiate with complex challenges and may end up feeling helpless and hopeless.Caregivers' incapability of supporting the care recipient may be associated with feelings of failure, anxiety, guilt, and unmet needs.These feelings can be a significant part of the caregiving experience and can be detrimental to both the caregiver and the care recipient -a "burning at both ends" situation marked with feelings of helplessness and hopelessness [5].
Research into negotiating with these roles and their challenges increasingly focuses on the concept of sense of coherence (SOC).A person's SOC is significant for promoting resilience, positive health outcomes, and adaptive coping across diverse populations.A high level of SOC protects against anxiety, depression, posttraumatic stress disorder, and burnout or exhaustion [6][7].Those who have a greater sense of SOC have a higher quality of life and can also significantly predict a caregiver's emotional well-being [8] According to Eriksson, SOC, as developed by Antonovsky, is a salutogenic concept that can explain why certain individuals remain healthy while others become sick during hardship [9].SOC centers around the concept of health and well-being and demonstrates how an individual's perspective on life and capability can influence how they deal with challenging circumstances.It involves seeing life as ordered, manageable, and meaningful.It describes an individual method of using their inner trust, a compassionate concern for themself and their abilities, which enables them to find, take advantage of, utilize, and repurpose the resources available to them.SOC thus has three components: comprehensibility, manageability, and meaningfulness [10].These components constitute a theoretical model for understanding the relationship between stress, coping, and health.SOC is a salutogenic concept that refers to the common attribute underlying the utilization of resources and seeing the world positively to attain adaptive coping with complex stressors [11].Comprehensibility can be seen as a cognitive attribute that enables the individual to comprehend their internal and external circumstances rationally.Manageability is a behavioral attribute that indicates managing and utilizing resources to deal with a situation.Meaningfulness can be seen as a motivating attribute that gives particular emotional meaning to our lives, which helps us to navigate through everyday problems or challenges.Caregivers with a high SOC tend to be optimistic about their caregiving duties, finding meaning and purpose in their responsibilities and making it easier to deal with the challenges that come with it.They better understand the care recipient's needs, enabling them to provide better care and assistance.Thus, having a high SOC can serve as a tool for efficient coping among caregivers [12].
Several studies reiterate the favorable impact of high SOC levels on family caregivers on their health-related aspects, psychological and coping aspects, and negative influence on the burden experienced by them [13][14][15].A caregiver's level of SOC may be shaped by multiple aspects steeped within the contexts where the caring role emerges and is sustained.Biddle's role theory, a symbolic interactionist approach that permits some active place for individuals, considers roles as evolving through the interplay of individual and social processes that we refer to as the context here.For instance, values may be considered as individual-level aspects but inevitably reflect social concept systems like body-related, economic, or religious norms and ideologies, even if individual-level interpretations may conveniently alter these.This theory works well in small social systems like within families [16].There is limited evidence on the contextual influences that shape the SOC among such family caregivers.Therefore, we conducted an exploratory study among family caregivers of patients with chronic conditions on the contextual influences that potentially facilitate or impede their sense of coherence.Understanding these can aid in creating support systems that enhance health outcomes and well-being for family caregivers, enabling targeted interventions and strategies to reduce burden and improve quality of life.
An initial analysis of this work was presented at the sixth Amrita International Public Health Conference, which was held on December 1-2, 2023.
Research design
This study was carried out as part of a formative assessment of care recipient suffering and caregiving experiences of caregivers of patients with chronic conditions that require long-term care support.We used in-depth interviews to delve into the caregiving experience of family caregivers.Recognizing the subjective nature of caregiving and its intricate ties to social, cultural, and political contexts, we opted for in-depth interviews to investigate this phenomenon.
Researcher characteristics and reflexivity
Both researchers hold postgraduate degrees in public health or community medicine and have extensively explored constructivist and post-positivist ideas in their research on palliative care.This familiarity with these approaches and various study findings formed the foundation for developing the in-depth interview guidelines and analyzing the interview data.AVR, who conducted the interviews and did the initial analysis, lacks direct experience in providing palliative care but has close engagement with caregivers for about five years and identifies as a woman from within the same cultural setting as the caregivers interviewed.Both authors were closely involved in caregiving within their respective families, which may have influenced their analysis due to personal experiences with the issue.However, we acknowledge that this background may not fully capture the intricacies of caregivers' experiences.
Context
The study was conducted in purposively selected panchayats of Kollam district, Kerala.The state-run palliative care home program has been implemented in all panchayats.Under this, palliative care nurses offer monthly home visits to chronically ill patients confined to their homes, ensuring practical and immediate care and support for accomplishing activities of daily living.
Sampling strategy
At first, palliative care patients were selected purposively from the chosen panchayats of the Kollam district through palliative care nurses and Accredited Social Health Activists (ASHA) of the respective panchayats.Patients with cancer, dementia, heart failure, chronic kidney disease, and stroke were selected purposively to get a range of caregiver experiences.These conditions constitute the top five chronic morbidities according to the Global Burden of Disease study estimates for India [17].The caregiving requirements for these are likely to be long-term and diverse.For each patient chosen, one self-identified adult (more than 18 years) primary caregiver providing care for at least 12 months was included.Based on descriptions of potential participants elicited from the palliative care nurses or ASHA workers by AVR, cares who were physically or emotionally unable to participate or whose care recipient was critically ill at the time of the interview were excluded.The sample limit of 10 depended on reaching the thematic saturation during the analysis of indepth interviews.
Data collection methods
In-depth interviews were used to capture the care experiences of the family caregivers.A semi-structured question set with probes was used to conduct the in-depth interviews.The interviewer used prompts or questions to get participants to elaborate on their response -for instance, by eliciting the back story of a certain situation or requesting participants to elaborate further on some statements that the interviewer felt were relevant to the study.The interview was done telephonically at a mutually convenient time for the interviewer and the participant.AVR initially contacted prospective participants, explained the study's purpose, and took verbal consent, which was subsequently documented electronically when interviews were recorded.The interviews were conducted in the participants' native language (Malayalam).
Data analysis
All interviews were translated into English by the researcher before analysis.Open coding analysis was done manually.Emerged codes are mapped into categories to develop themes.The themes were then subsequently grouped to describe various aspects that facilitate and impede the three dimensions of SOC in family caregivers.AVR did the initial coding, followed by code refining and discussion with RPV. Discussion and recording were repeated until the two researchers reached a consensus.The above method helped clarify, illustrate, and validate the emerging patterns.To ensure an accurate understanding of the findings, direct excerpts from the interviews were used to explain the themes and categories that emerged.We tried to establish the dependability and confirmability of our findings through reflexive journaling, audit trails, periodic meetings between the investigators, and presentation of the findings to other researchers.
Ethical clearance
Ethics clearance was obtained from the Institutional Ethics Committee of Sree Chitra Tirunal Institute for Medical Sciences and Technology (SCTIMST) (Ref No. SCT/IEC/2048/May/2023).
Participant profiles
The average age of caregivers (CG) was 47.5 years old, ranging from 38 to 59 years old.All interviewed caregivers were women, spouses, or daughters/in-laws of the care recipients (CR).The average age of care recipients was 66.1 years old, ranging from 53 to 82 years old.The average duration of caregiving was 61.8 months.Interviews conducted for the study had an average duration of 30 minutes, though varied between 20 to 50 minutes (Table 1).We then report the themes that emerged from the analysis using illustrative participant quotes.We have partially modified the quotes for clarity and ease of understanding, and these are indicated using ellipses or angle brackets.Each quote is followed by the diagnosis of the care recipient, the age of the caregiver in years, and the relationship of the caregiver to the care recipient in brackets.Most caregivers expressed complex challenges related to the care role where they had to balance self-care, other responsibilities, and the care role-related tasks.
"…I will go (for my check-up).I will ask my kids to look after her (care recipient) ... I will go at that exact time of the appointment and come back soon after.I need to make time for myself...At night, she keeps calling and won't sleep.It's been like that for quite some time now.I need to get up early, also.It is difficult.
Children <need to go> to college, so I have to make food for them.… Some issues are there.We are human, and we adapt to everything, so I'm doing all this.Our mind is everything.We will have that confidence.I became more adaptable to this over time."(Stroke CG, 49, daughter) Caregivers respond to these challenges try to overcome them and the SOC level could be considered high when they succeed.
"…Earlier, he felt so bad that when I cleaned him after using the commode or bathed or dressed him.His face will be sad and all.I could feel that, but he never said anything.Then I will say something funny and make him laugh; like that, I manage him well.Now he is okay.I make him laugh while bathing to overcome his embarrassment."(Stroke CG, 39, daughter) One caregiver could successfully balance her career as a beautician with control over her own routines and care-related tasks.
"… It's (the beauty parlor where the CG worked) near my home...I have kept a home nurse to take care of her when I'm away.She (home nurse) needs to do <very little>, only to give her the lunch through the <nasogastric> tube.If I need to go anywhere, she will come and sit with my mother...I will come back soon.I will cook and do everything… I won't stay anywhere overnight.I came back in 2-3 hours."(Cancer CG, 38, daughter) Some others were struggling to cope with the situation.
"…I feel bad when she won't listen to me...I will ask her...' why you are doing this to me.Why you are like this to me?' I especially feel bad when someone comes home...If someone is visiting, I will get angry at her (care recipient) and make her change her clothes and <diapers>.She won't listen.Then I feel bad that I'm doing everything and she is behaving like that.I feel bad about that."(Dementia, CG, 41, daughter-in-law) We have listed the emerging themes and elaborated them in the following section (Table 2).We have grouped the emerging themes and reported them under the logically consistent domain of SOCcomprehensibility, manageability, or meaningfulness -respectively.The themes represent enabling and hindering factors of various components of SOC and suggestive implications on the caregiver's capacity to cope with the caring role and avoid burden and distress.We have used the term care recipient instead of patient unless the statement involved primarily biomedical aspects of care.
Accessibility and Knowledge About Patient Condition and Previous Experience
Access to information about the patient's condition through healthcare professionals and personal networks enabled a more comprehensive approach to caregiving.Furthermore, obtaining and using information from friends provided valuable insights, such as alternative treatment facilities that offer better care at a reduced cost.This collective knowledge and experience facilitate the development of a comprehensive understanding of the situation, thereby helping caregivers make informed decisions about the provision of care.Previous experience in caregiving and firsthand knowledge of similar conditions within the family also contributed to a significantly enhanced understanding of the care required and its underlying circumstances.
For one participant, having worked as an attendant in a hospital and the resultant familiarity with different care needs provided practical skills and insight into recipient care needs.
"…my father died of kidney failure.Then my grandmother died of cancer.So, I didn't feel anything (about the cancer diagnosis of my mother).I know about the condition.Then my mother was also brave.She said that even small kids have cancer and I'm old so it is okay.She also took things bravely.My father also had (heart) attacks...3 times so I wasn't shocked" (Cancer Caregiver, 38, Daughter)
Access to Formal and Informal Financial Assistance and Support
Medical emergencies and chronic treatment require access to financial support and schemes.This can come from various formal and informal sources, such as bank loans, government schemes, family assistance, and help from friends and neighbors.Government schemes and insurance may provide financial aid or subsidized treatment options, easing the burden on patients and their families.Family and friends' support plays a significant role, often stepping in financially; most often, elderly caregivers heavily rely on their children for financial support.
" Caregivers have diverse beliefs about the care recipient's condition, considering biological, environmental, and lifestyle factors.A comprehension of the cause of the disease or condition that makes sense and is perceived as logical by the caregiver provides a foundation for a good caregiving environment and the caregiver's ability to adjust to challenging circumstances.
"The doctor said this was because of the intake of excessive medicines.Then the doctor said at that time not to take even one paracetamol for any cold or fever.Then we didn't take any medication after that."(CKD Caregiver, 51, Spouse) "He was a smoker.It is because of that.I used to tell him not to smoke.But he never listens.Only because of that, this happened."(Heart failure Caregiver, 47 Spouse) Continuous learning and education about patient conditions and care needs and having a positive attitude towards the role Continuous learning and education are fundamental components of providing quality patient care.By actively monitoring and understanding patient symptoms and medication, caregivers can easily understand patient disease progression and evolving care needs.Maintaining a positive attitude fosters a clearer understanding of circumstances and enables one to approach challenges with a constructive mindset.
Lack of Guidance, Support, and Knowledge About Care Recipient Condition and Sole Caregiving
Caregivers may struggle to understand their situation without an adequate awareness and understanding of their care recipient's condition.Moreover, the absence of caregiver support from healthcare professionals further compounds these difficulties, as individuals may struggle to access the necessary resources and assistance to cope with their environment.Also, being the sole caregiver within such a dynamic can exacerbate feelings of isolation and be overwhelming, leading to increased emotional strain.
"I'm sad about his condition.I'm struggling with him.I'm taking care of everything.I cannot do anything or cannot go out.Everything seems bad in my life.I always feel I'm alone."(Dementia Caregiver, 59, Spouse)
Planning and Implementing Diverse Care Approaches Brings Balance to Care and Other Responsibilities
Meticulous planning and proactive measures can greatly contribute to the manageability of both patient care and other responsibilities.Caregivers may schedule hospital appointments well in advance to avoid long hospital wait periods.They also implement care approaches and behavior modification strategies like planning household activities to coincide with the care recipient's sleeping schedule or physical adjustments to the house to accommodate their specific needs, ensuring their comfort and safety.This may include modifications such as keeping doors closed to prevent the care recipient from wandering off, building attached toilets, and making changes to their clothing to ensure they are appropriately dressed.Establishing routines in care tasks along with close monitoring of recipient's condition helps to ensure effective care management.
Financial Planning, Management, and Utilization of Available Resources
Caregivers have adopted various strategies to manage household expenses while ensuring proper treatment for the patient.They have prioritized the financial resources for the patient's treatment by cutting down on non-essential household costs.Additionally, choosing treatment, medicines, and assistive aids provided by public services when available had helped minimize treatment expenses.The utilization of government schemes and insurance are valuable resources that help caregivers with budgeting, especially during extreme financial difficulties.Furthermore, having a job enabled such caregivers to contribute financially to the patient's care, supplementing available resources.
Previous Experiences, Support, and Guidance From Peers
The caregiver's previous experiences and exposure to similar roles resulted in delivering better care and managing various responsibilities effectively.Patient-centered care plans, the caregiver's experience, and guidance from other caregivers in similar situations could improve manageability for family caregivers.
Guidance from individuals who have shared similar caregiving experiences has been instrumental in shaping care approaches.Other family members offering assistance with caregiving responsibilities helped caregivers to better balance their lives with caregiving.Having a network of family and friends with whom the caregivers could share their emotional distress or challenges encountered in their caregiving role has been crucial for maintaining their well-being and resilience.Utilizing paid care support had been essential in providing some flexibility and enabling such caregivers to effectively balance their caregiving responsibilities with other commitments, such as their jobs.
"I used to work in a hospital as an attendee.So, I know something (about the care needs)" (Heart failure Caregiver, 47, Spouse)
Ignoring Personal Health and Self-Care Management
Caregivers often neglect to monitor or seek care for their own health conditions due to their caregiving responsibilities.By failing to prioritize their well-being, caregivers risk their own health.Additionally, caregivers might conceal their illnesses from the care recipient and other family members, often to avoid incurring treatment costs in addition to the ongoing care recipient related expenses.
"When he became sick... when his condition became bad...I gave more importance to him than me.I didn't buy (my) medicines.I put mine pending.I did nothing… no testing, no medicines, nothing.If he feels like I'm also sick, then he will be tense."(Heart failure Caregiver, 54, Spouse)
Lack of Support and Opportunity Cost
Lack of support impacted care task management and incurred an opportunity cost for caregivers.Lack of support often forced the caregiver to stop patient treatment, like physiotherapy, while they were forced to quit their job or lost their jobs and were no longer able to enjoy a social life.The absence of financial security and support further pushed them into debt.Sometimes, caregivers felt that the other family members were not dependable, choosing not to entrust them with care tasks.
Inadequate Knowledge About the Patient's Specific Care Needs
Caregivers who lacked the knowledge and access to specific resources necessary for effectively managing the specific behaviors exhibited by their patients, like in dementia, made the management of care more difficult.This lack of awareness often led to difficult situations like controlling behaviors and inadequate emotional support.
Caregiver's Motivations
Caregiver motivations include reciprocity, beliefs in meaningful relationships, and other benefits that foster caregiving.These motivations encompass various aspects that contribute to the meaning of caregiving.Key aspects here include a sense of appreciation from others, satisfaction, and happiness.Moreover, an increased understanding, compassion, and empathy within the care relationship create a profound sense of purpose.Among the perceived benefits that caregivers see as positive outcomes include the hope that their children will reciprocate the care in the future, personal growth and development, a positive outlook on life, and various religious and cultural beliefs.When all these aspects were aligned, it enabled caregivers to find profound meaningfulness in caregiving.
"The doctor appreciated me "midukki" (good girl).He (patient) had become well physically "nannayi, kuttapanayi" (gained weight and became handsome)" (Stroke Caregiver, 58, Spouse) "Tomorrow I also will age.My children are seeing this and growing up." (Dementia Caregiver, 41, Daughter in law) "Now it is my turn to take care of her.I have to do that for my mother.we need to take care of her.It's our responsibility.We cannot expect anything in return for that….This is our karma."(Stroke Caregiver, 49, Daughter) Lack of meaningful relationships and affective disorders Caregivers may lose meaningfulness due to feelings related to a lack of fulfillment, worsening of affective disorders, or loss of previous belief.When the caregiver-care recipient relationship lacks meaning, there is a lack of significance in the caregiving role and lives.
"He stares at me like a stone.That's all.He won't comfort me or anything.He will do nothing…he just stares…nothing."(Stroke Caregiver, 58, Spouse)
Discussion
Through a set of in-depth interviews, we aimed to learn about contextual influences that facilitate and impede the sense of coherence among family caregivers of palliative care patients in Kollam, Kerala.Being biomedically trained public health professionals with some research engagement with caregiving, we tried to list interview questions and generate conversations based on our readings and experience.Although it was difficult to claim an insider position alongside women caregivers who were primarily homemakers in Kollam, Kerala, we actively tried to situate our codes and themes about context aspects shaping SOC within participant narratives, which we then connected with the components of SOC.Participants might have chosen to give socially desirable responses to our inquiries about their roles and tasks.However, our efforts were to interpret aspects of the context where responses to difficult tasks associated with such roles emerged, and our findings would still be credible despite this.Our interpretation of context appears individualistic partly because of the small system where caregiving roles are evolving -the family level careand partly because of the underlying symbolic interactionist perspective of Biddle's role theory that strongly retains individual-level aspects and places interplay with the context as the basis for evolution of roles.Caregivers start by appreciating aspects of roles, gaining information, and later insight into the roles as the tasks and behaviors become their own, but this is not a social imperative and is shaped by the individuals themselves [16].
The study highlights the importance of facilitators in developing comprehensibility, manageability (or bearability), and meaningfulness among family caregivers.All studied caregivers were women, which is practically the norm in most families in Kerala.Culturally, women's expected roles are marked with obligations to provide care, and prioritizing the provision of care over their own well-being is often normalized.This can often lead to helplessness, hopelessness, demoralization, and existential distress [18].The identified enablers of comprehensibility provide the necessary expertise and assistance for family caregivers to plan patient-centered care goals while maintaining their own well-being.However, the lack of these enablers acts as a hindrance and could lead to caregiver distress and suffering.This lack of comprehensibility may lead to greater uncertainty, distress, and a sense of vulnerability among caregivers.The findings show that family caregivers manage their caregiving challenges with the necessary financial, informational, and emotional enablers.The lack of those might impair an individual's ability to properly manage their roles and responsibilities.We also found that caregivers find meaning and fulfillment in the caregiving role through its perceived benefits.Loss of these enablers can result in disputes in care relationships or caregiver issues that may even escalate to existential distress in extreme situations.This lack of meaning can cause significant suffering, resulting in the loss of their goals and objectives of life.
Understanding enabling and hindering aspects is crucial for targeted interventions and strategies for developing a high level of SOC in family caregivers.Developing interventions that promote these enablers can address family caregivers' challenges, reduce caregiving burden, and foster SOC.Given that Kerala's palliative care program is heavily dependent on family caregivers, there is a need for further research attention into how policymakers, healthcare providers, and family caregivers perceive these issues and explore how to use the concept of SOC to address barriers and leverage opportunities for preventing adverse caregiver outcomes.
Flexible job opportunities, accessible medical care, social support, financial assistance, and support in respite care can help develop a SOC among these Caregivers.Improving health literacy is critical to reducing the issues of family caregivers and obtaining help and resources.These suggestions seek to maximize enabling aspects while minimizing barriers for these family caregivers.Research indicates that factors such as effective communication, knowledge, social and financial support, and satisfaction in caregiving play crucial roles in alleviating the burden faced by family caregivers [19][20][21].Additionally, cultivating a sense of coherence (SOC) has been identified as a key contributor to reducing this burden [6,12].Therefore, possessing these elements enhances one's sense of coherence and diminishes the burden of caregiving.A good sense of coherence, in general, is correlated with and favors good pre-care relationships.Positive care patterns with successful adaptations are especially emphasized when prioritizing manageability and meaningfulness [22].Caregivers with a low sense of coherence can be considered as possibly vulnerable to adverse outcomes for both the care recipient and the caregiver [23].
One limitation of the study was that we did not examine the theoretical salience of the concept of SOC before embarking on describing aspects related to it in our study setting.Also, we included only women caregivers aged between 38 and 58 years who were giving care for at least a year.We may have inadvertently selected caregivers who were "survivors" or could adapt to the circumstances.Including others who had just entered a caregiving role or male caregivers might have produced different results.Also, we did not include younger or older caregivers who may have experienced and responded to the same circumstances differently.We also did not consider potentially important aspects of the expression of SOC, such as the duration of caregiving, the attitude of the caregiver to the care recipient, and vice versa.These may limit the transferability of our findings to such situations.
Future research should explore these nuances along with cross-cultural variations.Kerala, characterized by a dearth of institutional care support, grapples with unique challenges, particularly with increasing spousal caregiving, making caregivers themselves older, given the prevalence of migration and nuclear family arrangements in the state.Understanding the specific dynamics in a broader, cross-cultural context will contribute to a more comprehensive and nuanced understanding of care dynamics, shedding light on how societal structures impact the experiences and challenges faced by caregivers.Other contexts in India and other low-and middle-income settings may have different demographic patterns, gender and power relationships, decision-making approaches, and meanings for suffering and care.There is also a need to look for cost-effective, scalable interventions in enhancing the SOC in family caregivers, which may have to be specific, such as therapies, psychoeducational interventions for caregivers, or broader salutogenic interventions, such as promoting positive mental health and well-being, such as those for healthy aging.
Moreover, investing in capacity building and skill-based training for healthcare providers is essential.This would enable them to better recognize distress among caregivers and provide timely and tailored support.While our findings offer limited insight in this direction, future research could further investigate the availability of health and social resources for caregivers, as well as their characteristics and unmet contextspecific needs of these caregivers.Future studies can also examine the systemic inequalities in addressing caregiver needs, which is crucial for devising targeted programs at the health system level.This research can contribute to developing a comprehensive approach to supporting these caregivers and improving their overall well-being.Larger, diverse sample sizes and exploring other related concepts or variables such as caregiver burden, role conflicts, and caregiver burnout may provide a more comprehensive understanding of aspects influencing family caregivers, enabling more applicable findings.We did not explore the potential role of technology in developing SOC among caregivers, which may also be the focus of future research on this topic.
Conclusions
Empowering the family caregivers of palliative care patients by enhancing the level of SOC can prevent or reduce their burden and improve the quality of care they provide.Various factors, including knowledge, access to resources, open communication, financial management plans, care approaches, and support networks, influence SOC's three components.Based on our findings, we feel that it is crucial to empower caregivers through comprehensive education and help them negotiate for more resources through the program or through peer groups and targeted assistance.We, therefore, advocate for improved resource accessibility, enhanced support networks, and policy and program review to improve support activities for family caregivers.By helping caregivers of palliative care enhance their sense of coherence levels, caregivers can be enabled to navigate their roles with resilience and find fulfillment in their caregiving journey.
Comprehensibility 1 .
Accessibility and knowledge about patient condition and previous experience 2. Access to formal and informal financial assistance and support 3. Open communication and guidance from health care providers and caregivers going through similar conditions 4. Logical beliefs about the care recipient's condition 5. Continuous learning and education about care recipient's conditions and care needs and having a positive attitude towards the role 6. Lack of guidance, support, Knowledge about patient condition, and sole caregiving Manageability 1. Planning and implementing diverse care approaches brings balance to care responsibilities and other responsibilities 2. Financial planning, management, and utilization of available resources 3. Previous experiences, support, and guidance from peers.4. Ignoring personal health and self-care management 5
TABLE 1 : Participant Profile
M: Male; F: Female
TABLE 2 : Themes indicating contextual aspects that may facilitate or impede a sense of coherence
There is a [socioeconomic category] in [Hospital name].So we need only less money for treatment.But we need to submit our ration card and other documents.Then they [Hospital staff] considered us as [Socioeconomic category].Because of that, we only need to spend about one-third of the amount" (Heart failure Caregiver, 54, Spouse)Open Communication and Guidance From Healthcare Providers and Caregivers Going Through Similar ConditionsOpen communication and guidance from healthcare providers (HCPs) and others are crucial in ensuring the appropriate care, such as wound care after surgery.Through this effective communication, caregivers can comprehensively understand their care recipient's condition, medications, and care needs.Additionally, interaction with other caregivers facing similar circumstances may lead to a gain of practical advice.
|
2024-04-10T15:13:46.243Z
|
2024-04-01T00:00:00.000
|
{
"year": 2024,
"sha1": "6da082a6d0df181fe4cfa8cdbc9d7dfc092be42f",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/232439/20240408-32189-195msvl.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c2b0e8f4e8ed7651a84710befdf09cc26af66129",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
}
|
136193361
|
pes2o/s2orc
|
v3-fos-license
|
Hexagonal boron nitride nanomechanical resonators with spatially visualized motion
Atomic layers of hexagonal boron nitride (h-BN) crystal are excellent candidates for structural materials as enabling ultrathin, two-dimensional (2D) nanoelectromechanical systems (NEMS) due to the outstanding mechanical properties and very wide bandgap (5.9 eV) of h-BN. In this work, we report the experimental demonstration of h-BN 2D nanomechanical resonators vibrating at high and very high frequencies (from ~5 to ~70 MHz), and investigations of the elastic properties of h-BN by measuring the multimode resonant behavior of these devices. First, we demonstrate a dry-transferred doubly clamped h-BN membrane with ~6.7 nm thickness, the thinnest h-BN resonator known to date. In addition, we fabricate circular drumhead h-BN resonators with thicknesses ranging from ~9 to 292 nm, from which we measure up to eight resonance modes in the range of ~18 to 35 MHz. Combining measurements and modeling of the rich multimode resonances, we resolve h-BN’s elastic behavior, including the transition from membrane to disk regime, with built-in tension ranging from 0.02 to 2 N m−1. The Young’s modulus of h-BN is determined to be EY≈392 GPa from the measured resonances. The ultrasensitive measurements further reveal subtle structural characteristics and mechanical properties of the suspended h-BN diaphragms, including anisotropic built-in tension and bulging, thus suggesting guidelines on how these effects can be exploited for engineering multimode resonant functions in 2D NEMS transducers.
INTRODUCTION
Nanoelectromechanical systems (NEMS) vibrating at their resonance modes and made from atomic layer crystalline materials have attracted increasing research interest owing to their promises for exceptionally high responsivities and sensitivities to external stimuli, enabled by their ultralow weight (mass) and ultrahigh surface-area-to-volume ratio [1][2][3] . Following semi-metallic graphene, the early hallmark of two-dimensional (2D) crystals, a variety of 2D materials have been studied as structural materials for 2D NEMS resonators, including superconducting NbSe 2 (Ref. 4), semiconducting MoS 2 (Refs. 3,5-7), and black phosphorus 8,9 , which opens a wide spectrum of emerging applications, such as sensing 10,11 and signal processing with ultralow power and broad tunability 12,13 . Although atomic layer crystals with bandgaps ranging from 0 to 2 eV have been studied in earlier explorations (such as 0 eV graphene 1,2,12 , 0.3-1.5 eV black phosphorus 8,9 , 1.2-1.9 eV MoS 2 (Refs. 3,5-7), and so on), 2D NEMS utilizing wide bandgap atomic layer materials have not yet been demonstrated. The adoption of wide bandgap 2D materials in NEMS resonators could offer new opportunities for interactions with ultraviolet (UV) photons and for higher power density-handling capabilities, including higher electrical voltage and higher light intensity.
Ultrathin hexagonal boron nitride (h-BN) crystals isolated from their layered bulk have recently been employed as essential building blocks for emerging 2D devices and heterostructures. The h-BN material has a very wide bandgap (5.9 eV) 14 and excellent chemical and thermal stability beyond that offered by graphene 15,16 , making h-BN attractive for wide bandgap 2D NEMS resonators. The h-BN crystal also possesses attractive mechanical properties owing to its hexagonal crystal structure nearly identical to that of graphene, including a Young's modulus theoretically predicted to be as high as E Y ≈780 GPa (Ref. 17), and a very high breaking strain limit of ε≈22% (Ref. 18). In addition, monolayer h-BN is theoretically predicted to have strong piezoelectricity, thus showing promise for potential integrated electromechanical actuation and sensing 19,20 . Further, very importantly, the graphene-like in-plane honeycomb crystal structure of h-BN facilitates an ultra-smooth surface and lattice-matched interface with graphene, thus enhancing electron transport in graphene channels to achieve greatly boosted mobility (up to 1,000,000 cm 2 V − 1 s − 1 ) [21][22][23] . Given these special characteristics, it is natural to choose h-BN as an attractive candidate among 2D crystals for innovating future generations of NEMS and nanooptomechanical systems. To date, h-BN has primarily been employed as lattice-matched high-κ dielectric layers in 2D heterostructures for enabling high-performance electronic devices [21][22][23] , and in UV and deep UV optoelectronic devices 14,24 ; however, its excellent mechanical and electromechanical properties have not yet been investigated and remain unexploited in device platforms. This lack of exploitation persists because of technical challenges associated with the fabrication of suspended structures and nanomechanical devices in the h-BN crystal, including greater difficulties in both the isolation of mono-and few-layer flakes (compared with graphene) and the identification of ultrathin h-BN structures (due to its transparency in the visible range) 25 . Equally important, it has also been plagued by the challenges involved in measuring responses from the vanishingly minuscule motion of the free-standing h-BN structures. Therefore, both delicate, deterministic device fabrication and high-precision measurements are greatly desired. Our approach here enables the first fabrication and detection of ultrathin h-BN 2D NEMS resonators, which are the smallest ultrawide bandgap crystalline resonators with demonstrated multimode resonances.
In this work, we develop a precise protocol to efficiently identify and discern very thin pristine h-BN flakes (with thicknesses ranging from~6.7 to~292 nm) exfoliated from their layered bulk, and utilize them to fabricate suspended h-BN devices that function as new nanomechanical resonators. We perform comprehensive experimental measurements on the minuscule resonator motion using interferometric detection techniques. By vividly visualizing the static structure and dynamic resonance motion of the device using a high spatial-resolution (⩽1 μm) spectromicroscopy mapping technique with minimized parasitic photothermal effects, we investigate the anisotropic built-in tension and bulging-induced phenomena, such as resonance mode shape symmetry breaking and splitting resonant modes. Moreover, we demonstrate comprehensive determination of the h-BN material's Young's modulus from the experimentally measured resonance characteristics.
MATERIALS AND METHODS
The superior mechanical properties of h-BN are rooted in its graphene-like honeycomb crystal structure. Figures 1a and b illustrate the crystal structure of h-BN, which is formed by replacing carbon atoms in a graphene crystal with boron and nitrogen atoms. Thus, h-BN has nearly the same bond length (1.45 Å) and layer distance (3.33 Å) as graphene does (1.42 Å and 3.35 Å, respectively).
We use a suite of specially developed, completely dry exfoliation, and transfer techniques to fabricate pristine h-BN resonators suspended over pre-defined microtrenches 26 . We obtain h-BN layers a few nanometers thick from high-quality bulk h-BN by exfoliating it onto a polydimethylsiloxane (PDMS) stamp. After exfoliation and careful optical identification, we transfer the h-BN nanosheet, with controlled alignment, to a pre-defined microtrench with the aid of a micromanipulator to achieve a suspended structure. This technique enables the fabrication of pristine suspended h-BN resonators free from wet chemistry contamination compared with conventional wet transfer methods 27,28 . In addition, after all the device fabrication steps, we conduct annealing to further minimize the potential deterioration of device performance due to adsorbates and localized stress.
We analyze the smallest achievable h-BN thickness for the resonator limited by visibility using this device-geometrycontrollable transfer technique and, in accordance with the guideline analysis, we fabricated the thinnest possible resonator using this method. The refractive index of h-BN 25,29 is close to that of a PDMS stamp at 550 nm wavelength 30 . Thus, the reflectance at the interface of the materials is reduced, significantly diminishing the optical contrast of h-BN on the PDMS stamp. In our analysis, monolayer h-BN on PDMS has showed extremely low optical contrast ( Figure 1c). In this work, our thinnest h-BN nanosheet identified on PDMS after exfoliation is~6.7 nm (~20 layers) thick (Figure 1d), using which we have fabricated the thinnest h-BN nanoresonator with exfoliated h-BN crystal, that is, Device #1, a doubly clamped h-BN flake with 6.7 nm thickness across a trench Previously, 2D NEMS resonators were fabricated using semimetallic graphene 1,2,12 , superconducting NbSe 2 (Ref. 4), semiconducting MoS 2 (Refs. 3,5-7) and black phosphorus 8,9 . These devices are primarily actuated by electrostatic forces. However, this scheme requires conductive 2D materials to form a capacitor between the freestanding 2D resonator and a back gate and, therefore, cannot be readily applied to insulating h-BN. Similarly, displacement detection via suspended channel transistor 2 and piezoresistive effects 31 cannot be used. Meantime, h-BN might offer sufficient piezoelectricity down to a few-layer regime 19,20 , which remains challenging at the device level, whereas the thinnest h-BN device (~20 layers) reported here is not expected to fall in the regime of strong piezoelectricity. Therefore, pure optical detection, and specifically optical interferometry, is a natural scheme for characterization of these first h-BN nanomechanical resonators (Supplementary Information).
Furthermore, although adoption of 2D materials into NEMS is still emerging, many of the intricate structural and elastic properties of such devices are of fundamental interest and are worth exploring as we move toward precise device engineering 6 . Ultrasensitive detection of multimode Brownian motion using high spatial-resolution scanning optical interferometric spectromicroscopy 32 likely remains the best technique for unveiling these subtle features. To effectively probe the detailed structural properties, less light absorption is always advantageous for minimal parasitic thermal stress induced by photothermal heating, which can obscure the intrinsic device characteristics. Consequently, it is desirable to use wide-bandgap 2D crystals, such as h-BN, for NEMS resonators, in order to observe the higher order Brownian resonances and explore the otherwise hidden structural characteristics of the devices.
RESULTS
In the current study, we demonstrate measurements of both undriven thermomechanical resonances that arise from Brownianmotion thermodynamic fluctuations and photothermally driven oscillations of the thinnest Device #1 (Figures 2b and c, respectively) with resonance frequencies of f th ≈14.06 MHz and f drv ≈14.38 MHz and quality factors (Qs) of Q th ≈39 and Q drv ≈33, respectively.
Beyond the simple doubly clamped 2D resonator, h-BN resonators with circular drumhead geometry are of greater interest due to their easier availability for multimode resonances. Based on a circular drumhead h-BN resonator (Device #2 shown in Figures 1i and 2d) with a diameter d≈11.3 μm and a thickness t≈10 nm, we have measured multimode thermomechanical resonances with up to 4 modes and fitted the resonance data to a damped harmonic resonator model using Equation (1): where k B , T, ω m , M m,eff , and Q m are the Boltzmann's constant, temperature, angular resonance frequency, resonator effective mass, and quality factor of the m-th mode, respectively. As shown in Figures 2e-i, the multimode resonance frequencies range from 5.2 MHz to~13 MHz with Qs from 20 to 54. To understand the mechanical properties and resonance behaviors of the 2D h-BN resonators in detail, we further investigate the multimode resonances and their vibrational mode shapes using scanning spectromicroscopy techniques 32 . We scan the 633 nm red laser over the device area and measure the amplitudes of reflected light intensity. In the static measurement map (inset in Figure 2e), we find that the reflectance from the resonator shows an uneven pattern over the suspended region. This observation provides important structural information about the device in that, although the suspended h-BN nanosheet appears flat in optical (Figure 1i) and scanning electron microscopy (SEM; Figure 2d) images, mild wrinkles could be present in the diaphragm, as induced by spatially uneven tension. This asymmetric tension might arise from the directional mechanical exfoliation and transfer of h-BN nanosheets during device fabrication (one can define the direction of transfer as the axis perpendicular to the advancing frontline/boundary between the flake's regions contacted and not-yet-contacted to the substrate that is receiving the flake). We use spatially resolved mapping of the detailed mode shapes of the measured resonances as a powerful tool for more precise probing and quantification of the rich mechanical properties of these h-BN nanomechanical devices (beyond the basic information of f and Q values). First, we show that spatial mapping of the mode shapes reveals the asymmetric built-in tension. The righthand insets of Figures 2f-i represent the spatially resolved motion amplitude at each resonance frequency and clearly show that the mode shapes are more complicated than those expected simply based on device geometry. The antinodes of the 1st, 3rd, and 4th modes deviate from the center of the resonator. In addition, the 3rd mode shape is no longer the degenerated mode of the 2nd mode and consists of two nodal lines in the same direction. We perform finite element method (FEM in COMSOL Multiphysics) simulations to further investigate these unusual resonance mode shapes. Since the device is very thin (~10 nm thick), we assume that the resonances of the device are governed by pre-tension rather than by flexural rigidity. The left insets in Figures 2f-i show the FEM simulation results when we apply asymmetric biaxial tensions (0.29 N m − 1 and 0.05 N m − 1 for each direction), and the resonance mode shapes agree with the measured results. Our results clearly demonstrate that, although the asymmetric biaxial built-in tensions in the resonator are ultrasmall (strain levels of 74 ppm and~13 ppm), they impact the resonance characteristics and dictate the resonance motion to a great extent.
Our further investigation of the mechanical properties of h-BN resonators relative to a thicker drumhead device, that is, Device #3 (d≈11.1 μm, t≈30 nm), reveals more complicated and intriguing device behavior that is of interest for further device engineering. In our wide-range frequency sweep of the device's undriven thermomechanical motion, we find multimode resonances up to the 7th mode, with frequencies ranging from~18.38 tõ 33.31 MHz and Qs of 329 to 619 ( Figure 3). Especially for this resonator, the sensitivity of the measurement system reaches S 1=2 x;sys ≈ 12.9 fm Hz − 1/2 , demonstrating the excellent performance of this system and the associated techniques (Supplementary Information).
We also conduct high spectral-and spatial-resolution scanning spectromicroscopic measurements on Device #3, from which an 8th resonance mode near 35 MHz is found owing to the extensive mapping data. Such high spatial-resolution mapping reveals both structural and motional characteristics of the resonator that have been more difficult to obtain using single-or few-spot interferometric detection. In Figure 4, both static reflectance mapping ( Figure 4b) and dynamic displacement mapping of the device thermomechanical motion (Figure 4e) are shown. The structure of the freestanding h-BN is indicated by the static reflectance map, with the lowest reflectance located at the center of the diaphragm and showing a gradual increase towards the clamping edge (Figure 4b). This reflectance gradient implies a non-flat suspended structure of the device, which might arise from 2D material transfer (Supplementary Information). For Device #3, the diaphragm has a thickness of~30 nm (Figure 4d), which is not sufficiently thick to ignore the built-in tension effects on resonant behavior.
On the basis of the spatial mapping (which has revealed otherwise hidden or unobservable, subtle, and unusual resonance mode shapes and mode sequences (Figure 4e)), we analyze both the built-in tension and structural bulging effects on the device's resonant behavior using FEM simulations. In Figure 4f, the first row shows the simulated mode shapes of the device with 20 MPa uniaxial pre-stress (equivalent to 0.6 N m − 1 of surface tension), and the second row illustrates the mode shapes with spherical bulging of the drumhead, where the center deflection is 143 nm. Although the stressed device simulations match better with the measured results in asymmetric mode shapes such as S4 and M5, and S6 and M7, the bulging device simulations show better agreement in the mode shape sequences such as modes B4 and B5 and modes B7 and B8. In addition, we have investigated the frequency ratios of the multimode resonances for the resonator (Supplementary Information). We have found that the measured frequency ratios are much smaller than the theoretical values for flat devices. In other words, the resonance frequencies are closer to each other than expected. Inspired by the static device reflectance implied by the non-flat device structure, we have verified that the decreased mode spacing could be caused by bulging of the diaphragm. The simulations show that the structure with a 143 nm center bulging deflection has the mode spacing that shows good agreement with the measured results ( Supplementary Information).
Due to the complexity of the real device structure, it is natural to combine multiple effects at the same time. Thus, resonance characteristics of the device should be affected by both the asymmetric built-in tension and the bulging of the h-BN resonator, which are introduced in the transfer process and are suggested from the reflectance mapping, respectively. The results reveal that these effects should be considered for frequency and mode shape engineering in freestanding 2D crystalline resonators.
To further understand the elastic properties of h-BN resonators, we have also fabricated and conducted interferometric measurements on thicker h-BN resonators, that is, Devices #4, #5, and #6 ( Figure 5). Due to their larger thickness, the resonance frequencies of these devices are governed by their flexural rigidity (Supplementary Information). Thus, we can estimate the Young's modulus (E Y ) of the h-BN using the fundamental mode resonance frequencies of these devices. In this type of device, the Young's modulus can be obtained using Equation (2) (Supplementary Information) 33 : where r is the radius of the resonator, ρ 2D is the areal mass density of h-BN, ν is the Poisson's ratio, (k 0 r) 2 is an eigenvalue calculated by a numerical method (which in this case is (k 0 r) 2 = 10.215), t is the thickness of the device, and f 0 is the fundamental mode resonance frequency.
Since the larger motional masses and higher resonance frequencies of thicker devices make their Brownian motion close to or even smaller than the sensitivity of our measurement system, we photothermally excite these devices to enhance the motion and measure the resonance frequencies, from which 2 resonance modes are detected for each device ( Figure 5). Based on the measured fundamental resonances, we have extracted the Young's moduli of these h-BN devices using Equation (2) and obtained E Y ≈552 GPa for Device #4, E Y ≈377 GPa for Device #5, and E Y ≈248 GPa for Device #6. The scattering of the Young's modulus values could arise from subtle and non-ideal structural effects in these ultrathin and very small drumheads, effects that are not readily and explicitly included in the theoretical model. Nonetheless, we have calculated an averaged Young's modulus of E Y = 392 ± 125 GPa. This value is lower than the theoretically predicted value 17 but higher than the results measured in nanoindentation experiments 18 . Figure 5 Optical microscopy images and measured photothermally driven resonance spectra of the first 2 modes of (a) Device #4, (b) Device #5, and (c) Device #6. All scale bars are 10 μm.
DISCUSSION
With the measured resonance frequencies for circular drumhead devices with thicknesses ranging from 9 to 292 nm, we are able to compare the experimental resonance data and the theoretical frequency scaling. Figure 6 shows the clear elastic transition regimes and frequency scaling of 4 observed modes of the drumhead h-BN resonators using the experimentally determined Young's modulus of h-BN, that is, E Y ≈392 GPa. In the plots, we use different built-in tensions of 0.02, 0.2, and 2 N/m, which represent the expected range of tension in this type of devices. To calculate the resonance frequency of different modes, we use Equation (3): where m denotes the mode that we calculate, D is the flexural , and γ is the in-plane pre-tension evenly distributed in the 2D material 33 . When the h-BN thickness is less than~10 nm, where pre-tension dominates, resonance frequencies scale with resonator thickness as fpt -1/2 . When the h-BN thickness is greater than~100 nm, where the flexural rigidity of the device dominates, the resonance frequencies are proportionally dependent on the resonator thickness, that is, fpt. In the transition regime between these regimes, 10 nm o to 100 nm, both the pre-tension and flexural rigidity play considerable roles in determining the resonance frequency. In addition, by plotting the experimental resonance frequencies, we find that the experimental results fit the theoretical expectation very well. Thus, for thin devices (t o10 nm), we can tune the frequency by using pretension engineering, and for thick devices (t4100 nm), we can achieve a stable resonator. For devices in the transition regime (10 nm oto 100 nm), wide engineering freedom exists for making h-BN nanoresonators.
CONCLUSION
In conclusion, we have demonstrated the first h-BN nanomechanical resonators operating at high and very high frequencies with devices covering a wide range of thicknesses (6.7 to 292 nm). Despite the insulating properties, which prohibit electrical detection approaches, we have been able to measure both thermomechanical motion and photothermally driven oscillations of multimode h-BN resonators using the laser-scanning optical interferometry scheme. All devices show robust resonances in the high frequency (HF) or very high frequency (VHF) bands, and we have experimentally determined the Young's modulus of h-BN, which is E Y ≈392 GPa. Equally importantly, multimode spatial mapping has allowed us to visualize the precise resonance motion of each mode, and these results clearly elucidate otherwise hidden subtle effects, such as uneven built-in tension and bulging in the 2D h-BN diaphragm. This study reveals both important mechanical properties and subtle unusual characteristics of h-BN resonators, adding new understanding and degrees of freedom for engineering of 2D resonators toward advancing applications, such as sensors and multimode signal transduction across mechanical, optical, and electronic domains. This work is expected to pave the way for future investigations into the piezoelectric effects in 2D electromechanical and optoelectromechanical devices made from h-BN and its heterostructures and other piezoelectric 2D crystals.
|
2019-04-29T13:17:37.375Z
|
2017-07-31T00:00:00.000
|
{
"year": 2017,
"sha1": "c5cdb4aff02a034c19aca4d3bc7787b7b1bae120",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/micronano201738.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c5cdb4aff02a034c19aca4d3bc7787b7b1bae120",
"s2fieldsofstudy": [
"Materials Science",
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
}
|
2981352
|
pes2o/s2orc
|
v3-fos-license
|
Innate Immune Responses and Rapid Control of Inflammation in African Green Monkeys Treated or Not with Interferon-Alpha during Primary SIVagm Infection
Chronic immune activation (IA) is considered as the driving force of CD4+ T cell depletion and AIDS. Fundamental clues in the mechanisms that regulate IA could lie in natural hosts of SIV, such as African green monkeys (AGMs). Here we investigated the role of innate immune cells and IFN-α in the control of IA in AGMs. AGMs displayed significant NK cell activation upon SIVagm infection, which was correlated with the levels of IFN-α. Moreover, we detected cytotoxic NK cells in lymph nodes during the early acute phase of SIVagm infection. Both plasmacytoid and myeloid dendritic cell (pDC and mDC) homing receptors were increased, but the maturation of mDCs, in particular of CD16+ mDCs, was more important than that of pDCs. Monitoring of 15 cytokines showed that those, which are known to be increased early in HIV-1/SIVmac pathogenic infections, such as IL-15, IFN-α, MCP-1 and CXCL10/IP-10, were significantly increased in AGMs as well. In contrast, cytokines generally induced in the later stage of acute pathogenic infection, such as IL-6, IL-18 and TNF-α, were less or not increased, suggesting an early control of IA. We then treated AGMs daily with high doses of IFN-α from day 9 to 24 post-infection. No impact was observed on the activation or maturation profiles of mDCs, pDCs and NK cells. There was also no major difference in T cell activation or interferon-stimulated gene (ISG) expression profiles and no sign of disease progression. Thus, even after administration of high levels of IFN-α during acute infection, AGMs were still able to control IA, showing that IA control is independent of IFN-α levels. This suggests that the sustained ISG expression and IA in HIV/SIVmac infections involves non-IFN-α products.
Introduction
Chronic immune activation during HIV infection is considered as the main driver of CD4 + T cell depletion and AIDS, and early T cell activation is a better predictor of the outcome of the infection than viral load [1]. Recent observations suggest that inflammation is even more important than T cell activation to predict disease progression and mortality [2,3]. Already in the acute primary phase of HIV-1 infection, the levels of soluble inflammatory mediators, such as IP-10 (CXCL10), were predictive of disease progression [4,5].
Type I IFN (IFN-I), such as IFN-a, is an important component of innate immunity providing a first-line defense to viral infections, as well as bridging the innate and adaptive immune systems. This cytokine is mainly produced by plasmacytoid dendritic cells (pDCs) in viral infections. These cells interact with myeloid dendritic cells (mDCs), NK cells, monocytes, T and B cells and contribute to the orchestration of the immune response. IFN-a production is critical for the activation of NK cells, enhancing IFN-c secretion and their cytotoxicity. Reciprocally, NK cells can affect pDC maturation and function [6]. Thus, upon infection, a crosstalk is engaged between NK cells, pDCs and mDCs, an interplay that involves IFN-I activity coupled with the release of other soluble factors [7].
Upon recognizing HIV-1, pDCs become activated, secreting high amounts of IFN-a and inflammatory cytokines, such as TNFa [8]. This leads to bystander maturation of mDCs [9]. Both pDCs and mDCs are reduced in number and function in HIV-1 infected individuals in the circulation [10]. PDCs have been shown to migrate to lymph nodes (LNs), gut and spleen and accumulate there [11][12][13][14]. As a matter of fact, the diminished responses seen in disease progressors might be explained by pDC exhaustion or trafficking to tissues [13,15]. Moreover, a defect in the pDC-NK cell cross-talk, due in large part to impaired NK cell responsiveness to IFN-a, has been described in HIV-1 infection [16,17]. Still the role of IFN-a in HIV infection is controversial. On the one hand, IFN-a may delay disease progression by inhibiting viral replication through the induction of cellular restriction factors and by stimulating various components of the immune response involved in the control of HIV [18,19]. A beneficial effect of IFN-a is also suggested by the observation of higher levels of pDCs and IFN-a production by TLR9-stimulated pDCs in HIV-infected long-term non-progressors [20]. On the other hand, IFN-a levels and type I interferon-stimulated gene (ISG) are markedly increased and sustained in progressors as compared to long-term non-progressors [21,22]. Indeed, in HIV-untreated patients, high levels of ISG, such as IP-10, were associated with a more rapid CD4 + T cell depletion [4,23]. Thus, it has been suggested that IFN-a might exert deleterious effects through various mechanisms. It could fuel chronic immune activation by the induction of ISGs including chemokines able of attracting target cells to the site of viral replication [24]. It could also stimulate innate immune cells, such as NK cells, which will in turn produce cytokines (IFN-c,…) and chemokines, and indirectly contribute to the activation of other cell types. Moreover, the up-regulation of the ISG TRAIL may induce apoptosis of uninfected CD4 + T lymphocytes [25]. Chronic high levels of IFN-a could also induce defects in the thymopoiesis and bias in T cell selection, thereby accelerating disease progression [26].
Fundamental clues regarding the role of inflammation in AIDS and the mechanisms that protect against it may lie in natural hosts of SIV, such as African green monkeys (AGMs) and sooty mangabeys (SMs), which are asymptomatic carriers of SIV [27,28]. This protection against AIDS is seen despite virus replication levels in blood and gut similar to HIV-1 infected humans and SIVmac-infected macaques [29]. It is associated with an absence of chronic immune activation, lacking both chronic T cell activation and chronic inflammation [27,[30][31][32]. This is not due to ignorance of the virus or to a functional defect of pDCs in sensing the virus [33][34][35][36]. Indeed, a vigorous innate immune response is triggered upon infection [34,[36][37][38][39][40]. Thus, the acute phase of SIVagm infection is characterized by the recruitment of pDCs to LNs, IFN-a production, induction of ISG and corresponding protein (ISP) expression [34,[36][37][38][39][40]. The levels of ISP strongly correlated with IFN-a levels during the acute phase of SIVagm infection [36]. However, there are major differences as compared to SIVmac infection: the levels of IFN-a produced in blood and LN were lower than those observed in SIVmac infection [35,36,38]. Moreover, in some reports, most cytokines were produced only to moderate levels in natural hosts and several pro-inflammatory cytokines were not induced at all, in contrast to the cytokine storm seen during pathogenic HIV-1/SIVmac infections [30,35,36,38,[41][42][43]. Finally, ISGs, cytokines and T cell activation are down-regulated by the end of the acute phase in natural hosts and maintained as such. Thus, while in HIV/ SIVmac pathogenic infections, immune activation persists, in natural hosts there are mechanisms that either prevent the onset of sustained inflammation or mechanisms that rapidly and efficiently turn them off.
In this report, we investigated the effect of SIVagm infection in AGM on innate immune cell compartments, in particular pDCs, mDCs and NK cells, and tested whether exogenous administration of IFN-a would modify the development of antiviral responses, promote chronic inflammation and/or alter clinical parameters.
Close follow up of viral replication and T cell dynamics
The innate immune responses were followed in six SIVagm.-sab92018-infected AGMs between days 2 and 547 post-infection (pi). We analyzed both blood and LNs. It is crucial indeed to study LNs because these are the sites where T and B cell responses are induced, shaped, and regulated and where correlates of protection were identified [44,45]. Consistent with previous reports, AGMs displayed high levels of SIV replication with a peak on day 9 pi coinciding with a transient decline in CD4 + T cell levels ( Figure 1A and B) [30,36,46]. We monitored T cell proliferation and confirmed that the primary phase of SIVagm infection in AGMs is associated with a transient increase in the percentages of Ki-67 + T cells in blood and LNs ( Figure 1D and E) [30]. The peak of Ki-67 + CD4 + T cells was observed between day 7 and 9 pi (at day 9, p = 0.031), while the percentage of Ki-67 + CD8 + T cells reached a plateau on day 11 pi in blood (p = 0.008) and a peak on day 25 pi in LN (p = 0.016). The Ki-67 + CD4 + and DN T cell frequencies subsequently decreased on day 11 and that of Ki-67 + CD8 + T cells after day 31 pi.
PDCs from SIVagm-infected AGMs displayed a more immature phenotype as compared to mDCs To better understand the trafficking and function of mDCs, pDCs and NK cells in AGMs, we investigated the early changes in activation, maturation, function and homing markers of these cells in blood and LNs (Figures 2, 3 and 4, respectively). The gating strategy used for flow cytometry analysis is depicted in Figure S1. We first confirmed previous data on pDC and mDC dynamics during SIVagm infection (data not shown) [38,47,48]. We then studied two homing receptors for DCs: the homing inflammatory chemokine receptor CXCR3, which is the receptor for CXCL9, IP-10 and CXCL11 and CCR7, which is a receptor for chemokines that are expressed constitutively in secondary lymphoid organs. In line with the increase of mDC frequency in LNs, expression of CXCR3 increased on these cells in blood and LNs during acute infection (Figure 2A and B). Moreover, the percentage of CCR7 + mDCs was transiently increased in blood (p = 0.039 at day 9 pi) (not shown) while CCR7 levels on mDC surface did not increase ( Figure 2C and D). For pDCs, the expression levels of CCR7 were significantly increased up to day 14 pi, while the CXCR3 levels were not increased ( Figure 3A-D). Hence, CXCR3 and CCR7 showed opposite expression profiles on pDCs and mDCs. Still, both mDCs and pDCs showed an
Author Summary
Chronic inflammation is considered as directly involved in AIDS pathogenesis. The role of IFN-a as a driving force of chronic inflammation is under debate. Natural hosts of SIV, such as African green monkeys (AGMs), avoid chronic inflammation. We show for the first time that NK cells are strongly activated during acute SIVagm infection. This further demonstrates that AGMs mount a strong early innate immune response. Myeloid and plasmacytoid dendritic cells (mDCs and pDCs) homed to lymph nodes; however mDCs showed a stronger maturation profile than pDCs. Monitoring of cytokine profiles in plasma suggests that the control of inflammation in AGMs is starting earlier than previously considered, weeks before the end of the acute infection. We tested whether the capacity to control inflammation depends on the levels of IFN-a produced. When treated with high doses of IFN-a during acute SIVagm infection, AGMs did not show increase of immune activation or signs of disease progression. Our study provides evidence that the control of inflammation in SIVagm infection is not the consequence of weaker IFN-a levels. These data indicate that the sustained interferonstimulated gene induction and chronic inflammation in HIV/SIVmac infections is driven by factors other than IFN-a.
increase expression of one homing marker, concomitant with their increases in LNs [38,47,48] MDCs showed up-regulations of the maturation markers CD80 and CD86 in blood and LN at early time points of primary infection (in blood: p = 0.008 at day 4 pi for CD86 and p = 0.008 at day 2 pi for CD80, in LNs: p = 0.031 at day 9 for CD80) ( Figure 2F, G, I and J). In contrast, the expression of CD86 was not modulated on pDCs ( Figure 3E and F).
It was surprising to see this discrepancy in maturation profiles between mDCs and pDCs. To confirm these findings, we analyzed the maturation profiles of mDCs and pDCs in the blood of another group of 8 AGMs infected with SIVagm. In this group, we further distinguished CD16 + mDCs (inflammatory) and CD16 2 mDCs ( Figure 2E, H and K). In addition to the maturation markers CD80 and CD86, we also measured HLA-DR expression. The two CD16 + and CD16 2 mDC subsets were present in similar frequencies in blood (not shown). Both mDC subsets displayed increases in the expression of CD80, CD86 and HLA-DR. These maturation markers were more significantly increased on the CD16 + than on the CD16 2 subset ( Figure 2E, H and K).
We confirmed the pDC phenotype in these 8 additional animals by staining for BDCA-2 ( Figure 3G-H). We choose to follow HLA-DR and CD40 as these markers are well known to be upregulated when pDCs mature and CD40 expression is increased on pDCs in SIV/HIV pathogenic infection [49,50]. On AGM pDCs, the expression of HLA-DR was significantly downregulated during the acute phase and the expression of the activation/maturation marker CD40 was not modulated ( Figure 3G and H). The expression of CCR7 was transiently increased (p = 0.031 at day 4 pi) in this group of AGM too (not shown). These analyses thus confirm that mDCs show a more pronounced maturation profile than pDCs during SIVagm infection.
Strong activation of NK cells during primary SIVagm infection
The two main functional NK cell subsets (cytolytic versus cytokine producers) were analyzed. These two subsets were differentiated based on the expression of CD16, the CD16 + subset being the predominant one in blood, as in humans and macaques ( Figure 4A). Similar to cynomolgus macaques, the CD56 marker cannot be used in AGM to differentiate the NK cell subsets [51]. Thus, NK cells were defined as CD3 2 CD20 2 HLA-DR 2 CD8a + NKG2A + CD16 +/2 ( Figure S1B and C), as in other studies on NK cells from macaques and SMs [39,52]. A significant transient decline of both subsets was observed in blood (p = 0.031 at day 2 pi) ( Figure 4A). NK cell numbers then progressively increased to reach 249% of the pre-infection levels at the end of primary infection (day 25-31 pi) for the major CD16 + subset, and 154% for the CD16 2 subset. They returned to baseline levels in the chronic phase (not shown). In LNs, only few NK cells were detectable and most corresponded to the CD16 2 subset, similar to humans [53]. A significant decrease in the percentage of the CD16 2 subset in LNs was observed ( Figure 4B). The levels of CD16 + cells in LNs were too low to be followed. Thus, CD16 + NK cells in the blood exhibited maximal increase at the time of transition between acute and chronic phase, similar to what has been observed in SMs [39].
We monitored the activation profiles of CD16 + and CD16 2 NK cells in blood and of the CD16 2 NK cells in LNs. As shown in Figure 4C and E, the frequencies of Ki-67 + and CD69 + NK cells were markedly enhanced upon SIVagm infection in blood with a peak on day 11 pi. The activation profiles of CD16 2 NK cells in blood followed a similar kinetic than that of CD16 + NK cells (not shown). The percentage of activated NK cells also highly increased in the LNs ( Figure 4D and F).
To evaluate NK cell function, the surface expression of CD107a (a surrogate marker for cytolytic function) and intracellular expression of IFN-c (cytokine production) were measured ( Figure 4G-J). The NK cell cytolytic activity was significantly increased only in LNs ( Figure 4G and H) and no significant increase of IFN-c production was observed in either blood or LNs ( Figure 4I and J).
Interferon-a production correlated with NK cell activation Both NK cell activation and cytotoxic activity are stimulated by IFN-I, which is driven by virus. IL-15 plays a pivotal role in the development, survival and function of NK cells. We quantified IFN-I and IL-15 concentrations in blood and tissues. In line with previous reports for SIVagm infection [36,38,42], the IFN-a levels in plasma were transiently increased during primary infection. In addition, we reveal an increase of IL-15 production. The animals displayed two peaks of IFN-a and IL-15 production, on days 2 and 9 pi, day 9 corresponding to the peak of plasma viremia ( Figure 5A and C). By day 11 pi, these levels were already decreased and below detection limit after day 14 pi for IFN-a. IFN-a and IL-15 were also measured in LNs ex vivo by collection of supernatants from the LN cell preparations ( Figure 5B and D). The IFN-a concentrations in these supernatants were increased between days 2 and 11 pi. The limited number of LNs that could be collected did not allow for the same close monitoring frequency as in blood and it is unclear whether two peaks of expression were present in the LN compartment as well.
We found that NK cell activation in blood was correlated with the IFN-a levels (CD69%: Rs = 0. 39
Increased cytokines in early but not late acute phase of SIVagm infection
We quantified thirteen additional cytokines in the plasma for the two AGM groups ( Figure 5 and S2) to determine the earliest kinetics of cytokines and search for differences with the cytokine storm reported in HIV-1 and SIVmac infections. Cytokines reported to be increased early during HIV-1 infection were selected, such as MCP-1 as well as ''innate'' cytokines, such as IL-12. The early collection time points were chosen at very short intervals starting at 6 hours pi. Among the 15 cytokines studied in total, 8 were significantly up-regulated and 6 displayed a first peak on day 2 as well as a second increase on day 7 and/or 9 pi: IL-15, IFN-a, IP-10, MCP-1, IFN-c and IL-18 ( Figure 5 and S2). IL-15, IP-10 and MCP-1 are inducible by IFN. Their profiles strongly correlated with IFN-a levels (IL-15: Rs = 0.53, p,0.001; IP-10: Rs = 0.73, p,0.001; MCP-1: Rs = 0.6, p,0.001) ( Figure 5). IL-8 was modestly increased at day 9 pi, while IL-12 was up-regulated only later on, on days 14 and 28 pi and even downregulated at time points just after the IFN-a peaks ( Figure S2). This might be due to the fact that IL-12 is inhibited by IFN-a [54]. As a matter of fact, previous reports in SIVmac and HIV-1 infection showed that IL-12 levels increase late in primary infection, once the IFN-a levels decrease [43,55]. Strikingly, in SIV-infected AGMs, most of the other pro-and anti-inflammatory proteins and ISPs measured (IL-6, sTRAIL, TNF-a, IL-17, TGF-b) were not modulated ( Figure S2).
We wondered whether the first peak (day 2 pi) is specific for natural hosts. For those cytokines, which showed increases already on day 2 pi in AGMs, we also measured cytokines in two rhesus macaques infected with SIVmac251 ( Figure S3). Also in these monkeys, a peak of cytokine production was observed on day 2 pi depending on the cytokine and the animal studied, suggesting that the early peak is not unique to AGMs. Most of studies conducted so far don't include such early time points. However, a similar early induction (day 2-4 pi) of IFN-a and some ISGs has been observed at mucosal sites of orally infected macaques [56]. We had already noticed such an early peak in previous studies [36,38]. The AGMs were infected here with a purified virus which therefore excludes that the early peak is due to contaminants in the inoculum. Finally, the data confirm previous reports showing that IFN-a levels are lower in acutely infected AGMs compared to macaques ( Figure S3B) [35,36,38].
Altogether, the close monitoring of fifteen soluble factors showed that cytokines, which are known to be produced early during SIVmac infection in macaques or HIV-1 infection in humans, i.e. before, during or shortly after the viral peak, were all induced in SIVagm infection. In contrast, cytokines that are induced late during the acute phase of pathogenic infection were not or only moderately induced in AGMs.
Administration of high levels of IFN-a during primary infection has no impact on SIVagm infection
We tested whether the lower levels of IFN-a in SIVagm infection might dictate the outcome of infection, in particular with respect to the resolution of the inflammation. We therefore administered high doses of recombinant IFN-a (r-mamu-IFN-a) during the acute phase of SIVagm infection in an attempt to perturb the control of inflammation and abolish its resolution, which would be characterized by uncontrolled expression of ISGs and chronic immune activation.
The r-mamu-IFN-a used for the in vivo treatment was first tested for its efficacy on AGM cells in vitro and in vivo ( Figure 6A-D and F). The same cytokine has been previously used in SMs without inducing any anti-IFN-a antibodies [57]. AGM PBMCs exposed to r-mamu-IFN-a in vitro up-regulated the expression of ISGs, such as Mx1 or IP-10, to similar levels as in macaque PBMCs ( Figure 6A). Low doses of the r-mamu-IFN-a were already highly efficient for ISG induction, in line with previous data [36]. After a single in vivo injection of 5610 5 IU of r-mamu-IFN-a, high levels of IFN-a were observed in plasma 1 hour post-treatment ( Figure 6B), leading to strong up-regulation of ISGs, such as IP-10 ( Figure 6C). Since the half-life of a similar human recombinant IFN-a has been estimated at 2-5 h in AGMs in vivo [58], this could explain why the levels of IFN-a and IP-10 mRNA were already low at 24 h after administration despite the r-mamu-IFN-a being an IgG fusion protein [57,59]. In order to maintain robust levels of IFN-a and constantly high expression of ISGs, we injected r-mamu-IFN-a daily with a 10% increment every 2 days for 16 days. The safety and efficacy of such treatment were verified on a SIV-chronicallyinfected AGM. The latter displayed a 2 log 10 decrease of the chronic viral load during the treatment ( Figure 6D), but no major difference in T cell activation ( Figure 6F), similar to data reported for chronically SIV-infected SMs [57]. Anti-IFN-a antibodies were not detected at any time point during or after treatment (data not shown).
Since r-mamu-IFN-a was efficient in cells from uninfected and chronically infected AGMs, we then tested whether such treatment would affect the resolution of immune activation during primary infection. The treatment was started on day 9 pi because after day 9 pi, endogenous IFN-a levels started to decrease, concomitantly with the diminution of other cytokines and ISPs such as IP-10 and MCP-1 and the decrease of activated NK cells and Ki-67 + CD4 + T cells. Also, based on data from the literature, we could not exclude that an initial inflammation during the first week of infection is necessary to establish infection. Finally, we did not want to interfere with the efficacy of the initial viral replication. Two AGMs were injected daily with r-mamu-IFN-a between day 9 and 24 pi. The virological and immunological profiles in the IFN-a-treated AGMs were compared to those of the 6 infected but untreated AGM (Figures 6E, 6G, 6H, 7, S3 and S4).
The administration of r-mamu-IFN-a during the acute phase of infection had no major effect on viral load ( Figure 6E), even if a slight but not significant decrease was observed as compared to untreated animals at the first time points after treatment. Body temperatures were elevated during the treatment period with IFNa ( Figure S5). The expression of the ISGs CXCL9, IP-10 and CXCL11 were however comparable between treated and untreated AGMs in both PBMCs and LN cells (Figure 7). The treatment also did not result in a persistent T cell activation ( Figure 6G and H) or CD4 + T cell loss over time (data not shown). As IFN-a is known to exert direct and indirect effects on innate immune cells, we also investigated whether in the absence of a change in disease outcome, the IFN-a treatment would still have had an impact on NK cells, mDCs and pDCs ( Figure S4). The administration of rmamu-IFN-a in the context of the SIV primary infection did not affect their frequencies and did not induce an increase of maturation or activation of these innate immune cells.
In summary, in spite of daily administration of high doses of IFN-a post peak of SIV replication, AGMs were still able to resolve inflammation and immune activation.
Discussion
We aimed to study if the lower levels of IFN-a described during SIVagm infection as compared to SIVmac infections matter in the resolution of the inflammation in AGMs. The early host immune responses are an essential factor in determining the subsequent clinical course of disease. In mice, an early innate alteration significantly compromises the following immune responses and the host's ability to counteract the virus/parasite spread [60,61]. We tested here whether by artificially increasing IFN-a related inflammation during the acute phase of SIVagm infection, one can overcome the intrinsic control of immune activation in this natural host. In order to study the control of immune activation and not interfere with the establishment of viral infection, the treatment was administered between the plasma viral peak and the end of the acute phase of infection. Surprisingly, the treatment did not affect viral dynamics, control of inflammation or T cell activation. This is not due to a lack of sensitivity of AGM cells to the recombinant IFN-a used here. Indeed, the r-mamu-IFN-a molecule was functional in vitro and in vivo in healthy and chronically-infected AGMs. Moreover, when administered during the chronic phase of infection, our results paralleled those described in chronically-infected SMs treated with the same molecule, namely a reduction in viral load and increase in ISG expression in the absence of major increases of T cell activation [57]. Although the number of animals was low, the analyses show that r-mamu-IFN-a was fully active on AGM cells.
It is possible that the lack of changes after IFN-a treatment during primary SIVagm infection was due to tolerance to the injected IFN-a. In SMs chronically infected with SIVsmm and treated with IFN-a, the effect of IFN-a was transient likely due to the induction of tolerance to such treatment as also reported in humans [62,63]. Here, the treatment was short, but refractoriness could have been induced by the previous response to endogenous high levels of IFN-a. Of note, we treated the animals starting from day 9 pi, corresponding to the peak of endogenous IFN-a production. Had we started the treatment on the day of infection, we cannot exclude that we might have seen an effect on ISGs or viral load. However, such protocol might have lowered the initial viral replication, which was in opposition with the aim of our study. Altogether, administration of IFN-a in the mid and late part of the acute phase did not change the outcome, suggesting that the resolution of inflammation in AGMs is not due to a difference in the levels of IFN-a production during primary infection. It has been suggested that the combination of antiretroviral therapy and interferon given during acute HIV infection may potentiate both innate and adaptive immune responses against HIV replication and/or reservoir levels [64]. Our study shows that in primary infection, IFN-a, when administered after the peak of viremia, does not affect viral replication or innate responses. It reduces viral load during chronic phase. Whether this is true for pathogenic infection, remains to be determined. However, the timing is very important and should be considered when such treatment is envisaged.
We previously showed that during the acute phase of SIV infection the levels of ISGs, such as of IP-10, strictly correlate with IFN-a levels [36]. Here, treatment with high doses of IFN-a did not lead to sustained ISG expression. This suggests that, at the transition of acute to chronic phase other factors than IFN-a predominantly drive ISG expressions in macaques and humans. Elevated expression levels of ISGs in chronic infection are associated with uncontrolled viremia and disease progression [23,65]. It could be that not the IFN-I production, but constant ISG expression are deleterious for the host. IP-10 has been reported to be an excellent marker of inflammation and disease progression [4,[65][66][67][68]. IP-10 is inducible not only by IFN-I and IFN-c, but also by other pro-inflammatory cytokines (TNF-a, IL-1b, IL-18) [69][70][71]. Hence, it is possible that IFN-a alone is not sufficient, but that a combination with other factors, TNF-a for example, is also required to induce ISG expression. Moreover, even though ISGs are IFN-inducible, some were shown to be directly up-regulated through recognition of viral or bacterial products by pattern-recognition receptors in an IFN-independent manner [72][73][74][75]. Finally, an expansion of the enteric virome and microbial translocation are observed in chronic HIV-1 and SIVmac infections [76,77]. It could thus explain why macaques and humans maintain ISG expression but not AGMs who do not display microbial translocation or virome expansion. Other or additional factors might also play a role in the maintenance of ISG expressions during HIV-1/SIVmac infections. For instance, distinct IFN-a subtypes differently induce ISG expressions in vivo, while here, only IFN-a2, which is considered the most abundant in viral infections, was used [69,[78][79][80].
Altogether, our study indicates that the non-pathogenic outcome of SIVagm infection is not due to differences in IFN-a levels between AGM and macaques or humans. It does not exclude that the difference in outcome is related to different levels of ISG expression. It indicates however, that the mechanisms, which maintain high levels of ISG expression, are due to other or additional factors than IFN-a.
Several studies have debated whether natural hosts display lower or similar immune activation levels during primary infection as compared to pathogenic infections. Some studies have reported weaker levels of T cell activation and cytokine concentrations during the acute phase of SIVagm or SIVsmm infection, whereas in other studies the levels were equivalent to those observed in SIVmac-infected macaques [30,36,41,81]. We performed a detailed follow-up of T cell activation and, to attempt to reconcile the discrepant cytokine profiles, we deciphered here the early production of cytokines in AGM. We included in the study the cytokines known to be induced very early during pathogenic infection, such as IL-15 [19,43]. Of note, the acute phase of SIVagm infection resulted in significant increases of early cytokines, including IL-15, IP-10 and MCP-1, similar to pathogenic infection. However, salient differences were observed for cytokines known to be produced in later stages of the acute phase of HIV-1/SIVmac infections. They were either not or only weakly induced in AGMs. In particular, IL-6 and TNF-a were not upregulated ( Figure S2). We hypothesize that the early cytokines, which are produced in AGMs during the first two weeks pi, confer a benefit to both the virus and the host. It would be beneficial to the virus as inflammation attracts target cells to the sites of infection. For the host, the induction of early innate responses (restriction factors, NK cells, mDCs), would allow the development of antiviral innate and adaptive responses for partial control of viral replication. AGMs might have found a way to allow early inflammation resulting in productive infection while blocking the cytokine storm that takes place following the viral peak. This would avoid the sustained inflammatory environment.
The dual pattern of cytokines that we observed might be explained by a differential susceptibility to activation by the innate cells. While pDCs show a normal sensing of SIVagm [36,37,40] leading to the production of IFN-a, other cells, for instance myeloid cells, such as mDC and macrophages, might not produce any cytokine. Indeed, a recent study reported that in contrast to SIVmac and HIV-1 infections, mDCs mature but do not show spontaneous production of pro-inflammatory cytokines such as TNF-a in primary SIVagm infection [48].
To understand what might be the key events of the innate response in natural hosts that allows them to maintain the inflammation under control, we investigated the effect of SIVagm infection in AGM on innate immune cell compartments, in particular pDCs, mDCs and NK cells. Little is known about those decisive early cellular responses in AGMs, in particular regarding pDC maturation and NK cell activation. In addition, for the first time the activation profiles of these three types of innate cells were analyzed concomitantly in the same animals. We analyzed the maturation and homing patterns of two sub-populations of mDCs, the CD16 2 and CD16 + subsets. These correspond to two major subsets of mDCs in humans [82,83]. The CD16 + mDCs displayed higher levels of activation and maturation than the CD16 2 subpopulation. Whether these cells play a distinct role in T cell activation or tolerance is unclear. PDCs displayed lower levels of maturation than mDCs. IFN-a production by pDCs is associated with their maturation stage. It has been shown that HIV skews pDCs toward a partially matured and persistently IFN-a-secreting phenotype which allows their survival [84]. Eventually, the partial maturation of pDCs in AGMs might be associated with their capacity of efficient IFN-a production during the acute phase of SIVagm infection. Egress of pDC precursors from bone marrow could then account for the return of IFN-a levels to baseline [85]. This is supported by the decrease of HLA-DR expression on the pDC's surface. On the contrary, a preferential maturation process at the expense of cytokine secretion might be occurring at the level of mDCs in AGMs, especially in presence of IFN-I, since IFN-I induces mDC maturation rather than cytokine secretion [83].
We also analyzed NK cells for the first time in the context of SIVagm infection. We observed a strong increase in proliferation and activation of NK cells during the acute phase of SIVagm infection. Our data support the observations reported in SMs on earlier and stronger NK cell responses than in SIVmac-infected macaques [39]. The rapid and strong increase in NK cell proliferation in AGMs might be a direct consequence of the early and robust production of IL-15 and IFN-a during primary SIVagm infection. No production of IFN-c by NK cells was observed, while NK cell cytotoxicity was induced. It has been shown that IFN-a and IL-15 promote NK cell proliferation and survival, while IFN-a is able to increase NK cell cytotoxicity, and IL-12 to augment the secretion of IFN-c [86]. This is in accordance with the fact that modest levels of IL-12 and high levels of IFN-a were detected, playing thus a putative role in establishing such protective NK cell responses. It was surprising to detect increases in NK cytotoxic function in LNs. One hallmark of SIV infection in natural hosts is the high viral load in blood and intestinal tissues, but low viral burden in LNs in the chronic phase of infection [29,[87][88][89][90]. It has been suggested for SMs that the rapid and dramatic control of viral replication in LNs is associated with CD8 + T cell responses [89]. However, it is tempting to speculate that at least in the early stage of SIVagm infection in AGMs, NK cells could significantly contribute to the control of viral replication in LNs which in consequence could contribute to limit immune activation [91].
Altogether, our study provides evidence that the control of immune activation in SIVagm infection is not a consequence of lower levels of IFN-a production. We show that AGMs mount strong early innate immune responses as exemplified by the significant NK cell activation and production of early cytokines, such as IL-15 and MCP-1. Our study indicates that the sustained ISG production in HIV/SIVmac infections is likely driven by additional or factors other than IFN-a, among which could be elevated pro-inflammatory cytokine levels, enteric virome expansion and microbial translocation. The data also suggest that mechanisms controlling inflammation are in place before the transition of the acute to the chronic phase, thus earlier than previously considered. Whether this is due to the establishment of inhibitory or tolerance mechanisms after the viral peak, or to a distinct susceptibility to infection or immune activation by specific immune cell subsets, needs to be further investigated.
Ethics statement
Animals were housed in the facilities of the CEA (''Commissariat à l'Energie Atomique'', Fontenay-aux-Roses, France) and Institut Pasteur (Paris, France) (CEA permit number: A 92-032-02, Institut Pasteur permit number: A 78-100-3). All experimental procedures were conducted in the CEA animal facility and in strict accordance with the international European guidelines 2010/63/ UE about protection of animals used for experimentation and other scientific purposes (French decree 2013-118) and with the recommendations of the Weatherall report. The monitoring of the animals was under the supervision of the veterinarians in charge of the animal facilities. All efforts were made to minimize suffering, including efforts to improve housing conditions and to provide enrichment opportunities (e.g., 12:12 light dark schedule, provision of monkey biscuits supplemented with fresh fruit and constant water access, objects to manipulate, interaction with caregivers and research staff). All procedures were performed under anesthesia using 10 mg of ketamine per kg body weight. For deeper anesthesia required for lymph node removal a mixture of ketamine and xylazine was used. Paracetamol was given after the procedure. Euthanasia was performed prior to the development of any symptoms of disease (e.g., for macaques when the biological markers indicated progression towards disease, such as significant CD4 + T cell decline and increases of viremia). Euthanasia was done by IV injection of a lethal dose of pentobarbital. The CEA is in compliance with Standards for Human Care and Use of Laboratory of the Office for Laboratory Animal Welfare (OLAW, USA) under OLAW Assurance number #A5826-01. Animal experimental protocols were approved by the Ethical Committee of Animal Experimentation (CETEA-DSV, IDF, France) (Notification number: 10-051b).
Animals, treatments and sample collections
Eighteen Caribbean-origin African green monkeys (Chlorocebus sabaeus) and two Chinese rhesus macaques (Macaca mulatta) were used in the study. AGMs were infected by intravenous inoculation with 250 TCID 50 of purified SIVagm.sab92018, and macaques with 5000 AID 50 of SIVmac251, as previously described [90]. SIVagm.sab92018 has been purified by sucrose density gradient centrifugation and on Vivaspin 20 columns (Vivaproducts). Neither IFN-a nor endotoxin (LAL QCL-1000 Kit, Lonza) were detected in the two viral stocks. Four AGMs were treated with the r-mamu IFN-a-IgFc by subcutaneous injection (Resource for NHP Immune Reagents, Emory University, Atlanta, GA): 2 were used for the establishment of the treatment protocol and control of its efficiency in AGM, and 2 were treated during the acute phase of infection. When daily injections of 5610 5 IU for over a period of 16 days were performed, the dose was increased by 10% every second day.
Whole blood was collected from all AGMs. For the initial groups of monkeys, baseline blood collections were performed at 4 to 6 time points before infection (days 230, 228, 223, 221, 219 and 216) to mimic the sampling of the acute phase and measure any difference linked to the sampling. No variation due to the sampling was observed. Blood was then collected during primary infection (on days 2, 4, 7, 9, 11, 14 and 25) as well as isotype controls. FcR Blocking Reagent (Miltenyi) was used to block unwanted binding of antibodies and increase the staining specificity of cell surface antigens. For detection of IFN-c and CD107a, cells were pre-incubated for 4 hours with brefeldin A and monensin at 37uC prior to labeling with surface-binding antibodies and then fixed and permeabilized prior to incubation with IFN-c antibody. Cells were run on a BD LSR-II flow cytometer system, collected with BD FACS Diva 6.0 software, and analyzed with FlowJo 8.8.7 (TreeStar).
Cytokine quantifications
Cytokines were measured in plasma and LN cell supernatants. LN cell supernatant consists of the medium in which the biopsy was collected and kept for 2-3 hours at 4uC. Cells were prepared by homogenization in the same medium and the supernatant was collected after centrifugation. Titers of bioactive IFN-a were determined as previously described [38]. The same test was used to search for plasma IFN-a antibodies that might have developed in response to the treatment. The other cytokines were quantified using the following ELISA kits: MONKEY IFN-gamma, IL-6, IL-8, IL-10, IL-12/23p40, TNF-alpha (U-Cytech), Human IL-15, CXCL10/IP-10, CCL2/MCP-1, TRAIL/TNFSF10 Quantikine Kits (R&D), Human IL-17A Ready-SET-Go (eBiosciences), Human IL-18 Kit (MBL), Simian IFN-beta Kit (USCN), TGF-b1 Multispecies Kit (Invitrogen). To verify the cross-reactivity of the antibodies used in the ELISA kit for cytokines that have never been tested on AGM [30,36], AGM PBMCs were stimulated in vitro and cytokines were measured in the supernatants (data not shown).
Quantification of viral RNA and of cellular mRNA
Plasma viral load was determined by real-time PCR [90]. Quantification of ISG transcripts was performed by real-time RT-PCR in triplicate using Taqman gene expression assays (Life technologies). The expression of each gene was normalized against that of 18S rRNA [30,36].
Statistical analyses
To characterize each marker's progression (figures 2-4), a linear mixed effect model was used to account for multiple measurements within each AGM. Firstly, we graphically assessed that the marker's distribution was Gaussian; if not, a logarithmic transformation was used. Secondly, a LOWESS (locally weighted scatterplot smoothing) curve was used to assess whether the marker's trajectory looked linear or piecewise linear. Based on these trajectories, we introduced or not slopes. We indicated at which time-point the change of slope occurred. Finally, a mixed effect linear or piecewise-linear model was applied. When two slopes were introduced into the model, the difference between the two slopes was tested using Wald's test. The Wilcoxon matchedpairs signed rank test was used to evaluate whether there was a statistically significant difference in the level of one given marker at a given time point following inoculation when compared to the baseline medians (day 0), using Prism (GraphPad). Baseline medians in blood consisted of 3 to 6 pre-infection values per animal for the flow cytometry and gene expression analysis, and 4 total pre-infection values per animal for the plasma cytokines study. In LNs, it consisted of 1 to 2 pre-infection values per animal for all the measurements. Finally, the Spearman rank test was used to assess the correlation between 2 continuous variables. Figure S5 Effect of injection with high doses of recombinant IFN-a during primary SIVagm infection on body temperature. Body temperature changes as compared to temperature either before infection (BI) or before treatment (day 9 p.i.) for the 2 treated AGMs (blue) and 6 untreated AGMs (black). The temperature values after infection but before IFN-atreatment correspond to time points between days 2 and 9 pi and are shown on the left. In the IFN-a-treated animals (between days 11 and 25 pi) on the right, the changes are indicated relative to the time point before treatment initiation (day 9 pi) and compared to the body temperature of the 6 untreated AGMs during the same time period. The median is indicated. (TIF)
|
2016-05-12T22:15:10.714Z
|
2014-07-01T00:00:00.000
|
{
"year": 2014,
"sha1": "a432a0f453cb84646a751692c82b721833a0d42e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1004241&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a432a0f453cb84646a751692c82b721833a0d42e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
24967070
|
pes2o/s2orc
|
v3-fos-license
|
Entanglement entropy of 2D conformal quantum critical points: hearing the shape of a quantum drum
The entanglement entropy of a pure quantum state of a bipartite system $A \cup B$ is defined as the von Neumann entropy of the reduced density matrix obtained by tracing over one of the two parts. Critical ground states of local Hamiltonians in one dimension have entanglement that diverges logarithmically in the subsystem size, with a universal coefficient that for conformally invariant critical points is related to the central charge of the conformal field theory. We find the entanglement entropy for a standard class of $z=2$ quantum critical points in two spatial dimensions with scale invariant ground state wave functions: in addition to a nonuniversal ``area law'' contribution proportional to the size of the $AB$ boundary, there is generically a universal logarithmically divergent correction. This logarithmic term is completely determined by the geometry of the partition into subsystems and the central charge of the field theory that describes the equal-time correlations of the critical wavefunction.
A major challenge in the theory of quantum critical phenomena is to understand those aspects that do not appear in the classical theory. The entanglement entropy at criticality is an example: a pure quantum state of a bipartite system A ∪ B can become a mixed state, with an associated entropy, when restricted to subsystem A or B. Critical ground states in one dimension are known to have entanglement entropy that diverges with subsystem size for several types of pure [1,2,3,4] and disordered [5,6,7] quantum critical points, with a coefficient determined by the central charge in the pure case. Hence, in one dimension, entanglement entropy gives a definition of universal critical entropy for quantum critical points that is consistent with the conventional definition (central charge) in the conformally invariant case.
In higher dimensions, results have been obtained recently for gapless free fermions [8,9], and a generic scaling form is conjectured in [3]. A standard expectation [10], which is known to be violated for free fermions, is of an "area law": entanglement entropy scales as the area of the boundary between the subsystems. For example, if only one subsystem is finite, the area law predicts that entanglement entropy scales as L d−1 in d spatial dimensions, where L is the linear size of the finite subsystem. Entanglement entropy in gapped phases in two dimensions satisfies an area law but has subleading terms that probe topological order [11,12]: adding and subtracting properly chosen regions cancels the leading area term and leaves a term reflecting the ground-state degeneracy on topologically nontrivial manifolds.
An additional motivation for recent studies of entanglement entropy in many-body physics is as a route to better numerical algorithms for finding ground states: knowing that ground states of local Hamiltonians have much less entanglement, even at criticality, then generic quantum states both explains the success of the "densitymatrix renormalization group" algorithm in one dimension [13,14,15] and motivates recent proposals for analogous numerical methods in higher dimensions [16].
This paper obtains the entanglement entropy for the class of "conformal" two-dimensional quantum critical points [17] that includes such standard examples as the quantum dimer model [18,19], the quantum eight-vertex model [17]. These quantum critical points were first introduced because perturbing away from these solvable critical points can yield topologically ordered phases in lattice models [20]. The same method can be used to obtain the entanglement entropy for related noncritical wavefunctions. These critical points have dynamic critical exponent z = 2 and equal-time correlation functions given by a two-dimensional conformal field theory (CFT). For the entanglement entropy created by partitioning of a 2D manifold into regions A and B, we find a universal subleading logarithmic contribution in addition to the "area law" nonuniversal contribution proportional to the linear size L of the boundary: Here c is the central charge of the CFT, a is the ultraviolet cutoff, and the coefficient α is determined by geometric properties of the partition. The area law coefficient f s is interpreted below as a boundary free energy. As an example of the geometric dependence of α, if A is a rectangle surrounded by B, α = − 1 9 , while if A has a smooth boundary surrounded by B, α = 0. This contribution for the case of a free scalar field originates in the logarithmic contribution that appears in the classical problem of the determinant of the Laplacian operator on a 2D man-ifold [21,22] ("hearing the shape of a drum"), which was extended to a general CFT by Cardy and Peschel [23].
The entanglement entropy of a pure state of a bipartite system is defined as the von Neumann entropy of the reduced density matrix for either subsystem: For conformal quantum critical points, the Hilbert space has an orthonormal basis of states |{φ} indexed by classical configurations {φ}, and the ground state |ψ 0 of the bipartite system is determined by a CFT action S: Here Z c is the partition function (dφ) e −S({φ}) , and expectation values in this state reproduce CFT correlators.
The next step is to obtain traces of powers of the reduced density matrix in order to find the entanglement entropy using the replica trick First consider the reduced density matrix ρ A . The element of the reduced density matrix between internal configurations |{φ A 1 } and |{φ A 2 } is, after introducing a normalization factor to ensure Tr ρ A = 1, Here the action has been divided into regions A, B, and the boundary ∂, where the last takes into account contributions mixing the A and B degrees of freedom (e.g., couplings of spins across the boundary in a lattice model).
Higher powers of the density matrix need not trace to unity: Tr ρ n A is now a sum over n configurations defined in A and n configurations defined in B. The key is to keep track of how these different configurations are stitched together at the boundary by the terms S ∂ that link A and B: }. This is normalized through division by (Z c ) n , which can be thought of again as n copies of A and B configurations, but with {φ A i } linked only to {φ B i }. In the continuum limit, the boundary terms impose continuity of the fields in the CFT because strong local fluctuations are penalized in the action. Since each A field is linked to the B field of the same index, we can define global configurations {φ i } in both the numerator and denominator, but in the numerator, the links between configuration i and configuration i ± 1 require that all n configurations agree on the boundary, while this requirement is absent in the denominator. Schematically, Tr ρ n A = Z(n configurations agreeing on the boundary) Z(n independent configurations) .
(6) Note that this is symmetric with respect to A and B: Tr ρ n A = Tr ρ n B as known from Schmidt decomposition. If Tr ρ n A is known for all integer n, then assuming an analytic continuation, the entropy is obtained as The above expression for Tr ρ n A can be put in a form that simplifies taking this derivative: for an explicit realization, consider the case of a free scalar field. Then the condition that n scalar fields φ i agree with each other on the boundary can be satisfied by forming n−1 linear combinations 1 √ 2 (φ i − φ i+1 ), which vanish at the boundary (i.e., satisfy Dirichlet boundary conditions), plus one linear combination 1 √ n i=1,...,n φ i that has no restriction at the boundary (i.e., is a free field on A ∪ B).
More generally, for any CFT there exists a conformal boundary condition that generalizes the notion of the Dirichlet boundary condition in the free scalar case. Thus, in terms of the partition functions Z D , for a field in the whole system A∪B that vanishes at the boundary, and Z F , for a free field in the whole system, and therefore In the last equality, the Dirichlet boundary condition at the boundary was used to split the partition function into contributions from A and B, each including the boundary with Dirichlet boundary conditions. Finally, the entanglement entropy for a general conformal quantum critical point is just the dimensionless free energy difference induced by the partition in the associated CFT: Explicit results for the entanglement entropy can now be found using results on the free energy corrections in conformal field theory. If A ∪ B is the plane, A and B are connected (which requires that one be simply connected), and the boundary is smooth, then the free energy result of Cardy and Peschel [23] can be applied: using region A as an example and supposing that it is finite, where f b and f s are the bulk and surface partial densities, |A| is the area, L is the perimeter, and χ is the Euler characteristic of the manifold Here g is the number of handles and b the number of boundaries of the manifold. As an example, χ = 1 for a planar simply connected region with boundary, such as a disk. In (11), the cutoff is now incorporated into the definitions of f b and f s : note that the coefficient of the logarithm is cutoff-independent, as changing the cutoff gives only an additive constant from this term. In the expression for S, the bulk energy terms cancel, and since A ∪ B is the plane and has no boundary, the linear in L term is just twice the dimensionless boundary energy in the CFT with the same regularization. Note that the logarithmic term differs from the leading (boundary) term in that it is cutoff-independent and hence potentially universal. The logarithmic contribution is determined for a smooth boundary by the change in Euler characteristic: We note immediately from this formula that if the boundary between A and B is a smooth closed curve in the interior of A∪B, there is no logarithmic contribution because the total Euler characteristic is conserved. Although this cancellation is quite specific to a smooth closed boundary, it shows that the existence of logarithmic contributions to F A and F B (unless they have zero Euler characteristic) does not imply a logarithmic entanglement entropy. There are two simple mechanisms that generate a logarithmic contribution. The first is that the boundary between A and B may intersect the boundary of A ∪ B, as for example if a disk is sliced into two halves, which modifies the total Euler characteristic. The second is that the boundary between A and B may have corners. [24] The Cardy-Peschel formula argument given here is valid for boundaries that are smooth except for a finite number of sharp corners; fractal boundaries, for example, could give rise to different subleading corrections. Henceforth we write S log for the possible logarithmic term in the entropy. Three examples are shown in Fig. 1.
We first treat the case of sharp corners. A sharp corner with interior angle γ, 0 < γ < 2π, gives a logarithmic contribution to the free energy [22,23] According to the formula (14), the contribution from a polygon approaches that of a smooth curve as all angles approach π. However, sharper angles contribute more to the logarithmic correction than would be predicted by the Gauss-Bonnet formula. Contributions from the angles γ and 2π − γ only cancel to linear order near γ = π, so that sharp angles on the AB boundary give a net contribution.
As an example, summing the interior and exterior contributions for the four corners of a rectangle A surrounded by B gives Note that a negative or positive sign for the logarithmic part of the entropy is physically permissible as a subleading correction to the (positive) boundary entropy.
A B
FIG. 1: The logarithmic contribution to entanglement entropy for three partitions of a disk. Partitions (a) and (c) do not change the total Euler characteristic, while partition (b) does. Note that it is the partition length scale, which may be finite even if A or B is infinite, that sets R in the logarithm.
Another source of logarithmic contributions is a smooth boundary that disconnects the original system. As an example, consider the disk of radius L cut into two pieces by a diameter. The resulting two half-disks have two sharp angles of π 2 each, formed by the intersection of the diameter with the circumference: the resulting logarithmic contribution is The quantum critical points studied in this paper can, as in the quantum dimer model and its generalizations (both Abelian and non-Abelian) [17,25,26], lie next to topologically ordered phases with subleading O(1) terms in their entanglement entropy in addition to an area law: [11,12] for some nonuniversal constant a and a universal topological number γ, Another way to obtain a universal logarithmic term at criticality is by means of a physical process by which the system is split into two disjoint regions each with a smooth boundary. In this case there is a net change in the Euler characteristic leading to a lowering of the universal term in the entanglement entropy (due to a physical loss of correlations) by ∆S = −(c/6) log L. Something very similar has been found to happen to the universal constant term γ which arises in a topological fluid (see above), such as in the case of a quantum Hall fluid [27].
It is natural to ask whether even at criticality there are topological nondivergent terms in addition to the universal logarithmic divergences studied here. For the case of the free scalar field, there can be nondivergent terms with a complicated (but scale invariant) dependence on the partition geometry, suggesting that there is no purely topological number [28]: e.g., the free energy of the annulus with radii r 1 < r 2 contains a term log log(r 2 /r 1 ).
An important question for future work is to understand explicitly how the entanglement evolves, for a topologically nontrivial partition, as the Hamiltonian is tuned from a critical point to a massive topological phase, as this requires a more accurate calculation than simply cutting off the logarithmic divergences by a correlation length ξ away from criticality. The dependence of the universal logarithmic term on the central charge c of the associated 2D CFT indicates that upon perturbation there should be a downward flow of the coefficient of this universal term.
Although the area law at leading order is consistent with the general scaling ansatz of Calabrese and Cardy [3] in d > 1, our results show that in d = 2 there are universal logarithmic contributions to the entanglement entropy at a class of critical points for which the "area law" applies. These universal contributions are determined by the critical bulk theory and the geometric details of the partition into subsystems. Combined with recent results on subleading corrections to entanglement entropy in massive topological phases [11,12], it now appears that entanglement entropy in d ≥ 2 is considerably richer than the area law might suggest, even when the area law is applicable.
|
2018-04-03T04:04:31.549Z
|
2006-05-29T00:00:00.000
|
{
"year": 2006,
"sha1": "e52fd11cf7f1921f3869e878afbf531bc2b44259",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0605683",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e2bc212a69c913379a71e9c715ce2c9d81f09179",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
}
|
208926691
|
pes2o/s2orc
|
v3-fos-license
|
1,8-Bis(3-chloroanilino)-N,N′-bis(3-chlorophenyl)octane-1,8-diimine
There are two half-molecules in the asymmetric unit of the title compound, C32H30Cl4N4, in both of which the N—H bonds are syn to the meta-chloro substituents in the adjacent benzene ring. The other two Cl atoms of these two molecules are disordered with occunpancy ratios of 0.79 (2):0.21 (2) and 0.68 (1):0.32 (1). Adjacent chlorophenyl rings make dihedral angles of 74.3 (2) and 63.0 (2)° in the two molecules. In the crystal, intermolecular N—H⋯N hydrogen bonds link the molecules into infinite chains.
There are two half-molecules in the asymmetric unit of the title compound, C 32 H 30 Cl 4 N 4 , in both of which the N-H bonds are syn to the meta-chloro substituents in the adjacent benzene ring. The other two Cl atoms of these two molecules are disordered with occunpancy ratios of 0.79 (2):0.21 (2) and 0.68 (1):0.32 (1). Adjacent chlorophenyl rings make dihedral angles of 74.3 (2) and 63.0 (2) in the two molecules. In the crystal, intermolecular N-HÁ Á ÁN hydrogen bonds link the molecules into infinite chains.
Related literature
For our study on the effect of substituents on the structures of this class of compounds, see: Gowda et al. (2007Gowda et al. ( , 2009Gowda et al. ( , 2010
Atoms Cl2 and Cl4 in (I) are disordered and were refined using a split model. The site-occupation factors were refined so that their sum was unity [0.79 (2) and 0.22 (2) for Cl2, 0.68 (1) and 0.32 (1) for Cl4, respectively]. The corresponding bond distances in the disordered groups were restrained to be equal.
Experimental
Suberic acid (0.2 mol) was heated with Phosphorus oxychloride (1.2 mol) at 70°C for 2 h. The acid chloride obtained was treated with 3-chloroaniline (0.8 mol). The product obtained was added to crushed ice to obtain the precipitate. It was thoroughly washed with water and then with saturated sodium bicarbonate solution and washed again with water. It was then given a wash with 2 N HCl. It was again washed with water, filtered, dried and recrystallized from ethanol.
Prism like colorless single crystals of the title compound used in x-ray diffraction studies were obtained by a slow evaporation of its solution at room temperature.
supplementary materials sup-2 Refinement
The H atoms of the NH groups were located in a difference map and later restrained to the distance N-H = 0.86 (2) Å. The other H atoms were positioned with idealized geometry using a riding model with C-H = 0.93-0.97 Å. All H atoms were refined with isotropic displacement parameters (set to 1.2 times of the U eq of the parent atom).
Atoms Cl2 and Cl4 are disordered and were refined using a split model. The corresponding site-occupation factors were refined so that their sum was unity [0.79 (2) and 0.21 (2) for Cl2, 0.68 (1) and 0.32 (1) for Cl4, respectively] and their corresponding bond distances in the disordered groups were restrained to be equal. The U ij components of Cl2, Cl4, C16 and C27 were restrained to approximate isotropic behavior. Fig. 1. Molecular structure of (I), showing the atom labeling and displacement ellipsoids drawn at the 50% probability level. Both disorder components are shown. The minor disorder components are shown with dashed bonds. Symmetry codes for the unlabeled atoms: -x+1, y, -z+1/2 and -x+3/2, -y+1/2, -z+1.
|
2016-05-04T20:20:58.661Z
|
2011-02-12T00:00:00.000
|
{
"year": 2011,
"sha1": "88a454fa61f0d6e689e2cad3375392eb5cf7a047",
"oa_license": "CCBY",
"oa_url": "http://journals.iucr.org/e/issues/2011/03/00/bq2276/bq2276.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "442eedcb9adcf3029576486c46d0363797cb59ca",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
258846058
|
pes2o/s2orc
|
v3-fos-license
|
Clinical chorioamnionitis: where do we stand now?
Intraamniotic infection is an infection resulting in the inflammation of any combination of the amniotic fluid, the placenta, the fetus itself, the fetal membranes, umbilical cord, or the decidua. In the past, an infection of the amnion and chorion or both was dubbed chorioamnionitis. In 2015, a proposal was made by an expert panel that, instead of clinical chorioamnionitis, the name intrauterine inflammation or infection or both be used, abbreviated as Triple I or simply IAI. However, the abbreviation IAI did not gain popularity, and this article uses the term chorioamnionitis. Chorioamnionitis may arise prior to, during, or following labor. It can present as a chronic, subacute, or acute infection. Its clinical presentation is generally referred to as acute chorioamnionitis. The treatment of chorioamnionitis varies widely across the world due to different bacterial causes and the absence of sufficient evidence to support a specific treatment regimen. There are limited randomized controlled trials that have evaluated the superiority of antibiotic regimens for treating amniotic infections during labor. This lack of evidence-based treatment suggests that the current choice of antibiotics is based on limitations in existing research, rather than absolute science. Chorioamnionitis cannot be cured by antibiotic therapy alone without delivery, and therefore it is necessary to make a decision according to the guidelines for induction of labor or acceleration of delivery. When a diagnosis is suspected or established, it is therefore necessary to apply broad-spectrum antibiotics according to the protocol used by each country, and to continue with them until delivery. A commonly recommended first-line treatment for chorioamnionitis is a simple regimen consisting of amoxicillin or ampicillin and once-daily gentamicin. Available information is not sufficient to indicate the best antimicrobial regimen to treat this obstetric condition. However, the evidence that is currently available suggests that patients with clinical chorioamnionitis, primarily women with a gestational age of 34 weeks or more and those in labor, should receive treatment with this regime. However, antibiotic preferences may vary based on local policy, clinician experience and knowledge, bacterial reasons for the infection, antimicrobial resistance patterns, maternal allergies, and drug availability.
Introduction
Intraamniotic infection is an infection resulting in the inflammation of any combination of the amniotic fluid, the placenta, the fetus itself, the fetal membranes, the umbilical cord, or the decidua (1). Previously, an infection of the amnion and chorion or both was dubbed chorioamnionitis. In 2015, a proposal was made by an expert panel that, instead of clinical chorioamnionitis, the name intrauterine inflammation or infection or both be used, abbreviated as Triple I or simply IAI (2). However, the abbreviation IAI did not gain popularity, and this article uses the term chorioamnionitis (3,4).
Pathologists use the name histologic chorioamnionitis to refer to an inflammation that lacks typical microbiological or clinical findings connected with acute infection, which adds to the complexity of terms (1,3). Diagnoses of clinical and histologic chorioamnionitis overlap considerably, but they are not always concurrent. The reason for this may lie in subclinical chorioamnionitis, which can be identified based on examination of the placenta but may still not be clinically diagnosed, or because non-specific clinical signs are used to diagnose clinical chorioamnionitis (5).
Chorioamnionitis can arise prior to, during, or following labor. It may be chronic, subacute, or acute. Its clinical presentation is generally referred to as acute chorioamnionitis. Chorioamnionitis is most often linked with premature labor, prolonged membrane ruptures, prolonged labor, smoking, meconium-stained amniotic fluid, nulliparous pregnancy, multiple vaginal exams following rupture of membranes, and identified viral or bacterial infections. Chorioamnionitis can also arise at term and appear in women without previous infections. Chorioamnionitis can result in morbidity and mortality in the mother and the newborn if not treated in time and appropriately. The morbidity and mortality of the newborn increase with early pregnancy. It has been shown that antibiotic treatment decreases the frequency and severity of chorioamnionitis in both mothers and newborns (1,3,6).
The clinical picture of chorioamnionitis is presented below with an emphasis on current treatment and the use of antibiotics. This article presents various guidelines used by various countries and identifies differences in the treatment approaches.
Epidemiology
A systematic review performed by Woodd et al. (7) estimated that clinical chorioamnionitis occurs in 3.9% of all puerperae and that it is the most common infection during labor (7,8). The incidence of chorioamnionitis varies greatly between studies (9). This difference is the result of multiple factors, especially variations in research methodology (higher rates are reported in prospective studies in comparison to retrospective studies), variations in the distribution of risk factors among the populations investigated, the application of various diagnostic criteria (e.g., clinical compared to histologic ones: the incidence of histologic chorioamnionitis is much greater), and changes in obstetric practice (3,6). Incidence also differs between preterm labor (prior to 37 weeks' gestation) and labor at term (37 weeks or more). In women with preterm labor or that have preterm premature rupture of membranes (PPROM), the incidence is 40-70% (10)(11)(12). In deliveries occurring from 21 to 24 weeks of gestation, it is possible to find histologic chorioamnionitis in over 94% of cases (9). In term pregnancies, clinical chorioamnionitis was diagnosed in roughly 1-3% of cases with membranes that were intact and 6-10% of cases having preterm rupture of membranes (PROM) (11, 12).
Pathogenesis and microbiology
The origin of chorioamnionitis is often polymicrobial; it involves both anaerobic and aerobic bacteria, and it often arises from the vaginal flora (3). It mostly occurs through bacterial invasion ascending from the lower genital tract into the amniotic cavity, which is usually sterile. Ascending infections, progression, and clearance may be linked to vaginal dysbiosis, according to recent research (13). Although microbiome diversity is typically indicative of good health in most areas of the body, the vaginal microbiome's health is instead tied to low microbial heterogeneity and the predominance of Lactobacillus spp. This genus produces enzymes capable of glycogen fermentation, leading to the production of substantial amounts of lactic acid. The subsequent low pH protects the cellular metabolic function of the cervix and vagina and impedes the growth of potentially pathogenic species. In addition, genetic factors may impact mucosal immunity and microbiome diversity (13).
In rare cases, it is also possible for intraamniotic infection to occur following invasive procedures such as amniocentesis or chorionic villus sampling or via a hematogenous route that is secondary to maternal systemic infection-for example, with Staphylococcus aureus or Listeria monocytogenes. Potential routes of intrauterine infection/chorioamnionitis are shown in the Figure 1. Most cases of chorioamnionitis detected and managed by obstetricians are noted at term, and clinically apparent intraamniotic chorioamnionitis complicates only 2-5% of these deliveries (1). More recent information shows that the relative risk of chorioamnionitis and neonatal infection increases from 40 weeks of gestation onward (1,9).
The pathophysiology of chorioamnionitis is highly complicated. One of the most important cytokines responsible for the inflammation response is tumor necrosis factor-alpha (TNF-α), which is a multifunctional Th1 cytokine and is produced by macrophages during inflammation. Cytokine homeostasis allows for embryo implantation and normal pregnancy outcomes. In the early stages of normal pregnancy, Th1 pro-inflammatory cytokines are necessary for the stimulation of new vessels for successful embryo implantation. However, prolonged exposure to Th1 cytokines may result in a cell-mediated immune response, which is harmful to the fetus and may cause spontaneous abortion or preterm birth (14). Recent studies have demonstrated an important correlation between TNF-α and the endocannabinoid/endovanilloid (EC/EV) system in preterm deliveries. Torella et al. found a link between the stimulation of cannabinoid receptor type 1 (CB1) and the antagonism of the transient receptor potential vanilloid 1 (TRPV1) channel in the placenta, which could be used in preterm birth prevention through selected molecules (15). The bacteria involved in chorioamnionitis may vary by location and specific population. Bacteria that are commonly identified in chorioamnionitis are group B streptococcus (GBS), Bacteroides spp. Escherichia coli, Gardnerella vaginalis, Mycoplasma pneumoniae, and Ureaplasma spp. Candida and its subspecies are defined as risk factors linked with chorioamnionitis, which leads to preterm birth and neonatal infections. Studies have shown that in young persons with sexually transmitted infections trichomoniasis constitutes a risk for developing chorioamnionitis. Even though chorioamnionitis represents a risk factor for vertical transmission during pregnancy, HIV status of the mother does not represent a risk factor with regard to chorioamnionitis (1,2,4,6,8,16).
In addition to the aforementioned pathogens, there are many that can activate the inflammation cascade but are not often mentioned. Zika virus infection, which can lead to chorioamnionitis, is possible in pregnancies in high-risk areas. It mainly crosses the placenta in the first trimester and therefore is rarely considered in pregnancy. Ascending vaginal infection is also possible (17).
Infections in non-genital sites such as pneumonia and periodontal infection are linked to preterm labor. Oral infections are considered a contributing factor to preterm labor incidence because research has found that commensal bacterial species from the oral cavity can spread to the fetoplacental unit of women with term gestation and adverse pregnancy outcomes. Adverse pregnancy outcomes have been strongly associated with certain microbes, including Fusobacterium nucleatum, Campylobacter rectus, Porphyromonas gingivalis, and Bergeyella spp. (18).
Risk factors
The most important risk factors for chorioamnionitis are prolonged labor and the time from the spontaneous rupture of the fetal membranes to birth. Increased risk for the development of chorioamnionitis is also associated with the following (1,3,19): • GBS infection during pregnancy, sexually transmitted diseases, bacterial vaginosis; • Multiple digital vaginal examinations during labor (especially post rupture of membranes); • Digital examinations instead of speculum examinations of pregnant women with PPROM; • Cervical insufficiency; • Intracervical balloon catheter to help the cervix ripen faster or induce labor; • Meconium-stained amniotic fluid; • Alcohol abuse and smoking during pregnancy; • Pregnancy after IVF-ET; • Chorioamnionitis in previous pregnancies.
Signs and symptoms
Maternal fever over 38 • C and fetal tachycardia even before the onset of fever predominate in the clinical picture Frontiers in Medicine 03 frontiersin.org of chorioamnionitis in pregnant women. In laboratory blood test results, the diagnosis is confirmed by elevated inflammation indicators and a typically raised white blood cell count that has a left shift; in addition, purulent amniotic fluid and uterine tenderness on palpation are also typical (20). The clinical picture is often non-specific and may include one or more of the following symptoms (1, 2, 6): • Fever; • Leukocytosis in the mother exceeding 15,000/mm 3 ; • Tachycardia in the mother exceeding 100/min; • Tachycardia in the fetus exceeding 160/min; • Uterine tenderness on palpation; • Bacteremia (most common when chorioamnionitis is associated with a GBS or E. coli infection; • Purulent amniotic fluid. Chorioamnionitis can have a subclinical manifestation, which is defined as not presenting with the clinical image described above. A subclinical infection can present as preterm labor with intact fetal membranes or as PPROM (4). We approach pregnant women with suspected chorioamnionitis in stages. The patient history should start with the age of the mother, gestational age, parity, major characteristics of the pregnancy including any difficulties, and the history of sexually transmitted infections, urinary tract infections, and other illnesses. The status of the fetal membranes is important: whether there has been a rupture or whether the membranes are preserved, and whether meconiumstained amniotic fluid is present. We measure the basic vital functions of a pregnant woman (temperature, blood pressure, pulse, and oxygen saturation). The physical examination should be thorough; it should also include a complete physical assessment, including an examination of the abdomen and vagina, and ultrasound examination of the uterus and fetus. Upon admission and suspicion of chorioamnionitis, it is necessary to take a swab of the vagina for pathogenic bacteria. The initial assessment of chorioamnionitis thus includes a complete maternal and fetal clinical assessment. A blood count (for leukocytosis) is routine for suspected infection, but recent studies suggest that leukocytosis in pregnant women with PPROM on admission does not confirm microbial invasion into the amniotic cavity or inflammation. Bacterial cultures taken via vaginal or cervical swab do not correlate with infection secondary to chorioamnionitis (1,3,4,6).
Chorioamnionitis can be diagnosed using amniotic fluid culture or Gram staining-or both of these methods plus biochemical analysis-but in most puerperae such a diagnosis is primarily based on clinical evaluation of symptoms and signs. According to the panel of maternal and neonatal experts Table 1.
Markers before delivery combined with clinical manifestations and gestational age are useful for guiding management for the mother. Confirmation of chorioamnionitis using antenatal markers is not routinely required in women at term that are showing progress toward delivery. However, for women experiencing preterm labor that are undergoing evaluation for tocolysis or using corticosteroids, biomarker assessment may have some value in confirming a diagnosis of chorioamnionitis (21). In clinics where amniocentesis is carried out to ascertain chorioamnionitis, the lab tests used for clinical management include lactate dehydrogenase (LDH) activity, glucose concentration, WBC and RBC counts, bacterial cultures, and Gram stain. The results of cultures are not usually available promptly enough for making decisions. Consequently, physicians are forced to rely on other analyses, the turnaround times for which are only hours. Unfortunately, these tests (glucose, LDH, WBC count, and Gram stain) sometimes do not agree in confirming or ruling out chorioamnionitis; consequently, interpreting the results of these tests may not be simple. Studies examining biomarkers of chorioamnionitis are complicated by the absence of a gold standard for diagnosis (2). The various biomarkers that have undergone evaluation are not ideal; that is because their accuracy is insufficient for defining a particular threshold or due to the invasive character of amniocentesis. Some studies have noted the promise of IL-6 as an intrauterine inflammation marker (22,23). Nonetheless, current information remains debated. A Cochrane review recently showed that when AF analysis is used to exclude chorioamnionitis in women with PPROM it suffers from low evidence quality (21,24).
Chorioamnionitis can range from mild to severe. Histopathological results agreeing with inflammation may also be found in placentas during normal pregnancies (25). In cases of chorioamnionitis, the fetal membranes can appear normal or
Category
Features and comments
Differential diagnosis
The majority of clinical signs linked with chorioamnionitis are not specific; in a pregnant woman, fever may be associated with dehydration, use of prostaglandins for cervical ripening or induction of labor, or epidural analgesia. Maternal tachycardia can be physiological or associated with pain, epidural analgesia, or medication. Leukocytosis in a pregnant woman occurs both during childbirth and during antenatal treatment with corticosteroids and also in infections that are not chorioamnionitis. Tachycardia in the fetus may be associated with hypoxemia in the fetus, fever of any etiology in the mother, or the passage of certain drugs via the placenta (1, 3, 4, 6).
Differential diagnosis in pregnant women with clinical signs of chorioamnionitis includes delivery, placental abruption, and other infections. Labor may be linked with fever (if the puerpera has had epidural analgesia), tachycardia in the mother, leukocytosis, and a uterus tender to palpation. Clinical chorioamnionitis is difficult to diagnose in puerperae with epidural analgesia because fever is frequent and can be associated with the anesthetic. Moreover, epidural anesthesia hides the sensitivity of the uterus and may cause tachycardia in the mother or fetus. On the other hand, a small abruption can result in increased uterine sensitivity and tachycardia in the mother, but it is usually linked with absence of fever and vaginal bleeding. Other, extrauterine infections linked with abdominal pain (not necessarily with labor) and fever and including pyelonephritis, viral respiratory infections, appendicitis, pneumonia, and COVID-19 are also considered in the differential diagnosis. Such infections can result in tachycardia and leukocytosis in the mother as well as tachycardia in the fetus. Nonetheless, they can generally be distinguished from chorioamnionitis based on clinical presentation (e.g., gastrointestinal or respiratory symptoms indicate an extrauterine cause of fever) and through lab tests (e.g., urinalysis) (1, 3, 4, 6, 21).
Complications
Exposure to chorioamnionitis increases the risk of an adverse pregnancy outcome by 2-to 3.5-fold, independent of the duration of infection. Sepsis in the mother and neonate are leading causes of death globally. According to the literature, between 2003 and 2009, sepsis resulted in 10.7% of maternal deaths worldwide (25). Even though maternal death as a result of sepsis is more frequent in lessdeveloped countries, it is a growing problem in some developed countries, such as the United States (5).
At present, chorioamnionitis serves as a primary risk factor to identify infants at risk of early-onset neonatal sepsis (EONS). However, if newborns are overexposed to broad-spectrum antibiotics before EONS is excluded, or for "presumed" EONS without a definitive diagnosis, this has the potential to cause short-and long-term adverse effects, including increased risk of necrotizing enterocolitis and mortality (5,(29)(30)(31). Lateonset neonatal sepsis (LONS) is challenging to diagnose because the clinical signs are non-specific, and no risk stratification tools are available as a guide toward a threshold for when diagnostic tests should be ordered or antibiotics started. It is (32,33). In a recent meta-analysis and systematic review, Villamor-Martinez et al. (34) concluded that the immaturity of an infant is key in the morbidity linked with very preterm or extremely preterm birth; however, the pathological processes resulting in preterm birth can also affect the outcome. Their data indicate that, if infection or inflammation serve as triggers for preterm birth, such infants are more inclined to develop sepsis not only directly following birth, but also during the first weeks of their lives. The link between chorioamnionitis and EONS does not appear to be connected with gestational age, whereas the lower gestational age of infants exposed to chorioamnionitis reduced the effect size of the link between LONS and chorioamnionitis. According to Villamor-Martinez et al.
(34), chorioamnionitis may start the immunomodulatory sequence that leads to LONS, but it may also modify the exposure rate to other stimuli, including antenatal and post-natal antibiotics and corticosteroids, invasive therapies, lung damage, patent ductus arteriosus, or necrotizing enterocolitis, which result in greater vulnerability of very preterm or extremely preterm infants to sepsis. On the other hand, chorioamnionitis may cause serious complications in the mother, including adult respiratory distress syndrome (ARDS), intensive care unit (ICU) admission, the need for a hysterectomy, post-partum endometritis, post-partum hemorrhage, prolonged labor, sepsis, wound infection, and, in unusual cases, maternal mortality (2). Women experiencing clinical chorioamnionitis are at increased risk of uterine atony, blood transfusion, and postpartum hemorrhage compared to women that do not have clinical chorioamnionitis. Increased postpartum hemorrhage and uterine atony seem to be related directly to myometrial contractility impairment as a result of intraamniotic inflammation or infection (4). Beck et al. (5) also pointed out that, although there are large numbers of studies evaluating neonatal sepsis, most of them were without maternal characteristics, and so their systematic review was unable to conclude if risk changes as a result of baseline clinical and demographic characteristics. Moreover, it was not possible to evaluate the association between maternal sepsis and chorioamnionitis due to the lack of publications available (5). Taking into account the limitations of published articles, further research on this topic is especially important due to high maternal mortality rates, especially in the United States in comparison to similar developed countries.
Management
The treatment of chorioamnionitis varies widely across the world due to different bacterial causes and the absence of sufficient evidence to support a specific treatment regimen. There are limited randomized controlled trials (RCTs) that have evaluated the superiority of antibiotic regimens for treating amniotic infections during labor. This lack of evidence-based treatment suggests that the current choice of antibiotics is based on limitations in existing research, rather than absolute science.
Even though chorioamnionitis is linked to preterm labor and delivery, there is no evidence to support antibiotics being routinely administered to women in preterm labor if they have intact membranes and there are no overt signs of infection. In fact, such prophylaxis may worsen outcomes (35). However, intrapartum antibiotics have been effective in preventing EONS and have significantly reduced its incidence in countries where they are used, regardless of the regimen used (35).
Pregnant women with chorioamnionitis (including suspected or confirmed) should be started on antibiotic therapy, and a decision should be made to induce labor; Figure 2. At present, evidence indicates that administering magnesium sulfate for fetal neuroprotection plus antenatal corticosteroids for fetal lung maturation to patients experiencing clinical chorioamnionitis between gestational ages of 24 0/7 and 33 6/7 weeks, and perhaps also between 23 0/7 and 23 6/7 weeks, provides a generally beneficial effect for the infant.
Nonetheless, there should be no delay of delivery to finish the complete course of magnesium sulfate and corticosteroids (4). Intravenous antibiotic therapy administered to the pregnant woman ensures concentration of antibiotics in the fetus and amniotic fluid 1-1.5 h after infusion, which reduces the risk of serious complications in the mother and fetus, but delivery is necessary for chorioamnionitis to be cured. The effectiveness of antibiotics is limited because bacteria in the amniotic fluid form biofilms that are resistant to antibiotic treatment (1,4,6). Immediate induction or acceleration of labor according to the protocol and guidelines is necessary in pregnant women with chorioamnionitis, and cesarean section should only be performed in the presence of obstetric indications. When cesarean delivery is applied with chorioamnionitis, this raises the risk of venous thrombosis, endomyometritis, and wound infection (1,4,36).
Antibiotic treatment started due to suspected or confirmed chorioamnionitis should not automatically continue after delivery; the extension of antibiotic treatment should be based on risk factors for endometritis after delivery (1,36). According to a study by Edwards and Duff (22), there is a lower likelihood of endometritis in postpartum women that delivered vaginally and may not require antibiotic therapy after delivery (1,22). In postpartum women that delivered by cesarean section, at least one additional antibiotic dose is recommended after delivery (1). However, based on the existence of other maternal risk factors, such as postpartum bacteremia or fever, it can be decided to continue antimicrobial treatment;
Differences in guidelines between countries
According to the American College of Obstetricians and Gynecologists (ACOG), antibiotics should be used whenever chorioamnionitis is suspected or confirmed in the absence of clearly documented overriding risks (1). In addition to antibiotics, the use of antipyretics is also necessary. The guidelines of the National Institute for Health and Care Excellence (NICE) (36) indicate that antibiotics should be administered to a woman with a clinical diagnosis of chorioamnionitis during labor. It is also necessary to give antibiotics to a pregnant woman immediately if an infection is suspected and to continue using them until the baby is born (36).
Many countries follow the ACOG guidelines, including Italy (37), China (38), India (39), and Sweden. Many countries have used the existing recommendations and adjusted them according to their counties' situation, including antibiotic availability, bacteria resistance, and so on.
The Slovenian guidelines (40) state that antibiotics are used during childbirth to prevent and treat infections in the mother and to prevent infections in newborns; Figure 3. Shortterm treatment during childbirth is used to prevent infections with GBS in newborns, to prevent postpartum endometritis, and to prevent or treat infections in the mother (postpartum endometritis, amnionitis, and chorioamnionitis). Antibiotics for preventing infections are administered when there are clinical and/or laboratory signs of inflammation in the mother and/or when the mother has a fever during childbirth. In addition, antibiotics are administered when there is rupture of the fetal membranes after the 34th week of pregnancy lasting for more than 12 h (23). Table 4 shows various antibiotic regimens for the treatment of chorioamnionitis, including the ACOG and NICE recommendations. The emphasis is on the different regimens in several European countries-Spain (41), France (42), and Slovenia (40)-that have distinctive antibiotic recommendations.
Discussion
A commonly recommended first-line treatment for chorioamnionitis is a simple regimen consisting of amoxicillin or ampicillin and once-daily gentamicin. Even though there is not sufficient information to demonstrate the best antimicrobial regimen for treating chorioamnionitis, the evidence that is currently available shows that women experiencing clinical chorioamnionitis, primarily women with a gestational age of 34 weeks or more and that are in labor, should receive treatment with this regime. These antibiotics are widely available at most facilities around the world and are more cost-effective than other regimens, which may lead to reduced healthcare costs in certain settings (1,4,43). Although there is not enough evidence to support one antibiotic over another, most countries prefer a regimen that is easy to administer and follows antibiotic use principles to minimize the emergence of resistant bacterial strains (36). In a recent article, Koucky et al. (44) pointed out that only three recommendations or guidelines [ACOG (1), CNGOF (42), and WHO (43)] addressed this issue, and that there was agreement among them with regard to immediately initiating combination antibiotic therapy and maintaining this for the duration of labor. The guidelines recognized the weakness of evidence to make strong recommendations regarding the duration for which antibiotics should be continued after delivery (44). Maternal GBS colonization is one of the main factors associated with the onset of chorioamnionitis and neonatal infection. As a consequence, a number of professional bodies in the United States, Spain, Australia, and Canada are in favor of universal antenatal screening. In contrast, the New Zealand Medical Association (NZMA) and the Royal College of Obstetricians and Gynaecologists (RCOG) do not support such a policy. The RCOG decision is based on the opinion of the UK National Screening Committee that clear evidence is still lacking regarding the benefits of routine screening for GBS, no fully accurate screening test is available yet, and at 35-37 weeks of gestation the GBS status does not reflect what this status will be at delivery (44)(45)(46). On the other hand, the American Academy of Pediatrics stated that broad adoption of the routine antenatal GBS screening policy it recommends has corresponded to an estimated 80% drop in early-onset GBS (44). For example, the proportion of Slovenian pregnant women that are carriers of GBS in the intestine or vagina was determined in two studies (47,48) and was found to be 17 and 23%, respectively. Pregnant women can be selected for antibiotic prophylaxis: by the presence of perinatal risk factors (premature birth, PPROM, fever, presence of GBS in the urine during pregnancy, or neonatal infection with GBS during a prior pregnancy), in the event of previously established GBS colonization in the third trimester of pregnancy, typically between the 35th and 37th week, or in the case of established perinatal GBS colonization of the mother. Nevertheless, in 2019 the Health Council of Slovenia adopted a proposal to introduce universal screening for GBS in the 35th to 37th week of pregnancy (49).
Moreover, Chatzakis et al. (19) concluded that a number of antibiotics seem to be more effective than using a placebo or no treatment in reducing the rate of chorioamnionitis following PPROM. Nonetheless, none of these are consistently better in comparison to other antibiotics, and the majority are not superior to no treatment or placebo for outcomes besides chorioamnionitis. In their conclusion, the quality of the evidence is low and for some antibiotics it is possibly outdated; moreover, some drugs frequently administered in clinical practice, in particular cephalosporins, have been underrepresented in RCTs. Similar conclusions were pointed out by Conde-Agudelo et al. (4) based on a survey carried out among American obstetricians, which showed broad variation in patterns of practice for managing clinical chorioamnionitis. This survey pointed out that clinicians are using over 25 different regimens of primary antibiotics, and that the duration of postpartum antibiotics ranges from no treatment an all to as much as 48 h of treatment postpartum (4).
It is still unclear if antibiotics ought to be discontinued following birth or continued postpartum. However, if a woman remains symptomatic, extended antibiotic treatment for a minimum of 24 to 48 h after the infection signs and symptoms have subsided may be beneficial. Evidence is not sufficient to determine the best antimicrobial regimen to treat women with intra-amniotic infection because the trials available have investigated a small number of patients and often lack sufficient power for detecting statistical differences among the treatments compared. Nevertheless, the last Cochrane review, from 2014 (50), concluded that the evidence quality ranged from low to very low for the majority of outcomes, following the GRADE approach. The evidence available is too limited for revealing the best antimicrobial regimen for treating patients experiencing chorioamnionitis, whether it is appropriate to continue antibiotics postpartum, and what treatment duration or which antibiotic regimen ought to be used (50).
In contrast to the dilemmas of antibiotic therapy, the mode of delivery is a less problematic question. Namely, if clinical chorioamnionitis is diagnosed, delivery should be considered regardless of the gestational age, but this does not necessarily mean a cesarean delivery is needed (1,4). It is important to ensure proper labor induction and progression if not contraindicated. Vaginal delivery is generally safer, and cesarean delivery should only be used for standard obstetric indications (4). Venkatesh et al. (46) conducted a large multicenter retrospective cohort study that supports this recommendation, finding that clinical chorioamnionitis is associated with an increased risk of adverse maternal outcomes for women that have a cesarean delivery, regardless of the type and duration of antibiotic therapy, but Management of prevention infection in the mother at delivery, the Slovenian recommendations. Adapted from Fabjan Vodušek et al. (40).
not for those that have a vaginal delivery (46). The study included 216,467 women without clinical chorioamnionitis and 4,807 women with clinical chorioamnionitis, with 2,794 delivering vaginally and 2,013 undergoing cesarean delivery. The adjusted odds ratio for adverse maternal outcomes was 2.31 (95% CI 1.97-2.71) for cesarean delivery and 1.15 (95% CI 0.93-1.43) for vaginal delivery. Another study by D'Arpe et al., analyzing the causes of peripartum hysterectomy, found that the cause for atony was rarely a peripartum infection (47). However, more research is needed on the relationship between chorioamnionitis and caesarean section. We should also not forget to mention the new SARS-CoV-2 strain. Pregnant women are infected as often as the general population, but a higher proportion of them recover from the infection without symptoms compared to the non-pregnant population. However, does COVID-19 cause chorioamnionitis? Pregnant women belong to a vulnerable population because they have a higher risk of complications due to anatomical, physiological, hormonal, and immunological changes during pregnancy. SARS-CoV-2 enters the cells of the lungs and other organs via the angiotensin-converting enzyme 2 receptor (ACE2 receptor) (48). Binding of the virus to ACE2 causes downregulation of this enzyme, resulting in reduced conversion of angiotensin II to angiotensin. The ACE2 receptor plays an important role in trophoblast proliferation, angiogenesis, and the regulation of arterial blood pressure during pregnancy (49). Down-regulation of ACE2 in the placenta due to SARS-CoV-2 may lead to oxidative stress of the placenta and the release of anti-angiogenic factors, including soluble fms-like tyrosine kinase-1 (sFlt-1) (50), and a decrease in pro-angiogenic factors, leading to features of pre-eclampsia and HELLP syndrome (49). When infected with SARS-CoV-2, symptomatic pregnant women have an increased risk of preterm birth, with delivery before the 37th week of pregnancy, especially iatrogenic preterm birth, due to worsening of the condition of the pregnant woman or the fetus. The increased risk of preterm birth after infection with SARS-CoV-2 is also related to the gestational age at the time of infection and different strains of SARS-CoV-2. SARS-CoV-2 can cause inflammation and damage to the placenta, which in turn increases the risk of developing preeclampsia and stillbirth and thus the risk of premature birth. The large multinational INTERCOVID (51) cohort study assessed maternal and neonatal outcomes in a group of pregnant women with COVID-19 (n = 706) compared to a group without COVID-19 (n = 1,424) and found that pregnant women with COVID-19 had an increased risk of preterm birth; 83% of preterm births had a medical indication, and the main diagnoses were preeclampsia/eclampsia/HELLP (24.7%), small size for gestational age (15.5%), and fetal distress (13.2%). Pregnant women with COVID-19 had fewer incidences of spontaneous onset of labor and more caesarean deliveries. Pregnant women with COVID-19 delivered earlier than non-COVID-19 women after 30 weeks of gestation, with the largest difference before 37 weeks of gestation (51). Much of the initial research on the impact of infection on perinatal outcomes was performed in a group of third-trimester pregnant women, and the question of the impact of infection before and after 20 weeks of gestation was raised early. Badr et al. found that SARS-CoV-2 infection in the late second and early third trimesters increased the risk of adverse obstetric and neonatal outcomes, as well as of delivery before 37 weeks of gestation, whereas there were no statistically significant differences between delivery before 32 weeks of gestation and spontaneous delivery before 37 weeks of gestation (52). In the WAPM study on COVID-19 (53,54), the multinational retrospective cohort study conclusion was that high-risk pregnancies complicated by SARS-CoV-2 were at higher risk of adverse maternal outcomes than low-risk pregnancies complicated by SARS-CoV-2 infection. Moreover, early gestational age at infection, maternal ventilatory supports, and low birthweight are the main determinants of adverse perinatal outcomes in fetuses with maternal COVID-19 infection. In a retrospective multicenter cohort study by Piekos et al. (55) on whether SARS-CoV-2 infection in unvaccinated pregnant women in the first and second trimesters is a risk factor for preterm birth, it was found that the greatest predictor of the size of the fetus at delivery was the size of the fetus at the time of infection. The highest risk of preterm birth is with infection in the first trimester, and the possible causes are thought to be increased levels of ACE2 receptors in the placenta in early pregnancy (ACE2 levels are almost undetectable at birth) and increased risk of placental infection via viral binding to the ACE2 receptor. This results in poorer placental function and an increased risk of fetal growth restriction and fetal distress (55). It is important to be aware that many of the studies were conducted at the beginning of the pandemic, or during the delta and omicron waves, when vaccination of pregnant women was just beginning. By now, most pregnant women have already been exposed to SARS-CoV-2 and probably have a different immune response to the virus than at the beginning of the pandemic, with a lower risk of a more severe course of the disease and thus a lower risk of preterm birth. However, it is important to bear in mind research showing a significant impact of the virus on placental abruption. A poorly functioning placenta can lead to serious complications of pregnancy, affecting both iatrogenic and spontaneous preterm birth. Nevertheless, there are no data that COVID-19 infection itself causes chorioamnionitis. A question that arises is what happens with the infected and then healed uterus after the postpartum period, considering that a diagnosis of chorioamnionitis was made at delivery. Does the tissue completely heal, or are there any biochemical or histological signs that can predict any complications with subsequent pregnancies and deliveries? Vimercati et al. (56) studied the distance between the caesarean section scar and vesicovaginal fold on vaginal ultrasound as a predictor for pre-labor risk assessment of uterine rupture, which indicates a possible new diagnostic tool that can be used as an inspiration to look for new non-invasive diagnostic methods of any kind and try to decrease maternal and fetal complications in subsequent pregnancies.
Conclusion
Chorioamnionitis is an infection that often occurs in puerperae that have given birth prematurely, but it is also not excluded even in term births. Correct and timely identification of chorioamnionitis is important, especially recognition of the significance of laboratory findings and clinical signs, because identifying and implementing recommendations for treatment is essential to effectively reduce maternal and neonatal morbidity and mortality. The purpose of prophylactic antibiotic application at delivery is to prevent early neonatal sepsis caused by GBS, prevent maternal infections (chorioamnionitis and postpartum endometritis), and prolong pregnancy in PPROM before 34 weeks of gestation. Chorioamnionitis cannot be cured by antibiotic therapy alone without delivery, and therefore it is necessary to make a decision according to the guidelines for induction of labor or acceleration of delivery. When a diagnosis is suspected or established, it is therefore necessary to apply broad-spectrum antibiotics according to the protocol used by each country, and to continue with them until delivery; Figure 2. The time that elapses between diagnosing clinical chorioamnionitis and delivery is related to the majority of adverse outcomes for mothers and newborns. Antibiotic preferences may vary based on local policy, clinician experience and knowledge, the bacteria causing the infection, antimicrobial resistance patterns, maternal allergies, and availability of drugs.
Author contributions
DL: conceptualization and methodology. DL and MB: writing-preparing the original draft. DL, MB, GK, and MD: writing-review and editing. MD and GK: supervision. All authors have read the published version of the manuscript and agreed to it.
|
2023-05-24T13:12:55.797Z
|
2023-05-24T00:00:00.000
|
{
"year": 2023,
"sha1": "d6575237b9448660251567a59d7e698f282cc212",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "d6575237b9448660251567a59d7e698f282cc212",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
234673499
|
pes2o/s2orc
|
v3-fos-license
|
Measures for Government Supervision Department to Strengthen Supervision on Safety Production of Construction Enterprises
Safety production has always been a concern. In the field of construction, the supervision of safety production can be roughly divided into internal supervision and external supervision. External supervision is usually carried out by specific government regulatory departments. The trend of government supervision departments to strengthen the safety production management of construction enterprises is obvious, and a series of deficiencies in the supervision need to be effectively made up. In the new era, the supervision of work safety needs to be further strengthened, and the role and value of supervision need to be fully reflected. This paper will specifically analyze the current situation of the government supervision department on the safety production of construction enterprises, and put forward more effective supervision strategies. Key word: Government sector ; Const ruct ion enterprises; Safe production Publication date: July, 2020 Publication online: 31 July, 2020 *Corresponding author: Yan Zhao, ma1392020@163. com On the whole, the supervision level of the government on the safety production of construction enterprises has been improved to a certain extent, and before the construction, the enterprises have more and more cooperated with the supervision activities of the functional departments. However, under the influence of various factors, the supervision department has exposed some problems in the supervision of safety production of construction enterprises, which shows that there is a large space for adjustment and Optimization in the implementation of specific supervision activities. It is an indisputable fact that the regulatory authorities in some regions are unable to perform the corresponding duties well. In view of this, it is necessary and important to explore a better strategy for safety production supervision. 1 Overview of government supervision on safety production of construction enterprises It is of great significance for the government regulatory departments to supervise the safety production of construction enterprises, which is also the necessary guarantee in the construction of project construction of construction enterprises. The project construction process of construction enterprise itself is a dynamic process, and the construction period of large-scale project is often long. In this state, the weak safety awareness, and the lack of internal safety management in enterprises can easily lead to the occurrence of safety accidents, and the supervision conducted by the supervision department can effectively weaken the actual risk of safety accident risk. After the government regulatory departments effectively perform the corresponding regulatory functions and timely discover and investigate the relevant hidden dangers, the safety production and construction environment of construction enterprises can also be effectively optimized. For the government supervision department, it naturally needs to effectively consider the safety production supervision of construction enterprises.
On the whole, the supervision level of the government on the safety production of construction enterprises has been improved to a certain extent, and before the construction, the enterprises have more and more cooperated with the supervision activities of the functional departments. However, under the influence of various factors, the supervision department has exposed some problems in the supervision of safety production of construction enterprises, which shows that there is a large space for adjustment and Optimization in the implementation of specific supervision activities. It is an indisputable fact that the regulatory authorities in some regions are unable to perform the corresponding duties well. In view of this, it is necessary and important to explore a better strategy for safety production supervision.
Overview of government supervision on safety production of construction enterprises
It is of great significance for the government regulatory departments to supervise the safety production of construction enterprises, which is also the necessary guarantee in the construction of project construction of construction enterprises [1] . The project construction process of construction enterprise itself is a dynamic process, and the construction period of large-scale project is often long. In this state, the weak safety awareness, and the lack of internal safety management in enterprises can easily lead to the occurrence of safety accidents, and the supervision conducted by the supervision department can effectively weaken the actual risk of safety accident risk. After the government regulatory departments effectively perform the corresponding regulatory functions and timely discover and investigate the relevant hidden dangers, the safety production and construction environment of construction enterprises can also be effectively optimized. For the government supervision department, it naturally needs to effectively consider the safety production supervision of construction enterprises.
The frequency of supervision is obviously insufficient
In the supervision of safety production of construction enterprises carried out by the government regulatory departments, the frequency of supervision is relatively low, and the obvious lack of supervision frequency also leads to many hidden dangers that can not be found in time, and the actual effectiveness of external supervision led by supervision departments will be weakened to varying degrees [2] . On the one hand, the regulatory resources of the government regulatory authorities are relatively limited. In unit time, when the actual number of projects under construction of construction enterprises in the region is large, it is difficult for the regulatory authorities to carry out comprehensive safety production supervision, and the supervision frequency will naturally be in a relatively low state. On the other hand, the regulatory authorities in some areas did not find their own positioning, the regulatory concept is also very old, the lack of ideological awareness leads to the mechanical characteristics of the actual supervision work is obvious. It is also because of the low actual frequency of external supervision led by regulatory authorities, it is usually difficult to improve the regulatory effect.
Lack of focus in actual supervision
The lack of supervision focus is also obvious in the safety production related supervision of construction enterprises conducted by government regulatory departments. Taking the safety production supervision of specific enterprises and specific projects as an example, the supervision departments need to pay attention to a variety of supervision levels in a unit time, while in a specific region, the number of enterprises and projects that the regulatory authorities need to implement synchronous supervision will also show different characteristics due to seasonal factors. Many regulatory personnel are still in accordance with the inherent thinking and mode of supervision, and some construction enterprises have accumulated sufficient "experience" to deal with the supervision of regulatory authorities. In this state, once the regulatory authorities are unable to clarify the basic focus in the actual supervision and make flexible adjustments, the supervision by the regulatory authorities is easy to become a mere formality. Over time, the effectiveness of regulation will be weakened. To rely on the development of regulatory activities to ensure the safety of construction enterprises, project construction will become more difficult. This shows that the lack of focus in the actual supervision is the specific problem in the corresponding regulatory activities.
Decoupling of supervision and guidance
In the safety production supervision of construction enterprises by the government supervision department, the phenomenon of decoupling between supervision and guidance is very common, which has become a specific problem in the corresponding supervision activities. In the actual supervision work, it is not difficult to find that many construction enterprises' safety awareness is very weak, not only that, it is difficult to effectively carry out safety production and construction organization. However, the effectiveness of administrative punishment is relatively low. In the actual work, even if the supervision department has carried out administrative punishment, some construction safety risks cannot be effectively solved. Government supervision has the basic power of supervision and administrative punishment, but at the same time, it should also give the corresponding enterprises necessary guidance and help in safety production and construction. Once the supervision and guidance are decoupled, the contribution of supervision to the solution of some substantive problems will be greatly reduced, and the supervision activities cannot provide sufficient support for the safety production and construction environment.
. 1 S e l e c t i v e i n c re a s e o f re g u l a t o r y frequency
The government regulatory departments should pay attention to the moderate improvement of supervision frequency in the supervision of safety production of construction enterprises, and selective improvement of supervision frequency is also an effective way to further enhance the regulatory role [3] . For example, in Northeast China, the seasonal characteristics of project construction are very obvious. Therefore, the government regulatory authorities should increase the corresponding supervision frequency in summer and autumn every year, and release the signal of strengthening supervision through the increase of supervision frequency as much as possible, so that the corresponding construction enterprises can have more awareness of safety production and construction. In addition, it is not difficult to find that the absence of external supervision is usually one of the causes of accidents by analyzing the causes of safety accidents in some construction enterprises. Therefore, it is very important to selectively enhance the supervision frequency of safety production, so as to strengthen the supervision. In this case, the supervision department can find the corresponding problems more timely, and various problems existing in the safety production and construction of construction enterprises can be more effectively solved.
With the help of the innovation of supervision form, the key points of supervision are highlighted
There are many factors that can have an impact on the regulatory functions of government regulatory departments and the effectiveness of regulatory activities. Among them, the selection of regulatory means and the clarity of regulatory focus are more direct factors. From the perspective of improving the actual effectiveness of regulatory activities, it is very important to highlight the regulatory focus with the help of innovation and diversification of regulatory forms. For example, the regulatory authorities can effectively link active supervision with passive supervision, and receive the information from the public for targeted supervision by unblocking the corresponding online reporting platform. This practice can usually help the regulatory authorities to clarify the direction and focus of supervision in a timely manner. In addition, the government regulatory departments can also analyze the types and causes of common construction safety problems in the region under the big data thinking, so as to clarify the supervision focus, and carry out specific innovation in the selection of supervision means in combination with the actual needs of corresponding regulatory activities.
Targeted guidance combined with regulatory issues
Combined with the performance of a series of regulatory issues, it is equally important to provide targeted guidance, which is also a practical problem that government regulatory departments can not ignore when performing the corresponding regulatory functions. For example, some enterprises can not effectively solve the problems in production safety and construction in a short period of time. In this case, the government regulatory authorities should not only impose corresponding penalties, but also help the corresponding enterprise entities to clarify the solutions of safety production and construction problems by issuing rectification proposals, so as to solve the corresponding problems and restore the normal production and construction order For effective help. Government supervision departments can also provide necessary help and support for construction enterprises to carry out project construction safely by issuing corresponding safety production and construction manual, publishing supervision standards on the network, and strengthening the publicity of safety production and construction supervision knowledge. After the strict supervision and effective guidance are fully linked together, the regulatory and service attributes of government regulatory departments can be effectively highlighted, and the role of regulatory activities can be further improved.
Conclusion
The government supervision departments related to the safety production supervision of construction enterprises need to pay more attention to their own supervision functions in the new period, and strengthen the supervision of the corresponding construction activities of relevant enterprises in the region. For the deficiencies and problems existing in the supervision, the corresponding departments should make up for and solve them based on systematic examination and analysis. In addition, the regulatory authorities need to constantly summarize the experience of safety production supervision of construction enterprises, and carry out more systematic adjustment and Optimization in the supervision based on the actual supervision experience, which is also an effective way to continuously improve the supervision level and better play the regulatory effectiveness.
Reference:
[1] Yi HM. On how the government regulatory departments strengthen the supervision of safety production of construction enterprises [J]. Building materials and decoration, 2019(23):
|
2021-01-07T09:08:02.763Z
|
2020-09-14T00:00:00.000
|
{
"year": 2020,
"sha1": "3a25173b2da4460cdd85aff1f1dab3e9da41e6b6",
"oa_license": null,
"oa_url": "http://ojs.bbwpublisher.com/index.php/JARD/article/download/1472/1284",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "156ba209a07bb57d4c2b50274c1f79966c146434",
"s2fieldsofstudy": [
"Engineering",
"Business"
],
"extfieldsofstudy": [
"Business"
]
}
|
2817409
|
pes2o/s2orc
|
v3-fos-license
|
Hyperbolic structures
We study the effect of rotation during the collision between dust aggregates, in order to address a mismatch between previous model calculations of Brownian motion driven aggregation and experiments. We show that rotation during the collision does influence the shape and internal structure of the aggregates formed. The effect is limited in the ballistic regime when aggregates can be considered to move on straight lines during a collision. However, if the stopping length of an aggregate becomes smaller than its physical size, extremely elongated aggregates can be produced. We show that this effect may have played a role in the inner regions of the solar nebula where densities were high.
1997), dust a
gregation certainly stands at the beginning of the formation of planetesimals and therefore of the terrestrial planets.Independent of the detailed process that finally forms planetesimals (e.g.Weidenschilling, 1980;Youdin and Shu, 2002;Cuzzi et al., 2001), dust grains first need to grow and settle towards the midplane of the disk.Planet formation therefore starts with the first steps of dust aggregation, when dust grains inherited by the disk from the interstellar medium start to collide and grow.This first step of growth is governed by Brownian motion aggregation.In the high density regions of the disk, dust/gas coupling is so tight that the main source of relative velocities between grains are the random motions produced by individual collisions between gas atoms and the grains.Understanding the Brownian motion phase of dust aggregation therefore means understanding the first step towards planets in disks.
At the very low collision velocities produced by Brownian motion (∼cm/s), grains always stick and no restructuring is occurring in the collision (Dominik and Tielens, 1997;Blum and Wurm, 2000).The structure of aggregates formed in this regime is therefore indeed a pure probe of the physical processes driving growth with Brownian motion of the particles.Theoretically, growth by Brownian motion was studied for example by Kempf et al. (1999).They calculated the motion of micron sized dust grains enclosed in a box of approximately constant dust number density.Diffusion, caused by the presence of a gaseous medium, produces relative motion of grains and leads to the growth of dust.The orientation of the (spherically not symmetric) aggregates was not followed during the computations.Instead, in order to randomize the relative orientation during collisions, the orientations of collision partners just before a collision was selected randomly.With these initial conditions, the aggregates were left to collide, without considering rotation during the collision.The calculations show a slow growth of the dust aggregates with time.At any time, the box contains a distribution of aggregate shapes which is characterized by a distribution of fractal dimensions.The mean fractal dimension found in the numerical study is around D f = 1.8.On the experimental side, a series of low-gravity experiments has been conducted (Blum et al., 2000;Krause and Blum, 2004) in order to study the growth of fractal aggregates under Brownian motion conditions.The results confirm many expected aspects of this growth regime, but also showed unexpectedly elongated aggregates, in contrast with the predictions by Kempf et al. (1999).
The authors qualitatively argue that rotation of the aggregates during the collision might lead to more elongated aggregates, because the probability of achieving contact between two aggregates at larger separations increases if the aggregates rotate.
To compare experimental and theoretical results, it is necessary to quantify the visual impression of elongation.In this study we will be using two different ways to do s :
(1) One can define an elongation factor as the ratio of maximum to minimum diameter of aggregate
f el = d max d min (1)
where d min is measured in perpendicular direction o d max .Experimentally, this quantity has to be derived from a few, or even a single picture.
W ue for model aggregates, we therefore choose three different projections of the aggregate and determine d max and d min from these images.(2) We can also derive the fractal dimension of the aggregates.This quantity can be computed by measuring the mass-radius relation of aggregates of different mass m and size R.If that relation follows a power law
m ∝ R D f (2)
then D f is called the fractal dimension of the aggregate.The fractal dimension measured in the experiments is 1.4, and typical values predicted by the numerical are around 1.8, indicating a mismatch.
While the idea of particle rotation is appealing as a mechanism to increase the elongation of dust aggregates, this has never been shown quantitatively.In this paper we inves igate the influence of rotation on the structure of aggregates formed under Brownian motion conditions.
The model
In order to test the influence of rotation on the growth of aggregates we developed a model for collisions of rotating aggregates.Since we are interested in the lo
velocity r
gime, collisions are not energetic enough to cause restructuring.These processes are therefore neglected, and aggregates are treated as rigid entities.Our code calculates the equation of motion of two aggregates set on collision course with each other.When both aggregates get into the first contact between any two constituent grains, a new aggregate is formed.The structure of this aggregate is determined by the position and orientation of the two aggregates at the moment of contact -no further changes occur after this moment.
In all our calculations we used the same monomers sizes and masses as in the CODAG experiment (Krause and Blum, 2004).Each monomer was 1µm in diameter and its mass was 1.0 × 10 −15 kg.The temperature assumed in our calculations was 300K , again like in the experiment.
In a fully general calculation, one would have to consider an ensemble of particles and follow the growth of this ensemble.However, it has been shown experimentally that the mass dis ribution function during Brownian motion growth is very narrow, so that a mono-disperse approximation works very well (Krause and Blum, 2004).We therefore proceeded in the following way: Starting with dimers, we first compute a large number of collisions between dimers, with random initial position, rotational orientation, and impact parameter.
The translational and rotational velocities are taken from a Maxwellian distribution for a given temperature T .The resulting quadrumer structures are stored in a database.In the next tep, we collide two quadrumers selected randomly from the database.A database of quadrumer structures is built in this way.In a similar way, we produce aggregates with larger sizes, always containing 2 n grains.In order to solve the problem we have studied the influence of rotation in two separate cases.The first one assumes ballistic collisions where the mean free path of an aggregate is longer than its size.The second assumes aggregation in a non-ballistic regime.The stopping length in this case is shorter than the size of the aggregate.
Ballistic collisions
In the ballistic regime, the mean free path of all involved particles is larger than the size of the largest particle.In this case, the center of mass of each particle m
es on a straight line
during the collision.In physical terms, the ballistic limit is reached in the limit of low (but non-zero, because the gas still needs to cause Brownian motion of the grains) gas densities.This is generally assumed to be the case both in protoplanetary disks, and in microgravity experiments described above.
When aggregates are colliding without rotation and on linear trajectories, elongated aggregates are formed when the first contact is made between grains on the outside of the aggregate, while the two aggregates are more or less aligned.Depending on the initial orientation, this situation is realized for different impact parameters: If the aggregates are aligned along the direction of relative motion between the colliding aggregates, a small impact parameter is needed.If the alignment is perpendicular to the collision direction, a large impact parameter produces the most elongated result, while a small impact parameter would lead to a more compact structure.If the aggregates are rotating during approach, the chances that the contact will be made early with an elongated geometry increase.For this process to be efficient, the linear speed of grains far away from the center of mass should be comparable or larger than the translational motion.Therefore, the larger the ratio of rotational and translational speeds are, the greater the average elongation of the aggregate becomes.
This can be achieved in two ways.Superthermal grain rotation does occur in interstellar space, where it is responsible for the alignment of non-spherical grains with the galactic magnetic field and in this way produces the interstellar polarization (Purcell, 1979).With superthermal rotation, a fast circular motion is induced while the linear velocity remains thermal.However, superthermal rotation relies on small forces like non-isotropic absorption and scattering of light (Draine and Weingartner, 1996), or H 2 formation on specific sites (Purcell, 1979), to accelerate the grains.The non-thermal rotation speeds can only be sustained at extremely low gas densities.In fact, the densities prevailing in protoplanetary disks are prohibitive, and superthermal rotation can be ruled out there (Ossenkopf, 1993).
On the other hand, the effective linear translational speed can be slowed down by embedding the grains into gas of high density, leading into the regime of non-ballistic collisions.
Non-ballistic collisi ns
The mean free path l b = v th τ f of a dust grain is the distance an aggregate with mean thermal velocity v th can move during one friction time.The friction time of a dust grain
s
τ f = ε m σ a 1 ρ g v m
(3)
where m, σ a , ρ g and v m are mass, aerodynamical cross section, gas density and mean thermal velocity of the gas molecule respectively and ε is a proportionality coefficient whi l., 1996).
If ρ g increases, the mean free path decreases accordingly.When the mean free path becomes similar to the largest aggregate involved in the collision, the assumption of a ballistic collision is no longer v lid.Instead, the particle execute a random walk both in linear motion and in rotation.On average, the aggregates will spent a longer time at large distances before moving closer together.This effect increases the chance of creating a contact already at large distances, i.e. a strongly elongated aggregate.Fig. 1a presents the schematic picture of a non-ballistic collision.
Results
In order to investigate the influence of rotation on the production of elongated aggregates, we calculated growth of rotating aggregates in the range from ballistic to highly non-ballistic collisions.
3.1 Ball
stic collisions with and without thermal rotation Distributions of f el for ballistic collisions are plotted in fig. 2. In order to be able to compare with the results of Kempf et al. (1999), we also computed a case with rotation turned of.The two top diagrams show the non-rotating case and thermal rotation case in the ballistic regime.For the non-rotating case, a very narrow distribution results.This distribution peaks at a value f el ≈ 1.6.For thermal rotation, the peak value shifts to f el ≈ 2 and the distribution becomes wider.For both cases, a very small fraction of particles with elongation factors 4 and larger are produced, due to fortuitous initial conditions in the computation.Fig. 3 shows the mean elongation factors as a function of aggregates size.Each point is plotted together with its ±1σ errorbars.The no rotation case clearly has the lowest elongations and, as in the thermal rotation case, the line is almost flat for bigger sizes, meaning the growth affects the elongation very weakly.The elongation factors for rotating aggregates are shifted upwards from values of about f el = 1.8 to about f el = 2.3.The effect of rotation is also clearly seen in the fractal dimensions (fig.4) which reach level of D f ≈ 1.7 for non-rotating collisions.Thermal rotation causes the fractal dimension to drop to a value of 1.46.
Rotating aggregates outside of the ballistic limit
Further calculations were done for higher gas densities.In this case, the friction length l b becomes comparab
While the thermal velocity decreases with increasin
size (mass) of an aggregate, τ f ≈ const for small, non-compact aggregates.Consequently, the mean free path l b = τ f v th decreases with increasing particle size.We performed calculations for several different gas densities.By tuning the gas density, we can study the transition from the ballistic to the non-ballistic regime.Fig. 2 shows the resulting distribution of elongation factors for different gas densities.Fig. 3 presents the relation between the mean elongation factor and the aggregate size for different gas densities.
The mean elongation factor f el depends strongly on the gas density ρ g in the transition regime.Increasing the gas density from 10 −6 to a few times 10 −5 g cm −3 strongly broadens the elongation factor distribution and shift the peak value to f el ≈ 4. Extreme values of up to f el = 10 are reached in the tail of the distribution function.The elongation factor also depends on size of aggregates: Larger aggregates reach the largest elongations.This is the result of the decrease of the mean free path with increasing aggregate size.The elongation -size relation becomes steeper for higher gas densities.The mean elongation factor in our calculations follows a power law dependence on the gas density.The power index is size dependent.Aggregates consisting of 32 monomers follow a power law with index of 0.18, while the index for those made of 16 monomers is 0.13.This means that the latter aggregates can reach a mean elongation factor value about 3.4 for gas density ρ g = 3.88 × 10 −5 g cm −3 .For this gas density and aggregates bigger than 32 monomers the mean elongation factor reaches values above 4.5.
Fig. 3 presents also results of the CODAG experiment (Blum & Krause, personal communication).Three points together with their ±1σ errorbars show the ratio of maximum to minimum diameter of aggregates formed during the experiment.An excellent fit with our calculations seems to be ρ g = 1.94 × 10 −6 g cm −3 or slightly higher.The density used in the experiment was ρ g = 2.25 × 10 −6 g cm −3 (Krause and Blum, 2004).
We also calculated the fractal dimension D f by fitting a power law function to the plot of mean aggregate mass versus mean radius of gyration r g .The average was taken over the entire distribution of particles created for the given mas .
The fractal dimension as a function of the gas density is shown in Fig. 4. For large densities, D f appears to follow a power law with index α = −0.062.Thus for a gas density ρ g = 3.88 × 10 −5 the fractal dimension reaches the value 1.11.At low gas densities, the fractal dimension asymptotically approaches a value of 1.46, slightly larger than D f = 1.41 as observed in the CODAG experiment.Indeed, it turns out that the CODAG experiment is operating close to, but not safely within the ballistic limit.A gas density of 2.25 × 10 −6 g cm −3 is in fact entirely consistent with D f = 1.4.At this density, the mean free path of a particle is only about 1.5 grain diameters.The experiments therefore are not in the ballistic limit, but start to feel the influence of random walk during the collision.Further increasing the gas density should strongly enhance this effect, and fractal dimensions very close to unity should show up when ρ g exceeds a few times 10 −5 g cm −3 .
Discussion
The ballistic regime
In the ballistic limit, rotation increases the collisional cross-section by producing more opportunities for a contact while two aggregates are passing each other.In the non-rotation case only specific 'luck
appropriate orientati
n and impact parameter) lead to an elongation, while rotation allows a much larger set of possible collision parameters to be effective.This effect leads to more elongated structures, because it produces significantly more opportunities for collisions with larger impact parameters.
The non-ballistic regime
In the non-ballistic regime, the random walk executed by the collision partners during a collision effectively reduces the mean (or effective) relative translational velocity.The typical distance traveled from the orig
al position in a random w
lk scales with the square root of the number of steps N, we can define an effective translational speed by
< v >= √ N l b Nτ f .(4)
If we take √ Nl b = L where L is size of the aggregate, then the mean effective translational velocity during random walk is inversely proportional to the gas density ρ g (see eq. 5, 3).
< v >= l 2 b Lτ f .(5)
As the gas d ective velocity decreases and indeed leads to very long and slow collisions between aggregates.During that time, the aggregates also execute a random walk in rotation and in this way expo rd each other.In the limit of large gas densities, this leads to contact being made always at maximum distance, and consequently to fully linear structures.
It is important to remember that we are only computing collisions of equal size aggregates, i.e. pure cluster-cluster aggregation (CCA).This simplification will tend to exaggerate the elongation at a given size.If monomers and small aggregates cont ibute significantly to the growth, slightly more compact aggregates are produced.In fact, if a size distribution of impactors is involved in the growth of a target, the growth physics is an intermediate case between pure CCA and particle-cluster aggregation (PCA).As an indication, we can compare the fractal dimension of 1.7 we found for non-rotating aggregates in pure CCA with the fractal dimension of ∼ 1.8 found by Kempf et al. (1999) for a realistic size distribution.
Influence on aggregation timescale
Elongated aggregates, influenced by rotation and aggregation in non-ballistic regime, expose a larger surface to the gas and are easier targets for possible collisions with other dust grains.Just like for the partic
mass, one can introduce a fractal
imension for the cross-section D σ and write the relation between aggregate shape and cross-section as
σ ∝ r Dσ g .(6)
Thus for limiting case of compact agglomerates D σ = 2 while for linear grains it is 1.To see how the crosssection changes with aggregate size, one can combine both fractal dimensions D σ and D f :
σ ∝ (m/m 0 ) Dσ D f .(7)
The cross-secti is related to aggregation timescale by
t ∝ 1 σnv (8)
where t is aggregation time, n is number density of dust grains and v is velocity.We calculated cross-sections of aggregates with different fract investigate the timescale of aggregation.Fig. 5 shows the timescale di imescale for the most compact aggregates.Each line corresponds to different fractal dimension D f .The most interesting are D f = 1.8 which corresponds to the aggregates formed in the numerical calculations by Kempf et al. (1999) and fractal dimension D f = 1.4 which was obtained in the CODAG experiment (Krause and Blum, 2004).The timescale is shorter for elongated and open aggregates because of the larger cross-sections.The difference increases with increasing size of aggregates.
Application to protoplanetary disks
To assess the relevance of the effects discussed in the current paper in a protoplanetary disk, we need to compute the stopping length of dust grains as a function of aggregate size and location in the disk.The boundary co
ition for ballistic and non-ballisti
collisions is reached when the mean free path equals the physical size of the dust aggregate approximately given by the radius of gyration: l b (ρ g , T ) = r g (m).This condition leads to a relation between cluster size and gas density, indicating how large the cluster should be at a given density in order to start feeling the effect of elongation enhancement by non-ballistic collisions.
We will make the simplifying assumption that the stopping time of an aggregates is (at a given density) independent of size.This assumption is valid for very small aggregates, and for aggregates with very open structure, i.e. low fractal dimension.
We use the d finition of the radius of gyration given by eq. 2 and transform it into
r g (m) = Bm 1 D f (9)
where B = 1 A 1/D f and A is a proportionality coefficient from eq. 2. Then the relation between cluster mass and gas density relation can be derived as
m D f −0.5 = ε B 3m m 8π 1 n 0 r 2 ρ g ,(10)
where m m is a mean mass of a gas molecul monomers in the aggregate.In order to find n 0 we substitute the cluster mass by m = n 0 m 0 where m 0 is the mass of a monomer.
n 0 = ε B 3m m 8π m 0.5− how the critical size of a cluster is dependent on the gas density.It also reveals a dependence on monomer size.We applied eq. ( 11) to the Hayashi model of the solar nebula (Hayashi et al., 198 density of the model, the critical size of a cluster as a function of distance from the star.Each line represents this relation for a different monomer size.Thus aggregates with a fractal dimension of D f = 1.46, are formed below the line, while above that line more elongated grains are produced.Each line is a border between the ballistic and non-ballistic regimes for a given monomer size.For example, at a distance below 1 AU from the star, the gas density strongly influences the shape of aggregates formed through Brownian motion, if the aggregates consist of more than a few tens of 1µm grains.Consequently, the CODAG experiment reproduced conditions present in the inner part of the protoplanetary disk while its applicability to the low density regions in the outer disk is limited.At the CODAG density of ρ g = 2.25 × 10 −6 g cm −3 , the critical size is about 1.5 micron sized monomers.
Conclusions
We have studied the effect of aggregate rotation during collisions.The results show that rotation must definitely be treated correctly when modeling growth due to aggregation, or the geometrical structure of the resulting aggregates will be incorrect.
Rotation plays a role because it enhances the probability that two approaching aggregates make contact early on during the collision, between outer constituent grains.For the most simple case of ballistic collisions, during which the colliding aggregates move on a linear path, ignoring aggregate rotation increases the fractal dimension from 1.46 to 1.7 in the limit of pure clustercluster aggregation.
This effect becomes strongly enhanced if the density of the surrounding medium becomes so large that the stopping length of an aggregate becomes shorter than the aggregate size, i.e. in the non-ballisti
limit.We fin
that reducing the stopping length to the monomer radius results in a fractal dimension of 1.25.When the stopping length is reduced to one tenth of the monomer radius, the fractal dimension drops to 1.1.
For the solar nebula, we find that non-ballistic collisions play a role in the innermost regions of the disk for even very small aggregates made of a few µm-sized grains.For smaller monomers and/or further away from the Sun, enhanced elongation can be expected for larger aggregates, made of a few 100 or 1000 grains.In the outer disk, all collisions may be considered ballistic.
Fig. 1 .
1
Fig. 1.Schematic sketch of collisions between two aggregates i two cases.anon-ballistic collision with mean free path of an aggregate shorter than the aggregate size.Velocities change every τ f .b -ballistic collisions with mean free path longer than size of an aggregate.
Fig. 2 .
2
Fig. 2. Distribution of elongation factors, with peak value normalized to unity.From top to bottom: non-rotation case, ballistic thermal rotation, thermal rotation for different gas densities ρ g in cm −3 .The distributions are shifted vertically for better visibility.
Fig. 3 .
3
Fig. 3. Mean elongation factor for different gas densities.ρ g = 0 indicates the limiting case for low densities (mean free path much larger than aggregate sizes).The bottom line shows the elongation factor for non-rotation case.Three boxes are measured elongation factors of agglomerates formed i the CODAG experiment with ±1σ errorbars (Blum & Krause personal communication).
Fig. 4 .
4
Fig. 4. Fractal dim
|
2014-10-01T00:00:00.000Z
|
0001-01-01T00:00:00.000
|
{
"year": 2006,
"sha1": "aaca9e719ea7bcae59034baa83afbb2bb804c8fc",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "ee5a52e2973ac9c3479a688036351fd45141f0b8",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": []
}
|
81538624
|
pes2o/s2orc
|
v3-fos-license
|
Corneal Biomechanical Characteristics and Their Correlation in an Iranian Adult Myopic Population
Article Info Background and Objective: Corneal biomechanics is a branch of science that studies deformation and equilibrium of corneal tissue under the application of any force. The objective of the study was to determine the normal values of corneal biomechanical characteristics including corneal resistant factor (CRF) and corneal hysteresis (CH) in an Iranian adult myopic population and their associations with age, gender and ocular biometrical components. Methods: The number of 480 eyes of 480 patients (mean age: 26.73 ± 4.9) with myopia and myopic astigmatism were included in this study. Ocular Response Analyzer (ORA) was used to measure corneal biomechanical metrics of CH and CRF. Corneal topographic and pachymetric measurements were obtained using Pentacam Scheimflug topographer. Results: The means of CH and CRF were 10.28 ± 1.49 and 10.49 ± 1.61, respectively. Females showed higher CH and CRF values compared to males (CH: 10.55 ± 1.36 vs. 9.72 ± 1.57, CRF: 10.73 ± 1.46 vs. 9.94 ± 1.74). The CH was significantly positively correlated with central corneal thickness (CCT) and corneal volume (CV) and significantly negatively correlated with horizontal and vertical radius of curvatures of the back corneal surface and horizontal radius of curvature of the front corneal surface. The CRF had a significant positive correlation with CCT and CV, whereas significant negative correlations were found between CRF and horizontal and vertical radius of curvatures of the back corneal surface. In the linear multiple regression model, CH was only significantly associated with CV; likewise, CRF showed significant association only with CCT. Conclusion: The mean values of CH and CRF in Iranian population were higher than values reported in East Asian countries, comparable to or higher than values in USA and UK populations. From various ocular dimensions, CH was significantly associated with CV; whereas, CRF was significantly associated with CCT.
Introduction
Corneal biomechanics has been the recent subject of attention in ophthalmic literature. Several studies covering a wide variety of applications of the corneal biomechanical study have been performed in recent years. Corneal biomechanical characteristics which are known to affect the accuracy of intraocular pressure measurements (Lau and Pye, 2011;Tonnu et al., 2005) may be useful to identify early corneal diseases such as keratoconus (Schweitzer et al., 2010), and may assist with predicting refractive outcomes following corneal refractive surgery (Roberts , 2002). It has also been suggested that corneal biomechanical properties may reflect globe biomechanics and thus give an indication of the susceptibility of developing glaucomatous damage (Congdon et al., 2006;Wells et al., 2008).
The in vivo measurement of the corneal biomechanical properties was enabled by the development of the Ocular Response Analyzer (ORA) by Luce (2005). The ORA (Reichert Ophthalmic Instruments, New York, USA) evaluates the biomechanical status of the cornea through a bi-directional applanation process: A fully automated alignment system positions an air tube to a precise position relative to the apex of the cornea. Once aligned, a 25-millisecond air pulse applies pressure to the cornea. The air pulse causes the cornea to move inward, past applanation and into a slight concavity before returning to normal curvature; then two pressures are measurable through ORA signal. The ORA produces two measurements of corneal biomechanical properties; corneal hysteresis (CH) and corneal resistant factor (CRF). CH represents the absolute difference between the applanation pressures P1 and P2 and is mostly representative of the viscous property of the cornea resulting from the viscous damping inherent in the cornea. CRF is an empirically determined parameter that is thought to represent the overall resistance of the cornea. CRF is indicative of the cumulative effects of both the viscous and elastic resistances and it is influenced by elastic properties more than CH (Kotecha, 2007).
Some studies have evaluated the normal values of these biomechanical metrics in different samples and have concluded that there are ethnic, geographic and genetic differences in ORA measurements (Narayanaswamy et al., 2012;Foster et al., 2011;Fontes et al., 2008;Hashemi et al., 2014). Therefore, the knowledge of the average and normal ranges of CH and CRF values in each geographic area and ethnicity can be beneficial (Hashemi et al., 2014). According to the fact that few studies have evaluated corneal biomechanical indices in Iranian populations and most of them have used relatively small sample sizes, this study was aimed to establish the normal values of CH and CRF in Iranian population by using a relatively large sample size.
This study also aims at finding the associations between ORA biomechanical measures and age, gender and other ocular biometrics. Understanding such relationships may help better elucidate the significance of and applications for the metrics that ORA produces and is of great importance for controlling the confounding factors in different corneal biomechanical studies.
Study subjects
This is a cross sectional study; 480 healthy adult subjects with myopic refractive error participated in this study (spherical equivalent of subjective refraction < -0.50). All subjects referred to the Noor Eye Hospital in Tehran, Iran and scheduled for refractive surgery. Only the right eyes of the subjects were included in this study.
Examinations
First, preliminary examinations were done by an experienced optometrist for all participants including measurement of uncorrected distance visual acuity by the Snellen E chart at the distance of 6 meter, objective refraction by the auto-refractometer (Nidek ARK-1, Gamagori, Japan) and retinoscope (Heine Beta 200, Heine Corp, Optotechnik, Germany) and subjective refraction. Then, all participants were examined by an anterior segment specialist.
The ORA (Reichert Ophthalmic Instruments, New York, USA) was used to measure the biomechanical features of CH and CRF while the subject was sitting comfortably in a chair in front of the instrument. The patients were instructed to look at a fixation target (a red blinking light) in the ORA. The ORA was activated by pressing a button attached to the computer. A noncontact probe released an air puff. A signal of air reflux was sent to the ORA that displayed the CH and CRF on the computer monitor. An experienced examiner performed all the measurements with three consecutive readings in each eye. Only goodquality measurements with two distinct peaks were considered. The average of three readings was documented for each eye. Topographic/pachymetric measurements of the cornea were obtained using the Pentacam (Oculus, Wetzlar, Germany). The Pentacam is a rotating Scheimpflug camera that generates a 3-dimensional model of the cornea and anterior segment. For this study, the Pentacam's "50 picture 3D scan" measurement mode was used. The subjects were instructed to fixate on the central fixation target (the focus of which was adjusted to account for each subject's spherical refractive error), and to blink and open eyes wide just prior to image capture. The instrument digital camera and slit illumination system then rotated around the corneal apex to capture 50 cross-sectional Scheimpflug images of the anterior eye, each separated by 3.6 degrees. The Pentacam automatically captures images once correct alignment in the x, y and z directions is attained and flags any measurements that are unreliable (due to poor alignment, excessive eye movements, or any missing or invalid data). Any unreliable measurements were repeated. Main Pentacam's measures included central corneal thickness (CCT), horizontal and vertical radius of curvatures of the front corneal surface (HRCF and VRCF), horizontal and vertical radius of curvatures of the back corneal surface (HRCB and VRCB), corneal volume (CV), anterior chamber depth (ACD) and shape factor of the front and back corneal surfaces (QF and QB).
Data of each subject that met inclusion criteria was extracted using a data extraction form. This form was used as a pilot in ten patients before starting the study and it was amended. Finally, data were directly entered into SPSS 19 by typing in data view.
Exclusion criteria included patients under 18 years old, previous ocular surgery, diagnosis of any corneal pathology and other anterior segment diseases, chronic use of topical medications and corneal scars or opacities. Contact lenses were removed at least 72 hours prior to the ORA exam.
The research was approved by Ethics Committee of Iran University of Medical Sciences and followed the tenets of the Declaration of Helsinki. All study participants signed the informed consent form before inclusion.
Statistical analyses
The frequency table was used to present mean and standard deviation of baseline data. Bivariate correlations between predictor and dependent variables assessed using Pearson correlations statistic. We then selected the predictors of CH and CRF using multiple linear regression analysis with stepwise method of variables selection (Entry P<0.1; removal P>0.2) to create the best adjusted model. Cluster bar chart was used to show the means of CRF and CH with regard to gender and age categories. P value less than 0.05 was considered statistically significant.
Results
Of the total 480 eyes included in the study, 312 eyes (65%) belonged to females. The mean age of patients was 26.73 ± 4.9 years (range: 19-41). Table 1 shows the baseline data for the study variables. No significant correlation was observed between ages with CH (P=0.63) and CRF (P=0.57) but Mann-Whitney test showed a statistically significant difference in means of CH and CRF between two gender groups; such that eyes belonged to females with higher values. The mean of CH was 9.72 ± 1.57 in male and 10.55 in female groups (P=0.003). The mean of CRF in male and female groups was 9.94 ± 1.74 and 10.73 ± 1.46, respectively (P=0.01). The means of the CH and CRF with regard to gender and age categories are shown in Figures 1 and 2. CH was significantly positively correlated with CCT (P<0.001, r = +0.54) and CV (P<0.001, r = +0.60). On the other hand, significant negative correlation was observed between CH and VRCB (P<0.001, r = -0.38), HRCB (P<0.001, r = -0.33) and HRCF (P=0.002, r = -0.28). No significant correlation was found between CH and other variables. CRF had a significant positive correlation with CCT (P<0.001, r = +0.57) and CV (P<0.001, r = +0.56). Significant negative correlation was seen between CRF with HRCB (P<0.02, r = -0.20) and VRCB (P=0.002, r = -0.27). CRF was not significantly correlated with other variables. Pearson correlations between predictor and dependent variables are shown in Table 2. Table 3 shows the results of the multiple regression models for CH and CRF. Despite significant correlations between CH and curvatures of both front and back corneal surfaces, CCT and CV as well as significant association with gender, subsequent multiple regression analysis found that only CV was a statistically significant predictor of CH. The association between CH and CV has been depicted as a scatter plot with regression line in Figure3. With regard to CRF, despite significant association with gender as well as significant correlations between CRF and corneal back surface curvatures, CCT and CV, only CCT was found to be a statistically significant predictive factor of CRF in regression model. Figure 4 depicts the linear correlation between CRF and CCT.
Discussion
In the present study, we reported the mean values of corneal biomechanical characteristics (CH and CRF) in a large sample of Iranian population. Similar studies have reported a detailed description of these indices in different populations. The results of this study showed that the means of CRF and CH were 10.49 ± 1.61 and 10.28 ± 1.49, respectively. The mean corneal biomechanical properties in some East Asian adult populations were lower than in our study (Narayanaswamy et al., 2011;Jiang et al., 2011;Kamiya K, Shimizu K, and Ohmoto F, 2009;Wang et al., 2014;Hwang HS, Park SK, and Kim MS, 2013); whereas, our findings were comparable to or lower than the results reported in the USA and the UK populations ( Johnson et al., 2017;Leite et al., 2010;Laiquzzaman M, Tambe K, and Shah S, 2010). Besides the racial issue which has been previously reported18, the factor that could be attributed to these differences is refractive error. Highly myopic populations have been shown to have lower CRF and CH values (Hashemi et al, 2014); therefore, East Asian populations are expected to have lower mean values.
Our results showed no association between biomechanical features and age. The normal corneal stroma consists of lamellae of liquid crystal like arranged proteoglycan coated collagen fibrils. The biomechanical properties of the cornea are related to a very regular orthogonal arrangement of these lamellae (Fratzl &, Daxer, 1993). A detailed study of the collagen fibrils in normal human corneas showed a significant age-related increase in the collagen fibril diameters as well as an elongation of the collagen fibrils (Daxer &, Fratzl, 1997). Furthermore one would expect a tendency toward biomechanical strengthening of the cornea during aging. The results of this study are in accordance with the finding by Ortiz et al who did not find a significant change in biomechanical properties during aging (Ortiz D, Pinero D, and Shabayek MH, 2007). Therefore, it may be postulated that the biomechanical effect of the age-related increase in structural dimensions which may favor the strengthening of the cornea, may be offset by age-related changes in fibril orientation and /or macroscopic dimensions.
With regard to the association between gender and corneal biomechanical properties, some studies reported higher values of CH and CRF in females (Narayanaswamy et al., 2011;Foster wt al., 2011;Fontes et al., 2008 ) and our results supported their findings. To our opinion, such associations are not clinically significant and as can be seen in Figures 1 and 2, patterns of changes of corneal biomechanical features were not similar among different age categories; such that males in some age categories showed higher values of CH and CRF compared to females. We believed that gender should not be considered as a sole factor. There might be different other factors associated with biomechanical parameters of CH and CRF which may change in different manners in both genders at different ages. To investigate the associations between gender and corneal biomechanics, a huge sample size seems necessary and it is better to investigate such associations in different age groups with the consideration of other presumed contributory factors.
In this study we also evaluated the associations between CH and CRF with corneal curvatures of both the front and back corneal surfaces. Such associations have been associated with conflicting results in the literature. Some studies found no association; however, the results of studies in which dynamic contour tonometry was used, have suggested that corneal curvature affects corneal rigidity, with flatter corneas being less rigid and lower CH and CRF values are thus at least partially indicative of a less rigid corneal structure (Francis et al., 2007;Matsumoto et al., 2000). Our results showed that despite the significant correlations between CH and curvatures of the both front and back corneal surfaces, as well as significant correlations between CRF and curvatures of the back corneal surface, such associations were not reproduced in regression model indicating that corneal curvature was not a significant predictor of biomechanical state of the cornea when considered along with other variables and can be considered as a confounding variable.
Our results showed a significant association between CRF and CCT which has been consistently reported (Wasielica-Poslednic et al., 2010;Shah et al., 2006). Conversely, the association between CH and CCT has been associated with conflicting results in literature. A considerable feature of this study is that the relation between CCT and biomechanical metrics was investigated using a regression model by controlling the effects of known confounders. By this manner, CCT was found to be a significant predictor of CRF not CH, indicating that CH and CRF represent different biomechanical aspects of the cornea. As a result, it could be hypothesized that the reduced CH value is not primarily a function of corneal thinning after corneal refractive surgery.
On the other hand, the only significant predictor of CH in our study was CV. Pathel (2010) also demonstrated that CV was a predictor for CH but not CRF, suggesting that CV may reflect a more composite effect of corneal thickness and contour variation. Since CV is a three dimensional parameter, it can play a more effective role in determining biomechanical status of the cornea than CCT which is a two-dimensional parameter. The important role of CV in detection of keratoconus was previously reported by Ambrosio (2006) andFallah (2010). Since the CCT has a wellestablished role in corneal refractive surgery, we recommend that more attention should be paid on CV to have a better appreciation of corneal biomechanical status. More studies are recommended to investigate the effect of CV on the outcomes of refractive surgery.
As well known, eyes with keratoconus and high myopia have higher negative Q values (Pinero et al., 2010). Knowing the fact that CH and CRF are lower in keratoconic eyes (Schweitzer et al., 2010) as well as in high myopic eyes (Hashemi et al., 2014), our attention directed toward the associations between corneal shape factor and corneal biomechanical parameters. However, no association was found between shape factors of both front and back corneal surfaces with CH and CRF values.
Finally, no association was found between ACD, CH and CRF. Investigation of this relationship seemed logical; because, patients with glaucoma and small anterior chambers are shown to have lower CH and CRF values (Congdon, 2006;Wells, 2008). We found only one study reaching such associations; however, that study could be criticized because of the very small sample size.
|
2019-03-18T14:05:27.087Z
|
2018-04-01T00:00:00.000
|
{
"year": 2018,
"sha1": "2da6d9e284d30a66d7e45bc5d42cf3b7a3bee1d1",
"oa_license": "CCBYNCSA",
"oa_url": "http://fdj.iums.ac.ir/files/site1/user_files_4098d4/afsoon-A-10-27-5-329d423.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "ab69bcf267fc58f056c6b56a43cb8d17d485fa44",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
}
|
234290094
|
pes2o/s2orc
|
v3-fos-license
|
Clinical and Social Characteristics of Deliberately Intoxicated Minors Treated in Paediatric Intensive Care
Background. The aim of the study was to determine the clinical and social characteristics of minors using alcohol and drugs for inebriation, and the same for those using them for suicide. Methods. This study includes an analysis of case histories of adolescents hospitalized in the Pediatric Intensive Care Unit because of acute alcohol or/and drug intoxication in 2014–2016. Two groups (group I: inebriation, and group II: suicide) were compared on age, sex, severity of intoxication, used substances, presence of other self-harm evidence, and social status. Results. A total of 390 cases were registered: 78.21% in Group I and 21.79% in Group II. The Glasgow-Coma-Scale scores showed that patients from Group I were more severely intoxicated, with an average score of 11.47, whereas patients from Group II averaged 13.45 (P < 0.001). Self-harm was more prominent among minors from Group II, with an incidence of up to 65.09%. The most common substance used to become inebriated was alcohol (72.79%), and for committing suicide was medication (88.24%). Patients who were living in children’s care homes composed 13.33% of all cases included into the study, despite the low frequency of these minors in Lithuania (0.8%). Conclusions. The substance used for deliberate intoxication was mostly alcohol. Minors experiencing inebriation were hospitalized in worse clinical condition in comparison to those who had attempted suicide. Other signs of self-harm were signicantly more common among suicidal minors. Living children’s care homes a possible risk factor for deliberate intoxication among young people in Lithuania.
Teenagers who have intoxicated themselves with alcohol or other substances are not the only ones being admitted to Paediatric intensive care units (PICU) due to deliberate intoxication-minors who have attempted suicide by overdosing on medication or other chemical substances are also brought to these units. Suicide among minors is an especially relevant issue in many countries. According to research, suicide is the fth leading cause of death among children aged 9-13, and the third leading cause among adolescents aged 14-18, in the United States of America [9]. Overdosing of medication as a method of suicide is one of the most popular among minors. However, it is also one of the least effective methods [10]. When minors have suicidal thoughts and attempt suicide, the frequency of recurring actions increases, and so does the probability of committing suicide [11]. This is an extremely important issue in Lithuania, as suicide is responsible for up to 20% of deaths among [10][11][12][13][14][15][16][17][18][19] year old adolescents [12]. Additionally, the behaviour of teenagers attempting at suicide by overdosing on medication lacks scholarly research.
For clarity reasons, the terms intoxicate and intoxicated shall refer to the act and fact of self-poisoning of an individual, and inebriated shall refer to the feeling of being inebriated (as in high) throughout this article.
The aim of this study was to determine the clinical and social characteristics of both minors who overdose on substances for inebriation, and minors who use alcohol and drugs with the intention of committing suicide.
Methods And Materials
A retrospective study was performed in the PICU of the Clinic of Children's Diseases. Vilnius and district is the largest region and the capital of Lithuania (home to approximately 122,000 minors under the age of 18), and Vilnius district is the largest district in Lithuania. Pediatric patients suffering from any kind of acute intoxication are generally admitted to this Clinic from the entire region.
Ethical approval was obtained from the Lithuanian regional Bioethics Committee and Faculty of Medicine of Vilnius University. All procedures in the current study were in accordance with ethical standards from the Declaration of Helsinki (1964) and its later amendments.
This study includes an analysis of case histories of children and adolescents (aged < 18) who intoxicated themselves by means of a deliberate overdose of medication, drugs, and/or alcohol over a period of 3 years (January 2015-December 2017). The analysis considered the following aspects: the age of the study participants, their gender, the intention of intoxication, severity of the clinical condition based on the intention of intoxication, substances used for intoxication, relevant factors involved in deliberate selfharm, the season of intoxication, psychosocial environment, and adjustment. Data pertaining to said factors were manually collected from medical records.
Patients meeting the deliberate intoxication clinical criteria were divided into two groups: the Group I consisted of those who used alcohol, drugs, and medication with the intention of becoming inebriated, whereas the Group II consisted of teenagers whose intention of intoxication was obvious suicide attempt.
These groups were compared based on age, gender, season of the incident, living location, severity of intoxication (Glasgow Coma Scale (GCS) scores and blood alcohol content, observed and recorded during the primary physical examination), substances used for intoxication, presence of other self-harm signs (e.g. self-cutting), and place of residence (family, foster care, children's care homes). Relapsed minors were compared to minors who were hospitalized once within a span of 3 years, based on gender, other self-harm, and the social living situation.
The data were analyzed using Microsoft O ce Excel 2013 and IBM SPSS Statistics 21. Continuous variables were expressed as the mean ± standard deviation and qualitative data were reported as numbers and percentages. The normality of the variable distribution was tested by the Kolmogorov-Smirnov test. The signi cance of differences between groups with a normal distribution of parameters was assessed by the Independent Samples t-test. Associations between qualitative parameters were tested using the χ2 test or Fisher's exact test. P < 0.05 was considered a statistically signi cant value in all statistical analyses.
Results
During the aforementioned 3 year period (2015-2017), 356 patients met the inclusion criteria. There were 390 cases of intoxication registered in the PICU that were included and analyzed in our study. Among these, 28 patients were readmitted twice or even three times during the investigation period; therefore, the medical records of 356 patients were analyzed.
The population consisted of 186 (52.25%) boys and 170 (47.75%) girls. The average age of intoxicated minors at the moment of hospitalization was 15.02 years (SD 1.5). Table 1 indicates the distribution of the studied cases of minors based on year of hospitalization, gender, age, and signs of other self-harm. The Group I consisted of 305 (78.21%) cases (271 patient) of intoxication with the intention of becoming inebriated, while Group II contained 85 (21.79%) cases (85 patients) of intoxication obviously motivated by suicide. These two groups were compared with each other, and the resulting data is presented in Table 2.
The severity of intoxication was compared on the basis of GCS scores in both groups. The lowest score in Group I was 5, while that in Group II was 6. The highest score in both groups was 15.
We compared both groups based on the season of intoxication. In Group I, the dominant season was winter with 31.2%, while the most uncommon season was autumn with 20.2%. In Group II, the dominant season was spring with 30.9%, and the most uncommon season was autumn with 18.5%. No signi cant difference was detected.
In the majority of cases, the cause of intoxication in Group I was alcohol with 222 (72.79%) registered incidents, whereas drugs as the cause of inebriation was detected in 69 (22.62%) cases. Other causes included a mix of alcohol and drugs in 11 (3.31%) cases, and a mix of nicotine and volatile substances in 3 (0.98%) cases.
The main substance of intoxication in Group II was medication (only) with 75 (88.24%) incidents, a mix of medication and alcohol with 8 (9.41%) incidents, and a mix of medication and drugs with 2 (2.35%) incidents. The most common choise of drugs for inebriation and suicide attemtp in our study are presented in Fig. 1, and their distribution based on the year, is presented in Fig. 2.
In the 390 cases included into the study, 240 cases were associated with alcohol consumption. A total of 231 were intoxicated with alcohol with the intention of becoming inebriated (Group I). The average blood alcohol content in Group I was 45.67 ± 14.22 mmol/l (min. 8.9 mmol/l; max. 98.75 mmol/l). Eight minors from Group II (n 85) consumed alcohol too. Their average blood alcohol content was 35.29 ± 23.98 mmol/l (min. 1.20 mmol/l; max. 77.30 mmol/l). The signi cance value was P = 0.110, which means that there was no statistically signi cant difference in the alcohol consumption amount between minors with the goals of inebriation and suicide.
A total of 28 adolescents from Group I were hospitalized more than once: 22 of them were hospitalized twice (44 hospitalizations) at the PICU over the course of 3 years, while 6 of them were hospitalized three times (18 hospitalizations). The results of the comparison between relapsed minors and those who were hospitalized once within a span of 3 years based on gender, other self-harm, and social living situation are presented in Table 3. Among the 28 relapsed minors, the most common reason for intoxication during the rst hospitalization was alcohol (60.7%) and drugs (28.6%); in the second hospitalization, these values were 53.6% and 32.1%, respectively; in the third hospitalization, alcohol was the reason for intoxication in 83.3% of cases. There were no relapsed patients in Group II.
Discussion
According to the collected data, the average age of minors who were inebriated and who were attempting suicide was 15. Having compared these data to the statistics of other countries, we noticed a similar trend: the average age of minors who were deliberately intoxicated ranged between 14.5 and 16.0 [6,7,13].
The gender distribution was as follows: boys were more often hospitalized due to deliberate intoxication with the intention of becoming inebriated, whereas girls were more often hospitalized due to deliberate intoxication with the intention of committing suicide. Similar data can be found in other countries. In the Netherlands and Slovakia, boys were more often the ones attempting to become inebriated using alcohol [7,13] and, in Australia and the Czek Republic, girls were more often the ones attempting to commit suicide [10,14].
Moreover, during our research, we noticed that the highest numbers of attempts at intoxication with the intention of inebriation were recorded during the winter time, while attempts at committing suicide using medication were most frequent during the spring. A study conducted in the Czech Republic also showed that most minors attempted to commit suicide using medications in the springtime [10]. This coincides with several literature reviews regarding suicide seasonality, suggesting attempts at suicide become more frequent in the spring season [15,16].
The severity of intoxication was evaluated based on the GCS scale and the blood alcohol content. This study determined that minors who were intoxicated with alcohol and drugs were in a worse condition. We believe that the reason behind this is that the minors who were intoxicated with alcohol overdid it without understanding their own limits, whereas minors who were attempting to overdose on medication wanted to attract attention to their problems; they eventually confessed this to their relatives out of fear of death, which is why their GCS scores were higher.
GCS scores in other studies researching minors attempting to become intoxicated have been similar to those of our results: the mediane GCS score of studies conducted in Germany was 12.21 [17], while the median GCS score acquired through research conducted in Melbourne was 12, whith the average blood alcohol content in this study was 45.67 ± 14.22 mmol/l [14]. The blood alcohol content ranged from 1.76 to 1.98 g/l in the other studies, or 38.4-42.9 mmol/l [6,7].
We would like to mention that during the period 2015-2017, synthetic cannabinoids were a dominant substance among those intoxicated with drugs-this is a drug that is di cult to identify. In order to determine what the patient is intoxicated with, specialists must attain the patient's medical history, the accuracy of which is often dubious because teenagers tend to lie and negate. Since there may be no way of accurately identifying what the minor has consumed and what chemical substances were in the compound, it can be di cult to assign treatment. Considering data collected in Lithuania and other countries, synthetic cannabinoids are rising in popularity and are often the cause of serious health problems [18,19].
In this study, minors who intoxicated themselves using medication often consumed benzodiazepine-type drugs. According to a study conducted in Australia, the most common drugs used in an attempt of selfpoisoning were paracetamol and non-steroidal anti-in ammatory drugs, whereas sedatives and hypnotics were used by only 5% [14]. In a study conducted in the Slovak Republic, it was determined that 39% of cases, when a medication was used in an attempt to commit suicide, involved medicine that affected the nervous system [7].
Minors who were attempting to intoxicate themselves using medication were most often the ones who also harmed themselves cutting. Research conducted in other parts of the world on the issue of self-harm suggests that both self-harming adults and minors tend to seek to extinguish anxiety, stress, and bad feelings with alcohol or other psychoactive substances [20]. According to multiple authors, deliberate selfharm among minors who attempted to commit suicide is a very frequent occurrence [21].
The currently conducted study included 28 minors who were hospitalized recurrently. The results show that as the number of hospitalizations of the same minors' increases (girls were more often hospitalized recurrently than boys), the ratio of hospitalized minors living in orphanages also increased. Furthermore, deliberate self-harm was also more frequent.
Another very signi cant aspect is that a signi cant number of minors admitted to the hospital due to deliberate intoxication were from children's care homes. Considering that approximately 4000 minors in Lithuanian live in non-family situations, which constitutes 0.8% of people less than 18 years of age, we are able to assess the scope of this problem. Most minors become intoxicated with the intention of becoming inebriated, which is why caretakers should ensure the psychosocial health of minors being raised in orphanages.
At the moment, the Lithuanian Child Care System is undergoing reform, the aim of which is to close down all children's care homes by 2030 and to relocate orphans to live with small families in a natural home setting. Having collected results from a study that was conducted prior to the reform, it will be interesting to study how the situation changes after implementation of the reform.
Research conducted in the Netherlands determined that only 1.5% of minors admitted to the hospital due to alcohol poisoning were from orphanages [23]. Most studies show that the presence of a stable and supportive family reduces the risk of dangerous minor behavior. Studies also suggest that minors living in families with close social ties and support are less likely to suffer from mental disorders [24]. This type of relationship is exactly what minors living in orphanages lack.
Conclusions
Deliberate intoxication is equally common among female and male minors in the Lithuanian capital. The substance most often used for deliberate intoxication is alcohol. Minors with inebriate intoxication tend to be hospitalized in worse condition than those who attempted suicide through taking medication. Other signs of self-harm were signi cantly more common among suicidal minors. Living in children's care homes could be a risk factor for recurrent deliberate intoxication in children and adolescents. The problem of minors harming themselves using alcohol, drugs, and medication is a very signi cant and relevant one.
Declarations Ethics approval
Ethical approval was obtained from The Regional Ethics Committee of Vilnius University Hospital (protocol No. 18VVR-6304). All procedures in the current study were in accordance with ethical standards with the Declaration of Helsinki (1964) and its later amendments.
Funding information no funding Consent for publication not applicable.
Availabilityof data and materials data, used in the study, were manually collected from medical records and can't be shared due to Patient Rights Decree and Person Data Protection Decree of Lithuania. Figure 1 The substances most commonly used for suicide attempt and for inebriation during the study period.
|
2021-05-11T00:03:04.714Z
|
2021-01-23T00:00:00.000
|
{
"year": 2021,
"sha1": "60489c82f32b9f07389224300571bca8db7ef4dc",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-151766/v1.pdf?c=1631886689000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "9da059b6a275e7292976ba4e23f5064a37bb9c23",
"s2fieldsofstudy": [
"Medicine",
"Psychology",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
238807575
|
pes2o/s2orc
|
v3-fos-license
|
Assessment of the state of intracardiac hemodynamics and myocardial remodeling in women with gonarthritis,
. The problem of comorbidity is one of the most pressing problems of modern medicine. The presence of arterial hypertension and overweight
Key words: arthritis; arterial hypertension; overweight; comorbidity; intracardiac hemodynamics.
Background.The problem of comorbidity is one of the most pressing problems of modern medicine.The presence of arterial hypertension (AH) and overweight (OW)one of the most common diseases in the world, in patients with gonarthritis (GA), is associated with the earlier development of target organ damage and subsequent cardiovascular accidents [1,2].
Essential AH is one of the most common diseases in Europe,it affects about 30% of the general population [3].In addition, OW is diagnosed in 50-80% of patients with AH, which significantly increases the risk of developing cardiovascular complications (CVC) [4].
High blood pressure (BP) is found in 65-80% of patients with OW.In 50-70% of patients with OW, disorders of carbohydrate metabolism develop on the background of already existing AH [5].Over the past few decades, obesity has become a global epidemic.OW with AH are among the three most common diseases in the world [6].
It was found that an increase in systolic BP for every 10 mm Hg in patients with OW, increases the risk of CVD by 20% [7].
In addition, the combination of AH and OW is considered as the most aggressive in the context of mortality, which is associated with the earlier development of target organ damage and attackable by cardiovascular catastrophes [8].
Thus, the relevance of the research is due to the need for a comprehensive study of the comorbid pathology development mechanisms in gonarthritis using modern technologies available for practical health care, to search for rational and effective methods of its prevention and pharmacological correction, taking into account the course of the underlying pathology.
The purpose: to assess the state of internal cardiac hemodynamics and myocardial remodeling in women with GA, AH and OW, depending on their combinations.
Materials and methods.
At the first stage of the study ( 2018 -2020), on the basis of the rheumatology in -patient department of the «City Hospital № 10» (Zaporizhzhia), a prospective examination of 198 women with deferent combinations of GA, AH and OW was conducted.
The second stage of the study-сatamnestic-was carried out in 2019 -2021 on the basis of the out -patient department of the «Primary Health Care Center №9» (Zaporizhzhia).
At the first diagnostic stage, 198 women with GA and combinations of GA with AH and OW were examined at the age from 40 to 70 years (62.6 ± 1.9 on average) with an average duration of the disease 13.4±3.8years.According to clinical forms and predominant localization of joint lesions, patients were divided into two groups: 89 (44.94%) women with polyosteoarthritis and 109 (55.06%) women with GA.Of these 82.8% cases of combinations of GA with osteochondrosis of the lumbar, thoracic and cervical spine and 27.2% caseswith other joints.
The diagnosis of GA was established according to the ICD-10 criteria, recommendations of the Ukrainian Rheumatologists Association.The diagnosis of AH was verified according to the order of the Ministry of Health of Ukraine № 384 dated 24.06.2012.
The presence of OW was established according to the WHO recommendations (1998).Body mass index (BMI) was calculated using the formula: BMI = body weight (kg) / height 2 (m).If BMI was within 24-30, then OW was diagnosed, if within 30-34.9obesity of the 1st degree.
Patients with the fourth radiological stage of GA, third stage of AH, and 2-3 degrees of obesity were not included in the study.
In order to analyze the effect of comorbid pathology on the course of the underlying disease, all examined women were divided into three groups.The first group (G1) included 59 women with symptoms of GA without concomitant pathology, the second group (G2) -74 women with GA and AH, and the third group (G3) -65 women with GA and AH and OW.
All patients underwent a standard general clinical examination, including physical examination, history taking, and instrumental laboratory tests.
Ultrasound examination of the heart was performed on the ULTIMA PA apparatus (Radmir, Ukraine), according to the standard method with the recommendations of the American Echocardiographic Society.The presence of left ventricular hypertrophy (LVH) was established according to the recommendations of the European Society of Cardiology and the European Society of Hypertension (2018).
The results obtained were processed statistically using the Microsoft Excel software package and the Biostatistics 7.0 software.
Results and discussion
In accordance with the purpose of the study, the assessment of indicators characterizing the systolic and diastolic function of the heart was carried out in G1, G2 and G3.Analysis of structural and geometric indicators characterizing the size and volume of the atria showed that G2 and G3 patients was characterized by an increasing in the volumes of the left (LA) and right atria (RA), as well as the diameters of the left atria (LA-D), with a reliability of p <0.001 it differs from G1patients.At the same time, the diameter and volume of the LA in G3 patients were significantly bigger (p <0.001) compared with those in G2.In G3 and G2patients, there were significantly larger end-systolic (LVESD) and end-diastolic diameters and volumes of the left ventricle (LVEDD) compared to G1 (p <0.001), with significantly higher values (Table 1) of these indicators in the presence of comorbidity (p <0.001).showed that despite the fact that the study included patients exclusively with preserved systolic function, the EF of patients with G2 was significantly lower than in G1 (p <0.001).
Moreover, patients with G3 had significantly lower EF than patients with G2 (p <0.01).The degree of anteroposterior shortening of myocardial fibers (MFS) and the rate of circular myocardial fibers shortening (CMFSS), which were used to assess LV myocardial contractility, were lower in patients with both G2 and G3 compared to G1 (p <0.001).The presence of OW negatively influenced the contractility of the myocardium, as evidenced by the significantly lower values of MFS and CMFSS compared with G2 (p <0.01).
Patients of both groups also had an increase of posterior wall thickness (PWT) and interventricular septum thickness (ST) compared to G1 (p <0.001).At the same time, in the presence of OW, these indicators were large, as evidenced by the reliability of differences in indicators in G1 and G2 (p <0.001).In addition, the relative wall thickness (RWT) of patients with G2 acquired significantly higher values (p <0.001) compared with those for G1.
When examining patients, it was found that left ventricular mass (LVM) and left ventricular mass index (LVMI) in both groups of patients were significantly higher than in G1 (p <0.001).The presence of OW led to a further increase in LVM and LVMI in patients with AH, which is confirmed by significantly higher levels of these indicators in G1 compared with patients in G2 (p <0.001).
In order to assess how OW influenced changes in indicators characterizing the structural and functional state of the myocardium, a comparative assessment of the indicators of G3 and G1 patients was carried out ( As shown in Table 2, G3 patients had significantly larger values of the size and volume of the atria than those with normal body weight (p <0.001).This group of patients was also characterized by larger LVEDD, LVESD, EDV and ESV (p <0.01) with a lower EF (p <0.05) compared with patients with normal body weight.The degree of MFS decreased with an increase in BMI, as evidenced by the significant differences in the indicator between both groups of patients (p <0.05).G3 patients were characterized by significantly bigger ST and the PWT of the LV (p <0.01), however, LVM in both subgroups of patients did not acquire significant differences.LVMI in G3 was significantly higher (p <0.01) than in the case of normal body weight, however, LVMI did not differ significantly in both groups.
To assess the effect of AH on changes in indicators characterizing LV systolic function, patients with G2 were also compared with G1 (Table 3).127.53 ± 2.92 122.388 ± 2.36 Note: * -statistically significant differences between G1 and G2 G2 patients were characterized by significantly larger LA size and volume (p <0.01), as well as LVEDD, LVESD and ESV (p <0.05), than patients with G1.In addition, the specified group of patients had high values of PWT(S) (p <0.01) and LVM (p <0.05) in the absence of significant differences in RWT and LVMI.
At the next stage of the study, the types of LV remodeling in G2 and G3 patients were assessed.It was found that the vast majority of patients with OW (77.54%) and more than half of G2patients (53.42%) had left ventricular hypertrophy (LVH), which was not present in G1.
All patients of G3 had impaired LV geometry; 2 patients (2.22%) of G2 and 48 patients (81.26%) of G1 had normal LV geometry.The predominant types of remodeling in G2 were concentric hypertrophy (45.56%) and concentric remodeling (42.22%).At the same time, in the vast majority of cases in G3, hypertrophic variants of LV remodeling prevailedconcentric (64.06%) and eccentric (16.88%) hypertrophy, which are regarded as prognostically unfavorable types of remodeling.
Comparative assessment of the types of remodeling in G2patients showed significant differences in the variants of remodeling between G1 and G3.As for groups of patients as a whole, and when subgroups were identified depending on BMI, the distribution of types of remodeling in patients remained: the prevalence of hypertrophic variants in the presence of OW and almost the same percentage of cases of concentric hypertrophy and concentric remodeling at normal weight.
For the completeness of the clinical and prognostic picture, and the development of corrective approaches, we investigated the diastolic function of the heart, which is an important sign of the myocardial damage possibility in patients with OW.
It was found that in G2 and G3 patients the pulmonary artery diastolic pressure (PAD) was significantly higher (p <0.001) than in G1 (Table 4).At the same time, patients with G3 had a significantly higher (p <0.001) level of this indicator than patients with G2, which indicates a greater expressiveness of diastolic dysfunction in the presence of NS.
Analyzing the data of Table 4, it was found that the maximum speed of early filling (E) in G2 was significantly (p <0.001) less (66.47±0.64cm/s and 67.35±1.06cm/s, respectively) than in G1 (79.27±1.47cm/s).At the same time, there were no significant differences in the value of E between patients G3 and G2.The maximum late atrial filling rate (A) in patients G2 was significantly (p <0.001) higher than in patients G1.The high A values found in patients G3 significantly differed from those in G2 (p <0.01).According to the ratio of these indicators (E / A) G3 and G2, significantly (p <0.01) differed from G1.
The deceleration time of the early diastolic flow rate (DT) in G2patients was significantly (p<0.01)longer than in G3patients.In G3, this indicator tended to decrease, however, there was no significant difference in DT levels between G3 and G1 patients.
Thus, summarizing the research, the following results can be determined: the presence of OW in patients with GA and AH negatively affects the contractile function of the heart.In G3 patients EFthe main indicator of LV systolic function, was significantly lower than in G1 (p<0.001) and G2 (p<0.01), which led to a further increase in LVM and LVMI in G3patients.Patients with OW and with comorbid GA and AH had significantly larger values of the size and volume of the atria (p<0.001),larger values of PWT(S) (p<0.01) and LVM (p<0.05), as well as larger LVEDD, LVESD, EDV and ESV (p<0.01) with a lower EF (p<0.05) compared with patients with normal body weight.The degree of MFS decreased with an increase in BMI (p <0.05).While maintaining LV systolic function in the vast majority of patients G3, hypertrophic variants of LV remodeling were establishedconcentric (64.06%) and eccentric (16.88%) hypertrophy, which are prognostically unfavorable types of remodeling.At the same time, hypertrophic and non-hypertrophic variants of LV remodeling are almost equally encountered in G2 patients.G3 patients had more pronounced diastolic dysfunction compared to patients of G2, where it was represented by initial changes in the form of impaired relaxation, while in 14.38% of patients G3, diastolic dysfunction progressed in the form of pseudo normalization of blood flow.These changes confirm the high values of the mean pressure in the pulmonar y artery and the level of the integral indicator of diastolic function in patients with OW.The severity of diastolic dysfunction in patients is determined by an BMI increase, confirming the reliability of the differences in most indicators of diastolic function in patient subgroups G1 and G3.Confirmation of the association of diastolic dysfunction and OW is the revealed pattern: the second stage of diastolic dysfunction took place in a significantly greater (p<0.05)number of patients with OW compared with patients with normal body weight.
Table 1
Structural and functional disorders of the myocardium in women with GA Evaluation of ejection fraction (EF) -The main indicator of LV systolic function
Table 3
Comparison of systolic heart function indicators in G1 and G2
Table 4
The state of diastolic function in the examined patients with GA
|
2021-10-15T00:09:14.511Z
|
2021-07-29T00:00:00.000
|
{
"year": 2021,
"sha1": "ddd5395f222e0248f1de0a52abfc01404d164c9e",
"oa_license": "CCBYNCSA",
"oa_url": "https://apcz.umk.pl/JEHS/article/download/JEHS.2021.11.07.017/29433",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2585feec0cd0fbf31b9b59bfdc5f86282c5da848",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
17383818
|
pes2o/s2orc
|
v3-fos-license
|
HIV-1 resistance conferred by siRNA cosuppression of CXCR4 and CCR5 coreceptors by a bispecific lentiviral vector
Background RNA interference (RNAi) mediated by small interfering RNAs (siRNAs) has proved to be a highly effective gene silencing mechanism with great potential for HIV/AIDS gene therapy. Previous work with siRNAs against cellular coreceptors CXCR4 and CCR5 had shown that down regulation of these surface molecules could prevent HIV-1 entry and confer viral resistance. Since monospecific siRNAs targeting individual coreceptors are inadequate in protecting against both T cell tropic (X4) and monocyte tropic (R5) viral strains simultaneously, bispecific constructs with dual specificity are required. For effective long range therapy, the bispecific constructs need to be stably transduced into HIV-1 target cells via integrating viral vectors. Results To achieve this goal, lentiviral vectors incorporating both CXCR4 and CCR5 siRNAs of short hairpin design were constructed. The CXCR4 siRNA was driven by a U6 promoter whereas the CCR5 siRNA was driven by an H1 promoter. A CMV promoter driven EGFP reporter gene is also incorporated in the bispecific construct. High efficiency transduction into coreceptor expressing Magi and Ghost cell lines with a concomitant down regulation of respective coreceptors was achieved with lentiviral vectors. When the siRNA expressing transduced cells were challenged with X4 and R5 tropic HIV-1, they demonstrated marked viral resistance. HIV-1 resistance was also observed in bispecific lentiviral vector transduced primary PBMCs. Conclusions Both CXCR4 and CCR5 coreceptors could be simultaneously targeted for down regulation by a single combinatorial lentiviral vector incorporating respective anti-coreceptor siRNAs. Stable down regulation of both the coreceptors protects cells against infection by both X4 and R5 tropic HIV-1. Stable down regulation of cellular molecules that aid in HIV-1 infection will be an effective strategy for long range HIV gene therapy.
Background
HIV/AIDS continues to be a major public health problem worldwide with millions of people currently infected and new infections being on the rise. As no effective vaccines are currently available for prevention, new and innovative therapies need to be developed. Although combinatorial therapies such as HAART have proven to be effective in prolonging life, they do not afford a complete cure. Other constraints with HAART therapy are the development of drug resistant viral mutants and toxicity after prolonged therapy. Intracellular immunization by gene therapy strategies offers a promising alternative approach for controlling and managing HIV disease. A number of previous approaches that involved the use of transdominant proteins [1][2][3], decoys [3][4][5][6][7], and ribozymes [5,[8][9][10][11][12] had shown initial promise but fell short of practical utility in providing adequate protection. With the discovery that the RNA interference phenomenon operates in mammalian cells and is highly effective in selective gene silencing, new potent small interfering RNA (siRNA) molecules have become available to add to the anti-HIV arsenal [13].
RNAi is a highly potent mechanism of post-transcriptional gene silencing. Mediated by sequence specific siR-NAs, it can effectively down regulate expression of either viral or cellular RNA target molecules by selective degradation of mRNAs [13][14][15][16]. Mechanism of destruction involves an endonuclease present in the RISC complex which is guided by the antisense component of the siRNA for target recognition. A number of reports have shown that delivery of siRNAs by transfection of presynthesized or plasmids encoding siRNAs into cultured cells can effectively inhibit HIV-1 infections [17][18][19][20][21][22][23][24][25][26]. Antiviral effects of these delivery methods are only transient due to eventual degradation and dilution of siRNAs during cell division. For HIV gene therapy strategies to succeed in long range, it is necessary that siRNA coding transgenes be maintained and expressed long term in a virus susceptible target cell. In this regard, lentiviral vectors have proven to be highly effective in high efficiency gene transduction and sustained gene expression.
A number of previous approaches using either synthetic siRNAs or plasmid expressed constructs have successfully targeted viral transcripts and achieved effective viral inhibition. Of these, some anti-HIV-1 siRNAs, such as siRNAs against tat, tat-rev had been introduced into lentiviral vectors and their efficacy was demonstrated both in cell lines and primary T cells and macrophages [27,28]. Promising data was also obtained in experiments showing that antirev siRNAs against HIV-1 were functional in conferring viral resistance in differentiated T cells and macrophages derived from lentiviral transduced CD34+ hematopoietic progenitor cells [29].
In addition to targeting viral transcripts, many studies including ours also investigated the efficacy of siRNAs in down regulating host cell molecules necessary for HIV-1 infection [18,21,23,24,30,31]. An advantage in targeting cellular molecules is that efficacy will be more broad spectrum against all the clades of the virus and the frequency of escape mutants will be lower. Down regulation of the primary cell surface receptor CD4 and consequent inhibi-tion of HIV-1 infection was shown using synthetic siR-NAs. However, since CD4 is an essential cell surface molecule for immunological function, it is not a practical target for HIV gene therapy. Chemokine receptors CCR5 and CXCR4 play critical roles as coreceptors for viral entry during infection with macrophage tropic R5 and T cell tropic X4 HIV-1 viral strains respectively [32,33]. Thus they are suitable targets for siRNA mediated down regulation. Since both R5 and X4 viral strains are involved in disease pathogenesis, it is important to consider blocking of both respective coreceptors when developing effective therapeutics. In a segment of the human population, a naturally occurring 32-bp deletion in the CCR5 gene results in the loss of this coreceptor thus conferring significant resistance to HIV infection [34][35][36]. Homozygous or heterozygous individuals for this mutation remain physiologically normal. With regard to the CXCR4 coreceptor, it was found to be dispensable for T cell development and maturation in murine studies [37]. These findings suggest that CCR5 and CXCR4 are promising targets for HIV therapies.
Based on this rationale, recent work with synthetic siRNAs demonstrated that down regulating either CXCR4 or CCR5 will protect cells from X4 or R5 HIV-1 strains respectively at the level of viral entry [18,21,23,24]. Although stable expression of an anti-CCR5 siRNA was achieved using a lentiviral vector in one study, down regulating CCR5 alone in the face of an HIV-1 infection is insufficient [31]. Therefore, we recently experimented with synthetic bispecific combinatorial constructs targeted to both CXCR4 and CCR5 and have shown their efficacy in cultured cells [24]. To make further progress, our present studies are directed towards constructing a single bispecific lentiviral vector expressing both CXCR4 and CCR5 siRNAs. Using this combinatorial construct, here we show high efficiency transduction, simultaneous down regulation of both coreceptors resulting in HIV-1 resistance.
Coreceptor down regulation by a bispecific lentiviral vector
Our major goal in these studies is to introduce both CXCR4 and CCR5 siRNAs into a single lentiviral construct to achieve their stable expression in transduced cells. Lentiviral vectors offer advantages over conventional retroviral vector systems since they can transduce dividing as well as nondividing cells and are less prone to transgene silencing [44][45][46][47]. The transfer vector HIV-7-GFP-XHR (referred to as XHR) contained a short hairpin type anti-CXCR4 siRNA driven by a Pol-III U6 promoter followed by a short hairpin anti-CCR5 siRNA driven by a different Pol-III promoter, H1. Downstream, the reporter gene, EGFP is driven by a CMV promoter. The control GFP-alone vector, HIV-7-GFP, contained only the reporter gene EGFP (Fig 1).
Magi-CXCR4 cells constitutively expressing CXCR4 on the cell surface when transduced with the control vector or XHR vector had shown 97% and 83% EGFP expression respectively as measured by FACS analysis indicating high efficiency of transduction (Fig 2A and 2C). To determine if CXCR4 was down regulated by the respective siRNA in the XHR construct, the transduced cells were analyzed for CXCR4 surface expression. The surface levels of CXCR4 were reduced significantly in XHR transduced cells (73% lower) compared to the cells transduced with control vector ( Fig 2B and 2D) indicating the efficacy of the CXCR4 siRNA on its target. Similarly, to determine the activity of the anti-CCR5 siRNA in the XHR vector, transduced Ghost R5 cells that constitutively express CCR5 were evaluated. As seen in Fig 3A and 3C, high levels of transduction (84% and 83%) were seen in Ghost-R5 cells with either the control vector or XHR vector, respectively. When the transduced cells were analyzed for CCR5 expression, a dramatic decrease in CCR5 expression was seen in XHR cells (72%) compared to control vector transduced cells (Fig 3B and 3D). These results had shown that the bispecific lentiviral vector XHR efficiently down regulates both CXCR4 and CCR5 targets in respective cells.
Expression of siRNAs and down regulation of CXCR4 and CCR5 transcripts
To confirm that the down regulation of both CXCR4 and CCR5 coreceptors as seen by FACS analysis is due to reduced levels of the corresponding mRNAs, vector transduced cells were analyzed by RT-PCR. As an internal control, GAPDH mRNA was also analyzed. XHR vector transduced cells showed considerable reduction in transcript levels for both CXCR4 and CCR5 as compared to Bispecific lentiviral vector (XHR) encoding anti-CXCR4 and CCR5 siRNAs Figure 1 Bispecific lentiviral vector (XHR) encoding anti-CXCR4 and CCR5 siRNAs. A) Control transfer vector pHIV-7-GFP encoding a CMV promoter driven EGFP reporter gene. B) To derive the bispecific vector pHIV-XHR-GFP, a U6 promoter driven short hairpin CXCR4 siRNA cassette was cloned into the BamHI site upstream to the CMV-EGFP cassette. The H1-CCR5 siRNA cassette was inserted into an MluI site downstream to the U6-CXCR4 siRNA cassette.
non-transduced and control GFP vector transduced cells. The levels of GAPDH control mRNA remained unchanged in all samples (Fig 4). To validate the expression of individual siRNAs in transduced Magi-CXCR4 and Ghost R5 cells, cellular RNA was analyzed by northern analysis for their presence. As internal controls, the presence of constitutively expressed miRNA-16 RNAs were also analyzed in parallel. As expected, comparable levels of miRNA-16 Cell surface down regulation of CXCR4 in XHR transduced Magi-CXCR4 cells RNAs (22 bp in length)were detected in GFP control vector transduced as well as in XHR vector transduced cells (Fig 5A). RNAs corresponding to CXCR4 and CCR5 shR-NAs (representing the 21nt antisense strand of each shRNA) were seen in XHR transduced but not in GFP control vector transduced cells (Fig 5B).
Bispecific siRNA vector does not induce interferon
Double stranded RNA molecules longer than ~30 bp are known to induce the interferon pathway in response to viral infections. As siRNAs are generally comprised of 19-24 bp in length, they are not expected to activate such a response that mediates a non-specific down regulation of Cell surface down regulation of CCR5 in XHR transduced Ghost-R5 cells cellular or viral mRNAs. However, recent data had shown that in some circumstances, certain siRNAs might induce variable levels of interferon activation [48][49][50]. To rule out such a possibility with the present siRNAs, we looked for upregulation of phosphorylated-PKR by western blot analysis. PKR is a protein kinase that becomes activated through phosphorylation in the presence of dsRNA and is involved during the interferon response. Our results have shown that the levels of phosphorylated PKR remain unchanged in XHR transduced cells similar to mock and GFP vector transduced cells. In contrast, elevated levels of phosphorylated PKR could be seen in poly I:C transfected cells used as positive controls (Fig 6). These data exclude the possibility of non-specific interferon activation by the combinatorial lentiviral construct.
Resistance of siRNA transduced cells to HIV-1 infection
To determine if down regulation of the essential coreceptors, CXCR4 and CCR5, translated to virus resistance, transduced Magi-CXCR4 and Ghost R5 cells were challenged with X4 (NL4-3) and R5 (BaL1)-tropic strains of HIV-1 respectively. Viral p24 antigen levels at different days post-challenge were determined by ELISA to quantify levels of HIV-1 resistance. Over a 10-fold reduction in viral antigen levels was seen with both XHR transduced Magi-CXCR4 and Ghost-R5 cells as compared to nontransduced and GFP-alone vector transduced cells (Fig 7). There was a slight increase in viral production in XHR transduced cells on days 5 to 7. This could be due to nontransduced and/or low siRNA expressing cells producing the virus. We next wanted to determine if the XHR vector expressing CXCR4 and CCR5 siRNAs is effective in physiologically relevant cells for gene therapy. Accordingly, PBMCs transduced with vectors were challenged in the same manner as above. A 3-fold level of inhibition was seen on days 3, 5, and 7 (Fig 8). These results established that the XHR vector is also effective in primary cells in inhibiting HIV-1. Although clearly significant, the levels of virus inhibition were not as dramatic as seen with Magi and Ghost cell lines. The observed levels of viral inhibition in primary PBMC are similar to those observed in a recent report [31]. Lower levels of protection in PBMCs were likely due to the lower levels of transduction. Future studies that are aimed at increasing transduction efficiencies into primary lymphocytes and macrophages are likely to overcome this hurdle.
In summary, our studies have shown for the first time that a single lentiviral vector could be used to stably deliver two different siRNAs targeted to two different cell surface co-receptor molecules and achieve protection against both X4 and R5 tropic HIV-1 viral strains. The short hairpin design permitted use of a single promoter to transcribe both the sense and anti-sense strands of each of the siRNAs. No promoter interference was observed between the U6 promoter driving the transcription of CXCR4 siRNA and the H1 promoter driving the CCR5 siRNA since comparable amounts of both the siRNAs could be seen in transduced cells. Furthermore, possible interferon induction by the combinatorial construct was also ruled out.
RT-PCR detection of CXCR4 and CCR5 mRNA down regulation
A major advantage in using a combinatorial lentiviral construct targeted to both the coreceptors is that infection with either of the viral strains could be prevented at the entry step thus eliminating the possibility of proviral integration and viral latency. Given the success with the current bispecific construct, other novel constructs could be designed and experimented with that incorporate siRNAs targeted to both the cellular as well as viral targets. Based on the design employed here, it is possible to introduce more than two siRNAs in a single construct in the future. However caution should be exercised while incorporating multiple siRNAs in a single construct because the possibility exists that over expression of foreign siRNAs in a cell may have undesirable effects such as saturating the endogenous RISC complex and consequent toxicity. Such a possibility needs to be tested in long range experiments in vivo. We previously have introduced a monospecific siRNA targeted to HIV-1 rev into CD34 hematopoietic progenitor cells via lentiviral vectors and derived transgenic macrophages in vitro and T cells in vivo [29]. The transgenic cells were found to be apparently normal while markedly resistant to HIV-1 infection.
No deleterious effects are expected by the stable knock down of the CCR5 coreceptor in vivo since individuals harboring a 32 bp deletion in the corresponding gene are physiologically normal [34,35]. Although CXCR4 down regulation in circulating mature T cells in the periphery may not have any insurmountable ill effects, this may have possible drawbacks in a stem cell setting due to its role in cell homing into bone marrow [51,52]. Additionally, recent gene expression profiling studies indicated Northern analysis to detect siRNA expression in transduced cells some off-target effects by siRNAs [53]. Therefore, the present combinatorial construct targeted to both CXCR4 and CCR5 coreceptor molecules need to be thoroughly tested in an in vivo system such as the SCID-hu mouse model to evaluate its efficacy and possible toxicity in differentiated cells before it can be used for gene therapy in human subjects. Such experiments are currently underway.
Conclusions
For HIV/AIDS gene therapy strategies to succeed, novel molecules need to be harnessed. In this regard, siRNAs offer great potential. Exploitation of these promising candidates to down regulate essential cellular coreceptors via the use of lentiviral vectors facilitates long term derivation of resistant T cells and macrophages which are the main targets for the virus. Our results showed for the first time that expression of both CXCR4 and CCR5 siRNAs in combination is possible by the use of lentiviral vectors. Coreceptor specific siRNAs stably transduced with the bispecific lentiviral vector showed marked resistance against both T cell tropic and monocyte tropic HIV-1 infection in cell lines and primary PBMCs. The newly developed bispecific vector shows promise for potential in vivo application.
Plasmid and lentiviral vector construction
Previously characterized siRNAs against CXCR4 and CCR5 were used in generating the bispecific lentiviral vector [23,24,30]. A third generation lentiviral vector backbone was employed to derive the bispecific constructs. The two cis-acting elements, namely, the central DNA flap consisting of cPPT and CTS (to facilitate the nuclear import of the viral preintegration complex) and the WPRE (to promote nuclear export of transcripts and/or increase the efficiency of polyadenylation of transcripts), are engineered to enhance the performance of the vector [38,39]. An siRNA expression cassette targeting CXCR4 under the control of the Pol-III U6 promoter was PCR amplified from the plasmid pTZ-U6+1 as described by Castanotto et al [40]. This cassette was cloned into pHIV-7-GFP transfer vector in the BamHI site immediately upstream of the CMV-EGFP gene. This cassette contained a MluI restriction site downstream from the CXCR4 siRNA sequence for subsequent cloning of the H1 promoter driven CCR5 siRNA cassette. The H1-CCR5 siRNA expression cassette was also generated as described above using the plasmid pSUPER (Oligoengine, Seattle, WA). Sequencing and confirmation of candidate clones was performed by Laragen Inc. (Los Angeles, CA). The transfer vector containing the inserts U6-X4 siRNA and H1-CCR5 siRNA is termed pHIV-XHR-GFP.
Lack of interferon induction in siRNA transduced cells
Cell culture and vector production 293T cells and PBMCs were maintained in DMEM media supplemented with 10% FBS. Magi-CXCR4 cells obtained from the AIDS Reference and Reagent Program were maintained in media as previously described [41,42]. Ghost-R5 cells obtained from the AIDS Reference and Reagent Program were maintained in media as previously described [43]. To generate lentiviral vectors, fifteen micrograms of transfer vector with either GFP-alone or XHR were transfected along with 15 ug pCHGP-2, 5 ug pCMV-Rev, and 5 ug pCMV-VSVG into 293T cells at 60% confluency in 100 mm culture dishes using a calcium phosphate transfection kit (Sigma-Aldrich, St. Louis, MO). Six hours after transfection, fresh medium was exchanged. Cell culture supernatants containing the vector were collected at 24, 36, 48, and 60 hours post transfection and pooled. Vector supernatants were concentrated by ultracentrifugation and later titrated on 293T cells using FACS analysis for GFP expression.
Lentiviral vector transduction and FACS analysis
Magi-CXCR4 and Ghost-CCR5 cells were seeded in 6-well plates 24 hours prior to transduction, 5 × 10 5 cells per well. Cells were transduced with lentiviral vectors at an m.o.i. of 10 in the presence of 4 ug/ml polybrene for 2 hours. For transduction of PBMCs, cells were first isolated from whole blood by Histopaque ® -1077 (Sigma-Aldrich), and then cultured in CD3 and CD28 antibody coated plates. Three days after stimulation, PBMCs were transduced at an m.o.i of 20 in the presence of 4 ug/ml polybrene. PBMC transduction was repeated the following day. Seventy-two hours post transduction with siRNA containing lentiviral vectors, FACS analysis was performed to determine the levels of cell surface expression of CXCR4 and CCR5. Non-transduced and transduced cells were stained with appropriate antibodies conjugated with PE-Cy 5 (Pharmingen, San Diego, CA) namely, anti-CXCR4 for Magi-CXCR4 cells and anti-CCR5 for Ghost-CCR5 cells. Transduction efficiency was determined by assaying for EGFP expression. FACS analysis was performed on the Beckman Coulter Epics XL using ADC software for analysis.
Northern analysis for shRNA expression
Total RNA was extracted from non-transduced and transduced Magi-CXCR4 and Ghost-CCR5 cells using the RNA-STAT-60 reagent (Tel-Test, Friendswood, TX). Small RNAs, <200 nt, were separated and concentrated using the mirVana™ miRNA Isolation Kit (Ambion, Austin, TX). Twenty micrograms of small RNAs were hybridized overnight at 37°C using the mirVana™ miRNA Detection Kit (Ambion) with γ-32 P labeled probes made using the mir-Vana™ Probe & Marker Kit (Ambion). Probes were complementary to the antisense strands of CXCR4 and CCR5 siRNAs. Hybridization reactions were processed according to the manufacturer's protocol and run on 15% polyacrylamide TBE-Urea gels. Gels were then exposed to X-ray film. A probe complementary to miRNA-16 supplied with the miRNA detection kit was used as an internal control.
Western Blot analysis of phosphorylated PKR
Cell lysates of non-transduced and transduced cells were run on 10%-polyacrylamide-SDS TBE gels. Proteins were immunoblotted onto Immobilon™-P membranes (Millipore, Bedford, MA) and incubated with antibody specific for phosphorylated-PKR (Sigma-Aldrich), while anti-actin antibody (Sigma-Aldrich) was used to detect cellular actin as an internal control. A secondary antibody, goat antirabbit IgG conjugated with alkaline phophatase (Promega, Madison, WI), was then added. An alkaline phophatase substrate reagent, Western Blue (Promega), was used to visualize the bands.
HIV-1 Challenge
To determine if down-regulation of CXCR4 and CCR5 transcript levels and cell surface expression inhibited HIV-1 infection, non-transduced and transduced cells were challenged with NL4-3 (X4-tropic) and BaL-1 (R5-tropic) strains of HIV-1, at an m.o.i of 0.01, as previously described [24]. Viral supernatants were collected daily HIV-1 challenge of XHR transduced PBMCs Figure 8 HIV-1 challenge of XHR transduced PBMCs. Vector transduced PBMCs were challenged with either X4 tropic or R5 tropic viruses. Culture supernatants were collected at different days post challenge and p24 antigen was assayed by ELISA. Transduced PBMCs challenged with either HIV-1 NL4-3 (A) or BaL-1 (B). Data presented is from triplicate experiments.
from infected Magi-CXCR4 and Ghost-CCR5 cells for p24 assay. ELISA was used to determine p24 values employing a Coulter-p24 kit (Beckman Coulter, Fullerton, CA). For PBMC challenge experiments, non-transduced and transduced cells were infected with NL4-3 and Bal-1 strains and cell culture supernatants were collected on days 1, 3, 5, and 7 post-infection to measure p24 levels.
|
2014-10-01T00:00:00.000Z
|
2005-01-13T00:00:00.000
|
{
"year": 2005,
"sha1": "218c4f87275da0fa52bec76b4f382b74bf8b59c6",
"oa_license": "CCBY",
"oa_url": "https://aidsrestherapy.biomedcentral.com/track/pdf/10.1186/1742-6405-2-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ba04ee8c93091eb542c3150d8cde06748cb98785",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.