Modelling social identification and helping in evacuation simulation
Social scientists have criticised computer models of pedestrian streams for their treatment of psychological crowds as mere aggregations of individuals. Indeed most models for evacuation dynamics use analogies from physics where pedestrians are considered as particles. Although this ensures that the results of the simulation match important physical phenomena, such as the deceleration of the crowd with increasing density, social phenomena such as group processes are ignored. In particular, people in a crowd have social identities and share those social identities with the others in the crowd. The process of self categorisation determines norms within the crowd and influences how people will behave in evacuation situations. We formulate the application of social identity in pedestrian simulation algorithmically. The goal is to examine whether it is possible to carry over the psychological model to computer models of pedestrian motion so that simulation results correspond to observations from crowd psychology. That is, we quantify and formalise empirical research on and verbal descriptions of the effect of group identity on behaviour. We use uncertainty quantification to analyse the model's behaviour when we vary crucial model parameters. In this first approach we restrict ourselves to a specific scenario that was thoroughly investigated by crowd psychologists and where some quantitative data is available: the bombing and subsequent evacuation of a London underground tube carriage on July 7th 2005.
Introduction
The importance of evacuation simulations for pedestrians is generally accepted for designing buildings and for ensuring safety at mass events. Multiple models for pedestrian simulation were developed in the past decades. The forerunners, and still the most popular among them, are force-based models (Helbing and Molnár, 1995; Langston et al., 2006; Dietrich and Köster, 2014) and cellular automata (Gipps and Marksjö, 1985; Blue et al., 1997; Schadschneider, 2001; Kirik et al., 2007), but several alternatives were added to the portfolio in more recent years (Dijkstra et al., 2006; Sud et al., 2008; Seitz and Köster, 2012).
While a realistic locomotion model is an important basis for simulating pedestrian evacuations, many crucial aspects of social behaviour have been neglected. In particular, the emergence and effects of group behaviour have been examined extensively in empirical research by social psychologists but are often missing in computer models (Templeton et al., 2015).
On the other hand, there are a number of publications on pedestrian motion models which incorporate social and psychological behaviour (Pan et al., 2007; Chu et al., 2011; Tsai et al., 2011; Chu and Law, 2013). Yet, these publications mostly describe computer frameworks designed to incorporate many possible but as yet unspecified behavioural models. That is, their focus is on the implementation of the framework. Notably, they do not attempt to emulate empirical findings from psychology. Furthermore, information on how the behaviours they implemented are modelled in detail is missing in most cases. Another problem is the large number of parameters used to control the interworking between the software modules that instantiate the supposed behavioural models. These three aspects - the distance of the models to the underlying theories, the lack of detailed modelling information, and the number of parameters to calibrate - make it nearly impossible to replicate the models and to check the models against observations. From a safety scientist's point of view, this is a severe drawback. This is the gap that we attempt to close. With the Social Identity Model Application (SIMA) for pedestrian simulation, we present a step towards combining research from psychology and computer science. We focus on one pivotal behaviour that was observed in several evacuations: helping others (see Figure 1). In modelling, we directly follow the ideas of the self-categorisation theory (SCT) (Turner et al., 1987) and the social identity theory (SIT) (Tajfel and Turner, 1979), which are both part of the social identity approach. We describe how to algorithmically formulate helping behaviour in an evacuation and how to choose the parameters so that other scientists can replicate, validate, and use the model. We keep the model independent of the locomotion level, and the parameter space small. We argue that the model is parsimonious without being reductionist and thus falsifiable (Seitz et al., submitted 2015).
Materials and methods
This contribution combines findings from social psychology and computer science. To address an audience from both disciplines, it is necessary to introduce the methods with a little more depth and detail than one would, perhaps, use in either one of the single disciplines.
Social identity theory and self-categorisation theory
Incorporating evidence from crowd psychology
Our model is based upon extensive empirical research on collective behaviour by social psychologists. There are numerous real life examples of collective behaviour where people act together as a group: for example orchestras, football fans, and sports teams. Two prominent theories which provide insight into how this group behaviour emerges are social identity theory (Tajfel and Turner, 1979) and self-categorisation theory (Turner et al., 1987). According to social identity theory, people have multiple social identities which are distinct from the identity of a person as an individual because they refer to one's identity as part of a social group, such as a fan of a certain sports team. Self-categorisation theory refers to the process whereby one categorises oneself as an individual or a group member. It suggests that collective behaviour occurs through the process of depersonalisation, where individuals self-stereotype themselves in line with their group. This occurs through a transformation of one's identity from the personal self to the collective self. It is this self-categorisation as a group member which makes collective behaviour possible. It can therefore explain the behavioural differences between a physical crowd of individuals (who are simply in the same location together) and a psychological crowd (where people in a crowd act together).
The effect of social identities on people's behaviour is crucial to understand for crowd modellers who aim to simulate psychological crowds. Research has shown that a shared social identity amongst crowd members increases the prevalence of supportive behaviours among people in emergency evacuations (Drury et al., 2009c). For example, when they share a social identity, people evacuating may be more likely to coordinate their walking behaviours with others and to let them move first rather than competing for the same exit. Research on emergency mass decontamination has also demonstrated that social identity is key to understanding the coordination of queuing behaviour, showing that members of the public are more likely to participate in queuing if they identify with the person organising the situation (Carter et al., 2014).
An example of collective behaviour in an emergency evacuation comes from the July 7th London bombings (Drury et al., 2009b). In this paper, we will focus on this event, which has been analysed by social psychologists.
The event: the London bombings, 7th July 2005
At 8.50am, during the peak rush hour in central London, three bombs were set off simultaneously in the London underground. The bombs were coordinated so that they detonated on three separate tube lines when the tube trains were between busy stations. The passengers in the tube trains were plunged into darkness and could not know if there were going to be further explosions, with no information on when help would arrive. Emergency services did not reach them for some time. Over 700 people were injured in the attacks and 52 people were killed. In this emergency situation, the survivors of the bombings came together to tend to the injured and find a way of safely evacuating. In contrast to portrayals of crowds as panicking and acting selfishly to evacuate, research has shown that the opposite occurred. In the aftermath of the disaster, Drury et al. (2009b) collected 141 contemporaneous accounts from survivors. 140 of the survivors reported seeing helping behaviour and mutual aid, such as offering water to others, providing first aid, and applying makeshift bandages. Only three of the survivors reported witnessing selfish behaviour. This was replicated in internet sources and the public enquiry; first-hand accounts showed that 42 people out of 127 described seeing help and only 11 described seeing people act selfishly.
Crucially, Drury et al. (2009b) found that survivors reported feeling part of a group with the other survivors. Many participants gave accounts of where people coordinated to help the group to escape. In fact, more survivors described the behaviour of the crowd as calm and orderly, rather than as panicking, given the unexpected emergency which they were faced with. An important way that the survivors assisted safe evacuation was by providing mutual aid through cooperating and coordinating with each other. There was evidence of orderly queuing and people allowing others to go first, which in turn helped people overall by making the evacuation safer. Other examples include leaving in a calm manner rather than rushing or pushing past each other, which could have caused hindrance to the evacuation of the group overall. Drury et al. (2009b) demonstrate how survivors emotionally supported one another, allowed others to evacuate first, and stayed behind with people (who were previously strangers) at a personal risk to themselves. This behaviour was relatively common across the survivors, and when combined with the reports of feeling part of a group, it is in line with the idea that the commuters shared a social identity which was invoked through the common fate of the emergency situation. Although there are numerous examples of this collective mutual aid in emergency situations, in this paper we will focus on one key helping behaviour that was documented in the London bombings: assisting injured people to evacuate safely. We argue that, in order to adequately simulate collective behaviour in emergency and disaster events, modellers should heed the research conducted by social psychologists on crowd behaviour. Specifically, modellers should focus on the role of a shared group identity and incidents of ingroup helping by incorporating aspects of self-categorisation theory into their simulations. As such, we use the London bombings scenario to propose a realistic model of collective behaviour which combines the comprehensively validated Optimal Steps Model for locomotion with principles from the well-established social identity theory and self-categorisation theory.
Collecting empirical evidence
The helping behaviour that is modelled in SIMA is taken from accounts given by survivors of the July 7th 2005 London bombings and the behaviour of the crowd in the aftermath. We have based this on the research by Drury et al. (2009b), which examined survivors' experiences of the bombings. They collated and analysed the survivors' perceptions of people's behaviour and the feelings that they experienced during the event. This was conducted by collecting one hundred and forty-one accounts in contemporaneous newspaper material, in addition to personal archives and accounts from eighty-one survivors which were recorded on the day or in the immediate aftermath of the bombings. Crucially, the survivors in this sample were asked about the level of danger that they felt they were in, how the others in the crowd behaved, whether people performed helping behaviour or acted selfishly, and whether their perception towards others changed throughout the course of the event (for a comprehensive account of the methodology, see Drury et al., 2009b). Notably, support for these findings can be seen in Drury et al. (2009a), which examines the accounts of crowd behaviour given by 21 survivors of 11 emergencies.
The underlying pedestrian motion model
For the locomotion level of our simulation, we use the Optimal Steps Model (OSM) (Seitz and Köster, 2012; von Sivers, 2013; von Sivers and Köster, 2015). As in many models, pedestrians are represented by circles, with radius 20 cm, that represent the solid body. The model deviates from older approaches in its treatment of motion: as in reality, pedestrians make steps to move forward. They do not glide along smooth trajectories that resemble imaginary rails as in force-based models, or hop from cell to cell as in cellular automata. For this, each agent searches for the possible next position within a disk whose radius is the agent's maximum stride length. The maximum stride length is determined individually according to empirical findings that link the stride length to free-flow speed (Grieve and Gear, 1966; Kirtley et al., 1985; Jelić et al., 2012; Seitz and Köster, 2012). The free-flow speed, that is, the speed an agent is supposed to prefer when walking uninhibited on a flat surface, is a standard input parameter in pedestrian simulations.
The search for the next position constitutes a two-dimensional optimisation problem on the step disk. The objective function is a superposition of dedicated utility functions that express closeness to the target, or rather a short travel time to the target (Köster and Zönnchen, 2014), sufficient interpersonal distance to other pedestrians (Hall, 1966; von Sivers and Köster, 2015), and sufficient distance to obstacles (Seitz et al., 2015). The shorter the remaining travel time to the target, the higher the utility becomes. The correct travel time, while skirting obstacles that hide the target from direct view, is expressed through the solution of a mathematical equation: the eikonal equation (Kretz, 2009; Kretz et al., 2010; Hartmann, 2010). Originally, the eikonal equation describes the arrival time of a wave front that spreads out from an area, in our case, the target. The arrival time of this imaginary wave front coincides with the remaining travelling time of the agents in the model. All local utility or cost functions are chosen with compact support, that is, they are truly zero at a suitable distance. This avoids numerical cutoff errors. The update scheme for the pedestrians is event driven (Seitz and Köster, 2014). That is, each agent steps ahead according to its individual stepping frequency, which is predefined from the agent's free-flow speed and maximum stride length (Seitz and Köster, 2012; Seitz et al., 2015). Each step is an event in the event queue and occurs at its "natural" time.
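To illustrate the kind of optimisation the Optimal Steps Model solves in every step, the sketch below samples candidate positions on the step disk and picks the one with the highest superposed utility. The function names, the random sampling of the disk, and the concrete shapes of the compactly supported cost terms are illustrative assumptions and are not taken from the VADERE implementation.

```python
import numpy as np

def next_position(pos, stride, travel_time, neighbours, obstacles, n_samples=200):
    """Pick the next foot placement on the step disk (hypothetical utility shapes).

    pos          -- current position, shape (2,)
    stride       -- the agent's maximum stride length in metres
    travel_time  -- vectorised callable returning the remaining travel time to the
                    target, e.g. an interpolated solution of the eikonal equation
    neighbours   -- positions of other pedestrians, shape (n, 2)
    obstacles    -- positions of nearby obstacle points, shape (m, 2)
    """
    # Sample candidate positions on the disk of radius `stride` around pos.
    angles = np.random.uniform(0.0, 2.0 * np.pi, n_samples)
    radii = stride * np.sqrt(np.random.uniform(0.0, 1.0, n_samples))
    candidates = pos + np.column_stack((radii * np.cos(angles), radii * np.sin(angles)))

    def repulsion(points, cutoff, strength):
        # Compactly supported cost: exactly zero beyond `cutoff`.
        points = np.asarray(points, dtype=float).reshape(-1, 2)
        if len(points) == 0:
            return np.zeros(len(candidates))
        d = np.linalg.norm(candidates[:, None, :] - points[None, :, :], axis=2)
        return np.where(d < cutoff, strength * (1.0 - d / cutoff) ** 2, 0.0).sum(axis=1)

    # Utility = closeness to the target minus discomfort from pedestrians and obstacles.
    utility = (-travel_time(candidates)
               - repulsion(neighbours, cutoff=1.2, strength=5.0)
               - repulsion(obstacles, cutoff=0.5, strength=10.0))
    return candidates[np.argmax(utility)]
```

Replacing the random samples with the deterministic candidate points used in the published model would not change the structure of the search; only the utility maximum on the disk matters.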
To make replication possible, the parameters for the locomotion model are compiled in Table 1. The Optimal Steps Model is embedded into the VADERE simulation framework at Munich University of Applied Sciences. Each module of the framework is verified using an automatic test suite. The model itself has been extensively validated (Seitz and Köster, 2012; von Sivers and Köster, 2013; Dietrich and Köster, 2014; Seitz and Köster, 2014; Köster and Zönnchen, 2014; von Sivers and Köster, 2015; Köster et al., 2015; Köster and Zönnchen, 2015), including independent implementations (Kneidl, 2015). Thus it is a reliable basis on which to build modules that instantiate social behaviour.
The theory of uncertainty quantification
The field of uncertainty quantification (UQ) addresses the problem that not all parameter values in the simulation are exactly determined: they are uncertain. Uncertainty quantification assumes distributions of the parameters and thereby makes it possible to analyse their impact on the solution of the simulation. Probability density functions (PDFs) may be provided directly, or statistical moments may be specified.
In recent decades, several methods beyond traditional Monte Carlo simulations have been developed. For a general overview see (Smith, 2014) and (Xiu, 2010). Uncertainty quantification distinguishes between forward and inverse uncertainty quantification. The former analyses the impact of the uncertain parameters on the model, whereas the latter tries to determine the distribution of the input parameters. In this work, we will use forward uncertainty quantification. Also, we will employ non-intrusive methods where, in contrast to intrusive methods, the underlying model remains untouched.
A forward simulation in the context of uncertainty quantification contains three phases (Iaccarino, 2008, p.17-7): the assimilation phase, the propagation phase, and the certification phase (Figure 2). In the assimilation phase, the uncertain parameters are defined and prepared with the help of probability theory. For each uncertain parameter, a suitable probability distribution function (PDF) has to be chosen. In the propagation phase, the model is evaluated. For this, the parameters are propagated through the model multiple times. The number of propagations necessary to obtain good results depends on the specific uncertainty quantification method. In the certification phase, the output of each propagation step is collected. The impact of the uncertain parameters on one or several quantities of interest is observed. Often, the mean and the variance of an output quantity are calculated, but higher statistical moments are also possible (Xiu, 2010, p.67). Typical quantities of interest (QoI) are physical, domain, or timing values. Here, we focus on parametric uncertainty: we want to understand the impact of uncertain parameters, such as the number of injured persons, on the simulation results. We choose stochastic collocation (SC) with the pseudospectral approach (Xiu, 2007) as uncertainty quantification method. This fits our requirements because the method is non-intrusive, suitable for the analysis of models with few parameters, and needs less computational effort than other methods (Xiu, 2009).
Stochastic collocation with the pseudospectral approach (SC-GPC) is based on the generalised polynomial chaos (GPC) expansion (Xiu, 2010, p.57-67). The idea is to separate the spatio-temporal part of a random process U(x, t, ζ) from the purely random part (see Smith, 2014, p.209). In our case, U(x, t, ζ) is the theoretical 'analytical' solution of the considered pedestrian evacuation scenario.
The generalised polynomial chaos expansion for U(x, t, ζ) is defined as (Xiu and Karniadakis, 2003, p.143)

U(x, t, ζ) = Σ_{j=1}^{∞} c_j(x, t) Φ_j(ζ),    (1)

which depends on space x, time t, and the vector of random parameters ζ = (ζ_1, ..., ζ_M) with M independent random variables. c_j(x, t) are the coefficients of the expansion representing the spatio-temporal part and are denoted by c_j for the sake of simplicity in the following. Their computation depends on the specific generalised polynomial chaos method (Smith, 2014, p.209) one uses. The functions Φ_j are orthogonal polynomial basis functions that correspond to the distributions of the random variables. To evaluate Equation (1) numerically, the expansion is truncated after N terms.
In the stochastic collocation with the pseudospectral approach, the model is evaluated at so-called collocation points. The coefficients c_j of the generalised polynomial chaos expansion are approximated by applying a projection with an integration rule:

c_j ≈ (1 / E[Φ_j²]) Σ_{i=1}^{Q} u(z_i) Φ_j(z_i) w_i.

Once the uncertain parameters are chosen, the collocation points z_i and weights w_i (i = 1, ..., Q) for the integration rule (Xiu, 2007, p.299) can be generated (assimilation). To propagate the uncertainty through the model and to calculate the coefficients c_j, the model u(z_i) (i.e. our pedestrian simulation) and the orthogonal basis functions Φ_j(z_i) have to be evaluated at the generated collocation points z_i.
Finally, the quantities of interest are extracted (certification). In our example, we have one quantity of interest: the number of pedestrians that have not yet reached safety in an evacuation scenario resembling the London bombings in 2005. To extract the mean μ_{u_N} and the variance σ² of the desired quantity of interest, we can directly use the coefficients c_j. For the mean, one has to evaluate (Xiu, 2010, p. 67)

μ_{u_N} = c_1,

with Φ_1 ≡ 1 being the constant basis polynomial, and for the variance (Xiu, 2010, p. 67)

σ² = Σ_{j=2}^{N} c_j² E[Φ_j²],

with σ being the standard deviation. We implemented a Python program using the chaospy library (Feinberg, 2015) to realise the stochastic collocation with the pseudospectral approach (for results, see Section 4.2.2). Since we are going to use uniform distributions for the computations (see Section 4.2.1), Legendre polynomials are used as orthogonal basis functions Φ (according to Xiu and Karniadakis, 2002, p.626). We choose Gauss quadrature as integration rule, and the order of the polynomial chaos expansion is fixed to N = 6. The number of required simulation runs depends on the number of collocation points and the number of uncertain parameters. In our example, we use Q = 21 collocation points for each uncertain parameter. Hence, for the settings with one uncertain parameter, 21 simulation runs are required, and for the setting with three uncertain parameters 9261 (= 21³).
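A minimal sketch of this workflow with chaospy is shown below (chaospy 4 naming; older releases expose cp.orth_ttr instead of cp.generate_expansion). The run_simulation stub is a hypothetical stand-in for one call to the external pedestrian simulator and is not part of the actual tool chain.

```python
import chaospy as cp
import numpy as np

# The three uncertain SIMA parameters, uniformly distributed as in Table 3.
perc_sharing_si = cp.Uniform(0.6, 1.0)
perc_inj_peds = cp.Uniform(0.1, 0.3)
v_inj = cp.Uniform(0.4, 0.8)
joint = cp.J(perc_sharing_si, perc_inj_peds, v_inj)

# Assimilation: Gauss quadrature nodes (collocation points) and weights.
nodes, weights = cp.generate_quadrature(20, joint, rule="gaussian")  # 21 nodes per axis

# Propagation: run the pedestrian simulation once per collocation point.
def run_simulation(sharing, injured, speed):
    """Hypothetical stand-in for one VADERE/OSM run returning the quantity of
    interest (here: maximum evacuation time in seconds)."""
    return 20.0 + 30.0 * injured - 5.0 * speed - 2.0 * sharing  # dummy response

evaluations = [run_simulation(*node) for node in nodes.T]

# Certification: fit the order-6 expansion and read off the statistics.
expansion = cp.generate_expansion(6, joint)
approx = cp.fit_quadrature(expansion, nodes, weights, evaluations)
mean, variance = cp.E(approx, joint), cp.Var(approx, joint)
print(f"mean = {mean:.2f} s, std = {np.sqrt(variance):.2f} s")
```

With an order-20 Gauss rule, each of the three axes contributes 21 nodes, so the full tensor grid reproduces the 9261 model runs mentioned above.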
Results: Formalisation of social identity and helping
In this section, we present the way we formalise the self-categorisation process and the resulting helping behaviour. We explain the structure of our model through flow diagrams and declare the parameters. We build our model on evidence from empirical research that shows the effect of social identity on behaviour. Since the social psychological models are very complex and detailed, we must decide which parts to take into account and which to drop to construct a model that remains falsifiable.
Self-categorisation theory suggests that, when people categorise themselves as being in the same group, they are more likely to support each other (Levine et al., 2005). For the simulation of emergency evacuations, this basic idea is essential. Other parts of the self-categorisation theory that are important to social psychologists - for example a changing degree of social identity during the evacuation - have to be postponed until there is data to substantiate the mechanisms of the change. In the next four subsections, we algorithmically formulate shared social identity and its effect on helping behaviour.
Social Identity Model Application
We want to enable independent researchers to use the new social model, the Social Identity Model Application (SIMA), with any locomotion model and in any simulation framework. The Optimal Steps Model and the VADERE simulation framework of this contribution are merely examples among several well validated choices. We achieve our goal by defining an interface that exclusively consists of the target and the velocity of the pedestrians, the two typical input parameters for a locomotion step. The Social Identity Model Application is called before the execution of the locomotion module in every time step of the simulation. The outcomes of the SIMA call are adjusted targets and velocities of the agents.
The Social Identity Model Application consists of two main components: the social identity component (Establishing Social Identity) described in Section 3.2 and the helping behaviour component (Helping Behaviour) described in Section 3.4. Figure 3 shows the main loop of the Social Identity Model Application with these two key components. A new type of agent, the badly injured pedestrian, is introduced (see Sec. 3.3).
The Establishing Social Identity component is called the first time a pedestrian recognises the emergency. This follows findings from social psychological research that people categorise themselves as ingroup members when faced with the common fate of an emergency. The component Helping Behaviour is relevant during the whole duration of the simulation. It is called for pedestrians who share a social identity. Pedestrians who do not share a social identity head straight for safety, that is, they evacuate without caring for others.
Establishing social identity
The first step in the model is to establish the social identity. However, not every pedestrian in an emergency shares a social identity (Drury et al., 2009a). Thus, we define a parameter perc_sharingSI. Pedestrians are randomly selected to share a social identity or not according to perc_sharingSI. The procedure for one pedestrian in the scenario is visualised in Figure 4.
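Read as pseudocode, establishing social identity amounts to one independent Bernoulli draw per agent; the function and attribute names in the sketch below are hypothetical and only illustrate the procedure shown in Figure 4.

```python
import random

def establish_social_identity(pedestrians, perc_sharing_si=0.8, seed=None):
    """Mark each pedestrian as sharing (or not sharing) the social identity
    with one independent Bernoulli draw per agent (illustrative attribute names)."""
    rng = random.Random(seed)
    for p in pedestrians:
        p["shares_identity"] = rng.random() < perc_sharing_si
    return pedestrians
```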
Badly injured pedestrians
To model helping behaviour, we first need to introduce another type of agent: a badly injured pedestrian. In emergencies in general, people can get hurt by fire, bombs, falling building parts, or for other reasons. Although in the London bombings there were some pedestrians who suffered minor injuries, we neglect them here. Their behaviour could easily be modelled by reducing their speed. Instead, we focus on badly injured pedestrians who need assistance from others to evacuate.
A pivotal element in evacuation models is the target that each agent moves towards. Usually this is a 'safe area'. We model the immobility of badly injured pedestrians by fixing their target at their current position. Thus they remain stationary. They cannot evacuate without assistance from unharmed pedestrians. As soon as such an aide arrives, we turn the aide into the injured agent's target. By doing this, the injured agent becomes dependent on the aide. The aide's target is set to the 'safe area' and the pair evacuate at a reduced speed v_inj. This helping behaviour matches reports by survivors (Baker et al., 2002; Johnson, 2005; Tucker et al., 2007).
Helping behaviour
Reports on help between strangers and survivors in evacuations usually lack a description of what exactly people do to help. However, one can imagine the typical steps that must be taken. Initially, an injured pedestrian needs to be detected by an aide. Subsequently, this aide must approach the injured pedestrian and finally physically support the injured person while evacuating together. These assumptions are what we implement as helping behaviour (see Figure 5).
We assume the pedestrians choose the nearest injured and unaided pedestrian as the person to assist. The sub-component Seek Injured Pedestrian realises this searching behaviour. Each pedestrian has a range of perception that, in reality, depends on the scenario or on the pedestrian's abilities. In our simulations, we simplify these dependencies by assuming that each pedestrian is aware of casualties within a radius of 10 m, neglecting visual obstructions or the fact that the train has different compartments and cars. We argue that, even through visual obstructions, the shouts of injured people can be heard, so the range of perception is far bigger than the range of vision. Each potential helper chooses the nearest unaided and injured pedestrian as new target (see Figure 6) and approaches this target. If there is no injured pedestrian in the range of perception, the pedestrian heads to the 'safe area'.
The sub-component Reached Injured Pedestrian controls the situation when a pedestrian reaches the unaided and injured pedestrian. If there is less than an arm length between them, they form a group and evacuate together. They both change status and targets. First, the unaided injured pedestrian p_ui becomes a helped injured pedestrian p_hi. This status change guarantees that other potential aides stop approaching and search for other casualties (see Figure 5). Then, the aide selects the safe location as next target and reduces the free-flow speed to v_inj. Since the aide's location is set as target for the injured pedestrian, the latter automatically follows. The injured agent's free-flow speed is set slightly higher than v_inj so that it does not fall behind or lose the aide. The arm length l_a is set to 60 cm.
The single steps of the procedure are shown in Figure 7.
The last step of the helping behaviour is the sub-component Assist Injured Pedestrian. An agent in this stage evacuates with the injured agent without looking for other casualties. Notably, agents can only help one injured person throughout the simulation. From then on, both aide and injured pedestrian keep their respective targets and their reduced speeds. Note that, while the Social Identity Model Application can be applied for almost any locomotion model, the concrete simulation outcomes, such as evacuation times, will most likely vary.

Param.           Description                                    Value
perc_sharingSI   people sharing a social identity               0.8
perc_injPeds     percentage of injured pedestrians              0.1
v_inj            speed of a helper with an injured pedestrian   0.6 m/s

Table 2: Parameters for the Social Identity Model Application.
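To make the loop described above concrete, the sketch below condenses the Seek, Reached, and Assist sub-components into one per-time-step update. The data structures, attribute names, and the plain Euclidean nearest-neighbour search are assumptions made for illustration and do not reproduce the VADERE code.

```python
import math

PERCEPTION_RADIUS = 10.0  # m
ARM_LENGTH = 0.6          # m (l_a)
V_INJ = 0.6               # m/s

def dist(a, b):
    return math.hypot(a["pos"][0] - b["pos"][0], a["pos"][1] - b["pos"][1])

def sima_update(pedestrians, safe_area):
    """Adjust targets and free-flow speeds before the locomotion step."""
    unaided = [p for p in pedestrians if p["status"] == "injured_unaided"]
    for p in pedestrians:
        if p["status"] != "unharmed" or not p["shares_identity"] or p.get("charge"):
            continue
        # Seek Injured Pedestrian: nearest unaided casualty within perception range.
        in_range = [q for q in unaided if dist(p, q) <= PERCEPTION_RADIUS]
        if not in_range:
            p["target"] = safe_area
            continue
        victim = min(in_range, key=lambda q: dist(p, q))
        p["target"] = victim["pos"]
        # Reached Injured Pedestrian: within arm length, form an aide-charge pair.
        if dist(p, victim) < ARM_LENGTH:
            victim["status"] = "injured_helped"  # other potential aides stop approaching
            victim["target"] = p                 # the charge follows the aide
            victim["speed"] = V_INJ * 1.05       # slightly faster, never falls behind
            p["charge"] = victim
            p["target"] = safe_area
            p["speed"] = V_INJ
            unaided.remove(victim)
```

In the model itself, this adjustment of targets and speeds is the only information passed on to the locomotion layer, which is what keeps the SIMA interface independent of the underlying motion model.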
Results: quantification of uncertainty in the model
In this section, we present simulation results for two scenarios, both inspired by the situation in the London bombings. In the first scenario, we demonstrate how the inclusion of helping behaviour affects the evacuation of a whole tube train. The second scenario focuses on one carriage of the train. With this scenario, we analyse the impact of parameter variations on the simulation using uncertainty quantification.
Simulation of a whole train evacuation
Initially, we focus on the introduction of helping behaviour in a train evacuation scenario of a correctly dimensioned London Underground C69/C77 Circle Line train, as in the July 7th London bombings. The evacuation route, which in reality was a path along the tracks of the trains, is modelled as a long and narrow corridor that leaves no room for walking abreast. As during the real bombings, the scenario takes place during rush hour, so that all 192 seats in the train are occupied; that is, the initial positions of the agents are given by the seat locations. For simplicity, we assume that the standing room is empty.
The simulation parameters of the locomotion model, the Optimal Steps Model, are compiled in Table 1. All parameters of the Social Identity Model Application for this simulation are compiled in Table 2. In the absence of measured behavioural evidence for this scenario, we chose plausible values for the number of pedestrians who share a social identity and for the number of injured pedestrians. The speed of a person assisting a casualty is set to 0.6 m/s. This value corresponds to the slowest observed speeds of pedestrians with a walking handicap (Perry, 1992).
We assume that the bomb detonated in the third car and that, as a consequence, 19 pedestrians are badly injured: 16 casualties are near the bomb and 3 casualties are randomly positioned at other places. Among the remaining pedestrians, those who share a social identity are randomly chosen according to the percentage set for the simulation run. See Figure 8 for an illustration of the setting.

Figure 8: Evacuation scenario of a train. We assume that every seat is occupied (blue circles for unharmed passengers and red-rimmed light blue circles for injured passengers), but nobody is standing. Black lines indicate the partitioning inside the train and the walls. Light grey areas cannot be stepped upon. The escape route is the narrow white corridor that leads to safety. The safe area, and target, is indicated by the large (yellow, striped) rectangle in the upper left corner. 16 of the 19 injured pedestrians are placed near the event (grey star), three more randomly in the train.
After a few seconds of the simulation, the first pairs of aides and injured pedestrians form. In Figure 9 the helpers are depicted by black striped circles and the injured by light blue circles. At this point, some of the injured pedestrians are still without helpers. A few seconds later, all injured pedestrians are assisted (see Figure 10). The other pedestrians (indicated by blue circles) evacuate on their own. In this model, those pedestrians who do not help anybody leave faster than those assisting injured pedestrians. This is a result of the reduced speed of the aides and injured pedestrians.
In a later state of the simulation, the escape route becomes congested. In this scenario, overtaking while walking along the evacuation path is not possible. Thus, faster agents get stuck behind aides with their charges. See Figure 11.
All observations outlined in the simulation match accounts by survivors in emergencies (Johnson, 2005; Drury et al., 2009b). They reported that survivors assisted those who were injured before leaving and formed orderly queues while evacuating. This is the behaviour that emerges in our simulations. We argue that this constitutes a qualitative validation of the Social Identity Model Application. Quantitative validation must be postponed until suitable data is available, possibly from evacuation drills or video footage of future emergencies. For a longer discussion on validation challenges see von Sivers et al. (2014). In addition to qualitative validation, we are able to provide statistical data on the variation of simulation outcomes using the techniques of uncertainty quantification.
Uncertainty quantification of the Social Identity Model Application
In many applications, the precise value of an important parameter, such as the number of injured pedestrians in emergency planning, is unknown. Moreover, even if a measured value for the parameter is available, it is only correct within a margin of error. If the model is sensitive to the variation of this parameter, the predictive power of the model is decreased.
For example, the new helping behaviour in the simulation has a crucial impact on the evacuation time. Clearly, the average evacuation time must depend on the number of injured pedestrians and helpers. In the extreme case, where at least one pedestrian is injured but nobody helps, the evacuation time is infinite. The same occurs if everyone is injured. On the other hand, if there are no casualties, then no time is invested in searching for them and nobody is slowed down by helping. Evacuation is much faster. A sample simulation with the setting from Section 4.1 with 10% casualties results in an evacuation time of 467 seconds. In the same scenario without injuries, the evacuation time is 231 seconds. Only 10% of casualties in this scenario doubles the evacuation time.
As long as there are uncertain and sensitive parameters, one sample simulation does not give a reasonable estimate of the evacuation time. But how can one reasonably quantify the impact of uncertain parameters such as the number of injured pedestrians? We tackle this challenge using uncertainty quantification.
Simulation setup for uncertainty quantification
For a proof of concept, we focus on one car and make several assumptions with respect to the initial states of the agents. Again, every seat is occupied, but this time some people are standing, so that there are 60 persons in the car. They evacuate to a 'safe' platform next to the car. We assume that the bomb detonated at one end of the car and that the 14 people near the event are likely to be injured. As long as the percentage of injured pedestrians is set below 25%, the attribute of being injured is randomly assigned to these pedestrians. Above 25%, all 14 likely casualties are marked as injured.
Figure 12: Scenario for the analysis with uncertainty quantification: evacuation of one car with 60 passengers (blue and light blue circles) to a platform (yellow striped rectangle in the upper part of the picture) after a bomb explosion. The grey star shows the place of the bomb. Thus, the red-rimmed light blue circles on the left side of the train are the passengers with a high risk of injury.
Parameter        Description                                    Min.     Max.
perc_sharingSI   people sharing a social identity               0.6      1.0
perc_injPeds     percentage of injured pedestrians              0.1      0.3
v_inj            speed of a helper with an injured pedestrian   0.4 m/s  0.8 m/s

Table 3: Three uncertain parameters in the Social Identity Model Application. They are uniformly distributed between their minimum and maximum values.
For the remaining number of casualties, other passengers are randomly chosen. The percentage of passengers sharing a social identity, perc_sharingSI, is completely unknown. It is the first uncertain parameter we investigate. The attribute is randomly assigned to passengers using a uniform distribution according to perc_sharingSI. Figure 12 illustrates the setting. Further uncertain parameters are the percentage of casualties and the speed at which an aide and charge evacuate. Since there is no data on parameter distributions from real evacuations or experiments, we need to make plausible assumptions. We assume that the parameters are uniformly distributed with the minimum and maximum values in Table 3.
We choose the maximum evacuation time and the number of people who have not yet reached safety as quantities of interest. At each time step, we analyse the mean and the variance or standard deviation of each quantity of interest, until all persons have evacuated. In the simulations, the percentage of pedestrians with social identification perc_sharingSI varies between 60% and 100%, the percentage of injured pedestrians perc_injPeds ranges from 10% to 30%, and the speed of a helper with charge (v_inj) ranges from 0.4 m/s to 0.8 m/s.
The next step is to calculate percentiles, which measure the percentage of the computed values that lie below a specific value. By plotting the 10th and the 90th percentile, the space between these values can be interpreted as the band within which 80% of the values lie.
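Continuing the chaospy sketch from the methods section (and under the same assumptions about the variables approx and joint defined there), one plausible way to obtain such a percentile band is to sample the fitted expansion:

```python
import numpy as np

# `approx` and `joint` are the fitted expansion and joint distribution from the earlier sketch.
samples = approx(*joint.sample(10_000, rule="latin_hypercube"))
p10, p90 = np.percentile(samples, [10, 90])
print(f"80% of the values lie between {p10:.1f} and {p90:.1f}")
```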
Impact of the uncertain parameters
In real world scenarios, all parameters are uncertain at the same time. In simulations, one must focus on the parameters that one expects to have a decisive impact on simulation results. Here we select the number of pedestrians with social identification perc_sharingSI, the number of injured pedestrians perc_injPeds, and the evacuation speed v_inj. In this first uncertainty investigation, all three parameters are assumed to be uniformly distributed as listed in Table 3.
The quantification results are shown in Figure 13. In the left plot, the blue line in the filled space is the mean value of the number of pedestrians remaining in the car. Values are plotted from the beginning of the evacuation to the end. Therefore, at the beginning all pedestrians are still in the danger zone, while at the end, everyone has left. The green line on the left of the filled space is the 10th percentile. The red line on the right of the filled space is the 90th percentile. The filled space can be interpreted as follows: with a probability of 80%, the corresponding number of pedestrians remain in the danger zone within the time span. During the very first seconds every agent is evacuating, but nobody has successfully evacuated, because it takes some time to reach the safe area. In the right plot of Figure 13, the standard deviation for the number of pedestrians who are still in danger is plotted. It illustrates the spread around the mean number of agents at every time step from the beginning of the simulation to the end. The mean for the maximum evacuation time is 21.35 seconds. The standard deviation is 5.68 seconds (a variance of 32.37).
To analyse the impact of a single parameter, only this parameter is perturbed and the others are kept fixed. We choose the average of the minimum and the maximum values from Table 3 as the fixed values.
Figure 14 shows the results where the percentage of injured pedestrians perc_injPeds is uncertain. Not surprisingly, we observe that the percentage of injured pedestrians has a strong impact on the number of pedestrians who still remain in the danger zone during the whole simulated evacuation time. During the first seconds of the evacuation, the spread around the mean is small. This is because the number of casualties does not have an impact on the behaviour of agents who do not share a social identity and directly evacuate without helping. The mean for the maximum evacuation time is 20.60 seconds and the standard deviation is 5.43 seconds. Next, we consider the speed of a helper with an injured pedestrian, v_inj, as uncertain. The result is depicted in Figure 15. During the first seconds, there is no uncertainty in the number of safely evacuated pedestrians. This can be explained with the immediate evacuation of the unharmed pedestrians who do not share a social identity and who are near the doors. For these pedestrians, the speed of a helper with charge plays no role. The mean for the maximum evacuation time is 21.81 seconds and the standard deviation is 3.29 seconds.
Finally, Figure 16 illustrates the results when the percentage of pedestrians with a shared social identity is uncertain. At the beginning of the evacuation, the number of aides has a strong influence on the number of pedestrians remaining in the danger zone. We find this plausible, because many potential helpers make it likely that injured pedestrians are reached quickly and that helping behaviour has an impact on how the situation evolves. Later in the simulation, the influence of the parameter decreases, as one might expect: as long as enough unharmed pedestrians share a social identity, each injured person will be helped eventually. The values for the maximum evacuation time spread only a little around the mean of 21.62 seconds, with a standard deviation of 1.11 seconds.
Table 4 gives a survey of the results of the four uncertainty quantifications above. From the standard deviation, we see that the percentage of injured pedestrians has the greatest impact on the simulation results. In an earlier sensitivity study (Davidich and Köster, 2013), where the evolution of a passenger stream at a German railway station was simulated and compared to video footage, it was found that changing the targets for the pedestrians had the greatest impact on results compared to other parameters like the passengers' preferred walking speeds. Interestingly, this coincides with our new results: in the Social Identity Model Application, each injured agent changes, first and above all, targets, both for itself and for potential aides. The percentage of pedestrians who share a social identity, on the other hand, has a comparatively small impact. Moreover, if all three parameters are uncertain, the impact of the different uncertain parameters does not appear to be cumulative: the standard deviation of 5.68 seconds in the maximum evacuation time is only marginally higher than the standard deviation of 5.43 seconds when the percentage of injured pedestrians is the only uncertain parameter.
Through uncertainty quantification, we attain a better and deeper understanding of our new model and its parameters - an understanding that cannot be derived from the model itself but is important to safety scientists. And we have good news: while the number of people sharing a social identity, or the degree of this identification, defies measurement, at least at present, the impact of its variation seems small. Thus, the model retains its predictive power.
Finally, the results quantitatively substantiate our earlier claim that modelling social identity, and the helping behaviour that ensues, has a very significant impact on evacuation simulations. Neglecting helping behaviour leads to quantitative results at one extreme end of the analysis. In view of the range of the evacuation times in Figure 14, for example, neglecting injured pedestrians would lead to a serious underestimation of evacuation times.
Conclusions
In this paper, we presented an algorithmic formulation of empirical findings from social psychology on human behaviour in a situation of great danger and duress. In particular, we looked at the effect of social identification within a crowd and the ensuing helping behaviour during an evacuation. For this, we embedded our algorithm into a pedestrian evacuation simulation. We examined the behaviour in the simulation for a particular scenario that resembled the bomb attack on a metro train in London on July 7th 2005. The computer simulation reproduced observations from the real evacuation. In particular, the agents evacuated in pairs of injured passengers and helpers, and the overall behaviour was orderly. We argue that this constitutes a qualitative validation of the computer model.
Crucially, we went a step beyond qualitative validation: in most cases, one or more model parameters that influence simulation outcomes are uncertain. Either they are entirely unknown, such as the number of injured pedestrians in our virtual evacuation, or they are measured with limited accuracy. We identified three parameters that could be decisive in our model: the percentage of injured pedestrians, the speed at which helpers and charges evacuate, and the percentage of people who share a social identity. We used uncertainty quantification to quantify their influence in an example scenario from which more complex scenarios can be derived. Variations in the percentage of injured pedestrians turned out to have a great influence, whereas variations in the speed had a medium impact, and variations in the percentage of pedestrians sharing a social identity had a relatively small impact. Since the latter parameter is very hard to measure, this is encouraging news for the safety scientist who needs predictive power of the model to give safety advice on the basis of simulations.
Quantitative validation of our model against measurements, such as trajectories of pedestrians or evacuation times, is still open and must remain open until suitable data is available. Knowing this, we reported all model parameters so that independent researchers can replicate and thus validate - or falsify - our findings.
We consider the instantiation of social identity and helping among strangers in our computer model as a proof of concept that it is indeed possible to carry over findings from social psychology into computer models that possess predictive power.
Yet, helping among strangers in emergencies is only one behaviour among many that stem from social identification and that are relevant to safety science. Another important example is the identification with one's own family. Also, with competing social identities, the question arises which of the social identities is salient in which situation. Strategies to handle this must be found. Methods from uncertainty quantification, which we have introduced to the field of safety science, promise to allow efficient characterisation and quantification of the influence of competing and interworking identities and of further social phenomena.
Figure 1: In an evacuation, people have to decide whether to evacuate as quickly as possible or to help others. We explain psychological mechanisms behind these decisions, formalise them in a computer model, and show which model parameters are decisive for safety science.
Figure 2: Illustration of the forward uncertainty quantification phases: assimilation, propagation, and certification.
Figure 3: General flow chart of the Social Identity Model Application.
Figure 4: Flow chart of how the social identity for pedestrian P is established at the beginning of the simulation. The percentage of pedestrians sharing a social identity is perc_sharingSI.
Figure 5: Flow chart of the helping behaviour for pedestrian P during the emergency; p_ui is the abbreviation for unaided and injured pedestrian.
Figure 6: Flow chart of the helping behaviour when pedestrian P looks for injured pedestrians and chooses whom to help. p_ui is the abbreviation for unaided and injured pedestrian. Target means the target of the pedestrian P. The range of perception is a radius of 10 m around the pedestrian.
Figure 7: Flow chart of the helping behaviour when an injured pedestrian is reached. Parameter l_a is the arm length of a pedestrian, p_ui stands for an unaided injured pedestrian, p_hi for a helped injured pedestrian.
Figure 9: Evacuation scenario of a train. Close-up after the first seconds of the evacuation. The first pairs of injured pedestrians (red-rimmed light blue circles) with helpers (black striped circles) form.
Figure 10: Evacuation scenario of a train. Close-up after all pairs of aides and charges have formed. The unharmed pedestrians (blue circles) leave the train faster; the helpers (black striped circles) with injured (red-rimmed light blue circles) are slower.
Figure 11: Evacuation scenario of a train. The slow pairs of helpers and charges (red-rimmed light blue and black striped circles) cause congestion.
Figure 13: Uncertainty quantification with three uniformly distributed uncertain parameters. The mean value and the percentiles of the number of pedestrians who remain in the danger zone are plotted on the left. The standard deviation is plotted on the right. In the simulations, the percentage of pedestrians with social identification perc_sharingSI varies between 60% and 100%, the percentage of injured pedestrians perc_injPeds ranges from 10% to 30%, and the speed of a helper with charge (v_inj) ranges from 0.4 m/s to 0.8 m/s.
Figure 14: Uncertainty quantification with the percentage of injured pedestrians as uncertain parameter. The mean value and the percentiles of the number of pedestrians who remain in danger are plotted on the left. The standard deviation is plotted on the right. Parameter perc_injPeds is uniformly distributed between 10% and 40%.
Figure 15: Uncertainty quantification with the speed of a helper with charge as uncertain parameter. The mean value and the percentiles of the number of pedestrians who remain in danger are plotted on the left. The standard deviation is plotted on the right. Parameter v_inj is uniformly distributed between 0.1 m/s and 0.4 m/s.
Figure 16: Uncertainty quantification with the percentage of pedestrians sharing a social identity as uncertain parameter. The mean value and the percentiles of the number of pedestrians who remain in danger are plotted on the left. The standard deviation is plotted on the right. Parameter perc_sharingSI is uniformly distributed between 60% and 100%.
Table 4: Comparison of impact of the uncertain parameters in the Social Identity Model Application on the maximum evacuation times.
"year": 2016,
"sha1": "1f90af3cc05bde859b5e25b3be47dd4a687a8fd6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.ssci.2016.07.001",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "a911e90b1c72e281852369649c15d13c904517c3",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Engineering",
"Physics",
"Computer Science",
"Mathematics"
]
} |
Impact of Preventive Responses to Epidemics in Rural Regions
Various epidemics have arisen in rural locations through human-animal interaction, such as the H1N1 outbreak of 2009. Through collaboration with local government officials, we have surveyed a rural county and its communities and collected a dataset characterizing the rural population. From the respondents’ answers, we build a social (face-to-face) contact network. With this network, we explore the potential spread of epidemics through a Susceptible-Latent-Infected-Recovered (SLIR) disease model. We simulate an exact model of a stochastic SLIR Poisson process with disease parameters representing a typical influenza-like illness. We test vaccine distribution strategies under limited resources. We examine global and location-based distribution strategies, as a way to reach critical individuals in the rural setting. We demonstrate that locations can be identified through contact metrics for use in vaccination strategies to control contagious diseases.
Introduction
In general, the spread of infectious diseases can be contained by human responses using different approaches. Susceptible people can acquire immunization through vaccination, or can protect themselves from disease using preventive behaviors, such as avoiding close physical contact with infected individuals or adopting hygienic habits. Correspondingly, human responses can be modeled using three classes of models, distinguished by changes taking place in compartments, parameters, or contact levels to take the behavioral changes into account [1].
A vast literature exists on efficient vaccination strategies, driven by the need to distribute vaccine supplies that are often insufficient for the entire population. Some of these strategies assume that human contact networks are well represented by scale free networks. One popular strategy aims at immunizing those individuals having the highest number of contacts, as the most critical actors for spreading the infection [2]. However, local strategies are more efficient and implementable and often require a lower fraction of the population to be vaccinated than random global immunization to contain epidemics. The strategy of acquaintance immunization proposes the immunization of random acquaintances of random individuals [3]. Another local strategy proposes to vaccinate highly connected acquaintances of randomly selected people; based on the properties of scale free networks, with this approach the probability of targeting the highly connected individuals in the contact network increases with respect to simple random selection [4]. In the case of a limited amount of available vaccines, the authors of [5] use stochastic simulations of epidemics and numerical optimization methods to find near-optimal vaccine distributions that minimize the epidemic size. Again in the case of a limited amount of available vaccines, the suggested best strategy is to vaccinate schoolchildren, the population group with the highest contact across different communities, and the high-risk groups, the population groups that need protection [6]. Since a strong community structure can be detected in social contact networks, the approach in [7] aims at immunizing individuals bridging communities rather than simply targeting highly connected individuals. An extensive set of simulations performed in [8] suggests two strategies based on age classes: in the first strategy, vaccinating older children, adolescents, and young adults minimizes the number of infections, while in the second strategy, vaccinating either younger children and older adults or young adults minimizes the number of deaths. Using game theory, the authors of [9] show that when vaccination is an individual's choice, a periodic behavior can be seen in simulations: a severe epidemic in one year incentivizes high vaccination rates in the following year, causing a milder epidemic for which individuals have less motivation for vaccination in the subsequent year. In [10], the authors develop a vaccination strategy based on optimizing the susceptible size by partitioning the contact network through vaccination. Based on the authors' simulations, this strategy is more efficient than those based on vaccinating the highest-betweenness or highest-contact individuals. Using a decision-making framework for vaccine distribution policies based on geographical and demographical data in the USA, the authors of [11] find that distributing vaccines first to counties where the latest epidemic waves are expected is the most efficient policy.
In any case, assessing the effectiveness of mitigation strategies and behavioral responses, both from a public health point of view and from individuals' perspectives, is a complex and not fully explored problem. In particular, a thorough evaluation and comparison of feasible mitigation strategies in the specific setting of rural regions is missing. In other words, not only is the amount of success a given strategy can provide undetermined, but its related cost in economic and social terms is also unknown.
In this paper, we carry out extensive simulations on a weighted contact network determined from data collected in the City of Chanute and Neosho County in the State of Kansas. In particular, we study the impact of limited-resource vaccination campaigns, using an exact model of a stochastic SLIR Poisson process. Simulations are run across several scenarios and with stochastic sets of the SLIR model parameters. The evaluation of the vaccination campaigns is performed by computing the average number of cases prevented per single vaccine and the sizes and durations of the outbreaks. Our contributions are twofold: we construct and analyze a data-based rural contact network, and we provide a thorough analysis and comparison of mitigation strategies in a rural region. We hope that our results can provide practical guidelines for health officials to contain and suppress epidemics in rural regions.
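As a rough illustration of what an exact stochastic simulation of a network SLIR Poisson process can look like, the sketch below implements a plain Gillespie direct method on a weighted contact graph. The graph format, parameter names, and rates are assumptions made for the example; the paper's actual simulator and disease parameters are not reproduced here.

```python
import random

def slir_gillespie(graph, beta, sigma, gamma, seed_nodes):
    """Exact event-driven simulation of a network SLIR Poisson process (sketch).

    graph      -- dict node -> list of (neighbour, edge_weight) from the contact survey
    beta       -- transmission rate per unit contact weight (assumed parameter)
    sigma      -- rate of leaving the latent state
    gamma      -- recovery rate
    seed_nodes -- initially infectious individuals
    """
    state = {v: "S" for v in graph}
    for v in seed_nodes:
        state[v] = "I"
    t, history = 0.0, [(0.0, dict(state))]

    while True:
        # Collect all possible events and their Poisson rates.
        events = []
        for v, s in state.items():
            if s == "I":
                events.append((gamma, ("recover", v)))
                for u, w in graph[v]:
                    if state[u] == "S":
                        events.append((beta * w, ("expose", u)))
            elif s == "L":
                events.append((sigma, ("activate", v)))
        total = sum(rate for rate, _ in events)
        if total == 0.0:
            return history  # outbreak is over

        # Gillespie step: exponential waiting time, then pick one event.
        t += random.expovariate(total)
        r, cum = random.uniform(0.0, total), 0.0
        for rate, (kind, v) in events:
            cum += rate
            if r <= cum:
                state[v] = {"expose": "L", "activate": "I", "recover": "R"}[kind]
                break
        history.append((t, dict(state)))
```

Recomputing the full event list in every step keeps the sketch short but scales poorly; an efficient implementation would update per-node rates incrementally after each event.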
Methods
In the following we describe the data collection and analysis, and the models for the network, for the epidemic spreading, and also for vaccination strategies and distributions.
Data Collection and Analysis
As of the 2010 U.S. Census, Neosho County was a rural county with 16,512 residents in 571.5 square miles in southeastern Kansas. Most of the population was White (94.1%); a majority were female (50.6%) and many (17.4%) were 65 years of age or older. The median household income was $36,702, with 17.0% living below the poverty level. Between July and October 2010, the towns of Chanute, Thayer, and Galesburg were selected to participate in a survey concerning factors that would predict the spread of epidemics in rural areas. From county public household rosters, households were randomly selected from Chanute (10%, N = 171), Thayer (50%, N = 158), and Galesburg (50%, N = 73) for a total initial N = 402. After considerations mentioned in the supplementary information, the final numbers of available and eligible households were 143, 65, and 162 in Thayer, Galesburg, and Chanute, respectively, for a total N = 370.
The tailored design method was used, with minor modifications, to improve response rates [12][13][14][15], with a focus on personalization and multiple follow-up mailings. The initial survey also included a local news report announcing the impending start of the survey [16]. Overall, 242 surveys were returned, for an overall response rate of 65.4%. The response rate for Chanute was 74.7% (121/162). The response rate for Thayer and Galesburg combined was 55.8% (116/208). The difference in response rates (74.7% vs. 55.8%) between Chanute and Thayer/Galesburg was statistically significant, two-sided Fisher's Exact Test (p < .001), odds ratio = 2.34 (95% CI, 1.50-3.66, p < .001). The difference in response rate for the more urban location was probably related to the content of the survey, which focused on respondents' visiting locations of stores, public sites, and restaurants in Chanute itself. Thus, the survey probably seemed more relevant to Chanute residents, even though we were interested in how often households from outlying towns went to the nearest urban center to visit or shop.
A majority (56%) of the respondents reported being from Chanute, compared to 23% from Thayer and 10% from Galesburg (the remaining percentage did not specify exactly where they were from). Of the 357 participants, the largest number were ages 45 to 64 (47.1%), with 26.1% being 65 years of age or older, 18.8% ages 26 to 44, and 8.1% ages 18 to 25. A majority of the participants were females (57.6%). Most of the respondents (75.4%) had lived in their local community for 15 years or more. The vast majority (97.5%) of the respondents lived in a single family home. Very few (6.2%) of the households included a homebound member. Most of the respondents had either the equivalent of a high school degree (22%) or a college (23%) or graduate (12%) degree. Nearly sixty percent had incomes between $25,000 and $100,000 a year, with 11% earning more and 30% earning less. Some respondents had type I (1.2%) or type II diabetes (10.4%) or were pre-diabetic (3.2%). Most respondents considered themselves to be slightly (35.7%), somewhat (18.2%), or extremely (8.6%) overweight. Most (56.6%) reported that they ate out one or two times a week, with 26% eating out more often and 17% not at all.
In terms of compliance risk, nearly 49% of respondents said they would still visit at least one or two households outside of their home if there was a serious epidemic and radio/TV/internet had told them to remain at home and not visit with others. Figure 1 presents the distribution of the number of individuals that a respondent expects to still visit against advice. Only half (50.0%) of the respondents had been vaccinated against the flu within the past six months. Nearly 40 percent (38.9%) did not obtain such a vaccination because of concerns about the vaccine's safety or effectiveness. Only about 7% believed they had come down with the flu within the past six months, while about 18% thought they might have come down with a cold. About 18% of the respondents reported taking vitamin D supplements; only 6% reported taking zinc supplements. Approximately 80% of the respondents had extensive contact with domestic pets on a daily basis, while about 19% of respondents had contact with farm animals or wild animals regularly, as shown in figure 2. Contact risk (low, moderate, high) was significantly related statistically with compliance risk (none, low, high) (p < .001, ES = 0.50, medium effect size). As contact risk increased from low to high, high compliance risk increased from 4.4% to 21.8%; as contact risk decreased from high to low, the percentage of respondents with no compliance risk increased from 38.6% to 62.3%.
Models
Here, the procedure to construct the contact network from survey data is explained. Furthermore, the compartmental model used for simulations and the preemptive vaccination strategies are described.
From the survey responses, we constructed a rural contact network as an estimation of the social contact structure among the survey respondents. The network is based on two central questions: the number of contacts that a person has, and the locations that a person visits at different times in a typical day. The basis for the interactions between a pair of respondents is the locations that they both visited in common. We considered 4 types of location-based interactions: both visit the same location in the morning, both visit the same location in the afternoon, both visit the same location in the evening, and both visit the same location regardless of time. The fourth category introduces some overlap in the interactions, but it is added to account for some of the uncertainty in potential pathways of the disease spread. We considered 66 locations in the network construction and therefore 264 = 66 × 4 possible interactions between each pair of survey respondents. We compute normalized weights from each respondent i to each other respondent j, given by l_ij, representing the number of location-based interactions between respondents i and j. For the few respondents who did not complete the section of the survey regarding location visits, we assign them uniform weights of interacting with every other respondent in the network. Letting nodes represent the set of N = 353 respondents and weighted links represent the contact between them, we have a symmetric contact network at this point. Next we uniformly scale the weights on the links directed outward from each respondent i such that the sum of these weights is equal to the number of contacts that respondent i indicated having in his or her response (w_ij = a_i · l_ij for every j in 1, 2, …, N). (This scaling makes irrelevant the absolute value of the uniform weight of the respondents who lack location data.) The bipartite network of locations and survey respondents is finally represented as a weighted, directed (asymmetrical), and unipartite contact network of 353 nodes, with each pair of nodes (i and j) connected by two links which are respectively characterized by the weights w_ij and w_ji.
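As an illustration of the scaling procedure just described, the following Python sketch builds the weighted, directed contact matrix. The input names (`visits`, `reported_contacts`) and the handling of respondents without location data are illustrative assumptions for this example, not the study's code.

```python
import numpy as np

def build_contact_network(visits, reported_contacts):
    """Sketch of the survey-based contact matrix.

    visits[i] is a set of (location, time_slot) pairs for respondent i;
    reported_contacts[i] is the number of contacts respondent i reported.
    """
    n = len(visits)
    # l[i, j]: number of location-based interactions shared by respondents i and j
    l = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                l[i, j] = len(visits[i] & visits[j])
    # Respondents with no location-based interactions get uniform weights
    # (a simplification of the paper's rule for missing location data).
    for i in range(n):
        if l[i].sum() == 0:
            l[i, :] = 1.0
            l[i, i] = 0.0
    # Scale outgoing weights so they sum to the reported number of contacts:
    # w_ij = a_i * l_ij with a_i chosen per respondent.
    w = np.zeros_like(l)
    for i in range(n):
        a_i = reported_contacts[i] / l[i].sum()
        w[i, :] = a_i * l[i, :]
    return w  # weighted, directed (generally asymmetric) adjacency matrix
```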
Six of the vaccination strategies will center on three node metrics: incoming node strength (the sum of the weights incoming to a node), outgoing node strength (the sum of the weights outgoing from a node), and node betweenness (a count of the shortest paths among all pairs that utilize the node) [17,18]. The incoming node strength of a node is a topology metric that captures the direct impact of the network on the node. The outgoing node strength captures the direct impact that a node can have on the network. The betweenness of a node is a measure which captures the significance of a node in traversing the network. A node with a higher betweenness would be more likely to be traversed (in a shortest-path-type travel across the network between any pair of nodes) than a node with lower betweenness. Although an epidemic is not restricted to following the shortest paths across a network, the betweenness metric still plays an important role in identifying nodes which are likely to catch the disease if it reaches a majority of the nodes in the network. The rural contact network is depicted in figure 3, where the nodes representing individuals are shown in purple in a cloud and they are connected to the locations that they frequent, shown as orange nodes on the map [19].
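A minimal sketch of how these three metrics could be computed with NetworkX is shown below, assuming `w` is the weighted, directed adjacency matrix from the previous sketch. The inversion of weights into distances for the betweenness computation is our own assumption about how contact strength should map onto path length; it is not specified in the text.

```python
import networkx as nx

def node_metrics(w):
    """Return incoming strength, outgoing strength, and betweenness per node."""
    G = nx.from_numpy_array(w, create_using=nx.DiGraph)
    in_strength = dict(G.in_degree(weight="weight"))    # sum of incoming weights
    out_strength = dict(G.out_degree(weight="weight"))  # sum of outgoing weights
    # Weighted betweenness treats weights as distances, so stronger contacts
    # should correspond to shorter distances; invert the weights first.
    H = nx.DiGraph()
    for u, v, d in G.edges(data=True):
        if d["weight"] > 0:
            H.add_edge(u, v, distance=1.0 / d["weight"])
    betweenness = nx.betweenness_centrality(H, weight="distance")
    return in_strength, out_strength, betweenness
```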
On this weighted network, we model an epidemic outbreak using a Susceptible-Latent-Infected-Recovered compartmental model (SLIR) [17,20]. In the SLIR model, we assume infections arrive at a susceptible (S) node j from an infected (I) node i with a rate that is the product of the directed contact weight w_ij and the basic infection rate β. When an infection arrives at a susceptible node, the node takes on a latent infection (transfers from the susceptible compartment to the latent compartment). A node, once latent (L), is considered unable to spread the disease, but is developing to that stage with rate λ. The inverse of the rate λ is the expected time for a node to spend in the latent state. The next stage of the disease, the infected/infectious state, enables the node to spread infections to each of its neighbors at rates proportional to the weights on its outgoing links. Each infected node recovers from the infected state at a rate μ. Once a node is in the recovered state (R), it remains recovered and does not participate in the disease process any further.
We simulate this model exactly using an event-driven simulation of the SLIR process on the weighted rural contact network. We initialize the simulation by assigning a disease state to each node and then drawing exponential waiting times for the next event at each node. Taking the event with the minimum time across all nodes, we advance the event node to its next disease state and redraw waiting times for all nodes. This step is repeated until all waiting times are infinite, which happens when the disease process is complete. At this point, all nodes will be either susceptible or recovered. In the event-driven simulation, the time periods between successive events will not be regular, but instead they are non-integer stochastic values.
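The following Python sketch illustrates an event-driven simulation of this kind on a weighted network; the parameter names and state encoding are assumptions made for this example, not the authors' implementation.

```python
import numpy as np

def simulate_slir(w, beta, lam, mu, initial_infected, rng=np.random.default_rng()):
    """Event-driven SLIR sketch on a weighted, directed contact network.

    States: 0 = susceptible, 1 = latent, 2 = infected, 3 = recovered.
    w[i, j] is the contact weight directed from node i to node j (assumption).
    """
    n = w.shape[0]
    state = np.zeros(n, dtype=int)
    state[initial_infected] = 2
    t = 0.0
    while True:
        rates = np.zeros(n)
        infected = state == 2
        # Force of infection on each node: beta times summed weights from infected nodes
        pressure = beta * w[infected, :].sum(axis=0)
        rates[state == 0] = pressure[state == 0]
        rates[state == 1] = lam   # latent -> infectious
        rates[state == 2] = mu    # infectious -> recovered
        if rates.sum() == 0:      # all waiting times are infinite: outbreak is over
            return t, state
        # Draw exponential waiting times and advance the node with the earliest event
        waits = np.full(n, np.inf)
        active = rates > 0
        waits[active] = rng.exponential(1.0 / rates[active])
        i = int(np.argmin(waits))
        t += waits[i]
        state[i] += 1             # S->L, L->I, or I->R
```

The returned duration and final state vector correspond to the outbreak duration and the susceptible/recovered split described above.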
Vaccination is carried out by selecting a set of nodes and immunizing them with a certain vaccine efficacy rate. We consider seven different strategies for selecting the set of nodes for vaccination. The first and simplest strategy is a random selection of 10% of the population (35 nodes). The random method represents a blind distribution across the population. The next three strategies consider a targeted selection of nodes (individuals) based respectively on the three node metrics: incoming node strength, outgoing node strength, and node betweenness. These three strategies are idealistically implemented by selecting the 35 nodes with the highest values for the respective metric and administering the vaccine. For less ideal situations, we consider three additional strategies that attempt to represent feasible vaccine distribution strategies for rural populations. Considering again the three above-mentioned network metrics, we determine the location which has the highest average value (over the set of nodes that visit the location) of each metric. These locations are a restaurant (outgoing node strength), a pharmacy (node betweenness), and a location used for public events (incoming node strength). After selecting the locations that represent on average the best places to find nodes with higher values of each metric, we consider a random selection within a location of 10% of the entire population for vaccination. This location-based targeting has been proposed in [20]. It allows an indirect (and thus more feasible) targeting of critical populations that ensures a more effective use of resources than widely distributing resources in a global manner. Note that there is an implicit assumption that the entire population is susceptible prior to the distribution of the vaccine. Although this is not a realistic assumption for a commonly occurring strain of influenza, it would likely be the case for any new disease threat. A simple illustration of these strategies is presented in figure 4.
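A sketch of the node-selection logic for the targeted and location-based strategies is given below; the input structures (`metric`, `location_members`) and the choice to model a successful vaccination as removal to the recovered compartment are assumptions of this example.

```python
import numpy as np

def targeted_selection(metric, n_vaccines=35):
    """Idealistic strategy: pick the nodes with the highest metric values."""
    return sorted(metric, key=metric.get, reverse=True)[:n_vaccines]

def location_based_selection(metric, location_members, n_vaccines=35,
                             rng=np.random.default_rng()):
    """Feasible strategy: find the location whose visitors have the highest
    average metric value, then vaccinate a random subset of its visitors."""
    best_loc = max(location_members,
                   key=lambda loc: np.mean([metric[v] for v in location_members[loc]]))
    members = list(location_members[best_loc])
    k = min(n_vaccines, len(members))
    return list(rng.choice(members, size=k, replace=False))

def apply_vaccine(state, selected, efficacy, rng=np.random.default_rng()):
    """Immunize selected nodes: with probability `efficacy`, move them to the
    recovered/removed compartment (state 3) before the outbreak starts."""
    for node in selected:
        if rng.random() < efficacy:
            state[node] = 3
    return state
```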
Results
We measured on the network the metrics of interest for the vaccination targeting strategies. Figure 5 shows the diversity found in the weights that measure the levels of contact between each neighboring pair of nodes. Roughly 30 percent of the links carry very small weights, and there are very few links representing the highest weighted contacts. In figure 6 we display two views of the network topology to visualize the estimated rural community contact structure. Since the network is rather dense, we remove the links with lower weights in two different patterns. On the left side of figure 6, we colored the nodes and the links having weights between 0.2 and 1.0, where the weights of the green links are between 0.2 and 0.3 and those of the purple links are between 0.3 and 1.0. In this depiction, a minority but significant set of individuals (roughly 50 nodes) can be noticed for their state of isolation. These nodes are not strongly connected to the core of the network, but are connected when the links with the lowest weights are considered. This loosely connected "fringe" of the rural community is rarely reached by epidemics until a very strong epidemic comes. On the right side of figure 6, we colored the nodes and the links having weights between 0.4 and 1.0 as well as what we call the "best-friend" links. For each node, we select the link having the highest outgoing weight and define this link as the "best-friend" link of the node. This depiction of the network captures the most likely paths (it is composed of the highest weighted links) that an epidemic might take from anywhere in the network towards the center of the network. Although this pattern of visualization may give the false impression that the network is tree-like or scale-free, an epidemic would indeed leave a tree-like pattern as it traces its way through the rural community. Note that figure 5 shows that both of these network visualizations in figure 6 omit the majority of the links in the complete network. Figure 7 shows the distribution of the node betweenness metric for the network. More than 80 percent of the nodes have very small values of node betweenness, leaving a select group of nodes that are critical connections in the system of shortest paths through the community. Figure 8 depicts the distribution of the node in-strength metric for the network. It is much less heterogeneous than the node betweenness and link weight distributions, as the in-strengths are found to be rather homogeneous across the nodes. We explored the correlations between the network metrics and various survey responses and found that node betweenness was significantly correlated with age (r = −.15, p < .01), travel time to work (r = .20, p < .001), distance to work (r = .19, p < .001), level of education (r = .12, p < .05), number of non-family friends contacted weekly (r = .51, p < .05), and hours away from home each day (r = .22, p < .001).
The outgoing node strength was significantly correlated with age (r = −.20, p < .001), visiting with more family members outside one's residence (r = .18, p < .01), household size (r = .12, p < .05), travel time to work (r = .44, p < .001), distance to work (r = .45, p < .001), compliance risk (r = .13, p < .05), level of education (r = .23, p < .001), income (r = .17, p < .01), having diabetes (r = −.12, p < .05), how often one eats out (r = .16, p < .01), and hours away from home each day (r = .50, p < .001). Node in-strength was correlated with level of education (r = .12, p < .05) and having had the flu in the past six months (r = −.12, p < .05).
In general, while many of these relationships are not especially strong in terms of effect sizes, it appears that residents with higher levels of education, who have longer commutes, who are younger, with more income, those without diabetes or recent flu-like illnesses, who are away from home more hours each day, and who eat out more often are more likely to be important agents in the network measures that influence the potential spread of epidemics. It is interesting to observe that the younger rural residents are likely the most important agents, considering that rural regions are typically characterized by aging populations. This importance appears to stem from the younger persons spending more time away from home, driving farther to work, visiting more businesses, and, in all of this, having and visiting more persons outside of their homes. Perhaps the traditional farmer who rarely visits town and is mostly self-sufficient within his home and immediate neighbors is giving way to a younger generation and a changing economy where increased travel and social interaction are increasingly required.
We performed extensive simulations to investigate potential epidemics and the proposed vaccination strategies for the rural contact network representing a sample population from Neosho County. To mimic a realistic epidemic with the stochastic SLIR model, we utilize average values of λ⁻¹ = 0.764 days, μ⁻¹ = 1.736 days, and R₀ = β/μ = 1.75, with respective standard deviations of 0.100 days, 0.100 days, and 0.065 [21][22][23][24][25][26][27]. We explore the hypothetical outbreaks first by simulating 1,000,000 trials of the considered situation (such as without mitigation or with a specific mitigation strategy). For each trial, a triple (λ, μ, R₀) is drawn from the three Gaussian distributions with the respective parameters, and the outbreak is simulated until it dies out, leaving only susceptible and recovered individuals behind. This first type of experiment attempts to capture the diversity of possible influenza-like outbreaks in the rural community, and we use the results of these for numerical comparisons between the different mitigation strategies. However, the irregularity of the parameter values does not yield insightful figures. The second type of experiment we ran was the simulation of sets of 10,000 trials that scan over values of R₀ to quantify the range of potential outbreaks. In this second type of experiment, we deterministically vary R₀, while λ and μ are still drawn from their distributions [28,29].
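A small sketch of the per-trial parameter sampling for the first type of experiment is shown below; it simply draws the latent period, infectious period, and R₀ from the Gaussian distributions quoted above and converts them to the rates used by the SLIR sketch (our reading of the procedure).

```python
import numpy as np

rng = np.random.default_rng()

def draw_trial_parameters():
    """Draw (lambda, mu, beta) for one stochastic SLIR trial."""
    inv_lam = rng.normal(0.764, 0.100)   # mean latent period in days
    inv_mu = rng.normal(1.736, 0.100)    # mean infectious period in days
    r0 = rng.normal(1.75, 0.065)
    lam, mu = 1.0 / inv_lam, 1.0 / inv_mu
    beta = r0 * mu                       # from R0 = beta / mu
    return lam, mu, beta
```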
For each simulation, we track the numbers of nodes in each disease state through time as well as the timings of all event occurrences. We capture the total cases, representing this as the attack rate or the fraction of the total population infected, and the duration of each outbreak in days. The duration of an outbreak is the (continuous) time in days from the beginning of the simulation to the recovery (I to R transition) of the last infected node at which point all nodes in the network will either be susceptible or recovered. We define an outbreak as any trial that resulted in at least one secondary infection and present statistics only over the trials successfully demonstrating outbreaks. We simplify the presentation of the results of the second type of experiment by computing and plotting the average and 95% range of the resulting total cases for each group of 10,000 simulations on a single R 0 [21][22][23][25][26][27]. Figure 9 summarizes the distributions of the total cases as a fraction of the population infected (attack rate) in the manner described above. As R 0 increases, the epidemic size increases in a near-linear manner. It can be seen that distributions are broad but have low average values. This figure suggests that around 5 percent of the population might on average fall sick during an influenza season, but a few large outbreaks might touch 30-40 percent of the community. It is interesting to observe in figure 9 that the average attack rate varies little over the explored range and the median attack rate varies even less. The regularity of the outbreak distributions for different epidemic strengths is likely due to the strongly connected core of the network and the weakly connected fringes.
We ran seven sets of simulations to consider the seven vaccination strategies described in Section 3 and for each set we ran both types of experiments as described previously. In each trial, we draw a value for vaccine efficacy from a Gaussian distribution with mean of 72.0% and standard deviation of 6.0% to approximate realistic efficacy values [30][31][32]. The first vaccination strategy, the random distribution over the entire population, is the selection of a group of individuals representing 10 percent of the population and administering vaccines prior to the start of an outbreak with the given efficacy. Figure 10 demonstrates the potential reduction in the distributions of outbreaks by random vaccination.
The three idealistic vaccination strategies select their targets and vaccinate them by rankings determined by the node metrics. The left side of figure 11 captures the reduced epidemic sizes under an individual targeting strategy which uses node betweenness to select the individuals. For a realistic targeting of a distribution location, the right side of figure 11 captures the potential reductions in the epidemic sizes under the node-betweenness-based location targeting strategy. The location-based strategies are intuitively less successful than the individual targeting methods, but they represent much more feasible options for an administrative intervention.
A brief comparison of the results shown in figures 9, 10, 11 can be seen in figure 12. Figure 12 plots the average attack rates of the three strategies and the case of no vaccination. It can be seen that the situation of no vaccination results in the highest average attack rates, while the individual targeting strategy results in the lowest average attack rates, and the remaining two strategies appear similar at an intermediate level of effectiveness. The distributions of the epidemic durations are not shown as they did not change as R₀ varies. We summarize the comparison of the different vaccination strategies under the first type of experiments in table 1, which describes the distributions of attack rates and epidemic durations in days (in italics) by their averages, medians, and 95% confidence intervals. It is immediately interesting to notice that each of the vaccination strategies reduces the average epidemic duration, some by as much as 20 percent of the average value. In table 1, it can be seen that the individual targeting methods achieve the best average results, but among the feasible methods, the location-based targeting using the node betweenness metric is the most successful at reducing the total cases on average. Node betweenness also provides the best metric for the individual targeting strategies.
The last column of table 2 displays the cases prevented per vaccine distributed. The value of cases prevented per vaccine has an intuitive benchmark of the average vaccine efficacy at 0.72. If a vaccination strategy is very efficient at stopping an outbreak, then we can expect the average number of cases prevented per vaccine to be higher than the typical efficacy of the vaccine. On the other hand, if a vaccinated trial results in an average number of cases prevented per vaccine that is less than the typical vaccine efficacy, it does not necessarily mean that the vaccination strategy will perform poorly in all situations. In general this situation implies that the vaccines are being given to individuals who usually would not have been infected and who therefore made little use of the vaccine in that set of trials. This could arise from either a poor vaccine distribution strategy or from a distribution of vaccines that is larger in size than a typical outbreak. When we have a strong outbreak, the vaccines are almost surely going to be a necessary measure, whereas in a weaker outbreak, most of the population will not be infected and extra vaccines will be "unused" with respect to preventing new cases. Notice that, for the first type of experiment, when we are not considering any vaccination the epidemic impacts roughly 5.1 percent of the population, while the number of vaccines distributed is sufficient for 10 percent of the population. Table 2 lists the probabilities of outbreaks occurring under each of the eight scenarios considered. Of particular interest is that the targeted in-strength approach, despite having the lowest probability of an outbreak, has a higher average attack rate and longer average epidemic duration than the other targeted strategies.
Discussion
From the network analysis, we observed that the rural contact structure displays a significant amount of heterogeneity in the considered metrics. This heterogeneity suggests that the small number of nodes having the highest values of each metric might represent strategic sub-populations for mitigation objectives. The rural contact network also contained a relatively disease-resistant sub-population due to their poor level of connectivity and location on the "fringes" of the rural community network. From statistical correlations, it appears that residents with higher levels of education, who have longer commutes, who are younger, with more income, those without diabetes or recent flu-like illnesses, who are away from home more hours each day, and who eat out more often are more likely to be important players according to the network metrics that influence the potential spread of infectious diseases.
In the data collected from the rural survey, there remain significant limitations. Although the survey presents a variety of types of locations such as schools, restaurants, libraries, and public attractions, the data does not sufficiently capture the information regarding household interactions. It was not feasible to anonymously identify individual households and which survey respondents visited them with the resources at our disposal. The lack of information regarding young respondents and household interactions remains a strong limitation in the characterization of this community and the following epidemic study on the rural contact network.
For vaccine distribution we considered seven strategies, but only four are reasonably feasible for local administrators to implement, those being the random distribution across the population and the three location-based distributions. The traditional targeted groups for distribution, such as health-care personnel, the very young (6-59 months), the elderly (50 years or older), pregnant women, those with chronic health issues, and American Indians, are not completely identifiable from our survey results [33]. We could identify respondents by age range, but occupation and maternity status are transitory and were not explored by the survey. The global random distribution of vaccines provides the simplest baseline against which to compare the other vaccination methods. The location-based methods are useful for anonymously targeting subpopulations, not only for vaccination campaigns, but also for educational outreach to encourage social responses such as the adoption of preventative health practices.
Interestingly, using the network metrics to select locations does not necessarily produce intuitive results. The restaurant chosen to represent locations frequented by nodes with high outgoing node strength (as it had the highest average value) was frequented by less than one-third as many survey respondents as some of the more popular restaurants in the region. Although diseases are partially mitigated, there is a limit to the reduction that can be observed in the total cases for the strongest diseases due to the resource limitation. Therefore, when considering limited-resource vaccine distribution, local administrators should probably follow the traditional priority schedule. However, the identification of the critical locations would be useful for preventative education efforts, real-time epidemic alerts, and emergency resource distribution.
The results of this analysis are intended to help guide responses to a rural epidemic threat. With this, responders can explore the theoretical impact of a limited-resource vaccine distribution by considering various locations for distribution. Social behavior and human interaction (contact) are not exact sciences, so the theoretical mitigation results should be considered possibilities and aspirations rather than deterministic outcomes for any rural county or town.
Conclusions
Starting with a survey of a rural community, we analyzed demographics and built an estimation of the social contact structure. This network was measured and the metrics were correlated with various demographics from the survey. Through the use of an exact model of a stochastic SLIR Poisson process, we have characterized a typical influenza-like outbreak in the community and investigated vaccination strategies. When considering resource-limited vaccine distribution strategies, we identified critical locations for ethically targeting subpopulations with the goal of effective disease prevention. Our aspiration is that this analysis will be a valuable resource both for the rural community on which this study focused and for similar communities in the region. | 2016-01-22T01:30:34.548Z | 2013-03-11T00:00:00.000 | {
"year": 2013,
"sha1": "09d45982e9ed2a3e9a9b21665003dad0b1676481",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0059028&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "be0dc7c92c3ec8b21a854af24686059981522e80",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
231709621 | pes2o/s2orc | v3-fos-license | Density Matrix Reconstruction with Spin-polarized Current
We simulate a feasible experimental scenario that reconstructs the density matrix of an electronic qubit via spin-polarized tunneling currents. Our system is composed of a single-level quantum dot attached to ferromagnetic leads. By properly rotating the leads' magnetizations we can select the spin component to be detected. By counting the number of tunneling events taken in four different magnetic alignments we find the Stokes parameters, which are used as a training data set for a supervised machine learning algorithm. The predicted model allows complete knowledge of the open-system density matrix, including both probability amplitudes and relative phases.
Since the seminal work of Loss and DiVincenzo on quantum computation with single spins in quantum dots,[1] a wealth of experimental and theoretical studies on quantum information processing with quantum bits (qubits) based on the electron spin has arisen in the literature,[2] mainly motivated by the long spin decoherence times for electrons in semiconductors and the ability to measure and control quantum-dot spins. [3] A few examples encompass the implementation of CNOT gates in semiconductor quantum dots using electron spin qubits, [4] the √SWAP two-qubit exchange gate between electron spin qubits, [5] and both CZ and CNOT gates in a double quantum dot structure. [6] Recently, a valley-orbit hybridization mechanism was shown to protect electronic qubits in quantum dots from the effects of charge noise. [7] Qubit state reconstruction is of fundamental importance for quantum technologies, as this characterization allows the study of properties such as the probabilities of possible measurement outcomes, coherences, and entanglement. [8] The field of quantum state tomography emerged in the context of quantum optics, with the development of tomography to reconstruct photonic states. [9][10][11][12] In the context of electron spin, optical tomography schemes have also been successfully applied to characterize single electron spin states. For instance, time-resolved Kerr rotation spectroscopy allows a non-destructive observation of the spin precession in a quantum dot. [13] A tomographic Kerr rotation technique has been developed to observe spin states of optically injected electrons in a semiconductor quantum well. [14] Also, tomographic methods have been applied to two- and three-partite electronic systems. [15,16] Additionally, via shot noise, Jullien et al. reconstructed the wavefunction of single electrons in ballistic conductors, by repeatedly applying Lorentzian voltage pulses that inject on-demand electrons into a conductor. [17] Recently, Bisognin et al. showed fermionic quantum state reconstruction from electrical currents, thus opening the possibilities of quantum-transport-based tomography. [18] The present work arises in this context as a theoretical proposal to reconstruct the density matrix of a single electron spin qubit using spin-polarized quantum transport. Our system consists of a well known spintronic device based on a quantum dot attached to ferromagnetic leads. [19] The idea here is to probe spin-polarized tunneling processes that provide the probability distributions along different spin-quantization axes. With a sequence of identical initializations and subsequent tunneling detections, it becomes possible to reconstruct the entire density matrix of a single qubit. The main idea encompasses the following steps: (1) a numerical calculation of the dynamics of a quantum dot attached to leads, which provides tunneling probability distributions as input to (2) an experimental scenario simulation (data acquisition) that uses stochastically generated data. From the collected counting data we determine (3) the Stokes parameters, which are used as a training set for (4) a supervised machine learning algorithm. With the predicted model we (5) reconstruct the whole quantum state, including probability amplitudes and relative phases.
Consider a closed single-qubit model, with density matrix given by $\rho_0 = \frac{1}{2}\sum_{i=0}^{3} S_i\,\sigma_i$, where $\sigma_i$ are the Pauli matrices and $S_i$ are the Stokes parameters, given by $S_i = \mathrm{Tr}\{\sigma_i \rho_0\}$. The index 0 in the density matrix means closed system. Here, $S_0 = 1$ and $\sigma_0 = I$, where $I$ is a 2×2 identity matrix. Each Stokes parameter corresponds to the outcome of projective measurements, taken along the x, y, or z direction, which is a well known technique to probe photonic qubits. [20] Here we propose to apply the same technique to probe the electronic qubit in quantum dots. To this aim we simulate an experimental scenario based on stochastic tunneling events between the quantum dot and drain leads.
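The relation between the Stokes parameters and the density matrix can be sketched numerically as follows; this is a generic illustration of the closed-system formulas above, not code from the paper.

```python
import numpy as np

# Pauli matrices sigma_0 ... sigma_3
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def stokes_from_rho(rho):
    """S_i = Tr(sigma_i rho)."""
    return np.real([np.trace(s @ rho) for s in sigma])

def rho_from_stokes(S):
    """rho = (1/2) sum_i S_i sigma_i."""
    return 0.5 * sum(S_i * s for S_i, s in zip(S, sigma))

# Example: a spin-up state gives S = (1, 0, 0, 1) and is recovered exactly.
rho_up = np.array([[1, 0], [0, 0]], dtype=complex)
assert np.allclose(rho_from_stokes(stokes_from_rho(rho_up)), rho_up)
```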
We consider an open system composed of a quantum dot attached to ferromagnetic leads fully polarized with their magnetization taken along some specific direction denoted by λs. This coupling is described in our model by a tunneling Hamiltonian of the form $H_T = \sum_{k,\lambda s}\big(V\, c^{\dagger}_{k,\lambda s} d_{\lambda s} + \mathrm{H.c.}\big)$, where $c^{\dagger}_{k,\lambda s}$ creates one electron with wave vector k and spin λs in the drain. The operator $d_{\lambda s}$ annihilates one electron in the dot with spin λs. The parameter V gives the tunnel matrix element between the dot and the lead states. We assume V is independent of both the electron spin and energy (wideband limit). The quantum dynamics of the open system is governed by the von Neumann equation, $\dot\rho_{\mathrm{tot}}(t) = -i[H, \rho_{\mathrm{tot}}(t)]$ ($\hbar = 1$), with $\rho_{\mathrm{tot}}$ being the total density matrix defined in the space $E_D \otimes E_{\lambda s}$, where $E_D$ and $E_{\lambda s}$ stand for the dot and lead subspaces, respectively. More than one lead can be attached to the dot, so we can have, for instance, $E_{\lambda+} \otimes E_{\lambda-}$ as the leads subspace. The total Hamiltonian includes the dot and lead Hamiltonians (free-electron terms), as well as the hopping terms. To find the reduced density matrix ρ(t), which considers only the quantum dot degrees of freedom, we take the partial trace over the lead degrees of freedom, i.e., $\rho(t) = \mathrm{Tr}_{\mathrm{lead}}\{\rho_{\mathrm{tot}}(t)\}$. In this representation, $\rho_{00,00}$ gives the double occupancy probability, $\rho_{01,01} = \rho_{\uparrow\uparrow}$ and $\rho_{10,10} = \rho_{\downarrow\downarrow}$ provide the occupation probabilities for a single electron with spin |↑⟩ and |↓⟩, respectively, taken along the z direction, and $\rho_{11,11}$ gives the zero-occupancy probability. Double occupancy will not be accounted for in our model, as we assume that a single electron will be initialized in the quantum dot and subsequently drained into one of the leads, i.e., the charge flows unidirectionally from dot to lead. This single occupancy can be achieved in the Coulomb blockade regime. [22] Following Ref. [23], the relations between the Stokes parameters and the probabilities $p_{\lambda s}$ of tunneling events for a specific spin orientation are given by Eqs. (2)-(5) in terms of a tunneling rate $\Gamma_0$ and a short enough time interval δ; also, $p_0 = (p_{z+}+p_{z-})/2$. If δ is small enough, the probability that a tunneling event occurs around time $t_i$ in the λs-polarized lead can be approximated as $p_{\lambda s}(t_i) \approx \delta\,\Gamma_0\,\langle\lambda s|\rho(t_i)|\lambda s\rangle$. Here we use the notation |↑⟩ = |+⟩ and |↓⟩ = |−⟩ for spins taken along the z direction. Rotating the basis yields analogous expressions for the x+ and y+ probabilities. Additionally, $p_0 = \delta\Gamma_0(\langle+|\rho|+\rangle + \langle-|\rho|-\rangle)/2$. At this point we describe a stochastic approach used to simulate actual experimental conditions for the characterization of the qubit, where tunneling events are repeatedly counted. From the experimental point of view, to find $p_{\lambda s}$ we need to initialize and read out the quantum system R times. For instance, one can photoexcite the quantum dot with a circularly polarized laser pulse, thus creating a single electron with spin up in the dot. [24,25] This electron will eventually be drained into the lead after some time. This tunneling event can be detected with current technologies. [26] For our simulations, we assume that the system is observed for some time interval T. We divide T into M smaller intervals of size δ, so that T = Mδ. If we repeat the experiment R times, with R being large enough, the probability $p_{\lambda s}$ will tend to $p_i = n_i/R$, where $n_i$ is the number of times a particle was detected around the i-th time interval in the λs-polarized lead.
We now proceed to determine the probability distributions behind the stochastic tunneling processes. Consider a set S of integer numbers ranging from 1 to L, with L ≫ M. We partition this set into M + 1 subsets such that $s_1 \cup s_2 \cup \cdots \cup s_M \cup s_{M+1} = S$ and $s_i \cap s_j = \emptyset$ for i ≠ j. If subset $s_i$ contains $N_i$ elements, we define the $N_i$ such that the sum $N_1 + N_2 + \cdots + N_M = Lf$, with f being a fraction of L, and the remainder lies inside the last subset $s_{M+1}$, i.e., $N_{M+1} = L(1 - f)$. Here the probability of detecting a tunneling event at the i-th time is given by $p_i = N_i/L$. In the numerical implementation we follow a uniform distribution for the random selection of one element x in S; if $x \in s_i$ (i ≤ M), it means that a tunneling event was detected at the i-th time interval. Now we fix the $N_i$ distribution in such a way that expectation values make sense, i.e., we take $N(s_i) = \mathrm{int}\big(L\,\Gamma_0\,\delta\,\langle\lambda s|\rho(t_i)|\lambda s\rangle\big)$, where int denotes the integer-valued function. We have to numerically find $\rho(t_i)$ and use it as an input to determine the distribution $N(s_i)$.
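A minimal sketch of this data-acquisition step is shown below: given the per-bin probabilities computed from ρ(t_i), it draws R initialization/read-out cycles and counts the detections in each time bin. Variable names are illustrative.

```python
import numpy as np

def simulate_counts(p, R, rng=np.random.default_rng()):
    """Draw R repetitions and count detections per time bin.

    p[i] ~ Gamma0 * delta * <lambda_s|rho(t_i)|lambda_s> is the probability of a
    detection in the i-th bin (assumed to sum to at most 1 over all bins).
    """
    M = len(p)
    probs = np.append(p, 1.0 - np.sum(p))   # last entry: no detection (subset s_{M+1})
    outcomes = rng.choice(M + 1, size=R, p=probs)
    counts = np.bincount(outcomes, minlength=M + 1)[:M]
    return counts                            # n_i: detections in the i-th bin
```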
The quantum dynamics of the open system is given, in the standard Born-Markov approximation, via the Lindblad equation [27], $\dot\rho = -i[H_D,\rho] + \sum_{\lambda s}\mathcal{L}_{\lambda s}[\rho]$, where the superoperator $\mathcal{L}_{\lambda s}$ stands for tunneling processes. More specifically we write $\mathcal{L}_{\lambda s}[\rho] = L_{\lambda s}\,\rho\, L^{\dagger}_{\lambda s} - \tfrac{1}{2}\{L^{\dagger}_{\lambda s} L_{\lambda s},\rho\}$, where $L_{\lambda s} = \sqrt{\Gamma_0}\, d_{\lambda s}$. The dot Hamiltonian is a Zeeman-like term along a direction $\hat{n}$; here $\hat{n}$ specifies the magnetic field direction and Ω is a characteristic constant, proportional to the magnetic field strength. This model thus allows coherent spin dynamics in the dot, which will be probed by the tunneling events into the ferromagnetic drains. Figure (1) shows the tunneling events counted in each lead with one of the four spin polarizations, |+⟩, |−⟩, |x+⟩ and |y+⟩. We observe that the counted events present a damped oscillatory pattern along the +, − and y+ spin orientations. This is a result of the spin precession taking place in the yz plane of the Bloch sphere, as we assume a transverse magnetic field taken along the x direction. In particular, the + and − components follow damped Rabi-like oscillations. In contrast, the counted events in the x+ direction present a monotonic exponential decay, as the spin projection in the x+ direction does not oscillate in time. With the number of tunneling processes $n_i$ found in Fig. (1), we determine $p_i$ for each of the four spin components considered.
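A minimal numerical sketch of Lindblad dynamics of this kind is given below. The three-state basis {|↑⟩, |↓⟩, |empty⟩}, the transverse-field form (Ω/2)σₓ on the spin block, and the choice of z-polarized drains are assumptions made for illustration; this is not the authors' implementation.

```python
import numpy as np

Omega, Gamma0 = 1.0, 0.1

# Dot Hamiltonian: (Omega/2) sigma_x acting on the spin block, zero on |empty>
H = 0.5 * Omega * np.array([[0, 1, 0],
                            [1, 0, 0],
                            [0, 0, 0]], dtype=complex)
# Jump operators draining the electron into z-polarized leads: sqrt(Gamma0)|empty><up|, etc.
L_ops = [np.sqrt(Gamma0) * np.array([[0, 0, 0], [0, 0, 0], [1, 0, 0]], dtype=complex),
         np.sqrt(Gamma0) * np.array([[0, 0, 0], [0, 0, 0], [0, 1, 0]], dtype=complex)]

def lindblad_rhs(rho):
    """Right-hand side of the Lindblad master equation."""
    drho = -1j * (H @ rho - rho @ H)
    for L in L_ops:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return drho

def evolve(rho0, dt, steps):
    """Fourth-order Runge-Kutta integration of the master equation."""
    rho, traj = rho0.copy(), [rho0.copy()]
    for _ in range(steps):
        k1 = lindblad_rhs(rho)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2)
        k4 = lindblad_rhs(rho + dt * k3)
        rho = rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(rho.copy())
    return traj

rho0 = np.zeros((3, 3), dtype=complex)
rho0[0, 0] = 1.0                      # initialize with a spin-up electron in the dot
trajectory = evolve(rho0, dt=0.01, steps=2000)
```

Spin-resolved detection probabilities per interval then follow as δΓ₀⟨λs|ρ(t_i)|λs⟩ evaluated along such a trajectory.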
Using these probabilities in Eqs. (2)-(5), we determine the Stokes parameters $S_l(t_i)$ illustrated in Fig. (2). The parameter $S_0$ shows a simple exponential decay, thus reflecting the decoherence imposed by the ferromagnetic leads. It is worth noting that from $S_0$ we can directly extract the tunneling rate. In contrast, $S_1$ presents a dispersion with mean value close to zero. This shows that both spin components x+ and x− have equal probability amplitudes to tunnel out of the dot. Finally, $S_2$ and $S_3$ present well defined damped oscillatory patterns, which reflect the dynamics taking place in the yz plane.
In order to reconstruct the reduced density matrix, we apply a supervised machine learning algorithm. [28] The Stokes parameters $S_l(t_i)$ found in Fig. (2) are used as a set of training data, which the machine learning model interpolates. To this aim we construct (d + 1)-dimensional feature vectors of the form $|t_i\rangle = (1\;\; t_i\;\; \cdots\;\; t_i^{d-1}\;\; t_i^{d})^{T} \in \mathbb{R}^{d+1}$, with i = 1, ..., M. The estimators $|b\rangle$ are obtained via a linear regression model. [29] The dimension d + 1 should be properly adjusted depending on l in order to avoid both underfitting and overfitting problems. The solid black lines in Fig. (2) show the resulting $S_l(t)$ model, which captures the main features of the randomly generated data $S_l(t_i)$.
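A possible implementation of this polynomial-feature linear regression, using scikit-learn, is sketched below with synthetic data; the degree and the data are illustrative assumptions, not the paper's choices.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

def fit_stokes(t_i, S_l, degree):
    """Fit one Stokes parameter with (d+1)-dimensional polynomial features (1, t, ..., t^d)."""
    features = PolynomialFeatures(degree=degree, include_bias=True)
    X = features.fit_transform(t_i.reshape(-1, 1))
    model = LinearRegression(fit_intercept=False).fit(X, S_l)
    # Return a callable model S_hat(t)
    return lambda t: model.predict(features.transform(np.atleast_1d(t).reshape(-1, 1)))

# Example with synthetic data resembling a damped oscillation
t_i = np.linspace(0, 10, 200)
S3_hat = fit_stokes(t_i, np.exp(-0.1 * t_i) * np.cos(t_i), degree=9)
```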
Finally, in Fig. (3) we plug the obtained Stokes parameter models $\hat S_l(t)$ into $\hat\rho = \frac{1}{2}\sum_{l=0}^{3}\hat S_l\,\sigma_l$. In panels (a) and (b) we show the diagonal elements $\hat\rho_{00} = \langle 0|\hat\rho|0\rangle$ and $\hat\rho_{11} = \langle 1|\hat\rho|1\rangle$, respectively. As the system is initialized in |0⟩ (spin up), $\hat\rho_{00}$ starts close to one. As time evolves, both $\hat\rho_{00}$ and $\hat\rho_{11}$ present damped Rabi oscillations due to the presence of the transverse magnetic field. The dotted lines give the corresponding density matrix elements of the closed system, i.e., without leads. In panels (c) and (d) we show the real and imaginary parts of $\langle 0|\hat\rho|1\rangle$, respectively. While the real part presents no dynamics, the imaginary part oscillates, tending to zero as time evolves. Note that with increasing time the off-diagonal element of $\hat\rho$ vanishes due to decoherence. Also, both $\hat\rho_{00}$ and $\hat\rho_{11}$ vanish as the charge is drained to the leads.
In conclusion, we have simulated an experimental scenario of spin-polarized charge tunneling detection in a quantum dot system attached to ferromagnetic leads, which allows the full reconstruction of the quantum state dynamics. By counting the number of tunneling events in four different spin orientations we obtain the Stokes parameters, which are used as a training data set in a supervised machine learning algorithm that predicts the entire behavior of the Stokes parameters. This allows us to reconstruct the density matrix of the system, including both probability amplitudes and relative phases.
Data Availability Statement. The data that support the findings of this study are available from the corresponding author upon reasonable request. [Figure 3 caption: the model $\hat S_l(t)$ is plugged into $\hat\rho(t) = \frac{1}{2}\sum_l \hat S_l(t)\,\sigma_l$; panels (a) and (b) show the diagonal elements $\langle 0|\hat\rho|0\rangle$ and $\langle 1|\hat\rho|1\rangle$, which display characteristic damped Rabi oscillations, with dotted lines showing the analytical result for the closed system; panels (c) and (d) show, respectively, the real and imaginary parts of $\langle 0|\hat\rho|1\rangle$, so the entire density matrix, i.e., both probability amplitudes and relative phase, is recovered. Parameters: φ = Ωt and Γ0 = 0.1Ω.] | 2021-01-27T02:16:11.327Z | 2021-01-26T00:00:00.000 | {
"year": 2021,
"sha1": "726453c2e700256ba4a48bbf3267a062764be2e7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "726453c2e700256ba4a48bbf3267a062764be2e7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
227230826 | pes2o/s2orc | v3-fos-license | COUNTEN, an AI-Driven Tool for Rapid and Objective Structural Analyses of the Enteric Nervous System
Abstract The enteric nervous system (ENS) consists of an interconnected meshwork of neurons and glia residing within the wall of the gastrointestinal (GI) tract. While healthy GI function is associated with healthy ENS structure, defined by the normal distribution of neurons within ganglia of the ENS, a comprehensive understanding of normal neuronal distribution and ganglionic organization in the ENS is lacking. Current methodologies for manual enumeration of neurons parse only limited tissue regions and are prone to error, subjective bias, and peer-to-peer discordance. There is accordingly a need for robust, and objective tools that can capture and quantify enteric neurons within multiple ganglia over large areas of tissue. Here, we report on the development of an AI-driven tool, COUNTEN (COUNTing Enteric Neurons), which is capable of accurately identifying and enumerating immunolabeled enteric neurons, and objectively clustering them into ganglia. We tested and found that COUNTEN matches trained humans in its accuracy while taking a fraction of the time to complete the analyses. Finally, we use COUNTEN’s accuracy and speed to identify and cluster thousands of ileal myenteric neurons into hundreds of ganglia to compute metrics that help define the normal structure of the ileal myenteric plexus. To facilitate reproducible, robust, and objective measures of ENS structure across mouse models, experiments, and institutions, COUNTEN is freely and openly available to all researchers.
Introduction
Gastrointestinal (GI) motility is regulated by the enteric nervous system (ENS; Spencer and Hu, 2020). The majority of the neurons and glia of the ENS are contained within the myenteric plexus, where they are clustered in interconnected ganglia of various sizes (Sternini, 1988). Alterations in ENS structure, gauged by parameters such as neuron density or number of neurons per ganglion, are associated with GI dysmotility (Rao and Gershon, 2018). While ENS structure is thus relevant to health and disease, methods by which it can be comprehensively assessed are limited. Currently there are no objective methods to capture and quantify enteric neurons over large tissue areas or within multiple ganglia. The challenge of objectively quantifying ENS structure at both the neuronal and ganglionic level is particularly acute. While some studies have calculated aggregate myenteric neuronal densities (Gabella, 1987;Santer and Baker, 1988;Kunze and Furness, 1999), they have addressed the ganglionic organization of these neurons (defined here as the number of neurons per individual ganglion, or the number of ganglia per area of tissue) only in limited ways. Recent studies have computed average ganglia size (Gianino et al., 2003;Becker et al., 2018); however, because of the laborious process of manually counting and clustering neurons, these studies have analyzed only isolated tissue areas, thus precluding a more comprehensive understanding of the structure of the adult myenteric plexus. In addition, since myenteric ganglia show a diversity of shapes and sizes, and exhibit varying degrees of proximity to each other, manual enumeration and clustering of neurons into ganglia is prone to error, subjective bias, and peer-to-peer discordance. Hence, there is a critical need for objective and rapid tools and methods for standardized enumeration and classification of myenteric neurons into ganglia over large tissue areas to build a comprehensive understanding of ENS structure.
In this report, we present COUNTEN (COUNTing Enteric Neurons), the first automated, open-source software that uses computer-vision and image-processing methods for high-throughput analysis of widefield microscopy images to reliably identify, enumerate, and cluster myenteric plexus neurons in a rapid and objective manner.
Animals
All animal experiments were conducted in accordance with the protocols that were approved by the Animal Care and Use Committee of Johns Hopkins University in accordance with the guidelines provided by the National Institutes of Health. Nine-week-old littermate male mice from the C57BL/6 (Charles River) background, which were housed in the same cage, were used for the experiment.
Tissue isolation
Mice were anesthetized with isoflurane and killed by cervical dislocation. An abdominal incision was made to perform a laparotomy and isolate intestines that were gently pulled out and placed in a clean Petri dish containing sterile ice-cold Opti-MEM solution. The intestinal contents were flushed using ice-cold sterile PBS after which the terminal ileum, defined as the last 5 cm of small intestinal tissue before the cecum, was dissected out. The longitudinal muscle containing myenteric plexus (LM-MP) tissue from the terminal ileum was peeled out with a sterile clean cotton swab, cleaned in sterile ice-cold OptiMEM, flattened on a dish and fixed with freshly prepared ice-cold 4% paraformaldehyde solution for 30 min. The tissue was washed in sterile ice-cold PBS and used for immunostaining.
Immunostaining
Fixed LM-MP tissues were incubated at room temperature (RT) while shaking in blocking permeabilization buffer (BPB; 5% normal goat serum, 0.1% Triton X-100 in sterile PBS), after which tissues were washed in sterile PBS and incubated overnight with rabbit anti-HuC/D primary antibody (1:750; Abcam) at 16°C with constant shaking. The tissues were removed from the primary antibodies, washed three times (10 min each) with PBS and incubated in Goat anti-Rabbit Alexa Fluor 488 secondary antibody (Invitrogen) at RT for 1 h in the dark. Subsequently the tissues were washed three times (10 min each) with PBS and mounted with Prolong Anti-Fade mounting medium containing nuclear stain DAPI (Invitrogen). Care was taken not to let the tissue fold on itself during the mounting process.
Imaging
Using an EVOS M7000 motorized-stage fluorescent microscope (Thermo Fisher), tissues were imaged with a 20× objective (EVOS AMEP4924; Fluorite LWD, 0.45 NA/6.23 WD). Imaging was performed such that the entire width of the tissue was imaged over variable length. Care was taken not to image folded tissues. Initial concordance measurements of COUNTEN versus manual counting were done using images of individual fields. Subsequently, individual images were stitched together to generate a composite image that was used for COUNTEN analysis for generating the ENS map.
Manual counting
Manual counting of HuC/D-immunostained neurons was performed by a trained technician using the plugin Cell Counter on ImageJ. Special attention was given to counting individual HuC/D labeled cells, without counting the same cell twice. Classification of neurons into ganglia was done by following the rule of defining a cluster of more than or equal to three neurons as a ganglion. Ganglionic boundaries were established as published in earlier studies (Kulkarni et al., 2017). The total number of ganglia per image and number of neurons per ganglia were thus enumerated and tabulated.
Software
The COUNTEN workflow consists of four sequential steps ( Fig. 1): (1) image pre-processing; (2) neuron identification; (3) neuronal clustering into ganglia; and (4) image post-processing for segmentation. This algorithm was implemented in Python using the scikit-image, NumPy, and scikit-learn libraries. The same workflow was used for all the processing described in this report.
Code accessibility
The COUNTEN software is freely available without restrictions on GitHub (https://github.com/KLab-JHU/COUNTEN) and is uploaded as a zip file with this submission (Extended Data 1).
COUNTEN requires two user-specified parameters as input: the pixel density r (pixels/mm) as dictated by the imaging protocol, and the full width at half maximum s (pixels) of a Gaussian smoothing kernel used during preprocessing. The four steps of the workflow are detailed below:

Image pre-processing
This step eliminates noise and staining variations, which might otherwise confound the results. We opted for a simple procedure, which can easily be replicated across different equipment configurations. The RGB image was first converted to a single grayscale channel and processed using an isotropic Gaussian filter. Larger blurs will reduce the contribution of extraganglionic neurons but also make the algorithm more susceptible to false negatives. Smaller blurs may result in insufficient de-noising of the image. We empirically determined that setting the Gaussian full width at half maximum to s = 7 pixels yields neuronal counts highly concordant with those of human raters (see Fig. 2). Hence, we fixed s = 7 for all analyses in this work. Next, we divided the image into nine equal partitions and used the center region to set a threshold between foreground (neurons) and background (GI tract). We use the center region to avoid biasing the threshold based on abnormalities at the tissue edges. The threshold is selected adaptively using Otsu's method (Otsu, 1979), which minimizes the intraclass variance.
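A rough scikit-image sketch of this pre-processing step is shown below. Converting the stated full width at half maximum to a Gaussian standard deviation via sigma = FWHM/2.355 is our assumption, and the snippet is a simplified stand-in rather than the released COUNTEN code.

```python
from skimage.color import rgb2gray
from skimage.filters import gaussian, threshold_otsu

def preprocess(img, fwhm=7):
    """Grayscale conversion, Gaussian smoothing, and center-region Otsu thresholding."""
    gray = rgb2gray(img)
    smooth = gaussian(gray, sigma=fwhm / 2.355)     # FWHM -> sigma (assumption)
    h, w = smooth.shape
    center = smooth[h // 3: 2 * h // 3, w // 3: 2 * w // 3]  # center of the 3x3 partition
    thresh = threshold_otsu(center)                 # adaptive foreground/background threshold
    return smooth, smooth > thresh
```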
Neuron identification
This procedure searches for and returns all local maxima within the image, separated by a distance of at least d_m (pixels). In other words, the peaks are local maxima of a circular neighborhood in the image with a prespecified radius of d_m. When there are multiple peaks within the same neighborhood, the average of these coordinates is returned. We fixed the default value of d_m according to the pixel density r as follows: d_m (in pixels) = 2.5 mm × r. We note that the parameter d_m is accessible to users within the Python source code and can be modified from this default value as needed for other imaging protocols.
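In scikit-image terms, this step can be sketched as follows; `smooth` and `mask` are assumed to come from the pre-processing sketch above, and the snippet is a simplified illustration rather than the actual implementation.

```python
from skimage.feature import peak_local_max

def find_neurons(smooth, mask, d_m):
    """Return (row, col) coordinates of local maxima separated by at least d_m pixels."""
    coords = peak_local_max(smooth, min_distance=int(round(d_m)),
                            labels=mask.astype(int))
    return coords
```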
Clustering into ganglia
We used the density-based spatial clustering of applications with noise (DBSCAN) algorithm (Ester et al., 1996) to cluster the peak locations from step 2 into ganglia. DBSCAN is effective in our application since it does not assume a predefined number of clusters, and allows for unlabeled points (i.e., extraganglionic neurons). The DBSCAN algorithm takes as input two parameters: the minimum number of neurons in a ganglion, N_g, and the minimum separation between ganglia, ε_m. Here, we fixed N_g = 3 (ganglia contain at least three neurons), and this convention was kept constant between COUNTEN-driven neuronal and ganglionic counts and manual counts by the technician. We fixed the default value of ε_m according to the pixel density r as follows: ε_m (in pixels) = 20.6 mm × r. Once again, the parameter ε_m is accessible to users within the Python source code and can be modified from this default setting as needed.
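With scikit-learn, the clustering step can be sketched as below; DBSCAN assigns the label −1 to extraganglionic neurons, and the per-ganglion size tally is our own illustrative addition.

```python
from sklearn.cluster import DBSCAN

def cluster_into_ganglia(coords, eps_m, n_g=3):
    """Cluster neuron coordinates into ganglia with DBSCAN (label -1 = extraganglionic)."""
    labels = DBSCAN(eps=eps_m, min_samples=n_g).fit_predict(coords)
    n_ganglia = labels.max() + 1 if labels.size and labels.max() >= 0 else 0
    sizes = [int((labels == k).sum()) for k in range(n_ganglia)]  # neurons per ganglion
    return labels, sizes
```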
Output segmentation
We binarized the image and used the watershed segmentation algorithm (Barnes et al., 2014) to flood the background pixels. This procedure leaves just the identified ganglia as the final output. The algorithm also colors the ganglia for ease of visualization.
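A simplified version of this post-processing step is sketched below; seeding the watershed with the ganglion labels at the detected neuron positions and flooding a distance transform is our own reading of the procedure, not the released implementation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_ganglia(binary, coords, labels):
    """Label the binarized image so that only identified ganglia remain in the output."""
    markers = np.zeros(binary.shape, dtype=int)
    for (r, c), lab in zip(coords, labels):
        if lab >= 0:                      # skip extraganglionic neurons
            markers[r, c] = lab + 1       # one marker value per ganglion
    distance = ndi.distance_transform_edt(binary)
    return watershed(-distance, markers, mask=binary)
```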
Results
We designed the COUNTEN algorithm to follow a sequence of four steps (Fig. 1): (1) image pre-processing using Otsu's adaptive thresholding method (Otsu, 1979) to separate foreground neurons from background tissue; (2) neuron identification based on peak intensities within a local neighborhood; (3) neuronal clustering into ganglia using the DBSCAN algorithm (Ester et al., 1996); and (4) image post-processing via watershed segmentation (Barnes et al., 2014). We found that these four steps provide high-concordance data on the enumeration of neurons present in tissue and within defined ganglia (Fig. 1).
To test the concordance of COUNTEN over a larger number of images, we analyzed adult murine ileal LM-MP (iLM-MP) tissue. We first evaluated whether COUNTEN correctly identified and enumerated neurons with the same precision as human experts. We analyzed 100 images (n = 100), each representing a random 20× field of view of iLM-MP tissue immunostained with antibodies against pan-neuronal marker HuC/D. Each image depicted different numbers of neurons and we found that COUNTEN achieved high concordance with manual enumeration of neurons performed by human experts (Fig. 2A). In effect, every neuron identified by COUNTEN was also identified by manual counting.
Second, we evaluated the accuracy of the ganglionic clusters identified by COUNTEN. We defined a ganglion as a cluster of three or more neurons and defined ganglionic edges and boundaries by labeling the contours of the HuC/D-expressing cell clusters as previously published (Kulkarni et al., 2017). While manual counting identified 413 ganglia in total, COUNTEN achieved similar performance and identified 411 ganglia across the 100 images, underscoring COUNTEN's reliability. Further, analysis of ganglia size exhibited a similarly high degree of concordance between manual and COUNTEN methodologies (r² = 0.9257; Y = 0.9998X + 1.058; Fig. 2B). Beyond its reliability, COUNTEN offers a tremendous reduction in the time spent on the analysis. For reference, a technician took 2 d to analyze 100 images, while COUNTEN processed the same number of images in <10 min, providing us with a platform that performs rapid, precise, and objective neuronal and ganglionic counts in HuC/D-immunostained iLM-MP tissue.
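For readers wishing to reproduce this type of concordance analysis on their own data, a minimal sketch using scipy is given below; the input arrays are assumed to hold paired per-ganglion neuron counts from the manual and automated pipelines.

```python
# A sketch of the concordance analysis between manual and COUNTEN-derived counts.
from scipy import stats

def concordance(manual_sizes, counten_sizes):
    """Linear fit and r^2 between manually and automatically measured ganglia sizes."""
    fit = stats.linregress(manual_sizes, counten_sizes)
    return fit.slope, fit.intercept, fit.rvalue ** 2

# usage: slope, intercept, r2 = concordance(manual_sizes, counten_sizes)
```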
Third, we used COUNTEN to quantify metrics that are often used to describe ganglionic organization within the iLM-MP. In contrast to our earlier tests of COUNTEN on single fields of view, here we deployed COUNTEN on widefield images generated from multiple stitched contiguous 20× images of HuC/D-immunostained iLM-MP tissue from three adult male littermate mice. We imaged areas of 46.15, 48.83, and 36.34 mm², which we found to contain 15741, 13268, and 9247 neurons within 778, 742, and 475 ganglia, respectively. Using COUNTEN-generated data, we calculated the neuronal density in the three tissues to be 344.33, 288.21, and 269.03 neurons/mm², and the ganglionic density to be 16.86, 15.19, and 13.07 ganglia/mm².
Finally, we used COUNTEN-generated data to study mean ganglia size and ganglia diversity. Ganglia size did not follow a normal distribution in any of the three tissues (Anderson-Darling test, p < 0.0001), and the mean ganglia sizes of the three tissues, computed to be 20.23, 17.88, and 19.47 neurons/ganglion, were not statistically different (Kruskal-Wallis test; Fig. 2C). The frequency distribution of ganglia size across the three tissues showed an inverse correlation between ganglia size and relative abundance (Fig. 2D), which can be summarized by a negative binomial equation: Y ∼ NB(X − 3; 0.58, 0.035).
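The statistical comparisons described above can be sketched as follows, assuming per-ganglion size lists are available for the three tissues; this illustrates the tests used (Anderson-Darling and Kruskal-Wallis) rather than reproducing the study's analysis script.

```python
# A sketch of the statistical comparisons above; illustrative only.
from scipy import stats

def compare_ganglia_sizes(t1, t2, t3):
    # Anderson-Darling statistic for normality of each tissue's ganglia-size distribution
    normality = [stats.anderson(t, dist="norm") for t in (t1, t2, t3)]
    # Kruskal-Wallis test for a difference in ganglia size across the three tissues
    kw_stat, kw_p = stats.kruskal(t1, t2, t3)
    return normality, kw_stat, kw_p
```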
Discussion
The ENS has been previously described at an anatomic, physiological, and transcriptomic level (Costa et al., 2000;Spencer and Hu, 2020;Morarach et al., 2021), yet a detailed, overarching structural analysis of ENS plexuses within defined intestinal tissue areas has not been performed. This can be attributed to the lack, until now, of reliable tools and methods for enumerating large numbers of enteric neurons from distinct tissue segments and clustering them into ganglia.
In this report, we describe the generation and methodology for use of COUNTEN, the first open-source tool for rapid, automated, and objective enumeration and clustering of ENS neurons. Prior tools such as ImageJ, while providing modules and macros that allow for correct enumeration of neurons, require human intervention to identify and cluster neurons into ganglia, making the procedure slow and subjective (Keating et al., 2013). COUNTEN, on the other hand, relies solely on computer-vision algorithms to parse a large number of images in a short time, and applies a single definition for neuronal clustering equally to all the images to produce rapid and objective neuronal and ganglionic enumeration.
To showcase the ability of COUNTEN, we compared human manual counting using an already available tool, ImageJ, with COUNTEN to analyze 100 images of immunostained iLM-MP tissue. While COUNTEN took a fraction of the time that a trained technician took to parse through the images, we found a high degree of conformity between manual counting and COUNTEN-generated data (Fig. 2B), suggesting that the gains in speed with COUNTEN were not associated with a loss in accuracy with regard to the identification and enumeration of neurons and their classification into ganglia.
The rapidity with which COUNTEN analyzes immunostained images of iLM-MP tissue allowed us to generate high-fidelity data on neuronal numbers and ganglia size from a large region of tissue. We analyzed these data to show how COUNTEN can help the scientific community construct large scale statistical maps of the ENS. We used COUNTEN-generated data to establish that ganglia size is conserved across ileal tissues from sex-matched littermate mice. We further used these data to establish the parameters of a negative binomial equation that defines the frequency distribution of ileal ganglia size. Along with neuronal and ganglionic density, this equation provides the metrics of ganglionic organization in the iLM-MP and can be used as a reference for future studies. These analyses show how rapid and objective neuronal enumeration by COUNTEN can help the scientific community define ENS structure in animal models of health and disease.
While COUNTEN is a significant advance in our ability to interrogate ENS structure, it is a first-generation tool and has a few limitations. The performance of COUNTEN depends on the robustness and homogeneity of immunostaining and imaging, which we have standardized (see Materials and Methods). Further, while ganglia are three-dimensional (3D) structures, the current algorithm does not operate on 3D image data. However, the high concordance between COUNTEN and human-generated data suggests that the data generated by COUNTEN are as accurate as those generated by a human observer. Finally, COUNTEN is currently limited to interrogating HuC/D-immunolabeled neurons and does not have the ability to query two different immunolabels. However, to facilitate rapid and reproducible measurement of ENS structure within the broader ENS community and to propel the development of this tool in an open-source manner, COUNTEN is freely and openly available to researchers. As an open-source, freely available tool under active development, the authors and the scientific community alike can add modules to the existing program that may increase the future functionality of COUNTEN.
"year": 2021,
"sha1": "5c9df88b97c3b2f05c8b26c9a19fee29feade0e0",
"oa_license": "CCBY",
"oa_url": "https://www.eneuro.org/content/eneuro/8/4/ENEURO.0092-21.2021.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a8504ff7a0941c00f09189eb9be6977fc7b31bf",
"s2fieldsofstudy": [
"Computer Science",
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
MOTHERHOOD MADE EASIER: THE NANNY CARE MOBILE APPLICATION FOR POST-NATAL WOMEN
The Nanny Care mobile app is a mobile application developed for post-natal women. It aims to address the challenges post-natal women face in accessing reliable and safe confinement care services, and it also gives confinement ladies an opportunity to promote and provide their services to targeted customers. The study follows the Extreme Programming (XP) methodology, which focuses on iterative development and continuous customer feedback. The app allows post-natal women (users) to register and log in, view confinement ladies' profiles, book services, view past bookings, and rate confinement ladies. A usability evaluation showed that users found the app helpful, easy to use, and efficient. The users' feedback highlights the app's strengths and suggests areas for improvement, such as additional features and better error handling. Overall, the Nanny Care mobile app has the potential to enhance post-natal women's confinement experience, empowering confinement ladies to offer their services and providing a reliable platform for accessing and providing quality care.
INTRODUCTION
The childbirth process holds tremendous significance across cultures worldwide and is accompanied by various culturally specific rituals and customs. Becoming a mother brings about profound changes in women's lives, rendering them exceptionally vulnerable during the post-natal (after birth) period. In most Eastern societies, an established tradition called "confinement" refers to this period after childbirth. Commencing after birth, the tradition entails complete and extensive support provided by family and caregivers within the home. The primary goal of such assistance is to ensure adequate rest and proper dietary intake for the mother, while newborn care obligations are fulfilled and the relevant confinement practices designed to promote warmth are observed (Cleveland Clinic, 2022).
Despite living in urban areas or being geographically separated from their families, post-natal women frequently find themselves lacking the support of family members or access to a professional caregiver known as a confinement lady. This situation forces them to seek alternative ways of obtaining proper post-natal care and support, one common approach being social media platforms such as Facebook and Instagram. However, caution should be exercised, since relying on these channels may pose reliability and safety risks. Disappointingly, there have been documented incidents in which mothers received substandard care from hired professionals in confinement centres. For instance, a recent case in Selangor involved a confinement lady who actively hindered a mother's breastfeeding efforts and forcefully introduced formula to a newborn baby who was only a few days old (Bernama, 2021).
To address these challenges, 28days.com was introduced as the first online confinement care services platform in Malaysia, and it has proven to be a valuable resource for mothers (28Days, 2019). Through this platform, mothers can choose the services they desire and have access to trustworthy ratings and reviews. However, it is essential to recognise that a website alone may not completely ease mothers' concerns in this technological age. Hence, the mobile application proposed in this study would allow mothers to easily book confinement services and provide feedback afterwards. In short, developing this application would prevent mothers from unknowingly selecting inappropriate services or confinement ladies, giving them peace of mind and ensuring a safe and dependable confinement experience.
BACKGROUND AND RELATED STUDIES
This section discusses the background and relevant studies concerning existing systems for finding confinement ladies. These systems span diverse channels such as Instagram, websites, and applications. The primary goal is to understand the current landscape of platforms facilitating the search for confinement ladies and to identify their advantages and disadvantages.
Finding a confinement lady matters greatly to post-natal women, particularly those residing in urban settings or those far from supportive family members during their recovery. Across various cultural backgrounds, great importance is placed on safeguarding new mothers during their vulnerable period after childbirth, giving rise to the widespread practice of self-imposed isolation termed "confinement". These practices involve providing comprehensive assistance and care within the home, prioritising the mother's well-being through adequate rest, proper nutrition, and adherence to the specific customs prescribed for this period.
Traditionally, confinement ladies were found through personal networks, family connections, or recommendations from trusted sources. However, as technology has advanced and society has changed, the search for confinement ladies has transitioned to digital platforms, including Instagram, websites, and applications. Specifically, confinement ladies can be found through Instagram, dedicated websites such as "Pantang Plus", and applications such as "Confinement-Mommy Genius", as discussed in the following subsections.
A. Instagram
Typically, using Instagram as a digital platform to find confinement ladies involves leveraging the platform's popularity and broad reach. The results obtained from an Instagram search for confinement ladies are presented in Figure 1.
Figure 1. Results from an Instagram search
One of the advantages of Instagram to businesses and entrepreneurs is the opportunity to expand their operations while engaging in marketing activities.This is done by sharing feedback from previous customers on their profiles, indirectly providing users with valuable insights.Furthermore, businesses can showcase their confinement lady services through real videos on Instagram.However, it is important to note that Instagram does not offer detailed ratings for each confinement lady, which may limit the options available and result in a smaller selection for users.
B. Website "Pantang Plus"
An alternative approach has been investigated by utilising a dedicated online platform called "Pantang Plus" (PantangPlus, 2023), shown in Figure 2.
Figure 2. Results from the Pantang Plus website
This platform specialises in connecting users with confinement ladies nearby. It offers extensive information on the various packages, costs, and services for mothers and babies. Users can make more informed decisions during the selection process by accessing details on service offerings and pricing options through this platform. Nevertheless, a drawback of this website is that it does not disclose the specific qualifications or experience of the confinement ladies, making it difficult for users to assess their capabilities thoroughly. Moreover, the inability to view feedback or ratings left by previous clients further limits users' capacity to make informed choices.
C. Application "Confinement-Mommy Genius"
Another approach that has been studied involves the "Confinement Mommy Genius" application (Khai li, 2020). This notable app serves as a valuable resource for individuals seeking guidance during their confinement period while also offering a platform for searching for reputable confinement ladies, as illustrated in Figure 3.
Figure 3. Result from the "Confinement Mommy Genius" application
This well-designed application allows users to conveniently browse the various businesses on its platform and gather detailed information about their services. Apart from being an informative tool, the application goes further by providing additional features such as food delivery options and recipe suggestions tailored towards enhancing user convenience throughout the confinement journey. However, one of its limitations is the absence of a direct booking functionality for confinement ladies. Furthermore, users cannot provide reviews or feedback about their experiences with these confinement ladies, a missed opportunity to assist other potential users in need of recommendations. Based on the above three digital platforms, a summary of the advantages and disadvantages of each is provided in Table 1. In brief, digital platforms such as Instagram, dedicated websites like "Pantang Plus" and applications such as "Confinement Mommy Genius" have emerged in the quest for confinement ladies. These platforms offer a range of benefits and limitations for individuals seeking confinement services, and little work has been done to enhance them specifically for post-natal women users.
Although Instagram offers businesses a platform to showcase their services and gives users access to feedback and videos, it lacks comprehensive ratings and may offer limited options. Dedicated websites like "Pantang Plus" offer complete information about confinement services but do not disclose specific details about the confinement ladies or furnish feedback from past clients. Applications like "Confinement Mommy Genius" provide various resources to facilitate the search process but lack features for direct bookings and user feedback. In short, these digital platforms need to provide users with transparency regarding reviews and feedback. Gaining insight into these existing systems is therefore crucial when developing an innovative, easy-to-use platform for those searching for a suitable confinement lady.
METHODOLOGY
The project's development has chosen the Extreme Programming (XP) framework as its methodology, widely used in designing digital platforms (Amir Azimi, 2017).
XP is specifically designed to cater to the distinct needs of software development by small teams facing ambiguous and evolving requirements.This approach offers numerous advantages, such as rapid development, user-friendliness, safety, flexibility, predictability, a scientific approach, and a pleasant development experience.The efficient development process within the XP framework is depicted in Figure 4 as follows.
Figure 4. Extreme Programming (XP) methodology
Generally, five iterative steps are involved: planning, designing, coding, testing and listening, which are described below.
Step 1: Planning. In this initial stage, user stories describing the users' envisioned results were gathered to determine the desired outcomes. These requirements are broken down into the iterations necessary to complete all required functionalities. The proposal is the deliverable of this stage.
Step 2: Designing. This step builds on the requirements collected during planning, establishing a solid foundation for the system design. Designing artefacts such as context diagrams, entity relationship diagrams, data flow diagrams, and data dictionaries helps clarify how information should flow through the system. These graphical representations guide interface creation.
Step 3: Coding. During this stage, specialised XP techniques such as continuous integration come into play when writing the actual code. The code is reviewed to ensure alignment with the requirements and to identify necessary changes.
Step 4: Testing. Multiple testing processes are applied to identify potential issues or bugs. In addition, the system is carefully evaluated to verify that it meets user requirements, and any faults discovered are promptly addressed and resolved. Notably, in XP, testing takes place during the development phase rather than afterwards.
Step 5: Listening. An integral principle of XP is continuous customer interaction through feedback loops. The feedback gathered helps refine requirements and forms a foundation for new designs. This cycle repeats, facilitating continuous improvement and adaptation throughout the development process.
To simplify, adopting the XP framework has led to numerous benefits to the project due to its iterative and customer-centric approach.This methodology guaranteed that the project had been advanced through various stages, including planning, designing, coding, testing, and listening.Consequently, it enabled a quick adaptation to change the requirements and continuous user feedback.Such an iterative and collaborative approach has significantly improved the overall quality and success of the software development process.
Design and Development of Nanny Care Mobile App for Post-Natal Women
This section elaborates on the design and development of the Nanny Care mobile app for post-natal women. The development process adhered to the five steps of the XP methodology described above. The section is divided into two parts: (1) exploring the requirements of the Nanny Care mobile app and (2) building a prototype of the application to showcase the identified requirements.
To develop the mobile app, the following requirements-gathering process was carried out. It involved two methods: (1) gathering input from potential users, who provided valuable insights and guidance, and (2) researching existing systems and apps through online sources using the Google search engine. The main goal of this process was to collect essential information and extract requirements based on the features and functionalities of existing mobile applications.
Method 1: Input from potential users
Potential users generously offered valuable insights and recommendations regarding the desired features and functionalities of the mobile app. In-depth discussions were conducted to gather comprehensive information on specific requirements such as user registration, login functionality, and others.
Method 2: Research on Existing Systems and Apps
To enhance the requirements-gathering process, a thorough investigation was carried out on the various nanny-care mobile applications currently accessible through online sources. This analysis allowed the extraction of additional requirements and features commonly observed in widely used existing applications.
From these two methods, the requirements that served as the foundation for determining the functionalities and features of the Nanny Care mobile app were established. Table 2 presents a comprehensive list of the essential requirements and their priorities.
Table 2
List of Requirements for Nanny Care mobile app for Post-Natal Women
NC11: Login into the user profile and log out.
NCA1-1: The user (postnatal women/confinement lady) should log into the administrator account.
SM 1_2: The user (postnatal women/confinement lady) should be allowed to reset the password.
NCA1_3: Users (postnatal women/confinement ladies) can log out anytime.
NCA2: Register user profile. View registered accounts.
NCA2_1: Users (postnatal women/confinement ladies) should register their profile with a profile picture included.
NCA3: View confinement lady.
NCA3_1: Users (postnatal women) view all confinement ladies.
NCA4_1: Users (postnatal women) view the confinement lady's profile details.
NCA5_1: Users (postnatal women) should be able to book selected confinement lady's services.
NCA6_1: Users (postnatal women) should be able to view their past bookings and details.
NCA7_1: Users (postnatal women) should be able to rate their confinement lady after the whole process.
NCA8_1: The user (confinement lady) should be able to view all the bookings they received.
NCA8_2: The user (confinement lady) should be able to accept or decline the bookings.
NCA9: Create services.
NCA8_1: The user (confinement lady) should be able to create the services with the name and the price per day received.
NCA8_2: The user (confinement lady) should be able to edit and update the services.
To convert the requirements outlined in Table 2 into functional elements of a computer system, a visualisation and modelling process using suitable methods and tools is necessary. This study selected the Unified Modeling Language (UML) as the modelling technique due to its effectiveness in such applications. The approach comprised three primary types of diagrams: use case diagrams, activity diagrams, and class diagrams, which together depict the behavioural and structural facets of the designed mobile app. Within the scope of the current study, the use case and activity diagrams, drawn using the draw.io application, were chosen to portray the underlying system's structure.
The essential use cases identified are depicted in Figure 5: Login into user profile and log out, Register user profile, View registered accounts, View confinement lady, View profiles, Book confinement lady, View past bookings, Rate confinement ladies, Booking approval, and Create services. These use cases cover the main functionalities and features of the app.
RESULTS AND DISCUSSION
A usability evaluation was conducted with ten respondents: five confinement ladies and five post-natal women. The respondents were contacted through personal meetings and calls. The assessment was conducted using a high-fidelity prototype of the Nanny Care mobile app for Post-Natal Women and a post-task questionnaire.
The post-task questionnaire consists of 20 items in five sections. Section A collected demographic information, while Section B assessed the respondents' opinions on the application's interface design using a five-point linear scale (1 representing "Strongly Disagree" and 5 representing "Strongly Agree"). Section C evaluated the application's usability (functionality) using the same scale as Section B. In Section D, the respondents were asked about the application's ease of use using linear-scale questions.
Finally, Section E collected general feedback from the respondents.
The respondents performed the following step-by-step evaluation procedure:
(2) Having the respondents interact with the application as the experimental procedure outlines.
(3) Having the respondents complete the post-task questionnaire.
Each user type accounted for 50% of the respondents, with five individuals in each category. The age of the respondents ranged from 21 to 50 years old. Most participants, representing 50% of the total, were between 31 and 40. The next largest group comprised individuals between 41 and 50, making up 30% of the respondents. Only two respondents fell into the age range of 21 to 30, accounting for 20% of the total. Notably, no respondents were in the age range of 11 to 20. These demographics illustrate a diverse sample that includes different user types and age groups, allowing for a comprehensive evaluation of the Nanny Care mobile app's usability for post-natal women.
Next, feedback on the interface design of the Nanny Care mobile app was collected. According to the data tabulated in Table 3, a significant proportion of the participants expressed contentment with both the colour and the layout of the application, at 70% each. Moreover, an overwhelming majority, 90%, found the text effortlessly legible. Furthermore, a considerable proportion of respondents (70%) deemed the app's design user-friendly, and 80% agreed that the chosen font type and size were appropriate. These observations collectively demonstrate an overall positive reception while acknowledging minor recommendations for enhancement in specific aspects. The responses from Sections C, D, and E of the post-task questionnaire were analysed to ascertain the respondents' opinions, especially regarding the functionality and user-friendliness of the Nanny Care mobile app for Post-Natal Women.
The data extracted from Section C demonstrated an overwhelmingly positive reaction towards usability in terms of functionality. Respondents expressed that the app proved valuable in daily use (90% agreement), assisted them in becoming more efficient (60% agreement), and had potential for saving time (80% agreement). Additionally, respondents recognised that it simplified tasks (90% agreement) and met their expectations (80% agreement), as shown in Table 4. Meanwhile, Section D focused on ease of use, with respondents holding a favourable view, as shown in Table 5. The respondents found navigation through the app straightforward (100%), resulting in quick task completion (70%), with an easy error-recovery option (100%). The majority also indicated that using the app required minimal effort (80%) and that they rarely needed technical support (80%). Overall, these data showcase a high level of satisfaction with both the functionality and the ease of use of the app, as reported in Sections C and D, respectively. This suggests a positive overall response from respondents, who expressed substantial contentment with the Nanny Care mobile app for Post-Natal Women as valuable and straightforward to operate. Meanwhile, Section E provides the general feedback collected from the respondents, as tabulated in Table 6. The results show that the perceived quality of the app is good, achieving 80%, and the app is considered valuable and easy to use. In addition, users expressed satisfaction with its creation and management features, finding the registration, login, search, and reminder-setting processes simple and useful. The app's intuitive interface and attractive appearance were well received, and users rated the overall quality positively and praised its convenience for booking and accessing services. Overall, the feedback provided valuable insights for enhancing functionality and user experience in the future.
CONCLUSION
The Nanny Care mobile app for Post-Natal Women has undergone assessment and emerged as a valuable and user-friendly solution. The feedback gathered from respondents indicated their contentment with various aspects of the app, including its visual interface, smooth navigation experience, and intuitive usage, which made it unnecessary for them to rely on written instructions. Furthermore, they found it easy to retain information about how to use the application effectively. Nonetheless, users highlighted areas requiring improvement, particularly regarding additional features and increased availability of learning materials and error-handling resources. Despite these suggestions for enhancement, most respondents acknowledged the convenience of the app's features and its clear organisation during their evaluation, while allowing room for further development in certain areas. Users displayed overall satisfaction with both the functionality and the aesthetic appeal of the Nanny Care mobile app, and the feedback collected will prove invaluable when making future updates and enhancements.
In summary, the primary purpose behind developing the Nanny Care mobile app was a commitment to delivering an efficient and dependable solution tailored specifically to mothers seeking professional confinement care services. The central idea was to provide convenience by streamlining the process of finding and booking trusted confinement ladies while mitigating risks related to fraud or abuse through a user-friendly interface and intuitive navigation. The app enhances post-natal women's overall confinement experience, supporting their peace of mind and physical security. The valuable input received from users will serve as a strong foundation for future updates and improvements, enabling the app to better understand and fulfil the needs and preferences of its user base. The true potential of the Nanny Care mobile app lies in its ability to transform post-natal women's access to high-quality confinement care services, rendering it an invaluable tool for mothers and the entire post-natal care industry.
Figure 5. A Nanny Care mobile app for Post-Natal Women use case diagram
Figure 6. A Nanny Care mobile app for Post-Natal Women activity diagram (postnatal women only)
Figure 8. Interface for login (a) and sign up (b)
Figure 9. Interface for registration (a) and a registered profile (b)
Table 1. Summary of advantages and disadvantages of the existing digital platforms
Table 3. Results for Section B: the interface design of the Nanny Care mobile app
Table 4. Results for Section C: the usability (functionality) of the Nanny Care mobile app
Table 5. Results for Section D: the ease of use of the Nanny Care mobile app
Table 6. Results for Section E: the general feedback on the Nanny Care mobile app
"year": 2023,
"sha1": "3ce87ae78066b21b9eedb02dec37cbee31d72936",
"oa_license": "CCBY",
"oa_url": "https://e-journal.uum.edu.my/index.php/jdsd/article/download/21407/4101",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4776be2d04a8d464d6fcd3ca9c5e16e38d983636",
"s2fieldsofstudy": [
"Sociology",
"Computer Science"
],
"extfieldsofstudy": []
} |
Extinction Corrected Star Formation Rates Empirically Derived from Ultraviolet-Optical Colors
Using a sample of galaxies from the Sloan Digital Sky Survey spectroscopic catalog with measured star-formation rates (SFRs) and ultraviolet (UV) photometry from the GALEX Medium Imaging Survey, we derived empirical linear correlations between the SFR to UV luminosity ratio and the UV-optical colors of blue sequence galaxies. The relations provide a simple prescription to correct UV data for dust attenuation that best reconciles the SFRs derived from UV and emission line data. The method breaks down for the red sequence population as well as for very blue galaxies such as the local ``supercompact'' UV luminous galaxies and the majority of high redshift Lyman Break Galaxies which form a low attenuation sequence of their own.
Introduction
Although directly tracing young massive stars, the ultraviolet (UV; λ ∼ 912 − 3000Å) luminosity of a galaxy is not a straightforward measure of its current star-formation rate (SFR), nor is in fact any other observable related to new born stars. In particular, the galaxy's dust content and past star-formation history (SFH) have a significant influence on the interpretation of the observed UV flux in terms of current star production. A large, sometimes dominant fraction of the UV emission may be obscured by dust and reprocessed at far infrared (FIR) wavelengths. The UV spectral slope is commonly used to estimate this fraction (Meurer et al. 1999) but it is strongly affected by the SFH (Kong et al. 2004). Also because UV emitting stars live long enough for successive generations to coexist, the SFH over the past few hundred Myrs must be known to translate the dust corrected UV flux into a more instantaneous SFR, such as derived from the galaxy's Hα emission. In the case of a constant SFR and in the absence of dust, the UV luminosity to SFR ratio reaches a plateau after ∼ 10 8 yrs (e.g. Kennicutt 1998), but a strong starburst will cause the UV luminosity to scale differently with the SFR. Interpreting the UV emission of early-type galaxies is also less straightforward due to contamination by older stars (Ree et al. 2007).
Dust obscuration and SFH may be estimated with the help of additional data (e.g. the Balmer decrement or far-infrared emission for the dust, the Balmer break for the SFH) and of theoretical assumptions. However such additional information is not always available or in fact acquirable, in particular at high redshifts where SFRs are generally derived from UV and/or infrared (IR) photometry with rather large uncertainties. New near-IR spectrographs on 8-10m telescopes are now making it possible to detect Hα emission and/or continuum breaks at z > 2 ( van Dokkum et al. 2005;Erb et al. 2006b;Kriek et al. 2006a,b) but the technique is still limited to prominent features. Detecting Hα at z ∼ 2 implies a SFR greater than a few M ⊙ yr −1 (Erb et al. 2006b), which is not representative of the whole population (Kriek et al. 2006b). Optical images which pick up the rest-frame UV at z > 2 remain the easiest data to obtain.
Here we use medium deep UV photometry from the Galaxy Evolution Explorer (GALEX) and the wealth of additional data provided by the Sloan Digital Sky Survey (SDSS) to derive simple empirical relations between the observed UV luminosity and the SFR of local star-forming galaxies. SFRs were derived for tens of thousands of SDSS galaxies using their emission lines and state-of-the-art models including a consistent treatment of the dust from the UV to the far-IR . We assume these SFRs to be the best possible estimates at the present time, given the quality of the spectro-photometric data and the tested reliability of the models, and present an empirical method to recover them from the UV luminosity of galaxies using their UV−optical colors. We compare our relations with existing methods, in particular attenuation estimates based on the slope of the UV continuum which are commonly used, and investigate their limitations both locally and at high redshift.
The data are summarized in Section 2. In Section 3 we review the relation between the UV luminosity and the SFR of star-forming galaxies as well as several published methods for estimating their UV attenuation. In Section 4 we present simple empirical color relations that best reconcile the UV data with the SFR estimates based on emission line measurements, and discuss their limitations. Our conclusions are presented in Section 5. Throughout the paper we assumed a flat ΛCDM cosmology with H 0 = 70 km s −1 Mpc −1 , Ω M = 0.3 and Ω Λ = 0.7, and a Kroupa IMF (Kroupa 2001).
Data and derived physical quantities
We select galaxies from the Sloan Digital Sky Survey (SDSS) spectroscopic catalog (Data Release 4; Adelman-McCarthy 2006) with NUV and FUV photometry from the GALEX Medium Imaging Survey (Internal Release 1.1; Martin et al. 2005, Morissey et al. 2005. The UV filters have effective wavelengths of 1528 and 2271Å respectively. The Medium Imaging Survey (MIS) has a 5 σ detection limit of 22.7 (AB magnitude) in both filters for a typical exposure. This magnitude limit corresponds to a cut in magnitude error of ∼ 0.1 in the NUV band and ∼ 0.2 in the FUV band (Bianchi et al. 2007, their Fig. 2). Our primary sample consists of 23400 SDSS galaxies with r < 17.8, z > 0.005, measured Hα emission, aperture corrections less than 1.3 dex (defined as the ratio of the total SFR to the SFR estimated within the fiber; see below), and GALEX coverage in the NUV band. Adding FUV coverage reduces the sample to 17500 galaxies due to occasional failures of the GALEX FUV detector. Galaxies flagged as AGNs in the SDSS MPA/JHU DR4 value-added catalogs 1 have been excluded.
The physical properties of SDSS galaxies were analyzed in detail by Kauffmann et al. (2003a,b); Tremonti et al. (2004) and Brinchmann et al. (2004) (hereafter B04) among others. In particular, the full likelihood distributions of their SFRs were derived by fitting all strong emission lines simultaneously using the Charlot & Longhetti (2001) models, following the methodology of Charlot et al. (2002) (B04). Dust is accounted for with the Charlot & Fall (2000) multicomponent model which provides a consistent treatment of the attenuation of both continuum and emission line photons. The dust attenuation is based on the Hα/Hβ ratio to first order but is really constrained by all the lines. B04 also devised a method for estimating the SFR of early-type galaxies with no detectable Hα emission from their 4000Å break index but we excluded those from our sample. We use the medians of the SFR distributions and consider these values, noted SFR_e (for emission lines following B04), to be the best currently available estimates of the SFR given the quality of the data and the technique used to derive them. Uncertainties are discussed in detail in the original paper.
Other quantities such as 4000Å break indices and stellar masses (Kauffmann et al. 2003a) are also available from the SDSS MPA/JHU DR4 value added catalogs 1 . The stellar mass is defined as the total mass of stars formed over the lifetime of the galaxy. The 4000Å break index -D n (4000) -is defined as the ratio of the average flux density F ν in the narrow bands 3850−3950Å and 4000−4100Å following Balogh et al. (1998). It is a relatively dust insensitive measure of a galaxy's SFH, equivalent to the ratio of the SFR averaged over the last ∼ 10 8.5 years to the SFR averaged over > 10 9 years (Johnson et al. 2007). The D n (4000) distribution is strongly bimodal around D n (4000)∼ 1.6, dividing galaxies into the well known 'red sequence' of early-type, old star dominated galaxies (D n (4000) 1.6) and the 'blue cloud' of late-type galaxies with recent star formation (D n (4000) 1.6) (Kauffmann et al. 2003a;Strateva et al. 2001). In the following we refer to these 2 populations as simply red and blue galaxies.
Our final 2 samples consist of the 20800 galaxies in the primary sample (89%) that have been detected by GALEX in the NUV band (SDSS+NUV sample), and of the 14900 galaxies in the primary sample with additional FUV coverage (85%) that have been detected both in the NUV and FUV bands (SDSS+NUV+FUV sample). In the following we use the larger, SDSS+NUV only sample whenever FUV fluxes are not explicitly needed. These samples are strongly biased against red sequence galaxies but complete for blue galaxies: ∼ 98% of D n (4000)< 1.6 galaxies are detected in the NUV band (and in the FUV band when both are available) against ∼ 72% of D n (4000)> 1.6 galaxies (∼ 52% in both UV bands when both are available). The average magnitude error is 0.03 in the NUV band and 0.07 in the FUV band for the blue population; 0.08 in the NUV band and 0.16 in the FUV band for the red population.
We derive absolute magnitudes in all the bands from the redshift and the Galactic extinction-corrected SDSS+GALEX photometry using the kcorrect v4 1 software of Blanton & Roweis (2007). In order to minimize the uncertainties on the k-corrections, the magnitudes are k-corrected to the mean redshift of the SDSS sample (z = 0.1) and are noted 0.1 mag where mag = f or n for the GALEX FUV and NUV bands, g, r, i and z for the SDSS bands. The k-correction at redshift z is by definition −2.5 log(1 + z) in all bands for all galaxies and deviates from this value towards both ends of the redshift range (0.005 < z < 0.28). In the UV bands, this deviation is less than 0.1 magnitude for 95% of the galaxies. Figure 1 shows the distribution of the SDSS+NUV sample in the 0.1 (n − r) vs D n (4000) plane. The 0.1 (n − r) color distribution is strongly bimodal (Wyder et al. 2007) with 0.1 (n − r) = 4 defining roughly the same boundaries as D n (4000)= 1.6 between the red and blue populations. Galaxies with 0.1 (n − r) < 4 and D n (4000)< 1.6 (the 'blue cloud') represent 70% of the sample (81% of the SDSS+NUV+FUV sample); galaxies with 0.1 (n − r) > 4 and D n (4000)> 1.6 (the 'red sequence') represent 22% of the sample (10% of the SDSS+NUV+FUV sample). The solid lines are polynomial fits to the dust/color/SFH relation derived by Johnson et al. (2007) (see Section 3) for given values of the FUV attenuation as marked in the figure. The fits are good for galaxies with D n (4000)< 1.6 but less reliable for red sequence galaxies (see Johnson et al. 2007 for details). The model illustrates how broad band colors depend on both the SFH and the amount of dust attenuation.
Calibration
The SFR measured from the UV emission is usually written as SFR_UV = L_UV/η_UV (1), where L_UV is the observed UV luminosity and η_UV converges to η⁰_UV for a constant SFR. Scaled to a Kroupa IMF, the most commonly used factor is log(η⁰_UV) = 28.02 (Kennicutt 1998). It assumes that the UV spectrum is nearly flat in L_ν over the wavelength range 1500-2800Å. Using the Bruzual & Charlot (2003) stellar population synthesis models with similar assumptions (solar metallicity, a constant SFH and a Kroupa IMF), S. Salim (private communication) derived slightly higher factors for the GALEX filters: log(η⁰_FUV) = 28.09 and log(η⁰_NUV) = 28.08. They are little sensitive to metallicity and to the SFH provided the SFR has been nearly constant in the last 10⁸ years. Very young starburst galaxies would significantly deviate from a constant SFR model and require a higher value of η_UV, while the UV emission of early-type galaxies is contaminated by older stars. For an optically selected sample with a mix of SFHs and metallicities similar to the SDSS/GALEX sample defined in the previous section (the average metallicity of which is 0.8 Z⊙), Salim et al. (2007) suggest using their median conversion factor log(η_FUV) = 28.14.
We assume this calibration for both UV bands in the following. Figure 2 shows SFR_UV against SFR_e for the NUV and FUV bands (left and right panel respectively), assuming no dust correction for the UV luminosities. The dashed green line denotes equality of SFR. Blue and red dots distinguish between blue and red galaxies defined as having 0.1(n − r) < 4 and > 4 respectively. The histograms in inset show the distribution of the SFR_UV to SFR_e ratios for the blue and red populations. As expected from uncorrected luminosities, SFR_UV underestimates the 'true' SFR, including for the red population although part of their UV luminosity is unrelated to the current SFR. The scatter is large in both bands, indicating a large range of UV attenuations for a given SFR or a given UV luminosity. There is also a clear trend with SFR_e in the sense that galaxies with higher SFR tend to require a larger dust correction, as was first observed by Wang & Heckman (1996).
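For illustration, a minimal sketch of Eq. 1 with the adopted calibration is given below; the luminosity units (erg s⁻¹ Hz⁻¹), the function name and the example numbers are assumptions chosen to be consistent with the text, not part of the original analysis.

```python
# A minimal sketch of Eq. 1 with the adopted calibration log(eta_FUV) = 28.14
# (Kroupa IMF); L_nu is assumed to be the rest-frame FUV luminosity density in
# erg s^-1 Hz^-1, and the example numbers are purely illustrative.
LOG_ETA_FUV = 28.14

def sfr_fuv(l_nu, a_fuv=0.0):
    """SFR in Msun/yr from L_nu [erg/s/Hz], optionally corrected by A_FUV mag of attenuation."""
    return l_nu * 10.0 ** (0.4 * a_fuv) / 10.0 ** LOG_ETA_FUV

# e.g. L_FUV = 10**27.6 erg/s/Hz with A_FUV = 1.8 mag gives ~1.5 Msun/yr;
# with no dust correction (a_fuv = 0) the same luminosity gives ~0.3 Msun/yr.
```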
Dust attenuation estimates
The fraction of UV flux emitted by new born stars and absorbed by dust in the galaxy is reradiated at infrared (IR) wavelengths (Buat 1992). Assuming a standard extinction law and that the dust is heated by intrinsically young stellar populations, the FUV attenuation can be approximated by Eq. 2, where IRX is the ratio of the IR to UV luminosities (the so-called infrared excess) and µ corrects for the fraction of IR luminosity heated by older stars and by light bluer than the FUV band (Meurer et al. 1999). Other relations were derived that yield very similar results (e.g. Buat et al. 2005). UV reddening as measured by the slope β of the UV continuum (f_λ ∝ λ^β) or a UV color, correlates with IRX in starburst galaxies, as expected from a foreground screen of dust (Witt et al. 1992; Calzetti et al. 1994). Thus β or UV colors are often used to estimate A_FUV. The IRX/β correlation was recently revisited and corrected by several authors for more 'normal' star-forming galaxies using GALEX data (Seibert et al. 2005, Cortese et al. 2006, Salim et al. 2007, Johnson et al. 2007). Seibert et al. (2005) (hereafter Se05) found that the starburst relation systematically overestimates the FUV attenuation of more quiet galaxy types by 0.58 mag, albeit with a large scatter. They derived the empirical relation of Eq. 3 from a diverse sample of ∼200 galaxies with UV photometry from GALEX and FIR photometry from the Infrared Astronomical Satellite (IRAS), where m_FUV and m_NUV are the apparent magnitudes in the FUV and NUV bands respectively. The 0.1 magnitude offset corrects for a change in calibration between the GALEX photometry used by Se05 (the internal data release IR0.2) and that used in the present paper (IR1.1) (Seibert et al., in preparation). Salim et al. (2007) (hereafter Sa07) derived a yet shallower relation between the attenuation and the UV color of normal blue galaxies using a different technique and the much larger GALEX/SDSS sample we are using here. They obtained the SFR and FUV attenuations, among other physical properties, by fitting the UV and optical photometry to an extensive library of model SEDs for which dust attenuation was computed from the same Charlot & Fall (2000) model used by B04. While B04 obtained their SFRs and attenuations from optical emission lines (the Hα line and the Balmer decrement to first approximation), Sa07's are essentially constrained by the UV fluxes and the UV colors respectively. The agreement is generally good between the two approaches but discrepancies remain, in particular between the attenuations as we'll discuss further below. Sa07 derived the simple prescription of Eq. 4 from their extensive modeling for galaxies with 0(n − r) < 4 and 0(f − n) < 0.9, where the 0 subscript refers to rest-frame colors k-corrected to z = 0. The small (< 5%) fraction of galaxies with 0(n − r) < 4 and 0(f − n) > 0.9 are assigned a constant attenuation of 2.96.
Longer baseline colors such as UV−optical colors carry mixed but separable information about the SFH and IRX. Using a sample of galaxies with UV through IR photometry from GALEX, SDSS and Spitzer, Johnson et al. (2006Johnson et al. ( , 2007 (hereafter J07) showed that given the SFH of a galaxy (they used D n (4000)), IRX could be more accurately inferred from UV−optical colors than from UV colors. Assuming Eq. 2 with µ = 0.6, they derived the following relation for galaxies with D n (4000)< 1.6: We note SF R F U V,corr the SFR derived from the UV luminosity corrected for dust attenuation using one of the above equations. Figure 3 shows SF R F U V,corr against SF R e for the blue galaxies ( 0.1 (n − r) < 4) using Eq. 3 (Se05), 4 (Sa07) and 5 (J07) as indicated. In each panel the dotted, dashed and solid blue lines show the ordinary least-square (OLS) regression of the Y axis on the X axis, the OLS regression of the X axis on the Y axis and the bisector of those 2 lines respectively (Isobe et al. 1990). We choose the bisectors as the 'best-fit' lines, here and in the rest of the paper. The best-fit slopes, variances, correlation coefficients and residual scatters are listed in Table 1 for the 3 attenuation models (first 3 lines). The histograms show the distributions of the SF R F U V,corr /SF R e ratios compared to the distribution of the uncorrected SF R F U V to SF R e ratios. The averages of the distributions are also listed in Table 1. All 3 methods provide a very good average correction with a reduced scatter compared to the uncorrected SF R F U V , especially so for the J07 correction. However a residual trend with SF R e remains in the sense that galaxies with the highest and lowest SFR tend to be under and over-corrected respectively. This indicates that the models do not quite sufficiently scale with the SFR to straighten up the uncorrected correlation in Fig. 2. The trend is minimal for the Se05 correction for which the scatter is largest and most pronounced for the J07 correction for which the scatter is otherwise best reduced. This trend with SF R e is the same as that noted by Sa07 as a trend with mass (their Fig. 8) and by J07 as a trend with 0.1 (n − r) and D n (4000) (their Fig. 12 and 13). Indeed mass and to a lesser extent colors and D n (4000) correlate with SF R e . The reason for it remains unclear but Sa07 concluded that the most likely interpretation in the framework of their modeling was that attenuations were less well constrained by the UV data than by the emission lines at the two ends of the distribution. In the case of J07, the parametric relation between IRX, D n (4000) and 0.1 (n − r) is a good fit to blue galaxies, more so than between IRX and UV color (see J07 for a detailed discussion). Therefore it is perhaps the relation between IRX and A F U V that is not totally adequate. We return to this point in the next section.
Empirical color corrections
Assuming as we do that SFR_e is the current best dust-corrected SFR estimate and that our choice of η_UV is adequate, the UV attenuation (FUV or NUV) can be directly measured as A_UV = −2.5 log(SFR_UV/SFR_e) (6). We now revisit the color dependence of these known attenuations. Figure 4, left panel, shows the SFR_NUV to SFR_e ratios as a function of 0.1(n − r) for the SDSS+NUV sample. The solid lines show the OLS bisector for each D_n(4000) bin as shown in inset. The 3 bins making up the blue sequence (D_n(4000) < 1.6) add up to form a single tight correlation while the 2 bins with D_n(4000) > 1.6 form a scattered cloud. For galaxies with D_n(4000) < 1.6 and 0.1(n − r) < 4, the bisector fit is given by Eq. 7, with a linear correlation coefficient r = 0.77 and rms = 0.70. The SDSS+NUV sample allows us to see the impact of the SFH on the attenuation/color relation. It is bimodal to first order: attenuation is linearly dependent on color for blue sequence galaxies and practically independent of it for red sequence galaxies. Similar correlations are found when using FUV luminosities and/or other UV−optical colors. Using the FUV luminosity and 0.1(f − g) yields Eq. 8. This correlation is tighter than Eq. 7 (r = 0.84, rms = 0.66). On the other hand 0.1(f − n) results in a poorer and more scattered correlation (Eq. 9; r = 0.56, rms = 1.02). The last two correlations are shown in the right panel of Fig. 4. The latter correlation is similar to that proposed by Cortese et al. (2006) (their Fig. 10) using log(L_Hα/L_FUV) as a function of (m_FUV − m_NUV) for a small sample of star-forming galaxies in the COMA cluster. It is consistent with Eq. 3 (Se05) but a much steeper function of UV color than Eq. 4 (Sa07). However all 3 equations as well as Eq. 8 converge for the majority of galaxies around the peak of the color distribution (0.1(n − r) ∼ 2.4 − 2.5) and yield similar average values in good agreement with the measured average attenuation (<A_FUV> ∼ 1.8). Equation 5 (J07) yields a slightly lower average attenuation of 1.6. Discrepancies between these corrections are largest for the reddest and bluest galaxies.
We note SF R F U V,c the FUV based SFRs corrected using one of the functions of color derived in this section. Figure 5 shows SF R F U V,c against SF R e using A F U V,f −n (left panel) and A F U V,f −g (right panel) for the blue population. The correlation is very close to equality in both cases, as expected since the corrections were designed to minimize SF R F U V,c /SF R e , and the scatter is best reduced using 0.1 (f −g), as expected as well from the higher correlation coefficient in Eq. 8. However a small residual trend with SF R e remains in this case, which means that the SFR dependence of the attenuation is not completely accounted for by the color dependence (log(SF R U V /SF R e ) would be better fitted by a linear function of color plus a linear function of log(SF R e )). The trend practically disappears when using 0.1 (f − n) but the correlation is less significant, as in the case of Se05. The parameters of the fits are listed in Table 1 for comparison with the corrections presented in the previous section. There is a small trade-off between the scatter and the trend with SF R e (the lower the rms, the more the slope deviates from unity) except for the empirical 0.1 (f − g) correction for which the combination of trend and scatter is best reduced (a > 0.9 and rms < 0.3). Figure 6 shows the difference between the measured attenuation (Eq. 6) and the 4 parametric estimates (Se05, Sa07, J07 and Eq. 8) as a function of SF R e (∆A F U V = A F U V − A F U V,model ). The red curves are isodensity contours. The dashed green lines mark ∆A F U V = −1, 0 and 1. All 4 methods converge with A F U V around the peak of the SFR distribution (∼ 2M ⊙ yr −1 ) and as noted above provide good average corrections but Eq. 8 minimizes ∆A F U V for the majority of the galaxies as well as the dependence with SF R e . Although the improvement is by no means dramatic, it provides an estimate of the FUV attenuation which best recovers the SFR derived from emission lines at an equivalent or lower 'cost', since at least one optical photometric measurement and the redshift are required for k-corrections in all cases.
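A hedged sketch of how such a color-based correction would be applied in practice is given below; the slope and intercept are placeholders standing in for the fitted coefficients of Eq. 8 (which are not reproduced here), and the function names are illustrative.

```python
# A sketch of applying a linear attenuation-color relation (the scheme of this
# section) followed by Eq. 1; the slope/intercept are placeholders, NOT the
# published coefficients of Eq. 8.
LOG_ETA_FUV = 28.14

def a_fuv_from_color(fg_color, slope, intercept):
    """A_FUV as a linear function of the 0.1(f-g) color, for blue-sequence galaxies."""
    return slope * fg_color + intercept

def sfr_fuv_color_corrected(l_fuv, fg_color, slope, intercept):
    a_fuv = a_fuv_from_color(fg_color, slope, intercept)
    return l_fuv * 10.0 ** (0.4 * a_fuv) / 10.0 ** LOG_ETA_FUV  # Eq. 1 with color-based A_FUV
```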
Discussion
As aperture corrections are an important source of uncertainty in deriving total SFRs from the SDSS fiber spectra, we check that the above correlations are not affected by aperture effects. We define the aperture correction (AC) as the ratio of the total SFR (SF R e ) to the SFR estimated within the fiber (B04) and split the blue sample into 3 bins of increasing aperture correction. Figure 7 shows the SF R F U V to SF R e ratios as a function of 0.1 (f − g) for the 3 bins (shown in inset), shifted by a constant as indicated for clarity. The solid lines show the bisector fits in each bin and the dashed lines show Eq. 8 shifted by the appropriate amount for comparison. No significant difference is seen in the correlation itself as a function of aperture correction, but galaxies with SF R F U V > SF R e (yielding a negative FUV attenuation) have systematically high aperture corrections (and low redshift). It is likely that in these large nearby galaxies, the fiber missed off-centered regions of enhanced star-formation and that SF R e was underestimated.
Another source of uncertainty may lie in the definition of the UV attenuation (Eq. 6). Allowing η_UV to vary, the SFR_UV to SFR_e ratio would no longer be a measure of attenuation alone but a combination of A_UV and η_UV: log(SFR_UV/SFR_e) = −0.4 A_UV + log(η_UV/η̄_UV), where η̄_UV denotes the constant conversion factor assumed in Eq. 1. B04 showed that the conversion factor η_Hα from Hα luminosity to SFR (η_Hα = L_Hα/SFR_e) decreases with mass (and metallicity), spanning nearly 0.4 dex in the range of mass (and metallicity) spanned by the SDSS sample. This is interpreted as massive/metal rich galaxies producing less Hα than low mass, metal poor galaxies for the same SFR. However η_UV is much less sensitive to metallicity than Hα, consequently SFR_UV/SFR_e is expected to be a nearly direct measure of UV attenuation except perhaps for galaxies with very low UV attenuation and metallicity. In any case the ∼1 dex variation in η_UV required to straighten up the trend seen in Fig. 6, most notably for the J07 dust correction, is definitely ruled out. Note that Eq. 8 can be used to recover SFR_e from the observed UV luminosity whatever the interpretation of the SFR_UV/SFR_e ratio provided η_UV is assumed in Eq. 1. Figure 8 shows the relation between A_NUV and A_FUV as defined in Eq. 6. The solid blue line is the best linear fit; the dashed green line at A_NUV = 0.75 A_FUV is the ratio expected from a λ^−0.7 absorption curve (Charlot & Fall 2000). The distribution of the A_NUV to A_FUV ratios is shown in inset as a solid histogram, and that of the A_NUV,n−r to A_FUV,f−g ratios as a dotted histogram. The median ratio for the measured quantities is 0.74 (0.75 for the fits), in excellent agreement with the prediction. The GALEX data are therefore consistent with the attenuated UV fluxes predicted by the dust model used to derive SFR_e from independent emission line measurements.
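As a quick consistency check, the expected ratio for a τ_λ ∝ λ^−0.7 attenuation curve evaluated at the GALEX effective wavelengths quoted in Section 2 is

A_NUV / A_FUV = (λ_NUV / λ_FUV)^{−0.7} = (2271 Å / 1528 Å)^{−0.7} ≈ 0.76,

close to the measured median ratio of 0.74.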
As noted in the previous section, the relation between IRX and A_FUV may be responsible for the discrepancy between A_FUV and A^J07_FUV (Eq. 5). Figure 9 shows the relation between A_FUV (Eq. 6) and IRX derived from D_n(4000) and ^0.1(n−r) (Eq. 2 and 5). The solid line is the relation used by J07 to relate the two quantities (Eq. 2 with μ = 0.6). While it is appropriate for the average galaxy, it becomes discrepant at the blue and red ends (or for the least and most massive/star-forming galaxies). There is also a large range of attenuations for a given IRX. A color- (or SFR-) dependent μ parameter, accounting for the fraction of IR flux due to newborn stars (as opposed to pre-existing older stars), would remedy some of the discrepancy. A blue galaxy would have little dust attenuation, little FIR emission for a given UV flux (a small IRX), and its small FIR emission would have little to do with the newborn stars (small μ). At the other end, a red (star-forming) galaxy would be a dusty galaxy with a high SFR, and its large FIR emission (large IRX) would be entirely due to the heating of dust by its new stars (large μ). A μ parameter as large as 2 (meaning that the obscured UV emission would have to be twice the observed FIR emission) is actually necessary to reach the upper envelope of the IRX/A_FUV distribution. This might be accounted for by the uncertainty in the IR flux estimate, or by a large fraction of obscured UV photons being reprocessed at wavelengths other than the IR. J07 modeled the contribution of newborn stars to IRX as a function of galaxy color and arrived at the opposite conclusion: red galaxies have a higher contribution of older stars to their IR emission than blue galaxies and should therefore require a lower μ, making the trend between the corrected UV luminosity and SFR_e even more pronounced. The validity of SFR_e at low and high mass may of course be questioned, but it seems that the estimate of the IR flux and the interpretation of IRX in terms of FUV attenuation currently involve more uncertainties than the interpretation of the optical data.
Limitations: the oldest and youngest galaxies
The color corrections do not apply to red sequence galaxies in the local sample. Dust attenuation estimates based on correlations between IRX and colors do not apply well to early-type galaxies either, both the UV and IR SEDs of such galaxies being much less directly related to the emission of young stars than those of late-type galaxies (J07). This is not a drastic limitation of the various methods, since red sequence galaxies contribute little to the overall SFR in the local Universe, and even less as redshift increases. But we would like to know whether the above correlations between attenuation and UV-optical color apply to star-forming galaxies at higher redshift, where rest-frame UV fluxes are generally corrected using the IRX/β correlation of local starburst galaxies (Meurer et al. 1999).
Although no spectroscopic data exist at high redshift that allow the same emission line fitting technique as for the SDSS spectra, we can use the unique sample of Erb et al. (2006b), who were able to acquire Hα flux measurements for 114 UV-selected galaxies at z ∼ 2, for which optical and NIR photometry is also available. They defined SFR_FUV as in Eq. 1 and SFR_Hα = L_Hα/η_Hα, using the η_FUV and η_Hα values of Kennicutt (1998) converted to a Chabrier (2003) IMF. A factor of two aperture correction was also applied to the Hα luminosities. Dust corrections were derived from the best-fit values of E(B−V) obtained from fitting SED models to the multiband photometry and using the extinction law of Calzetti et al. (2000) (see Erb et al. (2006b) for details). The authors assumed that the color excess of the nebular emission lines was equal to that of the UV continuum, rather than 2.5 times larger as proposed by Calzetti et al. (2000), as this yielded the best agreement between the UV and Hα SFRs after dust correction. The corrected SFR_Hα are at most 3 times the uncorrected values and less than twice for most of the sample.
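For concreteness, the sketch below converts an observed Hα luminosity to a SFR in the spirit of this procedure. The Kennicutt (1998) calibration constant and the Salpeter-to-Chabrier conversion factor are commonly quoted approximate values, not necessarily those adopted by Erb et al. (2006b), and the factor-of-two aperture correction follows the description above.

# Approximate, commonly quoted values; check against the calibrations actually
# used in the text (Kennicutt 1998; Chabrier 2003) before relying on them.
KENNICUTT_HA = 7.9e-42       # SFR [Msun/yr] per L(Halpha) [erg/s], Salpeter IMF
SALPETER_TO_CHABRIER = 1.7   # approximate factor by which Salpeter SFRs exceed Chabrier SFRs

def sfr_halpha(l_halpha_obs, aperture_factor=2.0):
    """Observed Halpha luminosity [erg/s] -> SFR [Msun/yr], Chabrier IMF.

    aperture_factor : multiplicative aperture correction applied to the
                      measured Halpha flux (a factor of two in Erb et al. 2006b).
    """
    l_total = l_halpha_obs * aperture_factor
    return KENNICUTT_HA * l_total / SALPETER_TO_CHABRIER

# Example: a z~2 galaxy with an observed (slit) luminosity of 1e42 erg/s
print(f"SFR ~ {sfr_halpha(1e42):.1f} Msun/yr")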
As high redshift galaxies have lower metallicities than local galaxies on average, a higher value of η_Hα might be justified for this sample. However, as both the range of masses and the range of metallicities spanned by the z = 2 galaxies remain within those of the SDSS sample despite evolution in the mass/metallicity relation (Erb et al. 2006a), η_Hα is not expected to be larger than the value predicted for the least massive/most metal-poor galaxies in the local sample, i.e. a factor of 1.5 higher than the Kennicutt value used by Erb et al. (B04, their Fig. 7). We take the uncorrected values of SFR_Hα divided by 1.5 as lower limits to the SFR and use the dust-corrected values as upper limits. We computed the absolute magnitudes of the galaxies in the GALEX and SDSS bands from their optical and NIR photometry using kcorrect v4_1 (Blanton & Roweis 2007). Figure 10 shows SFR_NUV/SFR_e as a function of ^0.1(n−r), assuming SFR_Hα = SFR_e for the high redshift sample and η_NUV = η_FUV for all galaxies as before. The pink circles are the uncorrected values of SFR_Hα. The upper and lower limits are defined as above. The local blue population is plotted in blue with the correlation derived in the previous section (Eq. 7). It is clear that the majority of LBGs, which have extremely blue colors, do not follow the same correlation as the local galaxies, but the reddest ones (^0.1(n−r) > 1.5) may still be consistent with the local correlation, or show a similar trend with color with only a small blue shift. The bluest LBGs cluster blueward of the local correlation in a region of low UV attenuation.
Also overplotted is a sample of 97 compact UV Luminous Galaxies (UVLGs) drawn from the present sample and from a cross-match between the SDSS and the larger, shallower GALEX All Sky Imaging Survey (Hoopes et al. 2006). SFR_e from B04 are available for all of them. UVLGs (Hoopes et al. 2006) are locally rare galaxies defined as having FUV luminosities typical of LBGs: L_FUV > 2 × 10^10 L_⊙, corresponding to ∼0.3 L_⋆ at z ∼ 3 (Steidel et al. 1999) but to ∼5 L_⋆ at z ∼ 0. While low surface brightness UVLGs are simply extra-large versions of normal spiral galaxies, high surface brightness UVLGs with I_FUV > 10^8 L_⊙ kpc^−2 were found to consist primarily of compact starburst systems. Among these, the "supercompact" UVLGs with I_FUV > 10^9 L_⊙ kpc^−2 bear a remarkable resemblance to high redshift LBGs over a wide range of physical properties (mass, SFR, metallicity). They are thought to be their closest analogs in the local Universe (Hoopes et al. 2006). Compact and supercompact UVLGs are represented in Fig. 10 with open squares and filled triangles respectively. Both categories occupy the same region of the plot as the high redshift sample. The supercompact UVLGs are unusually blue (^0.1(n−r) < 1.5) among the local galaxies and, like the bluest LBGs, lie furthest away from the bulk of the local population. The extreme blue colors are an indication of strong recent star formation (as well as low attenuation). As noted by Kennicutt (1998), the calibration η_UV in Eq. 1 might be significantly higher for strong starburst galaxies such as these. A higher η_UV would lower their SFR_UV to SFR_e ratio proportionally and further separate them from the main population. Therefore very blue galaxies with very recent star formation, both locally and at high redshift, form a distinct cluster of their own blueward of the attenuation/color relation of the blue sequence. This locus adds to the blue shift of the attenuation/color relation with D_n(4000) seen in Fig. 4 (left panel), as those galaxies would have D_n(4000) indices lower than our lowest bin. The reddest of the compact UVLGs and of the LBGs cover a wider range of attenuations, which seem to correlate with colors in the same way as for local galaxies, only shifted to bluer colors. D_n(4000) estimates are not available for the high redshift sample and only available for a fraction of the UVLGs, but we can use the inverse of the specific SFR, T_SFR = M_⋆/SFR, as a common SFR time-scale for the local and high redshift samples. Stellar masses were estimated for most of the z ∼ 2 sample (Erb et al. 2006c). Masses for the GALEX/SDSS sample are from Kauffmann et al. (2003a). A third of the UVLGs have mass estimates from this catalog; for the remaining two-thirds we use the values derived by Hoopes et al. (2006) via SED fitting following Salim et al. (2005). The agreement between the mass estimates of Kauffmann et al. (2003a) and Salim et al. (2005) for the GALEX/SDSS sample is good, with an rms of 0.12. As above, we use SFR = SFR_e for the local sample including the UVLGs and SFR_Hα for the z ∼ 2 galaxies. Figure 11 shows T_SFR as a function of ^0.1(n−r). The SDSS galaxies are color coded in bins of D_n(4000) as shown in the inset of Fig. 4 (this highlights the relation between D_n(4000) and T_SFR). The LBGs are represented by filled circles, the compact UVLGs by open squares and the supercompact UVLGs by filled triangles. The two horizontal lines correspond to the age of the Universe at z = 0 and 2 (13.8 and 3 Gyr respectively).
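The comparison of T_SFR with the age of the Universe used in the next paragraph can be sketched as follows. The cosmology (a flat ΛCDM model with H0 = 70 km/s/Mpc and Ωm = 0.3) is an assumption chosen only to give ages close to the 13.8 and ∼3 Gyr quoted for z = 0 and z = 2; the exact parameters used in the text are not specified.

import numpy as np
from astropy.cosmology import FlatLambdaCDM

# Assumed cosmology; replace with the parameters actually adopted if known.
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def sfr_timescale(stellar_mass, sfr):
    """T_SFR = M*/SFR in Gyr (inverse specific SFR); inputs in Msun and Msun/yr."""
    return stellar_mass / sfr / 1e9

def had_larger_past_sfr(stellar_mass, sfr, z):
    """True if T_SFR exceeds the age of the Universe at redshift z,
    i.e. the galaxy must have formed stars faster in the past."""
    return sfr_timescale(stellar_mass, sfr) > cosmo.age(z).to_value("Gyr")

print(had_larger_past_sfr(1e10, 30.0, 2.0))   # a typical LBG-like case
print(had_larger_past_sfr(1e11, 1.0, 0.0))    # a quiescent local galaxy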
Roughly, galaxies with T_SFR larger than the age of the Universe at their redshift (e.g. nearly all local galaxies with D_n(4000) > 1.6 and a few LBGs) have had larger SFRs in the past. Conversely, galaxies with T_SFR shorter than the age of the Universe at their redshift (most LBGs and nearly all the compact UVLGs) must be forming stars more intensely than in the past. Figure 11 is an analog of Fig. 1 with the addition of an 'ultrablue' sequence at ^0.1(n−r) < 1.5 and T_SFR < 3 Gyr consisting of young compact starburst galaxies. The A_UV/color correlations derived in the previous section hold for galaxies with rather uneventful SFHs (3 Gyr ≲ T_SFR ≲ 15 Gyr). They are a majority today but may not have been when the Universe was only ∼3 Gyr old, although many LBGs look very much like local blue sequence galaxies. Furthermore, a dominant fraction of the stellar mass at z > 2 is found in redder galaxies which are largely absent from UV surveys (van Dokkum et al. 2006; Marchesini et al. 2007). Kriek et al. (2006b) showed that almost half of their sample of NIR-selected galaxies at z = 2.0−2.7 have low SFRs and T_SFR > 10 Gyr (from their Fig. 2). These galaxies are redder than LBGs and would lie in the same part of the plot as the local population.
Although we cannot draw firm conclusions about the use of the local A_UV/color correlation at high redshift, we may expect it to hold for most galaxies to at least intermediate redshifts, or to be slightly shifted to the left, as only 'mild evolution' of the blue sequence was reported between z = 0 and 1 (no change in the number density and colors only ∼0.3 mag bluer; Blanton 2006). The luminosity density from UVLGs was found to undergo dramatic evolution between z = 0 and 1, reaching >25% of the total FUV luminosity density at z = 1, but this includes all UVLGs, i.e. mostly very large but otherwise ordinary spiral galaxies. Compact and supercompact UVLGs are very rare in the local Universe. Although their evolution with redshift is as yet unknown, they are unlikely to dominate the galaxy population at z = 1.
Conclusions
Using a large sample of galaxies from the Sloan Digital Sky Survey spectroscopic catalog with measured SFRs and UV photometry from the GALEX Medium Imaging Survey, we derived empirical linear correlations between the UV attenuation, measured by the SFR to observed UV luminosity ratio, and the UV-optical colors of blue sequence galaxies (^0.1(n−r) < 4). The SFRs were derived from a detailed modelling of the emission lines in the optical spectra and were considered best estimates. The attenuation/color relation provides a simple prescription to correct UV measurements for dust attenuation in the absence of SDSS-quality data. We found or confirmed that other UV attenuation estimates (Se05; Salim et al. 2007; Johnson et al. 2007) over (under) correct the UV luminosity of galaxies with the lowest (highest) emission line SFRs or mass. Using a sample of LBGs (Erb et al. 2006b) at z ∼ 2 with measured Hα emission, as well as a sample of local compact UV luminous galaxies with LBG-like properties (Hoopes et al. 2006), we found that extremely blue galaxies at both low and high redshift escape the attenuation/color relation of the blue sequence to form a low-attenuation sequence of their own. As such galaxies are very rare locally and the blue sequence does not evolve much from z = 0 to 1 (Blanton 2006; Willmer et al. 2006), we expect our attenuation correction to remain adequate for the majority of galaxies to at least intermediate redshifts.
[Figure caption fragments]
... in units of M_⊙ yr^−1. The dashed green line shows SFR equality. The blue and red dots represent blue and red sequence galaxies, defined as ^0.1(n−r) < 4 and ^0.1(n−r) > 4 respectively. The blue and red histograms in each panel are the log(SFR_UV/SFR_e) distributions for the two populations. As expected from uncorrected luminosities, SFR_UV underestimates SFR_e, especially so at high SFR.
... least-squares (OLS) regression of the Y axis on the X axis, the OLS regression of the X axis on the Y axis, and the bisector of those two lines, respectively. The histograms show the distributions of the SFR_FUV,corr/SFR_e logarithmic ratios: light blue, red and orange for Se05, Sa07 and J07 respectively. The dotted histogram is the distribution of the uncorrected SFR_FUV/SFR_e ratios. All three models provide a very good average correction but tend to over (under) correct galaxies with the lowest (highest) SFRs.
... Fig. 4, between A_FUV and ^0.1(f−n) (Eq. 9; left panel) and between A_FUV and ^0.1(f−g) (Eq. 8; right panel). The histograms in inset show the distributions of log(SFR_FUV/SFR_e) with and without corrections (solid and dotted lines respectively). SFR_e is best recovered using the A_FUV/^0.1(f−g) correlation.
Fig. 6.- ΔA_FUV = A_FUV − A_FUV,model as a function of SFR_e for the various models described in the text, as indicated (Se05, Sa07, J07 and Eq. 8). The dashed lines mark ΔA_FUV = −1, 0 and +1. The red curves are isodensity contours. All four methods agree with A_FUV around the peak of the SFR distribution (∼2 M_⊙ yr^−1) and provide good average corrections, but Eq. 8 (the A_FUV/^0.1(f−g) correlation) minimizes ΔA_FUV and its dependence on SFR_e for the majority of the galaxies.
Fig. 7.- The SFR_FUV to SFR_e ratio as a function of ^0.1(f−g) in 3 bins of aperture correction (AC) as shown in inset. The AC is defined as the ratio of the total SFR (SFR_e) to the SFR within the SDSS fiber. For clarity the data are shifted by 0, +1 and +2 for the first, second and third bin respectively. The solid lines are the linear fits in each bin while the dashed lines show the correlation for the full sample (Eq. 8) shifted by the appropriate amount. Aperture effects do not bias the correlation.
Fig. 8.- The relation between A_NUV and A_FUV as defined by the SFR_UV/SFR_e ratios (Eq. 6). The solid blue line is the fitted correlation; the dashed green line at A_NUV = 0.75 A_FUV is the ratio expected from a λ^−0.7 absorption curve (Charlot & Fall 2000). The distribution of the A_NUV to A_FUV ratios is shown in inset (solid histogram) with that of the fitted ratios (A_NUV,n−r/A_FUV,f−g; dotted histogram). The median ratio for the measured quantities is 0.74 (0.75 for the fits), in excellent agreement with the prediction.
Fig. 10.- The SFR_NUV to SFR_e ratio as a function of ^0.1(n−r) for the local blue population (blue dots), the sample of local compact and "supercompact" UV Luminous Galaxies (UVLGs) of Hoopes et al. (2006) (open squares and filled yellow triangles respectively), and the z ∼ 2 Lyman break galaxy (LBG) sample of Erb et al. (2006b) assuming SFR_Hα = SFR_e (pink filled circles; see text for details). The straight line is the correlation fitted to the local blue population (Eq. 7).
Fig. 11.- T_SFR as a function of ^0.1(n−r), with the same symbols as in Fig. 10. The two horizontal dashed lines mark the age of the Universe at z = 2 and z = 0 (3 and 13.8 Gyr respectively). This plot is similar to Fig. 1 with the addition of an ultrablue sequence at ^0.1(n−r) < 1.5 and T_SFR < 3 Gyr, consisting of young compact starburst galaxies for which the local UV attenuation/UV-optical color relations derived in this paper do not apply. | 2007-07-20T23:21:15.000Z | 2007-07-20T00:00:00.000 | {
"year": 2007,
"sha1": "b13b3dc648b6a09b87e07baf7c4c74b096f3bf15",
"oa_license": null,
"oa_url": "https://authors.library.caltech.edu/16865/1/TREapjss07.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "758bb13395e95ca29bd2386d7d214af0d53f3da8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
16394061 | pes2o/s2orc | v3-fos-license | PDB-UF: database of predicted enzymatic functions for unannotated protein structures from structural genomics
Background The number of protein structures from structural genomics centers is increasing dramatically in the Protein Data Bank (PDB). Many of these structures are functionally unannotated because they have no sequence similarity to proteins of known function. However, it is possible to successfully infer function using only structural similarity. Results Here we present the PDB-UF database, a web-accessible collection of predictions of enzymatic properties based on the structure-function relationship. The assignments were conducted for three-dimensional protein structures of unknown function that come from structural genomics initiatives. We show that 4 hypothetical proteins (with PDB accession codes: 1VH0, 1NS5, 1O6D, and 1TO0), for which standard BLAST tools such as PSI-BLAST or RPS-BLAST failed to assign any function, are probably methyltransferase enzymes. Conclusion We suggest that structure-based prediction of an EC number should be conducted with a different similarity score cutoff for different protein folds. Moreover, performing the annotation using two different algorithms can reduce the rate of false positive assignments. We believe that the presented web-based repository will help to decrease the number of protein structures that have functions marked as "unknown" in the PDB file. Availability and
Background
Over 30 structural genomics centers have been established worldwide with the common goal of large-scale, high-throughput structure determination using X-ray crystallography and NMR [1]. One challenge is to predict the function of the proteins from their three-dimensional structures, primarily those which have no detectable sequence similarity to any protein of known function [2]. Currently, the total size of the Protein Data Bank (PDB) [3] is more than 32,000 entries, which contain over 29,000 different (63,000 redundant) protein chains. Many of the PDB chains have been mapped to Enzyme Commission (EC) numbers via the Swiss-Prot database [4]. The mapping information has been presented as the PDBSprotEC database [5], which is available on the Internet. SCOPEC [6] is another web-based repository, similar to the PDBSprotEC collection. The SCOPEC set contains a description of the protein catalytic domains with assigned enzyme function. In both web-accessible databases, prediction of protein function has been conducted using sequence similarity. There is no doubt that the PDBSprotEC and SCOPEC databases are full of very useful EC number assignments. However, neither of these services contains predictions for proteins that have no sequence similarity to known enzymes. Moreover, neither PDBSprotEC nor SCOPEC includes any data for recently deposited PDB structures: the "youngest" protein annotated in PDBSprotEC or SCOPEC was released by the PDB in August 2004 or February 2003, respectively.
Figure 1. PSI-BLAST score of the most similar protein with the same enzyme function versus PSI-BLAST score of the most similar protein with a different enzyme function at the 1st (upper left chart), 2nd (upper right chart), 3rd (lower left chart), and 4th EC level (lower right chart). Calculation was conducted for a non-redundant set of 3,135 chain sequences (amino acid identity < 90%) of known structure and enzyme function. Each PSI-BLAST score was taken after the third iteration, using 10,278 non-redundant sequence chains (including the 3,135) from the Protein Data Bank to build a sequence profile. In each of the charts there are four clusters of points (A, B, C, and D) separated by the horizontal and vertical line. The A and C groups correspond to sequences that are not similar to any enzyme with a different EC number. Two other clusters (B and D) contain proteins from sequence superfamilies that have more than one function. The last two groups (E and F, not shown in the charts) include proteins of orphan function in this dataset. The F group contains sequences which are significantly similar to other proteins, while the E group corresponds to singleton sequences.
Therefore, we decided to use the structure-function relationship [7][8][9] for automatic assignment of EC numbers to 499 protein structures that come from the structural genomics centers and whose function is marked as "unknown" in the PDB file. All assignments are combined into a web-accessible database, which will be updated as soon as new structures from structural genomics projects are released. Because most of these PDB entries are still not published, we believe that our repository will help to reduce the number of proteins that have functions marked as "unknown" in the PDB file.
Sequence-function relationship
Before predicting enzyme function based on structural relationships, we checked whether it is possible to assign an EC number to a protein using only sequence similarity information. George et al. found that even for homologues detected by a third-iteration PSI-BLAST profile there is a 50:50 chance of assigning a fairly specific three-digit EC number [6]. This appears to be in contrast with many reports suggesting that it is very difficult to successfully infer function below 40% sequence identity [10,11]. Therefore, we conducted an experiment to investigate both claims.
Figure 2. FSSP (on the left) and 3D-Fun (on the right) Z-score of the most similar protein with the same enzyme function at the 1st EC level versus Z-score of the most similar protein with a different enzyme function at the 1st EC level. Calculation was conducted for 3,135 sequentially non-redundant structures of known function. Corresponding histograms are shown below the charts. If FSSP Z-score = 13 and 3D-Fun Z-score = 5 were used as cutoff values we would obtain sensitivity of 79% and 83% with specificity at 74% and 79%, respectively.
Sequence chains from the Protein Data Bank were clustered by similarity using 90% amino acid identity as a cutoff value. We obtained 3,135 groups containing one or more proteins with known enzyme function (out of a total number of clusters > 10,000). Next, we calculated a PSI-BLAST alignment score between each pair of the 3,135 representative sequences. 565 (18%) of the proteins were classified into superfamilies that contain at least two enzymes whose EC numbers differ at the first EC level (upper left chart in Figure 1). On the other hand, 781 (25%) of the sequences had significant similarity only to enzymes with the same EC number at all EC levels, and were not similar to any others (lower right chart in Figure 1). All these results suggest that there is no general cutoff value of sequence similarity which could be used to assign function to a query. Probably each of the known enzymes should have its own cutoff for function assignments. Here we show that a similar situation is observed when the EC number is predicted using information about structure-function relations.
Construction and content
3D-Hit algorithm
Two different strategies were applied to annotate the proteins with EC numbers, namely 3D-Hit and 3D-Fun. The first method simply scans, using the 3D-Hit program [12], a sequentially non-redundant database of structures that are characterized by four cutoff values. Each value is defined by the highest known score of structural similarity to any protein with a different enzyme function at the corresponding or lower EC level. In the 3D-Hit strategy, the EC number of the protein with the strongest structural similarity is completely (or partially) assigned to the query if the similarity score is greater than all (or any) of the cutoff values. As an example, let us consider a query protein which has a 3D-Hit score of 150 to an enzyme with the EC number 1.2.3.4, with cutoff values of 100, 120, 180, and 200, respectively. This structure will obtain an EC number assignment of 1.2.?.?.
Figure 3. FSSP (on the left) and 3D-Fun (on the right) Z-score of the most similar protein with the same enzyme function at the 1st and 2nd EC level versus Z-score of the most similar protein with a different enzyme function at the 1st or 2nd EC level. Calculation was conducted for 3,135 sequentially non-redundant structures of known function. Corresponding histograms are shown below the charts. If FSSP Z-score = 15 and 3D-Fun Z-score = 7 were used as cutoff values we would obtain sensitivity of 73% and 79% with specificity at 68% and 70%, respectively.
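A compact sketch of this assignment rule, reproducing the worked example above, could look as follows (the cutoff values and scores are those of the example, not real 3D-Hit output):

def assign_ec_3dhit(query_score, ec_of_best_hit, cutoffs):
    """Partial EC assignment in the spirit of the 3D-Hit strategy described above.

    query_score    : structural similarity score of the query to its best hit
    ec_of_best_hit : EC number of that hit, e.g. "1.2.3.4"
    cutoffs        : per-level cutoffs, i.e. the highest known score to any protein
                     with a different function at that (or a lower) EC level
    """
    digits = ec_of_best_hit.split(".")
    assigned = []
    for digit, cutoff in zip(digits, cutoffs):
        if query_score > cutoff:
            assigned.append(digit)
        else:
            break
    assigned += ["?"] * (len(digits) - len(assigned))
    return ".".join(assigned)

# The worked example from the text: score 150 against cutoffs (100, 120, 180, 200)
print(assign_ec_3dhit(150, "1.2.3.4", (100, 120, 180, 200)))   # -> "1.2.?.?"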
3D-Fun algorithm
All structural similarity scores are used for annotation in the 3D-Fun strategy. First, the query structure and all sequentially non-redundant proteins are hierarchically clustered (grouped) by structural similarity using the complete-link algorithm [13,14]. Next, the EC number is completely (or partially) assigned to each group in each clustering iteration if all of the enzymes in the group have the same function at all (or any) of the EC levels; otherwise the EC number is assigned as unknown. As an example, let us consider a cluster that contains 4 structures: the query protein and 3 enzymes with EC numbers 1.2.3.4, 1.2.3.6, and 1.2.4.1. This cluster will obtain an EC number assignment of 1.2.?.?. For the final prediction, the enzymatic function of the smallest cluster which contains the query structure is used. In contrast to the 3D-Hit strategy, the 3D-Fun algorithm takes into account the enzymatic function of all structures that are more similar to the query than to all other proteins of the whole set.
Figure 4. FSSP (on the left) and 3D-Fun (on the right) Z-score of the most similar protein with the same enzyme function at the 1st, 2nd and 3rd EC level versus Z-score of the most similar protein with a different enzyme function at the 1st, 2nd or 3rd EC level. Calculation was conducted for 3,135 sequentially non-redundant structures of known function. Corresponding histograms are shown below the charts. If FSSP Z-score = 17 and 3D-Fun Z-score = 8 were used as cutoff values we would obtain sensitivity of 70% and 78% with specificity at 64% and 66%, respectively.
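The consensus step applied to each cluster can be sketched as follows; the function only looks at the EC numbers of the annotated members of a cluster and returns the deepest EC prefix they share, as in the worked example above:

def cluster_ec(ec_numbers):
    """Assign an EC number to a cluster: keep the EC digits shared by every
    member and mark the remaining levels as unknown ('?')."""
    split = [ec.split(".") for ec in ec_numbers]
    assigned = []
    for level in range(4):
        digits = {ec[level] for ec in split}
        if len(digits) == 1:
            assigned.append(digits.pop())
        else:
            break
    assigned += ["?"] * (4 - len(assigned))
    return ".".join(assigned)

# The worked example from the text: a cluster containing three annotated enzymes
print(cluster_ec(["1.2.3.4", "1.2.3.6", "1.2.4.1"]))   # -> "1.2.?.?"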
Final assignments
We used both algorithms presented above to infer EC numbers for the 499 currently available proteins of unknown function from structural genomics.
In order to avoid over-annotation due to partial EC numbers, we followed the recommendation of Green and Karp [15]. If the 3D-Hit and 3D-Fun methods were inconsistent in predicting enzyme function at any EC level, this was indicated with a '?' symbol in the corresponding position (e.g. 2.3.4.?). If the assignments were fully consistent, we indicated this with an 'n' in the fourth EC level (e.g. 2.3.4.n), which means that the exact activity of this enzyme was predicted but a sequence number has not yet been assigned by the Nomenclature Committee of the International Union of Biochemistry and Molecular Biology (NC-IUBMB).
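One possible reading of this combination rule is sketched below; whether a fully consistent fourth-level prediction should keep its digit or be replaced by 'n' is an interpretation, so the function should be adjusted if the intended behaviour differs:

def combine_predictions(ec_3dhit, ec_3dfun):
    """Combine the 3D-Hit and 3D-Fun predictions: digits are kept as long as both
    methods agree, a disagreement turns the remaining levels into '?', and a
    prediction consistent down to the third level gets 'n' as the fourth digit."""
    combined = []
    for x, y in zip(ec_3dhit.split("."), ec_3dfun.split(".")):
        if x == y and x != "?":
            combined.append(x)
        else:
            break
    if len(combined) >= 3:
        combined = combined[:3] + ["n"]
    else:
        combined += ["?"] * (4 - len(combined))
    return ".".join(combined)

print(combine_predictions("2.3.4.1", "2.3.4.1"))   # -> "2.3.4.n"
print(combine_predictions("2.3.4.1", "2.4.1.1"))   # -> "2.?.?.?"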
Figure 5. FSSP (on the left) and 3D-Fun (on the right) Z-score of the most similar protein with the same enzyme function at all EC levels versus Z-score of the most similar protein with a different enzyme function at any EC level. Calculation was conducted for 3,135 sequentially non-redundant structures of known function. Corresponding histograms are shown below the charts. If FSSP Z-score = 23 and 3D-Fun Z-score = 9 were used as cutoff values we would obtain sensitivity of 55% and 60% with specificity at 53% and 55%, respectively.
Structure-function relationship
In Figures 2, 3, 4 and 5 we present a detailed comparison of the quality of EC number prediction based on the 3D-Fun Z-score and the FSSP Z-score [16]. The experiment was performed with the 3,135 sequentially non-redundant structures of known enzyme function that were used in the sequence-function test. The figures show that we would obtain better sensitivity (by 4% to 8%) with better specificity (by 2% to 5%) at all EC levels if we used the 3D-Fun Z-score instead of the FSSP Z-score as a cutoff value for assigning an EC number.
Figure 6. ROC curves for the 1st EC level (upper left chart); 1st and 2nd EC level (upper right chart); 1st, 2nd and 3rd EC level (lower left chart); and for all EC levels (lower right chart). Calculation was conducted for 3,135 sequentially non-redundant structures of known function. A random ROC curve (magenta colored) is not a diagonal line (as usually presented in ROC plots) because assignment of enzyme function is more complicated than a bimodal classification problem. Clearly, the probability of assignment of an incorrect EC number is bigger (for the 1st EC level) or much bigger (for all EC levels) than that of a correct one.
This claim is also supported by the four ROC curves presented in Figure 6, which portray the EC number assignment accuracy of the FSSP, 3D-Hit and 3D-Fun algorithms. The most important conclusion of the presented comparisons is that there is no single, good FSSP or 3D-Fun Z-score cutoff which could be used for assignment of enzyme functions without human intervention. Of course, we might have used FSSP Z-score = 50 or 3D-Fun Z-score = 17 as a cutoff and theoretically obtained a false positive rate only slightly greater than zero up to the third EC level. However, to our knowledge, there is no structure of unknown function from structural genomics that has similarity to any protein of known structure above 50 (FSSP Z-score) or 17 (3D-Fun Z-score). Therefore, we suggest using different cutoffs for different protein folds when similar experiments are carried out. In our case, the cutoff values are defined by the most similar structure with a different function.
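The sensitivity and specificity values quoted in the figure captions can be reproduced, under a simplified reading of the procedure, from the per-protein best same-function and best different-function scores; the sketch below uses mock scores and our own interpretation of how a single cutoff translates into true and false positives:

import numpy as np

def sensitivity_specificity(same_score, diff_score, cutoff):
    """Simplified reading of the cutoff analysis in Figures 2-5: for each protein
    we know the best score to a protein with the same function (same_score) and
    the best score to a protein with a different function (diff_score).
    Sensitivity = fraction whose same-function neighbour passes the cutoff;
    specificity = fraction whose different-function neighbour does not."""
    same_score = np.asarray(same_score)
    diff_score = np.asarray(diff_score)
    return float(np.mean(same_score > cutoff)), float(np.mean(diff_score <= cutoff))

# Mock scores only to illustrate the calculation
rng = np.random.default_rng(1)
same = rng.normal(20, 6, 3135)
diff = rng.normal(12, 6, 3135)
print(sensitivity_specificity(same, diff, cutoff=17))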
Figure 7. Distribution of consistent (blue lines) and inconsistent (red lines) predictions of enzyme function conducted using the 3D-Hit (left charts) and 3D-Fun (right charts) algorithms. The assignments were performed at the 1st (upper charts), 2nd (middle charts), and 3rd EC level (lower charts) for 499 proteins of unknown function from the structural genomics centers. The marked cutoff values (3D-Hit = 89 and 3D-Fun = 3.1) correspond to a 50:50 probability of assigning a consistent EC number.
Meta-strategy
Although the 3D-Hit and 3D-Fun algorithms used fold-specific cutoffs of the similarity score, they gave conflicting predictions for some of the 499 proteins selected from structural genomics. For example, the EC number was correctly assigned to the 1RVK and 1K77 structures by only one program, 3D-Hit or 3D-Fun, respectively. Figure 7 shows the distribution of consistent and inconsistent EC number assignments made by both methods. This figure justifies the use of the well-known meta-strategy, which has dramatically increased the specificity of sequence similarity search methods in the past [17]. In the set of 499 structures, we could not find any example of a wrong prediction at the first EC level, with a 3D-Hit score > 89 and a 3D-Fun Z-score > 3.1, that was made incorrectly by both programs in the same way. However, one of our reviewers found one example, namely that 1Y7I [18] is now known to be a methyl salicylate esterase (3.1.1.?) while 4.?.?.? (lyase) was predicted by both algorithms.
PDB-UF accuracy
Structural genomics initiatives tend to target structures that are less typical of the PDB as a whole, and so cutoffs derived from the whole PDB may not be entirely applicable. Therefore, we analyzed 58 structures with predicted EC numbers which were recently published and functionally annotated, since this may give a truer indication of the accuracy. We found only one additional incorrect prediction (besides the one described above): 1VGY [19] had been characterized as a succinyl-diaminopimelate desuccinylase (3.5.1.?) while a metallocarboxypeptidase function (3.4.17.n) was assigned. All such predictions will be manually corrected. However, as more structures are solved in the Protein Data Bank, the PDB-UF method will become more and more accurate and human intervention will not be required.
Example of PDB-UF record
Four three-dimensional structures of hypothetical proteins from various species (A. aureus, E. coli, T. maritima, and B. subtilis), which came from different structural genomics consortia, were chosen to demonstrate the utility of the algorithm. The EC numbers of these bacterial proteins have not been assigned in the PDBSprotEC and SCOPEC databases. Moreover, standard sequence comparison tools such as PSI-BLAST run against the NCBI non-redundant protein sequence database, or RPS-BLAST applied using the Conserved Domain Database [20], failed to assign any function to them. A 3D-Hit structural search detected a strong similarity to the TrmD methyltransferase (MTase) family, represented by the 1P9P [21] and 1UAJ [22] structures. The 3D-Fun program provided similar results by clustering the query model and all TrmD structures into one group, with Z-scores from 3.64 to 4.22 (depending on the chosen query). Moreover, 3D-Fun found additional similarity to 4 members of the SpoU MTase family. The TrmD and SpoU methyltransferases share a common evolutionary origin and form a single SPOUT (SpoU-TrmD) class [23]. The fold of the SPOUT class is distinct from the consensus MTase fold. All SPOUT proteins contain a deep trefoil knot structure in the catalytic domain and a non-canonical AdoMet/AdoHcy-binding site. A superimposition of the 2 TrmD MTases and the 4 query structures is presented in Figure 8.
Conclusion
The PDB-UF database is a collection of EC numbers assigned to protein structures of unknown function that come from the structural genomics centers.
Figure 8. The backbone superposition of 2 deep trefoil knotted TrmD methyltransferases (PDB codes: 1P9P and 1UAJ) and 4 hypothetical proteins from A. aureus, E. coli, T. maritima and B. subtilis (PDB codes: 1VH0, 1NS5, 1O6D, and 1TO0, respectively). All of the chains are colored from blue (N-termini) to red (C-termini). The S-adenosyl-L-homocysteine (AdoHcy) co-factor in the 1P9P entry is shown in gray. The highest sequence identity among the group of 4 proteins of unknown function is 54%, and the highest identity to the two known methyltransferases is 15%.
Structure-based prediction of the EC number was conducted using different cutoff values for different protein folds. In order to reduce the number of false positives, the annotation was performed using the meta-strategy. The web-based repository will be updated automatically when new protein structures are released.
"year": 2006,
"sha1": "ef1d91187024b614b0b4f0fa9ff66159a2eae048",
"oa_license": "CCBY",
"oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/1471-2105-7-53",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d05fe23eaed678631dd0d6c044d7eeab2b565007",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Biology"
]
} |
108653548 | pes2o/s2orc | v3-fos-license | Reclassification of early stage breast cancer into treatment groups by combining the use of immunohistochemistry and microarray analysis
AUTHORS: Kathleen A. Grant, Ettienne J. Myburgh, Elizabeth Murray, Fredrieka M. Pienaar, Martin Kidd, Colleen A. Wright, Maritha J. Kotze
Immunohistochemistry (IHC) is routinely used to approximate breast cancer intrinsic subtypes, which were initially discovered by microarray analysis. However, IHC assessment of oestrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor-2 (HER2) status is a poor surrogate of molecular subtype. Therefore, MammaPrint/BluePrint (MP/BP) microarray gene expression profiling is increasingly used to stratify breast cancer patients into different treatment groups. In this study, ER/PR status, as reported by standard IHC and single-gene mRNA analysis using TargetPrint, was compared with molecular subtyping to evaluate the combined use of MP/BP in South African breast cancer patients. Pathological information on 74 ER/PR positive, HER2 negative tumours from 73 patients who underwent microarray testing was extracted from a central breast cancer genomics database. The IHC level was standardised by multiplying the intensity score (0-3) by the reported proportion of positively stained nuclei, giving a score of 0-300. Comparison between mRNA levels and IHC determination of ER/PR status demonstrated a significant correlation (p<0.001) for both receptors (ER: 0.34 and PR: 0.54). Concordance was shown in 61 (82%) cases and discordance in 13 (18%) of the 74 tumours tested. Further stratification by MP/BP identified 49 (66.2%) Luminal A, 21 (28.4%) Luminal B and 4 (5.4%) Basal-like tumours. Neither IHC nor TargetPrint could substitute for BP subtyping, which measures the functional integrity of ER and can identify patients with false-positive tumours who are resistant to hormone therapy. These findings support the implementation of a pathology-supported genetic testing approach combining IHC and microarray gene profiling for definitive prognostic and predictive treatment decision-making in patients with early stage breast cancer.
Significance:
• Single-gene genomic oestrogen and progesterone receptor reporting adds limited additional information to the molecular stratification of breast cancer tumours and does not supersede the immunohistochemistry results.
• Neither single-gene genomic mRNA nor immunohistochemistry reporting of oestrogen and progesterone receptor status can replace the combined use of MammaPrint/BluePrint genomic molecular subtyping.
• Reliable distinction between Luminal A and B type tumours is not possible using immunohistochemistry or single-gene genomic mRNA assessment of oestrogen/progesterone and HER2 receptor status.
Introduction
Breast cancer defines a broad spectrum of histological lesions that are considered highly heterogeneous in presentation, morphological characteristics, prognosis and therapeutic outcome. 1 Microarray-based gene expression profiling led to the discovery of intrinsic molecular subtypes underlying the variability in biological behaviour and response to treatment amongst breast cancer patients. 2,3 Five distinct subtypes were described by Perou et al. 2 , although the normal-like subtype was subsequently considered to represent normal breast tissue within the tumour. Luminal tumours expressed oestrogen receptor (ER) and responded to endocrine therapy. While Luminal A tumours show little benefit from the addition of chemotherapy, Luminal B tumours display some genetic similarities to Basal-like tumours in that they have a higher risk of being hormone resistant and show additional benefit from chemotherapy, as demonstrated by the significant pathological complete response rate after neo-adjuvant chemotherapy. Human epidermal growth factor receptor-2 (HER2)-enriched and Basal-like subtypes are considered more aggressive with an unfavourable prognosis, although paradoxically exhibiting greater chemosensitivity compared to the Luminal subtype. Basal-like breast cancers are inherently resistant to endocrine therapy, and tumours subtyped as HER2-enriched respond to anti-HER2 therapy in addition to chemotherapy. Some tumours reported as HER2 positive are subtyped as Luminal B and retain some responsiveness to endocrine treatment in addition to chemotherapy and HER2-targeted treatments. 4 Borley et al. 5 demonstrated that the HER2 gene copy number provides additional information for stratifying breast cancer patients into different treatment groups, because HER2-positive patients with a low degree of HER2 amplification were shown to derive less benefit from trastuzumab (the anti-HER2 monoclonal antibody more commonly known as Herceptin ® ).
Numerous studies using standard pathology have been performed to identify, with some accuracy, treatable molecular subtypes. Suggestions for incorporating markers such as the epidermal growth factor receptor (EGFR), the proliferation marker Ki67, the tumour suppressor gene protein p53, the transmembrane tyrosine kinase receptor CD117 (c-kit) and cytokeratin 5/6 into a standard immunohistochemistry (IHC) panel for breast cancer 3,6 have not been adopted because of poor standardisation. High-quality assessed Ki67 is considered most useful when the indication for
adding adjuvant chemotherapy to endocrine treatment is uncertain, but molecular classification can help to identify a larger group of early-stage breast cancer patients with a low risk of recurrence. 7 Some studies suggested that loss of progesterone receptor (PR) expression might be indicative of the Luminal B subtype 8 , but this association has not been universally reported. ER, PR and HER2 status have been incorporated into standard pathology reporting of breast cancers with reproducible prognostic and predictive value 9,10 . It is only with the advent of genetic tumour profiling that accurate molecular subtyping became part of daily clinical practice. A prospective study performed by Whitworth et al. 11 showed that the combined use of the 70-gene MammaPrint (MP) and 80-gene BluePrint (BP) assays in microarray analysis of mRNA expression may be more accurate than standard IHC in guiding treatment decisions. Notably, 22% of the over 400 breast cancer patients studied were reclassified into a different subgroup compared with conventional assessment and showed an improved distribution of response rates in the relevant treatment groups. Similar findings were reported by Yao et al. 12 Previous studies have shown that mRNA reporting of ER, PR and HER2 using microarrays is highly comparable to IHC testing. [13][14][15] However, others have cautioned against the preferential use of hormone receptor reports using RNA-based reverse transcription polymerase chain reaction (RT-PCR) technology, highlighting discordance with the IHC results and the potential of denying patients who were ER or PR positive on IHC the benefit of endocrine therapy. 16,17 Application of different methodologies for the same purpose therefore requires careful consideration.
Microarray-based tumour profiling using the 70-gene MP profile has been available in South Africa since 2007 and, from 2009, local referral criteria were introduced for reimbursement by medical aid providers. 18 Initially, analysis was performed on fresh tissue only, but since 2012 the use of formalin fixed paraffin embedded (FFPE) tissue became available and has become the only method used. A central database was established by using an ethics approved protocol for comparative effectiveness studies on data of MP tests requested in southern Africa. In addition to ER/PR mRNA reporting by TargetPrint (TP) from 2009, BP -which determines the tumour molecular subtype 19 -has also become part of the MP service from 2011. BP provides a comprehensive multigene expression analysis of the tumour molecular subtypes, which may not be sufficiently reflected by single-gene IHC or mRNA testing. 20,21 Although our data on HER2 expression indicated a 100% correlation between fluorescence in-situ hybridisation (FISH) and microarray testing using TP 22 , discrepancies in ER/PR reporting between IHC and RNA-based RT-PCR techniques 23,24 warranted evaluation in this study of the value and potential clinical impact of ER and PR status as reported by TP using RNA-based microarray analysis.
Our aim was to evaluate the combined use of MP/BP in clinical practice using a pathology-supported genetic testing approach incorporating ER, PR and HER2 status as part of the above-mentioned referral criteria, called the microarray pre-screen algorithm (MPA). ER and PR status as assessed by IHC was compared with that reported by TP in order to determine the correlation between the two techniques. We used the BP result to identify the molecular subtype and most probable response to therapy and correlated this result to the IHC and TP results. This study is the first to correlate IHC and mRNA hormone receptor status in South African breast cancer patients, using microarrays performed on FFPE specimens in the context of molecular breast cancer subtyping. As a consequence of the MPA employed in southern Africa, which generally excludes patients with ER/PR negative or HER2 positive tumours from testing, our series was limited to ER/PR positive, HER2 negative tumours only. South African patients reclassified as HER2 positive using TP and reflex FISH in a recent study 22 were also excluded from the current analysis.
Methods
This study was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki. Ethical approval was granted by the Health and Research Ethics Committee (HREC) of Stellenbosch University (reference number N09/06/166).
Study population
The records of 128 patients with 131 tumours were available in a central breast cancer genomic database, which comprised data of all patients referred for MP in southern Africa between 2007 and 2014. The database is maintained centrally under a rigorous quality control programme to ensure the integrity of the data. We extracted anonymised pathological data of 74 HER2 negative tumours from 73 patients who had TP and BP testing on FFPE tissue. The tumour pathology included tumour type, grade and size; ER, PR and HER2 status; MP risk status; TP results for ER, PR and HER2; and the molecular subtype as determined by BP.
Immunohistochemistry testing
Standard pathology reporting of hormone and HER2 receptor status using IHC to measure protein expression levels varies amongst different laboratories. To standardise the data for statistical analysis of hormone receptor status, estimation of the semi-quantitative expression of ER and PR was performed using the intensity score (0-3), multiplied by the reported proportion of positively stained nuclei, thereby calculating a final ER and PR score (0-300).
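The standardised score described above is straightforward to compute; a small helper following this definition (intensity 0-3 multiplied by the percentage of positive nuclei 0-100) is shown below:

def ihc_score(intensity, percent_positive_nuclei):
    """Standardised semi-quantitative IHC score used in this study:
    staining intensity (0-3) multiplied by the percentage of positively
    stained nuclei (0-100), giving a value between 0 and 300."""
    if not 0 <= intensity <= 3:
        raise ValueError("intensity must be between 0 and 3")
    if not 0 <= percent_positive_nuclei <= 100:
        raise ValueError("percentage of positive nuclei must be between 0 and 100")
    return intensity * percent_positive_nuclei

print(ihc_score(3, 90))   # strongly staining tumour -> 270
print(ihc_score(1, 5))    # weak, focal staining     -> 5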
Microarray-based gene expression profiling
Microarray-based gene expression profiling (MP, TP and BP) was performed on 74 FFPE tissue samples obtained from 73 breast cancer patients, using a pathology-supported genetic testing strategy. 18 An experienced pathologist evaluated tumour suitability for genomic analysis based on confirmation of a minimum tumour cell content of 30% in accordance with compliancy criteria laid out by the US Food and Drug Administration. These samples were transported under an export permit to the Netherlands where tumour assessment was performed at the centralised Agendia Laboratory in accordance with standard testing protocols. 25 mRNA expression for ER and PR is reported on a continuous exponential scale from -1 to 1 and values of less than 0 are considered to be negative.
Comparative analysis
Quantitative analysis was performed by comparing the level of mRNA for ER and PR as reported by TP with the IHC score. ER and PR were considered to be positive when the IHC score was >10 or the mRNA score was >0. Qualitative analysis was performed to allow in-depth evaluation of the relationship between IHC and mRNA compared to tumour subtyping, with the aim of determining the clinical implications of individual versus combined assessment of pathology and microarray-based genetic testing.
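The dichotomisation and concordance calculation described above can be sketched as follows; the toy values at the end are illustrative and are not taken from the study data:

def receptor_status(ihc_score, mrna_score):
    """Dichotomise ER or PR status with the thresholds used in the comparative
    analysis: positive if the IHC score exceeds 10, and positive if the
    TargetPrint mRNA score exceeds 0 (each assessed separately)."""
    return ihc_score > 10, mrna_score > 0

def concordance(ihc_scores, mrna_scores):
    """Fraction of tumours for which the IHC and mRNA calls agree."""
    calls = [receptor_status(i, m) for i, m in zip(ihc_scores, mrna_scores)]
    agree = sum(1 for ihc_pos, mrna_pos in calls if ihc_pos == mrna_pos)
    return agree / len(calls)

# Toy example with three tumours (IHC score 0-300, mRNA score -1 to 1)
print(concordance([250, 5, 160], [0.4, -0.2, -0.1]))   # -> 2/3 concordant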
Statistical analysis
Statistical analysis was performed using the Statistica v.13 software package. Observer agreement measures for IHC testing to determine hormone receptor status and microarray-based mRNA readout assessment were calculated from two-way contingency table analysis. 26 The relationship between protein expression (IHC) and mRNA (TP) levels was assessed using Spearman rank correlation analysis. A possible association between loss of PR expression in ER-positive cases (ER+/PR-) and the high-risk Luminal B subtype as determined by microarray analysis was further assessed. Results corresponding to a p<0.05 were considered statistically significant.
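For reference, the Spearman rank correlation reported here can be computed with scipy as sketched below; the paired values are simulated for illustration only, since the study data reside in the central database described in the Methods:

import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired measurements for one receptor: protein expression
# (IHC score, 0-300) and mRNA level (TargetPrint score, -1 to 1).
rng = np.random.default_rng(42)
mrna = rng.uniform(-0.2, 1.0, 74)
ihc = np.clip(200 * mrna + rng.normal(0, 60, 74), 0, 300)

rho, p_value = spearmanr(ihc, mrna)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")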
Comparative analysis of ER and PR status
A quantitative comparison between mRNA levels and IHC determination of ER and PR is shown in Figures 1 and 2. The combined score also amplified the differences between the groups, as illustrated in Figure 3. A qualitative analysis was performed to evaluate the relationship between IHC and mRNA in relation to tumour subtyping. The relationship between IHC and mRNA status versus tumour subtype is indicated in Figure 4. Importantly, none of the ER+/PR+ tumours determined on IHC were reported as Basal subtype breast cancers. When either ER or PR expression was lost, on both IHC and mRNA, there was a 57% (4 out of 7) risk of a Basal subtype. No predictive pattern in hormone receptor expression determined by IHC could be identified to distinguish Luminal A from Luminal B tumours identified through the combined use of MP/BP.
Discussion
Ongoing validation of emerging genomic technologies against current standards in breast cancer pathology is an important research focus. In South Africa, an increased level of confidence based on growing clinical experience supported the incorporation of MP/BP 19,27 into routine clinical practice. Integration of these results into treatment decisions may lead to a change in therapy in one of every two early-stage breast cancer patients treated in South Africa. 28 In the present study, the molecular subtype based on BP was used as an indicator of the expected response to therapy. The Luminal type of breast cancer is expected to respond to endocrine therapy, whereas the Basal subtype is inherently resistant to endocrine therapy. IHC and TP results were evaluated for their ability to predict a non-Luminal subtype.
In contrast to previous results from our research group showing a 100% correlation in HER2 status between FISH and TP, irrespective of whether fresh or FFPE specimens were used 22 , our current qualitative results show 18% discordance between the IHC and TP determination of ER and PR status. More importantly, three (4%) of the ER+/PR+ tumours as determined by IHC/FISH were ER-/PR- on TP despite being Luminal-type. If the TP results were interpreted without the IHC and/or BP results, this could have major implications for treatment decision-making for these patients. It is therefore important to realise that determination of mRNA status for ER and PR using single-gene mRNA analysis does not necessarily translate to protein expression or reflect the presence of a functioning receptor protein. 29 Because the molecular subtyping performed through BP is enriched for several genes involved in ER function, the combined score of the 80 genes included in the BP microarray profile provides a better indication of an intact ER-mediated kinase pathway and subsequent response to endocrine therapy. 20 Reasons for the apparent discrepancy between mRNA expression and receptor protein levels based on IHC can be found in our quantitative analyses, which indicated that ER and PR protein expression levels varied substantially in relation to mRNA levels. In Figure 1, there were eight tumours with mRNA scores of <0, indicating ER mRNA negativity, but with a mean IHC score of 156 (0-300), and only two had an IHC score <10. As mRNA levels increased above 0, there was only one Luminal A tumour with an IHC score of 0. For PR there seemed to be better correlation of low mRNA levels with the IHC score, with 6 out of 9 tumours reported with an mRNA score <0 showing an IHC score of 0. Similar results were reported by Bordeaux et al. 30 in a study using quantitative immunofluorescence to examine the relationship between ERα protein expression and mRNA levels measured in situ on tissue sections visualised using RNAscope probes. In a finding similar to this study, protein expression varied considerably at very low levels of RNA. In their analysis, mRNA did not show any prognostic value but had some predictive value above and beyond that of the ERα protein expression.
The presence of a splice variant in the ER gene might also influence the apparent ER protein expression, as previously described by Groenendijk et al. 31 Several ER splicing variants have been reported in the literature, resulting in one or more exons being omitted from the ER mRNA. Advanced techniques such as exome sequencing might be particularly effective in identifying these abnormalities. 32 Antibodies used to detect ER protein during routine IHC assessment rely on epitope recognition encoded by the first exon of the ER gene; therefore, antibody binding at these sites will produce a positive ER result, despite the lack of normal ER functionality as a result of a splice variant. Similarly, RNA methods based on single-gene identification of ER status such as TP (microarray, previously provided as a separate readout with MP) and Oncotype DX (RT-PCR, included in the 21-gene assay) will not routinely detect such variants. The ability of microarray analysis to detect functional ERα activity could explain why a patient with the ERΔE7 splice variant as noted by Groenendijk et al. 31 was classified as high risk by MP, but low risk by Oncotype DX (Recurrence Score of 8). Comprehensive genomic evaluation using multi-gene tests such as MP/BP is needed to identify these hormone-resistant tumours. This justifies the recent discontinuation of TP and the introduction of the European Conformity (CE)-marked next-generation sequencing based MP/BP ® (Agendia, Inc.).
Numerous technical factors have an influence on the accuracy of standard IHC reporting, including the effect of cold ischaemic time on ER epitope availability, resulting in false low levels of protein expression. 33,34 Different samples used for initial IHC versus subsequent receptor status determination using TP, could contribute to discordant results as some authors have reported high false positive and negative rates when testing was performed on tissue obtained from core needle biopsies as opposed to resected specimens. 35 Sample differences might also play a role as some of the patients had the initial receptor status reported on core needle biopsies whereas TP was performed on the resection specimen. In most units, however, this is an accepted practice with good correlation, although it should be avoided in ER and PR negative tumours. 36 Variations in individual ER and PR levels measured by IHC or mRNA levels did not discriminate between Luminal A and B tumours. Basal tumours had significantly lower scores for ER and PR on IHC as well as mRNA levels for ER. Utilising a summative ER/PR score for both IHC and mRNA allowed better discrimination between Basal tumours and Luminal tumours. When this combined score was employed, as illustrated in Figure 3, four out of six tumours with an IHC ≤ 270 and mRNA ≤0.1 were Basal while the remaining two were Luminal B breast cancer subtypes. This was also reflected in our qualitative analysis. All the tumours which were ER+/PR+ using IHC were Luminal A or B subtypes and the additional results provided by TP did not add any further information to assist in distinguishing Luminal A from Luminal B tumours identified by BP. In cases where ER or PR was negative on IHC, TP did add some additional value although the numbers in this group were limited (n=11). In this group, all four tumours for which TP showed an ER+/PR+ profile, were Luminal A and would potentially gain little or no benefit from additional chemotherapy. The remainder were either Luminal B or Basal subtypes, suggesting the addition of chemotherapy to the treatment plan.
Interpretation of the results is limited by a relatively small sample size as well as pre-selection of hormone-positive and HER2-negative patients in compliance with the MPA developed for reimbursement purposes in South Africa. 18 This precluded evaluation of the effect TP might have on reclassification of IHC ER-/PR- tumours 22 , possibly classifying some of these as hormone receptor positive or Luminal type supported by BP results. The main strength of the study lies in the reclassification of a subset of patients into the Luminal B and Basal-like subtypes who require more aggressive treatment compared to patients with Luminal A type tumours. Because the 80-gene BP profile is enriched in ER-target genes and measures the functional integrity of ER, it has the potential to identify a subgroup of breast cancer patients who are ER-positive by IHC and/or single-gene mRNA expression analysis but would fail to respond to hormone treatment. 31 In conclusion, our results show that, in contrast to the added value of TP as a second opinion for HER2 status 22 , single-gene microarray readout of ER/PR status provided little additional information beyond that obtained from standard IHC results and performed poorly in predicting molecular tumour subtype. If genomic hormone receptor status alone were used in clinical decision-making, it is possible that some patients might be erroneously denied endocrine therapy. However, in tumours for which ER or PR expression was lost on both IHC and TP, the molecular subtype determined by BP was less likely to be Luminal A, thus indicating the potential benefit of adjuvant chemotherapy. Basal tumours can potentially be identified by utilising the sum of the ER and PR results in both IHC and mRNA and selecting tumours for which IHC ≤ 270 and mRNA ≤ 0.1, or the flow diagram in Figure 4 can be employed. Accurate distinction between Luminal A and B molecular subtypes, and identification of the Basal type despite apparent positive ER status on IHC and TP noted in at least one South African patient, were only possible by using the BP 80-gene profile. It was our consistent observation that loss of PR expression on IHC was not indicative of the Luminal B subtype. TP provided limited additional information compared to IHC, which justifies the recent discontinuation of single-gene mRNA microarray testing of the ER, PR and HER2 genes now incorporated into the 80-gene BP profile.
The clinical relevance of a pathology-supported genetic testing approach to breast cancer management, combining microarray-based analysis as ancillary to existing clinico-pathological risk stratification and prognostication tools, is supported by the results presented in this study. The routine implementation of genomic profiling alongside standard pathology tests may increase clinician confidence in treatment decision-making and ultimately optimise individualised management of early-stage breast cancer patients by identifying molecular subgroups more accurately.
"year": 2019,
"sha1": "72e61646e23e4ed44ca8968a734bf50204552d45",
"oa_license": "CCBY",
"oa_url": "https://www.sajs.co.za/article/download/5461/7512",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d7fffd7c25ed06f721cbd438116b71ebd26c4a72",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
125567365 | pes2o/s2orc | v3-fos-license | Closed form Solutions of New Fifth Order Nonlinear Equation and New Generalized Fifth Order Nonlinear Equation via the Enhanced (G’/G)-expansion Method
Closed form solutions of nonlinear evolution equations (NLEEs) are very important for better understanding the inner mechanism and complexity of complex physical phenomena. The enhanced (G′/G)-expansion method is an effectual and proficient mathematical tool which can be used to discover the closed form solutions of NLEEs arising in mathematical physics, applied mathematics and engineering. In this article, the enhanced (G′/G)-expansion method is recommended and carried out to investigate the closed form solutions of the new fifth order nonlinear equation and the new generalized fifth order nonlinear equation. The method is reliable and proficient, and it makes it possible to obtain many new exact solutions beyond those given by other existing methods.
Introduction
Closed form solutions of nonlinear evolution equations (NLEEs) are gaining importance for the study of complex phenomena in science and engineering. NLEEs frequently appear in various fields, such as plasma physics, geophysics, nuclear physics, biomathematics, optical fibers, biomechanics, gas dynamics, chemical reactions, geochemistry, etc. Closed form solutions of NLEEs and their graphical representations reveal the inner mechanism of complex nonlinear phenomena. Therefore, it is an urgent and very important issue to search for more closed form solutions to NLEEs in order to better understand the structure of nonlinear phenomena. However, to date there is no single method that can handle all kinds of NLEEs. As a result, diverse groups of mathematicians, physicists and engineers have been working vigorously to develop effective methods with which to solve NLEEs.
Algorithm of the enhanced (G′/G)-expansion method
In this section, we explore the enhanced (G′/G)-expansion method for finding traveling wave solutions to NLEEs. Let us consider a nonlinear evolution equation in two independent variables x and t in the form R(u, u_t, u_x, u_tt, u_xt, u_xx, …) = 0 (2.1), where u = u(x, t) is an unknown function of x and t, and R is a polynomial of u(x, t) and its partial derivatives which contains the highest degree nonlinear terms. The essential steps of this method are offered in the following. Step 1: Introduce a compound variable ξ combining the real variables x and t, u(x, t) = u(ξ), ξ = x ± ωt (2.2), where ω specifies the speed of the traveling wave.
The traveling wave transformation (2.2) permits us to reduce Eq. (2.1) to an ordinary differential equation (ODE) S(u, u′, u″, u‴, …) = 0 (2.3), where S is a polynomial in u(ξ) and its derivatives with respect to ξ.
Step 2: The solution of Eq. (2.3) can be expressed as a finite series in (G′/G), denoted Eq. (2.4), whose coefficients and the wave speed ω are unknown parameters. Substituting (2.4) into (2.3), collecting the coefficients of like powers of (G′/G) and setting each coefficient to zero yields a system of algebraic equations. Solving this system of equations supplies the values of the unknown parameters.
Step 5: Substituting the general solution of the auxiliary equation for G(ξ), together with the parameter values obtained above, into (2.4), we obtain more general and some fresh traveling wave solutions of (2.1).
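The explicit ansatz (2.4), the auxiliary equation, and its general solutions (referred to later as (2.5)) did not survive extraction of the original PDF. For orientation, the LaTeX fragment below records the form in which the enhanced (G′/G)-expansion ansatz is commonly written in the literature (e.g., following Khan and Akbar); it is an assumed reconstruction given for readability, not the authors' exact equations, and the sign conventions vary slightly between papers.

```latex
% Assumed form of the enhanced (G'/G)-expansion ansatz (Eq. (2.4)) and its
% auxiliary equation; reconstructed from the standard literature, not from
% the garbled original text. Requires amsmath.
\begin{align}
u(\xi) &= \sum_{i=-n}^{n}\left[
  \frac{a_i\,\bigl(G'/G\bigr)^{i}}{\bigl(1+\lambda\,(G'/G)\bigr)^{i}}
  + b_i\,\bigl(G'/G\bigr)^{i-1}
    \sqrt{\sigma\Bigl(1+\tfrac{1}{\mu}\bigl(G'/G\bigr)^{2}\Bigr)}
\right], \\
G''(\xi) + \mu\,G(\xi) &= 0, \qquad \sigma = \pm 1,
\end{align}
% where a_i, b_i and \lambda are constants to be determined and the positive
% integer n follows from homogeneous balance; \mu < 0 yields hyperbolic and
% \mu > 0 trigonometric solutions.
```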
Applications of the method
In this section, we inspect the closed form solutions of the new fifth order nonlinear equation and the new generalized fifth order nonlinear equation with the help of the enhanced (G′/G)-expansion method. Example 1: In this subsection, we use the method to search for the exact solution of the new fifth order nonlinear equation given in [21] as Eq. (3.1). Applying the wave transformation (2.2), Eq. (3.1) is converted to an ODE, Eq. (3.2). Integrating (3.2) twice with respect to ξ and taking the integration constants to zero, we obtain Eq. (3.3). Balancing the highest-order derivative term u‴ and the highest-order nonlinear term (u′)² yields n = 1. Thus, the solution structure of Eq. (3.3) takes the form (3.4).
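The homogeneous balance invoked above can be made explicit with the standard power counting for (G′/G)-type ansatzes; the short derivation below is an illustration consistent with the stated result n = 1, not a reproduction of the authors' own steps.

```latex
% Worked homogeneous-balance step: each derivative raises the power of
% (G'/G) by one, so the two highest-order terms balance as follows.
\[
u \sim \Bigl(\tfrac{G'}{G}\Bigr)^{n}
\;\Longrightarrow\;
u''' \sim \Bigl(\tfrac{G'}{G}\Bigr)^{n+3},
\qquad
(u')^{2} \sim \Bigl(\tfrac{G'}{G}\Bigr)^{2n+2},
\qquad
n+3 = 2n+2 \;\Longrightarrow\; n = 1 .
\]
```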
Substituting (3.4), together with equation (2.5), into equation (3.3), we attain a polynomial.
From this polynomial we collect the coefficients of the powers of (G′/G). Equating them to zero, we obtain an over-determined system of thirty algebraic equations (for simplicity we omit displaying them). Solving this system of algebraic equations, we get Set 1:
Set 2: Set 3:
Set 4:
Set 5: Now, substituting solution sets 1-5, together with equation (2.5), into equation (3.4), we obtain abundant traveling wave solutions to Eq. (3.1) as follows: when µ < 0, we get the hyperbolic solutions, and, again, for µ > 0, the corresponding trigonometric solutions, where the wave variable ξ is as defined in (2.2). Example 2: In this subsection, we apply the method given in Section 2 to obtain the exact solutions, and then the solitary wave solutions, of the following new generalized fifth order nonlinear equation of the form [21], where α and β are constants.
Integrating (3.2) with respect to ξ twice and taking the integration constant to zero, we attain
Balancing the highest-order derivative term u‴ and the highest-order nonlinear term (u′)² yields n = 1.
Thus, the solution structure of Eq. (3.26) becomes as before. Substituting it into (3.26) and equating the coefficients of like powers to zero, we obtain a system of algebraic equations which, on solving, gives Set 1. Again, for µ > 0, we get the following trigonometric solutions (Type-6).
Results and Physical Explanations
In this section, we have discussed the obtained solutions of the new fifth order nonlinear equation and the new generalized fifth order nonlinear equation. From the above solutions, a representative solution plotted over the displayed range of x and t is given in Figure 6; for simplicity we omit the other figures. The advantage of the enhanced (G′/G)-expansion method over other methods is that it provides more general solutions and a huge number of new closed form wave solutions. The closed form solutions are of great importance for interpreting the inner mechanism of complex physical phenomena. Therefore, this method is very concise and straightforward to handle and can be applied to find closed form solutions of other NLEEs arising in science and engineering.
"year": 2017,
"sha1": "722b4a056e8d1cfa1f26762704dc9b71abcf6770",
"oa_license": "CCBY",
"oa_url": "https://juniperpublishers.com/bboaj/pdf/BBOAJ.MS.ID.555631.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5052ab1f66a9b4eb4f8c1b6671c5a260b084bd74",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
264482222 | pes2o/s2orc | v3-fos-license | Effects of High-Biotin Sample Interference on Antibody Concentrations in Sandwich Immunoassays
The use of antimicrobial growth promoters (AGPs) is banned because of problems associated with drug residues in animal products and increased bacterial resistance. The immunization of chickens with specific antigens is a promising strategy for generating specific antibodies that can target a wide range of antibiotic-resistant bacteria and can be used as an alternative to antibiotics. Immunoglobulin Y (IgY) antibodies in a polyclonal antibody (pAb) format, when administered orally, modulate the ruminal microbiome and maintain animal health and performance; however, there are concerns pertaining to protein impurities and biotin concentrations in the samples. Signal amplification strategies involving the noncovalent interaction of biotin with streptavidin are extensively used in diagnosis and scientific research, particularly in enzyme-linked immunosorbent assays (ELISAs). However, the high concentrations of biotin in samples, especially in those derived from rich sources such as egg yolk, can pose challenges and potentially harm the accuracy of diagnostic tests and protein concentration measurements. This study aimed to evaluate the influence of biotin on the measurement of IgY in freeze-dried egg yolk samples obtained from immunized laying hens using immunoassays with biotin–avidin/streptavidin. The detection of IgY in yolk samples using ELISA with streptavidin–biotin binding could lead to misdiagnosis due to biotin interference; the level of interference varies with the specific assay conditions and the concentration of biotin in the yolk samples. An ELISA without streptavidin–biotin binding is advisable to avoid interactions between biotin and target proteins, prevent biotin interference with the results, and achieve more reliable and accurate results.
Introduction
Antimicrobial resistance (AMR) is a global health crisis with significant consequences. Approximately 700,000 fatalities are attributed to AMR annually. By 2050, AMR could lead to approximately 10 million deaths per year and global social costs of USD 100 trillion, underscoring the wide-ranging impact of AMR [1]. The use and misuse of in-feed antibiotics in animal agriculture as growth promoters in poultry and livestock have raised significant concerns related to drug residues in animal products and the development of AMR [2].
IgY antibodies can be obtained in larger quantities from the egg yolks of immunized laying hens compared to those obtained from other traditional sources, such as serum from rats, rabbits, goats, and sheep [10]. Unlike traditional mammalian antibody production, which could involve blood collection that can be stressful, other invasive procedures, and even sacrificing the animal [11], this non-invasive egg collection and IgY extraction from egg yolk is cost-effective and ethical and is a sustainable source of antibodies [12]. Furthermore, egg yolk antibodies do not interact with rheumatoid factors in the serum of mammals and do not bind to proteins A and G, mammalian Fc receptors, or mammalian complements [13]. IgY antibodies are stable and resistant to degradation, allowing for long-term storage and retained activity through different manufacturing steps. Dried IgY batches can maintain their biological activity for several years [14,15].
The oral administration of IgY as an alternative to antibiotics or for microbiome modulation, especially in the polyclonal antibody (pAb) format, often does not require IgY purification for large-scale production. The use of polyethylene glycol (PEG), such as PEG6000, for protein collection from supernatants is a common method for removing fat from samples; however, it can sometimes result in protein impurities and high levels of biotin in the samples, which can be problematic in certain applications [16]. When using IgY without extensive purification, it is important to consider the high level of biotin naturally present in egg yolks, 988-1050 ng/g [17,18], because this could lead to potential biotin interference in assays; it is necessary to consider appropriate strategies to mitigate its impact. Biotin interference is a recognized challenge in immunoassays using streptavidin-biotin-binding interactions [19]; it can affect the accuracy of the results in both research applications and human diagnostics. Approximately 85% of chemiluminescence immunoassays are based on biotin-avidin/streptavidin and are used by more than two-thirds of laboratories in China [20,21]. All laboratories using signal amplification through biotin-streptavidin interactions face the threat of misdiagnosis due to biotin interference caused by excessive biotin consumption [22,23].
This study aimed to evaluate the influence of biotin on IgY quantification in unpurified freeze-dried egg yolk samples obtained from antigen-inoculated hens using biotin-avidin/streptavidin.
Preparation of Antigen
Streptococcus equinus was used as an antigen to produce yolk antibodies. The stock culture was thawed and grown on blood plates for 24-48 h at 38 °C under semianaerobic conditions. Colonies were collected by scratching, and they were aseptically transferred to a broth tube containing 2.5 mL of growth medium. The bacteria were grown in a basal medium (39 °C) that contained (per liter) 22 mmol glucose, 1.7 mmol K2HPO4, 2.1 mmol KH2PO4, 3.6 mmol (NH4)2SO4, 8.3 mmol NaCl, 0.75 mmol MgSO4·7H2O, 0.43 mmol CaCl2·2H2O, 2.8 mmol cysteine hydrochloride, 38 mmol Na2CO3, 5 mg/mL casamino acids (Difco), 10 mg/mL Trypticase, and 5 g yeast extract. A 1 L borosilicate bottle was used as the container for preparing the growth medium, which was autoclaved for 15 min at 121 °C and at a pressure of 1 atmosphere. A 40% glucose solution was added at room temperature to prevent caramelization. The pH of the medium was adjusted to 6-7 using either 1 M NaOH or 1 M HCl as required.
The turbidity of the culture was checked and compared to the McFarland standards to obtain a final density of 5 × 10⁹ cells/mL, and the culture was transferred to an Erlenmeyer flask containing 50 mL of culture media. When the culture achieved turbidity at 0.5 McFarland, it was transferred to an Erlenmeyer flask with 150 mL of culture media with CO2, which was sealed and incubated for 48 h at 38 °C. The homogenized content of the Erlenmeyer flask was filtered through four layers of cheesecloth. The solid in the filter was washed with 0.9% saline, and the total filtrate was transferred to centrifuge containers. The containers with the filtrate were balanced in pairs and centrifuged at 1000× g for 10 min at 4 °C. The supernatant from the second centrifugation was discarded, and the precipitate was transferred to a single container using a spatula and as little deionized water as possible to obtain the final bacterial pellet. The bacterial pellet was resuspended in phosphate-buffered saline (PBS) (pH 7.4) to obtain 5.3 × 10⁸ colony-forming units/mL of S. equinus and a turbidity of 0.5 McFarland. To this solution, 4% formaldehyde (18.5%) was added, which was followed by 30% Imject Alum Adjuvant (ThermoFisher Scientific, Life Tech Brasil, Itapevi, Sao Paulo, Brazil). The controls were prepared using the same amounts of PBS, formaldehyde, and adjuvant without bacterial pellets. For antigen adsorption, the adjuvant was added slowly, which was followed by constant agitation for 4 h. These immunologic response inductors were transferred to sterile serum bottles, capped, and stored at 4 °C until further use.
Immunization of Hens with Streptococcus Equinus and Preparation of Samples
White Leghorn hens (25-week-old) were divided into two groups. Solution (500 µL), with or without antigen, was injected deeply into the pectoral muscles of each group every 14 days for 56 days. Eggs were collected weekly, broken, and the shells, yolks, and egg whites were separated. The yolk was subjected to freeze drying and delipidation, according to Akita and Nakai [20], to concentrate the IgY and biotin in the aqueous protein fraction. The lyophilized yolk sample (1 g) was weighed in a 15 mL Falcon tube; 6 mL of phosphate-buffered saline (PBS) and 0.210 g of polyethylene glycol (PEG) 6000 were added. The mixture was vortexed for 1 min and incubated at 4 °C with shaking for 10 min. The samples were centrifuged twice at 4 °C, at 10,000 rpm, for 20 min. The precipitate consisted of solids and fat; one fraction of yellow fat floated above the supernatant, and the transparent supernatant in the middle contained biotin and proteins. The supernatant containing protein and biotin was separated from the solids and fat and centrifuged again; the supernatant was collected using a needle and syringe. The protein fraction was filtered using a funnel and paper filter (40 µm) to remove the suspended insoluble solid residue particles and yellow fat and to clarify the sample for the analysis of IgY and biotin using 'IDK Biotin ELISA,' manufactured by Immunodiagnostik AG (Bensheim, Germany). Crude IgY from non-immunized hens was used as the control.
Immunoassay Procedures
To evaluate the interference of biotin in the detection of Immunoglobulin Y (IgY, an antibody class found in chicken yolk), two commercial kits for sandwich enzyme-linked immunosorbent assay (ELISA) and two non-commercial plates specific to this study were used. These commercial kits, used for quantitatively detecting IgY in various biological samples for various purposes, come with all the necessary reagents, including antibodies, substrates, and standards, along with detailed instructions for conducting the assay. The capture antibody was added at a concentration of 0.5-4 µg/mL (pre-coated plate), while they usually use detection antibodies at 0.5-1 µg/mL. In these commercial ELISA kits, the detection antibodies are nonspecific; freeze drying makes the proteins and biotin in the sample very concentrated; therefore, a high dilution factor (1:1,000,000) is required to ensure that the sample reading signal is within the values of the standard curves.
The commercial test used was ECH0032 from FineTest, in which two specific antibodies were used to "sandwich" IgY between them. One antibody, known as the capture antibody (anti-IgY, usually 0.5-4 µg/mL), was immobilized on a solid surface (e.g., the surface of a microplate well); it binds specifically to IgY. Diluted test samples and standards (0.1 each) were added to the pre-coated plates. The plates were sealed with a cover and incubated at 37 °C for 90 min. The cover was removed, the solution was discarded, and the plate was washed twice with Wash Buffer. Subsequently, 0.1 mL of biotin-conjugated detection antibody was added to the wells, and the plate was sealed with a cover and incubated at 37 °C for 90 min. The cover was removed, and the plate was washed thrice with Wash Buffer; the wash buffer was allowed to stand in the wells for 1 min each time. Horseradish peroxidase (HRP-streptavidin) was added to each well, and the plate was sealed with a cover and incubated at 37 °C for 30 min. Subsequently, the cover was removed, and the plate was washed five times with wash buffer, and the wash buffer was allowed to stand in the wells for 2 min. Then, 90 µL of TMB (3,3′,5,5′-tetramethylbenzidine), the substrate for HRP, was added into each well, and the plate was incubated at 37 °C in the dark for 15-30 min to visualize the HRP enzymatic reaction. Finally, 50 µL of the stop solution was added to each well and mixed thoroughly. The OD450 was recorded immediately after adding the stop solution using a spectrophotometer, Biochrom EZ Read 400 Microplate Reader from Holliston, MA, USA. The calibration curve obtained from the serial dilution of the standard is shown in Figure 1.
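The paper gives only the calibration plots, not the curve-fitting details. As an illustration of how an IgY concentration could be back-calculated from an OD450 reading against such a standard curve, the sketch below fits a four-parameter logistic model with SciPy. The example standards, parameter names, and dilution handling are assumptions, not values from the kit inserts.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    """Four-parameter logistic: a = lower asymptote, d = upper asymptote,
    c = inflection concentration, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standard series (ng/mL) and their OD450 readings
std_conc = np.array([0.0, 1.25, 2.5, 5.0, 10.0, 20.0, 40.0])
std_od = np.array([0.05, 0.12, 0.22, 0.40, 0.75, 1.30, 1.95])

# Fit the standard curve (skip the zero standard so the power term stays well defined)
popt, _ = curve_fit(four_pl, std_conc[1:], std_od[1:],
                    p0=[0.05, 2.2, 10.0, 1.0], maxfev=10000)

def back_calculate(od, dilution_factor):
    """Invert the 4PL curve and correct for the sample dilution."""
    a, d, c, b = popt
    od = np.clip(od, min(a, d) + 1e-6, max(a, d) - 1e-6)  # keep within the asymptotes
    conc = c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)
    return conc * dilution_factor

# Example: a sample read at OD450 = 0.62 after a 1:1,000,000 dilution
print(back_calculate(0.62, dilution_factor=1_000_000))
```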
The IRKTAH1109 test from Innovative Research Incorporation was used to compare the results for IgY concentration obtained through the interaction of biotin with streptavidin without signal amplification. In the IRKTAH1109 assay, the IgY in the samples reacted with the anti-IgY antibodies adsorbed on the surface of polystyrene microtiter wells. After removing the unbound proteins by washing, an enzyme-antibody conjugate (0.1 mL) with horseradish peroxidase (HRP) was added to each well, and these anti-IgY antibodies conjugated with the previously bound IgY to form complexes. The plates were washed, and 100 µL of TMB (3,3′,5,5′-tetramethylbenzidine), the substrate for HRP, was added into each well; the plates were incubated in the dark at room temperature for 10 min to visualize the HRP enzymatic reaction. Finally, 100 µL of the stop solution was added into each well and mixed thoroughly. The OD450 was recorded immediately after the addition of the stop solution using a spectrophotometer. The calibration curve obtained from the serial dilution of the standard is shown in Figure 2.
The interference of biotin in specific antibody detection was evaluated with two-plate trapped antigen-enzyme linked immunosorbent assay (PTA-ELISA). After the bacterial pellet was resuspended in PBS (pH 7.4), the microtiter plate was coated with 100 µL per well of the suspension containing 5.3 × 10⁸ colony-forming units/mL of S. equinus using 1 µL/100 µL of 0.05 M carbonate-bicarbonate buffer (Sigma-Aldrich C3041, St. Louis, MO, USA) (pH 9.6) and incubated at 4 °C overnight on plate columns from 3 to 12 (Table 1). The plates were washed thrice with PBS-Tween (PBS-T; 0.85% NaCl in 0.01 M phosphate buffer, pH 7.2, containing 0.05% Tween 20). The plates were treated with cold water fish skin gelatin (0.5%; 200 µL per well) in PBS and incubated at room temperature for 1 h to block the plates. The blocking buffer was removed. Serial dilutions were prepared (Table 1) as follows: 100 µL of PBS was placed into wells (except in columns 1 and 4); 200 µL of sample was placed into the wells of column 4; 100 µL was transferred from wells of column 4 to wells of column 5 (1:1), and then from column 5 to column 6 and so on (1:2; 1:4; 1:8; 1:16; 1:32; 1:64), until column 12 (1:128) (Table 1). The serial dilutions of the samples were performed in duplicate; the plates were sealed with a cover and incubated at 37 °C for 90 min on a soft shaker. The content was discarded, the plates were washed twice, and 100 µL of biotin-labeled antibody was added into the wells. The plates were sealed with a cover and incubated at 37 °C for 60 min. The cover was removed, and the plates were washed thrice; HRP-streptavidin conjugate (0.1 mL) was added to each well, and the plate was covered and incubated at 37 °C for 30 min. Subsequently, the cover was removed, the plate was washed five times with wash buffer, and the wash buffer was allowed to stand in the wells for 2 min. Then, 90 µL of TMB (3,3′,5,5′-tetramethylbenzidine), the substrate for HRP, was added into each well, and the plate was incubated at 37 °C in the dark for 15-30 min to visualize the HRP enzymatic reaction. Finally, 50 µL of the stop solution was added to each well and mixed thoroughly. The OD450 was recorded immediately after the addition of the stop solution using a spectrophotometer.
Carbonate-bicarbonate buffer without antigen (100 µL) was added to the wells of columns 2 and 3 as an assay control to monitor direct cross-reactions between the antigen, blocking, target antibodies, biotin, and HRP-streptavidin interactions; the samples were not incubated (Table 1).
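For clarity, the two-fold serial dilution layout described above can be written out programmatically; the following sketch simply enumerates the nominal dilution in each plate column (the column numbering and labels are taken from the text and Table 1 as quoted, and the function is illustrative only).

```python
# Enumerate the nominal dilution of each PTA-ELISA sample column as described
# in the text: column 4 holds undiluted sample, and columns 5-12 carry
# successive two-fold dilutions (1:1 ... 1:128).
def pta_elisa_dilutions(first_column=4, last_column=12):
    layout = {}
    for i, column in enumerate(range(first_column, last_column + 1)):
        if i == 0:
            layout[column] = "undiluted sample"
        else:
            layout[column] = f"1:{2 ** (i - 1)}"  # 1:1, 1:2, ..., 1:128
    return layout

for column, dilution in pta_elisa_dilutions().items():
    print(f"column {column}: {dilution}")
```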
Another PTA-ELISA with a specific antigen coated on the wells was used similarly, without signal amplification through the interaction of biotin with streptavidin. After removing the unbound antibodies by washing, HRP-conjugated rabbit anti-chicken IgY (IgG) (Sigma-Aldrich A9046) was added, which was followed by the same washing steps. Then, 90 µL of TMB (3,3′,5,5′-tetramethylbenzidine), the substrate for HRP, was added into each well, and the plate was incubated at 37 °C in the dark for 15-30 min to visualize the HRP enzymatic reaction. Finally, 50 µL of the stop solution was added to each well and mixed thoroughly. The OD450 was recorded immediately after the addition of the stop solution using a spectrophotometer. The intensity of the yellow color was proportional to the concentration of IgY captured from the sample.
Statistical Analysis
The standards that come with commercial plates for quantitative analysis do not contain specific antibodies against the antigens used for coating the wells in the non-commercial plates. We analyzed two samples, from immunized hens and from non-immunized hens, and 5 plates, 3 plates for signal-based detection and 2 commercial plates for IgY concentration. We used the complete randomized block design with a 2 × 3 factorial arrangement (3 plates × 2 samples) for signal values and a 2 × 2 factorial arrangement (2 plates × 2 samples) for IgY concentrations with eight replicates. Regression analysis of the serial dilution assays from PTA-ELISA with streptavidin-biotin detection was performed to fit a better model. Statistical analyses were performed using the GLM procedure of SAS 9.4 (SAS Institute Inc., Cary, NC, USA). The 95th percentile was calculated; p < 0.05 was considered statistically significant. The least-square means were adjusted for multiple comparisons using the Tukey-Kramer method.
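The original analysis was run with the GLM procedure in SAS 9.4; as a rough open-source analogue of the 2 × 3 factorial model for the signal values (plate × sample, with the replicate treated as a blocking factor), the sketch below uses statsmodels. The column names, file name, and data frame are placeholders, and this is not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder data: one row per well, with the OD450 signal, the plate
# (3 levels for the signal analysis), the sample (immunized / non-immunized),
# and the replicate used here as a blocking factor.
df = pd.read_csv("elisa_signals.csv")  # hypothetical file

# Two-way factorial model with interaction; block entered as a fixed effect
model = smf.ols("signal ~ C(replicate) + C(plate) * C(sample)", data=df).fit()
print(anova_lm(model, typ=2))

# Tukey-style pairwise comparisons across the plate-by-sample cells
df["cell"] = df["plate"].astype(str) + ":" + df["sample"].astype(str)
print(pairwise_tukeyhsd(df["signal"], df["cell"], alpha=0.05))
```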
Results
The signals for the yolk samples obtained from hens inoculated with the antigen were higher than those for samples from hens inoculated without the antigen (p < 0.001). Regression analysis of the serial dilution assays from PTA-ELISA with streptavidin-biotin detection described the relationship between the observed signal and the dilution levels, indicating a significant and stronger relationship in the samples from hens inoculated with the antigen (R² = 0.96) compared to that in samples from hens inoculated without the antigen (R² = 0.52) (Figure 3). The signal values obtained from control wells demonstrated that the blocking worked well; when the antibodies were not bound to the plate without antigen, there was no cross-reaction between the blocking and secondary antibodies or between the secondary antibodies and the antigen pre-coated on the plate (Figure 3). There was an interaction between the samples from hens inoculated with or without the antigen and the plates containing biotin-avidin/streptavidin (p < 0.001). There was no difference in signal values or IgY concentration between samples from hens inoculated with or without antigen in immunoassays using biotin-avidin/streptavidin; however, in the assays without biotin-avidin/streptavidin, the signal values and IgY concentrations of the samples from hens inoculated with antigen were higher than those in the samples from hens inoculated without antigen (Figures 4 and 5). There was a trend of higher signal values in the samples from hens inoculated with antigen compared to that in samples from hens inoculated without antigen in the PTA-ELISA without the use of biotin-avidin/streptavidin (p < 0.06). The signal values in this plate were lower than those obtained from the commercial ELISA kits (p < 0.001), except for those signals obtained in the control sample, in the plate using biotin-avidin/streptavidin.
Discussion
Various sample components can lead to false positive or false negative results and affect the sensitivity and specificity of an assay. Lipids are components that contribute to the matrix effect; therefore, they are removed. The antibodies pre-coated on the plate were specific anti-IgY antibodies, and the matrix effect was mitigated to a certain extent through delipidation. However, we cannot exclude the matrix effect and the cross-reactions with proteins other than IgY; the high biotin level was likely the main influence in the sample matrix.
The amount of biotin can interfere with complex formation and influence the solubility and availability of Ab in different dilutions and the Ag-Ab relationships. Dilutions decrease the target protein concentrations; therefore, the lower signal values at the initial points of the serial dilutions were unexpected and are one of the most difficult types of interference to explain.
In the case of a high concentration of IgY in the sample, a limited number of anti-IgY or antigen sites (Ag) are encountered by a huge number of analyte molecules, resulting in a deficit of anti-IgY or Ag in the solid phase because the anti-IgY or Ag used in the assay can be completely coated and saturated with the analyte. Therefore, the excess antibodies in the sample do not form complexes with anti-IgY or Ag (prozone) because the analyte is only partially bound. This underestimates the IgY concentrations in the samples from immunized hens, generating false similarities between the antibody concentrations of the different treatments.
Undiluted and diluted samples are often tested to detect the prozone effect. If the results of the diluted sample are higher than that of the undiluted sample, then the undiluted sample most likely exhibits the prozone effect. This phenomenon can lead to misleading test results, including false negatives and artificially elevated signals after sample dilution [24]. This could be overcome by adequately diluting the samples; the analyte concentration can be reduced to a level where the anti-IgY or Ag is not saturated, allowing for accurate measurement.
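The dilution check described above can be expressed as a simple rule; the sketch below (the tolerance value and variable names are illustrative assumptions, not part of the cited protocol) flags a possible prozone/hook effect when the dilution-corrected result of a diluted aliquot exceeds the neat result.

```python
def possible_prozone(neat_result: float, diluted_result: float,
                     dilution_factor: float, tolerance: float = 1.2) -> bool:
    """Flag a possible prozone (hook) effect.

    neat_result: analyte value reported for the undiluted sample.
    diluted_result: value reported for the diluted aliquot (not yet corrected).
    dilution_factor: e.g. 10 for a 1:10 dilution.
    tolerance: how much the corrected diluted value must exceed the neat value
               before flagging (1.2 = 20% excess; an arbitrary illustrative choice).
    """
    corrected = diluted_result * dilution_factor
    return corrected > neat_result * tolerance

# Example: the neat sample reads 50 units, but a 1:10 dilution reads 40 units
# (400 units after correction) -- the neat result is suspect.
print(possible_prozone(neat_result=50, diluted_result=40, dilution_factor=10))
```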
The dilution curve obtained from serial dilutions (Figure 3) is essential for determining the optimal relationship between antigen (Ag) and antibody (Ab) concentrations, and it helps to identify the point where complex formation is most effective. Beyond this point, further dilution reduces the concentration of specific antibodies available for complex formation, leading to a decrease in the signal. The initial dilutions can lead to an upward curve in the assay results owing to the prozone phenomenon and the greater solubility, availability, and dispersion of IgY proteins in the solution, which promotes better contact between the antibodies and the antigen, leading to increased complex formation.
Considering the use of streptavidin-biotin binding as part of the assay methodology, the hypothesis of interference from high amounts of biotin in the sample prevails. When the concentration of biotin in the sample is extremely high, the available binding sites on the reagents can become saturated. This excessive binding can prevent the formation of secondary biotin-labeled antibody complexes, which are washed away before the addition of HRP-streptavidin. This could decrease the number of biotin marker-binding sites in the secondary antibody compared to that in the primary antibody, keeping these sites occupied. In addition, the target antibodies (Ab) in the sample are bound to endogenous biotin even before being captured by the coated anti-IgY and will form streptavidin-biotin complexes, amplifying the signal after the addition of HRP-streptavidin. Therefore, changes in the measured values of sandwich immunoassays can lead to false low or high values in the measured results, depending on the concentrations and the relationships among the proteins, biotin, and reagents in the assays using biotin-avidin/streptavidin. Kabiri et al. [22] studied biotin interference in protein measurements and concluded that biotin in the sample leads to false high values in sandwich immunoassays. Liu et al. [23] observed that biotin in the sample led to a false negative result because the assay failed to detect the target Ab even though it was present in the sample.
Liu et al. [23] tested the levels of streptavidin-coated magnetic microparticles required to neutralize a high concentration of biotin in a sample; the results varied with the biotin and target protein concentration relationships. The results from this study are in line with those from Liu et al. [23] until a dilution factor of 1:16; lower sample dilutions do not result in higher immunocomplex formation. Instead, there was a decrease in biotin interference at higher dilutions, the amount of IgY in samples from hens inoculated with antigen being enough to increase immunocomplex formation with less biotin interference (Figure 3).
The smaller IgY concentration in the control samples from hens inoculated without antigen was not sufficient to obtain the signal until a dilution factor of 1:16; immediately after the reaction, some nonspecific Abs were bound to the antigen (Figure 3). The strong affinity of streptavidin for biotin can lead to signal amplification even when small amounts of endogenous biotin are present in the sample. This can result in false-positive signals in assays because streptavidin may bind to endogenous biotin, producing a signal that is not related to the target of interest. Therefore, appropriate blocking methods and controls must be employed to address this issue. In some cases, when primary antibodies are not adequately washed off or blocked, streptavidin can amplify signals from nonspecifically bound primary antibodies. This can lead to false positive results or increased background signals. Cross-reactivity with irrelevant antigenic targets is common because of the multiple-epitope recognition of pAbs, which bind to common antigenic motifs [25].
In line with our results, Liu et al. [26] and Samarasinghe et al. [27] found falsely decreased signal values due to biotin interference in sandwich immunoassays. Liu et al. [23] argued that the biotin in the sample and the reagent compete for binding sites on streptavidin reagents; as a result, luminescent substances cannot be captured by streptavidin. They used streptavidin-coated magnetic microparticles to neutralize a high concentration of biotin in a sample; the signals were markedly greater when the concentration of biotin was approximately 500 ng/mL and 1000 ng/mL; the detection results differed by approximately 10% and >20%, respectively, from those obtained in samples without biotin addition. Similarly, IgY detection increased with a reduction in biotin concentration from 5,250, 2,620, 1,310, and 650 ng/mL to 320 ng/mL, after which biotin interference did not occur, and dilutions began to decrease IgY detection (Figure 3).
If the amount of streptavidin is sufficient for all Abs, when biotin-streptavidin binding is used as part of a 'one-step sandwich' immunoassay format, excess biotin in the sample can block the binding of biotinylated antibodies to biotin-binding sites on the streptavidin-coated solid phase, resulting in false low results [19]. However, using a sandwich immunoassay with washing steps, the absence of a difference between the signals of samples from hens inoculated with or without the antigen on plates using biotin-avidin/streptavidin (Figure 4) can be attributed to the same amount of streptavidin being deposited in all wells during the assay, resulting in similar signals even with different IgY concentrations. Avidin and other biotin-binding proteins, including streptavidin, have the ability to bind up to four biotin molecules; however, the reason for biotin interference in samples with high biotin concentrations could be an insufficiency of streptavidin, a case where all added streptavidin binding sites are occupied, making it impossible to differentiate between samples with different Ab concentrations (Figure 4).
A method without signal amplification using the interaction of biotin with streptavidin provided clearer differentiation between samples. When signal amplification is not involved, the signal intensity directly reflects the concentration of the target analyte, making it easier to determine the relative differences between samples (Figure 4). Without signal amplification or interference from biotin, plates coated with nonspecific anti-IgY antibodies and plates coated with specific antigens showed similar relative differences between samples for both signal and IgY concentrations (Figures 4 and 5). This suggests that the plates provide consistent results, which is a positive aspect of the assay. The HRP-streptavidin method provided similar IgY concentrations between samples with and without antigen (Figure 5); this combines the issues, leading to the increase and decrease in IgY concentrations, respectively, compromising the comparison between different treatments. The variation observed in the IgY levels in the ELISA assay, where hens inoculated with antigen were tested without employing streptavidin-biotin binding, can be attributed to the immune responses of the hens to antigen inoculation, especially when adjuvants were involved. This could lead to differences in the concentration and specificity of the antibodies produced. However, the IgY concentrations were consistently higher in hens inoculated with antigens compared to that in those inoculated without antigens, particularly in the absence of biotin interference.
The lower signal values of samples from hens inoculated with antigen in the PTA-ELISA compared to their corresponding signals from commercial ELISA kits could be attributed to the measurement of specific Abs; other nonspecific Abs may not bind to the antigen pre-coated on the plate. Egg yolk can provide 70-150 mg of Ab, with only 2-10% Ag specificity for the inoculated antigens [28]. The IgY concentrations in this study were similar to those reported earlier. Losch et al. [29] described that one egg yolk provides 40-500 mg of IgY, while Kowlaczyk et al. [30] showed the amount of IgY in the egg yolk was 15.7 mg/mL with a range from 5.3 to 43.3 mg/mL; in theory, 120 mg IgY can be obtained from one egg. However, there is a wide range of results. The results of the samples with excess biotin, analyzed using ELISA with signal amplification using the biotin-streptavidin interaction, could lead to misdiagnosis owing to biotin interference. Therefore, biotin-stripping methods can be used to mitigate the risk of false negatives and false positives to remove excess biotin from the sample. Biotin-blocking reagents can prevent biotin interference by binding free biotin molecules in the sample before performing the ELISA. An alternative antibody-based detection method that does not involve biotin in the streptavidin-biotin binding can be used for better accuracy.
Conclusions
The strong affinity of streptavidin for biotin can significantly affect the measured IgY concentration in different assays, particularly when different levels of antibodies and biotin are present in the sample. Strategies to neutralize biotin levels in samples or alternative detection methods are crucial for avoiding potential biotin interference and ensuring reliable results. Detection methods based on the direct labeling of antibodies with enzymes that do not rely on streptavidin-biotin interactions are desirable and should be preferred for preventing biotin interference.
Figure 1. The calibration curve obtained from the serial dilution of the standard from ECH0032 (FineTest).
Figure 2. The calibration curve obtained from the serial dilution of the standard from IRKTAH1109.
Figure 3. Correlation between the optical signal value for IgY detection (absorbances at 450 nm) and the levels in the serial dilutions in PTA-ELISA. Assay controls: A = without antigen with sample; B = without antigen and sample; and C = with antigen without sample.
Figure 4. Comparison of IgY detection in yolk samples from hens inoculated with or without antigen through ELISA using streptavidin-biotin binding (HRP-Streptavidin) or not (HRP) and on specific antigen pre-coated plates without the use of streptavidin-biotin binding-HRP (PTA-ELISA).
Figure 5. Comparison of IgY concentration in yolk samples from hens inoculated with or without antigen through ELISA using streptavidin-biotin binding (HRP-Streptavidin) or not (HRP).
McDougall's solution. The content was redistributed in centrifuge containers and centrifuged at 11,250× g for 20 min at 4 °C. The supernatant was discarded, and the precipitate was resuspended in
Table 1.
Serial dilution and the concentrations of biotin and assay controls in the PTA-ELISA columns. | 2023-10-26T15:13:24.253Z | 2023-10-24T00:00:00.000 | {
"year": 2023,
"sha1": "f275656569a7d10e2587cfebebd04fcc82c730c3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-393X/11/11/1627/pdf?version=1698126462",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9d7519cf2402f234940133feaacf2b5704eed719",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
256506104 | pes2o/s2orc | v3-fos-license | Obstetric and perinatal outcomes following frozen and fresh embryo transfer in patients with endometrial hyperplasia and carcinoma: a retrospective study in a high-volume reproductive center
Background There is ongoing debate regarding which embryo transfer procedure can achieve a higher live birth rate. Research has suggested that frozen ET might be beneficial for certain populations, such as hyper-responders. This study aimed to compare outcomes of pregnancies between frozen and fresh embryo transfer cycles in patients with endometrial hyperplasia and carcinoma. Methods This retrospective cohort study was conducted at a high-volume reproductive center from January 2010 to January 2022. Patients who were diagnosed with endometrial hyperplasia with atypia and endometrial carcinoma were included. They all underwent in vitro fertilization after conservative treatment. The primary outcome was live birth after frozen and fresh embryo transfer cycles, and secondary outcomes included perinatal complications and other pregnancy outcomes. Results Overall, 259 ET cycles (130 fresh and 129 frozen) were included. The rate of live births per embryo transfer cycle of the whole cohort was 20.8% (54/259), and no significant between-group difference was found after adjusting for potential confounding factors (23.8% vs. 17.8%; adjusted OR, 0.47; 95% CI, 0.21-1.06; p=0.068). Compared to fresh embryo transfer group, the incidence of total maternal complications in the frozen embryo transfer group was significantly higher (30.4% vs. 6.5%, p=0.019). Analyzing each complication as a separate entity, patients in the frozen embryo transfer group had a higher incidence of hypertensive disorders of pregnancy (p=0.028). Multiple logistic regression analysis showed that frozen embryo transfer was related with an increased occurrence of maternal complications (OR, 6.68, 95% CI, 1.01-44.19, p=0.040). Conclusions Among patients with endometrial hyperplasia and carcinoma, the rate of live births was comparable between both embryo transfer procedures, while frozen embryo transfer might be associated with a higher risk of maternal complications compared to that with fresh embryo transfer.
Introduction
Endometrial carcinoma (EC) is a common malignant neoplasm affecting women and its incidence has shown an increasing trend with changes in dietary habits and lifestyles [1]. Endometrial hyperplasia with atypia (AH), also known as endometrial intraepithelial neoplasia (EIN), is a precancerous condition of EC; its rate of progression to cancer is nearly 30-40% [2]. Fertility-sparing treatment has been widely used in patients with EC and AH [3,4], and previous research has demonstrated the efficacy of progesterone, with a high response rate of 75-97% [5,6]. After complete remission, close follow-up and pregnancy encouragement are recommended because of the high recurrence rate (40-50%), and the median time to recurrence is around 12-28 months [6]. However, natural conception is difficult for EC/AH patients because they often have accompanying causes of infertility, such as obesity and chronic anovulation [7,8]. Therefore, referral to a reproductive center is highly suggested to expedite treatment with assisted reproduction. In vitro fertilization (IVF) has been recognized as an efficient way to increase the probability of pregnancy and birth in EC/AH patients.
Recently, there has been an increasing trend toward the use of frozen embryo transfer (ET) in many reproductive centers with the rapid development of embryo freezing techniques [9]. The advantages of frozen ET include reducing the risk of ovarian hyperstimulation syndrome (OHSS) and achieving a physiological endometrial environment for embryo implantation [10,11]. However, not all infertile women reportedly benefit from the "freeze-all" strategy, and data comparing outcomes of frozen and fresh ET cycles are contradictory. Chen et al. showed that, compared with fresh ET, frozen ET achieved a higher live birth rate in patients with polycystic ovary syndrome (PCOS) [12], whereas other researchers did not find a significant difference in pregnancy outcomes between the two ET procedures in ovulatory or non-PCOS women [13,14]. These inconsistent results suggest that frozen ET is more appropriate in certain groups of patients.
IVF treatment is complicated in EC/AH patients. Frozen ET seems suitable due to the relatively high incidence of OHSS and thin endometrium during ovarian hyperstimulation [15][16][17]. However, most EC/AH patients are not willing to wait longer for a frozen ET cycle, considering the high risk of recurrence in the short term. Therefore, doctors may find it difficult to determine whether to continue with the fresh ET cycle or cryopreserve the embryos for subsequent frozen ET to achieve a better pregnancy outcome. To date, only four studies have reported IVF characteristics in EC/AH patients, and each study had a small sample size (8, 21, 49, and 75 cycles, respectively) [16][17][18][19]. Little is known about the differences between these two ET procedures in this population. The present study aimed to compare outcomes of pregnancies between frozen ET and fresh ET cycles in EC/AH patients.
Study design and population
In this retrospective cohort study, we collected medical records and analyzed data from the Reproductive Center of Peking University Third Hospital (PUTH) between January 2010 and January 2022. The inclusion criteria were: (1) histologically proven endometrial hyperplasia with atypia or well-differentiated endometrioid adenocarcinoma; (2) accepted fertility-sparing treatment and achieved complete remission; (3) underwent standardized controlled ovarian stimulation protocols and achieved embryo transfer; (4) age ≤40 years; and (5) no primary hypertension or diabetes mellitus. Patients with no retrieved oocytes or no available embryos to transfer, as well as preimplantation genetic testing (PGT) cycles, were excluded. Figure 1 shows the flow chart of this study. Overall, 290 ET cycles met the screening criteria. After excluding 25 cases with no available embryos to transfer, two cases with no oocytes retrieved, two cases aged >40 years, and two PGT cycles, the remaining 259 cycles were included for analysis and divided into frozen ET and fresh ET groups.
Ethical consideration
Approval was obtained from the Ethics Committee of the PUTH (No. IRB 00006761-M2020004).
IVF treatment
The details of each ovarian stimulation protocol and the ovarian follicular monitoring process have been described previously [20]. Oocyte retrieval was conducted 36 ± 2 hours after triggering. Methods of oocyte fertilization included conventional IVF and intracytoplasmic sperm injection. When embryos had been cultured to day 3 (the cleavage stage), they were assessed under a microscope. Top-quality cleavage-stage embryos were those derived from 2PN embryos that reached 5-8 cells with <30% cytoplasmic fragmentation [21]. Non-top-quality cleavage-stage embryos were cultured further to day 5 or 6 (the blastocyst stage) and assessed according to the Gardner grading system [22].
Fresh ET was the first choice unless the patient had conditions that made her unsuitable for immediate transfer. The transfer procedure was performed by a group of experienced doctors. Patients received oral or intravaginal progesterone support from the day of oocyte retrieval, which was continued until 10 weeks of gestation if pregnancy occurred.
Some patients did not undergo fresh ET owing to a thin endometrium, premature progesterone elevation, a high risk of OHSS, or patient preference. Frozen ET was performed in natural monitored cycles or programmed artificial cycles, as determined by the doctors. Cryopreservation and recovery of viable embryos, regimens for endometrial preparation, and luteal support for frozen ET were conducted according to the protocol at our center [23]. Briefly, frozen ET was performed either in a natural monitored cycle or in a programmed artificial cycle. For the natural cycle, the thawed embryo was transferred on day 3 or 5 after ovulation, and luteal support was provided with vaginal administration of progesterone 60 mg/d from the night of transfer. For the artificial cycle, the patient took daily oral estradiol to promote endometrial development and added oral progesterone when the endometrial thickness was adequate. The thawed embryo was transferred on day 5 after initiation of progesterone treatment.
During IVF treatment, patients were also followed up every 3 months by the gynecologic oncologist. Transvaginal ultrasound was performed at each follow-up, and hysteroscopy was performed if irregular vaginal bleeding occurred or an abnormality was suggested by ultrasound.
Study outcomes
The primary outcome was live birth, defined as birth of a live baby beyond 28 weeks of gestation. The secondary study outcomes included perinatal outcomes and other pregnancy outcomes (clinical pregnancy, ectopic pregnancy, miscarriage, and implantation). Perinatal outcomes included maternal complications (hypertensive disorders of pregnancy [HDP], antepartum hemorrhage, gestational diabetes mellitus [GDM]) and neonatal complications (preterm birth, small-for-gestational age [SGA], low birthweight, large-for-gestational age [LGA], and macrosomia).
Serum β human chorionic gonadotropin (β-hCG) levels were measured in all women 14 days after ET. A woman with a positive pregnancy blood test underwent transvaginal ultrasonography 4 to 6 weeks after ET to identify an intrauterine pregnancy, signifying a clinical pregnancy. The implantation rate was calculated as the number of gestational sacs divided by the number of transferred embryos. Ectopic pregnancy was defined as a pregnancy with the embryo implanted outside the uterine cavity. Miscarriage was defined as any loss of an intrauterine pregnancy before 28 weeks. Ongoing pregnancy was defined as the presence of a fetal heartbeat observed by ultrasonography after 12 weeks. Preterm birth was defined as any delivery occurring between 28 and 37 weeks of gestation. Macrosomia and low birthweight were defined as birth weight ≥4,000 g and <2,500 g, respectively. LGA and SGA were defined as birth weight above the 90th and below the 10th percentile, respectively, of the birth weight reference for the Chinese population [24].
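As a minimal illustration of how these definitions translate into classification rules (not the authors' code; the variable names and the externally supplied LGA/SGA percentile cut-offs are assumptions), consider the following sketch:

```python
# Sketch of the outcome definitions stated above (28/37 weeks; 2500/4000 g).

def implantation_rate(gestational_sacs: int, embryos_transferred: int) -> float:
    """Implantation rate = gestational sacs / transferred embryos."""
    return gestational_sacs / embryos_transferred

def classify_birth(gestational_weeks: float, birth_weight_g: float):
    """Apply the cut-offs stated in the Methods."""
    labels = []
    if 28 <= gestational_weeks < 37:
        labels.append("preterm birth")
    if birth_weight_g < 2500:
        labels.append("low birthweight")
    if birth_weight_g >= 4000:
        labels.append("macrosomia")
    return labels

def classify_size_for_ga(birth_weight_g: float, p10: float, p90: float) -> str:
    """LGA/SGA require the 90th/10th percentiles of the Chinese birthweight
    reference (ref. [24]); the percentile values must be supplied externally."""
    if birth_weight_g < p10:
        return "SGA"
    if birth_weight_g > p90:
        return "LGA"
    return "AGA"
```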
Statistical analysis
Normally and non-normally distributed continuous variables are presented as mean ± standard deviation and median (interquartile range [IQR]), respectively. The t-test was used to compare normally distributed continuous variables between the two groups, while the Mann-Whitney U-test was used for non-normally distributed variables. Categorical variables are presented as numbers (percentages) and were compared using Pearson's chi-squared test or Fisher's exact test; Fisher's exact test was used instead of the chi-squared test when the total sample size was below 40 or any expected cell count in the contingency table was below 5. Multiple logistic regression analysis was used for two purposes. The first analysis examined the relationship between the two ET procedures and pregnancy outcomes, adjusting for the confounding factors body mass index, maternal age, histological type, endometrial thickness, infertility type, infertility duration, basal antral follicle count (AFC), basal follicle-stimulating hormone (FSH), and the stage of the transferred embryos. The second analysis included eight independent variables to identify factors associated with maternal complications. All analyses were performed using SPSS software. Significance was defined as a two-sided p-value of <0.05.
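The analyses were run in SPSS; the sketch below only mirrors the same workflow with open-source tools (scipy and statsmodels), using placeholder variable names rather than the study's actual dataset:

```python
# Illustrative equivalent of the described statistical workflow.
import numpy as np
from scipy import stats
import statsmodels.api as sm

def compare_continuous(x_frozen, x_fresh, normal=True):
    """t-test for normally distributed variables, Mann-Whitney U otherwise."""
    if normal:
        return stats.ttest_ind(x_frozen, x_fresh)
    return stats.mannwhitneyu(x_frozen, x_fresh, alternative="two-sided")

def compare_categorical(table_2x2, small_sample=False):
    """Pearson chi-squared test, or Fisher's exact test when expected counts
    are small (as stated in the Methods)."""
    if small_sample:
        return stats.fisher_exact(table_2x2)
    chi2, p, dof, expected = stats.chi2_contingency(table_2x2)
    return chi2, p

def adjusted_logistic(y, X):
    """Multiple logistic regression; X should contain the ET procedure plus
    the listed confounders (BMI, age, histology, endometrial thickness, ...)."""
    X = sm.add_constant(X)
    model = sm.Logit(y, X).fit(disp=0)
    odds_ratios = np.exp(model.params)      # adjusted ORs
    conf_int = np.exp(model.conf_int())     # 95% CIs on the OR scale
    return model, odds_ratios, conf_int
```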
Baseline characteristics
Overall, 259 ET cycles in patients with EC or AH were analyzed, including 129 (49.8%) in the frozen ET group and 130 (50.2%) in the fresh ET group. Table 1 shows the baseline characteristics of the study cohort. Patients in the frozen ET group had a higher antral follicle count in both ovaries (12.0 vs. 7.0, p<0.001) and a higher percentage of PCOS (35.7% vs. 14.6%, p<0.001) than patients in the fresh ET group. No significant differences in the other baseline characteristics were found between the two groups (p>0.05). For frozen ET, two types of endometrial preparation protocol were used: natural monitored cycles (38.8%) and programmed artificial cycles (61.5%).
Live birth rate and other pregnancy outcomes
Pregnancy outcomes are listed in Table 2. A total of 59 live births, including 49 singletons and five sets of twins, were achieved. Overall, 54 out of 259 cycles achieved live birth, and the overall live birth rate per ET cycle was 20.8%: 17.8% (23/129) in the frozen ET group and 23.8% (31/130) in the fresh ET group. No significant between-group difference was observed after adjusting for potential confounding factors (adjusted OR, 0.47; 95% CI, 0.21-1.06; p=0.068). Both groups had comparable implantation, clinical pregnancy, and ongoing pregnancy rates (p=0.411, 0.258, and 0.248, respectively).
Of the 259 ET cycles, 89 (34.4%) clinical pregnancies were recorded. The miscarriage rate per clinical pregnancy was 34.8% (31/89), with early and late abortion rates of 30.3% (27/89) and 4.5% (4/89), respectively. Among the four patients who experienced miscarriages at 12-28 weeks of gestation, the twin pregnancies of two patients were inevitably aborted because of preterm premature rupture of membranes, one patient experienced spontaneous abortion because of suspected cervical incompetence, and one patient underwent induced abortion owing to intrauterine fetal death of unknown cause.
Neonatal complications
As shown in Table 3, the mean gestational age was 37.7 ± 2.8 and 38.3 ± 1.5 weeks in the frozen ET and fresh ET groups, respectively (p=0.653). The incidence of preterm birth among all deliveries was similar between the two groups (16.1% vs. 17.4%, p=0.902). The mean birth weights of singletons in the two groups were 3429.1 ± 583.7 g and 3259.3 ± 371.8 g, respectively (p=0.472). There were no significant between-group differences in the incidence of SGA, LGA, low birthweight, or macrosomia among all live newborns (p=0.264, 0.294, 0.294, and 0.294, respectively).
Maternal complications
Nine (16.7%) maternal complications were recorded, of which four were HDP, four were GDM and one was antepartum hemorrhage. The incidence of total maternal complications was significantly higher in the frozen ET group (30.4% vs. 6.5%, p=0.019). When analyzing each complication as a separate entity, compared with patients in the fresh ET group, patients in the frozen ET group had a higher prevalence of HDP (0 vs. 17.4%, p=0.028), whereas no significant differences were identified in the incidences of antepartum hemorrhage (0 vs. 4.3%, p=0.426) and GDM (6.5% vs. 8.7%, p>0.999).
Logistic regression analysis was performed to identify factors that might be related to overall maternal complications (Table 4). Eight independent variables were included in the multiple analysis, and the results showed that frozen ET was associated with an increased occurrence of maternal complications (OR, 6.68; 95% CI, 1.01-44.19; p=0.040).
Discussion
This retrospective study examined IVF-related pregnancy outcomes in a large sample of EC/AH patients. To the best of our knowledge, this is the first study to compare perinatal outcomes after frozen and fresh ET in this population. We found that live birth rates were comparable between the two ET procedures, although a trend towards higher rates was observed after fresh ET cycles. We also found a significant increase in the incidence of maternal complications and pregnancy-induced hypertension in the frozen ET cycles. Our finding is consistent with previous studies, namely that the live birth rate is comparable between the two ET procedures in normo-responder patients [25,26]. The exact reason for the increased rate of hypertension after frozen ET is still unknown; it might be due to the non-physiological concentrations of exogenous hormones during artificial endometrial preparation, as discussed below.
Table 3. Maternal and perinatal complications of the two embryo transfer procedures. (Data are presented as n (%) or mean ± standard deviation.)
Having a healthy baby is the ultimate goal of EC/AH patients receiving fertility-sparing treatment, and IVF-ET has been suggested as the most efficient way to achieve pregnancy. Most previous studies have only reported the total number of patients who finally conceived either naturally or through IVF, and the cumulative live birth rate of EC/AH patients ranged from 40% to 80% [6,16,17,28]. In clinical practice, patients are also concerned about the probability of live birth after each ET cycle. To date, only two studies have reported the rate of live birth per ET cycle, which was 14.3% and 17.3%, respectively [18,19]. In our study, 54 out of 259 cycles achieved live birth and the live birth rate was 20.8%, which is consistent with previous studies [18,19]. However, to the best of our knowledge, no studies have compared outcomes following frozen and fresh ET cycles in EC/AH patients.
There is ongoing debate regarding which ET procedure can achieve a higher rate of live births. In a randomised controlled trial (RCT), Chen and colleagues reported that frozen ET resulted in a higher live birth rate per first transfer cycle than fresh ET in 1508 PCOS patients [12]. Another large multicenter RCT also demonstrated an increased singleton live birth rate following frozen single blastocyst transfer in women with normal ovulation [29], whereas most other RCTs showed no difference [13,14]. Two previous meta-analyses, including different RCTs, suggested that frozen ET is associated with higher rates of live births following the first transfer [25,26]. Subgroup analysis indicated that this significant difference occurred only in PCOS/hyper-responder patients, whereas there was no difference in non-PCOS/normo-responders, suggesting that frozen ET is more appropriate for specific populations, namely PCOS patients and hyper-responders. These patients usually have a high risk of OHSS after oocyte retrieval, which may impair embryo implantation and increase the abortion rate in fresh ET cycles [12,29]. Therefore, these patients may achieve a higher live birth rate in frozen ET cycles. In our study population, this retrospective analysis showed similar live birth and clinical pregnancy rates between the two ET procedures. These findings indicate that a "freeze-all" strategy may not be appropriate for all EC/AH patients, and that frozen ET is recommended only when specific indications exist.
Several cohort studies and meta-analyses have indicated an increased risk of pregnancy-induced hypertension following frozen ET compared with fresh ET. Opdahl et al. analyzed pregnancies in the same women and found that frozen ET was associated with a higher risk of HDP than fresh ET (OR, 2.63; 95% CI, 1.73-3.99) [30]. A recent Cochrane review including 3940 women from three RCTs also demonstrated an increased risk of hypertensive disorders with the "freeze-all" strategy (OR, 2.15; 95% CI, 1.42-3.25) [31]. Our findings support the existing literature: we found that the prevalence of HDP was higher with frozen ET than with fresh ET. Additionally, further studies have explored the relationship between perinatal outcomes and different regimens of endometrial preparation in frozen ET; the results indicated an increased rate of hypertensive disorders in artificial cycles [32]. One of the proposed causes for these differences is the effect of non-physiological concentrations of exogenous progesterone and estrogen during artificial cycles [27]. Another possible reason is the lack of corpus luteum function in artificial cycles [33]. Meta-analyses of observational studies and RCTs have demonstrated an increased risk of LGA and a lower rate of SGA in singletons born as a result of frozen ET [25,34]. The outcomes of sibling pregnancies indicated that frozen ET was associated with a higher birthweight [35,36]. In our study, although there was a trend towards a higher rate of LGA in the frozen ET group, the difference was not statistically significant (12.5% vs. 2.9%, p=0.294). However, the underlying mechanism is still unclear. One explanation is that the endometrium in the natural cycles of frozen ET is not influenced by supraphysiological serum estradiol concentrations [37]. It has also been hypothesized that epigenetic events occur during embryo freezing and thawing, which may alter fetal growth potential [38]. It must be highlighted, however, that confounders such as gestational weight gain, the presence of gestational diabetes, and pre-pregnancy body mass index were not assessed in previous studies. Further studies with larger samples are necessary to confirm these results and investigate the underlying mechanisms.
PGT comprises a series of genetic assays used to evaluate embryos prior to transfer to the uterus. After embryos are retrieved and fertilized, the embryologist performs assisted hatching on embryos at the blastocyst stage to obtain several cells from the trophectoderm layer, and these cells are sent for genetic analysis. Current indications for PGT include couples with monogenic disease or structural chromosomal abnormalities, women of advanced age, recurrent pregnancy loss or repeated implantation failure, and male factor infertility. Therefore, most patients who apply for PGT have a high risk of implantation failure or abortion, and most embryos transferred in PGT cycles have been tested to rule out the possibility of aneuploidy. Considering these confounding factors, we excluded PGT cycles from the analysis in this study.
Our study has some limitations. First, selection bias may have occurred owing to the retrospective nature of this study. As shown in Table 1, the basal AFC and the percentage of PCOS differed significantly between the two groups; therefore, to obtain more robust results, we performed multiple logistic regression analysis to control for these covariates. Second, although the sample size in this study was relatively large, the interpretation of maternal and neonatal outcomes was limited by the data available for analysis. Therefore, the generalizability of our findings is limited by the sample size and the single-center setting, and randomized trials with large samples will be necessary to further confirm this issue. Finally, some factors that might be related to perinatal outcomes, including the number and quality of transferred embryos, ovarian stimulation regimens, and the total dose of gonadotropins, were not analyzed in this study.
Conclusions
In conclusion, our study showed that in EC/AH patients who underwent IVF treatment, the live birth rate was comparable between the two ET procedures, while frozen ET might be associated with a higher incidence of maternal complications compared to that with fresh ET. Our findings suggest that patients with EC/AH may not benefit from a "freeze-all" management, and that frozen ET is recommended under specific circumstances. In the future, more well-designed RCTs are required to further elucidate this issue. | 2023-02-03T15:10:49.407Z | 2023-02-03T00:00:00.000 | {
"year": 2023,
"sha1": "8a23fd00a22eac242d80d07ee6b839169755a444",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "8a23fd00a22eac242d80d07ee6b839169755a444",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
42712523 | pes2o/s2orc | v3-fos-license | Animal experiments and clinical application of CT during percutaneous splenoportography
AIM To introduce computed tomography during percutaneous splenoportography (CTSP), a new method for evaluating hepatic diseases. METHODS Ten hybrid dogs and 20 patients with primary hepatic cancer (PHC) were included in the study. Each dog was examined by CT, CTAP (computed tomography during arterial portography) and CTSP to compare the degree of liver enhancement. The 20 PHC patients were examined by CTSP, and the appearance of PHC was compared with their pathological results to evaluate the diagnostic significance of CTSP. RESULTS The animal experiments showed that both CTAP and CTSP markedly enhanced the liver (P<0.01), but there was no significant difference in enhancement between the two methods (P>0.05). On the CTSP images in the 20 patients, the density of the livers increased to 168-192 Hu, whereas the density of the cancers remained as low as on plain CT scans (<58 Hu). The CTSP findings were consistent with the surgical findings for space-occupying lesions. Its diagnostic value was clearly superior to that of general enhanced CT and ultrasonic examination. However, it was difficult for CTSP to show nodules less than 1 cm in size located on the surface of the liver or in the hepatic portal zone. Like CTAP, CTSP is a sensitive method for showing space-occupying lesions in the liver, but the equipment and procedures for CTSP are simpler than those for CTAP. Therefore, it is an alternative procedure in clinical practice.
INTRODUCTION
Computed tomography during arterial portography (CTAP), established by Hisa in 1980, is the most sensitive method for showing space-occupying lesions in the liver [1][2][3]. Unfortunately, CTAP is inconvenient because arterial catheterization is necessary before the CT examination. Based on CTAP, we used CT during splenoportography (CTSP) to show space-occupying lesions in the liver. CTSP does not require complicated instruments and is easy to perform. It has also proved to be sensitive and safe in clinical application [4][5].
Animal experiments
Animals and instruments
Ten hybrid dogs were supplied by the Animal Experimental Unit of Nanfang Hospital. The instruments used in this experiment were: an 800 mA remote-controlled gastrointestinal machine (Daojin, Japan), 5F Cobra catheters, a Somatom Plus whole-body CT machine (Siemens, Germany), syringes of the MCT310-2 model (Medrad Corporation, USA), and a 20G trocar (Terumo Corporation, Italy).
Procedures
Firstly, plain CT scanning was performed in all of the dogs from the top of the diaphragm to the inferior margin of the liver, with a distance of 10 mm between two neighbouring layers. Then, the animals were examined by CTAP. Using the Seldinger technique, a catheter was inserted from the left femoral artery to the anterior mesenteric artery (Figure 1). Four hours later, 76% compound meglumine diatrizoate (1.5 ml/kg body weight) was injected through the catheter at a rate of 1.5 ml/sec and the animals were examined by continuous dynamic scanning. Seven days after CTAP, CTSP was performed. In this procedure, the point of a 20G trocar was inserted into the relatively larger veins or the splenic parenchyma (Figure 2). The same quantity of compound meglumine diatrizoate was injected at the same rate as for CTAP, and the animals were examined as in the second step.
had postoperative recurrence of hepatocellular carcinomas. Diagnosis of PHC in 4 patients was made according to the criteria formulated in the National Consensus Conference on Prevention and Therapy of Hepatic Carcinoma in 1977.
Instruments
The same trocar, syringes and CT scan machine were used as for the animal experiments.
Examination methods
The position for insertion of the 20G trocar was determined by CT. The puncture point was generally on the middle or posterior axillary line in the 8th to 11th intercostal space. The depth of the needle was determined from the CT image. When the needle had been inserted, the patients were scanned once again. When the proper position, depth and direction of the needle were achieved (Figure 3), a total volume of 60-80 ml of 60% Angiografin (Schering, Germany) was injected at a rate of 1 ml/sec. Twenty seconds later, continuous dynamic scanning was started. When the scanning was finished, the trocar was quickly withdrawn, and the puncture point was compressed for 4-5 minutes and covered with gauze. If necessary, the puncture site was scanned to obtain 1-3 images to check whether bleeding had occurred.
To observe the liver, kidneys and spleen, two of the ten dogs were killed at each time point: immediately, and 3, 5, 7 or 10 days after the CTSP examination. No abnormality was seen in the livers or kidneys of any of the dogs. The surface of the spleens was smooth, without haematoma. In the spleens, dotted bleeding sites of about 2 mm × 2 mm appeared where the needle had been inserted; these bleeding sites were later replaced by connective tissue (Figures 5-7).
Clinical application
The appearance of PHC on CTSP. In the 20 patients with PHC, 58 foci with low density were found by CTSP. After the contrast medium was injected, the CT values of the normal livers reached 168-192 Hu, whereas the values of the hepatic cancers were no more than 58 Hu. The carcinomas showed similar density on CTSP and plain CT scans. The neoplasms showed clear borders (Figures 8a-c, 9a-c and 10a-d).
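Purely as an illustration of the quantitative contrast reported above (and not a method used in the original study), a low-density focus could be flagged by a simple attenuation threshold on the portal-phase image; the mask, array shapes, and threshold handling below are assumptions:

```python
# Illustrative thresholding of CTSP attenuation values (Hounsfield units).
import numpy as np

def flag_low_density_foci(ct_slice_hu: np.ndarray,
                          liver_mask: np.ndarray,
                          tumour_max_hu: float = 58.0) -> np.ndarray:
    """Liver voxels whose attenuation stays at or below the tumour-like level
    despite portal-phase enhancement (normal liver reported at 168-192 Hu)."""
    return liver_mask & (ct_slice_hu <= tumour_max_hu)

# Synthetic example: enhanced liver ~180 Hu with one unenhanced focus.
slice_hu = np.full((64, 64), 180.0)
slice_hu[30:34, 30:34] = 50.0                      # simulated carcinoma
mask = np.ones_like(slice_hu, dtype=bool)
print(flag_low_density_foci(slice_hu, mask).sum())  # -> 16 flagged voxels
```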
The density of the spleen was not high after injection of the contrast medium, because most of the contrast medium was quickly excreted.
Sometimes, block-shaped contrast medium could be seen remaining in the spleen (Figure 11). In three patients, an image of high density could be observed beneath the capsule of the spleen, which may have resulted from reflux of the contrast medium there.
The postoperative reaction of the patients. The patients did not feel obviously uncomfortable when the contrast medium was injected at a rate of 1 ml/sec. They felt slight local swelling and pain at the beginning when the injection was rapid (2-3 ml/sec); the pain was alleviated 10-20 minutes later. Three patients felt more severe pain because the drug was injected into the pleural cavity; the contrast medium was spontaneously absorbed one week later. No bleeding occurred.
Figure 5: The spleens of the dogs eviscerated 5 days after CTSP. a. The surface of the spleen is smooth; b. there are no bleeding foci in the longitudinally dissected spleen.
Figure 6: The splenic tissue of one dog killed immediately after CTSP (HE stain, 3.3×10).
Figure 7: The spleen of a dog killed 10 days after CTSP; a little proliferation of connective tissue could be seen at the point where the catheter entered the spleen (HE stain, 3.3×10).
Figure 8: PHC of a 59-year-old male patient. a. Plain CT scan: only a small cyst was seen in the anterior part of the right lobe of the liver. b. Enhanced CT: except for the cyst, the carcinoma could not be shown clearly. c. CTSP: two carcinomas were clearly shown behind the cyst, and proved to be primary hepatocellular carcinoma.
Figure 9: Primary hepatocellular carcinoma of a 30-year-old female patient. a. Plain CT scan; b. enhanced CT; c. CTSP. Although all three could show the carcinoma, CTSP not only showed it more clearly than the other two, but also showed metastases to the right of the inferior vena cava.
Figure 10: Postoperative recurrence and hepatic metastases of primary hepatocellular carcinoma in a 44-year-old female patient. a, b. Plain CT scans; c, d. CTSP.
Figure 11: Contrast medium remaining in the spleen after CTSP. The splenic veins show high density. The recurrent cancer in the liver and the daughter focus in the portal vein (left posterior arrow) are both shown clearly.
DISCUSSION
The liver is supplied with blood by both the portal vein and the hepatic arteries, but mainly by the portal vein, whereas hepatic carcinoma is supplied mainly by the hepatic arteries. In CTSP, the contrast medium injected into the spleen reaches the liver through the portal vein; therefore, the density of normal liver tissue increases markedly while that of hepatic carcinoma tissue remains unchanged. The animal experiments showed that CTSP achieved the same enhancement effect as CTAP, which provides a good radiological basis for using CTSP to show space-occupying lesions in the liver.
CTSP has been shown to be a very sensitive method for showing both primary and metastatic foci of PHC. In 9 patients, CTSP was as sensitive as operative exploration in finding foci of 1 cm in size in the liver. In the other three patients, CTSP could not find nodules <1 cm on the surface of the liver or metastatic foci <1 cm in the hepatic portal zone that were found during operation. Compared with ultrasonic examination, CTSP could find more foci: ultrasonic examination found only 5 of the 15 foci of 1.5 cm found by CTSP in 12 patients. CTSP was even superior to ultrasonic examination in showing neoplasms located on the diaphragmatic surface of the liver. In four patients examined by both enhanced CT and CTSP, CTSP showed seven foci of 1.5 cm, whereas enhanced CT showed only three foci, and unclearly. In general, according to our limited experience, CTSP can show PHC with high sensitivity. Further work is necessary for the qualitative diagnosis of PHC by CTSP.
CTSP is sensitive in showing space-occupying lesions as small as 5 mm in the liver. Therefore, CTSP should be performed when primary or metastatic lesions are suspected in the liver but cannot be shown clearly by other examinations. For patients with occluded portal veins, CTSP is not suitable, because the contrast medium cannot easily pass through the portal vein.
CTSP is a minimally traumatic examination. Although early observation in the animal experiments found bleeding foci at the splenic puncture sites, their diameters were less than 2 mm and they were later replaced by connective tissue, without splenic rupture. Neither intra-abdominal bleeding nor splenic rupture occurred in our clinical application. The vein intima remained smooth, no portal veins were obstructed, and no injuries were seen in the livers or kidneys. Therefore, CTSP can be safely used in clinical practice. | 2018-04-03T05:24:25.199Z | 1998-06-15T00:00:00.000 | {
"year": 1998,
"sha1": "a8bd6ca634fb7bc3655a38d36adb8f210dc5af54",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v4.i3.214",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "77addcf88d308ff11f5a48a9ef475395e85ac625",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250140430 | pes2o/s2orc | v3-fos-license | Design of Dual-Functional Metaoptics for the Spin-Controlled Generation of Orbital Angular Momentum Beams
The capability of multiple orbital angular momentum (OAM) modes generation with high resolution and diversified functionalities in the visible and near-infrared regime is challenging for flat and integrated optical devices. Additionally, having a static tiny optical device capable of generating multiple structured spots in space reduces the complexity of optical paths that typically use dynamic optical components and/or many standard elements, leading to unprecedented miniaturization and compactness of optical systems. In this regard, we propose dual-functional transmission dielectric metalenses based on a set of Pancharatnam-Berry phase meta-atoms with different cross-sections, for the combined manipulation of the dynamic and geometric phases. In particular, we present and describe the numerical algorithms for the computation of dual-functional metaoptics and we apply those techniques to the design of optical elements which are able to generate and focus different OAM modes at distinct points in space. In the specific, the designed elements enable the independent or simultaneous manipulation of right-handed and left-handed circularly polarized waves, by acting on the helicity of the input beam to enable or disable a specific optical operation. The theoretical proof-of-concept results highlight the capability of the designed metalenses to generate multiple high-resolution focused OAM modes at different points in space by exploiting the polarization of the incident beam as a degree of freedom, thus providing new integrated optics for applications in the fields of high-resolution microscopy, optical manipulation, and optical communications, both in the classical and single-photon regimes.
INTRODUCTION
Since the seminal paper of Allen and coworkers in 1992 [1], structured light ignited a flourishing research area, paving the way for scientific milestones and disruptive applications in an amazing variety of fields [2], including life science, soft and condensed matter, information and communication technology. In particular, orbital angular momentum (OAM) beams [3], also called optical vortices (OVs), offered a new degree of freedom to encode information in classical communications [4] or increase the Hilbert state space in quantum applications [5,6], while their peculiar intensity and phase distributions enabled innovative and advanced techniques in microscopy [7], micro-manipulation [8], and light-matter interaction [9]. Concurrently, the necessity to tailor and control this spatial property of light inspired the design and engineering of new techniques with different levels of complexity and integration [10][11][12][13][14]. Among all, spiral phase plates (SPPs) [15] represent one of the first optical elements purposely introduced to impart orbital angular momentum to common non-structured beams. Such optical elements are characterized by 3D spiral staircase profiles [16], reproducing the twisting wavefront to impart to the input beam. Their fabrication with high-resolution lithographic techniques [17] has proved the generation of OAM beams with high purity, while the introduction of radial discontinuities [18] in their design enabled the additional control on the radial number and the excitation of higher-order modes. While providing a stable, efficient, and compact method for OAM generation, a strong limitation of SPPs is given by their reduced functionality, which is basically limited to a specific OAM value. q-plates [19] provided a first evolution of standard SPPs, acting on the geometric phase to implement polarization-dependent OAM generators. Instead of spatially changing the local thickness of an isotropic material to tune the optical path, i.e., the dynamic phase, q-plates rely on shaping the wavefront by acting on the polarization state with a 2D birefringent medium. The imparted phase, having a geometric nature, is equal to twice the local angle formed by the extraordinary axis, with a sign depending on the handedness of the circular polarization in input [20]. That is achieved by structuring the optical element at the subwavelength scale in the form of a spatially-variant half-wave plate, by exploiting the inherent anisotropy of liquid crystals [21,22], or using properly-oriented digital gratings [23] or dielectric resonators [24], the so-called metaunits, to induce an effective form birefringence. Moreover, moving from 3D sculptured surfaces to 2D digital optics, metasurfaces open to the exploitation of semiconductor manufacturing [25], achieving the actual merging between optics and silicon photonics [26]. The metasurface paradigm opens to polarization as an additional degree of freedom, enabling the design of spin-dependent optical elements for the generation of beams with opposite values of OAM. However, it would be extremely advantageous to decouple spin from phase reshaping in order to effectively extend the optical functionality.
The solution is offered by the design of spin-decoupled dual-functional metasurfaces [27][28][29][30], which locally combine the dynamic phase from the refractive index profile with the geometric phase from the induced form birefringence. That is achieved by using anisotropic nanopillars with both different orientations and shapes. While the dynamic phases along the main axes of the nanopillars can be adjusted by tailoring their cross-section, an arbitrary geometric phase can be realized by controlling the local fast-axis orientation. The proper combination of the dynamic phase with the polarization-dependent geometric one enables the encoding of two totally different optical operations for the two circular polarization states.
In this work, we focus on the design of dual-functional metasurfaces for the generation of OAM beams. In particular, we provide a numerical recipe for the design of any dual-functional optical element, showing its application to the design of metaoptics enabling the spin-controlled generation of beams with different values of OAM. Moreover, by encoding different focal terms on the two contributions, we show the possibility to control the positions of the two beams independently. The design extends significantly the functionality of standard SPPs and q-plates, providing advanced optical elements for applications in microscopy, optical micromanipulation, and classical and quantum information, with unprecedented potential levels of compactness and integration into today's technology.
MATERIALS AND METHODS
In this work, we propose metaoptics which are able to generate and manipulate optical vortices (OVs) with different OAM values at distinct points in space. Mathematically, OVs are wavefront dislocations, with peculiar topological properties, due to phase singularities in the optical field. Light beams carrying OAM are characterized by a dark central region that arises from the destructive interference phenomena leading to the typical doughnut-shaped intensity distribution [31]. In order to generate an OV, the optical element is required to implement an azimuthal phase profile ϕ(r, θ) = ℓθ (Eq. 1), where ℓ, in units of the reduced Planck constant, is the amount of orbital angular momentum per photon transferred to the impinging beam, and (r, θ) are polar coordinates on the plane of the optical element. The metasurface proposed in this work is a dielectric dual-functional metalens (DFML), made of a 2D array of birefringent metaunits that exploit both the Pancharatnam-Berry (geometric) phase and the dynamic one. Our DFML is constituted of subwavelength metaunits (MUs), the so-called metaatoms (MAs), arranged over a square lattice and represented by amorphous silicon nanopillars on a silica substrate, surrounded by air. Each pillar belongs to a subset of nanostructures with different cross-sections and orientations but the same height (Figure 1A), and acts as a half-wave plate in order to maximize the polarization conversion and, therefore, the optical efficiency [32][33][34]. While the pillar orientation allows controlling the geometric phase, by varying the cross-section it is possible to act on the dynamic phase. Combining these two properties, the metalens is able to behave in two different ways depending on whether the input beam is right-handed (RCP) or left-handed circularly polarized (LCP) (Figure 1B).
In fact, while a variation of the dynamic phase has the same effect on both LCP and RCP input beams, a change in the geometric phase implies a symmetrical (opposite) response on an LCP beam compared with an RCP one. In this way, by simultaneously controlling both the shape of the pillars and their rotations, it is possible to generate totally different behaviors under LCP or RCP illumination [35,36].
For the benefit of the reader, we provide in the following the theory underlying the working principle of anisotropic metaunits. As is well known, in dual-functional MLs the incident beam is assumed to be circularly polarized, either left-handed or right-handed (|L⟩ or |R⟩, where the normalization factor 1/√2 of the corresponding Jones vectors has been omitted). Then, the transmitted output waves are cross-polarized and carry the spatial phase shifts ϕ−(x, y) or ϕ+(x, y), respectively.
In particular, the Jones matrix J for the metaatom at the coordinates (x, y) is given by Eq. 2, where R(θ) is the unitary rotation matrix (Eq. 3), with θ the local orientation of the metaatom fast axis. After straightforward calculations, Eq. 2 can be expressed in the form of Eq. 4 (the spatial dependence has been omitted to simplify the notation), where Δ = δy − δx is the phase retardation between the two axes of the metaunit. It is worth noting from Eq. 4 that, under the choice Δ = π, the optical behavior of the metaunit is that of a rotated half-wave plate. Therefore, for circularly-polarized light in input, each metaatom behaves as a polarization converter, imparting a polarization-independent dynamic phase term equal to (δx + δy)/2, plus a polarization-sensitive geometric phase equal to twice the orientation angle θ, with a sign depending on the input handedness. Moreover, when the condition Δ = π is satisfied, the cosine term in Eq. 4, representing the zero-order contribution, is completely suppressed. Thus, the metaunit acts as a pure half-wave plate and, since the spurious zero-order component is erased, the diffraction efficiency is optimized. Then, it is possible to define a spin-decoupled optical response with the following definitions of the dynamic phases and the fast-axis orientation. [Figure 1, panels E-F: with Ee and Eo the electric fields along the extraordinary and ordinary axes, Ex and Ey the fields along the x- and y-axes, and E0i the input electric field, a TE input polarization corresponds to Eo = Ex = E0i and Ee = Ey = 0, whereas a TM input polarization corresponds to Eo = Ex = 0 and Ee = Ey = E0i; (F) cross-section of a pillar rotated by θ so as to impart a geometric phase term equal to 2θ.]
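As a numerical cross-check of the half-wave-plate behaviour described above, the sketch below uses the standard Jones formalism for a rotated birefringent element; since the explicit forms of Eqs. 2-4 are not reproduced here, the matrix convention (and the labelling of the circular states) is the textbook one and may differ from the authors' by a global phase:

```python
# Rotated retarder in the standard Jones formalism and check of the
# polarization-converted phase (dynamic term + geometric term ±2*theta).
import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

def metaunit_jones(delta_x, delta_y, theta):
    """J = R(-theta) . diag(exp(i*dx), exp(i*dy)) . R(theta)."""
    retarder = np.diag([np.exp(1j * delta_x), np.exp(1j * delta_y)])
    return rotation(-theta) @ retarder @ rotation(theta)

dx = 0.3
dy = dx + np.pi              # half-wave condition: Delta = dy - dx = pi
theta = np.deg2rad(25.0)     # fast-axis orientation
J = metaunit_jones(dx, dy, theta)

cp_plus = np.array([1, 1j]) / np.sqrt(2)    # one circular state
cp_minus = np.array([1, -1j]) / np.sqrt(2)  # the orthogonal one

amp = np.vdot(cp_minus, J @ cp_plus)        # projection on the cross-polarized state
print(abs(amp))                                     # ~1.0: full polarization conversion
print((np.angle(amp) - dx) % (2 * np.pi), 2 * theta)  # geometric phase ~ +2*theta

amp2 = np.vdot(cp_plus, J @ cp_minus)
print((np.angle(amp2) - dx) % (2 * np.pi), 2 * np.pi - 2 * theta)  # opposite sign: -2*theta
```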
Here, ϕ+ and ϕ− denote the phases to impart to left-handed and right-handed circular polarization, respectively. Hence, by accurately selecting a set of nanopillars with different sizes that satisfy the required phase delays (δx, δy), and by rotating each nanopillar by an angle θ with respect to the positive x-axis direction, it is possible to design a metalens that imparts a phase delay ϕ+(x, y) to LCP input light and a phase delay ϕ−(x, y) to RCP input light.
We exploit these properties to design and test DFMLs which are able to generate and focus different orbital angular momentum beams at distinct fixed points in space, depending on the input circularly-polarized state. To implement these functionalities, the spatial phase patterns ϕ+(x, y) and ϕ−(x, y) must be accurately engineered. To this purpose, we suggest a new converging lens profile ϕ, given in Eq. 12, which is able to generate a focused beam carrying OAM at a desired position. Here ℓ is the amount of OAM per photon transferred to the impinging beam, in units of ℏ, λ is the working wavelength, f is the focal length, and (x0, y0) are the focus coordinates on the focal plane perpendicular to the propagation optical axis (z). The first part of the equation, i.e., ℓ arctan(y/x), is the azimuthal phase necessary to generate an optical vortex (Eq. 1), while the second is a hyperboloidal focusing profile, free of spherical aberration if illuminated by a plane wave [37,38], mandatory to focus the optical vortex onto a desired point in space.
After defining the theoretical framework for the wavefront engineering, we performed a custom Finite-Element Method (FEM) simulation in the wavelength domain using COMSOL Multiphysics® to extrapolate the set of metaunits which compose the metalens (Figure 2). The geometry of the metaatoms was modelled as an amorphous silicon pillar (nSi = 3.425) deposited on a glass substrate (nGlass = 1.450) and surrounded by air (nAir = 1). Periodic port conditions were set at the base of the substrate (at a distance equal to λ) and at a distance greater than λ in the upper zone simulating air, to ensure the Fraunhofer regime [39]. Perfectly Matched Layer (PML) conditions were imposed in the areas outside the ports in order to absorb the field in the simulation volumes not of interest, so as to avoid multiple reflections. Finally, periodic boundary conditions were set (along the xz and yz planes) to correctly simulate the interaction between the various metaatoms of the metalens [40]. The period of the metaunit array was fixed at 400 nm in both directions, while the parameters of the pillar cross-section (L, W) were swept during the simulation considering the fabrication limits and the sub-wavelength working regime. In particular, the height (H) was fixed at 500 nm considering the fabrication limits (Figures 1C-F). For a given phase delay along the long axis of the pillar, the final cross-section was selected in order to satisfy the condition Δ = π. Concurrently, provided the last condition is satisfied, the transmission values for TE and TM polarizations must be as close as possible, in order to guarantee the expected polarization conversion under circularly-polarized light in input. Therefore, the previous requirements significantly limit the choice of possible cross-sections for the given thickness and shape. Moreover, in order to increase the degrees of freedom in finding adequate nanostructures to cover the whole 2π range, both rectangular-based and elliptical-based silicon pillars were considered. Finally, a library of 12 different nanopillars was extrapolated from the simulations, which permits a well-distributed 12-level discretization of the phase over the range 0-2π (Figure 3). Conversely, we assumed a continuous rotation of the metaatom, i.e., no discretization of the geometric phase was applied. Then, for given phase patterns ϕ+ and ϕ−, we were able to calculate the corresponding maps for the dynamic and geometric phases using Eqs 7-9.
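A sketch of how the target phase maps of Eq. 12 and the corresponding dynamic/geometric decomposition could be computed numerically is given below; since Eqs. 7-11 are not reproduced in the text, the decomposition ϕ± = δ_dyn ± 2θ (hence δ_dyn = (ϕ+ + ϕ−)/2 and θ = (ϕ+ − ϕ−)/4) is assumed from the behaviour described above, and the hyperboloidal focusing term follows the standard aberration-free profile:

```python
# Target phase maps for the two spins and spin-decoupled decomposition.
import numpy as np

def oam_lens_phase(x, y, ell, f, wavelength, x0=0.0, y0=0.0):
    """Azimuthal (OAM) term plus a converging hyperboloidal term, Eq. 12-like."""
    azimuthal = ell * np.arctan2(y, x)
    focusing = (2 * np.pi / wavelength) * (f - np.sqrt((x - x0)**2 + (y - y0)**2 + f**2))
    return azimuthal + focusing

def spin_decouple(phi_plus, phi_minus):
    """Dynamic phase and fast-axis orientation for each metaatom."""
    delta_dyn = 0.5 * (phi_plus + phi_minus)   # polarization-independent part
    theta = 0.25 * (phi_plus - phi_minus)      # geometric part (phase = ±2*theta)
    return np.mod(delta_dyn, 2 * np.pi), np.mod(theta, np.pi)

# Example: LCP -> ell=1 vortex at 500 um, RCP -> ell=3 vortex at 1000 um.
lam = 0.775                                # working wavelength, um
pitch = 0.4                                # lattice period, um
coords = np.arange(-100.0, 100.0 + pitch, pitch)
X, Y = np.meshgrid(coords, coords)
phi_p = oam_lens_phase(X, Y, ell=1, f=500.0, wavelength=lam)
phi_m = oam_lens_phase(X, Y, ell=3, f=1000.0, wavelength=lam)
delta_map, theta_map = spin_decouple(phi_p, phi_m)
```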
Those maps provide the recipe to compute the metaatoms pattern of the desired DFML. While the geometric phase map gives the local orientation of the metaatom, the required dynamic phase delay allows one to select the required cross-section referring to the lookup table in Figure 3.
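The lookup step itself can be summarized as follows (illustrative only; the library entries, tolerances, and field ordering are hypothetical, not the FEM-derived values of Figure 3):

```python
# Illustrative selection of a nanopillar from a swept library: keep only
# entries with Delta ~ pi and similar TE/TM transmission, then pick the one
# whose dynamic phase is closest to the required value.
import numpy as np

# Hypothetical entries: (L_nm, W_nm, delta_x, delta_y, T_TE, T_TM)
library = [
    (190, 80, 0.10, 0.10 + np.pi, 0.96, 0.95),
    (210, 95, 0.62, 0.62 + np.pi, 0.94, 0.95),
    # ... further entries covering 0-2*pi in ~pi/6 steps
]

def usable(entry, dphase_tol=0.15, dT_tol=0.05):
    L, W, dx, dy, t_te, t_tm = entry
    return abs((dy - dx) - np.pi) < dphase_tol and abs(t_te - t_tm) < dT_tol

def pick_pillar(required_dyn_phase, lib):
    candidates = [e for e in lib if usable(e)]
    phases = np.array([0.5 * (e[2] + e[3]) for e in candidates])   # (dx+dy)/2
    diffs = np.angle(np.exp(1j * (phases - required_dyn_phase)))   # wrapped error
    return candidates[int(np.argmin(np.abs(diffs)))]
```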
RESULTS
Using a custom code implementing the Fresnel propagation integral [41] in the MatLab® environment, we simulated the optical response of several dual-functional metalenses designed according to Eq. 12. In particular, we selected different combinations of the parameters ℓ, f, x0, and y0, in such a way as to exhibit all the potentialities emerging from the combination of the DFML paradigm (Eqs 7-11) with the converging spiral profile of Eq. 12. All the simulations were performed implementing metasurfaces of radius 100 μm, with 12 levels of phase discretization, working at λ = 775 nm, and illuminated by a Gaussian beam with a cross-section given by exp(−r²/w₀²), where r = √(x² + y²), and with w₀ = 100 μm chosen to properly illuminate the entire metasurface pattern. The intensities of the simulations have been normalized according to the formula I_normalized(x, y) = |E(x, y)|² / max|E|² (Eq. 13). In the following, we report for each case of interest the phase patterns experienced by the two circular polarization states, and a simulation of their propagation after the optical element. In particular, we show a cross-section of the beam at the focal planes, in order to highlight the expected phase and intensity profiles.
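The original propagation code was written in MatLab; an equivalent illustrative sketch of a single paraxial (Fresnel) propagation step and of the normalization of Eq. 13, in Python, could read:

```python
# Transfer-function (single-FFT) Fresnel propagation and Eq.-13 normalization.
import numpy as np

def fresnel_propagate(field0, dx, wavelength, z):
    """Propagate a sampled field over distance z in the Fresnel approximation."""
    n = field0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * 2 * np.pi * z / wavelength) * \
        np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field0) * H)

def normalized_intensity(field):
    """Eq. 13: |E(x, y)|^2 / max |E|^2."""
    intensity = np.abs(field)**2
    return intensity / intensity.max()

# Usage (placeholders): Gaussian illumination (w0 = 100 um) times a phase map
# phi_plus or phi_minus from the design step, sampled with the lattice pitch.
# field0 = np.exp(-(X**2 + Y**2) / w0**2) * np.exp(1j * phase_map)
# I_focus = normalized_intensity(fresnel_propagate(field0, pitch, lam, 500.0))
```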
We started implementing a dual-functional metalens able to focus two beams carrying different values of OAM at the same focus along the optical axis, with OAM depending on the input polarization.
Thus, we imposed the following focusing profiles, with ℓ1 = 1, ℓ2 = 3, and f = 500 μm. As shown in Figure 4, two structured beams are well generated under both LCP and RCP polarization. From the phase profile (Figure 4), it is clearly observable that the two beams carry different OAM. In particular, a first-order OAM beam is generated with a left-handed helicity, while under a right-handed helicity a third-order OAM beam is produced. In fact, there is only one phase variation from 0 to 2π in the first case, while in the second case 3 phase steps in the range 0-2π appear. That is further confirmed by the singularity size of the generated doughnut, which is larger in the OAM beam with ℓ = 3 with respect to that with ℓ = 1, as expected from the theory [31]. Subsequently, we implemented a DFML able to focus two beams carrying different OAM values at different focal lengths along the optical axis, depending on the input helicity. To obtain a metasurface as described above, we encoded the following profiles, with ℓ1 = 1, ℓ2 = 3, f1 = 500 μm, and f2 = 1000 μm. As expected, in this case two different doughnut spots are well generated at different focal lengths (Figure 5). Moreover, it can be noticed that the beam carrying OAM with ℓ = 3 and focused at 1000 μm (Figure 5d-f) is bigger than the same beam focused at 500 μm. This behaviour is also due to the properties of the Fourier transform of a focusing lens [39].
After that, we tried to further extend the focusing to different points in space not located along the propagation axis (z-axis). To this aim, we added two tilting parameters to the focusing formula according to Eq. 12. Initially, we propose a DFML able to focus two beams carrying different OAM at the same focal length but onto two distinct points on the same focal plane. To exploit these functionalities, the implemented profiles used ℓ1 = 1, ℓ2 = 3, f1 = f2 = 750 μm, x1 = y1 = −50 μm, and x2 = y2 = +50 μm. As shown in Figure 6, the focused OAM beams are generated at the expected points in space for both polarizations.
In order to demonstrate the ability to focus two beams carrying different OAM onto two distinct points in space at different focal lengths, we tried to focus one spot at the same xy coordinates but at a longer focal length. In this regard, we simulated a DFML implementing the phase profiles with ℓ1 = 1, ℓ2 = 3, f1 = 500 μm, f2 = 1000 μm, x1 = y1 = −50 μm, and x2 = y2 = +50 μm.
The resulting simulations (Figure 7) show how, using the above formulas, it is possible to focus different OAM beams at completely different points in space. By accurately choosing the design parameters of a DFML, it is possible to avoid symmetry problems due to both large deviations from the propagation axis and short focal lengths.
DISCUSSION AND CONCLUSIONS
We have here presented the numerical design and simulation of dual-functional metalenses for the spin-controlled generation of OAM beams. The designed optical elements have been engineered to focus different OAM beams at distinct points in space, depending on the handedness of the circularly polarized state in input. That is achieved by properly selecting the resonant metaunits in order to act on both the dynamic and geometric phases imparted to the input beam, so that the polarization-dependent geometric term combines with the polarization-insensitive dynamic one in order to induce a spin-decoupled functionality. Numerical simulations based on FEM analysis have been performed to define an optimized set of silicon resonators, with fixed thickness and different cross-sections, acting as half-wave plates with different phase delays on two orthogonal optical axes. In such a way, while a rotation of the pillar enables direct control of the geometric phase, the selection of the proper cross-section is dictated by the specific dynamic phase to be imparted. We provided a set of 12 different nanopillars optimized for the wavelength of 775 nm, which can be used as a lookup table for the design of any dual-functional metasurface. In particular, we proved the dual-functional behavior for the specific case represented by the spin-controlled generation of OAM beams, showing the on-demand focusing of beams with different values of OAM at distinct points in space. Such optical elements represent the ultimate optical evolution of standard spiral phase plates and q-plates. While the former optical elements are refractive/diffractive optics restricted to the generation of a specific OAM value, the latter are their metaoptics evolution, introducing polarization as an additional degree of freedom to control the helicity of the output twisted wavefronts. Dual-functional metalenses allow the optical functionality to be further extended to the generation of any pair of different OAM beams. Moreover, additional functionalities can be added, such as focusing onto distinct points in space. Again, that is made possible by the dual-functional behavior, since a standard metalens would focus only one circular polarization in input, while defocusing the orthogonal one. Therefore, the dual-functional implementation could be of extreme interest in total angular momentum analysis, boosting the evolution from diffractive [42] to metasurface [43,44] optical architectures. Moreover, the possibility to design polarization-switchable optical elements can be of high interest in imaging and microscopy, to enable the compact integration of high-resolution analysis in the setup. Besides the undoubted gain in functionality, the metasurface paradigm provides an unprecedented increase in compactness and integration, representing the real merging between lens design and silicon photonics. The designed pattern can be fabricated by transferring a lithographic mask to the silicon substrate by means of etching techniques [45], or by chemical vapor deposition [46]. Furthermore, the possibility to select even more complicated metaunits, such as coupled resonant nanofins, opens the possibility to tune the wavelength dispersion and achieve full achromaticity, as demonstrated for imaging metalenses [47,48], further extending the working range of the designed metaoptics.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
AV performed FEM numerical analyses, metasurfaces design and optical simulations. GR gave suggestions in numerical simulations and contributed to the algorithms for beam propagation and metasurface computation. FR addressed the aim of the project and defined the design target specs of the optics. All authors discussed the results and the optimization configurations, contributed to the writing of the manuscript, and approved the final version. | 2022-06-30T15:20:07.856Z | 2022-06-28T00:00:00.000 | {
"year": 2022,
"sha1": "08c4cbe4f6efe1d4dee492e0abf21ce85c635660",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphy.2022.870770/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "d123f27dac195911dcfbbe8c1c98955f634ee75d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
54804846 | pes2o/s2orc | v3-fos-license | Characterization of VOC emission profile of different wood species during moisture cycles
Martina Sassoli , Cosimo Taiti , Werther Guidi Nissim , Corrado Costa , Stefano Mancuso , Paolo Menesatti , Marco Fioravanti (1) This study addresses the characterization of volatile organic compounds (VOCs) emitted by samples of 13 different wood species, belonging to both softwoods and hardwoods groups, regularly measured at different intervals of time, after the first measurement on green wood. The same wood specimens were subjected to several cycles of water desorption and adsorption, assuming that moisture variation might play a role in both the formation and emission of VOCs. Proton Transfer Reaction-Time of Flight-Mass Spectrometry (PTR-TOFMS) was used as a tool to characterize the emission of VOCs. Coupled with a multivariate class-modelling approach, this tool was able to discriminate between groups (softwood and hardwood) and in some cases between different species. However, results showed that the discriminant capacity of VOCs emission to separate species and families rapidly decreases after the first cycles of moisture variation in wood. The green wood was characterized by a richness of volatile compounds, whereas, after only the first dry cycle, wood emitted a more restricted group of compounds. We hypothesized that most of these VOCs might have originated from structural changes and degradation processes that involve the main polymers (particularly hemicellulose) constituting the cell wall of wooden cells. The results obtained are in agreement with the physical and chemical modification processes that characterize wood ageing.
Introduction
Volatile organic compounds (VOCs) represent a large and chemically diverse group of carbon-based molecules, such as hydrocarbons and other organic molecules, with a high vapour pressure at room temperature. They are emitted into the atmosphere from anthropogenic and biogenic sources (plants, animals, microorganisms, production processes, and/or their products - Jantunen et al. 1997, Guenther et al. 2000). These compounds may also be formed in situ in the atmosphere as products of the transformation of other VOCs (Atkinson & Arey 2003). The major classes of emitted VOCs are alkanes, alkenes, aromatic hydrocarbons, and oxygenated compounds, with vegetative emissions typically being composed of alkenes (such as isoprene, monoterpenes, and sesquiterpenes) and oxygenated VOCs (including acetone and methanol - Atkinson & Arey 2003).
The classification proposal of the World Health Organization (WHO 1989) is the most widely used.The WHO classifies pollutants into very volatile organic compounds (VVOCs), volatile organic compounds (VOCs), semi-volatile compounds (SVOCs), and organic compounds associated with particulate matter (POM).
The emission from vegetation (biogenic emission) has been widely studied: conditions that have been explored include diurnal and seasonal variations (Grabmer et al. 2006, Karl et al. 2003), responses to temperature changes (Filella et al. 2007), and different environmental conditions (Holzinger et al. 2000).
VOC emissions are also present in wood, and they depend upon many factors including species, age, and pH value; these emissions differ significantly between hardwoods and softwoods (Taiti et al. 2016, Roffael et al. 2015, Steckel et al. 2010, Roffael 2006). Specifically, Fengel & Wegener (1989) reported that softwood polyoses contain higher amounts of mannose and galactose than hardwoods, whereas hardwoods are richer in pentoses, characterized by higher amounts of acetyl groups. VOCs from wood can originate either from compounds present in the native structure of wood, or through different chemical processes, such as oxidation and hydrolysis, that involve wood components (Roffael et al. 2015). In general, the most common VOCs emitted from wood are terpenes, aliphatic aldehydes, and organic acids (Schumann et al. 2012). Terpenes are the main constituent of the resin of softwoods, and aldehydes are formed by oxidative decay of fatty acids. Among the organic acids, acetic acid occurs due to cleavage of acetyl groups from hemicelluloses, whereas hexanoic acid originates from the decay of fatty acids. Furthermore, by increasing the time of storage and modifying the conditions of wood storage, the release of some VOCs declines dramatically over time, reaching very low emission levels (e.g., terpenes in softwoods and formaldehyde in both soft- and hardwoods are naturally occurring chemicals in wood - Roffael 2006), whereas specific treatments could determine the transformation of the original VOCs from wood, changing their release rates (Manninen et al. 2002, Hyttinen et al. 2010). In addition, the composition of extractives changes during storage: the content of extractives, including free sugars, lipophilic fats, fatty and resin acids, and sterols, substantially decreases (Salem & Böhm 2013).
Previous studies have already dealt with products derived from the degradation of the wood polysaccharide fraction after specific treatments or processing procedures, showing the formation of compounds such as: (1) formaldehyde, acetaldehyde, propenal, butanal, and butanone, resulting from the breakdown of the polysaccharide fraction of the wood during pyrolysis (Salem & Böhm 2013); (4) furfural, due to hydrolysis of carbohydrates (Roffael et al. 2015); and (5) furan and furan derivatives, as thermal degradation products of cellulose and other polysaccharides (Manninen et al. 2002, Fernández de Simón et al. 2009).
In the literature, no studies have measured VOC emissions from green wood (i.e., before the first drying cycle) and then repeated the measurements on the same samples at regular intervals of time after wood moisture cycles. Assuming that moisture cycles could affect VOC formation and emission, playing either an active (hydrolysis) or a passive (carrier) role in the phenomena described as wood ageing (i.e., change in the chemical composition of constitutive polymers), the aims of the present study were:
• assessing the VOCs emitted by the wood of different species in the green condition (e.g., immediately after tree felling or increment core sampling) and over time in response to several conditioning cycles (e.g., variation of the moisture content of the sample);
• estimating the possibility of using VOC emission as a discriminant between different wood taxa after ageing (e.g., after repetition of different cycles of moisture variation); and
• evaluating whether VOC compounds yield information about the processes of wood modification occurring during wood ageing.
PTR-TOF-MS, used to characterize these emissions, is a useful analytical technique widely applied to provide an overview of the mass spectra of volatile compounds emitted by different materials (Vita et al. 2015, Mancuso et al. 2015, Soukoulis et al. 2013, Cappellin et al. 2013, Han et al. 2010). A detailed description of the PTR-TOF-MS tool is given by Blake et al. (2009).
Study area and sampling design
Sampling procedures, including site and species description, have been thoroughly described elsewhere (Taiti et al. 2016). The list of the species studied is reported in Tab. 1. The specimens, mainly consisting of increment cores originating from the same plant and comprising both heartwood and sapwood, were analysed in the green condition and after several cycles of moisture variation.
Moisture cycles
The first VOC analysis was performed by PTR-TOF-MS on green-state increment cores (indicated as measurement time T1). Subsequently, samples were submitted to three moisture cycles from the fiber saturation point (FSP) to a moisture content (MC) of 10-12%, which were intended to simulate the natural ageing of wood through the cycles of moisture variation that typically occur in wood during this process (Akahoshi & Obataya 2015). Emissions were regularly measured at different intervals of time on samples with a moisture content of 10-12% (measurement times T2, T3, and T4).
The following drying-moistening conditioning schedule was followed:
• for the T2 analysis (day 7): all the samples were placed in a sealed basin (volume: 72 l) for 7 days and dried with magnesium nitrate hexahydrate (EMSURE® ACS, Reag. Ph Eur., CAS 13446-18-9, EC Number 233-826-7, chemical formula Mg(NO3)2·6H2O) at T 20 °C (obtaining an RH of 54-58%) until no weight loss was detected and a moisture content (MC) of 10-12% was achieved for the analysis of emissions;
• for the T3 and T4 analyses: subsequently, the increment cores were moistened at T 20 °C and RH 100% in the same basin for 7 days and submitted to another drying cycle with Mg(NO3)2·6H2O for 7 additional days. At the end of the drying phase, samples were analysed by PTR-TOF-MS (T3, day 21). The same procedure was applied for the T4 measurement (day 35 - Fig. 1).
Soxhlet extraction and UV treatment of core samples
In order to assess whether the emissions were due to, or influenced by, the presence of extractives in wood or to the microbiological activity (bacteria), a supplementary set of Norway spruce samples was prepared.One sub-sample was extracted in alcohol, whereas a second subsample was subjected to UV radiation, maintaining for each test an untreated sample as control.Extractives were removed by means of Soxhlet extraction in ethanol (96%), according to standard prescription (TAPPI T 204 om-88), but without the grinding stage.The other specimen was exposed to UV treatment (Sankyo DenKi G20T10 ® lamp, 20 Watt) for 1 hour, in order to eliminate possible microbiological organisms.
At the end of these treatments, all the specimens, treated and control, were analysed by PTR-TOF-MS.
PTR-TOF-MS and VOCs analysis
The real-time detection of VOCs emitted by the different wood cores was achieved using a PTR-TOF 8000® system (Ionicon Analytik, Innsbruck, Austria). The same samples were analysed again with the same equipment to monitor the VOC profiles in response to the subsequent ageing cycles. All VOCs emitted by the samples were assessed using a setup previously reported in Taiti et al. (2016). Before analysing the samples, an empty jar was measured. The inlet flow was set at 100 sccm. The headspace analysis was recorded in the range m/z 20-210 for 60 s with an acquisition rate of one spectrum per second. All of the measurements were carried out under the following drift tube conditions: 600 V drift voltage, 60 °C temperature, and a pressure of 2.23 mbar, resulting in an E/N value (electric field strength/gas number density) of 130 Townsend (Td, 1 Td = 10⁻¹⁷ V cm²).
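As a plausibility check, the reduced field strength E/N can be recomputed from the drift voltage, temperature, and pressure given above. The Python sketch below is only an order-of-magnitude check: the drift-tube length is not stated in the text, so the 9.3 cm used here is an assumption, not an instrument specification.

```python
# Consistency check of the reduced field strength E/N (in Townsend) from the
# reported drift-tube settings. The drift-tube length is NOT given in the text;
# 9.3 cm is an assumed nominal value, so this is only a rough check.
k_B = 1.380649e-23       # Boltzmann constant, J/K

U_drift = 600.0           # drift voltage, V
T_drift = 60.0 + 273.15   # drift-tube temperature, K
p_drift = 2.23e2          # drift-tube pressure, Pa (2.23 mbar)
L_drift = 0.093           # assumed drift-tube length, m (not stated in the paper)

E = U_drift / L_drift                 # electric field strength, V/m
N = p_drift / (k_B * T_drift)         # gas number density, 1/m^3
E_over_N = E / N                      # in V m^2
townsend = E_over_N / 1e-21           # 1 Td = 1e-17 V cm^2 = 1e-21 V m^2

print(f"E/N = {townsend:.0f} Td")     # ~133 Td, close to the reported 130 Td
```

With these assumed dimensions the calculation lands near the reported 130 Td, which is the only point of the exercise.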
PTR-TOF-MS data analysis
The raw data were acquired by the TOF-DAQ Viewer® software (Tofwerk AG, Thun, Switzerland). Data acquisition and quantification of peaks, expressed as normalized counts per second (ncps), were corrected according to the duty cycle, and the signals were normalized to the primary ion signal (cps → ncps; Herbig et al. 2009). To allow a rapid identification of compounds with a high level of confidence, the internal calibration of the mass spectral data was based on a three-point calibration using m/z = 21.022 (H3O+), m/z = 59.049 (C3H7O+), and m/z = 137.132 (C10H17+); this was performed off-line (Lanza et al. 2015). The tentative identification of the VOCs provided by the tool (high sensitivity and fast selective identification) was based on fragmentation models available in the literature and compared with published VOCs emitted by wood species (Tab. S1 in Supplementary material). Dead time correction and peak extraction were performed according to a procedure described elsewhere using a modified Gaussian peak shape (Cappellin et al. 2011). Subsequently, the peaks associated with the PTR-MS ion source, including m/z = 32 (O2+) and m/z = 37 (water cluster ion), were eliminated.
Statistical analysis
The matrix composed of 80 VOCs × 156 wood samples (13 species; 4 measurement times T1, T2, T3, T4; 3 replicates each) was used to assess the potential of the method to discriminate: (i) between softwoods and hardwoods (2 groups); (ii) among families (9 groups); and (iii) among plant species (13 groups). Three class-modelling approaches were applied to predict the aforementioned groups: (i) the model M1, built on the first measurement dataset (T1); (ii) the model M2, built on the fourth measurement dataset (T4); and (iii) the model M3, which includes all the measurement datasets (T1, T2, T3, T4). Matrices at each time were separately autoscaled by column.
After this procedure, the matrix was preprocessed using the normalization algorithm (which performed normalization of the rows; i.e., samples).A partial least squares discriminant analysis (PLSDA) approach was applied for the classification of the matrix to determine hardwoods and softwoods or the 13 species using the VOCs.The models were developed using a procedure written in the MATLAB 7.1 R14 environment.PLSDA (Sjöström et al. 1986, Sabatier et al. 2003, Infantino et al. 2015) is a PLS regression (SIMPLS algorithm -De Jong 1993) in which the response variable is categorical, expressing the class membership of the statistical units.The objective of PLSDA is to find a model, developed from a training set of observations of known class membership, that separates classes of objects on the basis of their Xvariables.The percentages of correct classification were calculated for the calibration and validation phases, and then used for model selection.The PLSDA model selection was mainly based on the efficiencies and robustness parameters described above.For M1 and M2, T1 and T4 were respectively used as the calibration and validation sets, and the rest of the samples as the test set; for M3, the dataset was divided into a calibration/validation set composed by 66.67% of samples and an internal validation set represented by the remaining 33.3%.The partitioning of the models for M3, which include all the sampling times, was conducted optimally by choosing the Euclidean distances based on the algorithm of Kennard & Stone (1969) that selects objects without the a priori knowledge of a regression model.A summary of the relative importance of the Xvariables for both Y and X model parts is given by Variable Importance in the Projection (VIP - Taiti et al. 2015, Infantino et al. 2015).VIP scores estimate the importance of each variable in the PLS-based models.
VIP scores were calculated according to Chong & Jun (2005).The explanatory variables with VIP scores values larger than 1 tend to be more important than others, although this does not imply that a variable with a low VIP score is not relevant for the model.
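The classification workflow described above (column-wise autoscaling, PLS-DA on dummy-coded class membership, and VIP scores for variable selection) can be sketched in a few lines. The example below is an illustration in Python with scikit-learn, not the MATLAB 7.1 code actually used by the authors; the synthetic data, the number of latent vectors, and the VIP implementation (following the usual formulation attributed to Chong & Jun 2005) are assumptions made for the example.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

def vip_scores(pls):
    """Variable Importance in the Projection for a fitted PLS model
    (usual formulation, cf. Chong & Jun 2005)."""
    T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = W.shape[0]
    ss = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)   # SS of Y per component
    w_norm = W / np.linalg.norm(W, axis=0, keepdims=True)
    return np.sqrt(p * (w_norm ** 2 @ ss) / ss.sum())

# Toy data standing in for the 156 samples x 80 VOC matrix (random, for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(156, 80))                       # VOC intensities (ncps)
classes = rng.integers(0, 2, size=156)               # 0 = softwood, 1 = hardwood
Y = np.eye(2)[classes]                               # dummy-coded class membership

X_auto = StandardScaler().fit_transform(X)           # autoscaling by column
pls = PLSRegression(n_components=3).fit(X_auto, Y)   # PLS-DA = PLS on dummy Y

pred = np.argmax(pls.predict(X_auto), axis=1)        # assign to the closest class
print("correct classification rate:", np.mean(pred == classes))
print("variables with VIP > 1:", np.sum(vip_scores(pls) > 1))
```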
Results and discussion
Several mass peaks in the range of measured masses (m/z = 20-210) were collected from 13 different wood species at four different sampling times; 80 mass peaks were detected in the specimens from the first measurement (Taiti et al. 2016), but their number subsequently decreased after several ageing cycles.In Tab.S1 (Supplementary material) are reported the most significant putative molecules identified in the first analysis from wood in the green condition, including their measured m/z ratio, protonated molecular formula, chemical name, and related reference.The signals observed from green specimens varied in terms of the nature and intensity for each wood species.
All the VOC emissions taken into account decreased in intensity starting from the second analysis (after the first drying of the wood), and they subsequently disappeared after the second and third steps, excluding some VOCs that continued to be emitted in minimal quantities by some species. For instance, Pinus nigra and Cupressus sempervirens showed the highest signal intensity and the highest number of total peaks among softwoods during the first sampling time (Fig. 2 and Fig. 3, Tab. S1 in Supplementary material), but in the subsequent analyses only their emission rates of acetaldehyde and acetone were slightly higher in comparison with all other VOC species. More generally, the softwoods group (Cedrus, Pinus, and Cupressus) continued to emit more methanol than the hardwoods group.
Fresh hardwood species released higher amounts of acetic acid, probably originating from the hydrolysis of acetyl groups in hemicellulose, than of terpene compounds (Fig. 2 and Fig. 3). Regarding the specific emission profiles of hardwoods during ageing, only Populus nigra and Ficus carica showed ethanol emission, even though in very low amounts, differentiating them from the other hardwood and softwood species examined here.
The emission intensity of compounds tentatively identified as terpenes, terpenoids, sesquiterpenes, and their fragments (m/z = 81.069;93.070; 153.127; 137.132; 205.195) was higher in softwoods (Cedrus, Pinus, and Cupressus) in comparison with most hardwood species (Fig. 3), with terpene compounds being the essential part of the resin composition in many softwood species (Risholm-Sundman et al. 1998, Baumann et al. 1999, Schumann et al. 2012, Roffael et al. 2015).The only exception was represented by Bay laurel (Laurus nobilis) that showed similar peak intensities for the masses regarding terpenes, due to the richness of such compounds in this species (e.g., oxygenated monoterpenes and monoterpene hydrocarbons - Flamini et al. 2007).However, given that terpenes are very volatile, their intensity decreased drastically by the second analysis, and, in the following cycles, terpene fragments and monoterpenes (m/z 81.070 and m/z 137.132, respectively) were recorded only in softwoods, especially in Black pine (Pinus nigra).
The comparison of the emission profiles of extracted and UV-treated samples with reciprocal control samples showed no significant changes in the observed emitted compounds.This finding seems to indicate that all the revealed emissions have to be attributed to modification processes occurring in wood and in its constitutive polymers, avoiding the hypothesis of possible interference in the results due to the presence of extractives or because of microbiological activity.
The statistical significance of the data sets measured at the different times (T1 to T4) was tested according to the models previously described (M1, M2, M3).
The performance indicators of the PLSDA models M1 and M2 tested on the other measurement times are reported in Tab. 2.
The M1 model adopted to discriminate hardwood vs. softwood samples using 3 Latent Vectors showed 0.98 and 0.93 mean sensitivity and specificity values, respectively.The mean classification error was 0.05.The mean percentage of correct classification was determined to be equal to 100% for the calibration/validation set (at T1), whereas at T2 it decreased to 84.6%, and at T3 and T4 to 79.5%.The M1 model adopted to discriminate the 9 families used 15 Latent Vectors and showed 1 as the mean sensitivity and specificity values.The mean classification error was equal to 0. Even in this case, the mean percentage of correct classification was equal to 100% for the calibration/validation set (at T1), while at T2 it steeply decreased to 10.3%, and at T3 and T4 to 7.7%.The M1 model that was adopted to discriminate the 13 species used 10 Latent Vectors and showed 1 as the mean sensitivity and specificity values.The mean classification error was equal to 0. The mean percentage of correct classifications in this case was determined be equal to 100% for the calibration/validation set at T1, while at T2 it sharply decreased to 18.0%, and at T3 and T4 to 25.6%.
The results obtained applying the M1 model to the classification of the samples clearly showed the difference in the quality of the VOCs emitted at T1 (fresh wood) and in the subsequent steps (T2 to T4).
The M2 model adopted to determine hardwood vs. softwood samples at T4 using 6 Latent Vectors showed mean sensitivity and specificity values of 1 and 0.98, respectively.The mean classification error was 0.01.The mean percentage of correct classification was determined to be equal to 100% for the calibration/validation set at T3 and T4, whereas at T2 it decreased to 84.6% and at T1 to 66.7%.The M2 model used 9 Latent Vectors to discriminate the 9 families and showed mean sensitivity and specificity values of 0.9 and 1, respectively.The mean classification error was equal to 0.08.The mean percentage of correct classification was found to be equal to 84.6% for the calibration/validation set at T4, while at T3 it decreased to 79.5%, then at T2 it drastically decreased to 20.5%, and at T1 to 7.7%.Then, to discriminate the 13 species, M2 used 8 Latent Vectors and showed mean sensitivity and specificity values of 1 and 0.9, respectively.The mean classification error was equal to 0.03.The mean percentage of correct classification was determined to be equal to 80.8% for the calibration/validation set (at T4), whereas at T3 it decreased to 74.4%, then at T2 it drastically decreased to 20.5%, and at T1 to 7.7%.
The performance indicators of the PLSDA model M3 built on 66.7% of the full dataset and tested on the remaining 33.3% are reported in Tab. 2. The M3 model adopted to determine hardwood vs. softwood sam-ples using 6 Latent Vectors showed mean sensitivity and specificity values of 1.The mean classification error was 0. The mean percentage of correct classification was found to be equal to 100% for both the calibration/validation and the test sets.The M3 model adopted to discriminate the 9 families used 15 Latent Vectors and showed mean sensitivity and specificity values of 0.88 and 0.93, respectively.The mean classification error was equal to 0.10.The mean percentage of correct classification was observed to be equal to 82.6% for the calibration/validation set and 67.3% for the test set.The M3 model adopted to discriminate the 13 species used 17 Latent Vectors and showed mean sensitivity and specificity values of 0.89 and 0.93, respectively.The mean classification error was equal to 0.10.The mean percentage of correct classification was found to be equal to 78.9% for the calibration/validation set and 73.1% for the test set.
Tab. 3 and Tab. 4 show the classification test set results for the M3 families and species, respectively, reported as a confusion matrix. Each row of the square matrix represents the instances in the actual class (i.e., observed), while each column of the matrix represents the instances in the predicted class. Each entry, then, gives the number of instances of real classes that were classified as the predicted class. As a result, all correct classifications are on the main diagonal of the matrix; any value off that diagonal is an incorrect classification. The accuracy of the statistical classification of the model is evaluated by two different measures, the producer's accuracy (PA) and the user's accuracy (UA), reported in Tab. 5 and Tab. 6. The PA of a category indicates to what extent the reference samples of the category are correctly classified, whereas the UA of a category indicates to what extent samples from other categories are not misclassified into the category in question.
Tab. 3 - Confusion matrix for the M3 families test set. Correct classifications of wood samples are on the main diagonal of the matrix, while any value off the diagonal represents an incorrectly classified observation.

Tab. 4 - Confusion matrix for the 13 M3 species test set. Correct classifications of wood samples are on the main diagonal of the matrix, while any value off the diagonal represents an incorrectly classified observation.
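Producer's and user's accuracy follow directly from such a confusion matrix: with rows as actual classes and columns as predicted classes, PA is the diagonal divided by the row totals and UA is the diagonal divided by the column totals. The snippet below uses a made-up 3-class matrix purely for illustration; it does not reproduce the values of Tab. 3 or Tab. 4.

```python
import numpy as np

# Hypothetical 3-class confusion matrix (rows = actual, columns = predicted);
# NOT the actual values of Tab. 3 or Tab. 4.
cm = np.array([[10,  2,  0],
               [ 1,  8,  3],
               [ 0,  1, 11]])

diag = np.diag(cm).astype(float)
producer_accuracy = diag / cm.sum(axis=1)   # per actual class (row sums)
user_accuracy     = diag / cm.sum(axis=0)   # per predicted class (column sums)

print("PA:", np.round(producer_accuracy, 2))
print("UA:", np.round(user_accuracy, 2))
```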
The two matrices return an accurate representation of the statistical classification; incorrect classifications based on the M3 model occur especially for the Fabaceae family (PA = 25% and UA = 33.3%) and for species such as Prunus amygdalus (PA = 0% - Tab. 5 and Tab. 6).
In Tab. 7, the protonated measurement masses presenting high VIP scores for taxonomic, families, or species discrimination based on the three different models are shown.In particular, it is worth noting that the chemical species with higher significance for hardwoods and softwoods discrimination in the M1 model were m/z = 81.070(TI: monoterpene fragment) and m/z = 205.195(TI: sesquiterpenes), but in the other two models these masses were no longer suitable for taxonomic discrimination.On the other hand, compounds with higher VIP value that are able to discriminate families and species, both in M1 and M2 models and in some cases also in the M3 model, are m/z = 33.033(TI: methanol), m/z = 43.050(TI: alkyl fragment), m/z = 45.033(TI: acetaldehyde), m/z = 47.049(TI: ethanol), m/z = 61.028(TI: acetic acid), and m/z = 123.116(sesquiterpene fragment).
The M1 model confirms that VOC emissions tend to decrease with time; in fact, green wood samples are characterized by a multiplicity of compounds not present or negligible in wood after a few moisture cycles.Softwoods and hardwoods are still discerned (> 79% for all sampling-analysis time), whereas the predictive capability for families, and especially for species, drastically decreases even as soon as T2.Even applying the M2 model, it is possible to distinguish between softwoods and hardwoods with a mean % value of at least 66.7% for T1; nevertheless, this model is able to identify families and species with a mean % value higher than 50% only at the T3 sampling time.
The M3 model maintains an accurate predictive capacity only because it also contains the data obtained with measurements carried out on fresh wood, in the absence of which the prediction capability of the models becomes very poor (as shown by Model 2).
Despite the usefulness of the three models in using VOCs as a possible tool for iden-tifying species, families, or groups, the statistical analysis clearly showed the significant difference between the measurements at T1 (green wood) and the measurements after a few cycles of moisture variation (T2 to T4).
In particular, the results of this study indicate that after the first cycles, both softwood and hardwood species tend to converge towards a common class of compounds, although a small residue of terpenes persists in the case of softwoods. It seems that green wood is still rich in compounds originating from the physiological activity of the tree. Furthermore, it seems that the majority of these compounds are very volatile; during the moisture cycles, the residual compounds measured can be correlated to the processes of structural modification of the cell walls. This explanation is confirmed by studies carried out on aged wood that show a lower content of hemicellulose and a significant reduction of its hygroscopicity (Obataya 2007). This analysis of VOCs indicates that such a process starts when the first drying of the wood occurs, and the tests on extractive-free and microbial-free wood allow us to exclude the influence of these two factors on the products emitted.

Tab. 5 - Producer's accuracy (PA) and user's accuracy (UA) for families based on the M3 model.
In this study, moisture cycles were used to simulate and accelerate a natural wood process which occurs gradually over time, at a rate that can change according to wood species and timber thickness. Further investigations are required in order to assess whether water and moisture changes inside wood have an active role in the alteration of the polymers. However, despite the fact that heat treatment changes the composition of wood permanently and modifies the emission of VOCs from the material (e.g., increasing the emission of furfural, which is the main thermal degradation product of hemicellulose), it is interesting to compare our results with those obtained by Manninen et al. (2002) and Hyttinen et al. (2010) on Norway spruce (Picea abies), Scots pine (Pinus sylvestris), and European aspen (Populus tremula). In their works, they demonstrated that the emissions of terpenes from softwoods and of aldehydes from hardwoods decrease significantly after heat treatment. Assuming that heat treatment produces an extreme and very intense ageing of wood, this confirms the trend shown by our results and clearly attests to the loss of these compounds by different wood species during the normal ageing of wood.
Conclusions
Our results showed that, as a consequence of the moisture cycles applied, the spectra of VOC emissions from wood were significantly changed.Emission rates of all observed compounds decreased after only a few moisture (ageing) cycles.Moreover, we did not observe any increase in the emissions of specific compounds or the presence of new compounds during the simulated ageing.
The statistical analysis showed that the two PLSDA models (M1 and M2) based on a single set of measurements (T1 and T4, respectively) were not suitable for discriminating among groups of aged wood samples (hardwoods vs. softwoods, families, and species), whereas using the M3 model allowed us to correctly discriminate between softwoods and hardwoods. This depends on the fact that the M3 model also contains the data set acquired at T1 (green wood), combined with the other 3 datasets (T2, T3, T4).
During the moisture cycles, wood VOC emissions were subjected to quantitative and qualitative modification of the ac-quired spectra in all the species studied.Apparently, losing the compounds produced by the metabolic activity of the tree, the wood emits compounds that possibly stem from structural changes and degradation processes of the main polymers constituting the cell wall.These polymers are common to all wood species, with slight differences in their chemical composition.These results clearly indicate that the analysis of VOCs might be applied to the identification of wood species only on very fresh wood, when the wood material still contains all the classes of compounds characterized by strong volatility.As soon as the natural cycle of variation in the moisture content (i.e.moisture desorption and adsorption) begins, the composition of emissions changes, making wood identification by VOCs very hard even at the highest hierarchical level of taxa (i.e., families).
The results of this study do not clarify the role of water in the observed processes.Further studies are needed to elucidate if water can be considered only as a VOC carrier or it has an active role in the hydrolysis of wood polymers (or both of these processes).Specific treatments (Soxhlet extraction and UV treatment) applied to a separate and supplementary set of samples of Norway spruce (Picea abies) allowed us to exclude the potential effect of microbiological activity or extractives on the observed results.
In this study both the nature and the amount of VOCs emitted by wood at different times seem to indicate that the native organization of the wooden structural polymers (i.e., cellulose, hemicellulose, lignin, and their reciprocal bonding) starts to be modified immediately after the first exposure to the external environment, initiating a process of ageing that can affect the physical, mechanical, and acoustical properties of wood, and whose practical consequences are only recognizable over very long periods of time.
Extending the analysis of VOCs by PTR-TOF-MS to increasingly and naturally aged wood samples will help to verify the presence of new signals to be possibly matched with specific degradation processes of wood components.
| 2018-12-06T20:33:46.865Z | 2017-06-30T00:00:00.000 | {
"year": 2017,
"sha1": "4124362795f4c43dca47c557f8452c6d05f2ac8f",
"oa_license": "CCBYNC",
"oa_url": "http://www.sisef.it/iforest/pdf/?id=ifor2259-010",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4124362795f4c43dca47c557f8452c6d05f2ac8f",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
9085540 | pes2o/s2orc | v3-fos-license | Cones of positive maps and their duality relations
The structure of cones of positive and k-positive maps acting on a finite-dimensional Hilbert space is investigated. Special emphasis is given to their duality relations to the sets of superpositive and k-superpositive maps. We characterize k-positive and k-superpositive maps with regard to their properties under taking compositions. A number of results obtained for maps are also rephrased for the corresponding cones of block positive, k-block positive, separable and k-separable operators, due to the Jamiolkowski-Choi isomorphism. Generalizations to a situation where no such simple isomorphism is available are also made, employing the idea of mapping cones. As a side result to our discussion, we show that extreme entanglement witnesses, which are optimal, should be of special interest in entanglement studies.
Introduction
Positive linear maps of C*-algebras have been a subject of the mathematical literature for several years. In short, such a map sends the cone of positive operators acting on a given Hilbert space into itself. A map Φ is called completely positive (CP) if the tensor product Φ ⊗ ½ k is positive for any dimension k of an auxiliary Hilbert space.
On the one hand, the structure of the set of completely positive maps, which forms a proper subset of the set of positive maps, is already well understood. Completely positive maps find direct application in quantum theory as they correspond to quantum operations, which can be realized in a physical experiment. On the other hand, in spite of a considerable effort several years ago [1][2][3][4][5][6][7][8][9][10][11][12][13][14] and more recently [15][16][17][18][19][20][21][22][23][24] the structure of the set of positive maps acting on operators defined on a d dimensional Hilbert space H d is well understood only for d = 2. In this case every positive map is decomposable, as it can be represented as a sum of a completely positive map and a completely co-positive map.
This mathematical fact, following from the results of Størmer [1] and Woronowicz [7], has profound consequences for the entire theory of quantum entanglement. It implies that the commonly used PPT criterion for quantum separability [25] works in both directions only for 2 × 2 quantum systems [26]. In other words, any state of a two qubit system is separable if and only if it has the property of positive partial transpose (PPT). Hence in this simplest case the sets of separable states and PPT states coincide, and any state characterized by a negative partial transpose is entangled. This is not the case for higher dimensions. For instance, the existence of nondecomposable positive maps shown for d = 3 by Choi [6], implies that for a 3 × 3 quantum system there exist PPT entangled states. Such quantum states are called bound entangled [27], as they cannot be distilled into maximally entangled states, and their subtle properties became recently a subject of a vivid scientific interest [28,29]. In general, the question of characterizing the set of entangled states for an arbitrary quantum system composed of two subsystems of size d, remains as one of the key unsolved problems in the theory of quantum information. However, from a mathematical perspective this problem is related to characterization of the set of all positive maps in d dimensions, which is known to be difficult.
It is convenient to define a subclass of positive maps, called k-positive, such that Φ ⊗ ½ k is positive.¹ It is well known that d-positive maps are completely positive [30]. Due to the theorem of Stinespring [31], any CP map can be represented as a sum of similarity maps x → x_i := a_i* x a_i, where * denotes the Hermitian conjugation and the operators a_i are arbitrary. In the physics literature the operators a_i are called Kraus operators [32], and it is possible to find a representation for which their number does not exceed d².
In general the operators a i are of rank d, but it is useful to distinguish the class of linear maps for which there exists a representation into Kraus operators of rank not greater than k, where k = 1, . . . , d − 1. These maps will be called k-superpositive, since in the case k = 1, the set of maps (denoted by S (H) in [12]) for which all Kraus operators can be chosen to be of rank 1, coincides with the set of superpositive maps, introduced by Ando [33] (see also [34]).
Any linear map acting on a set of positive operators on H d represents an operator acting on the composed Hilbert space H d ⊗ H d . This fact, known as Jamio lkowski isomorphism due to his early contribution [4], implies an intrinsic relation between the sets of quantum maps and quantum states [35,36]. In particular, positive maps correspond to block positive operators [4], while completely positive maps are represented by positive operators [30]. Thus a positive matrix representing a completely positive map in this isomorphism is called a Choi matrix or dynamical matrix [37].
Making use of the standard Hilbert-Schmidt scalar product of two operators, one can introduce a duality relation between sets of operators. The set of positive operators B+ is self-dual. The set of block positive operators is known to be dual to the set of separable operators. Therefore we cannot resist the temptation to call the elements of the set dual to the set of k-block positive operators k-separable, although the same set appears in the literature [38][39][40] and is characterized by the maximal Schmidt number of its elements.
Note that the sets of operators which are a) block positive, b) 2-block positive, c) positive, d) 2-separable and e) separable form a nested chain of proper subsets, see Fig. 1 and Table 1. The same inclusion relations hold for the corresponding sets of maps. As the elements of the cone dual to the cone of positive maps are called superpositive maps [33] (or entanglement breaking channels [34,41]), the dual to the set of k-positive maps consists of k-superpositive maps.
Since the set of block positive operators and separable operators are dual, any positive map (which is not completely positive) can be used to detect quantum entanglement. In particular, the Choi matrix representing such a map is given by a block positive operator and it may play the role of an entanglement witness [26,42].
Such a hermitian operator W is characterized by the property that Tr W σ ≥ 0 for any separable state σ, while negativity of Tr W ρ confirms that the analyzed state ρ is entangled. The key advantage of this notion is due to the fact that the Hermitian operator W can be considered as an observable, and the expectation value Tr W ρ can be decomposed into a sum of quantities which may be directly measured in a laboratory. In such a way one may experimentally confirm that an analyzed quantum state ρ is indeed entangled [43,44]. The set of entanglement witnesses thus corresponds to the set of block positive operators, the structure of which for d ≥ 3 is still being investigated [22,45,46]. It is worth emphasizing that there is no universal witness which could detect entanglement of any state, but for any entangled state a suitable witness can be found. The most valuable are extreme entanglement witnesses, which form extreme points of the set of block positive operators, since they can also detect entanglement of some weakly entangled states. In this way the theory of quantum information provides a direct motivation to study the structure of the set of block positive operators (i.e. the set of entanglement witnesses) and its various subsets.

Table 1. The cones of linear maps acting on the set of operators on H_d and the isomorphic cones of operators. Strict inclusion relations hold upwards (∪) for the cones in columns a), a') and downwards (∩) for the corresponding dual cones in columns b) and b'). In the case k = d the cone of completely positive maps is self-dual, and so is the corresponding cone of positive operators.
The aim of this work is to contribute to understanding of the non trivial structure of the set of positive maps and the corresponding set of block positive operators. We provide a constructive characterization of various subsets of the set of positive maps. In particular we study relations based on duality between convex cones. Another class of results concerns composition of quantum maps. This paper is organized as follows. In section 2 we review necessary definitions of k-positive and k-superpositive maps and formulate a kind of generalized Jamio lkowski-Choi theorem, which relates them to k-block positive and k-separable operators. Several other characterizations of these sets are proved. In section 3 we discuss the duality between the cones of k positive and k-superpositive maps and analyze its consequences.
In section 4 we study the relations of the results obtained in the previous sections to K-positive maps, where K is a so-called mapping cone, introduced in [12].
Cones of positive maps and the corresponding sets of operators
In this section we give the definitions to which we refer in later parts of the paper and provide some concrete examples of objects that match these definitions. We review certain results already known in the literature and for convenience of the reader we prove some of them.
In the entire paper, we shall consider only finite dimensional linear spaces. Let is positive for an arbitrary k-dimensional orthogonal projection q in H.
This is the same as Because of the assumed form of q, in index notation the condition (3) reads This condition means that (½ k ⊗ Φ) |ψ ψ| 0 for any one-dimensional projector |ψ ψ| on k ⊗ H, which is equivalent to k-positivity of Φ.
If Φ is k-positive for every k ∈ AE, we call it completely positive. We shall denote the family of completely positive maps with CP (H). Obviously, CP (H) = k∈AE P k (H), but it is also a well known fact [30] that for k d, we get P k (H) = CP (H). A natural question arises whether the sets P k (H) with k d are all distinct one from another. An affirmative answer can be found in [48]. For k = 1, . . . , d, the map turns out to be k-positive iff λ 1 k . This is a generalization of the famous example by Choi [5] of a map that is d − 1-positive, but not completely positive, Consider an operator a ∈ B (H). It defines a similarity map (also called adjoint ): Ad a : B (H) ∋ x → a * xa ∈ B (H). For any operator a such a map is completely positive. As observed by Kraus [32], any completely positive map can be written in the form For k ∈ AE, we say that Φ is k-superpositive iff rk a i k for all i = 1, . . . , n (rk a i denotes the rank of a i ). We denote the set of k-superpositive maps by SP k (H).
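Choi's example mentioned above can be examined numerically. The sketch below assumes the standard form of that example, Φ(x) = (d − 1)·Tr(x)·1 − x (the λ-parametrized family referred to in the text is not reproduced here). It checks that the Choi matrix of Φ, built with the convention used later in this section, has a negative eigenvalue, so Φ is not completely positive, while a random-sampling test over product vectors is consistent with block positivity (i.e., positivity of Φ); sampling can of course only support, not prove, that property.

```python
import numpy as np

d = 3
rng = np.random.default_rng(1)

def phi(x):
    """Assumed form of Choi's example: Phi(x) = (d-1)*Tr(x)*I - x."""
    return (d - 1) * np.trace(x) * np.eye(d) - x

# Choi matrix C_Phi = sum_{r,s} e_rs (x) Phi(e_rs)
C = np.zeros((d * d, d * d), dtype=complex)
for r in range(d):
    for s in range(d):
        e_rs = np.zeros((d, d)); e_rs[r, s] = 1.0
        C += np.kron(e_rs, phi(e_rs))

print("min eigenvalue of C_Phi:", np.linalg.eigvalsh(C).min())   # -1 < 0: not CP

# Heuristic check of (1-)block positivity on random product vectors
vals = []
for _ in range(10_000):
    psi = rng.normal(size=d) + 1j * rng.normal(size=d); psi /= np.linalg.norm(psi)
    phi_v = rng.normal(size=d) + 1j * rng.normal(size=d); phi_v /= np.linalg.norm(phi_v)
    v = np.kron(psi, phi_v)
    vals.append((v.conj() @ C @ v).real)
print("min over sampled product vectors:", min(vals))   # stays >= 0 (in fact >= d-2)
```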
It is natural to ask whether the classes SP k (H) with k d are all distinct one from another. It turns out that they are, as follows from the Proposition 2.6 at the end of this section. Maps which are 1-superpositive are simply called superpositive [33] and we abbreviate the notation All the sets of operators that we introduced above have their corresponding left transposed partners. For any A ⊂ L (H), we define where t is the transpose map. It is customary that the name of A τ differs from the name of A by a "co" suffix. For example, CP (H) τ is called the set of completely copositive maps. One can easily check that P (H) = P (H) τ and SP (H) = SP (H) τ . As a conclusion of the above discussion, we get the following chain of inclusions see columns b) and a) in Table 1. Finally, we define the following three families of We call them (k, m)-decomposable, (k, m)-positive and (k, m)-superpositive maps, respectively. Obviously, P k,0 (H) = P k (H), S k,0 (H) = SP k (H), P 0,m (H) = (P m (H)) τ and S 0,m (H) = (SP m (H)) τ , so all the previously discussed classes of maps are included in the definitions (11) and (12). It is also easy to see that Note that similar families of maps and inclusion relations between them were analyzed by Chruściński and Kossakowski [22], who called k-superpositive maps partially entanglement breaking channels. In [49] the author defines a family of maps which he calls "2-decomposable", but they correspond to S 0,2 (H) in our notation. That is, we call them "2-supercopositive maps". On the other hand, the families D 2,2 3 and D 2,2 4 , which we would call 2-decomposable, appeared many times in the context of atomic maps [15,50,51]. An element of L (H) is called atomic iff it does not belong to D 2,2 (H). In particular, in [15] it was proved that all the known generalized indecomposable Choi maps of B 3 are atomic. This falsifies the possible conjecture that the Størmer-Woronowicz theorem ( [1], [7]) has a generalization of the form P ( n ) = D n−1,n−1 ( n ).
Linear operators on B (H) ("maps") can be identified with corresponding elements of B (H ⊗ H) ("operators"). In the following, we shall introduce the B (H ⊗ H) counterparts of the families of maps that we defined above.
Let Φ be an element of L (H). Following Jamio lkowski [4] and Choi [30], we define where Ψ + = i e i ⊗ e i is a maximally entangled state on H ⊗ H. We shall denote the map Φ → C Φ by J, It is well known [2,4]
that J| E(H) is an isomorphism between E (H) and the set of Hermitian operators on H ⊗ H, E (H ⊗ H). Since P (H) ⊂ E (H), we shall concentrate on Φ| E(H) in most of what follows and we omit the subscript | E(H) .
Thus J can be regarded as a Ê-linear isomorphism between the Ê-linear spaces
E (H) and E (H ⊗ H).
Let us introduce the so-called set of k-block positive operators (k ∈ AE), where the a's are elements of B (H ⊗ H). We write BP (H ⊗ H) instead of 1-BP (H ⊗ H) and simply call 1-block positive operators block positive. One can easily prove that k-BP (H ⊗ H) ⊂ E (H ⊗ H) for arbitrary k 1 (cf. [47]). Moreover, we have the following Proposition 2.2. (Generalized Jamio lkowski-Choi theorem) Let k be a positive integer. The sets P k (H) and k-BP (H ⊗ H) are isomorphic. We have where the isomorphism J was defined in (14).
Proof. Let Φ be an element of E (H). We shall prove that Φ ∈ P k (H) is equivalent to C Φ ∈ k-BP (H ⊗ H) and thus we will have proved (16). We start from the following lemma, Proof. By definition (see (13)), C Φ = d r,s=1 e rs ⊗ Φ (e rs ). In index notation, From (18) we readily get which is the expected formula. Such a reordering of elements of the superoperator Φ, first used by Sudarshan et al. [37] to obtain the matrix C Φ , was later called reshuffling [52].
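The convention C_Φ = Σ_{r,s} e_rs ⊗ Φ(e_rs) is easy to implement directly, and it makes Choi's characterization of complete positivity concrete: the Choi matrix of a map in Kraus form is positive semidefinite, whereas the transpose map, which is positive but not completely positive, yields a Choi matrix with a negative eigenvalue. The snippet below is a numerical illustration with arbitrarily chosen maps, not code from the paper.

```python
import numpy as np

d = 3
rng = np.random.default_rng(2)

def choi_matrix(phi):
    """C_Phi = sum_{r,s} e_rs (x) Phi(e_rs), following the convention above."""
    C = np.zeros((d * d, d * d), dtype=complex)
    for r in range(d):
        for s in range(d):
            e_rs = np.zeros((d, d), dtype=complex); e_rs[r, s] = 1.0
            C += np.kron(e_rs, phi(e_rs))
    return C

# (a) A completely positive map written in Kraus form x -> sum_i a_i x a_i^*
kraus = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(2)]
cp_map = lambda x: sum(a @ x @ a.conj().T for a in kraus)

# (b) The transpose map, which is positive but not completely positive
transpose_map = lambda x: x.T

print(np.linalg.eigvalsh(choi_matrix(cp_map)).min())        # >= 0 (up to rounding)
print(np.linalg.eigvalsh(choi_matrix(transpose_map)).min()) # -1: not CP
```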
Now we can prove Proposition 2.2. When applied to C Φ , the k-block positivity condition that appears in (15) may be rewritten in index notation as ⊂ . Since this should hold for arbitrary sets of complex numbers ψ j i , φ m l , we can complex conjugate all of them in (20). We also change the names of indices like j ↔ r and m ↔ s. After all these changes we get as equivalent to (20), After small rearrangements, this is precisely condition (4). The only difference is that the position of the first index in φ ij and in ψ lm was changed, which is not significant. As we mentioned in the proof of Lemma 2.1, (4) is equivalent to k-positivity of Φ and so is (22).
Proposition 2.2 appears in the early work by Takasaki and Tomiyama, [10] (it was also proved in [53] Thus for intermediate integer values, k = 2, . . . , d − 1, we get a kind of discrete interpolation between the theorems of Jamio lkowski and Choi.
To find the sets of operators corresponding to k-superpositive maps, we shall need the following lemma, where α ∈ H ⊗ H, r := rk a and Any operator |α α| with α of the form (25) can be obtained as C Ad a for some a ∈ B (H).
Proof. From the polar decomposition of a, we have a = r l=1 √ λ l U |ψ l ψ l |, where the λ l 's are the eigenvalues of |a| := √ a * a, U is a unitary operator on H and the vectors ψ l ∈ H are orthonormal. By the definition (13), ψ j l e j and U i j are matrix elements of U . The vectors φ i are mutually orthogonal. We get It is easy to show that ψ l e ijψm e ij = |φ l φ m |. Hence (27) can be rewritten as which equals |α α| for α = r l=1 φ l ⊗ ψ l . This proves the main part of the lemma. The fact that any projector |α α| can be obtained in this way follows from the calculation of C Ad a for a = k i=1 Using Lemma 2.5, we can prove the promised result that all the sets P k (H) for k = 1, . . . , H are distinct. We have the following Proof. Let a be as in the assumptions of the proposition. Obviously, Ad a is an element of SP k (H). Let us assume Ad a = i Ad ai for some nonzero operators for some m ∈ AE and nonzero vectors α ∈ H, {α l } m l=1 ⊂ H such that C a = |α α| and C a l = |α l α l |. But (29) can only hold if all the vectors α l are scalar multiples of α. According to Lemma 2.5, α is of the form k l=1 φ l ⊗ ψ l , so all the vectors α l have to be of the same form as well. Using Lemma 2.5 again, we conclude that rk a l = k. Since we made no assumptions about the a l 's, the equality rk a l = k implies that Ad a cannot be an element of SP k−1 (H). This proves our assertion 2 .
In short, Proposition 2.6 implies that SP k−1 (H) SP k (H) for k d, as we already mentioned above.
for some m ∈ AE and sets of vectors Obviously, operators on the right hand side of (30) make up the convex cone spanned by the positive rank 1 operators k i,j=1 |φ i ⊗ ψ i φ j ⊗ ψ j |. This is nothing else as the definition of an operator with the Schmidt number equal to ksee [38,39,53].
Thus we get the following (31) Thus the set of k-superpositive maps is isomorphic to k-Sep (H ⊗ H), We can now write a chain of inclusions corresponding to (9), 2 A simpler proof of Proposition 2.6 can be obtained by noting that the Choi matrix C Ada is a positive rank one operator, and so are all the Choi matrices C Ada i , hence the Ada i are scalar multiples of Ada. We have kept the longer proof because of its connection with Lemma 2.5 3 We do not assume the vectors to be nonzero (we omit the brackets (H ⊗ H) to fit the formula into the page and write Sep instead of 1-Sep to simplify notation. The elements of Sep (H ⊗ H) are called separable operators). This chain of inclusions, studied earlier in [22], corresponds to columns b') and a') in Table 1 on page 3. To find the sets of operators corresponding to completely copositive (CP (H) τ ), kcopositive (P k (H) τ ) and k-supercopositive maps (SP k (H) τ ), we use the following lemma Proof. From the definition (13), we have This gives us J (t • Φ) = (½ ⊗ t) J (Φ), which proves the lemma.
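The identity J(t • Φ) = (½ ⊗ t) J(Φ) stated in the lemma can be verified numerically: the Choi matrix of t • Φ coincides with the partial transposition (transposition of the second tensor factor) of the Choi matrix of Φ. The sketch below uses a randomly chosen linear map and a particular index convention for the partial transpose; both are assumptions made for the illustration.

```python
import numpy as np

d = 3
rng = np.random.default_rng(3)

def choi(phi):
    C = np.zeros((d * d, d * d), dtype=complex)
    for r in range(d):
        for s in range(d):
            e_rs = np.zeros((d, d), dtype=complex); e_rs[r, s] = 1.0
            C += np.kron(e_rs, phi(e_rs))
    return C

def partial_transpose(C):
    """(1 (x) t) acting on an operator on H (x) H: transpose the second factor."""
    C4 = C.reshape(d, d, d, d)             # indices (i, k, j, l) of e_ij (x) e_kl
    return C4.transpose(0, 3, 2, 1).reshape(d * d, d * d)

# A random linear map Phi(x) = a x b with fixed a, b (just an illustration)
a = rng.normal(size=(d, d)); b = rng.normal(size=(d, d))
phi = lambda x: a @ x @ b
t_phi = lambda x: phi(x).T                  # the composition t . Phi

print(np.allclose(choi(t_phi), partial_transpose(choi(phi))))   # True
```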
The map ½ ⊗ t that appears in Lemma 2.8 is called partial transposition. Using the lemma, we trivially get Proposition 2.9. Let k be a positive integer. We have the correspondences

3. Relations between k-positive and k-superpositive maps. Other relations
It is a well-known fact that E (H ⊗ H) is a d⁴-dimensional vector space over ℝ, and it is equipped with the symmetric Hilbert-Schmidt product ⟨a, b⟩ := Tr(a*b) = Tr(ab), where a, b ∈ E (H ⊗ H), and the last equality holds due to the Hermiticity of a.
(Fig. 1 caption) The plot above shows unbounded cones and the normalization hyperplane Tr x = 1, while the convex sets below represent their cross-sections. The same sketch is applicable to the corresponding cones of block positive, positive semidefinite and separable operators.

Let A be a cone in E (H ⊗ H). We define the dual cone of A as A° := { b ∈ E (H ⊗ H) | ⟨a, b⟩ ≥ 0 for all a ∈ A }. By comparing the definitions (15) and (31), we easily get that k-BP (H ⊗ H) is the dual cone of k-Sep (H ⊗ H) (Proposition 3.1). It follows that (A°)° = Ā for any cone A ⊂ E (H ⊗ H). In particular,
Proof. Follows directly from the definition of k-BP (H ⊗ H) if we observe that
for a closed cone A ⊂ E (H ⊗ H). We call this fact the bidual theorem. As a consequence, we have the dual relation (k-BP (H ⊗ H))° = k-Sep (H ⊗ H). It is easy to show that the set k-Sep (H ⊗ H) is closed (cf. e.g. [47]). Thus we can use the bidual theorem together with Proposition 3.1 to prove our assertion.
Using the natural duality in E (H ⊗ H), we can introduce an analogous operation in E (H). Let X ⊂ E (H) be a convex cone. We define the dual cone of X as It is easy to notice that (46) can as well be written as which makes the definition (46) transparent. As a direct consequence of (47) and Propositions 2.2 and 3.1, we obtain The relations expressed in Propositions 3.3 and 3.4 can be depicted as in Figure 1, which shows the the cones of block-positive, positive and separable operators for d = 2 and d = 3. Note that the self-dual cone for positive operators is represented by the right-angled triangle. The same sketch represents also the corresponding cones of maps. In physical application one is often interested in a set of normalized operators. For instance, the trace normalization Tr x = 1 corresponds to a hyperplane, represented by a horizontal line.
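The duality between block positive operators (entanglement witnesses) and separable states depicted in Figure 1 can be illustrated with a standard example. Below, the witness is taken to be the swap operator, i.e. the Choi matrix of the transpose map: its expectation value is non-negative on every product state, yet negative on the singlet state, which certifies the entanglement of the latter. The choice of witness and the random sampling are illustrative assumptions, not constructions from the paper.

```python
import numpy as np

d = 2
rng = np.random.default_rng(4)

# The swap operator on H (x) H -- the Choi matrix of the transpose map --
# used here as a standard example of a block positive operator (witness).
W = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        W[i * d + j, j * d + i] = 1.0

# Tr(W sigma) >= 0 on (pure) product states ...
worst = np.inf
for _ in range(10_000):
    psi = rng.normal(size=d) + 1j * rng.normal(size=d); psi /= np.linalg.norm(psi)
    phi = rng.normal(size=d) + 1j * rng.normal(size=d); phi /= np.linalg.norm(phi)
    v = np.kron(psi, phi)
    worst = min(worst, (v.conj() @ W @ v).real)
print("min over sampled product states:", worst)        # >= 0

# ... but Tr(W rho) < 0 on the singlet state, which is therefore entangled.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
print("value on the singlet state:", (singlet @ W @ singlet).real)   # -1
```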
The cross-section of such a normalization hyperplane with each cone gives bounded convex sets of finite volume, estimated in [55]. Their structure for d = 3 is sketched in Fig. 2. The picture is exact in the sense that there exist convex cones in ℝ³ such that their sections by an appropriately chosen plane give the above sets, which fulfill the duality relations in accordance with Propositions 3.3 and 3.4. For example, the circle in Figure 2 is a section of a cone of aperture π/2 by a plane perpendicular to its axis. That cone is self-dual, just as the set CP (H) which it represents.
By modifying Figure 2 a little, we get a sketch that illustrates the important notion of an optimal entanglement witness [45] (cf. also [56]). By definition, a block positive operator W ∈ B (H ⊗ H) is called optimal if and only if the set ∆ W := {ρ ∈ B + (H) | Tr (ρW ) < 0} is maximal (with respect of inclusion) within the family of sets ∆ W ′ (for W ′ ∈ BP (H ⊗ H)). It is known [45] that optimal witnesses have to lie on the boundary of BP (H ⊗ H) and in the case of ′ is an optimal entanglement witness.
Proof. According to Theorem 1 in [45], an entanglement witness W is optimal iff (1 + ε) W − εP ∈ BP (H ⊗ H) for arbitrary ε > 0 and a nonzero P ∈ B + (H ⊗ H). It is therefore natural to define extreme entanglement witnesses as the extreme points of BP (H ⊗ H) ′ and to give priority to witnesses which are not only optimal, but also extreme. We have extreme entanglement witnesses = extreme points of BP (H ⊗ H) ′ , and in principle, no other witnesses are needed to describe the set of separable states. It should be kept in mind that Fig. 3 presents a highly simplified sketch of the problem. Even in the simplest possible case of a 2 × 2 system the set of separable states is 15 dimensional and it is well known that this convex set is not a polytope and its geometry is rather involved [52]. It is not our intention to discuss it here in detail and we return to the subject of duality relations.
Using the results presented earlier, it is straightforward to show the following The next result, related to composition properties of maps [22,47,52], will be crucial for our later discussion Proof. Being more explicit, we want to prove that Φ • Ψ ∈ SP k (H) and Ψ • Φ ∈ SP k (H) for arbitrary k ∈ AE, whenever Φ ∈ SP k (H) and Ψ ∈ P k (H). It is sufficient to show this for Φ = Ad a with an arbitrary a ∈ B (H) of rank k. We prove first that Ψ • Ad a is an element of SP k (H). For this we shall need the following lemma Now we can prove that Ψ • Ad a ∈ SP k (H). Let us take an arbitrary element x ∈ B (H). The fact that rk a k is equivalent to Now we calculate the action of Ψ • Ad a on x, This is a sum of terms of the form (49) and we get Ψ • Ad a = m l=1 Ad a l , where the operators a l := k j=1 |φ j ξ (l) j all have rank lower or equal k. Thus we have proved Ψ • Ad a ∈ SP k (H), which implies that Ψ • Φ ∈ SP k (H) for arbitrary Φ ∈ SP k (H). We still need to show that Φ • Ψ ∈ SP k (H). This can be easily deduced from the following lemma, Lemma 3.10. Let Φ be an element of SP k (H) and Ψ an element of P k (H). Let Φ * , Ψ * be the adjoint operators of Φ, Ψ (resp.) with respect to the Hilbert-Schmidt product on B (H), given by the formula (42) with a, b ∈ B (H). We have Φ * ∈ SP k (H) and Ψ * ∈ P k (H).
Proof. Just as
. The definition of k-positivity of Ψ can be restated as Equivalently, But this is just the condition (51) for Ψ * . Hence Ψ ∈ P k (H) ⇔ Ψ * ∈ P k (H). To prove an analogous equivalence for Φ, it is enough to consider the specific case Φ = Ad a with rk a k. We have Tr (Ad a (x)) * y = Tr (a * xa) * y = Tr x * (aya * ) * = Tr x (Ad a * (y)) * This gives us (Ad a ) * = Ad a * . The ranks of a and a * are equal, so Ad a ∈ SP k (H) ⇔ (Ad a ) * ∈ SP k (H), which implies Φ ∈ SP k (H) ⇔ Φ * ∈ SP k (H) and finishes the proof of the lemma. Now we can finish the proof of Theorem 3.8. By Lemma 3.10, The last equality holds according to Lemma 3.10 and to the first part of the theorem.
In short, we proved that for any Φ k-superpositive and Ψ k-positive, the products Φ • Ψ and Ψ • Φ are k-superpositive.
It is good to notice that Theorem 3.8 justifies the name entanglement breaking channels, which is often used for superpositive, trace preserving maps of B (H). To make this precise, we show the following Corollary 3.11. Let Φ be superpositive. For any ρ ∈ B + (H ⊗ H), we have Proof. Since J (CP (H)) = B + (H ⊗ H), where J is the isomorphism defined in (14), we have ρ = (½ ⊗ Ψ) |ψ + ψ + | (55) for a suitably chosen Ψ ∈ CP (H). We have Because CP (H) is a subset of P (H), Ψ is an element of P (H) an we get from Theorem 3.8 the inclusion Φ • Ψ ∈ SP (H). By Proposition 2.7, the operator (½ ⊗ Φ • Ψ) |ψ + ψ + | is separable. Comparing this with (56), we immediately see that (54) is true.
Obviously, it is possible to repeat the argument given above in the case when we assume k-superpositivity of Φ and demand k-separability of (½ ⊗ Φ) ρ. Therefore one could think of calling k-superpositive and trace preserving maps k-separability inducing channels.
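The content of Corollary 3.11 can be checked in a toy experiment. In the sketch below the superpositive map is taken to be the completely depolarizing channel Φ(x) = Tr(x)·1/d, a standard example of an entanglement breaking map (this specific choice is an assumption for illustration). Applying ½ ⊗ Φ to a maximally entangled two-qubit state returns the maximally mixed state, which is separable; as a consistency check, its partial transpose is positive.

```python
import numpy as np

d = 2

# Maximally entangled (Bell) state on H (x) H
psi = np.zeros(d * d); psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)

# A superpositive (entanglement breaking) map: the completely depolarizing
# channel Phi(x) = Tr(x) * I/d, chosen here as the simplest standard example.
def one_otimes_phi(r):
    """Apply 1 (x) Phi to an operator on H (x) H."""
    r4 = r.reshape(d, d, d, d)                    # indices (i, k, j, l)
    reduced = np.trace(r4, axis1=1, axis2=3)      # traces of the d x d blocks
    return np.kron(reduced, np.eye(d) / d)

out = one_otimes_phi(rho)
print(out)                                        # I/4: the maximally mixed state

# Its partial transpose is positive, as it must be for a separable state.
pt = out.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)
print(np.linalg.eigvalsh(pt).min() >= -1e-12)     # True
```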
We shall finish this section with a number of characterizations of the sets SP k (H) and P k (H). Together with Theorem 3.8, the following four theorems should be regarded as some of the most important material included in the paper and be studied with care.
The following three characterization theorems can be proved in practically the same way as Theorem 3.12.
Theorem 3.13. Let Φ ∈ E (H) and k ∈ AE. The following conditions are equivalent: Theorem 3.14. Let Φ ∈ E (H) and k ∈ AE. The following conditions are equivalent: Theorem 3.15. Let Φ ∈ E (H) and k ∈ AE. The following conditions are equivalent: Theorem 3.13 is much the same as Theorem 3.12, but the order of the operators Ψ, Φ is different in these theorems. Theorems 3.14 and 3.15 are in complete analogy with 3.12 and 3.13 (resp.), but the roles of k-positive and k-superpositive maps have been exchanged. In section 4 we shall add two more to the list of equivalent conditions in the the above theorems, see Corollaries 4.3 and 4.4.
We should remark that the four theorems given above make up a broad generalization of a number of relatively well known facts about the sets P (H), CP (H) and SP (H), (these can be found on page 345 of [52]). We should emphasize that the results like (61)-(63) and our four theorems do not simply follow from the closedness relations of the type Φ, Ψ ∈ CP (H) ⇒ Φ • Ψ ∈ CP (H) (and similarly for P (H), P k (H), SP k (H) and SP (H)).
Mapping cones
In the previous sections we have studied maps of B (H) into itself for H a finite dimensional Hilbert space, and much of the technical work has involved the Choi matrix (13) and the Jamio lkowski (14) isomorphism. In more general situations these techniques are not available, and one of us introduced in [12] an alternative approach to study positivity properties of maps of a C * -algebra into B (H). We now recall some of the definitions. For simplicity we continue to assume H is finite dimensional.
Let A be a C*-algebra. Then there is a duality between bounded linear maps Φ of A into B (H) and linear functionals Φ̃ on A ⊗ B (H), given by Φ̃(a ⊗ b) = Tr(Φ(a) bᵗ), where Tr is the usual trace on B (H) and t the transpose. Furthermore, Φ is positive iff Φ̃ is positive on the cone A+ ⊗ B+ (H) of separable operators. We say a nonzero cone K in P (H) is a mapping cone if Φ ∈ K implies Ψ • Φ • Υ ∈ K for all Ψ, Υ ∈ CP (H). Well known examples are P (H), CP (H), the copositive maps and SP (H). We define Proof. If Φ ∈ P k (H) then ½ k ⊗ Φ ≥ 0, where ½ k is the identity map on a k-dimensional Hilbert space. Thus if Ψ ∈ CP (H), (67) Thus P k (H) is a mapping cone.
If rk a k then for all b ∈ B (H), rk ab k and rk ba k. Thus Ad b • Ad a = Ad ba ∈ SP k (H), and Ad a • Ad b ∈ SP k (H). It follows that SP k (H) is a mapping cone. From the definitions of D k,m (H) and S k,m (H) it follows that they are also mapping cones. Proof. By Theorem 3.14, Φ ∈ P k (H) iff Ψ • Φ ∈ CP (H) for all Ψ ∈ SP k (H). Hence by [ Using the above theorem and its proof together with Theorem 1 in [24] we can add two more conditions to the equivalent conditions in Theorems 3.12 and 3.14, Corollary 4.3. The following conditions are equivalent for Φ ∈ E (H), Corollary 4.4. The following conditions are equivalent for Φ ∈ E (H), Using Proposition 2.7, it becomes evident that the condition 2) in Corollary 4.4 is the same as the k-positive maps criterion by Terhal and Horodecki [38] (for k = 1, we get the well known positive maps criterion by Horodeccy 5 , [26]). Corollary 4.3 provides us with an analogous characterization of the set of k-block positive operators: An operator a ∈ B (H ⊗ H) is k-block positive iff (½ ⊗ Ψ) a 0 for all k-superpositive maps Ψ.
Furthermore, the main theorem in [49] is a version of Corollary 4.3, slightly modified to encompass 2-copositive maps. One can easily deduce from it that the set of one-undistillable states on H ⊗ H is precisely 2-BP (H ⊗ H).
Concluding remarks
In this paper we studied the structure of the set of positive maps from the space B (H) of linear operators on a finite-dimensional Hilbert space H into itself. This topic is of substantial interest in quantum physics, since positive maps are closely related to the separability problem due to the positive maps criterion by Horodeccy [26]. More generally, but less acute, positive maps are related to the separability problem because they correspond to hyperplanes that separate entangled states from the separable ones.
Here we developed general methods for proving results like the Horodeccy criterion, both in the situation where the Jamiołkowski isomorphism is at hand and within a more general setup, where other techniques need to be used, based on mapping cones (cf. Section 4). Our discussion concentrated on k-positive maps and on the dual cones of k-superpositive maps, consisting of completely positive maps that admit a Kraus representation by operators of rank at most k (such maps are also called partially entanglement breaking channels, [22]). We gave a number of characterization theorems (Theorems 3.12, 3.14, 3.13, 3.15 and Corollaries 4.3, 4.4) for both k-positive and k-superpositive maps, pertaining to their properties under taking compositions. Central to these results is the observation that a product of a k-superpositive map and a k-positive map is again a k-superpositive map (Theorem 3.8). We have not seen that particular result anywhere in the literature. Also, our characterization theorems seem to appear for the first time in this paper.
We introduced (similarly to [22], only using different notation) the cones of (k, m)-separable, (k, m)-decomposable and (k, m)-positive maps (S k,m (H), D k,m (H) and P k,m (H), respectively). The main results of this paper can be trivially generalized to these families of maps.
Most of our work relied on the simple and fine idea of duality between convex cones [54], which is nevertheless hard to grasp intuitively for spaces of dimension higher than 3 (it is not even completely trivial for three-dimensional cones, see Figure 2). We hope that the figures we included in Section 3 could help the reader to develop basic intuitions about the geometric background to our work. On that occasion we touched upon the question of optimality of entanglement witnesses. By pointing out that the extreme points of the set of unital witnesses are optimal, we tried to convey the idea that future efforts could concentrate on witnesses which are not only optimal, but also extreme. 6 Within this paper several results by other authors [4,26,30,38,49,53] appear as special cases of general theorems. Presented in the way we did it, they start to reveal a mathematical structure of a certain degree of generality. For a mathematician, it is natural to ask if there are many examples of this structure, or maybe it is very specific to the studied cones. In other words, the question is: how many interesting examples are there of mapping cones K in L (H) such that P K (H) = K? We do not know the answer at the moment. From a physicist's perspective, the key question here is to what extent the families S k,m (H), D k,m (H) and P k,m (H) can be useful in entanglement research and how our theorems can be applied in practice. The example of the paper [49] suggests that our discussion is not purely abstract and may relate to physically relevant questions like the distillability of entanglement.
"year": 2009,
"sha1": "ee58837112f0fb0e0204a781d8b66ba131562a91",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0902.4877",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8a97baa9f35e26ff599025a7d669fe3f6d0cfb0f",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
Repetitive Nerve Stimulation in Amyotrophic Lateral Sclerosis
Background: Nowadays, it is widely known that decremental responses in low-frequency repetitive nerve stimulation (LF-RNS) are frequently observed in patients with amyotrophic lateral sclerosis (ALS). The pathological mechanism of this phenomenon remains unknown. This study aimed to illuminate the features of RNS in Chinese patients with ALS. Methods: Clinical and electrophysiological data of 146 patients with probable or definite ALS who underwent RNS were retrospectively collected and analyzed. LF-RNS (3 Hz) was performed in trapezius, deltoid, abductor digiti minimi (ADM), quadriceps femoris, and tibialis anterior. High-frequency RNS (HF-RNS, 10 Hz) was performed only in ADM. The two-sample t-test and Chi-squared test were used for statistical analysis. Results: Decremental responses to LF-RNS (≥10%) in at least one muscle were detected in 83 (56.8%) of the cases and were most commonly seen in trapezius and deltoid. The incidence of decremental response was higher in patients with upper limb onset. Incremental responses to HF-RNS (≥60%) in ADM were observed in 6 (5.6%) of the cases. Of 106 muscles with a decremental response, 62 (57.4%) had a continuous decremental pattern, which was more common than a U-shape pattern (37 cases, 34.3%). Nineteen cases showed definite decrements in LF-RNS tests in trapezius, while no abnormalities were found in the electromyography and neurological examination of the sternocleidomastoid muscle, which is supplied by the accessory nerve as well. Conclusions: Decremental responses in the RNS are commonly observed in ALS patients. The findings regarding the trapezius indicated that, in some cases, ALS onset could be initiated by a "dying back" process, with destruction of neuromuscular junctions (NMJs) before motor neurons. Incremental responses in the ADM implied that damage to the NMJs involved both the postsynaptic and presynaptic membranes.
Ethics approval
The Chinese People's Liberation Army General Hospital Ethics Committee approved the study. All patients participated voluntarily and signed informed consent forms. The interests of the patients were protected throughout, and no patient was harmed.
Subjects
Clinical and neuroelectrophysiological data of patients diagnosed with ALS were collected from the Neurology Department, Chinese People's Liberation Army General Hospital, between April 2016 and December 2017. All patients were examined and diagnosed by experienced senior neurologists. According to the revised El Escorial criteria, [11] the cases were further categorized as clinically definite ALS, clinically probable ALS, clinically probable-laboratory supported ALS, or clinically possible ALS. Patients who met any of the following conditions were excluded: (1) a diagnostic category below clinically probable-laboratory supported ALS, (2) a positive response to anticholinesterase agents, (3) obvious improvement during clinical follow-up, (4) incomplete clinical data. Patients with progressive muscular atrophy were included in this study, categorized as lower motor neuron disease (LMND) ALS. [12][13][14] A total of 146 cases were enrolled in this study (91 men, mean age: 54.3 ± 10.8 years, range: 22-78 years). The diagnostic categories by data collection were 93 clinically definite ALS, 33 clinically probable ALS, 8 clinically probable-laboratory supported ALS, and 12 LMND ALS.
Repetitive nerve stimulation methods
The electrodiagnostic studies, including RNS, electromyography (EMG), and nerve conduction studies, were performed on a Keypoint workstation (31A06, Alpine Biomed ApS, Denmark). Skin temperature over the examined muscle was maintained at 32°C or above throughout the entire measurement. Surface electrodes were used to record the belly-tendon compound muscle action potential (CMAP). At least one muscle was examined in each patient. RNS was performed in the following muscles: deltoid for the axillary nerve (n = 51), trapezius for the accessory nerve (n = 149, both sides tested in 5 cases), abductor digiti minimi (ADM) for the ulnar nerve (n = 110), quadriceps femoris (QUA) for the femoral nerve (n = 25), tibialis anterior (TIB) for the common peroneal nerve (n = 50), and orbicularis oculi (ORB) for the facial nerve (n = 8, both sides tested in 2 cases). A low-frequency train of 10 stimuli at 3 Hz was delivered to each nerve and the responses were recorded. The decrement was measured as the drop in peak-to-peak CMAP amplitude from the 1st to the 4th response. Based on the conventional criterion, a decremental response of 10% or greater was considered positive, in accordance with the suggestions of the American Academy of Emergency Medicine Quality Assurance Committee. [15] A high-frequency train of 100 stimuli at 20 Hz was delivered only to the ADM in 108 participants, in consideration of tolerance. An incremental response of 60% or greater was considered positive, because of its high diagnostic sensitivity and specificity, in accordance with previous study recommendations. [16] We investigated decremental patterns in 108 muscles with positive decrements, to determine whether the U-shape pattern seen in myasthenia gravis (MG) was also commonly found in ALS patients.
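The decrement and increment figures used above are simple percentage changes of CMAP amplitude within a stimulus train. A minimal sketch of the calculation is given below (function names are ours, and the amplitude values are invented for illustration only, not patient data); the 10% and 60% cut-offs are the criteria stated above.

def percent_decrement(amplitudes_mv):
    """LF-RNS: percentage drop of the 4th response relative to the 1st."""
    return (amplitudes_mv[0] - amplitudes_mv[3]) / amplitudes_mv[0] * 100.0

def percent_increment(baseline_mv, max_mv):
    """HF-RNS: percentage rise of the largest response relative to the baseline response."""
    return (max_mv - baseline_mv) / baseline_mv * 100.0

# Illustrative 3 Hz train of 10 peak-to-peak CMAP amplitudes (mV)
train = [8.2, 7.6, 7.2, 7.0, 7.1, 7.3, 7.4, 7.5, 7.5, 7.6]
dec = percent_decrement(train)
print(f"decrement: {dec:.1f}% -> {'positive' if dec >= 10 else 'negative'}")

inc = percent_increment(baseline_mv=3.0, max_mv=5.1)
print(f"increment: {inc:.1f}% -> {'positive' if inc >= 60 else 'negative'}")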
Statistical analysis
Statistical calculations were performed using SPSS 22.0 software (SPSS, Chicago, IL, USA). The significance level was set at P < 0.05. Descriptive statistics for normally distributed variables are presented as means and standard deviations. Enumeration data, such as the frequency of decrements, were compared using the Chi-squared test, and measurement data, such as the decremental percentage, were compared using two-sample t-tests.
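The same two comparisons can be reproduced outside SPSS; the sketch below uses SciPy with invented numbers purely for illustration: a two-sample t-test for a measurement variable and a Chi-squared test for a 2 x 2 frequency table.

import numpy as np
from scipy import stats

# Illustrative measurement data (e.g., decremental percentages in two groups)
group_a = np.array([12.5, 15.1, 9.8, 14.2, 11.7])
group_b = np.array([8.4, 7.9, 10.2, 6.5, 9.1])
t_stat, p_t = stats.ttest_ind(group_a, group_b)

# Illustrative enumeration data: rows = onset region, columns = decrement present / absent
table = np.array([[30, 15],
                  [20, 35]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

print(f"t = {t_stat:.2f}, p = {p_t:.3f}; chi-squared = {chi2:.2f}, p = {p_chi:.3f} (alpha = 0.05)")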
Results
The EMGs showed diffuse neurogenic changes in all the cases. A significant decremental response was observed in at least one muscle in 83 (56.8%) patients. Clinical features of patients with and without decremental response are summarized in Table 1. Age, age of onset, disease duration, ALS Functional Rating Scale-Revised score, and disease progression rate showed no statistical difference. The decremental responses were seen more frequently in upper-limb-onset cases and less frequently in bulbar-onset cases.
Distributions of decrement in the different muscles are illustrated in Table 2. Decremental responses were most frequently seen in the trapezius (55.0%), followed by the deltoid (52.9%). Decrements in ADM, QUA, and TIB were rare, with frequencies of 5.5%, 4.0%, and 2.0%, respectively. No decrement was seen in the orbicularis oculi.
Of the 106 patients who underwent the 20 Hz stimulation of the ADM, 6 (5.6%) had an incremental response greater than 60%, while no decremental response was found in the same muscle with the 3 Hz stimulation. EMG of the ADM showed neurogenic change in all these cases. 3 Hz RNS of the trapezius and TIB was performed in the 6 participants, and 5 of them also had decremental responses in the trapezius.
In the electrophysiological traces of the 108 muscles with decremental responses, 62 (57.4%) showed a continuous decrease [Figure 1a], while only 37 (34.3%) showed a classical U-like recovery [Figure 1b]. There were also 9 (8.3%) atypical cases [Figure 1c]. Unlike in MG, the distinct U-shape pattern did not dominate in ALS cases.
The electrophysiological results showed that 19 patients had a definite decremental response in the trapezius, yet there was no sign of spontaneous potentials or neurogenic motor unit potentials in the EMG of the sternocleidomastoid muscle, which is supplied by the accessory nerve as well. The clinical data were reviewed. The neurological examination showed no weakness in neck flexion, neck turning, or shoulder shrugging, indicating normal muscle strength of the trapezius and sternocleidomastoid muscles.
Discussion
There are a number of reports stating that decremental responses are commonly observed in patients with ALS, beginning with the first description by Mulder et al. in 1959. [3] Our study found decremental responses in at least one muscle in more than half of the patients enrolled (83 patients, 56.8%). In Table 3, the incidence and distribution of decremental responses in the present and previous reports [4][5][6][7][8][9][10] are summarized for comparison. The deltoid and trapezius showed decremental responses more frequently than the other muscles, indicating that proximal muscles are more sensitive than distal muscles in RNS in ALS patients, in line with previous reports. [4,6] Some researchers have stated that the incidence of decremental responses in RNS is significantly higher in rapidly progressing ALS patients than in slowly progressing patients. [5,[17][18][19] Other studies, [7,10] as well as the current study, did not find this correlation. The incidence of decremental response was significantly higher in patients with upper limb onset and lower in patients with bulbar onset when compared to patients with other onset regions. This was in agreement with a Japanese report [9] and two Chinese reports. The cause of these phenomena remains unknown.
The cause of decremental responses in ALS patients remains controversial, but it is typically attributed to immature sprouts formed during progressive denervation and chronic reinnervation at the early stage of the disease, resulting from MN loss. [20] This is also termed the "dying forward" theory.
In the present study, 19 cases showed a definite decremental response in the trapezius during LF-RNS. The sternocleidomastoid muscle, which, like the trapezius, is supplied by the accessory nerve, appeared completely normal, without weakness on neurological examination or neurogenic damage on EMG. This has never been reported before. It is interesting as it seems to be clinical proof of the "dying back" theory, which had previously been seen only in animal models. [21] This suggests that the NMJ destruction and axon degeneration of the accessory nerve occurred before MN cell loss. Contrary to the dying forward theory, Fischer et al. performed extensive pathological experiments on the SOD1 G93A mouse model and found that end-plate denervation and ventral root axon loss occur before any loss of MNs in the anterior horn of the spinal cord. An autopsy on a single ALS patient showed denervation and reinnervation changes in muscles, while the MNs remained structurally intact. [21] These results demonstrated that the MN pathology begins at the NMJs and distal axons and then proceeds in reverse, as a "dying back" pattern. A similar pathological pattern is reported in several animal models of MN disease, such as the motor neuron degeneration model and the progressive motor neuronopathy model. [22] Even a complete rescue of the MN cells cannot effectively delay muscle denervation and only moderately postpones the death of mSOD1 mice. [23][24][25] The mechanism of the "dying back" pattern remains unclear. Nutritional status is positively correlated with prognosis and survival time, in both humans and transgenic mice. [26] Nardo et al.'s study in SOD1 G93A mice detected different progression rates and life spans in models of various genetic backgrounds. [27] The fast-progressing mice, which had a higher basal metabolic rate, [28] seemed to retain more NMJ innervation at onset than the slow-progressing mice, while MN loss was equal in the two groups. Energy deficits, resulting from the hypermetabolism and malnutrition of ALS, could give rise to muscle-initiated NMJ dysfunction. NMJs could be more vulnerable to this gap in metabolic demand since they are the most distal region of the axon and therefore display alterations earliest. [29] The NMJ dismantlement contributes to muscle denervation and MN death. Another possible explanation is that sublethal damage to the cell body, such as accumulation of insoluble complexes of mutant SOD1 protein [30] or chronic glutamate toxicity, [31] causes deficient transport to the NMJs of components that are necessary for maintaining synaptic activity. This undernourishment results in degeneration of the distal axons, while the structural integrity of the cell body remains unaffected. [21,32] The "dying back" pattern in electrophysiological tests was observed other than in the sternocleidomastoid muscle. Additional cases should be enrolled for novel discovery and further research. If this phenomenon is observed in the deltoid as well, it would be reasonable to perform a muscle biopsy. Besides routine observation under a light microscope, morphological changes of the NMJ structures, such as the synaptic membrane and mitochondria, as well as dysfunction of the voltage-gated calcium channels, could be observed through electron microscopy, to see if these occur ahead of the MN damage. The "dying back" theory could then become even more persuasive and valuable for understanding the underlying pathogenesis of ALS.
In patients with MG, a recovery is seen after the decrement reaches its lowest point, producing the U-shape pattern. [33] Some authors have stated that the U-shape has also been seen in ALS patients, [34] although partly with smaller recoveries than in MG. [6] In this study, unlike in classical postsynaptic disorders such as MG, the U-shape was not commonly seen in the RNS tests of ALS patients (37 cases, 34.3%) when the stimulus reached the 6th or 7th wave. This likely resulted from decreased release and storage of acetylcholine in synaptic vesicles, which could not be replenished and renewed as in MG, resulting in a continuously dropping safety threshold of the NMJs. [35] The results revealed that the NMJ destruction in ALS was not limited to the postsynaptic membrane. Incremental responses in the HF-RNS of the ADM were found in 6 (5.6%) cases in the current study, without decremental responses in the LF-RNS of the same muscle. Five of these cases also showed decrements in the LF-RNS of the trapezius. The remaining case had an increment in the ADM as the only abnormality in the RNS, suggesting exclusive presynaptic destruction of the NMJs. The findings above imply a coexistence of pre- and postsynaptic changes, which favors the view that more than one component of the NMJ could be involved in the pathological process.
Decremental responses in LF-RNS are often seen in patients with ALS, yet the pathophysiological mechanism remains unclear. Our results might suggest a "dying back" process in some ALS patients, in accordance with evidence from previous animal experiments. NMJ dysfunction likely plays an important role in initiating the disease, before MN loss. The NMJ changes in ALS more likely involve the synapse as a whole and are not restricted to a certain region. However, the features revealed by electrophysiological examination varied between nerves and individuals. Multiple mechanisms contribute to the RNS decremental responses in ALS patients, and the underlying mechanism could be rather complex.
"year": 2018,
"sha1": "e3dfb778ad39a6b39b123762db4808ff723414bd",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0366-6999.240798",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e3dfb778ad39a6b39b123762db4808ff723414bd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Expression Profile Analysis of Differentially Expressed Circular RNAs in Steroid-Induced Osteonecrosis of the Femoral Head
Background A growing number of studies have suggested that circular RNAs (circRNAs) serve as potential diagnostic biomarkers in many diseases. However, the role of circRNAs in steroid-induced osteonecrosis of the femoral head (SONFH) has not been reported. Methods Second-generation sequencing was performed to profile circRNA expression in peripheral blood samples from three SONFH patients and three healthy individuals. We confirmed our preliminary findings by qRT-PCR. Bioinformatics analysis was conducted to predict their functions. Results The results showed 345 dysregulated circRNAs. qRT-PCR of eight selected circRNAs preliminarily confirmed the results, which were consistent with the RNA sequencing. Bioinformatics analyses were performed to predict the functions of circRNAs, the target genes of the miRNAs, and the networks of circRNA-miRNA-mRNA interactions. Conclusions This study provides a new and fundamental circRNA profile of SONFH and a theoretical basis for further studies on the functions of circRNAs in SONFH.
Background
Steroid-induced osteonecrosis of the femoral head (SONFH) is an intractable disorder caused by the use of glucocorticoids (GCs) and leads to progressive necrosis of osteocytes and the bone marrow [1,2]. SONFH cases are increasing owing to long-term GC use or high doses of GCs in patients [3]. Patients with SONFH are commonly asymptomatic, which makes early diagnosis a challenge. SONFH usually occurs within two years after GC treatment, typically within the first six months [4]. Various joint preservation procedures have been performed for the treatment of precollapse disease [5]. However, in the absence of appropriate therapeutic interventions in the early stage, the collapse of the femoral head may later become unavoidable, resulting in severe secondary osteoarthritis and warranting artificial joint replacement in nearly 70% of patients [6,7].
Thus, there is an urgent requirement to identify effective diagnostic biomarkers for SONFH. Circular RNAs (circRNAs) are a large class of noncoding RNAs. Advances in sequencing technologies have led to the recognition that circRNAs are well-expressed in peripheral blood. CircRNAs regulate gene expression by interacting with microRNAs (miRNAs) as sponges and participating in biological activities [8]. Recent studies have shown that circRNAs in blood samples are potential clinical biomarkers for various diseases [9,10]. However, to our knowledge, no study has profiled circRNA expression in SONFH by RNA sequencing (RNA-seq).
To explore the functions of circRNAs in the development of SONFH, we undertook RNA-seq of samples from SONFH patients and healthy individuals. We identified differentially expressed circRNAs and predicted their functions and pathways. Putative networks of circRNA-miRNA-mRNA interactions were constructed for further studies on clinical diagnosis and treatment.
Materials and Methods
2.1. Case Information. The Institutional Review Board (Protocol Number: 2018-KL-075-02) approved the study, and every participant signed an informed consent form. Six peripheral blood samples were used for RNA-seq, including three patients with SONFH and three healthy control individuals with a history of GC use. Ten pairs of SONFH patients and control subjects were used for qRT-PCR validation. The diagnosis of SONFH was based on each patient's history and imaging (radiograph, CT, and MRI). All patients in the case group were at Ficat stage II or III [11]. The inclusion criteria for the SONFH and control groups were a history of GC use (>2 g within a 3-month period) and no systemic disease, such as rheumatoid arthritis, ankylosing spondylitis, or systemic lupus erythematosus. The two groups were matched for age, sex, race, and region.
2.2. RNA Extraction and circRNA Sequencing. The HiPure PX Blood RNA Mini Kit (Amgen, Guangzhou, China) was used to extract total RNA from fresh blood mixed with three volumes of RNASafer LS Reagent (Amgen). The RNA concentration was obtained by a Qubit 3.0 fluorometer (Invitrogen, Carlsbad, CA, USA), and the integrity was evaluated with the Agilent 2100 Bioanalyzer (Applied Biosystems, Carlsbad, CA, USA). RNase R (Epicentre Technologies, Madison, WI, USA) was used to digest linear RNAs. CircRNA sequencing analysis was performed by Illumina HiSeq X Ten to identify circRNAs in patients with SONFH (n = 3) and control subjects (n = 3). CircRNAs showing a fold change > 1.5 and p < 0.05 were considered differentially expressed. Raw sequence reads are presented in the Sequence Read Archive (SRA) database [12] (Number: PRJNA522627).
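The differential-expression call described above reduces to a simple threshold on fold change and p value. A minimal sketch of such a filter is shown below (pandas; the table, its column names and the use of 1/1.5 as the down-regulation cut-off are our own illustrative assumptions, not the output format of the actual sequencing pipeline).

import numpy as np
import pandas as pd

# Invented expression table for illustration only
df = pd.DataFrame({
    "circRNA":      ["circ_A", "circ_B", "circ_C", "circ_D"],
    "mean_SONFH":   [120.0, 15.0, 60.0, 5.0],
    "mean_control": [40.0, 30.0, 55.0, 20.0],
    "p_value":      [0.003, 0.04, 0.30, 0.01],
})

df["fold_change"] = df["mean_SONFH"] / df["mean_control"]
df["log2FC"] = np.log2(df["fold_change"])

significant = df[(df["p_value"] < 0.05) &
                 ((df["fold_change"] > 1.5) | (df["fold_change"] < 1 / 1.5))]
up = significant[significant["fold_change"] > 1.5]
down = significant[significant["fold_change"] < 1 / 1.5]
print(f"{len(up)} up-regulated and {len(down)} down-regulated of {len(df)} circRNAs")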
Bioinformatics Analysis.
Functions of target genes and enrichment pathways were analyzed by using Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG), and Reactome analyses. CircRNA-miRNA-mRNA networks were predicted by miRanda.
2.4. qRT-PCR. qRT-PCR was performed to preliminarily confirm candidate circRNAs in 10 pairs of samples. Total RNA was extracted from all groups by using TRIzol reagent (Invitrogen) and the RNeasy Plus Mini Kit (Qiagen, Guangzhou, China). qRT-PCR was implemented by using Geneseed qPCR SYBR Green Master Mix on an ABI 7500 system. The relative expression levels of selected circRNAs were determined using the 2^(−ΔΔCT) method. Glyceraldehyde 3-phosphate dehydrogenase was utilized to normalize the RNA preparations. Primer sequences for circRNAs were designed using Primer 5. The primer sequences of the eight selected circRNAs are shown in Table 1, and back-splicing sites are shown in Supplemental figure 1.
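The 2^(−ΔΔCT) calculation itself is short; a sketch with invented Ct values is given below (glyceraldehyde 3-phosphate dehydrogenase as the reference gene, as in the protocol above; the numbers are purely illustrative).

def relative_expression(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target transcript by the 2^(-ddCt) method."""
    d_ct_case = ct_target_case - ct_ref_case      # normalise to the reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_case - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Illustrative threshold-cycle (Ct) values
fold = relative_expression(ct_target_case=24.1, ct_ref_case=18.0,
                           ct_target_ctrl=26.3, ct_ref_ctrl=18.2)
print(f"relative expression (SONFH vs control): {fold:.2f}-fold")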
Statistical Analysis. All data were analyzed by SPSS 24.0 software. PCR data were analyzed by Student's t-test and presented as the mean ± standard error of the mean; p < 0.05 was considered statistically significant.
Results
3.1. CircRNA Identification. RNA-seq was performed to profile circRNA expression from three patients with SONFH and three healthy individuals. A total of 33,098 circRNAs were discovered. We found that 229 circRNAs were upregulated, and 116 circRNAs were downregulated ( Figure 1). Furthermore, we analyzed the category and distribution of the circRNAs (Supplemental figure 2).
GO, KEGG, and Reactome Analysis.
To infer the biological functions of the dysregulated circRNAs in the pathogenesis of SONFH, GO, KEGG, and Reactome analyses were used in this study to predict the function of circRNAs. GO analyses revealed that the most enriched GO terms were in cellular components, molecular functions, and biological processes. The most enriched KEGG pathways included ubiquitin-mediated proteolysis and protein processing in the endoplasmic reticulum. The top two pathways in the Reactome analysis were M phase and G2/M transition (Figure 2).
Validation of RNA-seq by qRT-PCR.
To verify the RNA-seq data, eight selected circRNAs were further validated by qRT-PCR. The results were consistent with the RNA-seq data (Figure 3).
Prediction of circRNA-miRNA Interaction Networks
and Target Genes for miRNAs. To explore the molecular mechanism and functions of the circRNAs, we investigated potential miRNAs binding with the circRNAs (Figure 4). The circRNA-miRNA-mRNA interaction network of the eight candidate circRNAs was predicted by miRanda and mapped by Cytoscape ( Figure 5).
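Once the circRNA-miRNA and miRNA-mRNA pairs have been predicted, assembling and querying the interaction network is straightforward; the sketch below uses networkx. Apart from circ_0004692, hsa-miR-222-3p and SETD2, which are discussed later in the text, the node names and all of the edges are placeholders rather than actual miRanda predictions.

import networkx as nx

# Illustrative edge lists; real edges would come from the miRanda predictions
circ_to_mir = [("circ_0004692", "hsa-miR-222-3p"),
               ("circ_0004692", "hsa-miR-X"),
               ("circ_B", "hsa-miR-222-3p")]
mir_to_gene = [("hsa-miR-222-3p", "SETD2"),
               ("hsa-miR-X", "GENE_Y")]

g = nx.DiGraph()
g.add_edges_from(circ_to_mir, kind="circRNA-miRNA")
g.add_edges_from(mir_to_gene, kind="miRNA-mRNA")

# miRNAs targeted by more than one circRNA are candidate hubs of the sponge network
hubs = [n for n in g.nodes if n.startswith("hsa-miR") and g.in_degree(n) > 1]
print("shared miRNAs:", hubs)
print("paths circ_0004692 -> SETD2:",
      list(nx.all_simple_paths(g, "circ_0004692", "SETD2")))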
Discussion
In recent years, molecular biology has advanced considerably, and noncoding RNAs are attracting significant attention in the field of medicine. MiRNAs play important roles in the regulation of transcription [13]. An increasing number of studies have investigated differentially expressed miRNAs related to SONFH, identifying hundreds of such miRNAs [14]. One study compared circulating miRNAs in serum between SONFH patients and control individuals. Their data indicated that circulating miRNAs in the serum might play notable roles in the development of SONFH and act as diagnostic biomarkers [15]. Kao et al. compared miRNAs in peripheral blood by microarray and PCR, implying the possibility of using miRNAs as novel diagnostic or therapeutic targets [16].
Compared with miRNAs, circRNAs are more promising diagnostic biomarkers because they are more stable [17]. Being closed loops, circRNAs are not easily degraded by exonucleases. In addition, the half-life of circRNAs is double that of linear RNAs [18]. CircRNAs are reproducibly and easily detected in clinical blood samples. Stable and enriched circRNAs have been found in peripheral blood [19,20], as well as in blood components such as exosomes [21], plasma [22], platelets [23], erythrocytes [24], and mononuclear cells [25].
For this study, we used peripheral blood. Our study found 229 upregulated circRNAs and 116 downregulated circRNAs. We further selected eight circRNAs for preliminary qRT-PCR validation based on a combination of previous studies and the circRNA-miRNA prediction network (Table 2) [15, 26-29]. The results indicated that the RNA-seq data were reliable and worthy of further study.
The use of GCs, whether oral, intravenous, or even inhaled, may alter the characteristics of peripheral blood and strongly affect the blood supply to the femoral head. Fat embolism and coagulation disorders are two of the most common causes of interrupted blood supply. GCs could directly or indirectly lead to hypofibrinolysis and thrombophilia, dysfunction and apoptosis of endothelial cells, lipid metabolism alterations, and platelet activation, which are followed by poor blood flow, ischemia, and eventually osteonecrosis [30-33]. Nevertheless, the effectiveness of GCs differs among individuals owing to varied GC sensitivity [34].
As compared to invasive organ biopsy, the blood-based biomarker assay is a relatively economical and noninvasive method to detect disease, owing to its ease of accessibility and the low risk associated with sample collection. In previous studies, noncoding RNA expression patterns in peripheral blood have been used as the basis for the detection of disease [35][36][37][38][39][40]. However, peripheral blood can be affected by various factors. It is possible that changes in the expression profile of the disease reflect shifts in cell populations [41], which is a potential confounding factor to be considered.
Although we identified the differentially expressed circRNAs in SONFH, the underlying mechanism remains poorly understood. With the advance of high-throughput sequencing technologies, GO annotations have been good predictors of the functions and trends of genes [42]. KEGG is a reference knowledge database that describes the functional pathways that contribute to disease processes, and it is extensively used in enrichment analyses [43,44]. Reactome analysis is utilized to visualize mRNA expression data and can be combined with other databases [45]. The molecular mechanism underlying the interactions of circRNAs and miRNAs in SONFH has not been deciphered. Therefore, the circRNA-miRNA-mRNA network was built based on our RNA-seq data. Understanding the interaction between the different pathways could provide novel strategies for managing bone disease [46]. These original findings might enhance our understanding of the functions of circRNAs in the mechanism of SONFH. For example, circ_0004692 may control hsa-miR-222-3p, which could further regulate the expression of the target gene SETD2. However, a single circRNA interacts with multiple miRNAs, and the target gene is regulated by many miRNAs. Network prediction may enrich future studies with novel perspectives to confirm the association between these dysregulated circRNAs and miRNAs in SONFH.
Conclusions
In conclusion, by RNA-seq, we identified 345 differentially expressed candidate circRNAs in SONFH. We also preliminarily confirmed the results by qRT-PCR with a small sample and conducted bioinformatics analysis to predict their functions. Our study provides a novel theoretical basis for further research on the functions of circRNAs in SONFH. However, large-sample validation and studies on specific regulatory mechanisms are required to realize the potential value of circRNAs as diagnostic biomarkers for SONFH in a more complete and in-depth manner.
Data Availability
Raw sequence reads are presented in the Sequence Read Archive (SRA) database.
"year": 2019,
"sha1": "79a23bfe9aab355ec09edc9d4a37c37b21bef8cd",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2019/8759642",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "674f2c8a8ebf5d869f5a2f6ff40eab4408953771",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Self-Nano-Structuring in SrTiO 3 : A Novel Strategy for Enhancement of Thermoelectric Response in Oxides
Nano-structuring is recognised as an efficient route for enhancing thermoelectric response. Here we report a new synthesis strategy for nanostructuring oxide ceramics and demonstrate its effectiveness on an important n-type thermoelectric, SrTiO 3 . Ceramics of Sr 0.9 La 0.1 TiO 3 with additions of B 2 O 3 were synthesized by the mixed oxide route. Samples were sintered in air followed by annealing in a reducing atmosphere. Crystallographic data from X-ray and electron diffraction showed Pm3̄m cubic symmetry for all the samples. High resolution transmission electron microscopy (HRTEM) showed the formation of a core-shell type structure within the grains for the annealed ceramics. The cores contain nanosize features comprising pairs of nano-size voids and particles; the feature sizes depend on annealing time. Atomic-resolution, high-angle annular-dark-field imaging and electron energy loss spectroscopy in the scanning transmission electron microscopy (STEM-HAADF-EELS) showed the particles to be rich in Ti and the areas around the voids to contain high concentrations of Ti 3+ . Additionally, dislocations were observed, with significantly higher densities in the shell areas. The observed dislocations are combined (100) and (110) edge dislocations. The major impact of the core-shell type microstructures, with nano-size inclusions, is the reduction of the thermal conductivity. Sr 0.9 La 0.1 TiO 3 ceramics containing grain boundary shells of size 1 µm and inclusions in the core of 60 to 80 nm exhibit a peak power factor of 1600 µW/m.K 2 at 540 K; at 1000 K they exhibit a low thermal conductivity (2.75 W/m.K) and a power factor of 1050 µW/m.K 2 , leading to a high ZT of 0.39 ± 0.03. This is the highest ZT reported so far for Sr 0.9 La 0.1 TiO 3 -based compositions. This nanostructuring strategy should be readily applicable to other functional oxides.
thermal conductivity. Over the last two decades, improvements of the ZT value have been achieved in traditional thermoelectric materials and to some extent in newly developed oxide thermoelectrics by micro- and nano-structuring of the materials 1,4-7 . Part of the driving force for nanostructuring has been the need to decouple the closely related material properties S, σ and k and thereby increase the thermoelectric figure of merit ZT beyond that in normal bulk materials 8-12 .
For many nanostructured bulk materials, the enhancement in the value of ZT can be attributed to a significant reduction in the lattice thermal conductivity as a result of changes in structure and local chemistry, which give rise to a high density of phonon-scattering interfaces 13 . However, the introduction of such interfaces can be counterproductive, as it will generally result in a significant reduction of the electrical conductivity 2 . An alternative approach for the reduction of thermal conductivity is the fabrication of nanocomposite structures, for instance by compaction of the main material with nanosize inclusions, or by in-situ nano-inclusion formation by means of precipitation 14-16 . Here, the main role of nanoinclusions is to generate additional phonon scattering centres without severely reducing the bulk electrical conductivity. The size, shape and volume fraction of the nanoparticles have a significant influence on the scattering efficiency 17 . However, it is reported that inclusions could increase electrical conductivity while reducing thermal conductivity 18 . The general approach for nanostructuring thermoelectrics involves the introduction of nanosized features such as voids, particles with round or faceted morphology, or atomic scale platelets dispersed within the grains or the grain boundaries of the bulk material to reduce thermal conductivity.
In traditional thermoelectrics, significant ZT improvements have been achieved in many systems: in half-Heusler alloys by adding nano-ZrO 2 19 , in Bi 2 Te 3 by adding nano-SiC particles 20 , and in Yb 0.2 Co 4 Sb 12+y by dispersing in situ partially oxidized Yb 2 O 3 nanoparticles 21 . It has also been reported that more significant enhancements can be achieved by embedding metal or conductive nanoparticles into the matrix: examples include lead and antimony in PbTe 22 and antimony in Yb y Co 4 Sb 12 23 .
Metal oxides have great potential as thermoelectric materials for high temperature applications, with SrTiO 3 being considered one of the most promising n-type oxides 2,24 . It crystallizes in the simple cubic Pm3̄m perovskite structure, has a high S value of 600 µV/K and is normally utilised as an insulator. However, by doping the Sr sites with lanthanides or the Ti sites with Nb, it becomes an n-type semiconductor 25,26 . The thermoelectric power factor, S 2 σ, of La-doped single-crystal strontium titanate is comparable with that of traditional thermoelectric materials such as Bi 2 Te 3 2 . Despite these excellent electronic transport properties, the lattice thermal conductivity of SrTiO 3 is comparatively high, limiting its usability for practical applications. Various strategies have been employed to reduce the thermal conductivity of SrTiO 3 ceramics, including controlling the grain size 27 simultaneously increases the Seebeck coefficient and the electrical conductivity 38 .
Here, we present a novel strategy for self-nanostructuring in oxide ceramics and demonstrate its effectiveness for optimising the transport properties of strontium titanate based thermoelectrics. Through the use of different sintering atmospheres, a distinct core-shell grain microstructure is achieved. In turn, the core of the formed grains is itself nano-structured, comprising a matrix surrounding uniformly distributed pairs of nano-size precipitates and voids. Atomic level imaging and chemical characterization, using scanning transmission electron microscopy (STEM), high-angle annular dark field (HAADF), and electron energy loss spectroscopy (EELS) has been employed to investigate the microstructural changes that occur in Sr 0.9 Ln 0.1 TiO 3 upon annealing in a reducing atmosphere, with a particular focus on the formation of these core-shell type structures containing pairs of voids and inclusions and how they affect the thermoelectric response of the material. This approach to controlling the nanostructure of thermoelectrics could find much wider applicability in the routes to synthesise future target materials and achieving higher performance.
Experimental:
Ceramics of Sr 0.9 La 0.1 TiO 3 were produced by the standard mixed oxide route.
X-ray Diffraction
XRD spectra of L10 samples sintered in air and then annealed at 1350 °C for different times are shown in Figure 1a. For convenience, these samples will be referred to as 0h, 12h, 24h,
Scanning Electron Microscopy
To evaluate the sample microstructures after sintering and annealing, SEM characterization was initially performed. Figure
Transmission Electron Microscopy.
To provide further insight into the details and nature of the core-shell type structure, conventional TEM and advanced electron microscopy using atomically resolved STEM and EELS were performed. TEM data for the shell and core areas of an L10 sample annealed for 24 h in Ar-5%H 2 are presented in Figure 4. The presence of a high density of dislocations in the shell area is the main feature of the microstructure in Figure 4a. Some dislocations are also visible in the core area of the sample. Dislocations are common lattice imperfections in un-doped single crystal and polycrystalline SrTiO 3 prepared in air 47-49 . Some of the nanosized inclusions observed in the SEM microstructure analysis can also be seen in the core area in this micrograph (arrowed, blue in colour). Higher magnification TEM images of the core area for the sample, Figure 4(b, c), show a uniform distribution of inclusions, with sizes ranging from 20 nm to 80 nm. The macroscopic morphology of these inclusions can be broadly described as a 'peanut' or 'dumbbell' shape; closer inspection shows that the inclusions actually comprise two 'sub-structures' (see inset Figure 4b). Images acquired from such particles and the surrounding matrix show that the particles and the matrix of the core may differ in structure (Figure 4c). Furthermore, as noted above, much lower densities of dislocations were observed within the cores, but, when present, the dislocations were frequently observed to be connecting sets of particle-void pairs. More detailed analysis of the secondary phase was performed using precession electron diffraction tomography, allowing the complete reciprocal lattice of the crystal structure to be determined. The reciprocal lattice projections were found to be a superstructure of the parent perovskite structure of SrTiO 3 , and projections of the crystal structure along one of the perovskite <100> directions and along one of the cubic <110> directions are shown in Figure 5a and 5b respectively. The superlattice reflections between the parent perovskite reflections indicate a much larger unit cell, with a volume potentially as large as 1800 Å 3 rather than the perovskite cell volume of 64 Å 3 . The precise atomic arrangement that gives rise to the structure is still under investigation, but the high degree of coherency between this phase and the parent perovskite structure is clear from the reciprocal lattice reconstructions, hence the strong directionality of the nanosized features in Figure 3. The small size of these phases, the relatively weak superstructure reflection intensity and the high degree of coherency suggest that this phase would not be easily identified in conventional XRD analysis.
STEM-HAADF-EELS
In order to provide further insight into the details of the nano size features and dislocations, atomically resolved STEM imaging and EELS measurements were performed. Firstly, the particle-void nature of the inclusions was independently confirmed by low loss EELS measurements of embedded nanoinclusions, demonstrating a drop in relative thickness (see Figure S3) consistent with the presence of a nano-void. The chemistry of the particle part of the inclusion can be further elucidated by looking at the near edge fine structure differences of the Ti L 2,3 and O K EELS edges from a particle/void and matrix area presented in Figure 7a. Figure 7b shows a Ti L 2,3 spectrum extracted from the Sr 0.9 La 0.1 TiO 3 matrix: the spectrum shows the characteristic white line shape of the edge, with the additional splitting of the L 3 and L 2 peaks into e g , t 2g sub-components, corresponding to the Ti +4 octahedral coordination, as expected from SrTiO 3 type compounds 50,51 . The matrix spectrum is plotted against a Ti L 2,3 spectrum extracted from the particle part of the inclusion. It can be readily seen that the e g -t 2g splitting of the L 3 and L 2 peaks is far less pronounced, while the onset of the edge is slightly shifted to lower energies, indicating that the valence of Ti in the particle is reduced towards Ti +3 . Similarly, the near edge fine As mentioned earlier, dislocations were observed in the core and shell areas of the sample.
The nature of the dislocations was analysed using HAADF images by Fourier filtering to enhance the visibility of the location of the dislocation core (as presented in Figure 8a-c). The application of a Fourier filter on the (100) and (011)
Thermoelectric Response
The air-sintered samples are, as expected, insulators and exhibit low electrical conductivity, consistent with the transport properties of other air-sintered strontium-titanate-based thermoelectrics 32 . Thus, the discussion will focus on the transport properties of samples annealed under reducing conditions. However, the discussion of thermal properties includes the air sintered sample as a reference baseline.
The transport properties of the samples are presented in Figure 9. Although it is difficult to determine the exact structure and chemistry of the nano inclusions, the STEM-EELS data point towards a Ti-rich structure, still containing Sr, and exhibiting a high level of oxygen deficiency. This phase could be akin to a reduced ternary titanate or a Sr-doped TiO x Magneli-type phase 53 . The electrical conductivity of TiO x -based Magneli phases is high, ranging from 800 S/cm to 300 S/cm at temperatures of 450 K to 1000 K, depending on the oxygen deficiency level 54,55 . Therefore, particles of a similar nature within the microstructure would have a positive effect on the overall electrical conductivity. The pore components of the nano inclusions will, by comparison, have low electrical conductivity. However, since there is no significant reduction in the overall macroscopic density of the annealed samples, it would appear that the effect of the voids on the overall electrical conductivity is minimal. Another microstructural feature that will influence electrical conductivity is the presence of dislocations. The full characterization of the types of dislocations present in these materials is beyond the scope of this study and requires further exploration. However, an atomistic simulation study of the role of <100>{011} edge dislocations, similar to that highlighted in Figure 8, on the defect chemistry and oxide ion transport properties of SrTiO 3 found that oxygen vacancies close to the dislocation core have lower formation energies 56 . Thus dislocations facilitate the removal of oxygen from the structure and aid the formation of Ti 3+ ions; in turn these are beneficial to the enhancement of the electrical conductivity 56 . Additionally, dislocations (as structural defects) can contribute to phonon scattering and help to reduce thermal conductivity 36 .
The absolute values of the Seebeck coefficient range from 85 to 240 µV/K in the temperature range 300 K to 1000 K and decrease systematically with increasing annealing time (Fig. 9b).
As discussed above, this decrease is mainly attributed to an increase in carrier concentration. To further understand the electrical conduction in the nanostructured samples, the carrier concentrations and mobilities were calculated from the electrical conductivity and Seebeck coefficients. The carrier concentrations for the samples are calculated from the modified Heikes' equation, where A is the number of available sites for carriers per unit cell. For the cubic perovskite structure A is 1, and V is the volume of the unit cell obtained from the refinement of the XRD data shown in Figure 1b. The factor e/k is the ratio of the electronic charge (e) to the Boltzmann constant (k) and is approximately 0.011587. S is the Seebeck coefficient as shown in Figure 9b.
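A worked numerical sketch of these estimates is given below. The function names are ours; the Seebeck coefficient, electrical conductivity and thermal conductivity are round illustrative values of the order reported in Figure 9 rather than actual data points, and a lattice parameter of roughly 3.9 Å is assumed for the cubic cell. The occupancy form of the modified Heikes' expression used here, n = A / {V [exp(|S|e/k) + 1]}, is the standard one consistent with the variables defined above, and the mobility follows from the relation µ = σ/(ne) given in Equation (2) below.

import math

CELL_VOLUME = (3.905e-10) ** 3     # assumed cubic cell volume [m^3]
E_OVER_K = 0.011587                # e/k as quoted above, per (microvolt/K)
E_CHARGE = 1.602e-19               # electronic charge [C]

def heikes_carrier_concentration(seebeck_uV_per_K, a_sites=1):
    """Modified Heikes' estimate: site occupancy from S, divided by the cell volume."""
    occupancy = 1.0 / (math.exp(abs(seebeck_uV_per_K) * E_OVER_K) + 1.0)
    return a_sites * occupancy / CELL_VOLUME            # carriers per m^3

def mobility(sigma_S_per_m, n_per_m3):
    """Equation (2): mu = sigma / (n e)."""
    return sigma_S_per_m / (n_per_m3 * E_CHARGE)        # m^2 / (V s)

S = 150.0          # |Seebeck coefficient|, microvolt / K (illustrative)
sigma = 5.0e4      # electrical conductivity, S / m (illustrative)
kappa = 2.75       # total thermal conductivity, W / (m K) (illustrative)
T = 1000.0         # temperature, K

n = heikes_carrier_concentration(S)
mu = mobility(sigma, n)
power_factor = (S * 1e-6) ** 2 * sigma                  # W / (m K^2)
zT = power_factor * T / kappa

print(f"n  = {n * 1e-6:.2e} cm^-3")
print(f"mu = {mu * 1e4:.2f} cm^2/(V s)")
print(f"PF = {power_factor * 1e6:.0f} microW/(m K^2), ZT = {zT:.2f}")

With these illustrative inputs the script returns a carrier concentration of a few times 10^21 cm^-3, a power factor slightly above 1000 microW/(m K^2) and a ZT of roughly 0.4, i.e., the same order as the values reported for the annealed ceramics.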
The modified Heikes' equation is based on the assumption that only one electron is permitted on a given site and that both the spin and orbital degeneracies are negligible 60 . The contribution of the vibrational entropy is also assumed to be zero. Therefore, the calculated values of carrier concentration are expected to be higher than the real values. The carrier mobility can be calculated from the carrier concentration and the electrical conductivity (σ) using the following equation:

µ = σ / (n e) (2)

The calculated values for carrier concentration and charge mobility are shown in Table 1.

In spite of the high power factor, it is the high thermal conductivity of strontium titanate that limits its use for thermoelectric applications 2 ; a reduction of thermal conductivity is essential for further improvement of the thermoelectric response of this material. The total thermal conductivities of the air-sintered and the annealed samples are presented in Figure 9d. The air-sintered samples show the expected very high thermal conductivity, typically 7.8 W m -1 K -1 at 350 K to 5.0 W m -1 K -1 at 1000 K, being slightly lower than that for un-doped polycrystalline, air-sintered strontium titanate 32 . To explore the relationship between the size of the void-particle nanostructures and thermal conductivity, the size of the longest void-particle pairs in each of the samples (evaluated from the BSE-SEM images) and the lattice thermal conductivity at 1015 K were plotted against the annealing time spent in the reducing atmosphere (Figure 11). It can be seen that there is a direct correlation between lattice thermal conductivity and void-particle size; the smaller the size of the nano-sized features, the lower the lattice thermal conductivity. This trend is valid for all the measurement temperatures for thermal conductivity. However, we do not have reliable data for the density of void-particle pairs in the different samples.
Conclusions
We have established a new path towards engineering nanostructures in strontium titanate based thermoelectrics which may be applicable to other oxides. High density Sr 0.9 La 0.1 TiO 3 polycrystalline samples, with regular polygonal grains having a narrow grain size distribution and average size ~ 7 µm, were synthesised through liquid phase sintering by addition of boron and sintering in air atmosphere; they were then annealed in a reducing environment.
This combined heat treatment led to the formation of a complex nano-structure within the grains.
X-ray diffraction confirmed the Pm3̄m cubic crystal structure for all the samples. SEM revealed a core-shell type microstructure with the presence of nano-size features within the core area.
HRTEM showed that the nano-size features are in the form of pairs of nano-sized voids and particles, with their sizes depending on the annealing time. Atomic-resolution STEM-HAADF-EELS characterization in an aberration-corrected microscope showed that the precipitates are rich in Ti and that the areas around the voids contain a high concentration of Ti 3+ . Additionally, a high density of dislocations was observed in the shell areas; dislocations are also present in the core of the nano-structures, but with a lower density. Their presence may enhance electrical conductivity, although this aspect of the structure remains to be explored in more detail.
The self-nano-structured Sr 0.9 La 0.1 TiO 3 ceramics showed a high power factor of 1600 µW/m.K 2 to 1050 µW/m.K 2 at temperatures of 600 K to 1015 K. However, the major impact of nano-structuring was the reduction of thermal conductivity. Nano-structured Sr 0.9 La 0.1 TiO 3 ceramics with a shell size of ~1 micron and inclusions of 60 to 80 nm exhibit a low thermal conductivity of k = 2.75 W/m.K at 1015 K, leading to a high ZT of 0.39 ± 0.03 at this temperature. This is the highest ZT achieved for the highly studied Sr 0.9 La 0.1 TiO 3 composition 15,25,26,34,59,69-71 and for other 10 mole % lanthanide doped SrTiO 3 thermoelectrics 25 ; a summary of published data is provided in Table S1. The study demonstrates a powerful nanostructuring strategy for significantly enhancing the performance of thermoelectric oxides; the approach could find much wider application in providing valuable guidance in the routes to synthesise future target materials.
"year": 2019,
"sha1": "d55d50bb80d2f47f25c4dbae339b4a2b85c1b26f",
"oa_license": "CCBY",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsami.9b06483",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "bc2b863925ed2306c1ac6762c9a3fb3e962ef322",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
Analysis of GNSS, Hydroacoustic and Optoelectronic Data Integration Methods Used in Hydrography
The integration of geospatial data in hydrography, performed using different measurement systems, involves combining several study results to provide a comprehensive analysis. Each of the hydroacoustic and optoelectronic systems is characterised by a different spatial reference system and a different method for the technical implementation of the measurement. Therefore, the integration of hydrographic data requires that problems in selected fields of electronics, geodesy and physics (acoustics and optics) be solved. The aim of this review is to present selected fusion methods applying the data derived from Global Navigation Satellite System (GNSS), Real Time Kinematic (RTK) measurements, hydrographic surveys, a photogrammetric pass using unmanned vehicles and Terrestrial Laser Scanning (TLS), and to compare their accuracy. An additional goal is the evaluation of data integration methods according to the International Hydrographic Organization (IHO) S-44 standard. The publication is supplemented by implementation examples of the integration of geospatial data in the Geographic Information System (GIS). The methods described indicate the lack of a uniform methodology for data fusion due to differences in both the spatial reference systems and the techniques used. However, the integration of hydroacoustic and optoelectronic data allows for high accuracy geospatial data to be obtained. This is confirmed by the methods cited, in which the accuracy of integrated geospatial data was in the order of several centimetres.
Introduction
The marine coastal zone is the most dynamic (in terms of geomorphological changes) area on the Earth. It includes the interface of the land and the sea and is under constant impact of the atmosphere, hydrosphere and intensive human activities [1,2]. Due to the intensive human use of coastal areas as well as the results of climate change, increased precipitation, rising sea levels and storm surges have occurred in this area [3]. Therefore, coastal zone monitoring, including bathymetry and coastal topography should be carried out.
In view of the rapid development of the devices and systems applied in hydrography, bathymetric surveys have been conducted using hydroacoustic methods [4]. The operation of hydroacoustic devices [5] is based on the phenomenon of acoustic location. The system sends out a high frequency sound wave into the water and then records the vibrations of • A positioning system is used to determine the coordinates of the vessel's position.
In hydrography, the most commonly applied positioning system is a Differential Global Positioning System (DGPS) or a Real Time Kinematic (RTK) receiver [7]. It determines the position coordinates that are presented in curvilinear coordinates (B, L, h) in relation to the reference ellipsoid used or in Cartesian coordinates (X, Y, Z); • A sound velocity probe determines the sound velocity in water. The propagation of sound waves in water can be measured directly with a Sound Velocity Profile (SVP) or indirectly using a Conductivity, Temperature, Depth (CTD) sensor which measures the conductivity, hydrostatic pressure and temperature in seawater and then, based on these physicochemical variables, determines the sound velocity, as well as the seawater density and salinity [8]. A measurement of a sound wave in a vertical distribution is expressed in a vertical datum, while in a water column, it is expressed in a local datum associated with the sensor location; • An Inertial Navigation System (INS) records the pitch, roll and yaw angles (RPY angles) of a vessel. It enables the determination of the vehicle's orientation based on the knowledge of the RPY angles in the presence of disturbances due to waves and wind. Devices equipped with sensors of this type measure linear accelerations and rotation angles in three planes (X, Y, Z) in relation to a specified local system. Linear accelerations are determined by applying accelerometers, while the RPY angles are measured using a gyroscope; • A hydrometric station records and collects data concerning the water quantity status on a lake, reservoir or river. An instrument used to measure water levels is a staff gauge, while the device used for the same purpose is a tide gauge. Some hydrometric stations are equipped with a telemetric function which enables automated data transfer, i.e., a General Packet Radio Service (GPRS) or radio modem. Information on water levels during bathymetric surveys enables the expression of the measured depths in relation to a pre-determined vertical datum. Changes in water levels need to be recorded and included during the hydrographic data processing; • A Single Beam EchoSounder (SBES) is a device used for measuring depth in the vertical direction. A SBES generates a single, narrow-angle acoustic pulse, which enables the recording of only depth data following sounding profiles [9]. The disadvantage of this measurement is the lack of information on the depths between the profiles. A measurement system built from a Global Navigation Satellite System (GNSS) and a SBES can be a separate system to record the X, Y, Z data synchronically in the Universal Transverse Mercator (UTM) system; • A MultiBeam EchoSounder (MBES) is a system that records bathymetric data over a wide swath of the bottom, perpendicular to the direction of a vessel's movement. The transducer in a multibeam echosounder generates multiple acoustic beams. For this reason, bathymetric surveys conducted using a MBES can completely cover the studied bottom with depth data [10]. Multibeam echosounders are applied in both shallow and deep-water surveys. The data derived from this system are expressed in the vertical datum; • A SOund Navigation And Ranging (SONAR) is a device used to determine the location and classification of submerged objects using sound waves. A sonar emits sound pulses in water, which are then sent out and reflected off the bottom, fish or vegetation.
The returning sound pulses are converted into electrical signals. The measured velocity of sound wave propagation in water enables both the estimation of the depth of the object from which the wave was reflected and the identification of underwater objects. Sonar data provide information on underwater objects that may pose a navigational hazard for other vessels.

Monitoring studies of the terrestrial part of the coastal zone apply optoelectronic methods that use the properties of light to record and process geospatial data. The optoelectronic measurements provide the necessary support to hydroacoustic surveys, as they allow data to be acquired from the marine coastal zone in shallow water areas and those adjacent to the coastline [11]. The operation of optoelectronic devices involves the conversion of electrical signals into optical signals and of optical signals into electrical signals. The following devices and systems are used to carry out the geodetic and hydrographic tasks in optoelectronic measurements (Figure 1):
• A positioning system determines the position coordinates of an aerial vehicle. An optoelectronic system can be equipped with a GNSS receiver which records curvilinear coordinates (B, L, h) in relation to the reference ellipsoid applied or Cartesian coordinates (X, Y, Z). Where an optoelectronic device includes a low-accuracy GNSS system, it is reasonable to determine the coordinates of selected points in the field using satellite techniques; this makes it possible to correct the vehicle's coordinates;
• An image sensor is used to convert electromagnetic waves into electrical impulses, which are converted in the electronic system into an image of the scanned surface. In optoelectronic devices, an image is obtained using a so-called photodiode detector, a photomultiplier tube or Charge-Coupled Device (CCD) and Complementary Metal-Oxide-Semiconductor (CMOS) cameras. Image sensors are used in photo cameras, radars, sonars and Unmanned Aerial Vehicles (UAV) [12];
• An INS system is a device for measuring accelerations, rotation angles in three planes (X, Y, Z), and the Earth's magnetic field. One of the components of the INS system installed on aircraft is an Inertial Measurement Unit (IMU), i.e., a device comprised of sensors such as accelerometers, gyroscopes and magnetometers. The system provides information on the vehicle's motion parameters, i.e., the acceleration and velocity as well as the orientation in space [13]. The measurements of the IMU angles are taken in relation to the locally adopted coordinate system;
• A laser rangefinder is an instrument used to measure distance. A rangefinder sends out a laser pulse in the form of a laser beam, which is then reflected off the surface being measured and returns to the measurement instrument. The device circuit then calculates distances based on the time between the emission and reception of the laser pulse. A rangefinder is a stand-alone instrument, also available in the form of modules that are incorporated into considerably larger systems or integrated with additional systems, e.g., a camera or a GNSS receiver. Data logging is performed in a local system;
• A Terrestrial Laser Scanning (TLS) system measures the angle and distance between the instrument and the surface being measured. As a result of the reflection of a laser beam off the observed object, the distance between the instrument and the measurement point, as well as the horizontal and vertical angles, are determined. Surveys carried out using TLS provide geospatial data on the studied object or surface in the form of a point cloud, recorded in the local system of the device with X, Y, Z coordinates [14];
• An Airborne Lidar Bathymetry (ALB) system measures the distance from a flying aircraft to ground points. The airborne laser scanning system includes three main devices, i.e., a laser rangefinder interfacing with the GNSS and INS systems. The integration of data from these three measurement systems provides information on the position from which the distance measurement was taken, the distance itself and its direction in space. The device records the ground point coordinates in a rectangular coordinate system [15];
• A RAdio Detection And Ranging (RADAR) instrument determines the angle, distance (range) or velocity of an object. The radar operation involves the measurement of the time between the signal transmission and the recording of its echo.
During hydrographic surveys, radar supports automated measurement platforms in monitoring the environment and preventing hazardous events.
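As a simple illustration of how a SBES sounding is reduced to a depth, the sketch below combines a two-way travel time with a measured sound velocity profile. All function names, variable names and numbers are illustrative assumptions; a real processing chain also applies heave, tide and other corrections.

```python
import numpy as np

def mean_sound_velocity(depths_m, velocities_ms):
    """Depth-weighted mean sound velocity over the profile (trapezoidal average)."""
    segments = 0.5 * (velocities_ms[1:] + velocities_ms[:-1]) * np.diff(depths_m)
    return segments.sum() / (depths_m[-1] - depths_m[0])

def depth_from_travel_time(two_way_time_s, mean_velocity_ms, draft_m=0.0):
    """Depth below the water surface: one-way acoustic path plus transducer draft."""
    return 0.5 * two_way_time_s * mean_velocity_ms + draft_m

svp_depths = np.array([0.0, 2.0, 5.0, 10.0])                  # m
svp_velocities = np.array([1480.0, 1481.5, 1483.0, 1485.0])   # m/s
c_mean = mean_sound_velocity(svp_depths, svp_velocities)
print(round(depth_from_travel_time(0.0108, c_mean, draft_m=0.3), 2))  # ~8.31 m
```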
If bathymetric and topographic data are collected separately, it is difficult to use them together due to differences in accuracy, datums and formats. Therefore, data integration is essential. It is a process that involves collecting data from different sources and homogenising them in a database to provide a unified environment for modelling, processing and visualisation [16]. The first step toward solving the problems of integration is a description and comparison of the methods used to date.
The paper describes selected data integration methods that are important from the point of view of the authors of the review. This publication is structured as follows: the Introduction presents the issues and the motivation for discussing the GNSS, hydroacoustic and optoelectronic data integration methods used in hydrography, and points out that spatial data of various types, characterised by different coordinate systems, are involved. Chapter 2 contains a description of four data integration methods. The paper concludes with general conclusions that summarise its content; in particular, the accuracy of the selected GNSS, hydroacoustic and optoelectronic data integration methods used in hydrography is presented in that section.

Review of GNSS, Hydroacoustic and Optoelectronic Data Integration Methods

The method of data integration [17] was developed based on the tombolo phenomenon measurement campaign in Sopot in 2019, during which land GNSS measurements, laser scanning, hydrographic [18] and photogrammetric [19] surveys were performed. The authors of the article noted the indeterminacy problem of geodetic and hydrographic coordinate systems in data integration, accurately described the mathematical procedures used to bring the data to a uniform reference system, and then verified them on the measurement results from the campaign. The presented steps are necessary when integrating data derived from different sources. Figure 2 provides a simplified block diagram of this method.

The harmonisation process of a three-dimensional dataset [20] involves the determination of the scale factor, the three rotation angles around the three axes of the local coordinate systems and the translation vector, while considering the height coordinates of the three-dimensional sets in the transformation. In three-dimensional space, rotations around the axes are described using elementary rotation matrices [21], which are functions of the rotation angles around selected axes of the coordinate systems [22]. The harmonisation of three-dimensional data devoid of vertical deviations was implemented based on relations [17] in which x, y, z denote the point coordinates in the local base coordinate system X, Y, Z; x′, y′, z′ denote the point coordinates in the local modified coordinate system X′, Y′, Z′; and θ denotes a rotation angle. The harmonisation of spatial data with a detected deviation of their numerical representation from the vertical additionally uses the rotation matrix R, the partial rotation matrices U and V^T, and the scaling matrix Λ.
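For readability, the general form such a three-dimensional harmonisation typically takes is sketched below. This is a standard seven-parameter (Helmert-type) formulation together with a factorisation of the rotation consistent with the symbols listed above; the exact notation and layout used in [17] may differ.

\[
\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix}
= m\,\mathbf{R}(\theta_x,\theta_y,\theta_z)
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
+ \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix},
\qquad
\mathbf{R} = \mathbf{R}_z(\theta_z)\,\mathbf{R}_y(\theta_y)\,\mathbf{R}_x(\theta_x),
\qquad
\mathbf{R} = \mathbf{U}\,\boldsymbol{\Lambda}\,\mathbf{V}^{T}
\]

where m is the scale factor, R_x, R_y, R_z are the elementary rotation matrices, (t_x, t_y, t_z) is the translation vector, and the factorisation R = UΛV^T corresponds to the case of data with a detected vertical deviation.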
Before undertaking the data harmonisation process, it was decided that the target system would be the PL-2000 plane coordinate system, while the elevations would be expressed in the normal height system. The first stage of work involved the georeferencing [23] of the data derived from TLS, based on the presented mathematical assumptions. Initially, the TLS point cloud was recorded in an unspecified local coordinate system. The scans were then combined into a single point cloud and georeferenced in relation to the extreme and middle markers obtained from land GNSS measurements. The next stage of work involved the determination of the scale change coefficient with respect to the TLS point cloud and the reference GNSS RTK surveys, after which the rotation matrix and the translation vector were calculated. The characteristic point coordinates were then compared to the values derived from land GNSS measurements.
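The scale, rotation and translation between corresponding TLS and GNSS RTK points can be estimated, for example, with the SVD-based Kabsch/Umeyama procedure sketched below. This is an illustrative implementation, not the exact algorithm of [17]; the function and variable names (similarity_transform, tls_xyz, gnss_xyz) are assumptions.

```python
import numpy as np

def similarity_transform(src, dst):
    """Return scale m, rotation R and translation t so that dst ≈ m * R @ src + t."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
    R = U @ D @ Vt
    m = np.trace(np.diag(S) @ D) / np.sum(src_c ** 2)
    t = dst.mean(0) - m * R @ src.mean(0)
    return m, R, t

# tls_xyz, gnss_xyz: (n, 3) arrays of corresponding marker coordinates
# m, R, t = similarity_transform(tls_xyz, gnss_xyz)
# residuals = gnss_xyz - (m * (R @ tls_xyz.T).T + t)
```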
The next spatial data set, whose coordinate values were reduced to the values of the PL-2000 plane system's coordinates and the normal height system, was derived from UAV surveys. The obtained point cloud generated from the photogrammetric model was initially georeferenced, but its high inaccuracy necessitated the correction of the point locations. Therefore, the TLS cloud was adopted as a reference object in relation to the UAV cloud. At this stage, the main aim was to determine the transformation parameters from both clouds (TLS and UAV), from which control points were later determined. At the data processing stage, a scale difference was noted in the spatial sets of both clouds. Therefore, the Singular Value Decomposition (SVD) method was applied to obtain the components of the rotation matrices, from which, in the next step, the rotation angles around the X, Y, Z axes were determined. The last stage was the spatial rotation operation taking into account the offset vector V_OFF, where P_I denotes the adjustment point coordinates in the corrected coordinate system, P_II denotes the adjustment point coordinates in the corrected coordinate system after rotation, and V_OFF is the offset vector.
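The relation referenced above can be written generically as below; this is a plausible reconstruction consistent with the listed symbols, and the exact formula in [17] may differ.

\[
P_{II} = \mathbf{R}\,P_{I} + V_{OFF}
\]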
The bathymetric data were assigned the coordinates from land GNSS measurements, and the depths recorded by the echosounder were obtained in the target coordinate system. The transformations performed on the actual spatial data confirmed the effectiveness of the mathematical procedures used in the spatial data harmonisation process. The integration of data from GNSS, TLS, UAV and USV surveys (Figure 2) indicates the significance of the data harmonisation process. An important aspect is to acquire and identify the coordinates of the corresponding points in datasets with different coordinate systems. Georeferencing is a necessary component for data integration, particularly when these data are derived from different measurement instruments. The material, presented in the form of an article, is a valuable source of information on procedures for the transformation of data derived from hydroacoustic (USV) and optoelectronic (TLS and UAV) systems [17].

The data integration derived from hydroacoustic and optoelectronic systems is increasingly becoming a source of information on the changes taking place in the natural environment. An example of the use of data fusion in monitoring quantitative changes in water resources was a study conducted on composite data derived from hydroacoustic and optoelectronic measurements on the reservoirs of the Segura River Basin [24]. Due to climate change resulting in a reduction in the annual precipitation values, waterbodies in this region are becoming shallower [25]. Therefore, numerous measurement campaigns were conducted, including Aerial Laser Scanning (ALS), bathymetric sounding using a USV and a photogrammetric pass using two UAVs. Based on the data obtained from the surveys, a model was then generated to provide information about the total capacity, the volumes of sedimented materials in the reservoirs and their water retention capacity.

The presented data integration method [24] is based on the use of spatial information systems in the process of building spatial data models (Figure 3). The main works when developing the topobathymetric model involved the recording and processing of data, as well as assigning them a specified coordinate system. In order to record both the surface area of the waterbody with the adjacent area and the waterbody volume, two low-altitude passes with a spatial resolution of 0.05 and 0.2 m, using a Sony QX1 photogrammetric camera, were made. However, when a high-altitude flight was required, the Sony camera was installed on a Cessna 150 aircraft. Additionally, a measurement of the coordinates of Ground Control Points (GCP) was taken by the Global Positioning System (GPS) RTK method [26], which enabled the transformation of the photograph coordinates, expressed in a pixel system (of the image), to the World Geodetic System 1984 (WGS-84). The photographs were then classified and matched while creating a uniform X, Y, Z point cloud. The generated point cloud was then transformed to the European Terrestrial Reference System 1989 (ETRS89) and the elevations were referred to the European Vertical Reference System 1989 (EVRS89). The prepared data were implemented into a geodatabase in the ArcGIS software in order to create, at a later stage, a terrain model along with the processed data derived from the Light Detection And Ranging (LiDAR) system. Moreover, the photogrammetric data were used to create a Digital Terrain Model (DTM) in the form of a Triangulated Irregular Network (TIN) [27] using the ArcGIS software.
The processed photogrammetric data in the form of a TIN model enabled the data acquisition on the capacity of the studied reservoirs.
The next phase of work involved the development of bathymetric data derived from a GNSS RTK receiver and a USV equipped with an echosounder and a sonar. The integrated system recorded the vehicle's position data in the ETRS89 system. The depths were recorded in the EVRS89 system in relation to the mean sea level in Alicante, pursuant to the INSPIRE Directive [20].
In quantitative analyses of waters, high accuracy elevation data (which are often acquired using the ALS method) are necessary. The ALB survey was conducted with a scanning density of 0.5 points/m² and covered the area of the entire Segura River Basin. The height values, as for the depths, were recorded in the EVRS89 system in relation to the sea level in Alicante. The LiDAR data were converted using the ArcGIS 10.5 and LAStools software.
The process of bathymetric and optoelectronic data integration was carried out using the ArcGIS 10.5 software. The data derived from the USV were used to generate a Digital Surface Model Bathymetry (DSMB), while the data from the LiDAR and UAV helped acquire a Digital Surface Model Photogrammetry (DSMP). The photogrammetric data integration with the data acquired from laser scanning contributed to the improvement in the quality of the models obtained. The above models were combined with each other, thus creating a terrain model with a spatial resolution of 1 m. The implementation of the data integration process in the GIS is rather common [28,29]. However, it is the methods applied for acquiring spatial data that deserve special attention. For the purposes of the development of a DTM, tests were conducted using both hydroacoustic and optoelectronic systems. Additionally, the process of preparing data derived from different systems (which determines the accuracy of data integration) was discussed.
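The merging of the two surface models into a single topobathymetric raster was performed in ArcGIS in the original study; a hypothetical open-source equivalent using rasterio is sketched below. File names and the 1 m target resolution are chosen only for illustration.

```python
import rasterio
from rasterio.merge import merge

# Open the bathymetric (DSMB) and photogrammetric/LiDAR (DSMP) surfaces.
sources = [rasterio.open(p) for p in ("dsmb_usv.tif", "dsmp_lidar_uav.tif")]

# Mosaic them onto a common 1 m grid, keeping the first valid value per cell.
mosaic, transform = merge(sources, res=(1.0, 1.0), method="first")

profile = sources[0].profile
profile.update(height=mosaic.shape[1], width=mosaic.shape[2],
               transform=transform, count=mosaic.shape[0])
with rasterio.open("topobathymetric_model.tif", "w", **profile) as dst:
    dst.write(mosaic)
```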
Method of UAV and USV Data Integration according to Genchi S.A. et al.
The method of UAV and USV data integration [30] was developed based on UAV and USV measurements conducted in November 2018 and January 2019, respectively, in the Bahia Blanca Estuary. The estuary comprises a meandering tidal channel and tidal flats, i.e., a wide plain that is flooded at high tide [31]. It is a waterbody characterised by variable hydrodynamic conditions, affected by the tidal phenomenon. Figure 4 presents a diagram showing the main stages of the proposed UAV and USV data integration method.

Figure 4. A simplified block diagram presenting the UAV and USV data integration according to [30].
One of the aims of the study was to generate a topographical terrain model that covers the river mouth area using the Structure from Motion (SfM) method [32]. This involved the creation of a three-dimensional model from a set of photographs, based on the observation and description of the location of points on the basis of a change in the perspective, i.e., the location of camera positions. To this end, a photogrammetric pass was conducted using a DJI Phantom 3 standard quadcopter. When planning the pass in the intertidal zone, the atmospheric conditions associated with tidal flows were taken into account. The UAV measurement was taken at low tide, as it enabled the recording of a larger terrain area. Moreover, before the pass, the coordinates of seven GCPs and four checkpoints were determined using a GNSS RTK receiver. The first stage of data processing involved image recording and matching. It was described in detail in [33]. Next, the georeferencing process was undertaken. The point cloud was assigned a spatial reference WGS-84/UTM zone 20S based on the coordinates of GCPs, while the SfM algorithm was used to generate a dense point cloud. Furthermore, the model accuracy was checked by comparing selected coordinates of the three-dimensional model with the coordinates of control and ground points [34]. The accuracy was assessed by determining the Root Mean Square Error (RMSE). The RMSE of the interpolated Topographic Point Cloud (TPC) in relation to the ground points amounted to: 0.13 m for the northern coordinate, 0.15 m for the eastern coordinate and 0.007 m for the height coordinate. The SfM method applied enabled the generation of a point cloud comparable, in terms of accuracy and density, with the data acquired by means of ALS and TLS. The last data processing stage involved the cleaning of data covering the aquatic and coastal areas. The performed operation allowed a three-dimensional model without water coverage to be obtained.
Bathymetric surveys were carried out using a USV equipped with a GPS receiver and a SBES. As with the photogrammetric measurements, the atmospheric conditions were of importance. The bathymetric survey was conducted at high tide to record the depths and the spatial extent of water. Moreover, the depths obtained were referred to the tidal height. The information on the tides was acquired from the nearest hydrometeorological station. The bathymetric data required no georeferencing, as they were assigned coordinates from a GNSS receiver. Since the Bathymetric Point Cloud (BPC) did not cover the entire area with data, it was reasonable to perform data interpolation. The data were processed using the most well-known interpolation methods: Inverse Distance Weighting (IDW) [35], kriging [36], minimum curvature and natural neighbour. Next, the accuracy of each interpolated BPC was assessed in relation to the TPC using typical accuracy measures: the Mean Absolute Error (MAE) [37], the RMSE [38] and the coefficient of determination (R²):

\[
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left| y_i - \hat{y}_i \right|,
\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left( y_i - \hat{y}_i \right)^2},
\qquad
R^2 = 1 - \frac{\sum_{i=1}^{n}\left( y_i - \hat{y}_i \right)^2}{\sum_{i=1}^{n}\left( y_i - \bar{y} \right)^2}
\]

where n is the number of points, y_i is the y value for observation i, ŷ_i is the predicted value of y for observation i, and ȳ is the arithmetic mean of the y values.
According to the proposed accuracy assessment, the IDW method was selected. The RMSE value (0.18 m) of the interpolated BPC in relation to the TPC indicates a high degree of model fitting. The coefficient of determination also indicated a very high model fit (0.90). In turn, the coordinates of the BPC deviate from the TPC by 0.05 m on average (MAE).
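A minimal sketch of the comparison described above, a brute-force IDW interpolation scored against the TPC with MAE, RMSE and R², is given below; the array names and the power parameter are illustrative assumptions.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Brute-force inverse distance weighting (fine for small point sets)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w @ z_known) / w.sum(axis=1)

def mae(z_obs, z_pred):  return np.mean(np.abs(z_obs - z_pred))
def rmse(z_obs, z_pred): return np.sqrt(np.mean((z_obs - z_pred) ** 2))
def r2(z_obs, z_pred):
    ss_res = np.sum((z_obs - z_pred) ** 2)
    ss_tot = np.sum((z_obs - z_obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# z_pred = idw(bpc_xy, bpc_z, tpc_xy)
# print(mae(tpc_z, z_pred), rmse(tpc_z, z_pred), r2(tpc_z, z_pred))
```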
The final stage of work involved the integration of the two point clouds. In order to visualise the model, the overlapping points were removed from the BPC. The topobathymetric model (Figure 5) had a spatial resolution of 0.08 m for the topographic part and 0.5 m for the bathymetric part. The topobathymetric model shows the dynamic conditions of the area (the tide at different stages) and the topography. Therefore, the described example can be qualified as an attempt to use the data assimilation process [39] in data integration. Data assimilation based on a regional climate model or a surface model is a new issue that will contribute to the development of data integration methods in the future.
Method of LiDAR, NOAA and USGS Data Integration according to Gesch D. and Wilson R.
One of the first DTMs, a topographic-bathymetric elevation model of the coastal zone, was created by integrating the data [40] covering the Tampa Bay area (Figure 6). In view of the spatial extent of the area, the data measured by the National Oceanic and Atmospheric Administration (NOAA) were used [41], while the topographic data were acquired from the United States Geological Survey (USGS) database. Moreover, LiDAR data were used for the first time when developing a topobathymetric model. The research work was initiated as part of the project aimed at developing techniques and tools to facilitate the integration of data derived from different sources [42].
A key component in the creation of a model is to prepare input data which must have a uniform reference system. The first stage of works commenced with the construction of an elevation database. To this end, the topographic data covering the bay area were extracted from the USGS National Elevation Dataset (NED). The NED comprised data derived from both optoelectronic measurements and topographic maps. They were converted to a horizontal reference North American Datum of 1983 (NAD 83) and the heights were expressed in relation to the North American Vertical Datum of 1988 (NAVD 88). Since the extracted topographic data were recorded in the target datum, the georeferencing process was not implemented. However, for the purpose of processing the elevation data, the height unit of a foot was assigned, and the data transformation to the reference NAVD 88 was carried out.
The bathymetric data were processed in several stages. In the first step, the depth data (approx. 800,000 X, Y, Z points) were imported to the ArcView software. They were derived from 47 bathymetric soundings conducted in the years 1950-1956. The datasets overlapped because the surveys were repeated over this period. As a result, a process was implemented to select the bathymetric data with the most recent date. The next stage involved the height transformation of the reference system. The depth points were initially recorded in relation to the vertical reference system of the Mean Low Water (MLW) and the vertical reference system of the Mean Lower Low Water (MLLW). In view of the above, the transformation was performed using the VDatum tool [43-45], which allowed the depth data to be assigned the NAVD 88 system. The final stage of the depth data preparation involved the generation of a DTM in the form of a GRID model.
A very important aspect of this integration was the combination of the data derived from LiDAR and the depth data for the first time. The topographic data derived from the aerial LiDAR conducted by the University of Florida were converted for several test areas to demonstrate both the usefulness of incorporating the most recent high-resolution data and the high accuracy of the topobathymetric model. As part of the test work, the X, Y, Z coordinates were recorded and assigned NAD 83 and NAVD 88 systems and a GRID model was generated.
Before undertaking the main stage of work, representative bathymetric points and elevation points from USGS were selected to determine the coastline, and on its basis, the models were fitted. It was necessary because the data did not overlap in terms of time. After determining the coastline course, the points selected from it were converted with the ArcInfo software using the Thin Plate Spline (TPS) algorithm [46]. The final step in data processing was the mosaicking of the bathymetric and elevation grids. The automated mosaicking technique is indicated for a large area. The mosaicking process involves the processing of a set of several or a few dozen digital component images to obtain a single image.
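The TPS gridding step mentioned above was performed with the ArcInfo software in the original work; the sketch below shows an analogous thin plate spline fit with SciPy's RBFInterpolator, using synthetic shoreline points as a stand-in for the real data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic stand-in data: (n, 2) horizontal coordinates and near-zero elevations
# representing selected shoreline points.
shoreline_xy = np.random.rand(200, 2) * 1000.0
shoreline_z = np.random.normal(0.0, 0.05, 200)

tps = RBFInterpolator(shoreline_xy, shoreline_z, kernel="thin_plate_spline")

# Evaluate the spline on a regular grid to obtain a gridded surface.
xx, yy = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
grid_z = tps(np.column_stack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
```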
The resulting model ensured complete coverage of the area. The procedure for generating the depth and LiDAR models was the same as for the bathymetric and topographic USGS data [47]. The integration of LiDAR and NOAA data enabled the creation of a high-resolution raster (Figure 7).
Figure 7. Results of the integration of (a) bathymetric and LiDAR data; (b) depth and topographic data according to [40].
Discussion and Conclusions
Conducting complex spatial analyses of water areas requires an integrated approach to measurement and data integration. However, thus far no procedures have been developed to integrate hydrographic data from hydroacoustic and optoelectronic systems. The problem in processing a universal method of data integration probably results from the continuous development of measurement techniques, the presence of diverse spatial data with different coordinate systems and the specificity of the studied waterbody.
This paper is an attempt to analyse selected methods of data integration and additionally presents the associated processes of data acquisition and processing. The method review will contribute significantly to the development of the data integration model, which will be presented in the next publication.
The assessment of the applied mathematical assumptions in data integration [17] was verified on the basis of the characteristic point coordinates of the TLS cloud related to the PL-2000 plane coordinate system, which were compared to the coordinates obtained from land GNSS measurements. The deviation values did not exceed 0.016 m in the horizontal plane and 0.027 m in the vertical plane, which confirms the accuracy of the harmonisation process. No verification procedures were performed for the GNSS, TLS, UAV and USV data integration method [24].

In the case of the data integration accuracy in [30], the RMSE of the interpolated TPC in relation to the ground points amounted to 0.13 m for the northern coordinate, 0.15 m for the eastern coordinate and 0.007 m for the height coordinate. The accuracy of the point cloud generated by the SfM method is comparable with the data from LiDAR (accuracy at the level of 0.15-0.25 m). It is also worth paying attention to the procedure of selecting the best interpolation method for the bathymetric data. The values of MAE, R² and RMSE were calculated for the interpolated BPC with respect to the TPC. According to the proposed accuracy assessment, the IDW method was selected; the RMSE value (0.18 m) of the interpolated BPC in relation to the TPC indicates a high degree of model fitting, and the coefficient of determination also indicated a very high model fit (0.90). In turn, the integration of large-scale data [40] processed into two numerical models (BPC and TPC) was characterised by high accuracy. The accuracy was assessed by comparing the bathymetric model with precise reference data; as a result, an RMSE value of 0.43 m was obtained. The accuracy of the selected GNSS, hydroacoustic and optoelectronic data integration methods used in hydrography is presented in Table 1.

According to the International Hydrographic Organization (IHO) S-44 standard [48], hydrographic surveys described in this paper should be carried out with respect to Order 1a. The data used in the method proposed by Gesch D. and Wilson R. come from many sources and vary in time, and were therefore classified from the outset as not meeting the IHO requirements. In the other methods, the bathymetric measurements were not verified with indicators such as the Total Propagated Uncertainty (TPU) and the Total Vertical Uncertainty (TVU), and the data did not provide 100% bathymetric coverage. In summary, the presented methods do not meet the IHO requirements.
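As an illustration of the Order 1a criterion mentioned above, the maximum allowable TVU is commonly computed as sqrt(a² + (b·d)²) at depth d; the coefficients a = 0.5 m and b = 0.013 correspond to Order 1a in IHO S-44 but should be treated as an assumption to verify against the current edition of the standard.

```python
import numpy as np

def tvu_max_order_1a(depth_m, a=0.5, b=0.013):
    """Maximum allowable Total Vertical Uncertainty for IHO S-44 Order 1a."""
    return np.sqrt(a ** 2 + (b * depth_m) ** 2)

depths = np.array([2.0, 10.0, 30.0])
model_rmse = 0.43                       # vertical error of the model from [40]
print(tvu_max_order_1a(depths))         # ~[0.50, 0.52, 0.63] m
print(model_rmse <= tvu_max_order_1a(depths))
```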
The analysed examples show that there is no single data fusion scheme. This is due to the different specifications of the devices used, the research aims and the types of waterbodies. In all schemes, the data fusion was multi-stage and required the use of commercial software such as ArcGIS, CloudCompare and VDatum. The integration of hydroacoustic and optoelectronic data is a new issue that requires detailed study. An alternative to complex methods of spatial data integration is provided by machine learning methods which, using artificial intelligence, automate the process of creating models [49]. The model is built on a training set of reference patterns. One of the machine learning methods is Artificial Neural Networks (ANNs), which are used to transform coordinates with a small number of reference points or to create DTMs. Machine learning methods will help to improve the accuracy of integrated data, assuming that the training set and the analysed data come from the same devices and systems [50]. In addition, in the future, an integral system should be created in which data processing takes place already at the data acquisition stage.
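A minimal sketch of the machine-learning idea mentioned in the conclusions is given below: a small neural network that learns a local mapping between two coordinate systems from common points. The library choice (scikit-learn), the network size and the synthetic data are assumptions for illustration only, not part of the reviewed methods.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic common points: system B is system A shifted plus small noise.
src = np.random.rand(50, 3) * 100.0                              # system A
true_shift = np.array([12.3, -4.5, 0.8])
dst = src + true_shift + np.random.normal(0, 0.02, src.shape)    # system B

xs, ys = StandardScaler().fit(src), StandardScaler().fit(dst)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(xs.transform(src), ys.transform(dst))

pred = ys.inverse_transform(model.predict(xs.transform(src)))
print(np.sqrt(np.mean((pred - dst) ** 2)))   # residual of the learned mapping
```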
259111173 | pes2o/s2orc | v3-fos-license | Emerging Chemical Biology of Protein Persulfidation
Significance: Protein persulfidation (the formation of RSSH), an evolutionarily conserved oxidative posttranslational modification in which thiol groups in cysteine residues are converted into persulfides, has emerged as one of the main mechanisms through which hydrogen sulfide (H2S) conveys its signaling. Recent Advances: New methodological advances in persulfide labeling started unraveling the chemical biology of this modification and its role in (patho)physiology. Some of the key metabolic enzymes are regulated by persulfidation. RSSH levels are important for the cellular defense against oxidative injury, and they decrease with aging, leaving proteins vulnerable to oxidative damage. Persulfidation is dysregulated in many diseases. Critical Issues: A relatively new field of signaling by protein persulfidation still has many unanswered questions: the mechanism(s) of persulfide formation and transpersulfidation and the identification of “protein persulfidases,” the improvement of methods to monitor RSSH changes and identify protein targets, and understanding the mechanisms through which this modification controls important (patho)physiological functions. Future Directions: Deep mechanistic studies using more selective and sensitive RSSH labeling techniques will provide high-resolution structural, functional, quantitative, and spatiotemporal information on RSSH dynamics and help with better understanding how H2S-derived protein persulfidation affects protein structure and function in health and disease. This knowledge could pave the way for targeted drug design for a wide variety of pathologies. Antioxid. Redox Signal. 39, 19–39.
In the animal kingdom, H2S is produced in the transsulfuration pathway by the action of the pyridoxal 5′-phosphate (PLP)-dependent enzymes cystathionine β-synthase (CBS) and cystathionine γ-lyase (CSE; also known as CTH), but it can also be produced in the cysteine catabolism pathway by cysteine aminotransferase/mercaptopyruvate sulfurtransferase (MPST) (Fig. 1) (Kabil and Banerjee, 2014; Kabil et al., 2011). In addition, a fourth enzyme has been described, methanethiol oxidase (SELENBP1); it converts methanethiol into formaldehyde, hydrogen peroxide (H2O2), and H2S (Pol et al., 2018). Differentially expressed in different tissues (and even cellular compartments), these enzymes control H2S production with different efficiencies.
It has been estimated that the flux of sulfur into H2S is similar to that directed into glutathione (GSH) production (Kabil et al., 2011; Vitvitsky et al., 2012). However, the intracellular steady-state levels of H2S are quite low (submicromolar for most tissues), suggesting very high clearance and/or consumption. H2S is primarily oxidized in the mitochondria by sulfide:quinone oxidoreductase (SQR), yielding glutathione persulfide (GSSH), which can be further processed either by thiosulfate sulfurtransferase (TST, also known as rhodanese) or by persulfide dioxygenase (ETHE1) (Fig. 1) (Kabil and Banerjee, 2014; Kabil et al., 2011). How all these enzymes control the spatiotemporal production/distribution of H2S is still unclear, but knockout of some of them has been shown to have a clear phenotype in experimental animals.
CSE knockout mice show abnormal hind limb clasping and clenching reminiscent of Huntington's disease (Paul et al., 2014). Mice deficient in CBS suffer from severe growth retardation, and a majority of them die by 5 weeks of age. TST and MPST knockout mice develop obesity (Katsouda et al., 2022; Morton et al., 2016), while the knockout of ETHE1 in mice and the loss of its expression in humans result in ethylmalonic encephalopathy (Tiranti et al., 2009), suggesting that tight regulation of H2S production and consumption is important for normal cellular function.
One of the main mechanisms via which H2S is believed to convey its biological effects is protein persulfidation, an oxidative posttranslational modification in which thiol groups (RSH) in cysteine residues are converted into persulfides (RSSH). In this review, we focus on current advances in studying protein persulfidation, its biological roles, and some open questions that this research field is facing.
The initial term that was introduced to name this modification was "sulfhydration" (Mustafa et al., 2009); however, this term was ambiguous since no water molecule is introduced as the word "hydration" would imply. The process instead involves "sulfuration" because it involves the addition of a sulfur atom. The name "persulfidation" has gained the widest acceptance, although "hydropersulfide" is also in use. The IUPAC has suggested the names hydridodisulfide, disulfane, or dithiohydroperoxide. In this article, "persulfide" is used for the mixture of RSSH and RSS⁻ groups in aqueous solution at a certain pH, unless otherwise specified.
In terms of their chemical reactivity, persulfides have characteristics in common with thiols, disulfides, polysulfides, hydroperoxides, and sulfenic acids. This is one of the main reasons why dissecting their specific biological functions is difficult.
The S-H bond found in persulfides is weaker when compared with the corresponding thiol; therefore, the acidity of persulfides is expected to be higher. A computational estimate of the difference between the pKa of cysteine persulfide and cysteine gave a value of −4, which suggests that the pKa of cysteine persulfide is 4.3 (Cuevasanta et al., 2015). Recently, Alvarez's group experimentally determined the pKa of GSSH to be 5.45, 3.49 units below that of GSH (Benchoam et al., 2020). All these data suggest that in the range of physiological pH values, persulfides exist almost completely as anionic species. It is worth noting that in the case of proteins, interaction with other functional groups surrounding a persulfidated cysteine residue likely modulates its pKa.
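As a rough, worked illustration of this point, the Henderson-Hasselbalch relation with the quoted pKa values (and an assumed physiological pH of 7.4) gives:

\[
\frac{[\mathrm{RSS^-}]}{[\mathrm{RSSH}]+[\mathrm{RSS^-}]} = \frac{1}{1+10^{\,\mathrm{p}K_a-\mathrm{pH}}}
\]

For GSSH (pKa 5.45) at pH 7.4 this is 1/(1 + 10⁻¹·⁹⁵) ≈ 0.99, that is, about 99% in the anionic form, whereas the same estimate for GSH (pKa ≈ 8.94) gives only about 3%.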
Although the basicity represents only a particular affinity toward H⁺, the nucleophilicity and basicity are often correlated, since the stronger the basicity, the higher the nucleophilicity. Following that rule, RSSH should be weaker nucleophiles than RSH. However, a notable exception to this rule occurs when a vicinal (adjacent) atom carries a nonbonding electron pair. This has classically been referred to as the alpha effect (e.g., HOO⁻ is more nucleophilic than HO⁻). Thus, the presence of the alpha effect in RSSH makes them more nucleophilic than RSH. In addition, the lower pKa keeps RSSH in the correct ionized state for the reaction with electrophiles [Eq. (2)].
An illustration of this "super nucleophilic" feature of RSSH is the apparent rate constant for the reaction of albumin persulfide with an electrophile, 4,4′-dithiodipyridine, which is ~20 times higher than that of albumin's thiol (Cuevasanta et al., 2015). Persulfide of the one-cysteine peroxiredoxin (Prx) alkyl hydroperoxide reductase E from Mycobacterium tuberculosis reacted 43 times faster than did the RSH form of this protein. The nucleophilicity of RSSH is best demonstrated by their reaction with thiol alkylating agents, different disulfides, and electrophiles, such as 8-nitroguanosine 3′,5′-cyclic monophosphate (Artaud and Galardon, 2014; Ida et al., 2014) and methyl mercury (Abiko et al., 2015). Although RSH can also react with most of those substrates, RSSH do so faster.
Electrophilicity and stability
RSSH are sulfane sulfur-containing compounds. Sulfane sulfur is defined as a sulfur atom bound to two sulfurs or to a sulfur and an ionizable hydrogen. Therefore, in their protonated form, RSSH are weak electrophiles. When reacting with a nucleophile, nucleophilic attack can occur either on the inner sulfur, releasing H2S, or on the outer sulfur with elimination of the thiol (Fig. 2A). Unless the protein environment surrounding cysteine residues influences the electron density/electrophilicity of the two sulfur atoms, the former (Path a, Fig. 2A) is always preferred over the latter (Path b, Fig. 2A). The reason for this is that H2S has a lower pKa (6.98) than the average pKa of a thiol (pKa 8-9), which makes it a much better leaving group.
A notable exception to this rule would be the reaction of an RSSH with cyanide (Wood, 1987), where thiocyanate is formed [Eq. (3)]. During evolution, this reaction was used for enzymatic cyanide detoxification by rhodanese (TST).
FIG. 2. Electrophilic nature of organic persulfides and polysulfides. (A) Nucleophilic attack on the electrophilic sulfur (red circle) of RSSH could occur on either of the two sulfur atoms, with reaction path a being favored most of the time. (B) Instability of organic trisulfides is illustrated by their amine-induced or thiol-induced decomposition.
In regard to the reaction with thiols, experiments with either low-molecular-weight (LMW) persulfides (Artaud and Galardon, 2014; Bailey et al., 2014; Kawamura et al., 1966) or protein persulfides (Pan and Carroll, 2013) resulted in the release of H2S. This is important to highlight because thiols are present at millimolar concentrations inside cells. Recent work from Murphy's group on the development of the LMW mitochondria-targeted RSSH MitoPerSulf demonstrates how this molecule, when used in a biological context, readily reacts with GSH to release H2S (Miljkovic et al., 2022). However, some protein persulfides formed in the active site of enzymes can react with thiols following Equation (4). This reaction is called transpersulfidation (vide infra).
Both the nucleophilic nature and electrophilic nature of RSSH make them very unstable in solution and difficult to work with. For example, the real-time mass spectrometry (MS) of N-acetyl penicillamine persulfide showed that it decays with a half-life of 2.7 min (Wedmann et al., 2016); slightly higher values have been reported for the decay of CysSSH (Yadav et al., 2016). In both cases, the decay predominantly occurred through the reaction shown in Equation (5).
Of note, the trisulfides formed in this reaction are gaining much attention as potential precursors of RSSH (Bianco et al., 2019). However, cysteine and glutathione trisulfide are relatively unstable under physiological conditions, undergo amine-induced decomposition (Fig. 2B) (Brown and Bowden, 2022), and act as electrophiles, directly modifying cysteine residues (Switzer et al., 2021), and so, their biological effects should not be equated to those of RSSH.
Reaction with one- and two-electron oxidants: chemical basis for antiferroptotic and antioxidant effects

RSSH have a lower dissociation energy of the S-H bond than thiols (293 kJ/mol vs. 385 kJ/mol) and a much more favorable one-electron reduction potential [E°′(RSS•/RSS⁻) = 0.68 V vs. E°′(RS•, H⁺/RSH) = 0.96 V] (Koppenol and Bounds, 2017), which makes them good one-electron reductants. Furthermore, in perthiyl radicals, the unpaired electron is delocalized between the two sulfur atoms, leading to resonance stabilization and increased stability of these radicals (Everett and Wardman, 1995; Everett et al., 1994). Depending on the one-electron oxidant, they can be involved in either electron transfer or hydrogen atom transfer [Eqs. (6) and (7)].
The scavenging of alkoxyl and peroxyl lipid radicals is of particular importance, as they lead to cell membrane damage and ferroptotic cell death (Jiang et al., 2021). LMW RSSH efficiently prevented ferroptotic cell death caused by either inhibition or silencing of glutathione peroxidase-4 (Wu et al., 2022). These are exciting observations that demonstrate the pharmacological potential of LMW RSSH. Silencing CSE aggravated lipid peroxidation, while silencing ETHE1 prevented this process, suggesting that endogenously generated LMW RSSH could also act as antiferroptotic agents. This is curious, considering that ETHE1 is located in the mitochondria (Libiad et al., 2014), and so, GSSH accumulation caused by its knockout would be localized in this organelle. Furthermore, ETHE1 silencing causes a devastating disease characterized by cell death (Tiranti et al., 2009).
Keeping in mind the abovementioned chemical characteristics of RSSH, particularly their instability and reactivity with thiols that are present in abundance in the cells, it remains unclear how endogenously generated RSSH could survive long enough to be able to accumulate in and around the lipid bilayer, and even if they do, they would almost exclusively be present in the deprotonated RSS⁻ state (Benchoam et al., 2020; Cuevasanta et al., 2015), which is incompatible with the hydrophobic nature of the cell membrane.
The fate of the formed RSS• groups is also of interest. They are relatively stable and do not react with oxygen (Bianco et al., 2016; Chauvin et al., 2016) or nitric oxide (Bianco et al., 2016) (both of which are expected to be more concentrated in the lipid bilayer) (Cuevasanta et al., 2012; Filipovic et al., 2018). If generated in sufficient amounts, RSS• can dimerize to form RSSSSR. Furthermore, biologically relevant electron donors that have sufficient redox potential to reduce RSS• could be vitamin E or vitamin C (0.5 and 0.28 V, respectively) (Buettner, 1993; Everett et al., 1992).
Another important feature of RSSH is their reactivity with biologically relevant two-electron oxidants, such as H2O2 and peroxynitrite. Although limited data are available, they all suggest that RSSH react with these species much faster than the corresponding thiols. The reaction of peroxynitrite with albumin persulfide is one order of magnitude faster [(1.2 ± 0.4) × 10⁴ M⁻¹ s⁻¹ at 20°C] than that of the reduced protein (Cuevasanta et al., 2015). Similarly, GSSH reacted 22 times faster with H2O2 than GSH (Benchoam et al., 2020). Considering the importance that reactive oxygen species (ROS) and reactive nitrogen species have in both the signaling and pathogenesis of many diseases (D'Autréaux and Toledano, 2007; Ferrer-Sueta et al., 2018; Gupta and Carroll, 2014), this "antioxidant" property of RSSH, originally suggested by Paul and Snyder (2012), warrants special attention.
Similar to thiols, RSSH react with H2O2 to form RSSOH, a perthiosulfenic acid (Fig. 4). Computational studies predicted that the reaction of H2O2 with ethyldisulfane to form the perthiosulfenic species was ~6.2 kcal/mol more favorable than the oxidation of the corresponding thiol, ethanethiol (Heppner et al., 2018). In the presence of excess oxidant, RSSOH could then undergo further oxidation to perthiosulfinic and perthiosulfonic acids (RSSO2H and RSSO3H) (Filipovic, 2015; Filipovic and Jovanović, 2017; Ono et al., 2014; Zivanovic et al., 2019). The latter has been observed as an oxidation product while working with different protein persulfides (Cuevasanta et al., 2015; Xiao et al., 2014; Zhang et al., 2014). Unlike in the case of protein thiols, where oxidation to sulfinic (RSO2H) and sulfonic (RSO3H) acids is mostly an irreversible process (Akter et al., 2018; Paulsen and Carroll, 2013), due to the S-S bond present in "perthio" equivalents of these modifications, it is possible to reduce them and restore the thiol (Fig. 4). Indeed, our recent study showed that thioredoxin reacts with cysteine perthiosulfonate ~2 orders of magnitude faster than it does with cystine (Zivanovic et al., 2019). Nagy's group also confirmed that different thioredoxins (Trx1 and TRP14) could reduce protein-bound perthiosulfonate (Dóka et al., 2020). This is no surprise, as one of the well-known substrates for Trx is the enzyme 3′-phosphoadenosine-5′-phosphosulfate reductase, which forms perthiosulfonate in its active site during the catalytic cycle (Palde and Carroll, 2015).
The existence of this "rescue loop" in which oxidized thiols can be restored back to their reduced state represents the core of the RSSH protective and antiaging hypothesis (Zivanovic et al., 2019) that is discussed in detail later.
Persulfide Detection
Persulfide detection and quantification have been met with many challenges. The initial reports that 25% of all proteins are in RSSH form seem to be an overestimation (Mustafa et al., 2009). Historically, protein persulfides have been
quantified within the total sulfane sulfur pool (Ubuka, 2002; Wood, 1987). Most likely, the most realistic values come from triphenylphosphine-based approaches for sulfane sulfur extraction, as S⁰ is mainly derived from RSSH. As summarized in Table 1, regardless of the final detection method, the reported values in two studies performed 25 years apart are very similar, and they are in the low micromolar range (Hannestad et al., 1989; Liu et al., 2014). Considering that these approaches also detect LMW RSSH, the recent reports of high (100-400 µM) GSSH concentrations are puzzling (Akaike et al., 2017; Ida et al., 2014).

FIG. 3. Antiferroptotic effects of RSSH. RSSH are excellent H-atom donors (upper reaction scheme) preventing formation and radical amplification of lipid radicals (L•), peroxyl radicals (ROO•), and alkoxyl radicals (LO•), which are the main drivers of membrane damage and ferroptosis (lower reaction scheme). GPX4, glutathione peroxidase 4.
In alkaline solutions, persulfides show an absorption maximum at 335-340 nm and a relatively low absorption coefficient (~310 M⁻¹ cm⁻¹), but due to the abovementioned reactivity, it has been challenging to selectively detect persulfides in complex mixtures, such as cell lysates. Several methodological approaches have been described for RSSH labeling (Cuevasanta et al., 2015; Mustafa et al., 2009; Sen et al., 2012), but they lacked either specificity (Pan and Carroll, 2013) or sensitivity. A detailed description of these approaches and their limitations can be found elsewhere. In this study, we focus on three methodological approaches that have been recently developed and used for global proteomic mapping of the cellular persulfidome. All three methods rely on the nucleophilic nature of RSSH in solution, and they require an initial reaction with an electrophile, that is, a thiol blocking reagent.
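A back-of-the-envelope Beer-Lambert estimate illustrates why direct absorbance measurements are of limited use here; the 25 µM concentration and the 1 cm path length are assumed values chosen only for illustration:

\[
A = \varepsilon\,c\,l \approx 310\ \mathrm{M^{-1}\,cm^{-1}} \times 25\times10^{-6}\ \mathrm{M} \times 1\ \mathrm{cm} \approx 0.008
\]

Even tens of micromolar persulfide would thus produce an absorbance below 0.01, which is difficult to resolve against the background of a cell lysate.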
The biotin thiol assay (BTA) (Gao et al., 2015) and its variants ProPerDP (Dóka et al., 2016) and qPerS-SID (Longen et al., 2016) use a thiol blocking reagent that carries biotin to block both RSH and RSSH, generating thioether in the case of the former and mixed disulfide in the case of the latter (Fig. 5A). Streptavidin enrichment of either proteins or peptides is then performed, and upon treatment with reducing agents (either dithiothreitol [DTT] or tris (2-carboxyethyl) phosphine [TCEP]), a selective release of those that contained an RSSH is achieved (Fig. 5A). However, several concerns have been raised about the selectivity of this approach (Fan et al., 2020). Namely, RSOH also readily reacts with both N-ethyl maleimide and iodoacetamide, yielding an adduct that is cleavable with DTT or TCEP (Reisz et al., 2013).
In addition, thioethers formed in the reaction with N-ethylmaleimide are also known to be cleaved by DTT via the retro-Michael addition (Fontaine et al., 2015). Taking that into consideration, Bibli et al. recently developed a method where cells and tissues were initially lysed in the presence of dimedone to trap all RSOH proteins and then subjected to labeling, and they identified 1536 differentially persulfidated cysteine residues in endothelial cells (Bibli et al., 2021).
Using the same chemical logic, Yang's group recently reported a "direct" method for RSSH detection, low-pH quantitative thiol reactivity profiling (QTRP) (Fu et al., 2020). Alkylation is performed at low pH (5.0), where RSSH are expected to maintain high reactivity (owing to their lower pKa and increased nucleophilicity), whereas the majority of free thiols remain protonated and less reactive (Fig. 5B). Using click chemistry and ultraviolet-cleavable biotinylated probes for peptide release (Yang et al., 2015), the authors cleverly avoided the reduction step and directly compared the m/z of peptides, considering the presence of an additional sulfur in the persulfide-containing peptides. A total of 1547 persulfidated sites on 994 proteins were identified in several cell lines (Fu et al., 2020).
We compared their data sets with those generated by BTA (Gao et al., 2015) and qPerS-SID (Longen et al., 2016) and obtained only a minor overlap (82 proteins) between all 3, although the measurements by qPerS-SID and QTRP were performed on the same cell line.
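The kind of cross-study comparison described above can be reproduced with a few lines of code; the accession numbers below are illustrative placeholders, not the actual published datasets of the three studies.

```python
# Overlap of protein identifiers reported by three persulfidome studies.
# The identifiers are placeholders; UniProt accessions (or gene symbols)
# would be used in practice.
bta = {"P04406", "P60174", "P07339"}         # e.g., BTA (Gao et al., 2015)
qpers_sid = {"P04406", "P60174", "Q9Y617"}   # e.g., qPerS-SID (Longen et al., 2016)
qtrp = {"P04406", "P60174", "O75874"}        # e.g., QTRP (Fu et al., 2020)

common_to_all = bta & qpers_sid & qtrp       # proteins found by all three methods
print(len(common_to_all), sorted(common_to_all))
```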
It has been reported recently that the mixed disulfides formed in the reaction of RSSH with commonly used electrophiles (iodoacetamide, N-ethylmaleimide, monobromobimane, etc.) undergo thiosulfoxide tautomerization and are prone to decomposition in the presence of high amounts of nucleophiles (such as cyanide), leading to the formation of a thioether . However, these conditions are not compatible with standard protocols for cell lysis and labeling and should be of concern only when working with isolated proteins. In fact, the same authors recently used the criticized approach to quantify GSSH levels in cell lysates .
Selective labeling of protein persulfides has also been attempted by the tag-switch method (Park et al., 2015;Wedmann et al., 2016;Zhang et al., 2014). This method is based on the premise that thiols and persulfides act as nucleophiles, and so in the first step, a thiol blocking reagent is introduced to tag both P-SH and P-SSH (forming intermediate T) (Fig. 5C). If an appropriate tag is used, the disulfide bonds in the persulfide products will show greatly enhanced reactivity to certain nucleophiles compared with common disulfides in proteins (where there is little difference in the electrophilicity of the two sulfur atoms) (Fig. 5C). Therefore, it is possible to introduce a tag-switching reagent (containing both the nucleophile and a reporting molecule, such as biotin) to label only the persulfide products.
It should be noted that thiol products are thioethers, which are not expected to react with the nucleophile. In the original version of the method, we used methylsulfonyl benzothiazole (MSBT) or its more water-soluble analog (benzothiazole-2-sulfonyl)-acetic acid (MSBT-A) as a thiol-blocking reagent, since the mixed aromatic disulfide formed from blocking RSSH showed enhanced reactivity toward cyanoacetic acid-based nucleophiles (Park et al., 2015; Wedmann et al., 2016; Zhang et al., 2014).

Table 1 footnotes: Cells are typically 10%-20% protein and 60%-75% water; thus, 1 nmol of sulfane sulfur (mg protein)⁻¹ is equivalent to ~200 µM. RBC, red blood cell.
Persulfidated proteins in Arabidopsis thaliana (~5% of the total proteome) have been identified using this approach. The disadvantage of this approach is that MSBT shows limited solubility in water and that cyanoacetate-based probes are unstable under MS conditions, making the analysis very complicated (Aroca et al., 2017; Filipovic et al., 2018).
Recently, we proposed a modification of this approach and named it the dimedone switch method (Zivanovic et al., 2019). MSBT was replaced with 4-chloro-7-nitrobenzofurazan (NBD-Cl), a more soluble and well-known thiol blocking reagent (Bernal-Perez et al., 2012; Ellis and Poole, 1997), while the cyanoacetic acid-based probe was replaced by a dimedone-based probe (Fig. 5D) (Klomsiri et al., 2010; Paulsen and Carroll, 2013). Dimedone probes constitute a real advantage since a great number of them are commercially available. Indeed, dimedone is widely used for sulfenic acid labeling (Klomsiri et al., 2010; Paulsen and Carroll, 2013). The main challenge was being able to block sulfenic acid and allow dimedone to react with derivatized persulfides. This is one of the advantages of NBD-Cl, which blocks not only RSH and RSSH but also RSOH (Ellis and Poole, 1997).
In addition, NBD-Cl blocks amino groups resulting in green fluorescence, which could be used as a measure of total protein load (Zivanovic et al., 2019). This method has been used to detect persulfides directly in-gel or in cells and whole organisms by microscopy, to perform antibody microarray analysis of persulfidated targets, and for persulfidome analysis by MS (Aroca et al., 2021;Bibli et al., 2021;Pedre et al., 2023;Statzer et al., 2022;Zivanovic et al., 2019).
All methods suffer limitations and will require further improvements. The challenge of selectivity remains the main obstacle.
Cellular Mechanisms of Persulfide Formation and Removal
To better understand the biological role of RSSH, it is important to understand the cellular mechanisms of their formation. In the last decade, several reaction mechanisms have been shown to lead to intracellular RSSH formation, but further studies are required to better understand how cells produce RSSH.
The formation of cysteine persulfide from cystine and H2S was reported almost 100 years ago (Andrews, 1926). Thermodynamic calculations suggest that the reaction between HS⁻ and a typical low-molecular-weight disulfide RSSR, such as cystine, to form RSS⁻ and RSH is thermoneutral and has an equilibrium constant of ~1 (Koppenol and Bounds, 2017). Our systematic study of the kinetics of the reactions of H2S with low-molecular-weight disulfides and with mixed albumin disulfides (formed between Cys34 of human serum albumin and low-molecular-weight thiols) showed that the reactions are indeed slow (k(pH 7.4) = 2-3 M⁻¹ s⁻¹ for cysteinylated or glutathionylated albumin) (Cuevasanta et al., 2015). The rate constants show a clear correlation with the pKa of the leaving thiol (Cuevasanta et al., 2015).
The slow reaction and relatively low steady-state concentrations of H2S suggest that this mechanism probably does not play a major role in protein RSSH formation (Wedmann et al., 2016). However, the specific protein environment could make some cysteines more acidic than others. Prxs are a good example, with the pKa of the active-site cysteine being below 6 (Wood et al., 2003). Indeed, in persulfidome analysis, two Prxs from Aspergillus fumigatus (Aspf3 and PrxA) were found to be persulfidated, and the presence of their disulfide form increased in cells lacking H2S-producing enzymes (Sueiro-Olivares et al., 2021).
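To put these numbers in perspective, the sketch below gives a rough pseudo-first-order estimate of the timescale of this reaction. The rate constant is taken from the values quoted above, whereas the assumed steady-state free H2S concentration of 100 nM is an illustrative figure, not a value reported in this review.

```python
import math

# Rough, illustrative estimate only.
k = 2.5        # M^-1 s^-1, H2S + mixed albumin disulfide at pH 7.4 (Cuevasanta et al., 2015)
h2s = 100e-9   # M, assumed steady-state free H2S concentration (illustrative)

k_obs = k * h2s                            # pseudo-first-order rate constant, s^-1
t_half_days = math.log(2) / k_obs / 86400  # half-life in days
print(f"t1/2 ~ {t_half_days:.0f} days")    # on the order of weeks
```

Even with this generous H2S estimate, the predicted half-life is on the order of weeks, consistent with the conclusion that direct thiol-disulfide exchange with H2S is unlikely to be a major route of protein RSSH formation.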
Metal-catalyzed protein persulfidation via radical formation
Redox-active metal centers, particularly in iron heme proteins, are known to react with sulfide, resulting in its oxidation to different reactive sulfur species (Bostelaar et al., 2016; Pálinkás et al., 2015; Ruetz et al., 2017; Vitvitsky et al., 2015). One such protein is cytochrome c (Cyt C). Heme in Cyt C is more exposed, and the product of H2S oxidation can easily be reached by the surrounding proteins, resulting in protein persulfidation (Alvarez-Paggi et al., 2017). Reaction between Cyt C and H2S results in the initial formation of an HS•/S•− radical (Vitvitsky et al., 2018; Wedmann et al., 2014) that can react with proteins to yield an RSSH [Eqs. (9) and (10)].
HS• + RS⁻ → RSSH•⁻
As reduced Cyt C passes on electrons to complex IV of the mitochondrial respiratory chain, it reoxidizes, establishing a pseudocatalytic cycle for HS•/S•− radical generation and mitochondrial protein persulfidation (Fig. 6A). We observed that H2S reacts with Cyt C, leading to persulfidation of various targets (Vitvitsky et al., 2018). Silencing of Cyt C resulted in a profound decrease in protein persulfidation caused by mitochondria-targeted H2S delivery (Vitvitsky et al., 2018).
FIG. 6. Metalloprotein-catalyzed RSSH formation. (A) Cytochrome c oxidizes H2S to HS•, which can react with protein thiols to form RSSH. Reduced cytochrome c is then reoxidized by complex IV of the mitochondrial respiratory chain, establishing a pseudocatalytic cycle for RSSH formation. (B) The zinc center in metalloproteins could play a general role in catalyzing protein persulfide formation by: (i) shifting the redox potential of O2 to a more positive value favoring superoxide formation, (ii) lowering the pKa of H2S (akin to Zn binding to OH⁻ in carbonic anhydrase) favoring formation of HS⁻, and (iii) acting as a template to bring O2 and H2S in close proximity enabling efficient electron shuttling.
During the initiation of apoptosis, Cyt C leaks out of mitochondria, putting it in proximity of procaspase 9 (Riedl and Salvesen, 2007). We reported that Cyt C catalyzes procaspase 9 persulfidation, inhibiting apoptosis (Vitvitsky et al., 2018). Far more intriguing was the observation that H2S can interact with zinc finger proteins, causing protein persulfidation (Lange et al., 2019). Zinc is not a redox-active metal, and so no reaction other than potential coordination is expected. However, exposing the zinc finger protein tristetraprolin to H2S in air resulted in persulfide formation, something that could not be achieved under anaerobic conditions. We proved that Zn2+ would coordinate HS⁻ and serve as a catalyst for electron shuttling from HS⁻ to O2, resulting in the formation of HS•/S•− and superoxide anion (Fig. 6B) (Lange et al., 2019). This mechanism may not be characteristic of this particular protein alone. Indeed, analysis of recently published persulfidome data sets (Fu et al., 2020) shows that a significant portion of identified proteins (104/994) contains zinc.
Reaction of RSOH with H 2 S
Sulfenylated cysteines are products of the reaction of H2O2 with RSH, and they play an important role in H2O2-based redox signaling (Paulsen and Carroll, 2013; Yang et al., 2014). Early attempts to generate protein persulfides showed that the treatment of proteins with H2O2 and H2S resulted in RSSH formation, RSOH + H2S → RSSH + H2O [Eq. (11)] (Ida et al., 2014; Zhang et al., 2014).
The RSOH of human serum albumin reacted ~600 times faster with H2S than with GSH at pH 7.4, while intracellular RSSH levels increased upon treatment with H2O2 in a manner that is dependent on the activity of H2S-producing enzymes (Cuevasanta et al., 2015; Wedmann et al., 2016).
Recent attempts to understand the importance of this reaction showed that cells lacking CSE undergo massive sulfenylation when exposed to H 2 O 2 concentrations that do not change the RSOH status in wild-type cells. Treatment with an H 2 S donor abrogated this effect, suggesting that RSOH to RSSH conversion may be the main manner of sulfenylation resolution in some cases (Zivanovic et al., 2019).
A good example is signaling by receptor tyrosine kinases, which is intrinsically linked to H2O2 formation (Paulsen and Carroll, 2013; Paulsen et al., 2012; Sundaresan et al., 1995). By treating the cells with epidermal growth factor (EGF), vascular endothelial growth factor, or insulin, we observed that the initial wave of RSOH formation (caused by the activation of the corresponding receptors and subsequent H2O2 generation) was followed by a wave of persulfidation in a phase-shifted manner. Manipulation of H2S generation affected both the amplitude and the duration of the phase, leading to the conclusion that the RSH to RSOH to RSSH transformation represents an inherent thiol redox switch (Fig. 7).
Redox switching can alter protein structure and function (D' Autréaux and Toledano, 2007). The EGF receptor (EGFR) undergoes RSH to RSOH transformation at C797 upon activation with EGF, which enhances its kinase activity (Paulsen et al., 2012). We observed that EGFR stimulation increases the expression of MPST, CSE, and CBS to produce more H 2 S, resulting in the RSOH to RSSH transformation. This switch inhibits kinase activity, as demonstrated by decreased receptor activity and Y1068 phosphorylation in cells treated with H 2 S donors (Zivanovic et al., 2019). Furthermore, EGFR downstream targets have also been found to be persulfidated upon receptor activation (Fig. 7).
Enzymatic production of LMW RSSH
The pyridoxal-phosphate enzymes CBS and CSE can, in addition to their canonical reactions, use cystine and, in the case of CSE, homocystine and cysteine-homocysteine disulfide, to form the corresponding persulfides (Ida et al., 2014; Yadav et al., 2016). Work from Akaike's group originally suggested that CSE and CBS could serve as the main sources of cysteine persulfide in cells (Ida et al., 2014). However, simulation of cysteine persulfide and homocysteine persulfide formation from CSE and CBS indicated that cysteine persulfide formation by CSE and CBS is very low and that homocysteine persulfide synthesis by CSE is negligible under intracellularly available substrate concentrations (Yadav et al., 2016). Nonetheless, under specific pathological conditions that could result in increased cystine concentrations, this reaction could become an important source of LMW RSSH.
FIG. 7. Regulation of EGFR signaling by RSOH to RSSH switching. Upon receptor stimulation with EGF, NOX produces H2O2, which is transported into the cell via aquaporins, and H2O2 modifies C797 of EGFR to RSOH, increasing its kinase activity. H2O2 also stimulates expression of H2S-producing enzymes. H2S reacts with RSOH to form RSSH on EGFR, which inhibits the phosphorylation of EGFR and downstream signaling. The temporal phase-shifted waves of RSOH and RSSH caused by EGFR activation modulate many of the downstream targets involved in cytoskeleton regulation and cell motility. EGFR, epidermal growth factor receptor; H2O2, hydrogen peroxide; NOX, NADPH oxidase.
CARS is an enzyme that produces Cys-tRNA via cysteine and aminoacyl-tRNA (Fujii et al., 2019), but CARS also has a moonlighting function as a cysteine persulfide synthase in a manner that is independent of the aminoacyltransferase reaction (Akaike et al., 2017). LMW RSSH levels were also lower in mitochondria from CARS2 knockout mice. In addition to cysteine persulfide synthase activity, the authors also proposed that CARS2 could use cysteine persulfide as a substrate, making Cys-SSH-tRNA, which could eventually be integrated into proteins (Akaike et al., 2017).
SQR remains the main source of LMW RSSH. Localized in the mitochondrial inner membrane, SQR is a member of the flavin disulfide reductase superfamily involved in H2S detoxification (Landry et al., 2019; Mishanina et al., 2015). Recent crystal structure analysis of human SQR revealed that the enzyme's active-site resting state is in the form of a trisulfide (Landry et al., 2019). H2S reacts with it, forming an RSSH at C201, which makes an intense charge-transfer complex with flavin adenine dinucleotide, and a persulfide at C379, which transfers sulfur to an external acceptor. Both GSH and CoA can serve as the external acceptor, resulting in efficient formation of GSSH and CoA-SSH, respectively (Landry et al., 2019).
The electrons released in this process are transferred to coenzyme Q, which feeds them into the mitochondrial respiratory chain, leading to the proposal that H2S is an inorganic substrate for mammalian respiration (Goubern et al., 2007). It is worth mentioning that both MPST and TST could also be involved in the formation of LMW RSSH, particularly GSSH (Mishanina et al., 2015; Yadav et al., 2013).
Transpersulfidation
The transfer of sulfane sulfur has emerged evolutionarily as a method to allow iron-sulfur cluster assembly or regulate transcription through thionucleoside generation on tRNA (Fig. 8) (Kessler, 2006; Lill and Mühlenhoff, 2005). All these processes involve protein persulfides, and sulfane sulfur is transferred from one protein to another via transpersulfidation. The process starts with cysteine desulfurase, NFS, a PLP-dependent enzyme, which converts cysteine to alanine, generating a protein persulfide (Fig. 8) (Mueller, 2006; Zhang et al., 2010). On the way to iron-sulfur cluster assembly, this persulfide is then transferred further to other protein targets, as well as LMW thiols (Parent et al., 2015). In addition, the cysteine desulfurase persulfide shuttles the sulfhydryl sulfur to a rhodanese-like sulfur transferase, ThiI, forming ThiI persulfide. ThiI persulfide then uses the sulfur for synthesis of the 2-thiouridine modification in tRNA (Fig. 8).
FIG. 8. Transpersulfidation steps in RNA thiolation and iron-sulfur cluster assembly. Cysteine desulfurase (IscS in prokaryotes, NFS1 in eukaryotes) converts cysteine to alanine, forming the intermediate persulfide (sulfur originating from the cysteine is marked red). Through transpersulfidation, this sulfur is transferred to ThiI and then to tRNA (upper figure), or transferred to ISCU. Alternatively, NFS1 persulfide could transfer sulfur to LMW RSH. FXN, frataxin, a protein involved in iron-sulfur cluster assembly; ISCU, iron-sulfur cluster assembly scaffold protein; LMW, low molecular weight; ThiI, tRNA sulfur transferase.
The enzymes involved in sulfur transfer share homology with the rhodanese domain fold with an α/β topology, which can be present in a single copy, in tandem repeats or fused with other domains (Bordo, 2002; Libiad et al., 2015). In fact, Bonomi et al. (1977a) and Bonomi et al. (1977b) observed that rhodanese is capable of transferring sulfane sulfur to protein targets, such as succinate dehydrogenase, yeast alcohol dehydrogenase, and bovine serum albumin. MPST is also known to be able to transfer sulfur from its substrate, 3-mercaptopyruvate, to the iron-sulfur chromophore of adrenal ferredoxin, similar to rhodanese (Taniguchi and Kimura, 1974). Thiosulfate sulfur transferase-like domain-containing 1 protein (TSTD1) has been recently identified (Libiad et al., 2018).
The protein structure shows an active site that is quite exposed and distinct from that of rhodanese and MPST (Fig. 9A). TSTD1 can readily transfer sulfur to Trx, forming thioredoxin persulfide. The other potential protein targets of TSTD1 remain unclear (Libiad et al., 2018).
Transpersulfidation in these proteins occurs only because of the structure of the active site that surrounds the reacting cysteine. For example, C247 of rhodanese is located at the intersection of the axes of two helices, which contribute a significant electrical field and lower the pKa of the thiol group (estimated to be 6.5) by ~3.5 units, making this sulfur atom very nucleophilic. The terminal sulfur atom in the persulfidated protein interacts with the positively charged side groups of Arg186 and Lys249, which masks the negative charge of the persulfide, making this sulfur atom less nucleophilic than it would be in a nonprotein environment (Ploegman et al., 1979; Ploegman et al., 1978). Therefore, the setting of the active site favors nucleophilic attack at the terminal sulfur, that is, it favors transpersulfidation.
LMW RSSH are not expected to get involved in the transpersulfidation reaction; formation of a mixed disulfide and the release of H 2 S are preferred ( Fig. 2A). For transpersulfidation to happen, an RSSH would have to undergo tautomerization to its thiosulfoxide form [RS(H) = S] (Steudel et al., 1997). Computational studies and bond energies suggest that although thiosulfoxide tautomers are only 5 kJ/mol less stable than the corresponding disulfanes, they cannot be formed, as the energy barrier for isomerization is >100 kJ/mol (Steudel et al., 1997) (Fig. 9B).
In addition to sulfur transferases, the rhodanese homology domain is present in phosphatases of the Cdc25 family (involved in regulation of the cell cycle), phosphatases of the mitogen-activated protein kinase family, several ubiquitin hydrolases, and heat shock, cold shock, and phage shock proteins (Bordo, 2002). It is tempting to speculate that some of those proteins could also serve as transpersulfidases, providing the context for some specificity of RSSH formation. Recently, Pedre et al. (2023) suggested that MPST could be the main cellular persulfidase. They showed that expression of the Saccharomyces cerevisiae ortholog of MPST, TUM1, increases protein persulfidation in mammalian cells, which contrasts with the observation of Zivanovic et al. (2019), who showed no change in RSSH levels in Δtum-1 mutants of S. cerevisiae. Knockout of MPST resulted in downregulation of protein persulfidation of only 64 proteins.
For comparison, the CSE knockout results in reduced persulfidation of 188 proteins (Bibli et al., 2021). It remains to be elucidated how MPST interacts with the identified targets (some of which do not contain surface-exposed cysteine) and how it transfers the sulfane sulfur.
Depersulfidation
For persulfidation to have a regulatory function, cells should have mechanism(s) to remove persulfidation, that is, depersulfidase proteins, and restore reduced thiols. In cells, Trx, a disulfide oxidoreductase, serves as a main redox partner of a variety of client proteins. Trx performs disulfide reduction in conjunction with thioredoxin reductase (TrxR) (Buchanan et al., 2012; Lu and Holmgren, 2014b). As protein persulfides are analogous to disulfides, the Trx/TrxR system seems to be a good candidate for protein depersulfidation. Trx is involved in H2S release from MPST (Yadav et al., 2013), and it was ~200 times more efficient than DTT in reducing persulfidated phosphatase PTP1B (Krishnan et al., 2011). Nagy's group (Dóka et al., 2016) and our group (Wedmann et al., 2016) observed that the Trx/TrxR system controls intracellular persulfidation globally (Fig. 10A).
When total intracellular persulfidation levels were assessed as a function of lysis time, the levels notably decreased. The addition of auranofin, an inhibitor of the Trx/TrxR system, to the lysis buffer normalized these levels, keeping the persulfidation constant (Wedmann et al., 2016). Trx efficiently reduced both LMW and protein persulfides, leading to H2S release. The second-order rate constant for the reaction of Trx with cysteine persulfide was estimated to be 4.5 ± 0.1 M⁻¹ s⁻¹, which is almost one order of magnitude higher than that for cystine. A similar rate constant was observed for HSA-SSH (4.1 ± 0.8 M⁻¹ s⁻¹) (Wedmann et al., 2016). Lower circulatory sulfane sulfur levels were observed in HIV patients with a high viral load and high circulatory Trx levels than in patients treated with antiretroviral therapy, supporting that Trx has depersulfidase activity even in humans (Wedmann et al., 2016).
Two mechanisms are possible to explain the reaction: (i) transfer of the outer sulfur from the persulfide to the catalytic cysteine of Trx, leading to the transient formation of Trx-SSH, which would then react with the resolving cysteine of Trx, resulting in a displacement of HS⁻ and formation of an intramolecular disulfide bond in Trx, or (ii) a nucleophilic attack of one of the Trx cysteines on the inner sulfur of the persulfide with immediate elimination of H2S and formation of a mixed Trx-client disulfide complex, followed by the displacement of the client thiol by the resolving cysteine and formation of a disulfide bond in Trx (Fig. 10A). While the latter mechanism is used to explain the disulfide reductase activity of Trx, in the case of depersulfidase activity, either mechanism could result in H2S generation (Libiad et al., 2018).
A thioredoxin-related protein of 14 kDa (TRP14) could also play an important role in depersulfidation, as its knockdown led to an increase in protein persulfidation (Dóka et al., 2016). TRP14 is particularly interesting since it may become the main depersulfidase under conditions of oxidative stress, when Trx becomes heavily engaged in turning over the Prx system (Dóka et al., 2016). Nagy et al. (2014) also addressed the role of the GSH system (glutathione reductase [GR], GSH, and glutaredoxin [Grx]) in catalyzing depersulfidation. The overall structure of GR is similar to that of TrxR and conserved throughout all kingdoms. GR is also an FAD-containing disulfide oxidoreductase that uses NADPH as a source of electrons, but unlike TrxR, GR does not have selenocysteine in its active site (Fig. 10B) (Lu and Holmgren, 2014a). In the presence of GSH and NADPH, GR efficiently reduced polysulfides and BSA-SSH in vitro, and at higher rates when Grx was introduced (Dóka et al., 2016).
FIG. 10. Enzyme-catalyzed protein depersulfidation. (A) Trx catalyzes protein depersulfidation via two different mechanisms. Oxidized Trx is then reduced by TrxR. (B) Grx catalyzes protein depersulfidation by oxidizing GSH, which is reduced back by GR. GR, glutathione reductase; Grx, glutaredoxin; TrxR, thioredoxin reductase.
The removal of protein persulfides seems to be intrinsically linked to the availability of NADPH as a reducing power. Future studies should address how metabolic changes in the NADPH/ NADP + ratio control RSSH levels of specific proteins.
Persulfidation Affects Protein Function
Despite the large number of proteins found in different proteomics screens, there are only a few examples where a clear correlation between persulfidation and function is established. We divided those into examples showing the gain of enzymatic activity or the loss of enzymatic activity.
Gain of enzymatic activity
GAPDH was the first protein characterized as persulfidated in a study that sparked this whole research field (Mustafa et al., 2009). GAPDH is an important glycolytic enzyme but is also known as a regulator of the cell death cascade (Hara et al., 2005). Snyder's group showed that GAPDH is persulfidated at C152, and this modification increases its enzymatic activity approximately sevenfold. DTT treatment of GAPDH decreases its activity, suggesting that endogenous persulfidation regulates its function. CSE knockout mice showed ~35% reduced activity of GAPDH compared with control mice (Mustafa et al., 2009). Under conditions of endoplasmic reticulum stress, where H2S production increases, the activity of GAPDH increased as well, while the total amount of protein remained the same, suggesting some sort of posttranslational activation of the enzyme (Gao et al., 2015).
Cotreatment of GAPDH with H 2 O 2 and H 2 S results in persulfidation of C152 and an increase in enzymatic activity, further supporting the original claims (Gao et al., 2015).
However, Jarosz et al. (2015) worked with purified GAPDH to show that treatment with polysulfides in fact results in inhibition of enzyme activity (~42% compared with the untreated enzyme) (Jarosz et al., 2015). The inhibition was due to the persulfidation of cysteine residues outside the active site, that is, C156 and C247. Furthermore, when they used the C156S mutant, they observed additional persulfidation at C152 in GAPDH treated with polysulfides, but the enzyme was still inhibited. However, the mutant was also modified at C247 (Jarosz et al., 2015). The identification of C156 and C247, but not C152, as targets of persulfidation was recently confirmed by proteomic studies as well (Fu et al., 2020). More detailed studies addressing the actual conformational effect of RSSH on specific cysteine residues could help resolve these conflicting observations. Similar to GAPDH, persulfidation of parkin, an E3 ligase implicated in Parkinson's disease (Chung et al., 2004), increased its enzymatic activity (Fig. 11A) and rescued neurons from cell death by removing damaged proteins (Vandiver et al., 2013). More importantly, markedly decreased parkin persulfidation has been found in Parkinson's disease human brains, whereas S-nitrosation was increased (Vandiver et al., 2013).
Loss of enzymatic function
ATG18a is a core autophagy protein in A. thaliana that binds to phosphoinositides (Dove et al., 2004; Wun et al., 2020). It forms a complex with ATG2, leading to autophagosome formation. This protein is of particular importance for the induction of autophagy under abiotic stress. Recently, Aroca et al. (2021) showed that persulfidation of C103 in ATG18a inhibits autophagy under endoplasmic reticulum stress conditions by regulating the number and size of autophagosomes. The C103 residue is located inside a hydrophobic cavity formed by the 83FNQD86 motif and F90, amino acids that have also been found in human orthologs of this protein.
Through electrostatic interactions, positive charges around this region strengthen the binding of negatively charged phosphatidylinositide molecules. Persulfidation of C103 results in the introduction of a bulky sulfur that alters the cavity size, disturbing the existing hydrophobic interactions and enforcing new ones.
The negative charge that RSH to RSSH replacement introduces into this hydrophobic cavity due to the lower pK a of persulfide further affects the size of the cavity, hindering the interaction with additional phosphoinositide molecules (Aroca et al., 2021).
DJ-1 (also called PARK7) is a ubiquitously expressed protein associated with autosomal recessive early-onset Parkinson's disease (Bonifati et al., 2003). The protein has glyoxalase and deglycase enzymatic functions, but also serves as a redox sensor and a buffer for reactive species (Canet-Avilés et al., 2004;Kinumi et al., 2004;Oh and Mouradian, 2017). C106 was shown to undergo hyperoxidation to RSO 2 H and RSO 3 H. The former is known to translocate to the mitochondria, where it protects cells against cell death (Canet-Avilés et al., 2004). We found DJ-1 to be endogenously persulfidated in human red blood cells, which prompted us to test the hypothesis of protective waves of persulfidation, since chemical tools to detect all thiol oxidation states (RSH, RSOH, RSSH, RSO 2 H, and RSO 3 H) exist (Zivanovic et al., 2019).
Indeed, mouse embryonic fibroblasts exposed to H 2 O 2 show a protective wave of persulfidation of DJ-1, while cells lacking CSE show pronounced hyperoxidation of C106 (Zivanovic et al., 2019). Recently, Galardon et al. (2022) studied the effect that persulfidation can have on DJ-1 structure and activity. Although both RSSH and RSO 2 H formations led to inhibition of enzymatic function, the structural analysis showed quite distinct conformational changes. Crucial structures implicated in the stabilization of reduced dimers are either lost or weakened in persulfidated proteins. Furthermore, C106 becomes more accessible after persulfidation, making it an easy target for the depersulfidase activity of Trx/TrxR, which would restore the RSH form of C106 (Galardon et al., 2022). Another example where RSSH exhibits an inhibitory effect is glycogen synthase kinase 3b (GSK3b) (Giovinazzo et al., 2021). GSK3b is one of the busiest kinases in the cell, with >100 identified substrates (Beurel et al., 2015). In Alzheimer's disease (AD), GSK3b phosphorylates tau protein (MAPT), leading to its dissociation from microtubules and increasing its propensity to aggregate, which results in the formation of neurofibrillary tangles, one of the most prominent features of the disease (Crews and Masliah, 2010;Hooper et al., 2008;Lauretti et al., 2020). Recently, we showed that GSK3b could be persulfidated and that its persulfidation levels (as well as global persulfidation levels) decreased in cells, mouse and human brain samples of AD patients (Giovinazzo et al., 2021). MS analysis identified C218 as a target of persulfidation.
C218 is located very close to the active site, E181, so insertion of a bulky sulfur carrying a negative charge would conformationally change the active site, leading to the observed loss of kinase activity (Giovinazzo et al., 2021). We postulated that persulfidation of GSK3b would have protective effects against AD due to its role in inhibiting the enzyme and that loss of persulfidation caused by the decline of CSE levels due to age and disease would result in higher kinase activity and tau hyperphosphorylation (Fig. 11B).
Future Directions: Protection Versus Signaling
In addition to signaling effects, protein persulfidation could have a general protective effect. During oxidative stress, cysteine residues are oxidized to sulfenic acids. This represents an important signaling event for the cell to either start proliferating or to die (depending on the amount of H2O2) (Lo Conte and Carroll, 2013; Paulsen and Carroll, 2013). If buried deep in protein pockets, an RSOH could become stabilized and not easily reachable for reduction (Paulsen and Carroll, 2013; Shi and Carroll, 2020). H2S is small and can reach deep into the protein structure to lead to RSSH formation. Once formed, an RSSH can be reduced back to a thiol by the Trx system (Dóka et al., 2016; Wedmann et al., 2016). When oxidative stress persists (such as in aging and many ROS-related diseases), sulfenylated cysteines oxidize further to RSO2H and RSO3H, oxidations that are generally considered irreversible (although some RSO2H groups can be reduced back to thiols) (Akter et al., 2018). Persulfidated residues are expected to act as better scavengers of ROS than regular thiols, resulting in the formation of RSSO3H. The existence of the S-S bond makes this species a potential target for Trx and the restoration of the native thiolate via this rescue loop (Fig. 12) (Dóka et al., 2020; Zivanovic et al., 2019). Thus, the overall structure, function, and half-life of thiol-containing proteins are preserved.
FIG. 11. Persulfidation affects enzyme activity. (A) Persulfidation of parkin stimulates its E3 ligase activity, leading to higher ubiquitination of target proteins and their proteasomal degradation. In Parkinson's disease, H2S production declines and nitrosation of thiols occurs, leading to inactivation of parkin. (B) Tau interacts with CSE, stimulating its H2S-producing activity, which results in persulfidation of GSK3b and inhibition of its activity. With aging and in AD, there is a decline in CSE levels. GSK3b becomes more active, leading to hyperphosphorylation of tau and its aggregation. AD, Alzheimer's disease; GSK3b, glycogen synthase kinase 3b.
We proposed that this rescue loop exemplifies a remnant of times when life emerged in an H2S-rich environment and that it represents the simplest way to resolve cysteine oxidation and protect proteins from oxidative damage (Petrovic et al., 2021; Zivanovic et al., 2019). Indeed, persulfidation is evolutionarily conserved, and its levels are directly correlated with the organismal ability to fight oxidative stress (Zivanovic et al., 2019).
When life emerged in an H2S-rich environment, despite the absence of oxygen, living systems were exposed to high amounts of ROS formed by photolysis of water and metal-catalyzed processes (He et al., 2021). The described rescue loop would have saved the proteins from oxidative damage and extended their "half-life" and their function, playing a role in successful maintenance of life itself. However, a decrease in the expression of H2S-producing enzymes (primarily CSE, but also CBS and MPST) and consequently the loss of persulfidation were found in aging and different age-induced diseases, such as neurodegenerative diseases (Giovinazzo et al., 2021; Petrovic et al., 2021; Snijder et al., 2015; Statzer et al., 2022; Zivanovic et al., 2019).
This led us to establish the hypothesis of evolutionarily conserved, protective antiaging effects of protein persulfi-dation (Zivanovic et al., 2019). It is not surprising then that one of the genes affected in many different life span extension interventions is actually CSE (Tyshkovskiy et al., 2019). Furthermore, several studies have shown that dietary interventions, such as calorie restriction, lead to increased H 2 S formation and higher RSSH levels via CSE upregulation (Bithi et al., 2021;Hine et al., 2015;Statzer et al., 2022;Zivanovic et al., 2019).
Due to the absence of a clear protein-catalyzed mechanism of RSSH formation and only a dozen examples with clear functional outcomes, how much protein persulfidation is indeed functional is unclear (Hanna et al., 2023). In the eyes of a biologist, this process might appear stochastic and therefore can be disregarded as a nonsignaling process; however, that presumption is inaccurate. As mentioned, RSO x H is mainly an irreversible modification that leads to inactivation and possibly acts as a target for protein degradation. Even if the RSSH modification had the same inhibitory effects on the enzyme's function, this inactivation would be reversible and therefore short-lived. The best examples are the abovementioned DJ-1 and the protein tyrosine phosphatase PTP1B. It has been shown recently that the reversibility of the RSSO 3 H modification on PTP1B represents a way that signaling pathways controlled by this phosphatase are regulated (Dóka et al., 2020).
Furthermore, it seems that many proteins found to be persulfidated are structural proteins (Bibli et al., 2021;Fu et al., 2020), which raises the question of how the RSSH modification affects their role. Recent advances in cell biology suggest that the regulation of cellular function could be achieved through a biophysical phenomenon called liquid-liquid phase separation (LLPS) (Banani et al., 2017;Brangwynne et al., 2009;Hyman et al., 2014;Lafontaine et al., 2021;Shin and Brangwynne, 2017). LLPS is a dynamic and reversible process through which biomolecules, such as proteins or nucleic acids, associate and form membrane-less structures, adding another layer of complexity to the regulation of cellular processes (Guillén-Boixet et al., 2020;Molliex et al., 2015;Yang et al., 2020).
In addition, recent studies have shown that most of the proteins whose aggregation is implicated in neurodegeneration undergo phase separation and that aberrant LLPS could be a driver of disease progression (Alberti and Hyman, 2021;Banani et al., 2017;Ray et al., 2020;Shin and Brangwynne, 2017;Wegmann et al., 2018;Wolozin and Ivanov, 2019). Tu's team showed that H 2 O 2 -induced changes in methionine oxidation control LLPS of ataxin-2 (Kato et al., 2019). This study supports the possibility that even cysteine redox changes could control LLPS. RSH to RSSH transformation instead of RSH to RSOH to RSO x H transformation could have different effects on proteins that undergo LLPS, regulating their function by changing their basic biophysical properties and, thus, their phase separation.
Finally, RSSH modification could affect other posttranslational modifications. Neighboring S, T, or Y residues could be affected by the bulkiness and negative charge of sulfur and be less prone to phosphorylation. Cross talk between cysteine reactivity and phosphorylation has been recently shown by Cravatt's group (Kemper et al., 2022). In addition, RSSH modification could have effects on signaling via other cysteine posttranslational modifications, such as S-nitrosation and S-acylation. All of these processes could have profound effects on cellular signaling, and so, even if persulfidation is just protective, it is still a modification that affects cellular function.
FIG. 12. Hypothetical scheme for antiaging effects of RSSH. During aging or age-induced diseases, restoring protein persulfidation can serve as a mechanism to reduce ROS-oxidized cysteine residues, preventing their irreversible overoxidation.
The relatively new field of signaling by protein persulfidation still has many unanswered questions: the mechanism of persulfide formation and transpersulfidation and the identification of ''protein persulfidases,'' the improvement of methods to monitor RSSH changes and identify protein targets, and understanding the mechanisms through which this modification controls important (patho)physiological functions. Once this knowledge is obtained, it will pave the way for the development of new drugs that could be used to improve health. Therefore, this exciting field of research will remain a hot topic for many years to come.
Author Disclosure Statement
No competing financial interests exist.
Funding Information
This work was funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 864921). | 2023-06-09T06:17:27.511Z | 2023-06-08T00:00:00.000 | {
"year": 2023,
"sha1": "442029dbad1ec827b67edf9620e1855f26722563",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "4761b3793ee96eecc4db20873a69a8d631e8a600",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267209850 | pes2o/s2orc | v3-fos-license | GlioPredictor: a deep learning model for identification of high-risk adult IDH-mutant glioma towards adjuvant treatment planning
Identification of isocitrate dehydrogenase (IDH)-mutant glioma patients at high risk of early progression is critical for radiotherapy treatment planning. Currently, tools to stratify risk of early progression are lacking. We sought to identify a combination of molecular markers that could be used to identify patients who may have a greater need for adjuvant radiation therapy, using machine learning technology. 507 WHO Grade 2 and 3 glioma cases from The Cancer Genome Atlas, and 1309 cases from the AACR GENIE v13.0 dataset, were studied for genetic disparities between IDH1-wildtype and IDH1-mutant cohorts, and between different age groups. Genetic features such as mutations and copy number variations (CNVs) correlated with IDH1 mutation status were selected as potential inputs to train artificial neural networks (ANNs) to predict IDH1 mutation status. Grade 2 and 3 glioma cases from the Memorial Sloan Kettering dataset (n = 404) and Grade 3 glioma cases with subtotal resection (STR) from Northwestern University (NU) (n = 21) were used as independent datasets to further evaluate the best performing ANN model. IDH1 mutation is associated with decreased CNVs of EGFR (21% vs. 3%), CDKN2A (20% vs. 6%), and PTEN (14% vs. 1.7%), and an increased percentage of mutations for TP53 (15% vs. 63%) and ATRX (10% vs. 54%), all statistically significant (p < 0.001). Age > 40 alone was unable to identify high-risk IDH1-mutant patients with early progression. A glioma early progression risk prediction (GlioPredictor) score generated from the best performing ANN model (6/6/6/6/2/1) with 6 inputs, including CNVs of EGFR, PTEN and CDKN2A, mutation status of TP53 and ATRX, and patient's age, can predict IDH1 mutation status with over 90% accuracy. The GlioPredictor score identified a subgroup of high-risk IDH1-mutant cases in the TCGA and NU datasets with early disease progression (p = 0.0019 and 0.0238, respectively). The GlioPredictor, which integrates age at diagnosis, CNVs of EGFR, CDKN2A and PTEN, and mutation status of TP53 and ATRX, can identify a small cohort of IDH-mutant glioma with high risk of early progression. The current version of GlioPredictor mainly incorporates genetic biomarkers that are often tested clinically. Considering the complexity of clinical and genetic features that correlate with glioma progression, future derivatives of GlioPredictor incorporating more inputs could be a potential supplement for adjuvant radiotherapy patient selection in IDH-mutant glioma.
radiation oncologists. Identification of IDH-mutant glioma patients at high risk of early progression is critical for personalized radiotherapy treatment planning.
According to the 2021 WHO central nervous system (CNS) classification system, the ATRX (alpha-thalassemia/mental retardation, X-linked) retained and 1p/19q-codeleted group defines a WHO Grade 2 or Grade 3 1p/19q-codeleted oligodendroglioma; ATRX loss and homozygous deletion of CDKN2A/B is sufficient to classify IDH-mutant glioma as WHO Grade 4, and those without CDKN2A/B deletion are WHO Grade 2 or 3 astrocytoma 6,7. Therefore, multiple possible WHO Grades can be designated within a biomarker-defined diagnostic entity, representing a major departure from prior histology-based CNS tumor classifications and highlighting the importance of molecular biomarkers in guiding glioma treatment 6. Molecular biomarkers currently used for IDH-mutant glioma classification have complex interrelationships, and multiple other molecular biomarkers are emerging as potential new candidates. Therefore, a systematic selection and integration of candidate biomarkers for risk assessment of IDH-mutant glioma is warranted. To this end, the objective of this study is to train and validate a supervised machine-learning (ML) based algorithm to identify IDH-mutant glioma patients at high risk of early progression.
Supervised ML is now widely used in the medical field to produce models and classifiers from training data for automation of tasks. An artificial neural network (ANN) is a subtype of ML technology that can analyze large datasets as inputs and make predictions with the probability of accuracy as outputs 8,9. An ANN with two or more hidden layers is often called a deep neural network (DNN), which is particularly robust in making predictions for complex situations 10,11. Basic requirements for supervised DNN training include identification of relevant inputs with reduced dimensionality and redundancy, as well as a set of accurately labeled training data as output values 12. Due to the longevity and lack of accurate long-term follow-up data of IDH-mutant patients, disease progression information in large public datasets is often censored. We attempted to identify genetic and clinical features that have no direct causal relation with IDH status, and to use them to identify IDH-mutant glioma patients that have a genetic background similar to IDH-wildtype.
Patient selection
Training and validation of artificial neural networks (ANNs) were carried out using WHO Grade 2 and Grade 3 cases from The Cancer Genome Atlas dataset. Copy number variations (CNVs) of genes such as PTEN, EGFR, and CDKN2A, mutation status of genes such as IDH1, TP53, and ATRX, clinical data including age, gender, progression-free interval (PFI), and overall survival (OS) days, as well as histological classifications of the TCGA cases were derived from the UCSC Xena platform (https://xenabrowser.net/) (Fig. 1). CNV and RNA-Seq raw data were processed using GISTIC2.0 and Log2(norm_count + 1) algorithms, respectively. 1309 and 404 Grade 2 and Grade 3 glioma cases from the AACR GENIE v13.0 and Memorial Sloan Kettering (MSK) datasets, respectively, were derived from the publicly accessible cBioPortal (https://www.cbioportal.org/). This retrospective study followed the STROBE reporting guideline for publicly available datasets, including the TCGA and MSK datasets included in the cBioPortal. Data downloaded from a publicly available cBioPortal database does not require ethical approval. All patients whose samples were used in this analysis signed informed consent (https://docs.cbioportal.org/userguide/faq/). IDH1-mutant WHO Grade 3 cases with subtotal resection (STR) who received adjuvant concurrent chemoRT were derived from Northwestern University (NU) (n = 21). Patient data were accessed with the approval of the Institutional Review Board (Study number STU00213078, August 2020), and the study was performed in accordance with the 45 Code of Federal Regulations Part 46 (45 CFR 46), Protection of Human Subjects (https://irb.northwestern.edu/about/, irb@northwestern.edu). The workflow of datasets used for construction and validation of the model is illustrated in Fig. 2.
Genomic alterations and genetic mutations
Comparison and alignment of WHO Grade 2 and Grade 3 cases from the TCGA dataset (n = 516) on the most frequently altered chromosome cytobands were conducted in Firebrowse (http://firebrowse.org/). Cases were first aligned by patients' age at glioma diagnosis. Corresponding cases with mutated genes were indicated, and types of mutations, including frameshift, splice site, missense, inframe, and synonymous mutations, were color-coded. The most frequently mutated genes were listed. Copy number gain and loss are also listed based on the frequency of alterations. The prevalence of genetic mutations and CNVs of WHO Grade 2 and Grade 3 patients was studied across different age groups (18-40, 40-60, > 60) in the MSK dataset (n = 279). Cases with age at diagnosis less than 18 were included in the 18-40 subgroup. Genetic markers from 1309 glioma patients of the AACR GENIE v13.0 dataset were subgrouped into IDH1-mutant (IDH1_MT) and -wildtype (IDH1_WT) and aligned based on CNV status of EGFR, CDKN2A, and PTEN, as well as mutation status of TP53 and ATRX.
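Prevalence comparisons between the IDH1_MT and IDH1_WT cohorts (such as the EGFR, CDKN2A and PTEN CNV frequencies reported in the abstract) can be tested with a standard contingency-table test. The sketch below uses scipy's Fisher exact test on illustrative counts chosen only to mirror the reported percentages; the actual per-cohort counts come from the datasets themselves.

```python
from scipy.stats import fisher_exact

# Illustrative counts only (mirroring a 3% vs. 21% EGFR CNV prevalence).
#                  [CNV present, CNV absent]
idh1_mutant   = [9, 291]    # 3% of 300 hypothetical IDH1_MT cases
idh1_wildtype = [42, 158]   # 21% of 200 hypothetical IDH1_WT cases

odds_ratio, p_value = fisher_exact([idh1_mutant, idh1_wildtype])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.1e}")
```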
Data preprocessing
Input data preprocessing was carried out in a Jupyter Notebook using the Python programming language. TCGA cases that had missing data on any input parameters were dropped. In the binary output, '0' stands for IDH1 mutated and '1' stands for IDH1_WT. Genetic inputs with missense mutations or truncating mutations (including nonsense, frameshift, nonstart, nonstop, and splice mutations) were considered positive and were assigned '0'; wildtype inputs were assigned '1'. Cases were randomly split, with 70% assigned to the training set and 30% assigned to the validation set. Inputs were selected and tested based on their variation prevalence in glioma (> 20%), and features with a correlation coefficient of > 0.2 or < −0.2 were considered to be positively or negatively correlated.
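A minimal sketch of this preprocessing is shown below. The column names, file name and exact encoding of the TCGA export are assumptions for illustration; only the logic (dropping incomplete cases, binary 0/1 encoding, the |0.2| correlation cut-off and the 70/30 split) follows the description above.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical column names; the actual TCGA export may differ.
features = ["EGFR_CNV", "CDKN2A_CNV", "PTEN_CNV", "TP53_mut", "ATRX_mut", "age"]

df = pd.read_csv("tcga_grade2_3_glioma.csv")           # assumed file name
df = df.dropna(subset=features + ["IDH1_status"])      # drop cases with missing inputs

# Binary output: 0 = IDH1 mutated, 1 = IDH1 wildtype (genetic inputs are assumed
# to be encoded the same way: altered = 0, wildtype = 1; age is kept numeric).
df["IDH1_label"] = (df["IDH1_status"] == "wildtype").astype(int)

# Keep features whose Pearson correlation with the label exceeds |0.2|
corr = df[features + ["IDH1_label"]].corr()["IDH1_label"].drop("IDH1_label")
selected = corr[corr.abs() > 0.2].index.tolist()

X_train, X_val, y_train, y_val = train_test_split(
    df[selected], df["IDH1_label"], test_size=0.30, random_state=0)
```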
ANN model construction and performance assessment
The Sequential model was imported from the Keras Python library. Briefly, a Dense layer was deployed for each layer, with the relu activation function for all the hidden layers. Since it is a binary classification task, sigmoid and adam were chosen as the activation function of the output layer and the optimizer, respectively. The loss function was set with the 'binary_crossentropy' command. The 'early_stop' callback and the accuracy metric were deployed to prevent overfitting and to evaluate the models' performance, respectively. Accuracy and loss for both the training set and the validation set were plotted for each epoch. Figure 3C is a schematic overview of the architecture of the ANN (6/6/6/6/1). The best performing ANN was named GlioPredictor for prediction of glioma early progression, with weights and biases derived from Python and reconstructed in Microsoft Excel®. A GlioPredictor score was calculated as 100 minus the integer part of the sigmoid activation value multiplied by one hundred: GlioPredictor Score = 100 − INT(100 × 1/(1 + e^(−x))).
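The following sketch illustrates the described Keras setup and the score calculation, continuing from the preprocessing sketch above. Reading the "6/6/6/6/2/1" notation as 6 input features followed by hidden layers of 6, 6, 6 and 2 units and one sigmoid output is our interpretation, and the number of epochs, batch size and early-stopping patience are illustrative values not stated in the text.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping

# 6 inputs -> hidden layers of 6/6/6/2 units -> 1 sigmoid output (assumed architecture reading)
model = Sequential([
    Dense(6, activation="relu", input_shape=(X_train.shape[1],)),
    Dense(6, activation="relu"),
    Dense(6, activation="relu"),
    Dense(2, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

early_stop = EarlyStopping(monitor="val_loss", patience=25, restore_best_weights=True)
history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                    epochs=500, callbacks=[early_stop], verbose=0)

def gliopredictor_score(x):
    """GlioPredictor score = 100 - INT(100 * sigmoid(x)), where x is the
    pre-activation value of the output neuron."""
    return 100 - int(100 * (1.0 / (1.0 + np.exp(-x))))
```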
Statistical analysis
Progression-free survival (PFS) analyses were carried out using GraphPad Prism version 8.0. Patients at risk at major time points were listed. Log-rank analysis was used to compare survival curves. Violin plot and one-way ANOVA analyses of the GlioPredictor score for the TCGA dataset were also carried out using GraphPad Prism. Python 3.9.0 was used for data analysis and model construction. Correlation analysis was performed using the 'corr()' command, which corresponds to pairwise Pearson analysis. ROC curve and AUC score analyses were conducted using the 'roc_curve' and 'roc_auc_score' functions from the sklearn package. Univariate and multivariate analyses were carried out using IBM® SPSS®. All statistical tests were 2-sided, and p-values smaller than 0.05 were considered statistically significant.
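A sketch of the Python part of this analysis, continuing from the model sketch above; the held-out cases are used to compute the ROC curve and AUC, and pairwise Pearson correlations are obtained from pandas' corr():

```python
from sklearn.metrics import roc_curve, roc_auc_score

y_prob = model.predict(X_val).ravel()             # sigmoid outputs in [0, 1]
fpr, tpr, thresholds = roc_curve(y_val, y_prob)   # ROC curve points
print(f"AUC = {roc_auc_score(y_val, y_prob):.2f}")

# Pairwise Pearson correlations of the candidate features and the label
corr_matrix = df[features + ["IDH1_label"]].corr(method="pearson")
```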
Different age at IDH-mutant glioma diagnosis reflects unique genetic features
Age > 40 years at diagnosis was the criterion adopted by multiple guidelines in risk stratification for glioma patients 13,14. We first tested potential genetic discrepancies of WHO Grade 2 and 3 diffuse glioma cases diagnosed at different ages. Progression-free survival (PFS) data were derived from the Memorial Sloan Kettering (MSK, n = 250) dataset. Patients were subgrouped based on age at disease diagnosis (18-40, 41-60, > 60) (Fig. 3). We found that 81% of patients aged 20-40 have an IDH1 mutation, compared to 31% of those aged > 60 (Fig. 3A). Younger glioma patients have significantly better PFS rates (Fig. 3C, p < 0.001). However, for IDH1-mutant (IDH1_MT) glioma cases (n = 182), age at disease diagnosis had no significant impact on PFS rates (Fig. 3D, p = 0.89). This finding is also true in the independent TCGA dataset (Supplementary Fig. 1). Genes with the most prevalent mutations at different age groups were presented for both IDH1_WT and IDH1_MT glioma (Fig. 3A,B). We found that, regardless of IDH1 mutation status, younger glioma patients have a statistically significantly higher prevalence of TP53 or ATRX mutations, and a lower prevalence of PIK3CA mutations (p < 0.05, Fig. 3A,B). These data indicate that age at glioma diagnosis reflects a unique genetic background and that age alone cannot predict progression of IDH1_MT glioma.
GlioPredictor in prognosticating glioma treatment response
We further evaluated the potential of GlioPredictor in prognosticating adjuvant treatment response. In TCGA_IDH1_MT WHO Grade 2 and 3 glioma patients treated with adjuvant radiotherapy (w/RT, n = 211), no significant difference in PFS was observed between those with GlioPredictor < 50 (n = 11) vs. GlioPredictor ≥ 50 (n = 200, p = 0.1, Fig. 6C). For those without adjuvant RT (w/o RT, n = 147), we found statistically significantly worse PFS for the cohort with GlioPredictor < 50 (n = 14) vs. GlioPredictor ≥ 50 (n = 133, p = 0.029, Fig. 6C), indicating adjuvant RT is warranted in this molecularly high-risk glioma cohort. We then studied the potential of GlioPredictor in prognostication of glioma patients who histologically would warrant adjuvant treatment. IDH1_MT WHO Grade 3 cases with subtotal resection (STR) who had received adjuvant concurrent temozolomide and RT were derived from the Robert H. Lurie Comprehensive Cancer Center of Northwestern University (NU_STR). We found that 87% of NU_STR patients with a GlioPredictor score < 50 had enhancing lesions on their initial post-surgical radiological imaging, vs. 52% of patients with a GlioPredictor score ≥ 50, as shown in Fig. 6E. For PFS analysis, NU_STR patients with GlioPredictor < 50 (n = 7) had earlier disease progression than those with GlioPredictor ≥ 50 (n = 14; p = 0.0238) (Fig. 6D).
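The survival comparisons above were performed in GraphPad Prism; an equivalent analysis can be scripted, for example with the lifelines package (not used in the paper). The sketch below assumes a DataFrame `cohort` with hypothetical columns for PFS time, progression status and GlioPredictor score.

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# `cohort` is an assumed DataFrame with columns:
#   'pfs_months'  - time to progression or censoring
#   'progressed'  - 1 = progression observed, 0 = censored
#   'score'       - GlioPredictor score
high_risk = cohort[cohort["score"] < 50]
low_risk  = cohort[cohort["score"] >= 50]

kmf = KaplanMeierFitter()
kmf.fit(high_risk["pfs_months"], event_observed=high_risk["progressed"], label="GlioPredictor < 50")
ax = kmf.plot_survival_function()
kmf.fit(low_risk["pfs_months"], event_observed=low_risk["progressed"], label="GlioPredictor >= 50")
kmf.plot_survival_function(ax=ax)

result = logrank_test(high_risk["pfs_months"], low_risk["pfs_months"],
                      event_observed_A=high_risk["progressed"],
                      event_observed_B=low_risk["progressed"])
print(f"log-rank p = {result.p_value:.4f}")
```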
Discussion
This study provided a functional deep neural network (DNN) that can identify high-risk IDH-mutant glioma patients and assist with prognostication for post-operative management. The model we built first predicted IDH1 mutation status in the TCGA dataset with 90% accuracy and an AUC score of 0.91 using 6 readily available genetic and clinical characteristics: TP53 and ATRX mutation status, CNVs for PTEN, EGFR, and CDKN2A, and age at diagnosis. We then used the trained model to generate the GlioPredictor score, with a lower score reflecting a genetic background similar to IDH1 wildtype. We then demonstrated that a low GlioPredictor score can identify a group of IDH1-mutant patients at higher risk of early progression. Therefore, GlioPredictor assessment is capable of integrating important molecular features and clinical information into a simplified risk stratification score. Clinical trial results of Radiation Therapy Oncology Group (RTOG) 9802 and the European Organization for Research and Treatment of Cancer (EORTC) 22033-26033 suggest adjuvant radiotherapy (RT) either alone or in combination with chemotherapy for high-risk WHO Grade 2 glioma patients 13,14,19,20. Prior to the molecular biomarker-based WHO 2021 classification, high-risk WHO Grade 2 glioma patients were often defined as patients with age > 40 years or a less than gross total resection, the criterion adopted from the RTOG 9802 trial and recommended in the most recent NCCN guidelines 13,14. It is now known that risk assessment and corresponding treatment planning should incorporate a tumor's genetic features as critical decision-making factors. However, molecular biomarkers have complex biological implications and are often interrelated; as such, a method to systematically assess IDH-mutant glioma patients can add prognostic value.
The role of immediate adjuvant radiotherapy (RT) in IDH-mutant management is debatable, and concerns regarding RT-induced long-term neuropsychological side effects are not negligible 19,21,22. All the molecular markers evaluated in this study, i.e., TP53, ATRX, PTEN, EGFR, and CDKN2A, have been proposed as radiosensitivity biomarkers of glioma. The GlioPredictor model integrates these markers with clinicopathologic information to provide a tool to evaluate the role of radiation therapy in glioma patients.
While we believe our model has good efficacy and applicability, several drawbacks remain that await further study. First and foremost, the sample size and tumor characteristics are limited based on features reported in public datasets. If more samples were available to train the neural network, we believe the performance would be further improved. Secondly, treatment-related details were not available for the datasets involved in model training, validation, and cross validation. Thirdly, prospective studies are required to demonstrate the clinical applicability of the model, especially since the definition of glioma progression was not clearly specified in those public datasets. Also, although we tested the trained model in several independent datasets, GlioPredictor was trained using the TCGA dataset alone and, therefore, sample bias and tumor heterogeneity may compromise the clinical applicability of the model. Last but not least, multiple clinical parameters such as size of the tumor, extent and anatomical location of tumor involvement, extent of resection, neurological deficits, histology subtypes, gender, history of seizures, treatment received, patient baseline performance, as well as other biomarkers were not incorporated in the current version of GlioPredictor. Those parameters not included are critical for disease status evaluation and treatment recommendation, and can be potential confounding factors of the GlioPredictor score. Furthermore, the GlioPredictor model was not validated on paired recurrent tissues or paired progression brain MRI. Utilization of GlioPredictor is not a replacement for those known risk assessment criteria. Instead, it is intended to facilitate comprehensive molecular assessment of glioma when clinical decision-making is increasingly dependent upon a panel of seemingly unrelated biomarkers ranging from copy number variation to mutations.
Figure 1 .
Figure 1. Schematic overview of model training. Left panel, illustration of the GlioPredictor structure. The neural network construction starts with identification of proper features, and trial and error in refining the inputs and model hyperparameters. New features can be added if they can further improve the performance of the model.
Figure 4 .
Figure 4. Identification of molecular markers as potential inputs for neural network construction. (A) WHO Grade 2 and 3 glioma cases from the TCGA dataset were aligned based on age at diagnosis. Cases with copy number gain (top panel) or loss (lower panel) on cytobands that have the most frequent copy number variations (CNVs) were color-coded. Genes selected as inputs in our final model of the neural network are indicated. (B, C) 1309 WHO Grade 2 and 3 glioma patients of the AACR GENIE v13.0 dataset were grouped into IDH1 mutated (IDH1_MT) and IDH1 wildtype (IDH1_WT) cohorts, and aligned based on CNV status of EGFR, CDKN2A, and PTEN, as well as mutation status of TP53 and ATRX. NA, data not available; SCNAs, number of somatic copy number alterations.
Figure 5 .
Figure 5. Artificial neural network (ANN) feature selection, target identification and ANN construction. (A) Correlation study of features (CNVs of EGFR, CDKN2A, PTEN, and mutation status of TP53, IDH1, ATRX, and age at diagnosis). (B) Evaluation of prediction accuracy for both the test dataset and train dataset. (C) Evaluation of the loss function for both the test dataset and train dataset. (D) ROC curve analysis of the built neural network model. (E) Schematic overview of the ANN model (6/6/6/6/2/1). Features selected as inputs are indicated.
Table 1 .
Univariate and multivariate Cox Regression analyses for progression free survival (PFS) for LGG patients with age and genetic alterations as covariables. | 2024-01-26T06:17:34.908Z | 2024-01-25T00:00:00.000 | {
"year": 2024,
"sha1": "6c5bf1d32f739ef809f1080eef724aac5209382f",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-024-51765-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ea0c915b3d92457e911a07207fedd1250580e53b",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3318923 | pes2o/s2orc | v3-fos-license | Neighbours of cancer-related proteins have key influence on pathogenesis and could increase the drug target space for anticancer therapies
Even targeted chemotherapies against solid cancers show only moderate success, increasing the need for novel targeting strategies. To address this problem, we designed a systems-level approach investigating the neighbourhood of mutated or differentially expressed cancer-related proteins in four major solid cancers (colon, breast, liver and lung). Using signalling and protein–protein interaction network resources integrated with mutational and expression datasets, we analysed the properties of the direct and indirect interactors (first and second neighbours) of cancer-related proteins not previously found to be related to the given cancer type. We found that first neighbours have at least as high degree, betweenness centrality and clustering coefficient as cancer-related proteins themselves, indicating a previously unknown central network position. We identified a complementary strategy for mutated and differentially expressed proteins, where the effect of differentially expressed proteins having smaller network centrality is compensated by high-centrality first neighbours. These first neighbours can be considered as key, so far hidden, components in cancer rewiring, with similar importance as mutated proteins. These observations strikingly suggest targeting first neighbours as a novel strategy for disrupting cancer-specific networks. Remarkably, our survey revealed 223 marketed drugs already targeting first neighbour proteins but applied mostly outside oncology, providing a potential list for drug repurposing against solid cancers. For the very central first neighbours, whose direct targeting would cause several side effects, we suggest a cancer-mimicking strategy by targeting their interactors (second neighbours of cancer-related proteins, which occupy a central, protein-affecting position similar to that of the cancer-related proteins). Hence, we propose to include first neighbours in network medicine-based approaches for (but not limited to) anticancer therapies.
INTRODUCTION
Cancer is increasingly being considered a "systems" disease, based on the observation that genetic changes and environmental influences rewire cellular networks during carcinogenesis. 1 Combinations of classical chemotherapies have been successfully applied against fast-proliferating haematological cancers, such as acute myeloid or lymphoid leukaemia. 2 However, chemotherapy has shown only moderate effect against solid cancers like colon cancer or non-small cell lung cancer. 2 Hence, even today the most effective therapeutic solution against solid cancers is in many cases surgery. Although the newest, targeted therapies of solid cancer enhance patient survival, malignant cells often display fast evolution, and thereby develop drug resistance. 3 Therefore, to achieve a higher success rate in curing solid cancers, new therapeutic approaches are required, such as the identification of suitable proteins that can serve as novel, alternative drug targets for treatment. In the following two paragraphs, we describe two sets of proteins that are in the major focus of current anticancer research: proteins encoded by mutated genes, and proteins having a differential expression in normal and disease states.
The number of mutated genes directly involved in carcinogenesis is very low compared to the number encoded by the whole genome. Vogelstein and his colleagues described 138 so-called driver genes, 4 which are directly involved in cancer progression. The Cancer Gene Census (CGC) database contains 547 such genes across various cancer types. 5 Remarkably, a few driver genes carrying specific point mutations appear to be sufficient to rewire signalling networks in cancer, 1 which at the same time shows that, at least from the mutational side, cancer does not consist of an "infinite" number of different diseases, and in many cases treatment options targeted against driver genes might be transferred from one case to the next. Biological knowledge and network-based approaches have been developed to understand the mechanisms of driver gene influence. Pathway analysis [6][7][8] showed that most driver genes are part of central signalling pathways, like MAPK, TGF-β, JAK/STAT, Notch, Hedgehog and WNT, 4,9 which are implicated in carcinogenesis, growth and differentiation. However, in most cases, pathway analysis does not explain why some pathway members are much more often found to be mutated than others. To understand the selection mechanism behind mutations, network-based studies were used to estimate the importance of a mutated protein compared to non-mutated ones in signalling and protein-protein interaction networks. [10][11][12][13] Proteins mutated in cancer were found to have a high number of interacting partners (i.e., a high degree of connectivity), which indicates high local importance. 10 Mutated proteins are also often found in the centre of the network, in key global positions, as quantified by the number of shortest paths between all pairs of proteins that pass through them (i.e., they have high betweenness centrality; hereafter called betweenness). 11,12 Mutated proteins also have high clustering coefficients, which means their neighbours are also neighbours of each other. 10,13 Moreover, neighbourhood analysis of mutated proteins has previously been used successfully to predict novel cancer-related genes. 14,15 However, to the best of our knowledge, no study has concentrated particularly on the topological importance of first neighbours of mutated proteins in cancer, and their usefulness as drug targets themselves.
The other frequently studied group of genes in cancer biology is the set of differentially expressed genes (DEGs). Since microarray and next-generation sequencing data became widespread, an increasing number of genes were found to differ in expression between cancerous and healthy cells, either by upregulation or by downregulation. 16,17 To find the DEGs most relevant for disease occurrence and progression, one approach is to select those DEGs that have the most central position in the network. 18,19 We note that these studies do not take into account the interaction neighbourhood of DEG-coded proteins if the neighbour proteins have unchanged expression. Pathway analysis is another approach to prioritize DEGs by identifying those DEGs that have been annotated to function in cancer-related pathways. 20 Although here the interactions of a DEG-coded protein provide the evidence for the pathway function, network and pathway analysis based studies also do not consider the network parameters or neighbourhood of proteins coded by DEGs. The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium studies extended the scope of DEG analysis by joint examination with mutated proteins. 21 These studies succeeded in finding different disease clusters in solid cancers. 17,22 Nonetheless, these studies have not yet focused on the role of first neighbours in cancer, other than noting that first neighbours could connect cancer-related proteins. 23 Prompted by the lack of a focused analysis of the neighbourhood of cancer-related proteins, we compared the network features of proteins coded by mutated genes and DEGs with those of their first neighbours. We also aimed to investigate whether these first neighbour proteins could be considered a potential set of novel anticancer drug targets, in particular for solid cancers, which are in dire need of new treatment modalities. To provide generalizable results, we selected four solid cancer types with a high mortality rate, namely colon, breast, and hepatocellular carcinoma (HCC), as well as non-small cell lung cancer (NSCLC). Given that the aim of this study was to investigate general trends among first neighbours of cancer-related proteins, all known subtypes of these cancers were included in the analysis, listed in Supplementary Table 1. For each examined cancer type, we then measured the network parameters of cancer-related proteins and their first neighbours in multiple protein-protein interaction and signalling network resources and were able to show that first neighbours have network centrality parameters as high as those of the cancer-related proteins themselves. This led us to evaluate the therapeutic applicability of the first neighbours in anticancer treatment. To select the most relevant first neighbour proteins for oncology, we suggest and provide examples for two complementary strategies: (1) a drug target discovery approach focusing specifically on the first neighbours of differentially expressed proteins and presenting a network medicine-based target selection approach; and (2) a drug repurposing approach based on analysing existing drugs and compounds.
RESULTS
Identification of cancer-related proteins and their first neighbours
We considered a protein cancer-related if it was mutated or differentially expressed in cancer. We collected mutation and expression data from the CGC 5 and the Gene Expression Omnibus (GEO) 24 resources, respectively. We defined a protein as differentially expressed in a given tissue if the corresponding mRNA was present only in normal tissue and absent in cancerous tissue, or vice versa. After calculating the mean and standard deviation of our datasets, we classified proteins with expression levels below the mean minus one standard deviation as not expressed, thereby discretising gene expression to an on/off value (see Methods for details). By combining the lists of mutated and differentially expressed genes, for each examined cancer type separately, we defined a protein as cancer-related if (1) its coding gene was listed in the CGC as mutated in the given cancer type, and/or (2) its corresponding mRNA was found to be differentially expressed between the control and cancer tissues. We also constructed tissue-specific networks from the differential expression data to examine the network effect of differentially expressed cancer-related proteins. An interaction from a signalling or protein-protein interaction network was considered valid if both interactors were expressed in a given tissue, regardless of whether the tissue was normal or cancerous.
First neighbours of cancer-related proteins were defined as proteins (1) directly and physically interacting with cancer-related proteins in human interactomes and signalling networks according to the network databases used (see below); and (2) which were not cancer-related proteins themselves. We also defined as "unaffected" those proteins that are neither first neighbours of cancer-related proteins nor cancer-related proteins themselves in the given cancer type. The cancer-type-specific analysis is important, as some proteins classified as unaffected in one cancer type can be cancer-related in another cancer type. Accordingly, it is worth distinguishing, among the many unaffected proteins, those that have a directed interaction towards a first neighbour, as they occupy a position similar to that of a cancer-related protein. We termed these distinctive proteins influencer proteins.
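As an illustration only (not the authors' code), the classification above can be sketched in a few lines of Python; the input structures and names are assumptions made for this example:

```python
# Hypothetical sketch of the protein classification described in the text.
def classify_proteins(undirected_edges, directed_edges, mutated, diff_expressed):
    """undirected_edges: set of frozensets {a, b} of expressed, interacting proteins.
    directed_edges: set of (source, target) tuples where direction is known.
    mutated, diff_expressed: sets of UniProt ACs for the given cancer type."""
    all_proteins = {p for edge in undirected_edges for p in edge}
    cancer_related = (mutated | diff_expressed) & all_proteins

    # First neighbours: direct interactors of cancer-related proteins that are
    # not cancer-related themselves.
    first_neighbours = set()
    for edge in undirected_edges:
        if len(edge) != 2:            # skip self-loops
            continue
        a, b = tuple(edge)
        if a in cancer_related and b not in cancer_related:
            first_neighbours.add(b)
        if b in cancer_related and a not in cancer_related:
            first_neighbours.add(a)

    unaffected = all_proteins - cancer_related - first_neighbours

    # Influencers: unaffected proteins with a directed interaction towards a first neighbour.
    influencers = {src for (src, tgt) in directed_edges
                   if src in unaffected and tgt in first_neighbours}
    return cancer_related, first_neighbours, unaffected, influencers
```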
For the interaction and network data we used three detailed signalling network resources, SignaLink 2, 25 Reactome, 26 and a cancer signalling network compiled by Cui et al., 10 as well as two more global protein-protein interaction (PPI) networks, namely the manually curated HPRD resource, 27 and the integrated dataset comprising DIP, 28 IntAct, 29 and BioGrid. 30 All these networks have different compilation protocols and thereby provide partially different information. We performed all analyses separately with each network resource, to avoid the study and curation bias of using a single resource and thereby to provide more general conclusions from this work. We developed cancer-specific and tissue-specific networks by combining expression datasets with network information (see Methods for details). We list the protein classifications for each cancer type in Supplementary Table 2.
First neighbours of cancer-related proteins have high local and global centrality in the network
We found cancer-related proteins in locally and globally central positions of the networks in all examined cancer types (measured as degree and betweenness, respectively), in agreement with previous studies. [10][11][12][13]18,19 Throughout the main text, we show the results for colon cancer from the SignaLink network resource.
All results for the other three cancer types with the other four network resources are shown in the supplementary materials. Interestingly, most of the cancer-related proteins do not directly interact with each other, i.e., cancer-related proteins do not form one giant component in the graph (Fig. 1a and Supplementary Table 3; exact test p > 0.05). Meanwhile, first neighbours of the cancer-related proteins form a larger giant component within the network than expected by chance, compared to the same number of randomly selected proteins in the same network (Fig. 1a, exact test p < 0.001; for the other cancer types and other databases, see Supplementary Table 3; for the methodological details, see Methods). Cancer-related proteins are connected to each other by their first neighbours, and these first neighbours also connect the unaffected proteins. In other words, to signal to another cancer-related protein, and to further parts of the signalling network, cancer-related proteins make extensive use of their first neighbours (Fig. 1a). Therefore, the rewiring effect (i.e., the number of affected biological processes) of cancer-related proteins is significantly higher if we consider those processes that they reach through their first neighbours (p < 0.001, Bernoulli test; see Supplementary Table 4 and Methods for details). For example, through their first neighbours, colon cancer-related proteins affect important processes implicated in carcinogenesis, such as angiogenesis, autophagy, and DNA repair, which are not direct functions of the cancer-related proteins themselves.
In a similar fashion to cancer-related proteins, their first neighbours also have higher degree, betweenness and clustering coefficient compared to either the whole network or the proteins unaffected by cancer (p < 0.001 in both cases, Wilcoxon rank sum test; Fig. 1b, Supplementary Table 5). Remarkably, compared to colon cancer-related proteins, first neighbours have a higher degree (p < 0.05, Wilcoxon rank sum test), similar betweenness and a higher clustering coefficient (p < 0.05, Wilcoxon rank sum test). Likewise, we found similar significant differences in the network topology parameters of first neighbours in the other three cancer types (breast, HCC, NSCLC) (Supplementary Fig. 1 and Supplementary Table 5). In addition, examining the other four (signalling and PPI) network sources provided similar results to those from the SignaLink network (Supplementary Table 5 and Supplementary Fig. 2). There was only one minor exception: in the Reactome network the clustering coefficient was not found to differ between the first neighbours and the unaffected proteins (Supplementary Fig. 2b), due to the high number of protein complexes within this database. 26 We also measured the network centrality parameters in the non-tissue-specific (original) networks. We found significant differences similar to those measured in tissue-specific networks, indicating that the differences between the network parameters of cancer-related proteins and their first neighbours can be observed not only in context-specific networks but in molecular networks in general.
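As an aside, such group-wise centrality comparisons can be reproduced with a short SciPy sketch; the grouping of values is a placeholder and not the authors' pipeline:

```python
# Minimal sketch: Wilcoxon rank-sum comparison of centrality distributions,
# with the whole network serving as control (as in the text).
from scipy.stats import ranksums

def compare_to_control(values_by_group, control_group="whole_network"):
    """values_by_group: dict mapping a group name to a list of centrality values."""
    control = values_by_group[control_group]
    pvalues = {}
    for group, values in values_by_group.items():
        if group == control_group:
            continue
        _stat, p = ranksums(values, control)
        pvalues[group] = p
    return pvalues
```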
Next, we tested the robustness of the first neighbour selection with a randomization method (see Supplementary Method). Encouragingly, we found that in all four cancer types the "real" first neighbours were listed as first neighbours of the randomly generated sets more often than the unaffected proteins (p < 0.01, Wilcoxon rank sum test; Supplementary Fig. 3). Similarly, in three out of the four cancer types, the "real" first neighbours were found as first neighbours of the randomly generated sets more often than the cancer-related proteins (p < 0.05; Wilcoxon rank sum test; Supplementary Fig. 3). The only exception was NSCLC, where the non-significant difference was due to three NSCLC-related proteins (RET, EGFR, PPARγ) having a high degree (i.e., many first neighbours expressed in the lung). In the other cancer types, these three proteins are not cancer-related but first neighbours or unaffected proteins, except for PPARγ, which is cancer-related in breast cancer. Thus, we confirmed that most of the identified first neighbour proteins can be listed based solely on the given network, and their list is largely independent of the initial set of cancer-related proteins. This observation emphasizes the relevance of integrating expression data with interaction networks to generate cell- and context-specific networks for such analyses.
Mutated proteins affect biological networks directly, differentially expressed proteins indirectly through their first neighbours
Encouraged by the finding that first neighbours of cancer-related proteins occupy a central network position, we investigated the relation between the network topology parameters of a cancer-related protein and those of its first neighbour proteins. From this analysis we found that cancer-related proteins show two distinct topology patterns in both signalling and PPI networks: mutated proteins have higher or similar degree, betweenness and clustering coefficient compared to their first neighbours, while differentially expressed proteins have lower degree and betweenness than their first neighbours (Fig. 2; see Supplementary Table 6 for all detailed statistics). Thus, the network position of cancer-related proteins and the topological parameters of their first neighbours are both associated with the alteration type (mutation or differential expression) of the cancer-related protein. We can formalise this observation as two distinct, complementary strategies in carcinogenesis: proteins with high network topology parameters have a high chance either to be mutated themselves, or to be indirectly affected through differentially expressed proteins.
From this integrative network analysis of four different cancer types in three signalling and two PPI databases, we conclude that first neighbours (1) have central positions in signalling and PPI networks; (2) connect cancer-related proteins; (3) act like the "glue" of the cancerous network, forming a giant component; (4) have a potential role in transducing the malignant effect of a cancer-related protein to alter cellular functions; and (5) allow differentially expressed proteins to exert their effects more extensively. Based on these observations, we now suggest a strategy for disrupting cancer-specific interactomes and signalling networks by targeting first neighbour proteins of cancer-related proteins, especially the first neighbours of differentially expressed proteins.
In the following we provide two complementary ways, with examples, for selecting the most potent first neighbours of cancer-related proteins as drug targets: (1) a drug target discovery approach focusing on first neighbours of differentially expressed proteins for selecting novel drug targets from scratch; and (2) a drug repurposing approach based on analysing existing drugs and compounds targeting first neighbours of cancer-related proteins.
Selecting novel anticancer targets using first neighbours of differentially expressed proteins
While (mutated) proteins with high network topology parameters have been the focus of drug discovery efforts in the past, 31 the results from the current study suggest that first neighbours of differentially expressed proteins play a similarly central role in cancer networks. Hence, we propose first neighbours of differentially expressed proteins as a novel way of selecting anticancer drug targets in the future.
Fig. 1 a Node sizes are proportional to their betweenness centrality. Green nodes represent cancer-related proteins, orange nodes are their first neighbours and grey nodes represent the proteins that do not belong to either of these categories (here termed "unaffected" proteins). Based on cancer-relatedness, sub-graphs have been created, with their major network properties shown below them. Note the different ratio of nodes in the giant component, which is the largest connected graph (exact test, ***p < 0.001). b Distribution of network topological properties for each subgraph: degree (local centrality), betweenness centrality (global centrality), and clustering coefficient (neighbourhood connectivity). White dots represent median values, boxes stretch from the 25th percentile to the 75th percentile, and whiskers are twice the length of the boxes. Wilcoxon rank sum tests have been used for statistical testing, with the whole network serving as control. The level of significance is represented using the following scale: *p < 0.05, **p < 0.01, ***p < 0.001.
In colon cancer, there are 82 proteins classified as first neighbours of differentially expressed proteins. To evaluate the oncological relevance of these proteins, and also to validate our classification process, we searched for scientific publications on colon cancer and these 82 first neighbour proteins. We found and manually checked 1820 publications. The validation part of our analysis can be found in the Supplementary Notes. As for the oncological relevance of these 82 proteins, we found 38 proteins (46%) to have indirect (31 proteins) or, in specific cases, direct (7 proteins) implications in carcinogenesis. Thus, nearly half of the first neighbours of differentially expressed proteins are already known or suspected to be associated with colon cancer, while the other half (44 proteins) could be considered novel genes potentially relevant for anticancer drug target discovery. However, as we pointed out previously, the first neighbours of differentially expressed proteins often have high degree and/or high betweenness, and therefore their pharmacological targeting could produce more side effects. 32,33 Accordingly, these proteins are rarely found mutated or differentially expressed in any cancer type, probably because most of them (such as SMAD3, GSK3β, ERK1) are multi-functional and have a central position. Therefore, to pharmacologically target these central first neighbours, we suggest a cancer-mimicking, indirect approach: the less central interactors of these first neighbours could serve as reasonable drug targets. In other words, either a differentially expressed protein itself or other interactors of the first neighbour could affect the central first neighbour protein with fewer side effects. This network-based reasoning is in agreement with current drug target selection attempts focusing on differentially expressed proteins based only on expression analysis. 34 Moreover, by using interaction data we can also extend the target space with the proteins classified in the first part of the Results section as influencer proteins: non-cancer-related, non-first-neighbour proteins that have a directed interaction towards a first neighbour.
From this network position, influencer proteins could efficiently affect these first neighbour proteins, and thereby they could be relevant novel anticancer drug targets (Fig. 3a). Note that, by definition, influencer proteins are two steps away from a differentially expressed protein; thus, their identification requires the integration of expression data and interaction data.
To investigate current research efforts in the direction proposed here, we examined the potential list of such influencer proteins in colon cancer. To avoid the interaction bias of any one resource and to provide more confident candidates, we combined the five network resources used in this study. We listed the influencer proteins having at least one directed interaction to a first neighbour of differentially expressed proteins in colon cancer (Supplementary Table 7). Note that with this approach, influencer proteins having interactions with different first neighbours in the different resources were also listed, emphasizing the relevance of these influencer proteins. We point out that the influencer proteins in colon cancer found in at least three different resources provide a reasonable number of proteins (197) for further investigation (Fig. 3b, and Supplementary Fig. 6 for the other cancer types). Supporting the oncological relevance of influencer proteins, we found 128 drugs available against influencer proteins, which is more than expected by chance (p < 0.001, χ2 test, compared to the rest of the proteins in the databases). Of the 128 drugs, 62 are already in clinical trials against cancer, and two (Dasatinib and Vandetanib) are currently used in practice according to our literature survey. Interestingly, half of these drugs are different ion channel inhibitors, and most of them could be used in specific cancer types. One such example is metformin, which causes energy deficiency both in colon cancer cell lines and in prostate cancer 35,36 by targeting different NADH dehydrogenases (influencer proteins) that interact with the cytochrome b-c1 complex (first neighbours of many cancer-related proteins). Interferon gamma (IFN-γ) could also be an interesting treatment option targeting influencer proteins, and as an immunotherapeutic agent it is under trial in colon cancer in combination with 5-fluorouracil or leucovorin. 37 We present a detailed example with IFN-γ and its receptor, IFN-γR, below.
Examples of the cancer-mimicking strategy
By analysing the list of all influencer proteins in all four examined cancer types, we highlight here examples of those influencer proteins that were found in the highest number of network resources. We present three specific cases in colon cancer, and one further example for each of the other three cancer types we examined. IFN-γR, a known drug target that we found as an influencer protein in four databases, and FRAT1, a protein listed as an influencer in all five databases but not yet a drug target, demonstrate two different types of connectivity pattern in colon cancer (Fig. 3c-d): (1) the influencer protein (IFN-γR) has the same position as the cancer-related protein; (2) the influencer protein (FRAT1) affects the cancer-related protein through the first neighbour.
IFN-γR is a heterodimer of IFN-γR1 and IFN-γR2 that forms the receptor of the interferon-γ cytokine, which is conventionally associated with antitumor mechanisms during cell-mediated adaptive immune responses. 38 Another key cytokine receptor in these immune responses is the IL10 receptor (IL10R), which is highly expressed in the normal colon but missing in colon cancer. Both receptors transduce their effects directly through the JAK1 and STAT1 proteins, 38 which we found to occupy central network positions in the examined networks. In cancer, the lack of IL10R decreases the production of antitumorigenic cytokines, and thereby also the activity of the IFN-γ pathway. As IFN-γR is also expressed in cancer and is directly connected to the same first neighbours (JAK1 and STAT1) as the cancer-related IL10R, IFN-γR may substitute for the role of IL10R upon IFN-γ treatment (Fig. 3c). Accordingly, IFN-γ has been used clinically to treat a variety of malignancies. 38 IFN-γ treatment is not always beneficial, and some clinical trials against melanoma pointed out that signalling context and tumour microenvironment factors could even turn IFN-γ into a pro-inflammatory and thus carcinogenic factor. 38 Nevertheless, studies agree that it could be a viable new therapeutic target for a subset of malignancies. 38
FRAT1 (Frequently Rearranged in Advanced T-cell lymphoma 1) is a known proto-oncogene in some cancer types that promotes the WNT signalling pathway by inhibiting GSK3β-mediated phosphorylation of β-catenin. 39,40 In the normal colon, the expression level of β-catenin is low, but its expression in cancer is high, causing dedifferentiation. GSK3β, the first neighbour of β-catenin, is a central, highly multi-functional protein known to be difficult to inhibit without causing side effects and toxicity. 41 In glioblastoma and NSCLC, previous studies found that a decreased level of FRAT1 allows GSK3β to phosphorylate β-catenin and thereby inhibit the WNT pathway. 39,40 The role of FRAT1 in colon cancer is less well known, but based on its function in other cancer types and its special influencing position in colon cancer signalling, we point out its relevance as a potential target in colon cancer therapy (Fig. 3d). FRAT1 was one of the two proteins that have this special position in all five examined network sources (Fig. 3b). Nonetheless, a recent in vitro study showed a somewhat opposite role for FRAT1 in a metastasis-suppressing pathway, which highlights the importance of stage- and context-specific treatments as well as the need for detailed in vivo studies. 42
DUSP3 (DUal Specificity protein Phosphatase 3) was the other influencer protein that we found in all five databases for colon cancer. DUSP3 is an influencer protein because it acts on ERK1 and ERK2, 43 two first neighbours of a colon cancer-related protein, DUSP4, which is expressed only in colorectal cancer cells and not in normal colon cells (Fig. 3e). DUSP3 and DUSP4 are both dual-specificity phosphatases that dephosphorylate tyrosine and threonine residues and inhibit ERK1 and ERK2. ERK1 and ERK2 are both high-centrality first neighbours, and they transduce cell proliferative as well as pro- and anti-apoptotic signals in a coordinated manner in the MAPK pathway. 44 The cancer-related DUSP4 overexpression blocks ERK1/2 signalling, and this leads to increased cell proliferation in colorectal cells. 45
Expression of DUSP3 is needed during intensive cell proliferation to inhibit the active MAPK signal, and blocking DUSP3 with compounds leads to cell cycle arrest. 46,47 Therefore, DUSP3 could be a potential drug target in colon cancer, similarly to what has been proven in cervical cancer, where targeting DUSP3 with small molecules led to decreased proliferation. 48 The concentration of compounds targeting DUSP3 was in the nanomolar IC50 range (Fig. 3e), according to ChEMBL, which makes them promising leads. 48
In breast cancer, DUSP4 is not a cancer-related protein, but it is situated in an influencer position according to all five databases. In breast, DUSP4 also acts on ERK1/2, along with a breast cancer-related protein, DUSP6 (Fig. 4a, Supplementary Fig. 6). During the progression of breast cancer, DUSP6 expression decreases according to our data and that of others. 49 Thus, during cancer progression, due to the less active DUSP6, ERK1/2 becomes more active. Interestingly, DUSP4 alone can still limit the ERK1/2 overactivation that leads to cancer stem cell formation and epithelial-mesenchymal transition (EMT) instead of cell-cycle arrest. 50 The inhibition of DUSP4 could prevent the EMT in breast cancer, 50 as well as causing cell cycle arrest, similarly to the inhibition of DUSP3 in colorectal cancer. 46 Therefore, DUSP4 inhibitors (such as the CHEMBL2146956 compound, which inhibits DUSP4 with an IC50 of 2.29 μM) are under intensive experimental investigation.
In NSCLC, a Notch pathway modulator called LFNG (lunatic fringe) was found as an influencer in all five databases (Fig. 4b, Supplementary Fig. 6). LFNG can affect the incoming signals of both the NOTCH1 and NOTCH2 receptors in a context-dependent manner. 51 NOTCH2 is a first neighbour of the NSCLC-related Midkine (MDK) protein, which is expressed only in NSCLC and not in normal lung, and is a known NSCLC biomarker. 52 MDK can also cause neuroblastoma through specifically activating NOTCH2. 53 LFNG acts like a double-edged sword: by promoting the Delta1 ligand-activated NOTCH1/2 receptors and decreasing the Jagged1 ligand-activated NOTCH1 signalling, it contributes to the normal, tumour-suppressor effect of NOTCH1. 51 However, if the NOTCH2-specific MDK is expressed (as in NSCLC) and acts together with LFNG, the overall role of LFNG is shifted to promote cancer progression, mostly through a NOTCH2-dependent EMT. 54 A drug selectively targeting the NOTCH2-activating function of LFNG may alter the malignant effect of MDK. The application of such "edgetic" drugs has increased in recent years, especially for difficult but promising drug targets (like LFNG). 31,55 Such an edgetic drug has the potential to promote and maintain the tumour-suppressive inhibition of LFNG on the Jagged1-NOTCH1 signalling, while blocking the LFNG-NOTCH2 stimulatory interaction. This NOTCH1-dependent tumour-suppressive effect of LFNG was confirmed in pancreatic cancer, supporting that LFNG could be a potential target of specific anticancer treatments. 56
In HCC, we found SOCS3 (Suppressor Of Cytokine Signalling 3) as an influencer of JAK2 signalling in all five databases (Fig. 4c, Supplementary Fig. 6). JAK2 is a first neighbour, with high network centrality parameters, of the HCC-related IL7R protein. IL7R is expressed in HCC but not in normal liver cells, and can activate JAK2 to increase cell growth. 57 SOCS3 is a negative regulator of the JAK/STAT signalling pathway, and can inhibit JAK2 itself. 58 Therefore, inhibition of JAK2 by SOCS3 could result in decreased proliferation. A possible way to increase SOCS3 expression in HCC would be adenoviral infection of hepatocellular carcinoma cells with a construct containing the SOCS3 gene. This was found to cause lysis in hepatocellular carcinoma cell lines, but not in normal liver cells, indicating that further experimental studies on SOCS3 could increase its future applicability as a cancer cell-specific anticancer agent. 59
Fig. 3 The potential role and number of influencer proteins. a The model of action of an influencer protein, which directly affects a first neighbour of a differentially expressed cancer-related protein. Network centrality differences are indicated by the size of the circles. Influencer proteins having lower network centrality parameters could be better drug targets than those first neighbours that are too central and multifunctional. b The number of influencer proteins of colon cancer in the overlap of the five network resources used in the current study. The number of proteins that are already drug targets is shown in parentheses. c-e The effect of three influencer proteins, IFN-γR, FRAT1 and DUSP3, on JAK/STAT, WNT and MAPK signalling, respectively, in normal, cancer and treated cases. In all three examples the influencer proteins could be better targets than the first neighbours of differentially expressed cancer-related proteins, which are too central, making them difficult to target pharmacologically. Note the colour codes for the proteins: green node: cancer-related, expressed protein; green empty node: cancer-related, not expressed protein; orange node: first neighbour protein; grey node: unaffected protein, influencer; arrow: stimulation; blunted arrow: inhibition. For the sake of clarity we present the heterodimer of IFN-γR1 and IFN-γR2 as "IFN-γR".
Fig. 4 Influencer proteins in breast cancer, NSCLC and HCC. a-c Effect of three further influencer proteins, DUSP4, LFNG and SOCS3, in breast cancer, non-small cell lung cancer and hepatocellular carcinoma, respectively. As in Fig. 3, we compared the probable effect of the influencer proteins on cancer-related proteins in normal, cancer and treated cases. DUSP4 affects ERK signalling (a), LFNG stimulates the Notch pathway (b), while SOCS3 is an inhibitor of the JAK/STAT pathway (c). Note the colour codes for the proteins: green node: cancer-related, expressed protein; green empty node: cancer-related, not expressed protein; orange node: first neighbour protein; grey node: unaffected protein, influencer; arrow: stimulation; blunted arrow: inhibition.
Drugs already targeting first neighbours of cancer-related proteins: a drug repurposing approach
Alternatively, direct targeting of central proteins is a feasible strategy in certain cases to destroy cancer cells, if the applied drug does not lead to serious side effects. 31 Based on the presented importance of first neighbour proteins in cancer, these often central proteins can also be considered as anticancer drug targets if efficient drugs (or compounds) can target them without causing major side effects. Therefore, we investigated the currently used drugs and existing compounds with activity against cancer-related proteins and their first neighbours in the SignaLink network, to point out and select potential new anticancer drugs (Fig. 5a). For this step we employed compound data from the ChEMBL resource, 60 and considered a compound to be a drug if it was listed as an 'approved drug' according to ChEMBL (see Methods for details). To provide information on the cancer specificity of the targets, we also analysed the occurrence of cancer-related proteins and their first neighbours in the different cancer types (Fig. 5b-c).
An analysis of approved drugs identified 223 drugs acting on first neighbour proteins and 122 drugs targeting cancer-related proteins (Fig. 5d-e). Importantly, of the 223 drugs against the first neighbours, only 27 (12%) currently have annotated indications in cancer, based on the anatomical therapeutic chemical 61 (ATC) classification (Supplementary Table 8a-d). Thus, the remaining 196 drugs serve as an already existing pool for repurposing existing drugs as novel anticancer agents. According to our PubMed literature survey (see Methods), 83 of these 196 drugs are already in clinical trials as potential anticancer agents. Sixty of these potential anticancer drugs are glucocorticoid steroids targeting the glucocorticoid receptor NR3C1, which is a first neighbour of the cancer-related protein PPARG in breast cancer and NSCLC. Pharmacological targeting of immune signal-modulating proteins such as NR3C1 and PPARG was found efficient in decreasing lymphangitic metastasis formation in breast cancer and NSCLC. 62 Another relevant example is tamoxifen, an anti-oestrogen targeting the oestrogen receptors, which are first neighbours of the breast cancer-related HER2 (ERBB2) protein. Accordingly, tamoxifen is already being used as an endocrine-modulating treatment against breast cancer. 34 Alternatively, a source of new anticancer drugs could be compounds targeting first neighbour proteins that have not yet been applied in oncology. The number of compounds in ChEMBL 60 against cancer-related proteins or their first neighbours, below a bioactivity cut-off of 500 nM, is 30 times higher than the number of approved drugs against cancer-related proteins or their first neighbours (Fig. 5a; Supplementary Table 9). Thus, these compounds represent a large collection of potentially relevant agents for anticancer treatment, provided they prove safe and efficient in preclinical, toxicology and clinical studies. Interestingly, the ratio between compounds targeting cancer-related and first neighbour proteins is the same as for the approved drugs (p > 0.05, χ2 test; Fig. 5a). The targeting landscape of these compounds is not homogeneous, as we observed cancer-specific differences in their distribution (Fig. 5f, g). Remarkably, we found 675 compounds (9.9% of the total number of compounds targeting first neighbour proteins) present in all four examined cancer types. The ratio of these compounds is higher than expected (p < 0.001, Bernoulli test, compared to the percentage of first neighbour proteins targeted in all four cancer types), thereby giving a rationale for their experimental testing across all these areas (Fig. 5g; Supplementary Table 10).
To provide evidence for the applicability of some of these compounds, we checked the literature, and for the 33 compounds that have an exact name we found that 10 are already in clinical trials against various cancer types. One such example is midostaurin, which targets multiple kinases, including MAPK9. MAPK9 is a first neighbour of a colon cancer-related protein, β-catenin, and inhibition of MAPK9 was found to down-regulate β-catenin. 63 Supporting the probable beneficial effect of midostaurin, a previous in vitro study found that midostaurin sensitized colon cancer cells to chemotherapeutic agents. 64
DISCUSSION
In this work we have shown that the first neighbours of cancer-related proteins have at least as central a position in various human signalling and PPI networks as the corresponding cancer-related proteins themselves (Fig. 1, Supplementary Fig. 1, Supplementary Table 5). Except for a few direct connections, cancer-related proteins are connected through their first neighbours (Supplementary Table 3), and they can affect more Gene Ontology biological processes via their first neighbours than alone (Supplementary Table 4).
When we examined signalling systems in cancer, we found two distinct strategies for how mutations and differentially expressed genes affect the network. Firstly, mutated proteins have higher or similar network centralities, such as degree or betweenness, compared to those of their first neighbours. Secondly, differentially expressed proteins have lower network parameters than their first neighbours. Thus, mutated cancer-related proteins appear to exert a more direct effect on the cellular signalling and PPI networks, whereas the differentially expressed proteins may (also) exert their effects indirectly via their first neighbours (Fig. 2). This dichotomy points out the importance of an indirect influence on proteins whose altered function is required during carcinogenesis but which are too essential to be mutated (i.e., mutation of their encoding genes could decrease the viability of cancer cells). Interestingly, differentially expressed proteins can influence these central proteins in a tissue- and context-specific way, without interfering with all the functions of the central protein. 65 The idea that central nodes of a network are often influenced by their neighbours is also supported by two recent studies on a fish community network and a phone call network of 14 million people. 66,67 By analysing these biological and social networks, the authors point out the role of so-called influential nodes that are directly connected to high-degree nodes of the network.
Fig. 5 Number of proteins involved in a particular cancer, as well as drugs and compounds acting on cancer-related proteins and their first neighbours. a Stacked columns show the number of proteins involved in a particular cancer, as well as the number of drugs and compounds acting on cancer-related proteins and their first neighbours, respectively. All stacked columns were compared to the cancer-related/all proteins ratio (Bernoulli test: p < 0.001). Focusing on first neighbours could provide a twofold increase in testable drugs and compounds for anticancer therapy. b-g The Venn diagrams show the distribution of available drugs and compounds against the different cancer types. The colours represent each examined cancer type: pink standing for breast, blue for NSCLC, brown for HCC and tan for colon cancer. b The distribution of cancer-related proteins. c The distribution of first neighbour proteins. d The distribution of drugs against cancer-related proteins. e The distribution of drugs against first neighbour proteins. f The distribution of compounds against cancer-related proteins. g The distribution of compounds against first neighbour proteins.
Current strategies in oncology mainly target the mutated cancer-related proteins themselves. However, given the two strategies by which cells drive cancer elucidated in this work, we have now highlighted the option of selecting first neighbours of differentially expressed proteins as drug targets. There are fewer drugs currently on the market, on a per-protein basis, against first neighbours than against cancer-related proteins themselves; however, the total number of drugs targeting first neighbours exceeds the number targeting cancer-related proteins (223 vs. 122 drugs, Fig. 5). The presented approach provides a twofold increase in testable drugs and compounds for anticancer therapy (Fig. 5a). Nonetheless, not all first neighbours can be used as drug targets, and not all drugs targeting a first neighbour will be efficient anticancer agents, due to the complexity of the signalling network, the biochemical properties of the targets, and the highly central role of some of the first neighbours. Thus, proper selection is needed, for which we showed two complementary approaches to select the most suitable first neighbours: (1) mimicking the strategy employed by carcinogenesis and selecting those (non-cancer-related) proteins that directly influence first neighbours of differentially expressed proteins that have too central a position to be targeted directly, and (2) finding existing drugs and compounds targeting first neighbours in a drug repurposing setting. The two proposed strategies require further experimental analysis in a context- (cancer-) specific manner, due to the biological complexity of both cancerous processes and signalling networks.
In our study, we examined four different types of solid cancers and employed five different and independent network resources to show a general phenomenon. The above conclusions hold across the datasets and annotations we have used. Nevertheless, the presented study has its limitations. In particular, the definition of cancer-related proteins in this work covered mutated and differentially expressed proteins, while not taking into account copy number variation and epigenetic (e.g., methylation) data. We used the widely accepted CGC as a collection of cancer-type-specific cancer-causing mutations; however, the CGC does not contain mutations that could contribute to cancer progression without causing cancer directly. 68 The expression data used to define differentially expressed proteins in this work were generated by microarray studies, and thus recently produced RNA-seq and protein chip datasets were not considered. This may have introduced a methodological bias. However, we think this bias would be minor, since the number of microarrays we employed was rather large, comprising a total of 1558 arrays. To validate our external-dataset-based classification process, with an extensive literature search checking 1820 papers, we analysed 82 proteins classified as first neighbours of cancer-related proteins in colon cancer, and found only 7 proteins (8.5%) that could have been classified as cancer-related if their properties had been listed correctly in the applied mutation and expression datasets (Supplementary Notes). Therefore, we consider the applied datasets well curated and their coverage appropriate for such a systems-level analysis.
Our study focused on cancer-type-specific carcinogenic alterations and, due to the lack of a sufficient amount of data, we could not take cancer heterogeneity (i.e., cell-cell differences within a cancer type) into account. As for the network annotations we used, we only considered PPI and signalling interactions, and not regulatory connections via transcription factors and microRNAs. Although these data are also available in the SignaLink 2 database, 25 they are lacking in other network resources, thereby making the comparison of results rather difficult. The available cancer-specific expression datasets for miRNAs and data on active transcriptional processes are also limited. Although the tumour microenvironment and inter-cellular communication between different cancer cells and other cell types, such as fibroblasts, are important in carcinogenesis, most of the available molecular interaction data are intracellular. For these reasons, in this work we focused on the PPI and signalling level within a cancer cell, as here we have substantially more data from different sources available, thereby allowing us to obtain conclusions that appear to hold more generally (and that are independent of any one particular annotation).
That cancer-related proteins share significant characteristics from the network perspective was originally found by earlier studies, [10][11][12][13] and most of those findings could be reproduced here. The only exception, contrary to previous studies, 69,70 is that here cancer-related proteins did not form a connected graph. There are two explanations for this apparent discrepancy: on the one hand, we applied a much stricter definition to select cancer-related proteins, and on the other hand, all of the networks we used were tissue-specific, which was not the case in previous studies. As a supporting example of the need for tissue-specific studies, first neighbours in glioblastoma were previously found to act as linkers of the network, allowing cancer-related proteins to affect more biological processes. 71 We extended this observation to four other solid cancer types. The use of interaction data to extend the set of disease-related genes was already applied successfully in previous studies. 14,15 A recent analysis focusing on the network modules of disease-related genes (diseasomes) identified potential disease genes by using various network data. 69 Compared to this diseasome study, 69 our approach focused only on the direct physical interactors of cancer-related proteins, and did not extend the scope based on regulatory connections and the module structure of the network. By selecting specific interaction data and simpler measurements of the network, the presented study straightforwardly points out cancer-specific key proteins not listed before based solely on mutation and differential expression screens.
In the field of graph theory, it is known that randomly selecting nodes in a graph and then looking for their first neighbours results in identifying high-degree and high-betweenness nodes. 72 Accordingly, we found the same in our study by selecting cancer-related proteins and checking the network parameters of their first neighbours (Supplementary Table 5). Interestingly, most of these first neighbours were also found in a randomization test in which we randomly selected a set of proteins and looked for their first neighbours (Supplementary Fig. 3). Therefore, classifying a protein as a first neighbour is largely independent of the original list of cancer-related proteins, and depends mostly on the network topology. Surprisingly, this phenomenon has never before been analysed and presented as a systems-level feature of carcinogenesis. Thus, in the current study we connected a general graph-theory phenomenon with actual cancer- and drug discovery-related problems.
In conclusion, with five different networks and in four cancer types we have shown that first neighbour proteins are at least as central locally and globally (i.e., have similar degree and betweenness) as the cancer-related proteins themselves. While mutated proteins in central positions may have a more direct effect on the cellular network, differentially expressed proteins, which mostly localize to less central positions but often next to a major protein, appear to impact the network more extensively via these first neighbours. This observation opens up new strategies for target selection, and hence for anticancer drug discovery.
METHODS
The microarray datasets we used are listed in Supplementary Table 11. We renormalized the chip reads with the Robust Multi-array Average method. 74 All chips were normalized to each other. After that, the probe sets were matched to UniProt ACs, using the probe set showing the highest level of expression in case more than one mapped to the same ID. We used these expression values to determine the expressed genes for each network in each normal tissue and each cancer type.
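The probe-to-protein collapsing step can be illustrated with pandas (column names here are our own, not those of the original pipeline):

```python
# Sketch: keep, for each UniProt AC, the probe set with the highest expression.
import pandas as pd

def collapse_probes(expression: pd.DataFrame) -> pd.Series:
    """expression: columns ['probe_set', 'uniprot_ac', 'expression'],
    already RMA-normalized; returns one expression value per UniProt AC."""
    best_rows = expression.loc[expression.groupby("uniprot_ac")["expression"].idxmax()]
    return best_rows.set_index("uniprot_ac")["expression"]
```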
Network resources
We used three signalling network resources for the analysis: the SignaLink 2 signalling network resource, 25 the Reactome database, 26 and the signalling network from the study by Cui et al. 2007. 10 In addition, we used two PPI networks: the manually curated HPRD database, 27 and a combined network of a more diverse and more up-to-date set of resources: IntAct, 29 DIP, 28 and BioGRID. 30 For SignaLink 2, where predicted and integrated information is also available, we used only the manually curated pathway data, which is fully independent from the other sources. All databases were downloaded on 27 January 2015. We mapped the protein identifiers to UniProt ACs with the UniProt mapping service, 73 which allowed the merging of the latter three resources. We used only reviewed UniProt ACs (SwissProt). If there were multiple SwissProt ACs for one protein, we kept them all. Proteins that were not mapped to such an accession were discarded. We did not use other big integrated PPI resources, such as STRING, as most of the resources we used here are present in STRING as well. 75 Thus, analysing the different sources separately provides more information on the different origins of the data.
Tissue and cancer-specific networks
Using the collected expression data and the different signalling and PPI databases, we constructed the tissue-specific healthy and cancerous signalling and PPI networks. After calculating the mean and standard deviation of expression per cancer, we considered proteins not expressed if their mRNA expression levels were below the mean minus one standard deviation in the given network. We considered a protein differentially expressed in a given tissue if the corresponding mRNA was present only in normal tissue and absent in cancerous tissue, or the other way around. Thus, simple overexpression of otherwise normally expressed genes was not counted, in order to select only the outstanding expression differences. In this way, our analysis was more sensitive to genes having lower expression. Interactions between proteins were considered if both interacting proteins were present in either healthy or cancerous tissue. In this way we obtained a tissue-specific network in which the network effect of cancer-related differentially expressed proteins can be seen.
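A minimal sketch of this construction (threshold at mean minus one standard deviation, keep an edge if both partners are expressed in either tissue) could look like this; the data structures are illustrative:

```python
import numpy as np

def expressed_set(expression):
    """expression: dict protein -> mRNA expression value in one tissue."""
    values = np.array(list(expression.values()), dtype=float)
    threshold = values.mean() - values.std()
    return {p for p, v in expression.items() if v >= threshold}

def tissue_specific_edges(edges, normal_expr, cancer_expr):
    """Keep an interaction if both partners are expressed in normal or cancer tissue."""
    expressed = expressed_set(normal_expr) | expressed_set(cancer_expr)
    return [(a, b) for a, b in edges if a in expressed and b in expressed]

def differentially_expressed(normal_expr, cancer_expr):
    """'On' in exactly one of the two tissues (simple overexpression is not counted)."""
    normal_on, cancer_on = expressed_set(normal_expr), expressed_set(cancer_expr)
    return (normal_on - cancer_on) | (cancer_on - normal_on)
```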
First neighbour proteins
First neighbours of cancer-related proteins were defined as proteins (1) directly and physically interacting with cancer-related proteins (according to the network databases used, see below), and (2) which were not cancer-related proteins themselves. Thus, if a cancer-related protein was also a first neighbour of another cancer-related protein, it was considered only as a cancer-related protein, to avoid overlap and to allow the analysis of clear first neighbours that have not been related to the given cancer before.
Gene ontology analysis
The aim of the gene ontology (GO) analysis was to determine whether the cancer-related proteins could reach more processes with their first neighbours than expected. Gene Ontology information (gene_association.goa_human) was downloaded from the Gene Ontology website (on 2 June 2015). Only biological processes (BP) were considered. The cancer-related proteins were annotated first, then the first neighbours, which received only those GO BPs that had not already been annotated to a cancer-related protein. In this way we could focus on the functions added by the first neighbour proteins. After that, we measured with a binomial test whether first neighbours have more GO BPs than expected based on their ratio in the network. This approach is stricter than considering all GO BPs annotated to both cancer-related and first neighbour proteins.
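The counting logic can be sketched as follows (the GAF parsing itself is omitted; the protein-to-BP mapping is assumed to be available as a dictionary):

```python
# Sketch: first neighbours are credited only with GO BPs not already covered by
# cancer-related proteins; a binomial test then asks whether the added share is
# larger than the first-neighbour fraction of the network would suggest.
from scipy.stats import binomtest   # SciPy >= 1.7; older versions use binom_test

def added_bps(go_bp, cancer_related, first_neighbours):
    """go_bp: dict protein -> set of GO biological-process terms."""
    cancer_terms = set().union(*(go_bp.get(p, set()) for p in cancer_related))
    neighbour_terms = set().union(*(go_bp.get(p, set()) for p in first_neighbours))
    return neighbour_terms - cancer_terms

def enrichment_pvalue(n_added, n_total_terms, neighbour_fraction):
    return binomtest(n_added, n_total_terms, neighbour_fraction,
                     alternative="greater").pvalue
```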
Giant component analysis
We used exact statistics to determine the giant component percentage. We permuted the node annotations 1000 times and calculated the giant component ratio each time. After that, we used a Z-score-based outlier test to determine the significance of the observed value.
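In python-igraph terms, this test can be sketched roughly as below (assuming the graph carries protein names in a 'name' vertex attribute):

```python
import random
import numpy as np
from igraph import Graph

def giant_ratio(g: Graph, members):
    """Fraction of the induced subgraph's nodes that lie in its largest component."""
    sub = g.induced_subgraph([v.index for v in g.vs if v["name"] in members])
    return 0.0 if sub.vcount() == 0 else sub.components().giant().vcount() / sub.vcount()

def giant_component_z(g: Graph, members, n_perm=1000, seed=0):
    """Z-score of the observed giant-component ratio against random sets of equal size."""
    rng = random.Random(seed)
    names, k = g.vs["name"], len(members)
    observed = giant_ratio(g, members)
    null = [giant_ratio(g, set(rng.sample(names, k))) for _ in range(n_perm)]
    return observed, (observed - np.mean(null)) / np.std(null)
```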
Network topology parameters
We measured three major network parameters for each node (protein): (1) degree, which is the number of its interactions; (2) betweenness centrality, another importance measure, which is equal to the number of shortest paths from all nodes to all others that pass through the node of interest; and (3) clustering coefficient, which measures the extent to which the neighbours of the node of interest are also connected to each other (form a cluster). We used the igraph Python package 76 to calculate the parameters for each network. Edges in all networks are represented without weights, and self-loops have been removed. Though some of the signalling networks contain directions for interactions, due to the lack of comparability across resources we measured the network parameters without taking direction into account. We also measured the parameters in the non-tissue-specific, original networks.
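A short python-igraph sketch of these measurements (the edge list is assumed to consist of simple protein-name pairs):

```python
from igraph import Graph

def topology_parameters(edges):
    """edges: iterable of (protein_a, protein_b) tuples."""
    g = Graph.TupleList(edges, directed=False)
    g.simplify(multiple=True, loops=True)          # unweighted, self-loops removed
    names = g.vs["name"]
    degree = dict(zip(names, g.degree()))
    betweenness = dict(zip(names, g.betweenness()))
    clustering = dict(zip(names, g.transitivity_local_undirected(mode="zero")))
    return degree, betweenness, clustering
```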
Randomization analysis
We investigated whether the central role of first neighbour proteins remains if proteins are randomly selected for the analysis. One hundred times, in the cancer-type-specific SignaLink networks, we randomly selected the same number of proteins as the original set of cancer-related proteins contained. Then, for all 100 cases in each cancer type, we listed the first neighbour interactors of the randomly selected proteins. Finally, we measured how often each protein in the network occurred as a first neighbour and compared this list with the previously identified, "real" list of first neighbours.
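A compact sketch of this randomization (the neighbour map here is a plain dictionary; this is not the original code):

```python
import random
from collections import Counter

def random_first_neighbour_counts(neighbours, all_proteins, n_seed_proteins,
                                  n_runs=100, seed=0):
    """neighbours: dict protein -> set of its direct interactors."""
    rng = random.Random(seed)
    counts = Counter()
    population = sorted(all_proteins)
    for _ in range(n_runs):
        seeds = set(rng.sample(population, n_seed_proteins))
        first_neighbours = set().union(*(neighbours.get(p, set()) for p in seeds)) - seeds
        counts.update(first_neighbours)
    return counts
```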
Drug and compound analysis
Data on drugs and compounds were downloaded from the ChEMBL database, 77 version 20. We considered a compound a drug if it had been subjected to Phase 4 clinical tests according to ChEMBL. We used those compounds that target proteins in the SignaLink database. We implemented a relatively strict bioactivity cut-off of 500 nM (IC50, Ki, Kd) during our filtering process to identify these compounds, which indicates good activity with a strong potential for therapeutic applications. Importantly, these compounds could provide opportunities for structure/scaffold similarity studies to identify recurring sub-structures, knowledge that could be utilized in structure/fragment-based drug design studies. Drug indications were classified according to the ATC database. 61
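Assuming the ChEMBL bioactivities have been exported to a flat table, the filtering step can be sketched as below; the column names are illustrative rather than ChEMBL's exact schema:

```python
import pandas as pd

def filter_compounds(bioactivities: pd.DataFrame, target_proteins, cutoff_nm=500.0):
    """bioactivities: columns 'target_uniprot', 'molecule_id', 'standard_type',
    'standard_value' (nM) and 'max_phase' (illustrative names)."""
    hits = bioactivities[
        bioactivities["target_uniprot"].isin(target_proteins)
        & bioactivities["standard_type"].isin(["IC50", "Ki", "Kd"])
        & (bioactivities["standard_value"] <= cutoff_nm)
    ]
    compounds = set(hits["molecule_id"])
    drugs = set(hits.loc[hits["max_phase"] == 4, "molecule_id"])  # Phase 4 = approved drug
    return compounds, drugs
```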
Literature mining methods
We used Biopython 78 to query PubMed and download the abstracts of the articles. We searched for the drug or compound name (as used in ChEMBL) plus "cancer" and "treatment" as further keywords. We limited our PubMed search to clinical trials.
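A minimal sketch of such a query with Biopython's Entrez module is given below; the exact query string and the clinical-trial filter are our reconstruction of the search described above, and the e-mail address is a placeholder:

```python
from Bio import Entrez

Entrez.email = "your.name@example.org"   # required by NCBI; placeholder

def pubmed_clinical_trials(drug_name, retmax=100):
    term = f'"{drug_name}" AND cancer AND treatment AND Clinical Trial[Publication Type]'
    handle = Entrez.esearch(db="pubmed", term=term, retmax=retmax)
    record = Entrez.read(handle)
    handle.close()
    if not record["IdList"]:
        return ""
    handle = Entrez.efetch(db="pubmed", id=",".join(record["IdList"]),
                           rettype="abstract", retmode="text")
    abstracts = handle.read()
    handle.close()
    return abstracts
```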
Statistics
Wilcoxon rank-sum tests and Kolmogorov-Smirnov tests were conducted for nonparametric hypothesis testing. Throughout the text, only Wilcoxon rank-sum test results are presented, as the Kolmogorov-Smirnov test results appeared to be concordant. The results of all statistical tests can be found in the supplementary tables. The whole network was used as control, except where noted. We used binomial tests to compare to a given percentage; unless mentioned otherwise, the compared ratio is that of first neighbour proteins to all proteins. We used the NumPy and SciPy packages for Python for statistical analysis. 79
Other programs
For creating the network figures, version 3.1 of Cytoscape 80 was used. Violin plots were made with the vioplot R package, 81 while supplementary boxplots were constructed with the help of the matplotlib Python package. 82 | 2017-04-27T10:43:15.886Z | 2017-01-24T00:00:00.000 | {
"year": 2017,
"sha1": "fde0b534b903d84311e3dcde582c4ebffcb0ef43",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41540-017-0003-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b015dbd932f9a8dc0530af9efe3b7473d1e120bb",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Carbon-poor stellar cores as supernova progenitors
Exploring stellar models which ignite carbon off-center (in the mass range of about 1.05 - 1.25 Msun, depending on the carbon mass fraction), we find that they may present an interesting SN I progenitor scenario: whereas in the standard scenario runaway always takes place at the same density of about 2 × 10^9 gr/cm^3, in our case, due to the small amount of carbon ignited, we get a whole range of densities from 1 × 10^9 up to 6 × 10^9 gr/cm^3. These results could contribute to addressing the emerging recognition that at least some diversity among SNe I exists, since runaway at various central densities is expected to yield various outcomes in terms of the velocities and composition of the ejecta, which should be modeled and compared to observations.
convective region. If the size of the convective region reaches a certain value, further increase in entropy leads to expansion of the center (a decrease of the density at the center - hereafter "DEC"), in opposition to the accretion, which tends to lead to contraction.
Thus, between IG and DEC the density at the center increases, and after DEC it decreases. The temperature nevertheless continues to increase due to the nuclear reactions, and with it the nuclear reaction rate and the convective flux also increase. When the reaction rate reaches a point where convection can no longer compete with the entropy production rate, nuclear burning continues at almost constant density and can reach a "dynamic" regime, where the nuclear time (defined as the time needed to exhaust all the fuel, including oxygen, at constant density) is shorter than the dynamic time (which can be defined, for example, as a pressure scale height divided by the speed of sound). We will refer to this situation as a "runaway" (RA), and it is clear that under these circumstances hydrostatic equilibrium can no longer be assumed. It is worth noting that the runaway does not necessarily lead to an explosion, since electron capture behind the explosion front could produce a rarefaction wave which might convert the explosion into a collapse. Figure 1 shows the evolution of the density and temperature at the center of a carbon-oxygen star model of mass M = 1.18 M☉ and carbon mass fraction X_c = 0.05, displaying also the relevant carbon ignition line and the points IG, DEC and RA mentioned above. Quantitatively speaking, it is clear that the IG point depends on the carbon mass fraction (X_c), and will be located at a higher density for a lower carbon mass fraction.
It also depends on the reaction rate, including the screening factor, and on the neutrino loss rate, but the sensitivity to these parameters is weaker. The DEC point is also dependent on X_c. For a low enough X_c the amount of carbon is insufficient to raise the temperature to the RA regime, or even to the DEC point.
If we take into account the Q-value of carbon burning (≈ 4 × 10^17 erg/gr) and the specific heat in the region under discussion ((∂T/∂e)_ρ ≈ 10^-7 K/erg), we can estimate that the temperature will rise by about 4 × 10^8 K for a mass fraction of one percent carbon (X_c = 0.01). Since for RA the threshold temperature of oxygen ignition should be reached, i.e. about 1.4 × 10^9 K is needed, it is clear that if IG is reached at T_c ≈ 4 × 10^8 K, we need a carbon mass fraction of about 0.025 to reach RA. Clearly this is a rough estimate, since part of the energy produced by the burning of carbon is transported away, while on the other hand convection supplies fresh fuel. As we shall see later on, this estimate turns out to be quite good.
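Written out, the arithmetic behind this estimate (using only the numbers quoted above) is

\[
\Delta T \approx X_c\,Q_{\rm C}\left(\frac{\partial T}{\partial e}\right)_{\!\rho}
\approx 0.01 \times 4\times 10^{17}\ {\rm erg\,gr^{-1}} \times 10^{-7}\ {\rm K\,gr\,erg^{-1}}
\approx 4\times 10^{8}\ {\rm K},
\]
\[
X_c^{\rm RA} \approx \frac{T_{\rm RA}-T_{\rm IG}}{Q_{\rm C}\,(\partial T/\partial e)_\rho}
\approx \frac{(1.4\times 10^{9}-4\times 10^{8})\ {\rm K}}{4\times 10^{10}\ {\rm K}} \approx 0.025 .
\]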
To the above we should add the role which electron capture processes might play during the evolution. A detailed discussion will be given in section 2.3.3; here we shall mention that the relevant processes include: 1. Electron capture on Mg24, which is a product of carbon burning and is quite abundant, and on Na24, which causes a decrease of Z/A, and thus a decrease of the effective Chandrasekhar mass, leading to accelerated compression and local heating.
In this work we also checked the influence of these processes.
Regarding the astrophysical scenario, it is usual to deal with white dwarves that are the remnants of planetary nebula (PN) formation. This is a situation in which the growth of the carbon-oxygen core of the star causes a luminosity increase leading to envelope instability and ultimately to its ejection.
The typical mass of such white dwarves is about 0.6 M☉, although more massive ones do exist. According to Liebert et al. 2005, some 6% of white dwarves have masses above 1 M☉. Regarding their composition, the carbon mass fraction generally lies within the range 0.25 ≤ X_c ≤ 0.55, with some remaining uncertainty.
As mentioned before, the key point of the standard scenario is mass accretion from a binary companion. Clearly, the accretion rate depends on the structure and evolutionary history of the binary couple. Theoretical surveys have been made in the literature, where the accretion rate, as well as the initial mass of the accreting white dwarf and the composition of the accreted matter served as free parameters (e.g. Nomoto & Sugimoto 1977).
In this work, we refer to the WD mass range where the original carbon core is big enough to ignite carbon before it grows towards M_ch, but not too big, so that ignition takes place off-center. Alas, investigating the required mass range is especially difficult, due to the need for extremely fine zoning in order to follow the burning shell and the convective region created above it. In section 3.2 we discuss this subject in detail.
In the following, we shall begin by first describing the computational methodology of our work, including the numerical algorithms and input physics in chapter 2. Our results are given in chapter 3. Finally, in chapter 4 we will discuss our results and present our conclusions, including observational predictions and suggestions for further research.
The initial models
For the sake of simplicity, rather than following the complicated evolution of the primary star in order to obtain the remaining carbon-oxygen white dwarf, we began our calculation with a carbon-oxygen star of appropriate mass and composition, with a low enough central density, in hydrostatic equilibrium, letting it contract along the well known knee-shaped path in the central density versus central temperature diagram. This approach, which has been used by many authors since, was justified by Barkat 1971, who stated that at each point along the evolutionary track of a carbon-oxygen core, growing as a result of a burning shell, the central conditions (density and temperature) are very close to the evolutionary track of a carbon-oxygen star of corresponding mass.
Stellar evolution code
The evolution was followed using a quasi-static Lagrangian 1D evolution code named ASTRA, which is an extensively improved version of the ASTRA evolution code first described by Rakavy, Shaviv & Zinamon 1966, and used with some modifications many times since. The quasi-static assumption (meaning that the star is dynamically stable and that the time scales of the three main processes that govern the evolution of the star, namely the hydrodynamic motion, the convective mixing, and the exchange of energy due to thermonuclear reactions and radiative transport, obey the relation t_hydro << t_convec << t_thermo) is valid for most stages of stellar evolution, with the possible exception of very violent nuclear burning phases, where special care has to be taken. Since these stages are indeed within the scope of our interest, a special treatment of them was devised, which will be described in section 2.4.
Convective regions are treated as isentropic and fully mixed. This assumption greatly simplifies the program, as it eliminates the need to calculate the convective energy flux, and is valid (i.e. is in good agreement with the mixing length model of convection) wherever the pressure scale height is large compared to the size of the convective region of interest. In the stellar models of our interest, this kind of condition occurs throughout the stellar core.
Nuclear reaction rates
Nuclear reactions were treated via two different sets of reaction rates, which were then compared to each other:
Electron capture
Three different processes of electron capture are considered:
Thermal Urca (TU)
Thermal Urca is a situation where at some Lagrangian mass point in the star (m urc ) the Fermi energy fulfills E f = E th , so that throughout a mass shell (henceforth "Urca shell" -US), lying in the range of E f = E th ± 1 kT, processes of emission and capture of electrons by some trace nuclei take place. Both processes are accompanied by production of neutrinos, which leave the star and create an effective local heat sink.
Since the star continues to shrink and E_f increases, it is clear that m_urc increases and the US moves outward. The magnitude of the process depends on the mass fraction of suitable nuclei (Ergma & Paczynski 1974). In our context, the dominant nuclei are Na23, with a threshold of E_th = 4.4 MeV and consequently ρ_th ≈ 1.7 × 10^9 gr/cm^3, and afterwards Ne21, with a threshold of E_th = 5.7 MeV and thus ρ_th ≈ 3.5 × 10^9 gr/cm^3.
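As a rough consistency check, these threshold densities follow from the Fermi energy of a degenerate, relativistic electron gas; assuming Y_e ≈ 0.5 and neglecting the electron rest mass, one has approximately

\[
E_F \simeq \hbar c \,(3\pi^2 n_e)^{1/3}, \qquad n_e = \frac{\rho\,Y_e}{m_u},
\]

so that E_F = 4.4 MeV corresponds to ρ of order 10^9 gr/cm^3, consistent with the value quoted above.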
We modeled the effect of the TU by means of the prescription by Tsuruta & Cameron 1970, while checking the sensitivity of the result to the reaction rate by varying the mass fraction of the relevant nuclei.
Convective Urca (CU)
In this case convection transfers nuclei through the US. These nuclei pass through a region where the difference E_f - E_th is below about 1 MeV in order of magnitude, and certainly ∆E >> kT. Since in this case the average electron capture occurs below the Fermi level, a hole is created, which is filled by an electron from above the Fermi level, and the energy surplus is emitted as a γ photon causing local heating. In our context the dominant nucleus* is Mg24, with a threshold of E_th = 5.52 MeV and consequently ρ_th ≈ 3.35 × 10^9 gr/cm^3, and afterwards Na24, which is the electron capture product of Mg24, with a threshold of E_th = 6.59 MeV and thus ρ_th ≈ 5.25 × 10^9 gr/cm^3.
Note that lately (Gutierrez et al. 2005) there have been claims that Na23 is much more abundant than previously thought, and is even more abundant than Mg24. Since the threshold for EC on Na23 is lower than on Mg24, it should have been considered; however, Gutierrez et al. 2005 have shown that the effect on the evolution is small.

* Although the mass fraction of Ne20 in the burning products of carbon is high, we did not need to include EC on it, since the threshold in its case is high and corresponds to ρ > 6 × 10^9 gr/cm^3, which is above the limit of our interest.
In order to calculate the effect of EC, it is necessary to know the mass fraction of the relevant nuclei, which is a result of carbon burning and the capture rate, as well as the neutrino loss rate and the resulting heating rate.
As will be evident from our results, this process has an important effect in our cases; therefore it was included as a standard in all our models, except when explicitly mentioned otherwise. Due to the existing uncertainties (cf. Gutierrez et al. 2005), in line with our basic approach, we checked the sensitivity of the results both to the mass fraction and to the capture rate by introducing "fudge factors".
We took the capture rates from Miyaji et al. 1980, but the range of variations we checked covers also the differences versus the rates given by Oda et al. 1994, which differ from the former by as much as an order of magnitude.
Mochkovitch 1984 pointed out that in the presence of EC, due to its effect on the gradient of the electron mole number Y e , the Ledoux criterion for convection has to be used, and hence the extension of the convective region is slower. In our case it turned out, that the effect of using the Ledoux criterion is quite small.
Another important caveat (Stein 2005): since it is clear that EC must start as TU (which is locally cooling), and can become exothermic only when occurring below the threshold by more than kT, the question arises whether a convective zone broad enough to ensure heating can form at all. In our case, where EC starts when significant carbon burning is already present, this problem may not be severe. However, a more careful analysis is needed to find out whether the local cooling might force the base of convection to move outward.
Equation of state
The equation of state takes into account ionizations to all available levels of the different atom species in the composition. The distribution function of the various ionization levels is computed using a method similar to the one described in Kovetz & Shaviv 1994. The resulting electron density is then used together with the temperature in order to extract the pressure, energy, chemical potential and their derivatives respective to the electron density and temperature from a table computed in advance by solving the Fermi-Dirac integrals. The pressure, energy and entropy of the ions are then added as an ideal gas together with those of the radiation.
Treatment of deviations from the quasi-static assumption
As previously mentioned, our evolution code assumes that the time scales of the three main processes that govern the evolution of the star, namely the hydrodynamic motion, the convective mixing, and the exchange of energy due to thermonuclear reactions and radiative transport, obey the relation t_hydro << t_convec << t_thermo. However, as we shall see later, as the star approaches explosion this relation between the timescales is no longer valid. As the reaction timescale t_thermo becomes shorter, it first becomes comparable to the convective timescale t_convec, so in order to achieve a reasonable modeling we have to appropriately restrict convection when this occurs. Hence, it is important to estimate these two timescales, and to give an adequate treatment when the quasi-static assumption fails. In practice, as a very crude model, we restrict the outer boundary of the convective zone (the inner boundary in our case is always at the center) to lie not above the innermost Lagrangian zone which fulfills t_convec(r) > α t_thermo, where r is the radius of the zone and α is a "fudge factor" we use to check the sensitivity of the results. Note that 1/α is in fact the number of convective turnover times during an interval of one thermonuclear timescale. The method for calculating the timescales t_thermo and t_convec is given below.
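A schematic illustration of this truncation rule is sketched below in Python; the timescale functions and the zone data structure are placeholders, not the actual ASTRA routines.

    def restrict_convective_boundary(zones, alpha, t_convec, t_thermo):
        """Return the index of the outermost zone allowed to remain convective.

        zones    : list of Lagrangian zone radii, ordered from the center outward
        alpha    : "fudge factor"; 1/alpha is the number of convective turnovers
                   allowed per thermonuclear timescale
        t_convec : function r -> convective crossing time to radius r
        t_thermo : thermonuclear (fuel exhaustion) timescale at the current state
        """
        outer = 0
        for i, r in enumerate(zones):
            if t_convec(r) > alpha * t_thermo:
                break          # this zone already violates the criterion
            outer = i          # convection may extend up to this zone
        return outer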
The convective timescale
The widely used mixing length theory gives an explicit relation between the convective luminosity and the convective velocity: Here λ is the mixing length, c p = (∂e/∂T) p is the specific heat at constant pressure, and L c is the convective flux through a unit of area.
We do not use the mixing length theory, but rather assume an isentropic convective region. Nevertheless, we can estimate the convective luminosity as follows. Let L_c be the convective luminosity and L_r the radiative luminosity. At any point we have: However, we assume that ∂s/∂t is uniform throughout the convective region, thus: But ∆L_c = 0 since the convective flux vanishes at the boundaries of the convective region, so we have: Together with (2-2) we have: Consequently the time to reach a distance r from the inner boundary of the convective region will be: The "convective time scale" will be defined as the time τ(r) to the outer boundary of the convective region, i.e. the time needed to cross the entire length of the region.
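Assuming a convective velocity profile v_c(r) obtained from the convective luminosity as described above, the crossing time presumably takes the form

\[
\tau(r) \approx \int_{0}^{r}\frac{dr'}{v_c(r')},
\]

with the convective timescale being τ evaluated at the outer boundary of the convective region.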
It is clear that this is only a rough estimate; nevertheless a comparison with models of convective envelopes calculated for the same conditions with the mixing length theory using the method described by Tuchman et al. 1978 gave an agreement better than a factor of 3.
The nuclear timescale
As the nuclear timescale we define the time needed to burn all the carbon and afterwards the oxygen at a given point (with initial temperature T, density ρ and carbon mass fraction X c ), under the assumption of constant density. Clearly this is a lower limit to the time, since in reality there are also energy losses.
To get a quantitative estimate we used the analytical approximation of Woosley & Weaver 1986 for the reaction rates of carbon and oxygen burning in the relevant range (i.e. 2 < T_9 < 6, ρ_9 < 4): Given an initial temperature T_1, we can estimate the final temperature T_2 of carbon burnout, since: After we get the time t_1→2 for carbon burnout and the final temperature T_2, we use T_2 as the initial temperature of oxygen burning, and through the same method we can get the time t_2→3 of oxygen burnout.
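The estimate of T_2 presumably follows from energy conservation at constant density, in the same spirit as the rough estimate given earlier for the temperature rise per unit of burnt carbon:

\[
T_2 \approx T_1 + X_c\,Q_{\rm C}\left(\frac{\partial T}{\partial e}\right)_{\!\rho},
\qquad Q_{\rm C}\approx 4\times10^{17}\ {\rm erg\,gr^{-1}} .
\]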
Note that we do not take into account the fact, that X c is also a function of time, but it turns out to have a small effect. In fact, in our calculations we checked the validity of these estimates by artificially preventing the density from changing, and actually measuring these times. We found excellent agreement.
Estimating the timescale of oxygen burning is important, since in case the carbon mass fraction is very low, it might burn out without igniting the oxygen.
Overview
The evolution of carbon-oxygen stars is determined by their mass (M_c) and composition. We assume the composition of the star is homogeneous, as a result of the helium burning that produced it being convective. Therefore it is sufficient to specify the mass fractions of the relevant elements, C12 and O16, whereas the mass fractions of Ne and Mg are negligible. These mass fractions are determined at the end of helium burning, for which there is a well known uncertainty due to uncertainties in the cross-section of the C(α,γ)O reaction, which takes place during helium burning. It is common to assume that the carbon mass fraction lies in the range 0.25 ≤ X_c ≤ 0.55, with the oxygen mass fraction its complement to unity (Umeda et al. 1998).
We can identify four fundamentally different mass ranges. Above Chandrasekhar's mass (M_3 = M_ch ≈ 1.4 M☉) carbon is ignited at the center, and the stellar center evolves toward increasing temperatures and densities, igniting heavier fuels.
Below M_3, but above a certain limit M_2, carbon is ignited at the center, but no heavier fuels are ignited, and evolution finally proceeds towards a white dwarf. The value of M_2 depends on the carbon mass fraction; for our example of X_c = 0.54 it lies around

Below M_2, but above a lower limit M_1, carbon is ignited off-center, at a point depending both on the mass of the star and on the carbon mass fraction (see Figure 3 and Figure 4). Subsequently, carbon burning propagates both inwards toward the center and outwards, and after almost all the carbon in the core is exhausted, evolution returns to its original path towards a white dwarf.
As we already suggested in chapter 1, and will show in detail later, these stars are the focus of our interest, since the off-center burning might leave behind enough carbon to "survive" the subsequent evolutionary phases and finally ignite explosively after the mass reaches M_ch through accretion. This evolution is illustrated in Figure 2. We can see that at the point of ignition the density and temperature at the center decrease, as a result of the expansion induced by the carbon burning shell above. As we will explain in detail in section 3.2.3, this shell extinguishes, subsequently causing the center to contract and heat again, but soon another off-center burning shell is ignited, causing the center to expand and cool a second time. Finally the burning reaches the center, causing it to rise to the carbon burning temperature with almost no change in density. After the burning ceases, the center returns to its original path, as if no carbon burning had taken place.
Off-center carbon burning
We will limit our interest to the range M 1 ≤ M c ≤ M 3 . In order to point out the major points of interest in this range, we will first describe a specific example.
We will take as an example a model with mass M = 1.17 M☉ and a homogeneous mass fraction profile, i.e. X_c(m) = X_c(0) = 0.25. We will investigate the evolution by looking at various physical quantities. Figure 5 presents a Kippenhahn diagram, which shows the history of the convective regions in the star. As can be seen, the star goes through various evolutionary stages, each one presenting a burning shell topped by a convective region. We will describe the three burning stages shown there by following the evolution of the carbon mass fraction profile.
Stage I
The evolution of the carbon mass fraction is shown in Figure 6, and we can see that in this case carbon ignites off-center, at m ≈ 0.145 M☉. The nuclear reaction rate (q_n) grows rapidly, and when it exceeds the energy loss rate by neutrino emission (q_ν), a gradually growing convective burning zone develops, while the carbon mass fraction X_c gradually decreases. At a certain stage (line 4 in Figure 6) the convective region begins to shrink, with both its inner and outer boundaries retreating. This evidently leaves behind a gradient in the carbon mass fraction X_c. It is notable that immediately below the inner boundary of the convective region lies a narrow radiative burning shell, which locally exhausts the carbon at a relatively higher rate. This shell penetrates slightly inward due to conduction. The decline of X_c, together with the expansion caused by the rise in entropy of the convective region, finally extinguishes q_n. It is important to realize that, due to q_ν, the nuclear burning q_n is extinguished even though X_c has not completely vanished. This is a major point that will have important repercussions in what follows.
Stage II
After the nuclear burning extinguishes, the star continues to contract, leading momentarily to ignition of carbon in the region of the X c gradient remaining at the base of the former convective burning zone. Again q n rapidly rises above q ν , and a burning zone penetrates inward forming a convective region above it. The evolution of the carbon mass fraction during this stage is shown in Figure 7.
The behavior of X_c in the convective region is complex. Although nuclear burning obviously decreases X_c, as the convective region grows, variations of X_c due to the incorporation of zones richer or poorer in carbon have to be taken into account. Indeed, at certain stages X_c in the convective region increases, while at other stages it decreases. This behavior varies from star to star, since it depends on the details of the preceding evolutionary stage. It repeats itself several times during subsequent evolutionary stages, and cannot be described in general terms, or as a phenomenon monotonically dependent on the stellar mass or the initial carbon mass fraction X_c(t=0).
Stage III
In the following stage, as a result of the contraction of the star, the carbon reignites, usually in a relatively carbon-rich zone, above the extent of the convective regions of the previous stages.
Also in this case a convective region develops, which grows up to a certain extent, and then the reactions extinguish again. At the same time a radiative burning front develops, which advances inward into the carbon-poor region along the composition gradient that has remained there due to the previous burning. We find (see Figure 9) that this flame gradually decays, and is only able to penetrate slightly. The evolution of the carbon mass fraction during this stage is shown in Figure 10. In some cases there are further stages in which carbon is ignited in carbon-rich outer areas (Figure 5). Eventually, further contraction does not result in carbon ignition, and the star evolves toward a white dwarf with a "frozen" carbon profile. Figure 11 displays this final profile for the main elements present in the star: carbon, oxygen, neon and magnesium. Figure 13 displays the carbon mass fraction profile at the end of carbon burning for various initial carbon mass fractions and a total mass of M_c = 1.22 M☉. We can see that the typical profile has a "bump" around the center, above it a region almost devoid of carbon, and in most cases a small carbon-rich zone near the outer boundary.

The widely accepted scenario for the onset of a type I supernova explosion is accretion of matter by a white dwarf; therefore in the following we will discuss the various possibilities of accretion onto the white dwarves resulting from the evolutionary scenarios described above.
Accretion
For the purpose of understanding the various possibilities, we used the standard technique of investigating the outcome of accretion as a function of the relevant parameters. In a realistic case the accreted matter could be hydrogen, helium, carbon or a mixture of these. In the case of hydrogen or helium accretion, these also ignite and form a burning shell advancing outward, leaving behind a newly made layer of carbon-oxygen. Thus the composition of the accreted matter influences the effective rate of carbon-oxygen accretion. Variations in the entropy of the accreted matter can also be represented by dictating a suitable effective accretion rate. As we will show, the carbon mass fraction in the accreted matter can be of importance, but even then the main parameter is the accretion rate Ṁ.
It is known (Nomoto & Sugimoto 1977, Nomoto et al. 1984) that the relevant Ṁ for a supernova has to be: This rate results from the mass-luminosity relation given originally by Paczynski 1970 for a double shell of hydrogen and helium burning: Translating this relation to Ṁ gives: Here X is the hydrogen mass fraction, and Q_v is the Q-value of hydrogen burning.
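For orientation only: the commonly cited form of Paczynski's core mass-luminosity relation, and the accretion rate needed to balance the corresponding shell-burning luminosity, can be written as (these coefficients are approximate literature values, not necessarily the exact expressions used here)

\[
L \simeq 5.9\times10^{4}\left(\frac{M_{\rm core}}{M_\odot}-0.52\right)L_\odot ,
\qquad
\dot{M} \simeq \frac{L}{X\,Q_v}\ \approx\ {\rm a\ few}\times10^{-7}\,M_\odot\,{\rm yr^{-1}},
\]

with Q_v ≈ 6 × 10^18 erg/gr for hydrogen burning.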
It is worth noting that in the case of helium burning, Ṁ is expected to be, and indeed is, higher by at least a factor of about 10, since Q_v is lower.
We decided to map the results for a wide range of Ṁ: insignificant, due to the "convergence feature" of the evolutionary paths. This argument is not to be confused with the fact that, when dealing with much smaller accretion rates, the initial mass is of much more importance.
In order to compare our results to the literature, we chose from among the many publications on this subject the extensive survey by Nomoto & Sugimoto 1977, followed by a study of the dependence on the accretion rate by . It was important for us to verify that the evolutionary paths we get are similar to the literature, and thanks to the uniqueness of this path (for given accretion and neutrino loss rates), as mentioned above, this is indeed the case. It is obvious that the onset of carbon burning also depends on the carbon mass fraction, and we are especially interested in the cases where this mass fraction is particularly low.

As mentioned, all the above results are for accretion of matter devoid of carbon. This choice has the following reason. It turns out that, as previously mentioned, in the case of carbon accretion at high Ṁ, above a certain threshold which depends on the mass fraction of carbon in the accreted matter, carbon ignition occurs in the accreted layer, and a burning front develops, advances inward, and, as already mentioned (section 3.2.2), might reach the center before explosion occurs. We did not follow the front in this case, but we mapped the combinations of accretion rates and carbon mass fractions which lead to such a case. Figure 15 demonstrates this for a carbon-oxygen model of mass 1.12 M☉. We can see that for an accretion rate of 7.5 × 10^-7 M☉/yr, carbon does not ignite in the accreted layer even if the carbon mass fraction in the accreted matter is as high as 0.5. For a higher accretion rate of 7.5 × 10^-6 M☉/yr, such ignition does not occur for a carbon mass fraction of 0.01, but does occur for 0.05 and above.
The influence of various parameters beyond ignition
Beyond ignition a convective region is formed, which goes on growing while supplying fuel to the nuclear flame, but at the same time also inducing expansion.
Clearly, convection will continue as long as the convective turn-over time-scale t convec is shorter than the fuel exhaustion time-scale t thermo . Similarly, it is clear that the transition to a dynamic regime, i.e. to a situation where assuming hydrostatic equilibrium is no longer valid, will occur when the dynamic time-scale t hydro (which is of order of magnitude of a characteristic length divided by the speed of sound) is no longer short compared to t thermo .
In section 2.4 we discuss the subject of timescales in detail, including the definition of these quantities and the treatment of the relevant scenarios. As we already mentioned there, our treatment implies comparing t_convec and t_thermo, and turning convection off where t_convec > α t_thermo, where α is a "fudge factor". Figure 16 shows the sensitivity to this factor α, and we can see that the smaller it is (i.e. the earlier convection is turned off), the earlier runaway occurs, i.e. at higher ρ_c and lower T_c, since at this stage convection causes expansion and thus hinders runaway.

Keeping in mind the existing uncertainties in the electron capture process, we checked the sensitivity of the results by modifying the mass fraction of Mg24, which in our standard models came out to be about 0.19. Figure 18 shows the results, and we can see that lowering the mass fraction of Mg24 has a small effect, causing earlier ignition and thus runaway at lower ρ_c.

Figure 19 shows the influence of the TU process for our model of 1.18 M☉, for various mass fractions of the Urca nucleus Na23. We get only a minor perturbation, which is manifested as a deviation of the ρ_c, T_c path to lower temperatures due to the local cooling. As contraction continues, the US moves outward together with the local heat sink. When the US is far enough from the center, the density and temperature at the center start rising again, and shortly the path coincides with that of the TU-less case.
Convective Urca (CU)
As we already mentioned, we modeled CU by artificially forbidding convection above the Urca shell, and examined its effect by varying the Fermi energy threshold at which the Urca shell is located.
This works in two opposite directions. On one hand the entropy and with it the temperature increase faster (which promotes approaching RA), but on the other hand there is less supply of fresh fuel, which can suppress burning, and might prevent the RA. Clearly, lack of fresh fuel supply might have cardinal importance in a case where the mass fraction of fuel is a-priori low.
We find that a feedback mechanism exists. When the star contracts, the US has to move farther away from the center (due to increase in E f ), thus the outer boundary of the convection moves outward as well, so that the effectiveness of limiting the extent of the convective region on diminishing the fuel supply is small in all the cases we checked. It is interesting to note, that when the star goes through an expansion phase, the situation is the opposite -the US approaches the center, and may limit the convective region. If this limiting is significant enough while the temperature is already high enough, it might induce an earlier runaway, which will thus occur at a higher density. We can see that the higher the threshold the earlier RA occurs, i.e. at a higher density and lower temperature. It is clear that this tendency is limited, since should we raise the threshold to the vicinity of 6.26 MeV, the Fermi energy throughout our model will be below the threshold, and we will effectively be back to the case with no CU at all.
Discussion and Conclusions
Exploring stellar models which ignite carbon off-center (in the mass range of about 1.05 - 1.25 M☉, depending on the carbon mass fraction), we find that they may present an interesting SN I progenitor scenario: whereas in the standard scenario runaway always takes place at the same density of about 2 × 10^9 gr/cm^3, in our case, due to the small amount of carbon ignited, we get a whole range of densities from 1 × 10^9 up to 6 × 10^9 gr/cm^3.
These results could contribute to addressing the emerging recognition that at least some diversity among SNe I exists, since runaway at various central densities is expected to yield various outcomes in terms of the velocities and composition of the ejecta, which should be modeled and compared to observations. Several issues which were beyond the scope of this work call for further investigation:

1. A deeper treatment is needed of the question whether thermal Urca can hinder the formation of a convective zone when electron capture on Mg24 sets in.
2. Our work provides initial models, which can apparently reach explosive runaway.
However, our treatment of the onset of explosion, involving a very crude treatment of convection and of the convective Urca process, can only be regarded as a preliminary guideline, setting the stage for a much more profound study. The real value of our results would be judged by fitting the results of the dynamical simulations to the observational data.
3. According to Liebert et al. 2005, some 6% of the white dwarves have masses above 1 M☉ (and below Chandrasekhar's mass of 1.4 M☉). Since the carbon -
"year": 2007,
"sha1": "ad2b90cabbc2e81fce77cb53bd5e8da675f115ef",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "ad2b90cabbc2e81fce77cb53bd5e8da675f115ef",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
} |
The identity/alterity nexus in international relations: two cases of encounter with difference
Received: 6 June 2019. Approved: 22 October 2019. Published: 4 August 2020. Abstract: The aim of this article is to survey the implications of the identity/alterity nexus in international relations (IR), as related to processes of othering, for understanding conflict and violence in global politics. I will offer what I could call an ontology of difference in global politics, where I stress the relevance of understanding othering practices in global politics, as I explore two cases from which I ask the following questions: How do identity and identity formation processes occur and develop at different levels, times and dimensions? How do discourses of differentiation and identification help construct state identities and interests? Following Emmanuel Lévinas, I will argue that by seeking ways to reach out towards the Other, we free ourselves from the restraints of selfishness, from indifference and isolation. Finding and coming to terms with a composition of the Self that also includes the Other enables us to take responsibility for him/her inasmuch as it prevents the conditions for violence and conflict.
Introduction
The subject of the encounter, above all of the white European encounter, with unfamiliar cultures is a long story of violence and conflict. World literature is full of examples in that regard: from Homer's Odyssey to The Travels of Marco Polo, from Joseph Conrad's Heart of Darkness to E. M. Forster's A Passage to India. They chronicle, narrate, depict, and represent the encounters between the Self and the Other. Some end in a true conversation, where difference is embraced and celebrated as a constituent feature of humankind. Yet sadly, most feature stories of conquest, intolerance, and dominance. They often shape our common-sense views of peoples and places, and instil in our minds stereotypes, prejudices, and phobias that usually inform the way we choose to engage with the Other. When the Other is represented positively, he is deemed non-threatening, hence maybe even an equal. If the depiction is negative, on the other hand, he is taken as undeserving of recognition. Therefore, the way we choose to engage in our encounters with the Other carries an ethical consequence which is political.
As a result, issues of identity, difference, otherness, and alterity are especially relevant in current times for breaking away from recurrent cycles of violence and insecurity; trying to live in peaceful coexistence with others requires the acceptance of difference. Indeed, the debate on these key issues, specifically their impact on foreign and security policy, is very present in International Relations (IR) scholarship. Authors such as Neumann (1999), Houtum and Naerssen (2002), Diez (2005), Hansen (2006) and Reinke de Buitrago (2012) have generated insights about what is considered by some to be the subject of a broader Western metaphysical debate since Hegel (see Bernstein 1991, 68; and Neumann 1996a, 141). Beyond the academic debate on these issues, the study of identity, alterity and othering also comes with great societal and political significance.
The creation and marking of difference and the drawing of borders carry conflict potential for international interactions among states, and for their societies. A more thorough understanding of such processes allows their critical reflection, as well as a reflection on one's own role and on the dynamics involved. Another possible contribution is to point to the fruitful potential of adjusting political actions so as to facilitate bridge-building rather than the creation of boundaries, and to interact with the Other so as to avoid harmful oppositions.
Following this introduction, three sections are presented, each raising a distinct question regarding the problem of otherness in IR. The first will feature a discussion about how we think about identity and difference. After surveying the traditional assumptions about identity and difference, and highlighting how othering processes influence and constitute violence and conflict, I will offer what I could call an ontology of difference in global politics, where I stress the relevance of understanding othering practices in global politics. The second section will feature two illustrative cases where dynamics of identity, discourses of difference, production of otherness, threat perception and threat articulation are intimately connected to identity construction processes as well as foreign policy choices. The third and final section will discuss the implications and consequences of othering practices in IR, as well as a call for an ethics of responsibility to the Other following Emmanuel Levinas' philosophy.
We will argue for the need to encounter the Other without erecting barriers of protection as a deluded self-defence mechanism for protecting and preserving one's own identity.
Following Lévinas, I argue that by seeking ways to reach out towards the Other, we free ourselves from the restraints of selfishness, from indifference and isolation. Finding and coming to terms with a composition of the Self that also includes the Other enables us to take responsibility for him/her inasmuch as it prevents the conditions for violence and conflict. In current times, when life itself seems so fragile and tenuous, and when we continuously see the Other being reduced to less than human, this stands as an ethical imperative for all of us.
Conceptualizing identity, difference, and otherness
Traditionally, the discipline of International Relations (IR) has understood its object of study as the actions of states outside their national borders in a context of anarchy. Following this canonical rationale, especially within the inter-paradigmatic debate, the discipline has evolved perceiving its problématiques as pertaining to the nexus between war and peace, or conflict and cooperation (Guillaume 2011). However, especially since the mid-1980s and early 1990s, the discipline has gone through profound changes in understanding what IR is about. Inspired by the linguistic, sociological, critical, and aesthetic turns, many scholars have focused their attention on issues of identity and processes of identity formation, in terms of states having national identities and/or being the bearers of identity, and of how national identities affect interstate relations (Holland 2013a; Neumann 1999). 2 In this sense, IR could be understood as the continuous process of constructing relations of Self and Other (Neumann 1996a; 1996b), or even the construction of difference (Nabers 2015), or furthermore an on-going process of converting difference into otherness. Discourses of alterity, which refer to the construction of the Other in juxtaposition to the Self, have frequently played a role in informing foreign policy. 3 As Holland (2013b, 10-11, 24) and Shapiro (1989, 13-14) state, an identity's specific cultural foundation and its specific nationally oriented view also inform state behaviour.

2 Albeit identity could be placed under much of the canonical study of IR, especially if treated as a 'possible independent variable to explain war/conflict and peace/cooperation' (Guillaume 2011, 2), some studies demonstrate that perhaps identity could help expand the field of IR if taken more seriously (see Albert et al. 2001; Hansen 2006; Hansen and Waever 2002; Lynn-Doty 1993; Nabers 2009).

3 Additional theorizations of othering/otherness are found in Hansen (2006) and Herschinger (2011).
In addition, the analysis of international actors such as the European Union allows departing from the hierarchical, inward-looking and essentialist conceptions of identity that are typical for nation-states.
Instead the EU provides a case study to reflect on more open-ended, fluid identifications in the global arena (Bretherton and Vogler 2006, 37), and to accentuate the possibility to overcome the pattern of negative othering that is characteristic for national identifications (Diez 2005;Neumann 1996b).
Broadly speaking, identity is the way in which we define ourselves in relationship to our surroundings, the world and others, and how we differentiate ourselves from others. It is not static, but somewhat stable, as it is part of the cognitive and emotional systems and formed early on. It may be defined in different ways in different circumstances, for example individual identity, group identity, or national identity. Identity construction begins in early childhood and in relation to socialisation and surroundings, hence a process. Yet, much of the identity remains relatively stable, once formed (see Boulding 1996;Fisher 1997;Benwell and Stokoe 2006). 4 It is constructed in difference to something other (Connolly 1991;Laclau and Mouffe 1985, 128). During this process of identity construction, the manner in which the Other is defined impacts identity in a process of co-constitution. That is, when the Other is portrayed or experienced as dangerous, the Self may feel threatened; when the Other is portrayed or experienced as non--threatening, the Self feels secure.
According to Münch (2001, 137), collective identity means "the attitudes, which all members of that group have in common in their thoughts and behaviour, which differentiates them from the "Other's". Therefore, identification implies "belonging or membership, which in turn implies the exclusion of non-members." (Bretherton and Vogler 1999, 236). According to Nabers (2015, 82), the logics of equivalence and of difference in processes of identification and differentiation between Self and Other point to the "constructedness of what appears to be a reality that is objective and timeless". Every social group is then constructed in an on-going practice of forming distinctions between us and them, Self and Other. Yet, otherness could come in diverse forms (Croft 2012, 91). If identity is constructed by both differentiating from and engaging with others (Lebow 2012, 270-271), it may be possible to maintain identity without producing harmful barriers toward others.
William Connolly talks about identity being constructed "on the shadow of the Other" (1991, 66) in the sense that it is established in relation to a series of differences that have become socially recognized by a group. They are essential to being, he argues, for if they did not coexist, identity would not have its distinctness and solidity. As a result, once confronted with difference, identity becomes unstable, unsecured, hence under powerful pressure to fix, regulate, or exclude the undesirable. "When this pressure prevails, the maintenance of one identity involves the conversion of some differences into otherness, into evil, or one of its numerous surrogates" (Connolly 1991, 64). As Campbell argues, any state that is speaking of itself as being threatened by another state creates difference towards the Other; the manner of threat articulation then shapes political processes (Campbell 1998a, 3, 171). Dirk Nabers highlights the instabilities inherent to any relational meanings, the threat of the inside by an outside, the dependence of the inside on differential relations within, and the infinite dispersion and constant presence of difference: "a whole cosmos of differences" (Nabers 2015, 100).

4 As correctly pointed out by one of the reviewers of this article, studies of European Union enlargement show how ambiguous the identity formation process is. For example, in order to present themselves more positively, Eastern European states engaged in a discursive remaking of their own national identities - as victims of Soviet repression policies in the case of the Baltic states, as free market-oriented economies, or as human rights sponsors. See Sedelheimer (2003) for the case of Slovakia, and Silva and Resende (2018) for the Baltic States. Both pieces stress the simultaneously fluid and conflictual quality of identity-making processes in the context of EU enlargement.
As suggested by Laclau (1996), identity is constituted based on relations of difference toward the Other, which in turn establishes the borders of the Self under construction. The Other must, necessarily, become the signifier for threat, chaos, instability and negativity in order to allow the articulation of the Self as the signifier for security, order, stability and positivity. As a result, in order to maintain and preserve a so-called cohesive national identity, there is a certain motivation to search for potential enemies, and states may thus easily perceive other states as threatening Others, which motivates the creation of otherness in IR.
Othering may be seen as process with multiple motivating factors. We should then better understand these factors and their interaction, as well as the specific points where othering is motivated, and what may be able to transform or end such a process. Ultimately, it is not every Other that mobilises the "we-feeling", and not every Other is framed in adversarial terms to the Self. The concept of significant Other then becomes useful in understanding the conflictual nature of othering but also the way in which the Other is productive in constructing the identity of agents (Morozov and Rumelili 2012).
We believe that current and past conflicts suggest that othering is negatively biased, involving both differentiation from and distancing from the Other.
There is thus further value in elucidating the question if othering is necessarily a negatively tainted process, that is, if the Other is always seen in a negative light. The academic debate diverges on this. For example, Diez (2005, 628-29) argues that the Other can also be seen as neutral. For Hansen (2006), however, there are always views of superiority and inferiority, thus hierarchy. For Bauman (2004), the binary opposition which lies at the base of this hierarchy is constantly subject to reversal.
Ways of perceiving the Other as both different and equal can be a worthwhile endeavour for the facilitation of balanced relations. There is great value in further defining the dynamics in multiple types of Self-Other relations, and the nuances involved in various types of othering processes. In order to illustrate our arguments, I will now explore two cases centred around issues of identity/alterity and how they pertain to specific security and foreign policy choices.
The cases are: 1) the role of otherness in U.S. identity formation processes and foreign policy throughout time, and 2) the use of discourses of differentiation and identification in constructing Russian and European state identities.
Illustrating difference and othering: two cases of othering in foreign policy discourses
Case 1: U.S. national identity, foreign policy and otherness

According to Mansbach and Rhodes (2007, 444), politics has always harnessed national identities as the prime discourse for articulating, constructing and disseminating a kind of "marriage" between nation and state. This union has been responsible for creating a vehicle that simultaneously creates, reproduces and rationalizes a specific kind of polity that became dominant in modernity: the nation-state. A common identity gives people a sense of comfort - effective albeit also somewhat illusory - that their lives are connected and that they have a reason to act upon common interests and goals. Constructed and reproduced by discourse, the group becomes an imagined moral community (Anderson 1991) that defines itself and, by inference, what it is not: the Other. As a result, identity discourses articulate and construct a moral community which defines itself as "us", ultimately different from "them", the "them" being excluded from the collective. For these reasons, such discourses are prescriptive, for they have the capacity to create realities, subjectivities, and relationships between Self and Other, especially when they define which rights and duties are recognized for the members of the community but withheld from those outside it.
Let us take the United States of America as an example. In his study of the Cold War, Campbell (1998a) observed that U.S. foreign policy texts of that time reproduced a very specific representation of reality. Articulations such as "free and peaceful America" threatened by "an international conspiracy", a set of values "granted by God" which are the roots of "western civilization", among other representations, created a specific representation of what the American Self was.
Identifying a sort of pattern in the production of difference between America/non-America, Campbell claimed that the constant and deliberate evocation of the "national mission", the "republic's objectives", the "defense of freedom", the "affirmation of the individual", and the "predestination of America" signaled that those policy documents did much more than simply offer an analysis of international politics: they reaffirmed a specific national identity for the United States: "Stamped 'Top Secret' and read by the elite, the texts blurred the lines between internal and external with 'quasi-puritan figurations'" (Campbell 1998a, 32).
Indeed, the discourse identified by Campbell referred directly to what he claimed to be a typically American puritan ideology. Narratives such as an "exceptional America" with the transcending "predestination" to be the "beacon of the world" and to act as a "benign empire" - above all as opposed to the "Evil Empire" symbolized by the Soviet Union - seemed to serve foreign policy interests at the time. They indicated the attempt to inscribe puritan ideology with a national, permanent and natural appearance. Employing the language, narrative, and style of the typical sermon of the puritans in Colonial America, the "jeremiad", 5 those texts signified the American Self as ideologically puritan in opposition to the non-puritan Other (Bercovitch 1978; Campbell 1998a; Resende 2012).
This led Campbell to propose a relationship between ideology and U.S. foreign policy, whereby the use of discourse to signify "America" and "non-America" meant that foreign policy discourses construct and reproduce an essentialized, homogenized national American Self. According to him, foreign policy is then to be understood as the practice for constructing borders, since it discursively produces differences based on dichotomies such as inside/outside, domestic/ foreign, and friend/foe based on identity markers of specific ideological content.
Due to its incompleteness as a political project, the state engages in a process of permanent reproduction, seeking situations in which it becomes exposed to the outside/external/foreigner/Other in order to be able to reaffirm that which belongs to the inside/internal/national/Self. Through foreign policy, the privileged arena for contact with difference, the state employs discourses of fear to convert the external into a threat and thus reassure its moral and spatial borders, and to stabilize its own identity. Therefore, the constant articulation of danger through foreign policy is not a threat for the identity or the existence of the state, "but its possible condition" (Campbell 1998a, 13). By constantly articulating insecurity and creating otherness, the state seeks and constructs its own legitimacy.
Considering, however, that threats as well as identities are never fixed, the state needs to continue to create and signify Others to legitimate its own reason for being. As a result, the discourse on the national Self occurs and develops at different levels, times and dimensions, according to specific political and social contexts. Cullinane and Ryan's work on how the Other is constituted

5 Adapted from Jeremiah's preaching in the Old Testament, the 'jeremiad' was a popular genre of public discourse in 1600s Massachusetts. Its key themes are the lamentation of social decay, the announcement of an impending doom, and the exhortation to zeal and moral surveillance to assure salvation for the community. It has a strong tone of alert about the certainty and imminence of the threats and dangers that loom, which shall punish those who fall into sin, degeneration and moral vice. See Bercovitch (1975).
Case 2: Russia and the European self/Other identity discourses
Another way to understand the functioning of the identity/difference nexus in IR is to highlight the use of discourses of differentiation and identification in constructing state identities and interests. In this case, taken mostly from Neumann (1996b), emphasis is placed on how "multiple alien interpretations" of Europe "struggle, clash, deconstruct, and displace one another", taking language and discourses of difference seriously (Ashley 1987, 409; Der Derian 1987, 69-70). Recognizing that identity is not located in "essential and readily identifiable cultural traits but in relations [articulated between Self and Other]", and that "the question of where and how borders towards the Other should be drawn therefore become crucial" (Neumann 1996b, 1), he shows how much Russia is caught between the attraction of an economically, politically and socially more developed Europe, and the appeal of being able to play a European-style imperial role in less-developed Asia. Russians, by talking about Europe, also talk about themselves. Indeed, Russian interests in Europe are not postulated as objective, natural, given national interests, but are constructed by discourses "confined to Russian debates conducted, directed at Russians, concerning Europe and, by the same token, the Russian themselves" (Neumann 1996b, 2). One representation dominated the Russian debate under Yeltsin, Neumann argues (1996b, 200), in which Russia was depicted as similar to Europe, albeit moving more slowly and less subtly towards liberal ideals of democracy, a free economy, and the rule of law. A xenophobic one stresses defense and struggle against the sudden Western "occupation" of a new Russia after the Cold War (Neumann 1996b, 197; 194). Waltz's starting assumptions (1979, 127) resulted in a theory of international relations that does not offer an ontology of the state. Indeed, he has little to say about how the modern state system came about (see Ashley 1984). Neumann's study shows how Russians have interpreted this competition and acted to implement it.
Implications and consequences of othering practices
Othering may be seen as a process with multiple motivating factors. If there is a chance for this, there is the possibility of neutral or positive differentiations between Self and Other, or of a form of differentiation that does not result in creating a threatening Other, in whatever form or degree. Thus, if identity can be maintained without building harmful oppositions, there need not be a negative process of creating otherness or othering.
How may it be, or become, possible to see and experience difference from the Self in a neutral or even a positive way? Following Emmanuel Levinas, I propose that this encounter could occur through a dialogue that avoids erecting barriers of protection as a self-defence mechanism for protecting and preserving one's own identity. At the core of Levinas' philosophy are descriptions of the encounter with the Other: the Other impacts me unlike any worldly object or force. I can constitute the other person cognitively, on the basis of vision, as an alter ego.
I can see that another human being is "like me", and acts "like me". As a result, subjectivity is born out of its relations to Others (Levinas 1985, 96), and the responsibility that derives from this encounter (Levinas 1966, 41) for the "being-for-the-other" also defines who we are. In this sense, subjectivity is primordially ethical as responsibility for the Other and is not a derivative feature of our subjectivity, but instead its anchor, as it gives it meaningful direction and orientation (see also Peperzak 1993; Burggraeve 1999; Bergo 2017).
Levinas derives the primacy of his ethics from the encounter with the Other, which he calls the face-to-face. For Levinas, the Other is not knowable and cannot be made into an object of the Self, as is done by traditional ontology.
Responsibility toward the Other precedes any "objective searching after truth". The experience of the face-to-face constitutes a privileged phenomenon in which both the Other's proximity and distance are strongly felt. "The Other precisely reveals himself in his alterity not in a shock negating the I, but as the primordial phenomenon of gentleness" (Levinas 1969, 151).
He goes on to argue that the encounter of the Other through the face-to-face experience forbids any attempts to exclude, domesticate, or reduce to sameness, and simultaneously installs the responsibility for the Other in the Self. Here the connection between responsibility and the Other becomes the basis for Levinas' ethics: to be responsible means to make oneself available for service to the Other so that one's own life is intrinsically linked with the Other's life (Levinas 1985, 97). I am thus a human being in the sole measure that I am responsible for another. Responsibility then becomes the "place where I bind myself to the Other" (Levinas 1981, 12). This binding is not that of a piece of material to the block of which it is a part, nor that of an organ to the organism in which it functions (Levinas 1966, 41); rather, it is the place in which the Self enters into a relationship with the Other that is disinterested, but not indifferent. Responsibility seeks the good of the Other, not looking for recognition in the Other. Thus, the desire for the Other is not appetite but generosity (Levinas 1966, 39). Being-for-itself means that the Self is pre-occupied with itself, and therefore indifferent to the Other (Levinas 1969, 87).
Levinas' term "being-for-the-Other" (Levinas 1969, 261) indicates a position in which the Self is responsible for the Other, which requires the exercise of sensibility (Levinas 1981, 77), one that implies exposure or being in proximity to the Other. It involves standing in the place of another and offering protection to them (Levinas 1969, 135-136). Therefore Levinas' ethics of alterity consists in opening one's Self to the Other, especially the one that is different, unequal, diverse, and plural, which deserves to be respected as it is, without indifference, dismissal, repulsion, exclusion, or simplification of its particularities.
Only by embracing a conception of the Self that also includes the Other will we be able to let go of restraints of selfishness, indifference and isolation.
After all, we are to some degree dependent on the Other, inasmuch as our sense of Self emerges from encountering the Other: we are constituted in and by our relationship to it.
Levinas' ethics of otherness was first introduced into IR via three major themes: the ethics of responsibility in genocide, the unintentional violence in knowledge production, and the concept of hospitality in forced displacement. Writing about the war in Bosnia, David Campbell used Levinas' philosophy to rethink the question of responsibility, especially in the context of ethnic cleansing such as in Bosnia (see Campbell 1994; 1998b).
Arguing that responsibility is not a choice but an ethical imperative, Campbell wrote that the international community was required to stop and listen to the voices within Bosnia that were not oriented toward nationalism and/or partition, and to consider those Others who were advocating non-nationalist paradigms as potential partners in a multicultural peace process. Levinas' concern over the ethics of responsibility has inspired authors such as Elizabeth Dauphinee and Michael Shapiro to focus on how knowledge systems contribute toward unintentional violence in IR scholarship. While Shapiro (1999) points out that IR itself constitutes an enclosed system of beliefs that undermines alternative possibilities for how we might understand and approach the world, Dauphinee (2007) has argued that knowledge produced by researchers about post-conflict societies involves a rapid categorization of "good" and "evil", which undermines responsibility in Levinas' sense, for it suggests that the "good" are the only group to which we are obligated. Finally, Roxanne Lynn-Doty (2006) turned to the concept of hospitality, showing how the humanitarian group Fronteras Compasivas acts with responsibility towards the Other, for its members will never know the names, legal status, or identities of those who will rely on their water reserves to survive their deadly journeys.
What this scholarship has in common is a concern with understanding the new dilemmas for responsibility in terms of the need to identify to whom we are responsible in our political practices. In short, what does it mean for political practices to find ourselves infinitely obligated to Others? Levinas' thinking opens us up to a responsibility as radical as it is inescapable, one that he compares to a hostage situation: "I am pledged to the other without any possibility of abdication. I cannot slip away from the face of the other in this nakedness…to approach is to be the guardian of one's brother; to be the guardian of one's brother is to be his hostage" (Levinas 1998, 72). This idea of responsibility stems from the awareness that our very existence always generates violence, whether we mean it or not. Our comfortable lives are always made possible by another's suffering, even when we do not wish this outcome. It is our existence itself that causes potential injury to the Other, for we are always dependent on the Other for our very sense of Self - be it the American, Puritan sense of Self, or the Russian non-European Self. Ultimately, we are constituted in and by our relationship with the Other, and this relationship is not based on expectations, on universal rights, on community or family ties, or even on national loyalty. In short, we are unconditionally responsible for the lives of Others, and this is the command that our living together in a world of difference presents to us.
Concluding remarks
This article has attempted to explore what I perceive as the ultimate challenge of our times: How can we live together peacefully in a world that is made of difference? This question leads us to engage with the politics of otherness, that is, to understand how the identity/alterity nexus impacts issues regarding collective identity formation and difference, as well as to reflect upon our own responsibility regarding the life of the Other. But perhaps this is also a question relating to some level of equality or comparability of abilities and wealth/material security, since humans tend to compare themselves with the groups and individuals around them. Comparison between Self and Other seems to be a key mechanism for understanding the Self vis-à-vis surrounding Others. Thus, groups that find a somewhat comparable level of wealth in the groups surrounding them have one strong motivation fewer on which to differentiate themselves from Others in a negative manner. But seeing difference neutrally or positively would also seem to be a question of a - more or less cultivated - ability to live with difference without needing to judge it as better or worse, in a way that difference itself is perceived as less strong or even marginal, as being of little relevance to the safety and security of the Self, and thus not serving or being used as motivation to treat or act towards others in a divisive manner.
In a way, this is a societal task, and an essential one for peaceful relations in our world. But the smallest unit of any society is the individual, and any individual finds itself in the unit of the family; it is in families where the socialization of all members of society begins, where the foundation is laid for later thinking and behaviour. Families then also play a pivotal role in how difference between Self and Other is perceived, experienced and acted upon. In fact, it is in families that children, at a very young age, learn - physically and psychologically - to comprehend themselves as unique beings, separate from others, yet also in many ways similar to others, as well as how to communicate with and behave toward others. Key in such processes is also how parents, but also grandparents and older siblings, teach how to deal with (various types of) difference by acting toward (various) others. As is known, difficulties in relations with others, as well as pathological or "ineffective" ways of dealing with these, are often transferred to children, who then apply similar modes of behaviour. The family is then a key location of teaching about Self-Other difference and deserves much greater focus when it comes to how difference can be dealt with in a neutral or even positive manner.
Another focus may be the role of teaching not only tolerance of difference but, even more, the seeing of difference as enrichment and enjoyment, as something desired and desirable - thus, as truly positive. Such efforts seem existentially important when viewing the global landscape of recurrent conflict and distress. This should begin in the family, but then be continued in kindergartens and schools (and there are plenty of examples where teachers work towards an inclusive view of difference). There is certainly room for this at the workplace too; how to deal with diversity in a positive manner is in fact of interest for increased productivity, thus in actual monetary terms, but also for the creation of a positive work climate. Moreover, it is political actors who need to recognize that their often instrumentalized use of difference, for various political agendas, is harmful for societal and international relations, causing much greater costs in the middle and long term than the gains it promises in the short term.6 The main point here, following Levinas' ethics of responsibility, is not to judge the Self as morally right or intentionally violent, for the Other will always be harmed despite our best intentions.

6 As pointed out by one reviewer, the question of how the role of teaching and the family interplays with the nation-state and its processes of identity and otherness in Levinas's philosophy remains open in this article. Indeed, theorizing the possibilities for a transition from ethics to politics is a truly challenging aspect of Levinas' thinking. As noted by Patricia Molloy, this passage is a "bumpy ride" (Molloy 1999, 233), and I consciously avoid undertaking it here due to the limitations of this article in terms of scope and length.
Rather, we should practice a "guiltless responsibility" (Levinas 1989, 83), considering that the Self is always hostage to the Other due to the notion of ethics as the first philosophy according to Levinas. Ethics is the prior condition on which all subsequent philosophy is built; therefore, all ethical thought finds its source in our continuous, unconditional, and infinite responsibility to and for the Other. As a result, our very being is tied to the protection of the Other, for this is what it means in ethical terms when we argue that Self and Other are mutually constituted. In this sense, it is always helpful to remember Levinas' favourite quote from Dostoyevsky's The Brothers Karamazov: "We are all responsible for everyone else - but I am more responsible than all the others". By taking responsibility for the Other, we also reject the possibility of violence that could emerge from an encounter with the Other, especially when acts of violence are committed and justified in relation to the so-called protection of a specific Self from a dangerous, threatening Other. In current times, where life itself seems so fragile and tenuous, and where we continuously see the Other being reduced to less than human, this stands as an ethical imperative to all of us.
"year": 2020,
"sha1": "be9867b7abb50e6dd7e1d29a7f21f20ce2557806",
"oa_license": "CCBY",
"oa_url": "https://revistaseletronicas.pucrs.br/index.php/civitas/article/download/34347/26181",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "55ad1ce94c17bd7503854062a5121b36083832b0",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Sociology"
]
} |
Elucidating Tricin-Lignin Structures: Assigning Correlations in HSQC Spectra of Monocot Lignins
Tricin [5,7-dihydroxy-2-(4-hydroxy-3,5-dimethoxyphenyl)-4H-chromen-4-one] is a flavone that has been found to be incorporated in grass lignin polymers via 4′–O–β coupling. Herein, we investigated the tricin-lignin structure using nuclear magnetic resonance (NMR) methods by comparing the 1H–13C heteronuclear correlation (HSQC) NMR spectra of the isolated lignin with a series of dimeric and trimeric tricin-4′–O–β-ether model compounds. The results showed that the tricin moiety significantly affects the chemical shift of the Cβ/Hβ of the 4′–O–β unit, producing peaks at around δC/δH 82.5–83.5/4.15–4.45 that differ from the Cβ/Hβ correlations of normal 4–O–β units formed solely by monolignols, and that have to date been unassigned.
Introduction
Lignin, a phenylpropanoid polymer, is one of the major components of plant cell walls. Many aspects of lignin structure and biosynthesis remain elusive [1]. Several novel monomers and the details of various interunit linkages in lignin polymers were not discovered until rather recently [2][3][4]. For example, tricin [5,7-dihydroxy-2-(4-hydroxy-3,5-dimethoxyphenyl)-4H-chromen-4-one], a flavone derived from a combination of the shikimate and acetate/malonate-derived polyketide pathways, was only recently revealed to be present in the lignin polymer from wheat straw according to the characteristic correlations in the 1H-13C heteronuclear correlation (HSQC) spectrum [5]. Follow-up studies using biomimetic radical coupling reactions authenticated tricin as a monomer incorporated in polymeric lignin via 4′-O-β-coupling with monolignols [6]. Metabolite profiling of the lignifying tissue of maize elucidated the incorporation pathway of tricin into lignin [7], and the absolute contents of tricin in the lignin from various plant species were also investigated using thioacidolysis and liquid chromatography-mass spectrometry (LC-MS) [8].
HSQC is the most frequently applied 2D nuclear magnetic resonance (NMR) technique for lignin characterization because it provides comprehensive information on the types of units and their characteristic interunit linkages in the polymer. Signal assignments in a HSQC spectrum of lignin are mostly based on the assignments of low molecular weight model compounds, particularly dimers and trimers, which provide the key NMR data for structural authentication. In this short communication, we report on work in which we synthesized several 4′-O-β-coupling products of tricin with coniferyl and sinapyl alcohol, as well as a trimer that resulted from further cross-coupling with another monolignol. By comparing the NMR data of the model compounds with those of an isolated lignin sample, we have elucidated the tricin-lignin structure (especially the sidechain structure) in polymeric lignin and reveal characteristic new correlations in the HSQC spectrum.
Materials
All chemicals and solvents used in this study were purchased from commercial sources (analytical grade) and used without further purification. The wheat sample was the same as that used in a previous publication [5]; chalcone synthase (CHS)-deficient and control maize samples were those used in another earlier study [9]. Thin-layer chromatography (TLC) plates (20 × 20 cm², 1 mm, normal phase; Analtech, Newark, NJ, USA) were used for raw product fractionation and purification using hexane/ethyl acetate or methanol/dichloromethane as eluent. Flash chromatography was performed using Biotage SNAP silica cartridges on an Isolera One instrument (Biotage, Uppsala, Sweden) using a hexane/ethyl acetate (EtOAc) gradient as eluent.
Acetylation of Model Compounds and Lignin
The model compound (10 mg) or enzyme lignin (25 mg) was dissolved in 0.25 mL pyridine and 0.1 mL acetic anhydride (1 mL pyridine and 0.5 mL acetic anhydride for lignin) and stirred for 2 h (12 h for lignin) at room temperature. When the reaction was completed, the solvent was evaporated under reduced pressure at 45 °C. Ethanol was added as co-solvent and repeatedly evaporated to completely eliminate the residue of pyridine and acetic anhydride. The acetylated samples can be readily dissolved in CDCl3 and then transferred to an NMR tube for characterization.
Results and Discussion
The β-ether unit with its β-O-4-ether interunit linkage is the most abundant in lignin polymers and has already been studied comprehensively. It is well established that the methoxylation degree of the aromatic ring in β-O-4 aryl ethers affects the chemical shift of the Cβ/Hβ correlation in HSQC spectra, resulting in resolvable correlations between β-O-4-guaiacyl units and β-O-4-syringyl units. Recently, tricin was found to be incorporated into lignin via 4′-O-β-coupling with monolignols. Tricin contains one more carbon-carbon double bond and a carbonyl group conjugated to the 3′,5′-dimethoxylated aromatic ring in its structure, which is different from the β-O-4 interunit formed by the three canonical monolignols. Such an electronically different structure attached at the β-position of a monolignol might be expected to affect the chemical shifts of the Cβ/Hβ correlation.
To confirm our hypothesis, we synthesized four model compounds, including three dimers that result from tricin cross-coupled with coniferyl alcohol, sinapyl alcohol, and the phenol-etherified product represented by 4-O-methylated coniferyl alcohol, and one trimer of tricin-(4′-O-β)-sinapyl alcohol-(4-O-β″)-coniferyl alcohol. By comparing the lignin HSQC spectra with those of the model compounds, we were able to resolve the tricin-lignin structure from these Cβ/Hβ correlations. The aromatic region of the tricin HSQC spectrum has been well studied previously [5,6], showing four characteristic correlations corresponding to C6/H6, C8/H8, C3/H3, and C2′6′/H2′6′ in tricin. Herein we focus on the sidechain structure of tricin-lignin from the "lignin side". Figure 1 shows the sidechain region of the HSQC spectrum of wheat straw lignin overlaid with those of the model compounds. For acetylated samples (Figure 1a), the signals of Cβ/Hβ and Cγ/Hγ (from the tricin-monolignol interunit) fell into the same region as the signals of Cβ″/Hβ″ and Cγ″/Hγ″ (from the monolignol-monolignol interunit). The correlations of Cα/Hα and Cα″/Hα″ in the two isomers of acetylated T-(4′-O-β)-S-(4-O-β″)-G were resolvable. In the case of non-acetylated samples (Figure 1b), the corresponding tricin-unit Cβ/Hβ correlations were sufficiently displaced to be differentiated from the above two cases (Table 1). In fact, in the supplementary information (Supplementary Figure S1) from the original paper identifying tricin in wheat straw lignin [5], there is also a clear long-range (HMBC) correlation to this correlation peak, further supporting the assignment here.
The HSQC spectra of the lignin preparations from monocot species such as wheat [5,10,11], barley [12], sugarcane [13], elephant grass [14], Brachypodium (unpublished data in our lab), and oat (unpublished data in our lab) all clearly showed the correlations in the range of δC/δH 82.5-83.5/4.15-4.45, but researchers were not able to assign this correlation at the time because of the lack of data from model compounds. The lignin from coconut coir fibers contains tricin as well, but the content was too low to show the characteristic tricin correlations in the HSQC spectrum [8,15].
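For readers who post-process peak lists exported from their NMR software, the diagnostic window can also be applied programmatically. The short Python sketch below is illustrative only: the peak list is invented, and the window boundaries simply restate the δC/δH 82.5-83.5/4.15-4.45 range discussed above; tolerances would need to be adjusted to the actual spectra and solvent.

```python
# Illustrative screening of HSQC cross-peaks against the tricin-(4'-O-beta)
# Cβ/Hβ window discussed in the text (δC 82.5-83.5 ppm, δH 4.15-4.45 ppm).
# The peak list below is invented example data, not measured values.

TRICIN_BETA_WINDOW = {"C": (82.5, 83.5), "H": (4.15, 4.45)}

def in_window(delta_c, delta_h, window):
    """Return True if a (δC, δH) pair falls inside a rectangular 2D window."""
    (c_lo, c_hi), (h_lo, h_hi) = window["C"], window["H"]
    return c_lo <= delta_c <= c_hi and h_lo <= delta_h <= h_hi

peaks = [(83.1, 4.30), (86.0, 4.10), (72.3, 4.85), (82.7, 4.20)]

for delta_c, delta_h in peaks:
    label = ("tricin-(4'-O-beta) Cβ/Hβ candidate"
             if in_window(delta_c, delta_h, TRICIN_BETA_WINDOW)
             else "other unit")
    print(f"δC {delta_c:5.1f} / δH {delta_h:4.2f} ppm -> {label}")
```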
To further confirm our hypothesis, we characterized the lignin isolated from the CHS-deficient maize and its wild-type control. CHS is the main enzyme that controls the carbon flux from the common pathway intermediate p-coumaroyl-CoA toward flavonoid biosynthesis. The tricin moiety was completely depleted in the CHS mutant and, therefore, the corresponding HSQC spectrum did not show the characteristic correlations from tricin in the aromatic region [9]. As was not appreciated at the time, however, the tricin-related Cβ/Hβ correlation in the lignin sidechain region was also depleted (Figure 2); the HSQC spectrum of the lignin isolated from wild-type samples clearly shows the Cβ/Hβ peak from tricin-(4′-O-β) substructures (Figure 2).
Conclusions
In summary, the data from model compounds, together with the study of the lignin structure of different species, as well as the lignin from CHS mutant maize leaf and its corresponding wild-type control, unambiguously support our hypothesis that the tricin moiety affects the Cβ/Hβ chemical shifts of the 4′-O-β unit such that correlations in the region of δC/δH 82.5-83.5/4.15-4.45 in the HSQC spectrum are diagnostically assigned to these tricin-lignin unit Cβ/Hβ entities. As such, these correlations provide another "marker" for tricin in lignins, this time from the viewpoint of the lignin rather than the tricin moiety.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2018,
"sha1": "441e1c2bd34bd438392c7c3c9490b9505c511f03",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/10/8/916/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "441e1c2bd34bd438392c7c3c9490b9505c511f03",
"s2fieldsofstudy": [
"Materials Science",
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Interleukin-6–mediated trans-signaling inhibits transforming growth factor-β signaling in trabecular meshwork cells
Glaucoma is one of the major causes of blindness, and transforming growth factor-β2 (TGF-β2) has been found to be elevated in the aqueous humor of eyes with primary open-angle glaucoma (POAG). TGF-β2 in aqueous humor causes the glaucoma-related fibrosis of human trabecular meshwork (HTM), suggesting an important role of TGF-β in POAG pathogenesis. Here, we sought to elucidate the effects of IL-6 trans-signaling on TGF-β signaling in HTM cells. Using a multiplex immunoassay, we found that POAG patients had decreased IL-6 levels and increased soluble IL-6 receptor (sIL-6R) levels compared with controls. In in vitro experiments, we observed that the IL-6 level was increased in the conditioned medium of HTM cells after TGF-β2 stimulation. To elucidate the relationship between TGF-β2 and IL-6 in HTM cells, we conducted Western blotting and immunohistochemical analyses, and we noted that the combination of IL-6 and sIL-6R (IL-6/sIL-6R) suppressed TGF-β–induced up-regulation of α-smooth muscle actin in HTM cells, whereas IL-6 alone did not. This suggests that trans-signaling, not classic signaling, of IL-6 suppresses TGF-β–induced fibrosis of HTM. IL-6/sIL-6R also suppressed TGF-β–mediated activation of myosin light chain 2 (MLC2), Smad2, and p38. Of note, these inhibitory effects of IL-6/sIL-6R on TGF-β were partly reduced by siRNA-mediated knockdown of STAT3. Moreover, IL-6/sIL-6R partly inhibited TGF-β–induced activation of the Smad-sensitive promoter detected with luciferase reporter gene assays and up-regulation of TGFRI and TGFRII, evaluated by quantitative real-time RT-PCR. Strikingly, overexpression of TGFRI and TGFRII diminished these inhibitory effects of IL-6/sIL-6R. We conclude that IL-6–mediated trans-signaling potently represses TGF-β signaling in HTM cells.
As elevated intraocular pressure (IOP) is the major risk factor for the development and progression of glaucoma, reduction of IOP is regarded as the most important therapeutic modality (2). IOP is determined by the balance between inflow and outflow of aqueous humor. Elevated IOP is caused mainly by increased resistance to aqueous outflow. As transforming growth factor (TGF)-β was elevated in eyes with primary open angle glaucoma (POAG) and increased aqueous resistance by inducing up-regulation of the extracellular matrix (ECM) in the trabecular meshwork, TGF-β is thought to play an important role in the pathogenesis of POAG (3-9). It has also been reported that TGF-β increased α-smooth muscle actin (α-SMA) expression in trabecular meshwork cells (10,11), suggesting that the fibrotic changes in the trabecular meshwork may be related to the pathogenesis of POAG. Thus, controlling fibrosis in the trabecular meshwork may be a potent therapeutic target against glaucoma.
Interleukin (IL)-6 has various physiological and pathological functions. IL-6 signal transduction is mediated by binding the complex of the IL-6 receptor (IL-6R) and gp130 on target cells. Although gp130 is expressed on almost all cells, membrane IL-6R (mIL-6R) is present in only limited cell populations, including hepatocytes and some leukocytes (12). Signal transduction through mIL-6R is called classic signaling, whereas trans-signaling is mediated through soluble IL-6R (sIL-6R). sIL-6R is produced by shedding of mIL-6R or by translation of a differentially spliced mRNA (13,14), and enables IL-6 to bind to gp130 on the cell membrane by forming a complex of IL-6 and sIL-6R. Thus, in the presence of sIL-6R, cells can respond to IL-6 even if the cells lack mIL-6R.
Previous studies indicated that various cytokines, including IL-6 and IL-8, and monocyte chemoattractant protein (MCP)-1, were present in the aqueous humor of patients with POAG and secondary glaucoma (15-18). Among the cytokines detected, IL-6 and MCP-1 increased aqueous outflow in ex vivo experiments (19,20), suggesting that these cytokines are involved in the regulation of IOP. Although only the level of MCP-1 was higher in POAG than in controls, the levels of MCP-1 and IL-6 were positively correlated with each other. Interestingly, it has been reported that sIL-6R is also present in aqueous humor, and its concentration is elevated in patients with uveitis (21-23). Although trans-signaling of IL-6 is thought to be involved in the inflammatory response, its role in the pathology of glaucoma remains unknown.
In this study, we showed that the concentration of sIL-6R is elevated in the aqueous humor in eyes with POAG, and trans-signaling of IL-6 suppresses TGF-β signaling through activation of signal transducer and activator of transcription (STAT) 3 in trabecular meshwork cells.
Concentration of sIL-6R is higher in the aqueous humor of glaucoma patients
A multiplex immunoassay was performed to examine the cytokine levels in the aqueous humor in eyes with cataracts (as controls) or POAG. Patient characteristics are shown in Table 1. The mean ± S.D. IL-6 levels were 34.6 ± 44.8 and 15.9 ± 54.7 pg/ml in eyes with cataracts (n = 17) and POAG (n = 22), respectively. The corresponding sIL-6R levels were 68.6 ± 30.9 and 118.7 ± 80.4 pg/ml, respectively (p = 0.025; Fig. 1A). The levels of IL-6 and sIL-6R were not correlated with each other in patients with POAG or cataracts (Fig. 1, B and C). These data suggest that aqueous sIL-6R was elevated in POAG patients independently of IL-6.
Trans-signaling is dominant in the transduction of IL-6 signaling in HTM cells
Phosphorylation of STAT3 was investigated to confirm the effects of sIL-6R on the activation of IL-6 signaling in HTM cells. IL-6-induced phosphorylation of STAT3 was limited in the absence of sIL-6R (Fig. 2). In contrast, the combination of IL-6 and sIL-6R markedly enhanced STAT3 phosphorylation. Furthermore, the combination of IL-6 and sIL-6R suppressed TGF-β2-induced activation of Smad2, p38, and myosin light chain (MLC)-2, whereas IL-6 alone had no such effect. These results suggested that trans-signaling is dominant compared with classic signaling in the transduction of IL-6 signals in HTM cells.
Trans-signaling of IL-6 is active in HTM cells even in the presence of TGF-β2
To clarify the effects of TGF-β2 on the activation of STAT3, we quantified the ratio of phospho-STAT3 in HTM cells. The combination of IL-6 and sIL-6R significantly increased the ratio of phospho-STAT3 in the presence or absence of TGF-β2 (p = 0.0029 and 0.0006, respectively; Fig. 3A). In addition, immunohistochemical studies demonstrated accumulation of phospho-STAT3 in the nuclei of HTM cells after treatment with IL-6 and sIL-6R regardless of the presence or absence of TGF-β2 (Fig. 3, B-E).
Trans-signaling of IL-6 inhibits TGF-β2-induced formation of α-SMA and F-actin
To elucidate the effects of IL-6 trans-signaling on fibrogenic changes, the expression of α-SMA, a representative marker of the epithelial-to-mesenchymal transition (EMT) and fibrosis under conditions of TGF-β2 stimulation, was investigated. α-SMA induction by TGF-β2 was significantly suppressed by treatment with both IL-6 and sIL-6R (p < 0.0001; Fig. 4A). Next, we assessed phosphorylation of MLC2 and F-actin formation, which are related to contractile properties. Phosphorylation of MLC2 was induced after treatment with TGF-β2, as expected, and its induction was significantly suppressed by activation of IL-6 trans-signaling (p = 0.0389; Fig. 4B). In addition, the TGF-β2-induced polymerization of actin was inhibited by the combination of IL-6 and sIL-6R in HTM cells (Fig. 5).
Effects of IL-6 trans-signaling on the TGF-β-signaling pathway
To elucidate the effects of IL-6 trans-signaling on canonical and noncanonical TGF-β-signaling pathways, the phosphorylation levels of Smad2 and p38 were investigated. TGF-β2 increased the relative level of phospho-Smad2 (pSmad2; p < 0.0001), and the induction was partly suppressed by the combination of IL-6 and sIL-6R (p = 0.0482; Fig. 6A), suggesting IL-6 trans-signaling may inhibit Smad2 activation. Similarly, TGF-β2-induced activation of p38 was partly inhibited by activation of IL-6 trans-signaling (p = 0.0404; Fig. 6B). Next, to clarify the effects of IL-6 trans-signaling on the transcriptional activity of regulatory Smads, the luminescence from a luciferase reporter gene fused with 12 repeats of the Smad-binding element (CAGA-12) was measured by luciferase assay. The relative levels of CAGA-12 promoter activity were 71.4 ± 5.6-, 0.8 ± 0.1-, and 40.4 ± 4.3-fold after treatment with TGF-β2 alone, IL-6/sIL-6R alone, and TGF-β2 in the presence of IL-6/sIL-6R, respectively (Fig. 6C; significantly different from each other, p < 0.0001). These results indicated that IL-6 trans-signaling suppressed canonical and noncanonical TGF-β2 signaling, at least partly, in HTM cells.
STAT3 knockdown ameliorated the inhibitory effects of IL-6 trans-signaling on TGF-2 signaling in HTM cells
To confirm the inhibitory effects of IL-6 trans-signaling on TGF-2-induced changes in HTM cells, small interfering RNA (siRNA)-mediated control of STAT3 production was induced. and of phospho-myosin light chain 2 (pMLC2), and total MLC2 (B). Relative changes in the ratio of ␣-SMA per -actin (A) and pMLC2 per total MLC2 (B) are shown in the lower graphs. Data are shown as means Ϯ S.E., n ϭ 6. *, p Ͻ 0.05, and **, p Ͻ 0.01, calculated using the Tukey-Kramer HSD test.
We confirmed that control siRNA did not affect the activity of IL-6 or TGF-β2 signaling and that STAT3 siRNA efficiently knocked down the expression of STAT3 in HTM cells (Figs. 7 and 8). As described above, up-regulated levels of α-SMA, pSmad2, p-p38, and pMLC2 expression by TGF-β2 treatment were confirmed, and these effects were partly inhibited by knockdown of STAT3 (Figs. 7-9). In addition, TGF-β2-induced phosphorylation of ERK1/2 was suppressed by the combination of IL-6 and sIL-6R (Figs. 7 and 8). After knockdown of STAT3, the basal level of ERK1/2 phosphorylation was up-regulated, and the inhibitory effect of IL-6 trans-signaling was reduced in HTM cells. Taken together, the inhibitory effects of IL-6 trans-signaling on canonical and noncanonical TGF-β2 signaling were dependent on the activation of STAT3 in HTM cells.
IL-6 trans-signaling suppresses expression of the TGF-β receptors, TGFBR1 and TGFBR2
To explore the molecular mechanisms underlying the inhibitory effects of IL-6 trans-signaling on TGF-β signaling, mRNA levels of TGFBR1 and TGFBR2, encoding TGF-β receptors 1 and 2, respectively, were investigated. Our quantitative real-time reverse-transcription PCR (RT-PCR) analyses showed that TGFBR1 mRNA was induced by 10.0 ± 1.6-fold after treatment with TGF-β2 (2.5 ng/ml) alone, and this value was significantly reduced to 4.5 ± 0.5-fold after treatment with TGF-β2 and IL-6/sIL-6R (100 and 200 ng/ml, respectively; p = 0.0180; Fig. 10A). The level of TGFBR2 mRNA was also increased by 1.5 ± 0.1-fold after treatment with TGF-β2, and its induction was reduced to the basal level.
(Figure legend fragment: cells were stimulated with TGF-β2 (2.5 ng/ml), IL-6 (100 ng/ml), and/or sIL-6R (200 ng/ml) for 24 h; A and B, representative Western blots of phospho-Smad2 (pSmad2)/total Smad2 and phospho-p38 (p-p38)/total p38, with relative changes shown as means ± S.E., n = 6 and n = 5; **, p < 0.01 compared with control, Tukey-Kramer HSD test; C, relative signals from a luciferase reporter gene fused with 12 repeats of the Smad complex-binding element (CAGA), with Renilla luciferase as internal control, means ± S.E., n = 15; *, p < 0.05, and **, p < 0.01, Tukey-Kramer HSD test.)
Overexpression of the TGF-β receptors, TGFBR1 and TGFBR2, partly eliminates the inhibitory effects of IL-6 trans-signaling
To confirm the effects of down-regulation of TGF-β receptors on the inhibitory effects of IL-6 trans-signaling, we overexpressed TGFBR1 and/or TGFBR2 in HTM cells. The ratio of the inhibitory effect of IL-6 trans-signaling in the presence of TGF-β2 was 41.8 ± 4.7% in the control condition (Fig. 11). The corresponding values after single induction of TGFBR1 and TGFBR2 were 28.5 ± 8.0 and 38.2 ± 14.5%, respectively. In contrast, the corresponding value after double induction of TGFBR1 and TGFBR2 was 2.4 ± 15%. Thus, the combined induction of TGFBR1 and TGFBR2 partly eliminates the inhibitory effects of IL-6 trans-signaling, and the down-regulation of TGFBR1 or TGFBR2 therefore explains the inhibitory effects of IL-6 trans-signaling on the downstream targets of TGF-β.
Discussion
In this study, we showed that the concentration of sIL-6R is elevated in the aqueous humor of eyes with POAG, and trans-signaling of IL-6 suppressed TGF-β-induced fibrogenic changes through activation of STAT3 in HTM cells. To the best of our knowledge, this is the first study to clarify the roles of IL-6 trans-signaling in glaucoma pathology. Moreover, the inhibitory effects of IL-6 trans-signaling on the expression of TGFBR1 and TGFBR2 have not been reported previously. Indeed, overexpression of both TGFBR1 and TGFBR2 partly suppresses the inhibitory effects of IL-6 trans-signaling, probably explaining the potential mechanism of the effect. Thus, the findings of this study suggest potential novel targets for glaucoma therapy.
Our results indicated up-regulation of IL-6 secretion from HTM cells after treatment with TGF-β2, a dominant isoform in aqueous humor (24). Liton et al. (19) reported that IL-6 expression was elevated under conditions of cyclic mechanical stress partly by TGF-β1 in HTM cells. They also found that the TGF-β1-induced up-regulation of IL-6 was suppressed by inhibitors of p38, ERK, or JNK (25). As TGF-β1 and TGF-β2 share receptors, the findings of this study are consistent with previous reports. Thus, it was indicated that IL-6 production from HTM cells was induced by TGF-β, and this induction may represent part of a negative feedback system. As the aqueous TGF-β2 level was elevated in eyes with POAG, it may seem puzzling that the level of IL-6 in the aqueous humor in eyes with POAG was not different from that in controls. As the trabecular meshwork is downstream of aqueous flow in the anterior chamber, the IL-6 production from HTM cells may not be directly reflected in the concentration of IL-6 in the aqueous humor. It is also possible that the HTM cells from glaucoma patients do not produce IL-6, which could be a potential mechanism for glaucoma pathogenesis.
The level of STAT3 phosphorylation was elevated after treatment with IL-6 alone in this study. However, its induction was less prominent compared with combined treatment with IL-6 and sIL-6R. These results indicated that trans-signaling is dominant over classic signaling for IL-6 signal transduction in HTM cells. Although the limited reaction to treatment with IL-6 alone could be a result of signal transduction via classical signaling, mIL-6R is expressed in only limited cell types, including hepatocytes and some leukocytes (12). Thus, another explanation could be that external IL-6 binds to sIL-6R, which is produced by HTM cells, and thereby activates STAT3 in the absence of external sIL-6R. Further experiments are required to clarify the occurrence of IL-6 classical signaling in HTM cells.
To the best of our knowledge, this is the first report on the inhibitory effects of IL-6 trans-signaling on TGF-β signaling and its downstream targets, α-SMA and MLC2, in HTM cells. These results suggest that IL-6 trans-signaling could be involved in the regulation of IOP. First, TGF-β signaling has been shown to decrease aqueous outflow by ECM accumulation in ex vivo experiments (5). Second, it has been reported that α-SMA expression is a key marker of fibrogenic changes, such as cell contractility, attachment, motility, and EMT-like phenomena, in HTM cells (10,11,27). Third, recent investigations showed that activation of Rho-Rho kinase signaling induces cell contraction accompanied by phosphorylation of MLC2, F-actin formation, and α-SMA expression in HTM cells, and Rho-Rho kinase inhibitors lowered IOP by increasing aqueous outflow (11, 28-31). Notably, it was reported that a T104M mutation in IL-20 receptor 2 significantly impacts the function of this receptor, as shown by decreased pSTAT3 levels and generic matrix metalloproteinase activity (32). Considering the presence of IL-6 and sIL-6R in aqueous humor, the IL-6-STAT3 axis may contribute to the pathophysiology of glaucoma.
Our results showed that activation of IL-6 trans-signaling suppressed TGF-β2 signaling, namely Smad2 activation (canonical signaling pathway) and ERK and p38 activation (noncanonical signaling pathway). As knockdown of STAT3 decreased the inhibitory effects, this cross-talk between IL-6 and TGF-β2 depends on activation of STAT3. These results are consistent with previous studies indicating the inhibitory effects of STAT3 signaling on TGF-β signaling and tissue fibrosis (33-37). However, the cross-talk between IL-6 and TGF-β signaling is complicated, because STAT3 signaling was reported to be enhanced or required in TGF-β signaling and/or fibrogenic changes in some conditions (38-41). For example, Seong et al. (39) reported that IL-6 participated in TGF-β-induced trans-differentiation of human Tenon's fibroblasts to myofibroblasts. In addition, in lung cancer cells, JAK/STAT signaling is required for TGF-β-induced EMT (39). Currently, the reasons for this inconsistency are unclear, but it may be due to differences in cell types or the effects of other regulatory factors.
The molecular mechanisms underlying the inhibitory effects of IL-6 trans-signaling on TGF-β signaling in HTM cells are still under investigation.
Although not yet conclusive, we found a potential novel mechanism in HTM cells: activation of IL-6 trans-signaling caused down-regulation of TGF-β receptor expression. Moreover, the combined overexpression of TGFBR1 and TGFBR2 rescued the inhibitory effect of IL-6 trans-signaling on CAGA promoter activity, which is sensitive to active Smad2/3. Wang et al. (33) reported that STAT3 interacts directly with Smad3 in vivo and in vitro, resulting in attenuation of Smad3-Smad4 complex formation and suppression of the DNA-binding ability of Smad3 in HaCaT cells. Taken together, the cross-talk between the IL-6 and TGF-β signaling pathways may be mediated by multiple steps.
In conclusion, this study indicated that the aqueous humor of patients with POAG contains high levels of sIL-6R. The combination of IL-6 and sIL-6R activates STAT3 and down-regulates TGF-β receptors, and thereby inhibits TGF-β2-induced changes via down-regulation of canonical and noncanonical TGF-β signaling in HTM cells. Our results suggest that IL-6 trans-signaling is an important regulatory system of aqueous outflow.
Patients
The study to analyze cytokine levels in the aqueous humor was approved by the Institutional Review Board of Kumamoto University. All of the procedures conformed to the Declaration of Helsinki. Written informed consent was obtained from each patient. Patients aged ≥20 years who had undergone trabeculectomy for POAG were recruited. Cataract patients without systemic diseases (other than hypertension and hyperlipidemia), ocular diseases other than cataracts, a history of ocular surgeries, or IOP >21 mm Hg were included as controls. IOP was determined using a noncontact tonometer in cataract cases and a Goldmann tonometer in glaucoma cases. When both eyes of a patient met the inclusion criteria, only the eye treated first was included in the analysis. In all of the participants, the anterior eye segment was examined by glaucoma specialists using a slit-lamp biomicroscope, and all of the changes were recorded. The optic disc was evaluated through dilated pupils with a stereo fundus lens to diagnose glaucoma. The cataract patients and POAG patients coincided with those in our previous reports (16-18).
Sample collection
Preoperative aqueous humor was obtained at the start of the phacoemulsification surgery and/or trabeculectomy before any incisional procedures, as described previously (16). Briefly, aqueous humor was obtained gently at the start of surgery from the anterior chamber, through limbal paracentesis using a syringe with a 30-gauge needle attached. Approximately 70-100 µl was collected in CryoTubes, registered, and stored at −80 °C until processing.
Multiplex immunoassay analysis
Concentrations of IL-6 and sIL-6R in the aqueous humor samples and supernatant of HTM cells were determined using multiplex bead-based immunoassays, xMAP, and a human cytokine/chemokine panel (Luminex, Austin, TX), as described previously (16). Briefly, a 25-µl aliquot of aqueous humor sample or supernatant of HTM cells was transferred to a plate, and some of each aliquot was placed into one of the capture microsphere multiplexes. After incubation at 4 °C for 18 h, multiplexed cocktails of biotinylated reporter antibodies were mixed and then incubated at room temperature for 1 h. Multiplexes were developed using an excess of streptavidin/phycoerythrin solution. The solution was mixed into each multiplex and then incubated at room temperature for 30 min. Vacuum filtration was performed to reduce the volumes of the multiplexed reactions, and then the volume of each reaction was increased by dilution in matrix buffer. A Luminex 200 instrument (Luminex) was used for the analysis, and data were interpreted using proprietary data analysis software (DNASIS Plex, version 2.5; Hitachi Software Engineering, Tokyo, Japan).
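Quantification in such bead-based assays ultimately rests on interpolating each sample's fluorescence on a standard curve; here this step was handled internally by the vendor's software. Purely as an illustration of that step, the Python sketch below fits a generic four-parameter logistic (4PL) standard curve and inverts it; every number in it is an invented placeholder, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic 4PL standard curve, as commonly used to convert bead fluorescence
# intensity into concentration. All values below are invented examples.

def four_pl(x, a, b, c, d):
    # a: response at zero dose, d: response at saturating dose,
    # c: inflection point (EC50-like), b: slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([3.2, 10, 32, 100, 320, 1000])    # pg/ml (example standards)
std_mfi = np.array([55, 140, 390, 980, 2100, 3300])   # median fluorescence (example)

popt, _ = curve_fit(four_pl, std_conc, std_mfi, p0=[30, 1.0, 100, 4000], maxfev=10000)

def mfi_to_conc(mfi, a, b, c, d):
    """Invert the fitted 4PL curve to recover concentration from intensity."""
    return c * (((a - d) / (mfi - d)) - 1.0) ** (1.0 / b)

print("Estimated concentration (pg/ml):", mfi_to_conc(800.0, *popt))
```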
Cell culture
Primary HTM cells were obtained from ScienCell (Carlsbad, CA) and maintained in Trabecular Meshwork Cell Medium (TMCM; ScienCell) containing 5% fetal bovine serum (FBS) and supplements (undisclosed growth factors and antibiotics; ScienCell), according to the manufacturer's protocol, as described previously (42). Briefly, for serum-free conditions, cells were cultured in Dulbecco's modified Eagle's medium (DMEM; Wako) supplemented with 100 units/ml penicillin and 100 mg/ml streptomycin sulfate (Invitrogen) and GlutaMAX-I supplement (Life Technologies, Inc.). HTM cells were used between passages 6 and 8 in this study. After 24 h of serum starvation, cells were treated with TGF-β2, IL-6, and/or sIL-6R for 24 h.
Immunocytochemistry
Immunocytochemical analysis was conducted as described previously (42). Briefly, glass coverslips in 12-well plates were coated with gelatin for 30 min at room temperature and then washed with phosphate-buffered saline (PBS). After drying the plates, HTM cells were grown on gelatin-coated glass coverslips, starved of serum for 24 h, and then stimulated with TGF-β2, IL-6/sIL-6R, or TGF-β2 with IL-6/sIL-6R. After a 24-h period of stimulation, HTM cells were washed twice in PBS and then fixed in methanol at −20 °C for 15 min. After fixation, the cells were washed three times in PBS, permeabilized, and blocked with 3% FBS in PBS. Subsequently, the cells were incubated with primary antibody (anti-phospho-Smad3; 1:200; Cell Signaling) overnight at 4 °C, and then with Alexa-Fluor-conjugated secondary antibodies (1:1000 dilution; Life Technologies, Inc.) for 1 h at room temperature. Phalloidin-FITC (1:200 dilution; Life Technologies, Inc.) was used for F-actin staining. After cells were washed with PBS, they were mounted with VECTASHIELD mounting medium with 4′,6-diamidino-2-phenylindole (Vector Laboratories, Burlingame, CA), and the slides were observed under a fluorescence microscope (BX51; Olympus, Tokyo, Japan).
Western blot analysis
Western blot analysis was performed as described previously (42). Briefly, HTM cells were grown on 6-cm dishes, starved of serum for 24 h, and stimulated with TGF-β2 with or without IL-6 or IL-6/sIL-6R for 24 h. The cells were then washed three times with ice-cold PBS on ice and lysed with radioimmunoprecipitation buffer (ThermoFisher Scientific) containing protease inhibitors (ThermoFisher Scientific) and phosphatase inhibitors (Nacalai Tesque, Kyoto, Japan). Cell extracts were then centrifuged at 15,000 rpm for 10 min at 4 °C. Supernatants were collected, and the protein contents were determined using a BCA protein assay kit (ThermoFisher Scientific). Samples were resolved using SDS-PAGE and subsequently transferred onto polyvinylidene difluoride membranes by electroblotting.
Membranes were blocked with 5% skim milk (Nacalai Tesque) in TBS containing 0.1% Tween 20 (TBS-T) for 30 min at room temperature. After washing with TBS-T, they were incubated with primary antibodies diluted with 5% BSA (Wako) in TBS-T overnight at 4 °C. After washing three times for 5 min each time with TBS-T, the membranes were incubated with HRP-conjugated anti-rabbit IgG (1:2000 dilution; Cell Signaling Technology) or HRP-conjugated anti-mouse IgG (1:5000 dilution; GE Healthcare) for 1 h at room temperature. After washing three times for 5 min each time with TBS-T, signals were enhanced using a chemiluminescence system (ImmunoStar LD; Wako), ECL Prime or ECL Western blotting detection reagents (GE Healthcare), and exposed using an LAS-4000 EPUV Mini (Fuji Film, Tokyo, Japan) imager. Densitometry of immunoreactive bands was performed using ImageJ software (National Institutes of Health).
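The band quantification reported in the figures amounts to normalizing each phospho-protein band intensity to the corresponding total-protein (or β-actin) band from the same lane, then expressing the result relative to the untreated control. A minimal sketch of that bookkeeping is given below; the intensity values are placeholders standing in for ImageJ measurements, not the densitometry readings from this study.

```python
# Minimal bookkeeping for Western blot densitometry: phospho/total ratio per
# lane, then fold change relative to the control lane. Intensities are
# placeholder numbers standing in for ImageJ band measurements.

lanes = {
    "control":              {"pSmad2": 1200.0, "Smad2": 9800.0},
    "TGF-b2":               {"pSmad2": 8900.0, "Smad2": 9500.0},
    "TGF-b2 + IL-6/sIL-6R": {"pSmad2": 5200.0, "Smad2": 9700.0},
}

ratios = {name: bands["pSmad2"] / bands["Smad2"] for name, bands in lanes.items()}
control_ratio = ratios["control"]

for name, ratio in ratios.items():
    print(f"{name:22s} pSmad2/Smad2 = {ratio:.3f}  ({ratio / control_ratio:.2f}-fold vs control)")
```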
Luciferase assay
Luciferase assay was conducted as described previously (42). Briefly, transcriptional activity was assessed by transient transfection of a luciferase reporter gene fused with CAGA-12. As an internal control, a plasmid containing Renilla luciferase (pRL-TK; Promega, Madison, WI) was cotransfected. Transfection of HTM cells was performed at 80% confluence using GeneJuice transfection reagent (Merck Millipore, Billerica, MA) according to the manufacturer's protocol. HTM cells were seeded the day before transfection in 6-well plates. Twenty-four hours after transfection, the medium was changed to serum-free DMEM. Twenty-four hours later, cells were stimulated with TGF-β2 with or without IL-6/sIL-6R for 24 h.
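In such a dual-reporter setup, the CAGA-12 firefly signal from each well is divided by the Renilla signal from the same well to correct for transfection efficiency, and the result is usually expressed relative to the unstimulated control. The sketch below shows that arithmetic with invented well readings; the values are not measurements from this study.

```python
# Firefly/Renilla normalization for the CAGA-12 reporter assay.
# Readings are invented placeholders, not measurements from this study.

wells = [
    ("control",              1.0e4, 2.0e5),   # (condition, firefly, Renilla)
    ("TGF-b2",               7.1e5, 2.1e5),
    ("TGF-b2 + IL-6/sIL-6R", 4.0e5, 2.0e5),
]

normalized = {name: firefly / renilla for name, firefly, renilla in wells}
baseline = normalized["control"]

for name, value in normalized.items():
    print(f"{name:22s} relative CAGA-12 activity = {value / baseline:6.1f}-fold")
```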
RNA interference targeting STAT3
HTM cells were plated at 70% confluence before transfection. siRNAs targeting STAT3 were transfected into cells using Lipofectamine RNAiMAX (Invitrogen) according to the manufacturer's protocol. HTM cells transfected with siRNA were put into serum-free medium for 24 h and stimulated with TGF-β2 with or without IL-6/sIL-6R for 6, 12, or 24 h. The cell lysates were collected for Western blot analysis.
Statistical analysis
Each experiment was repeated a minimum of three times. Data were analyzed using the JMP version 8 statistical software package (SAS Institute, Cary, NC). All data represent the means of at least three independent experiments. Quantitative data were analyzed using the Tukey-Kramer honest significance difference test. In all analyses, p < 0.05 was taken to indicate statistical significance.
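The Tukey(-Kramer) comparison used here is available off the shelf outside JMP as well; a minimal Python sketch with made-up replicate values is shown below (statsmodels' pairwise_tukeyhsd handles unequal group sizes, which is the Tukey-Kramer case). The group labels and numbers are illustrative only.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative Tukey(-Kramer) HSD on made-up replicate measurements
# (e.g. normalized band intensities) for three treatment groups.
values = np.array([1.0, 1.1, 0.9, 1.0,          # control
                   3.2, 3.5, 3.0, 3.4,          # TGF-b2
                   1.8, 2.0, 1.7, 1.9, 2.1])    # TGF-b2 + IL-6/sIL-6R
groups = (["control"] * 4 + ["TGF-b2"] * 4 + ["TGF-b2+IL6/sIL6R"] * 5)

result = pairwise_tukeyhsd(values, groups, alpha=0.05)
print(result)  # pairwise mean differences, adjusted p-values, reject flags
```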
"year": 2018,
"sha1": "8f4c4db500744650ddd0af13dd6a3aebe3b210b3",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/293/28/10975.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "b2cb5f0d865794f7f310fe1c11a3eb3fb2aeda07",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
High Quality Pt–Pt Metal Bonding for High Temperature Packaging
Platinum is an ideal material for high-temperature resistant device packaging due to its high melting point and good electrical properties. In this paper, the thermocompression bonding of Pt–Pt metal electrodes was successfully realized through process exploration, and a package interconnection that meets the requirements was formed. A square bump with a side length of 160 µm and a sealing ring with a width of 80 µm were fabricated by magnetron sputtering. Different pressure parameters were selected for chip-level bonding; the bonding temperature was 350 °C for about 20 min. Analysis of the interface under a scanning electron microscope found that the metal Cr diffused into the Pt. It was found that two chips sputtered with 300 nm of metal Pt can achieve a shear resistance of up to 30 MPa by flip-chip bonding at a temperature of 350 °C and a pressure of 100 MPa. The leakage rate of the sample is less than 2 × 10−3 Pa·cm³/s, the bonding interface is relatively smooth, and hot-pressed metal bonding of Pt electrodes with good quality was realized. By comparing the failure rates at different temperatures and pressures, the process parameters for Pt–Pt bonding with higher success rates were obtained. We hope to provide new ideas and methods for the packaging of high-temperature resistant devices.
Introduction
Sensors working with graphene as a sensitive material have received extensive attention in recent years. The unique thermal properties, electrical properties, and high-temperature resistance [1][2][3] of graphene show a strong potential for enhancing performance and improving the reliability of devices operating at high temperatures and in harsh environments. However, the lack of reliable high-temperature packaging technology impedes the application of graphene in the field of high-temperature MEMS devices. Burla et al. [4] achieved nickel wire bonding for high-temperature packaging, and Ni wire bonds were found to be electrically stable for temperatures up to 550 °C. However, the high-temperature oxidation of nickel limits its practical application. On the other hand, Pt has almost perfect corrosion resistance, making it a better high-temperature encapsulation material than nickel. Brachmann et al. [5] demonstrated a Pt wire bonding method that can withstand an 1100 °C environment. The investigated Pt films were composed of a 50 nm thick e-beam evaporated Cr seed layer and an approximately 1 µm electrodeposited Pt film. This study proved the excellent prospects of Pt in the field of high-temperature packaging.
To the best of our knowledge, current high-temperature packaging mostly uses wire bonding technology, which can no longer meet the requirements of miniaturization, light weight, high performance, and low power consumption of modern electronic products [6]. Notably, the flip chip is clearly more in line with the future development trend of the electronics industry.
The common electronic packaging method is realized by forming flip chips [7,8] through thermocompression metallic bonding [9] or direct bonding [10]. The essence of metallic bonding is the mutual diffusion of atoms at the surfaces of two metals [9], which relies on metallic bonds, metal melting, and other factors in order to bond firmly. On the one hand, it combines the excellent properties of the material and, on the other hand, fully utilizes the benefits of the metal film to improve the photoelectric performance of the device. Direct bonding involves cleaning and activating the surface of the bonding sheet [11], directly bonding it at room temperature, and finally combining this with heat treatment to form an interconnected interface. The difference between the two methods is that metallic bonding involves the same kind of metal material, while direct bonding can be carried out with two different materials. Therefore, defects such as the dislocations generated in the process of metal bonding only exist near the metal bonding interface and do not extend into the entire material, and thus hardly affect the performance of the material before bonding. Since a thin oxide film forms on the metal surface in air, this oxide film blocks the mutual diffusion of atoms between the two metal surfaces and severely restricts atomic diffusion. Additionally, a dielectric layer cannot be added at the metal bonding interface. Metal bonding is usually achieved by heating and pressing and is different from the eutectic interface interconnection formed between different metals [12]. For example, Au-Au, Cu-Cu [13,14] and Al-Al [15,16] pairs make atomic-level contact under the simultaneous action of heat and pressure. Under the movement of atoms, the two layers of metal undergo diffusion, and the diffused atoms connect the two layers of metal together.
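The interdiffusion that underlies thermocompression bonding is commonly described by an Arrhenius temperature dependence, D = D0·exp(−Q/kBT), with a characteristic diffusion length on the order of 2√(Dt). The Python helper below only illustrates this estimate; the prefactor D0 and activation energy Q are placeholders that must be taken from the literature for the relevant transport path (at 350 °C, surface and grain-boundary diffusion are expected to dominate over lattice diffusion), so the printed number says nothing definitive about the present Pt films.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def diffusion_length_nm(d0_m2_s, q_ev, temp_c, time_s):
    """Rough Arrhenius estimate of the diffusion length 2*sqrt(D*t) in nanometres.

    d0_m2_s and q_ev are placeholders: take them from literature data for the
    diffusion path of interest (surface / grain boundary / lattice).
    """
    temp_k = temp_c + 273.15
    diffusivity = d0_m2_s * math.exp(-q_ev / (K_B_EV * temp_k))
    return 2.0 * math.sqrt(diffusivity * time_s) * 1e9

# Example call with placeholder parameters (NOT measured Pt values):
print(f"{diffusion_length_nm(d0_m2_s=1e-10, q_ev=1.2, temp_c=350, time_s=20 * 60):.2f} nm")
```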
In this paper, we have focused on the high-temperature packaging needs of MEMS devices [17][18][19] and used Pt with excellent performance at high temperatures as the bonding material [20,21] to explore the process required for its packaging and its performance after packaging.
Materials and Methods
There are three steps to realize the Pt-Pt interconnection: test sample design, fabrication of fine-pitch bumps followed by bump surface pretreatment, and finally Pt-Pt thermocompression bonding.
Test Sample Design
The top and bottom substrates of the bond are two chips of 6 × 6 mm² and 8 × 8 mm², respectively. The total bonding area is approximately 5.5 mm². The schematic diagram of the mask is shown in Figure 1. The outer side length of the sealing ring of the single repeating unit is 1.31 mm, the inner side length is 1.15 mm, and the side length of the small square of the bonding bump is 160 µm. The total mask structure measures 5 × 5 mm, and each structure contains 16 repeating units. The figure on the right of Figure 1a shows a single repeating unit, which is located on a substrate with a size of 8 × 8 mm. The figure on the right of Figure 1b shows a single repeating unit, which is located on a substrate with a size of 6 × 6 mm. The above shapes are fabricated on silicon substrates through photolithography, sputtering, and other processes. The lithography was performed using an MA6 lithography machine produced by SUSS MicroTec in Germany. The top and bottom substrates are bonded by thermocompression to ensure a firm connection between the sealing ring and the bump, forming a closed space.
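Translating the target bond pressure into a machine force setting requires summing the nominal contact area over the bumps and sealing rings of all repeating units. The Python sketch below reproduces that bookkeeping from the dimensions given above; the number of bumps per repeating unit is an assumption made only for illustration, so the computed area differs somewhat from the quoted ~5.5 mm² and the resulting force is indicative only.

```python
# Geometry bookkeeping for the bonded area and the press force needed to reach
# a target bond pressure. BUMPS_PER_UNIT is an assumed placeholder; the paper
# states only the total bonded area (~5.5 mm^2).

N_UNITS = 16
SEAL_OUTER_MM = 1.31
SEAL_INNER_MM = 1.15
BUMP_SIDE_MM = 0.160
BUMPS_PER_UNIT = 1          # assumption for illustration

seal_area = SEAL_OUTER_MM**2 - SEAL_INNER_MM**2   # sealing-ring area per unit, mm^2
bump_area = BUMPS_PER_UNIT * BUMP_SIDE_MM**2      # bump area per unit, mm^2
total_area_mm2 = N_UNITS * (seal_area + bump_area)

target_pressure_mpa = 100.0                       # 1 MPa = 1 N/mm^2
force_n = target_pressure_mpa * total_area_mm2

print(f"estimated bonded area : {total_area_mm2:.2f} mm^2")
print(f"force for {target_pressure_mpa:.0f} MPa : {force_n:.0f} N")
```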
Substrate Fabrication
The fabrication process of the top substrate is shown in Figure 2. First, the 400 µm thick silicon wafers were routinely cleaned, followed by ultrasonic cleaning with acetone, isopropanol, and water for 5 min each, and finally dried under N2 [ Figure 2a] [22]. A 300 nm thick SiNx passivation layer was first deposited on a 400 µm thick Si wafer by plasma-enhanced chemical vapor deposition (PECVD), and a mask structure was created on the SiNx passivation layer using a negative photoresist. Subsequently, the bottom Cr/Pt electrodes with thicknesses of 50 and 300 nm, respectively, were deposited on the SiNx layer by magnetron sputtering [ Figure 2b]. The electrode should not be too thick because an excessively thick bonding interface layer may cause the formation of microcracks and lead to poor bonding quality [23,24]. Finally, the negative adhesive peeling was completed in acetone to produce the top substrate. The process for fabrication of the bottom substrate is the same as that of the top substrate.
To remove the surface oxide film, both the top and bottom substrates were pretreated with an Ar plasma (containing a small amount of H2) at a gas flow rate of 250 sccm and a power of 200 W for 180 s. Once the metal surface is activated, heating and pressing increase the degree of atomic diffusion and bind the two structures tightly together. Bonding was carried out on an FC150 flip-chip bonder used for electronic packaging.
Pt-Pt Thermocompression Bonding
The relationship between bonding time and temperature is shown in Table 1. The present work uses four sets of bonding parameters to determine the parameter range in which good Pt-Pt bonding interconnections can be achieved. These four sets serve two purposes: to identify whether an interconnection interface can form after bonding, and to ascertain whether the bonded samples show good shear resistance within this pressure range. Parameters that both form an interconnected interface and provide a certain shear resistance are then selected. The only variable among the four sets is the pressure; the temperature is 350 °C and the bonding time is 1200 s in all cases. Because of the high melting point of platinum, it is more difficult to bond by thermocompression than metals such as gold and copper. Therefore, when flip-chip bonding is used for thermocompression, both the bonding temperature and the bonding time were chosen close to the upper limit of the instrument. From Figure 1a,b, the bonding area of the bonding pair can be calculated to be about 5.5 mm². According to the formula P = F/S, the pressure during thermocompression bonding is 30 MPa for case 1, 50 MPa for case 2, 80 MPa for case 3, and 100 MPa for case 4.
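As a quick cross-check of the P = F/S conversion above, the sketch below (ours, not from the original work) computes the bonder force corresponding to each target pressure, assuming the stated bonding area of about 5.5 mm².

```python
# Minimal sketch (assumption: effective bonded area of 5.5 mm^2) converting the
# target bonding pressures into the force set on the bonder via P = F/S.
BONDING_AREA_MM2 = 5.5

def required_force_newton(pressure_mpa: float, area_mm2: float = BONDING_AREA_MM2) -> float:
    """Force in N needed to reach pressure_mpa over area_mm2 (1 MPa x 1 mm^2 = 1 N)."""
    return pressure_mpa * area_mm2

for case, p_mpa in enumerate([30, 50, 80, 100], start=1):
    print(f"case {case}: {p_mpa} MPa -> {required_force_newton(p_mpa):.0f} N")
# The 30 MPa and 100 MPa cases correspond to the 165 N and 550 N forces quoted
# in the hermeticity section below.
```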
Results and Discussion
Four tests were performed to evaluate the bonding performance: interfacial analysis, shear strength analysis, hermeticity detection, and failure analysis.
Interfacial Analysis
After polishing at the 10 µm and 1 µm fine levels, the cross-sectional interface of Pt-Pt bonding under SEM can be clearly seen in Figure 3. It can be observed from the backscatter mode of the electron microscope that the bonding interface is relatively flat with no obvious cracks or gaps. This suggests that after the thermocompression bonding described in this article, the Pt-Pt metal electrodes are interconnected.
The thickness of the metal layer is measured to be about 700 nm, which is consistent with the 50 nm of Cr and 300 nm of Pt sputtered onto each substrate. Further, no obvious cracks were observed in the metal layer, and the whole surface tends to be smooth, forming a good interconnection interface. Interestingly, no obvious delamination of Cr and Pt resulting from the diffusion of Cr is observed in the metal region on either side.

Figure 5 shows the result of a line scan across the bonding interface, in which the content of three elements is analyzed along the longitudinal direction. Most of the Si is distributed on both sides of the metal layer, which agrees with the actual layer structure and indicates that thermocompression bonding does not cause the non-metallic surfaces to diffuse inward. Moreover, the Pt content is concentrated in the metal layer region, is densest in the middle, and is symmetric and uniform on both sides, which shows that the interconnection interface formed after thermocompression bonding has no displacement and that the Pt-Pt interface is well connected with few impurities. Additionally, the Cr is essentially concentrated between the Si layer and the Pt layer, consistent with the SEM image shown in Figure 4. The Cr layer is sandwiched between the metal layer and the dielectric layer as an adhesion layer; after thermocompression bonding, a small amount of Cr diffusion occurred between the Pt metal layers.

Figure 6a shows the distribution of silicon, the yellow dots in Figure 6b indicate the distributed Pt, and Figure 6c shows the two SEM images integrated. Through elemental surface scanning analysis, it can be observed that the bonded Pt remains within the bonding area with no dislocation or drift. In Figure 7, the green dots in Figure 7b represent the Si element, the red dots in Figure 7c the N element, the pink dots in Figure 7d the Cr element, and the yellow dots in Figure 7e the Pt element; Figure 7a is the SEM image into which the individual element maps are finally integrated. The element combinations (Si + Cr + Pt = 100) and (Si + N + Cr + Pt = 100) were selected for analysis. The layering of each element in the bonded sample is clearly visible, and no dislocation diffusion or element drift was observed.
Shear Strength Analysis

Figure 8 is a schematic diagram of the shear force test. The equipment used in the experiment is a Dage4000 bond tester, which can measure both bond shear force and tensile force. The sample is denoted by the blue part in the middle of the figure. After the lower substrate is fixed, the push tool moves horizontally until the upper and lower bonded substrates separate; the force acting on the push tool at the moment of separation is taken as the bond strength at that moment.

Based on theoretical considerations, increasing the pressure can enhance the bonding strength. Importantly, the applied pressure should not exceed what the substrate can withstand; excessive pressure may cause overflow of the sputtered metal or cracks in the substrate. The shear resistance of the effectively bonded samples in this experiment is shown in Figure 9. A total of 12 bonded samples were divided into four groups according to the bonding pressure: 30 MPa, 50 MPa, 80 MPa, and 100 MPa. Their shear resistance was in the ranges of 12.2-14.3 MPa, 15.1-17.6 MPa, 16.9-18.9 MPa, and 17.3-31.8 MPa, respectively. Within the range the substrate can withstand, the shear resistance is positively correlated with the bonding pressure and reaches a maximum of about 30 MPa as the pressure increases. When the bonding pressure is 100 MPa, the measured value corresponds to the maximum shear force the silicon wafer can withstand, and the average shear resistance reaches 25 MPa; the shear strength of the Pt-Pt bond therefore meets the standard.

Figure 10 shows microscope images of the Pt sealing ring before thermocompression bonding and of the debris after the shear force test. Shear test fractures are generally divided into three modes: IMC mode, solder mode, and mixed IMC/solder mode. Fractures in the IMC mode generally occur in the IMC layer. As can be observed from Figure 10, the shear test destroyed the bonding electrode and the sealing ring, and the fracture mode in this test is mainly the IMC mode [25,26], indicating that the shear resistance is mainly due to the force carried by the bonding electrode. This demonstrates that the bonding is effective. Of course, some bonded samples do not perform well in the shear test; this may be caused by uneven sputtering during the coating process due to experimental factors.
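For readers who want to relate the bond-tester readings to the quoted strengths, the following sketch (an illustration under the assumption of the nominal 5.5 mm² bonded area, not the authors' test script) converts a separation force into a shear strength and back.

```python
# Illustrative conversion between the separation force read from the bond tester
# and the shear strength, assuming the nominal bonded area of 5.5 mm^2.
BONDED_AREA_MM2 = 5.5

def shear_strength_mpa(force_n: float, area_mm2: float = BONDED_AREA_MM2) -> float:
    """Shear strength in MPa from the separation force in N (1 N / 1 mm^2 = 1 MPa)."""
    return force_n / area_mm2

def separation_force_n(strength_mpa: float, area_mm2: float = BONDED_AREA_MM2) -> float:
    """Separation force in N implied by a shear strength in MPa."""
    return strength_mpa * area_mm2

# e.g. the quoted maximum shear resistance of 31.8 MPa corresponds to roughly 175 N
print(f"{separation_force_n(31.8):.0f} N")
```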
Hermeticity Detection
According to the method defined by the inspection standard (GJB 548B-2005, method 1014.2), the purpose of the test is to determine the hermeticity of microelectronic and semiconductor device packages with internal cavities. The hermeticity test was carried out on the four groups of bonded pairs using a ZHP-30D helium mass spectrometer leak detector. Each sample is kept under a pressure of 4 × 10⁻⁵ Pa for 2 h, and the leakage rate is measured with the leak detector after the sample is taken out. The leakage rate technically required of the samples is less than 2 × 10⁻³ Pa·cm³/s, i.e., below the specified limit of 5 × 10⁻³ Pa·cm³/s. As shown in Table 2, the first group of samples (30 MPa/165 N, 350 °C, 1200 s) showed a minimum leakage rate of 3.3 × 10⁻⁴ Pa·cm³/s, a maximum of 9.8 × 10⁻⁴ Pa·cm³/s, and an average of 6.55 × 10⁻⁴ Pa·cm³/s, which is below the specified limit. Similarly, the second group (50 MPa/275 N, 350 °C, 1200 s) showed leakage rates of 5.9 × 10⁻⁴ Pa·cm³/s and 3.3 × 10⁻⁵ Pa·cm³/s, with an average of 3.115 × 10⁻⁴ Pa·cm³/s, also below the specified limit. The third group (80 MPa/550 N, 350 °C, 1200 s) showed a maximum of 2.77 × 10⁻⁵ Pa·cm³/s, a minimum of 1.37 × 10⁻⁵ Pa·cm³/s, and an average of 1.81 × 10⁻⁵ Pa·cm³/s. The fourth group (100 MPa/550 N, 350 °C, 1200 s) showed a maximum of 1.83 × 10⁻⁵ Pa·cm³/s, a minimum of 1.29 × 10⁻⁵ Pa·cm³/s, and an average of 1.48 × 10⁻⁵ Pa·cm³/s. In conclusion, the leakage rates measured for all four sets of bonding parameters are within the range required by the specification. It can be inferred from Figure 11 that as the bonding pressure increases, the average hermeticity of the samples gradually improves, and the average hermeticity of the successfully bonded samples meets the packaging requirements.

Figure 11. Influence of pressure on bonding leakage rate.
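The leak-rate comparison can be summarized programmatically; the sketch below simply tabulates the values quoted above and checks them against the specified limit (an illustration, not the measurement software).

```python
# Tabulating the reported leak rates per pressure group and checking them
# against the specified limit of 5e-3 Pa*cm^3/s (the paper's own tighter
# requirement is 2e-3). Values are copied from the text above.
SPEC_LIMIT = 5e-3  # Pa*cm^3/s
leak_rates = {     # pressure (MPa) -> (min, max, reported average)
    30:  (3.3e-4, 9.8e-4, 6.55e-4),
    50:  (3.3e-5, 5.9e-4, 3.115e-4),
    80:  (1.37e-5, 2.77e-5, 1.81e-5),
    100: (1.29e-5, 1.83e-5, 1.48e-5),
}
for p, (lo, hi, avg) in leak_rates.items():
    ok = hi < SPEC_LIMIT  # even the worst sample must stay below the limit
    print(f"{p:>3} MPa: avg {avg:.2e} Pa*cm^3/s, worst {hi:.2e} -> {'pass' if ok else 'fail'}")
```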
Failure Analysis
Owing to the high melting point and boiling point of Pt, the diffusion process is strongly restricted by temperature during flip-chip bonding. Since the maximum bonding temperature of flip-chip bonding in the bulk-silicon process does not exceed 400 °C, this work used bonding temperatures of 250 °C-350 °C and bonding pressures of 30 MPa-100 MPa to explore the influence of temperature and pressure on the bonding failure rate. Bonding failure may have a variety of causes, and it manifests either as the top and bottom substrates not adhering together or as the bonded pair failing tests such as the shear force and hermeticity tests. As shown in Figure 12, both higher bonding pressure and higher bonding temperature are associated with a lower failure rate. When the bonding temperature is 625 K and the bonding pressure is above 80 MPa, the failure rate is less than 0.4; under the same pressure conditions, when the temperature is below 625 K, the failure rate is greater than 0.5. This shows that the failure rate is strongly affected by the bonding temperature, which should optimally be above 625 K. When the bonding pressure is less than 50 MPa, the failure rate is more than 0.5, suggesting that the optimal bonding pressure for Pt-Pt bonding is more than 50 MPa.
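Since the bonding temperatures are given in degrees Celsius while the failure-rate threshold is discussed in kelvin, a small conversion helper (ours, not the paper's) may be useful.

```python
# Simple unit helpers for relating the Celsius bonding temperatures to the
# kelvin threshold discussed above.
def c_to_k(t_celsius: float) -> float:
    return t_celsius + 273.15

def k_to_c(t_kelvin: float) -> float:
    return t_kelvin - 273.15

print(c_to_k(350))   # 623.15 K
print(k_to_c(625))   # 351.85 °C
```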
Conclusions
The Pt-Pt metal interconnection was realized by flip-chip thermocompression packaging and achieved a relatively stable interconnection at a temperature of 350 °C and pressures above 80 MPa. A series of experiments was carried out to evaluate the bonding performance. The thickness of the metal layer did not change significantly after bonding, and there was no Pt overflow at the bonding interface. With a bonding force of 550 N applied for 20 min, the shear resistance reached up to 30 MPa. The shear experiments show that the fractured interface lies mostly at the silicon-metal layer, indicating that the bonded interconnection interface is stable and does not fracture easily, and that Pt-Pt metal bonding is a feasible packaging approach. The failure analysis shows that bonding pressure and temperature have an important influence on the failure rate, in line with the basic principles of metal welding. In addition, increasing the surface activity of the Pt metal interface may lower the required bonding temperature and pressure, and such approaches are worth exploring: the metal layer can be plasma-treated before bonding to improve its activity, or equipment capable of higher bonding temperature and pressure can be used to improve the bond quality. In short, Pt-Pt metal bonding is a packaging technique with great potential for future high-temperature environments.
"year": 2022,
"sha1": "3f6e98e0722bfb86d6edbc16ea2de5f648c6f816",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-666X/13/9/1543/pdf?version=1663419118",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8aa7a79f800359a70ba5b22b2c4f2f78c2de6cc8",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
LVPT: Lazy Velocity Pseudotime Inference Method
The emergence of RNA velocity has enriched our understanding of the dynamic transcriptional landscape within individual cells. In light of this breakthrough, we set out to integrate RNA velocity with cellular pseudotime inference, aiming to improve the prediction of cell orders along biological trajectories beyond existing methods. Here, we developed LVPT, a novel method for pseudotime and trajectory inference. LVPT introduces a lazy probability to indicate the probability that a cell stays in its original state and calculates the transition matrix based on RNA velocity to provide the probability and direction of cell differentiation. LVPT shows better or comparable performance in pseudotime inference compared with other existing methods on both simulated datasets with different structures and real datasets. The validation results were consistent with prior knowledge, indicating that LVPT is an accurate and efficient method for pseudotime inference.
Introduction
Cells are the fundamental building blocks of life, and how they grow and develop is a basic question in biology. With the development of high-throughput sequencing technology, single-cell sequencing can provide both mRNA expression values and spliced and unspliced gene expression values, which opens up new possibilities for studying cell development and evolution. However, due to the limitations of experimental technologies, continuous changes in gene expression values cannot be observed during cell differentiation. Therefore, inferring cell differentiation pseudotime and trajectories from discrete data is a growing research focus in single-cell analysis.
Numerous machine learning methods have been used to construct trajectories to tackle the aforementioned issue [1-6]. These methods can be classified into two distinct categories: dimensionality-reduction-based methods and K-Nearest Neighbor (KNN) graph-based methods. The methods based on dimensionality reduction include Monocle [7-10], Slingshot [11], Waterfall [12], and STREAM [13]. Monocle introduced a pioneering trajectory inference model that encompasses multiple steps, including preprocessing, dimensionality reduction, clustering, minimum spanning tree construction, principal curve fitting, and projection to form single-cell differentiation trajectory trees. Monocle's sensitivity to parameter settings can make it challenging to achieve consistent results across different datasets. Slingshot further extended the scope by allowing greater flexibility in the selection of dimensionality reduction and clustering algorithms. It used a Mahalanobis-like distance when constructing the minimum spanning tree; this approach, while innovative, can be sensitive to outliers and thus affect the overall trajectory structure. STREAM used an improved Locally Linear Embedding (LLE) algorithm for dimensionality reduction and then a greedy optimization process to infer the principal curve. The improved LLE algorithm in STREAM may introduce computational complexity, particularly when applied to extensive datasets. The KNN graph-based methods include DPT [14], scTDA [15], PAGA [16], and others. DPT used a random walk model to calculate the average access time as pseudotime and to detect branch structures. scTDA used Topological Data Analysis to build the cell fate graph. PAGA used an abstract graph to model the structure of cells. The KNN graph-based methods are sensitive to the choice of the k parameter; balancing the trade-off between capturing local versus global relationships is critical, and finding the optimal k value can be challenging.
Pseudotime inference has changed with the development of technology. At the early stage of single-cell analysis, most models could only make linear inferences. There are many methods based on various statistical models, such as DeLorean [17], which used the Gaussian Process Latent Variable Model, CSHMM [18], which used a Hidden Markov Model, and Wanderlust [19], which used the shortest distance on a graph to calculate pseudotime. With the continuous development of single-cell trajectory inference, pseudotime inference has gradually divided into two categories. One type is projection-based methods, which project cells onto a latent backbone and take the distances between cells and the root cell as pseudotime; typical examples are Monocle and Slingshot. The other type is random-walk-based methods, which sort cells by calculating the hitting time of random walks on the graph; typical methods are DPT, PAGA, VIA [20], and VPT [21]. PAGA used DPT to calculate pseudotime after constructing abstract graphs. The VIA method performed random walks at the clustering level and introduced the concepts of inertia and transport to increase robustness. The concept of RNA velocity [22] was proposed to integrate information about the transcription process, and VPT applied RNA velocity to pseudotime inference. There are some other methods for pseudotime inference, such as Waddington-OT [23] and LineageOT [24], which use optimal transport for time-series data, TinGa [25], which uses a neural network, and Topographer [26]. Although numerous methods have been proposed to address the challenges of pseudotime inference, the complex and multifaceted nature of single-cell data presents an ongoing frontier for exploration, and new models are needed to achieve better results.
The introduction of RNA velocity has taken research on pseudotime inference to a new stage. Although some algorithms such as VPT have been proposed, there is still room for further exploration. Here, inspired by VIA, we propose LVPT (Lazy Velocity Pseudotime), a novel pseudotime inference model. LVPT uses a Gaussian kernel function as the distance between cells and integrates RNA velocity into pseudotime inference to provide information about the direction of cell differentiation. A lazy probability is introduced in LVPT to indicate the probability that a cell remains in its state during the random walk due to cell division. In addition, we construct a complete single-cell trajectory inference model to analyze the overall cell evolution. Our experiments show that LVPT outperforms other methods in terms of accuracy on the simulated datasets, and the predicted cell evolution process is accurately recovered in the real datasets.
Overview
Currently, there are many KNN graph-based methods, such as VPT and VIA. Although these methods have achieved good results, they all have their own limitations. Our method, LVPT, addresses the following problems on the basis of existing methods. The first is to construct a biological model of the random walk, which provides a solid theoretical foundation for introducing the lazy probability. Compared with VPT, LVPT can better reproduce the process of cell change and has higher robustness. Compared with VIA, LVPT explains the lazy probability and the migration probability in the random walk from the perspective of cell division and differentiation. In our hypothesis, we did not consider gene mutations, so we abandoned the transport concept used in VIA to make the model more rigorous. Secondly, unlike VIA, which performs the random walk at the cluster level, LVPT performs the random walk at the cell level, which can utilize more information. Thirdly, after introducing the concept of velocity, we can use Markov processes to determine the root node of the trajectory instead of it being manually set by the user as in VIA.
Data Collection and Preprocessing
In LVPT, we use the mRNA expression matrix and the spliced and unspliced expression matrices of scRNA-seq data as the input. Spliced and unspliced counts can be obtained using the tools provided with RNA velocity [22]. The root cell can be provided by the user as prior knowledge or calculated from the expression data. We experimentally validated the method with datasets of the mouse pancreas [27] and mouse hippocampus [22]. Sequencing data need to be properly preprocessed. We set a minimum threshold for expression counts, then centered and normalized the data. By calculating the dispersion of each effective gene, we kept highly variable genes as the filtered gene set. Finally, we converted the expression data from counts to log-transformed values to facilitate further analysis. We set the minimum expression value to 20 and selected 2000 genes for the experiments in this paper.
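A minimal preprocessing sketch of these steps, written with scVelo's standard API, is shown below; the thresholds (20 counts, 2000 genes) are the ones stated above, while the example dataset call is only an illustration.

```python
# Preprocessing sketch using scVelo (assumption: scVelo's filter_and_normalize
# and moments cover the filtering, normalization, and log-transform steps
# described above; this is not the LVPT source code).
import scvelo as scv

adata = scv.datasets.pancreas()                       # example: mouse pancreas dataset
scv.pp.filter_and_normalize(adata,
                            min_shared_counts=20,     # minimum expression threshold
                            n_top_genes=2000)         # keep 2000 highly variable genes
scv.pp.moments(adata, n_pcs=30, n_neighbors=30)       # PCA + kNN smoothing of counts
```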
Data Simulation Using Dyngen
We used dyngen, a multi-modal simulation engine for studying dynamic cellular processes at single-cell resolution [28], to generate simulated data for testing the effectiveness of the algorithm. The simulated data contain 1000 cells and 100 expressed genes and were generated with the GillespieSSA2 method with the parameter τ = 1/12. Different simulated datasets were obtained by setting linear, bifurcating, and trifurcating trajectory backbones for the evaluation of LVPT. The simulated datasets contain spliced and unspliced expression read counts for calculating cell velocity. They also provide the ground-truth pseudotime, which can be used as a gold standard for comparison with the predicted pseudotimes.
Lazy Velocity Pseudotime Inference Model
We developed the LVPT model to infer cell states, including pseudotime and trajectory. LVPT consists of four modules: clustering, velocity estimation, pseudotime inference, and trajectory inference. For clustering, we used the Leiden [29] algorithm to obtain cell clusters as trajectory nodes. By modeling the kinetics of the gene transcription process, the relationship between the expression of unspliced and spliced mRNA can be used to infer the change in gene expression in the near future, known as cell velocity. We used the scvelo.tl.velocity [21] function to estimate cell velocity. The basic idea of velocity estimation is to model the transcription process of cells and, from the spliced and unspliced expression values, calculate the trend of gene expression change over a short period of time in the future. The root cell can then be inferred using the scvelo.tl.terminal_states [21] function. This function models dynamic cellular processes as a Markov chain whose transition matrix is computed from the velocity vector of each individual cell. Based on this Markov chain, cells are filtered into transient/recurrent cells using the left eigenvectors of the transition matrix and clustered into distinct groups of terminal states using the right eigenvectors of the transition matrix [21]. The root cells are obtained as stationary states of the transpose of the velocity-inferred transition matrix, given by the left eigenvectors corresponding to an eigenvalue of 1, i.e.,

$$\mu_{\mathrm{root}}\, \pi = \mu_{\mathrm{root}},$$

where $\mu_{\mathrm{root}}$ is the left eigenvector and $\pi$ is the velocity-inferred transition matrix in LVPT.
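The velocity-estimation and root-cell steps described above can be reproduced with the scVelo functions named in the text; the following sketch is one possible invocation (picking the arg-max of the root-cell probabilities is our simplification, not part of the method description).

```python
# Velocity estimation and root-cell inference with scVelo, following the
# functions named above; `adata` is the preprocessed AnnData object.
import scvelo as scv

scv.tl.velocity(adata)                         # per-gene velocities from spliced/unspliced data
scv.tl.velocity_graph(adata)                   # cell-cell transition graph from velocities
scv.tl.terminal_states(adata)                  # adds 'root_cells' and 'end_points' to adata.obs
root_cell = adata.obs['root_cells'].idxmax()   # simplification: take the most likely root
```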
The main contribution of this study is a new pseudotime inference method. We integrate the estimated velocity with the information from the spliced and unspliced data to infer pseudotime, making pseudotime inference more accurate. Finally, we also analyze the transition relationships between cell states at cluster resolution to construct cell differentiation trajectories. Partition-based graph abstraction (PAGA) is a reliable method for constructing the connections between cell states [16]. PAGA uses a statistical model to calculate the connection probability between clusters and constructs one or more undirected graphs. We first use PAGA to construct an abstract trajectory graph, then apply the average pseudotime of cell states to judge the transition direction between states, and finally construct a directed trajectory structure.
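The trajectory step can be sketched as follows: PAGA provides undirected cluster-level connectivities, and the mean pseudotime of each cluster then orients the edges. The pseudotime column name and the edge-weight threshold below are illustrative assumptions, not values taken from the paper.

```python
# Directed trajectory sketch: PAGA connectivities between clusters, oriented by
# mean pseudotime (assumptions: a 'leiden' clustering already stored in adata.obs
# and an 'lvpt_pseudotime' column produced by the pseudotime module).
import scanpy as sc

sc.tl.paga(adata, groups='leiden')                       # cluster-level abstract graph
mean_pt = adata.obs.groupby('leiden')['lvpt_pseudotime'].mean()

conn = adata.uns['paga']['connectivities'].tocoo()       # undirected weights between clusters
directed_edges = []
for i, j, w in zip(conn.row, conn.col, conn.data):
    if i < j and w > 0.05:                               # weight threshold is an assumption
        a, b = mean_pt.index[i], mean_pt.index[j]
        src, dst = (a, b) if mean_pt[a] <= mean_pt[b] else (b, a)
        directed_edges.append((src, dst, w))             # edge points from earlier to later
```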
Pseudotime Inference
Cell growth implies the sequential expression of different genes, including several intermediate steady stages [30,31]. We model this process using cell division and differentiation. Cell division means that a cell clones itself and its expression level remains in a steady stage; cell differentiation means that a cell evolves from one stage to another and its expression level moves to another steady stage. The process of cell division and differentiation is random, and we simulate it with a random walk: if a cell goes through cell division to the next step, its expression should not change; otherwise, it changes. The core of the random walk model is the transition probability matrix. We construct a reliable random walk model by introducing a lazy probability and cell velocity.

At the cell level, steady-stage expression is not a constant value; it can be affected by noise. We use a Gaussian function G to model the gene expression distribution. The transition probability from cell $i$ to cell $j$ in a steady state is defined as

$$P_1(i,j) = G\bigl(x_j;\, x_i,\, \sigma_i^2\bigr) \propto \exp\!\left(-\frac{\lVert x_j - x_i \rVert^2}{2\sigma_i^2}\right),$$

where $G$ denotes the Gaussian function and $P_1$ represents the probability that the cell stays in its current state, not through differentiation but through self-proliferation or external influences that alter its expression. The center of the function is at the expression value $x_i$ of the current cell $i$. The width $\sigma_i$ is the Euclidean distance in expression space between cell $i$ and its $k$th neighbor, where $k$ is the parameter used when generating the neighbor network. In this way, each cell possesses a distinct Gaussian kernel that preserves information within the cell's local neighborhood. We define the transition probability during differentiation as

$$P_2(i,j) = \max\bigl(\delta_{ij},\, 0\bigr), \qquad \delta_{ij} = \cos\bigl(v_i,\, x_j - x_i\bigr),$$

where $\delta_{ij}$ is the cosine between the cell velocity $v_i$ and the vector from cell $x_i$ to cell $x_j$, i.e., the component of the cell velocity along the direction from the current cell $i$ to the target cell $j$. The probability $P_2$ expresses that if $\delta_{ij}$ is positive, the closer cell $j$ lies to the velocity direction $v_i$, the greater the probability of migrating to it; if $\delta_{ij}$ is negative, the target cell lies opposite to the differentiation direction, the walk should not move to cell $j$, and the transition probability is 0. The complete transition probability of a cell is defined as

$$P(i,j) = \alpha\, P_1(i,j) + (1-\alpha)\, P_2(i,j),$$

where $\alpha$ is the lazy probability, i.e., the probability that the cell takes the next step through division. After normalization we obtain the cell-to-cell transition probability matrix

$$T(x,y) = \frac{P(x,y)}{\sum_{z} P(x,z)},$$

where $x$ and $y$ are cells. The original diffusion map publication [32] pointed out that there is a symmetric matrix $\tilde{T}$ with the same eigenvalues as the asymmetric matrix $T$:

$$\tilde{T} = D^{1/2}\, T\, D^{-1/2}, \qquad D = \operatorname{diag}\Bigl(\textstyle\sum_{z} P(x,z)\Bigr).$$

After determining the transition probability matrix, we can calculate the pseudotime $t = \{t_1, \ldots, t_n\}$ with a random walk [14]; LVPT uses the same random walk computation as DPT. $f(t)$ is defined as the probability of reaching each cell from $f(0)$ after $t$ steps:

$$f(t) = f(0)\, \tilde{T}^{\,t}.$$

By summing these probabilities, we obtain the (time-independent) "path integral" for reaching each cell from $f(0)$:

$$\sum_{t=1}^{\infty} f(0)\, \tilde{T}^{\,t}.$$

$f(0)$ is initialized by the root cells. By removing the stationary component to make the sum above converge, we obtain a new matrix $M$:

$$M = \sum_{t=1}^{\infty} \bigl(\tilde{T} - \psi_1 \psi_1^{\top}\bigr)^{t} = \bigl(I - \tilde{T} + \psi_1 \psi_1^{\top}\bigr)^{-1} - I,$$

where $\psi_1$ is the eigenvector of $\tilde{T}$ associated with its largest eigenvalue. If the random walk starts at cell $x$, $f(0)M$ is a row of $M$, which we denote $M(x,\cdot)$. We then define the lvpt measure as

$$\mathrm{lvpt}(x,y) = \bigl\lVert M(x,\cdot) - M(y,\cdot) \bigr\rVert.$$

The pseudotime of cell $x$ from root $r$ is $\mathrm{lvpt}(x, r)$.
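To make the construction concrete, the sketch below re-implements the transition matrix and the lvpt measure in NumPy following the equations above. It is a simplified illustration (dense matrices, a numerical symmetrization of the similarity-transformed matrix, and the DPT-style accumulated matrix M), not the authors' implementation.

```python
# Simplified NumPy illustration of the lazy-velocity transition matrix and the
# lvpt pseudotime; X: (n, g) expression vectors, V: (n, g) cell velocities,
# sigma: (n,) per-cell kernel widths, root: index of the root cell.
import numpy as np

def lvpt_pseudotime(X, V, sigma, root, alpha=0.05):
    n = X.shape[0]
    diff = X[None, :, :] - X[:, None, :]                   # diff[i, j] = x_j - x_i
    sq_dist = (diff ** 2).sum(-1)

    P1 = np.exp(-sq_dist / (2.0 * sigma[:, None] ** 2))    # steady-state (division) part
    cos = (diff * V[:, None, :]).sum(-1) / (
        np.linalg.norm(diff, axis=-1) * np.linalg.norm(V, axis=1)[:, None] + 1e-12)
    P2 = np.clip(cos, 0.0, None)                           # differentiation part, negatives -> 0

    P = alpha * P1 + (1.0 - alpha) * P2                    # lazy mixture of both parts
    d = P.sum(axis=1)
    T = P / d[:, None]                                     # row-normalized transition matrix
    T_sym = np.diag(np.sqrt(d)) @ T @ np.diag(1.0 / np.sqrt(d))
    T_sym = (T_sym + T_sym.T) / 2.0                        # numerical symmetrization (assumption)

    evals, evecs = np.linalg.eigh(T_sym)
    psi1 = evecs[:, np.argmax(evals)][:, None]             # eigenvector of the largest eigenvalue
    M = np.linalg.inv(np.eye(n) - T_sym + psi1 @ psi1.T) - np.eye(n)

    return np.linalg.norm(M - M[root], axis=1)             # lvpt(x, root) for every cell x
```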
Evaluation Metrics
Saelens et al. established a comprehensive evaluation framework [33] for single-cell trajectory inference. Based on this framework, we measure the correlation between the estimated pseudotime and the true value using the Spearman correlation coefficient. The testing datasets generated with the single-cell simulation method provide the real time of each cell. If the real time is denoted $t_r$ and the inferred pseudotime $t$, the Spearman coefficient can be defined as

$$\rho(t, t_r) = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n\,(n^2 - 1)},$$

where $d_i$ is the difference between the ranks of cell $i$ in $t$ and in $t_r$, and $n$ is the number of cells. The Spearman correlation coefficient indicates the correlation between $t$ and $t_r$. When the real time and the inferred pseudotime are used as its arguments, the Spearman coefficient serves as the accuracy measure of the result: the closer the accuracy is to 1, the better the inferred result of the model.
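In practice the accuracy can be computed directly with SciPy; the snippet below is a toy example with made-up values.

```python
# Computing the accuracy described above with SciPy's spearmanr, given arrays
# of true simulation times and inferred pseudotimes (toy values for illustration).
import numpy as np
from scipy.stats import spearmanr

true_time = np.array([0.0, 0.1, 0.25, 0.4, 0.7, 1.0])
pseudotime = np.array([0.02, 0.15, 0.2, 0.45, 0.65, 0.9])
accuracy, _ = spearmanr(true_time, pseudotime)
print(f"Spearman accuracy: {accuracy:.3f}")
```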
The Hamming-Ipsen-Mikhailov (HIM) metric is used to compare the trajectories. Conceptually, the HIM metric is a linear combination of the normalized Hamming distance and the normalized Ipsen-Mikhailov distance. The Hamming distance compares two graphs by matching individual edges in the adjacency matrix but disregards overall structural similarity. The Ipsen-Mikhailov distance measures the overall distance between two graphs based on their degree and adjacency matrices while disregarding local structural similarities. It requires a parameter, which we fixed at 0.1 to make the score comparable across different graph sizes.
Overview
As shown in Figure 1, the input into the LVPT model includes the gene expression matrix, a given starting node, and the spliced and unspliced gene expression matrices from scRNA-seq data. The LVPT model consists of four modules: clustering, velocity estimation, pseudotime inference, and trajectory inference. We developed a new method for the pseudotime inference module. In this module, we use cell velocity and the gene expression matrix to calculate pseudotime during cell growth and development, the Leiden algorithm to cluster the cells, scVelo for velocity estimation, and the PAGA method to obtain the connection relationships between clusters, which are combined with pseudotime to determine the direction of trajectory connections.
Evaluation of the LVPT Model on Simulated Datasets
To verify the effectiveness of the LVPT model, we used simulated datasets to compare the accuracy of LVPT with other methods. First, we used dyngen to generate simulated data with different trajectory structures, including a linear structure, a bifurcating structure, and a trifurcating structure. Then, we used the dynmethods [33] library to call the different algorithms, including LVPT, VPT, VeTra [34], CellPath [35], PAGA, Monocle2, Slingshot, DPT, and TSCAN. Five simulation datasets were generated for each structure. Using these datasets, we tested each method under the same experimental conditions. The average Spearman correlation between pseudotime and the ground truth over the five simulated datasets was used as the accuracy to evaluate the performance. The predicted accuracy and HIM distance are shown in Table 1. As can be seen from Table 1, the accuracy and HIM distance of LVPT are slightly better than those of the other methods, especially in the case of complex branch structures. The results of LVPT outperform DPT because we introduced the cell velocity into the transition matrix, which integrates information about the transcriptional state of the cell and helps to judge the differentiation state of cells better. The accuracies of LVPT and VPT are similar, but our method employs a hyperparameter α to control the proportion of the steady probability. Figure 2 shows that tuning this hyperparameter yields better accuracy, indicating that adding the steady probability can improve the performance of pseudotime inference. However, when α is too high, the differentiation part of the transition probability formula no longer has an effect, and the accuracy is significantly reduced (see Figure 2). Thus, we recommend a reasonable range of 1-10% for the hyperparameter [20].
Performance Evaluation of LVPT on Real Datasets
The pancreatic tissue dataset was derived from mouse pancreatic endocrine cells. After processing the data, we obtained 3696 single-cell data points and 2000 genes as feature attributes.
After PCA processing, the 30 top-ranked principal components were selected as the features after dimension reduction, and the nearest-neighbor network was constructed by setting the number of neighbors to 30 according to the data scale. We used the Leiden clustering algorithm with the resolution parameter set to 0.54, and the pseudotime inference hyperparameter α was equal to 0.05. The results are shown in Figure 3. After clustering, the cells were divided into nine clusters. It can be seen from the pseudotime diagram and trajectory diagram that the cells started to develop from cluster 2 and differentiated into clusters 3, 5, and 8. This result indicates that pancreatic endocrine cells start to differentiate from pancreatic ductal epithelial cells, go through the endocrine progenitor stage, gradually differentiate into proendocrine cells, and finally differentiate into glucagon-producing alpha cells, insulin-producing beta cells, somatostatin-producing delta cells, and ghrelin-producing epsilon cells. We performed differential analysis on the clustering results to identify the genes with the highest differential expression in each category and arranged them in the order of the trajectories.
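For reference, the pancreas settings above map onto the standard scanpy calls roughly as follows (a sketch; the lvpt_pseudotime call refers to the illustrative function earlier in this section).

```python
# Pancreas run settings expressed with standard scanpy calls (illustrative).
import scanpy as sc

sc.pp.pca(adata, n_comps=30)                      # 30 top-ranked principal components
sc.pp.neighbors(adata, n_neighbors=30, n_pcs=30)  # neighbor graph with 30 neighbors
sc.tl.leiden(adata, resolution=0.54)              # nine clusters at this resolution
# pseudotime with the lazy probability used here:
# pt = lvpt_pseudotime(X, V, sigma, root, alpha=0.05)
```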
We applied LVPT to the mouse hippocampus dataset, which has a larger data volume and a more complex topological structure. The mouse hippocampus dataset has 18,213 cell samples, and 2000 genes were selected as sample features after preprocessing. According to the sample size, we set the number of neighbors to 100 to construct the neighbor networks. We set the resolution parameter of the Leiden algorithm to 0.85 and obtained 12 clusters. α was set to 0.05 to calculate the pseudotime distribution and cell trajectory.
As shown in Figure 4c, cells start to differentiate from cluster 4, the intermediate progenitor cells, and move toward three evolutionary directions. First, through narrow channels, intermediate progenitor cells differentiate into oligodendrocyte precursors (cluster 11). Second, through an intermediate radial glial transformation, intermediate progenitor cells differentiate into astrocytes (cluster 3). Third, intermediate progenitor cells evolve into Nbl2 cells (cluster 7). In the third direction, the differentiated Nbl2 cells face another fate choice, namely to differentiate into dentate gyrus granule neurons (cluster 9) or CA cells (cluster 1). CA cells then differentiate into CA1-CA4 and subiculum cells (clusters 5, 6, 10), while dentate gyrus granule neurons develop into mature cells after passing through two intermediate states. Our results are consistent with major functional and anatomical subdivisions of the hippocampus, validating the effectiveness of LVPT [36].
Through gene enrichment analysis, we found the axon guidance signaling pathway (KEGG number mmu04360) and analyzed the distribution of its related genes Gng12, Sema5a, and Sema3c. As shown in Figure 4e, these genes have different regulatory roles: the Gng12 gene is mainly distributed in the glial cell area, the Sema5a gene is mainly related to the formation of dentate gyrus granule neurons, and Sema3c mainly affects the initial CA cells and dentate gyrus granule neurons.
Discussion
The development of single-cell sequencing technology has played an important role in understanding the life course of cells. With the continuous development of the technology, the potential of single-cell sequencing data is constantly being explored. Early studies directly used dimension reduction methods or topological distances for trajectory inference on mRNA expression data. As research deepened, more complex and improved models have been proposed, and data generated during the transcription process have also been mined and integrated into RNA velocity, providing us with more useful information. LVPT presents a simple and efficient method for trajectory and pseudotime inference. It incorporates transcription dynamics into pseudotime inference while introducing lazy factors to mimic the effects of cell division. We compared LVPT with other methods in pseudotime inference using simulated datasets with different trajectory structures and demonstrated that our model had equivalent or better performance. We also applied LVPT to two real datasets of mouse pancreas and mouse hippocampus, and the pseudotime inferred by LVPT is consistent with prior knowledge, indicating that LVPT is an accurate and effective method for trajectory and pseudotime inference.
Figure 1. Overall workflow of the LVPT model.
Figure 2. Variation in the accuracy of trifurcating simulated data under different α.
The hyperparameter α was set to 0.05. The results are shown in Figure 3. After clustering, the cells were divided into nine clusters. It can be seen from the pseudotime diagram and trajectory diagram that the cells started to develop from cluster 2 and differentiated into clusters 3, 5, and 8.
Figure 3. Application of LVPT to the pancreas dataset. (a) Scatter plot of the Leiden clustering result. (b) Scatter plot of pseudotimes inferred by LVPT. (c) Trajectory plot with clustering labels. The width of the edge indicates the weight between the clusters. (d) Histogram of enrichment analysis results. Different colors represent different categories. (e) Heatmap of differential genes. Genes are ordered by trajectory, and expression is ordered by pseudotime. (f) Spliced and unspliced expression plots, velocity plots, and mRNA expression plots of Abcc8 and Gnas.
Figure 4. Application of LVPT to the hippocampus dataset. (a) Scatter plot of the clustering result. (b) Scatter plot of the pseudotime inferred by LVPT. (c) Trajectory result plot. The evolution of hippocampal cells starts from cluster 4 and moves toward 5 evolution directions. (d) Histogram of enrichment analysis results. Different colors represent different categories. (e) Spliced and unspliced gene expression plot, velocity plot, and mRNA expression plot of Gng12, Sema5a, and Sema3c.
Table 1. Performance comparison of LVPT with other methods in different structure databases. The values in the table represent the accuracy of different methods in different datasets. | 2023-08-16T15:14:18.098Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "7c86123530004e03c19abf578beb05ffcd1f0722",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-273X/13/8/1242/pdf?version=1691832867",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fcc7ca6469bc7a42e79e44bb615dba87e3e9a1ba",
"s2fieldsofstudy": [
"Computer Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18684557 | pes2o/s2orc | v3-fos-license | Comparison of Smith-Petersen osteotomy, pedicular subtraction osteotomy, and poly-segmental wedge osteotomy in treating rigid thoracolumbar kyphotic deformity in ankylosing spondylitis: a systematic review and meta-analysis
Background This study aimed to compare Smith-Petersen osteotomy (SPO), poly-segmental wedge osteotomy (PWO) and pedicular subtraction osteotomy (PSO) in patients with rigid thoracolumbar kyphosis primarily caused by ankylosing spondylitis. The efficiency, efficacy and safety of these three osteotomies have not been compared systematically, and no illness-oriented strategy for selecting the surgical type in ankylosing spondylitis-related non-angular kyphosis has been reported. Methods The inclusion and exclusion criteria were defined, and 19 electronic databases were searched for eligible studies without language limitations. For the included studies, data extraction, bias analysis, heterogeneity analysis and quantitative analysis were performed to analyze the correction of kyphosis and the incidence of complications. Results Nine comparative studies that met the standards were included, with a total of 539 patients who underwent SPO (n = 120), PWO (n = 119), or PSO (n = 300). The correction of kyphosis by PSO was 8.74° [95 % CI: 0.7-16.78] greater than that by SPO. The correction of kyphosis by PWO was 13.88° [95 % CI: 9.25-18.51] greater than that by SPO. For local biomechanical complications, the pooled risk ratio of PWO to PSO was 1.97 [95 % CI: 1.03-3.77]. Blood loss in PSO was 806.42 ml [95 % CI: 591.72-1021.12] greater than in SPO and 566.76 ml [95 % CI: 129.80-1003.72] greater than in PWO. Conclusions To treat rigid thoracolumbar kyphosis, PSO showed higher efficiency and efficacy than SPO, and PWO had higher efficacy than SPO. The risk of local biomechanical complications was greater in PWO than in PSO. Bleeding was more severe in PSO than in SPO or PWO. The incidence of neural complications and systemic complications was similar.
Background
Ankylosing Spondylitis (AS), a type of chronic disease that involves the axial skeleton, causes severe thoracolumbar kyphotic deformity (TKD). AS makes it difficult for patients to see forward, stand straight and maintain a comfortable posture. Some patients even suffer from dyspnea or other serious complications due to chest compression [1,2]. Therefore, it is necessary to perform corrective surgeries to help restore spine curvature and visual function [3,4].
To treat non-angular kyphosis, there are two categories of frequently used surgery. The first, called opening osteotomy (OO), is characterized by "opening" of the anterior column. The Smith-Petersen [5] osteotomy and its modified versions developed by Chapelle [6], Briggs [7], Wilson [8] and Simmons [2] are common choices. SPO only works on one or two segments, so the anterior longitudinal ligament (ALL) and the aorta may be ruptured under highly concentrated stress. In 1982, Zielke [9] increased the segments to three or more. The "elongated SPO" allocates the stress to each segment evenly and is called poly-segmental wedge osteotomy.
The second category is "closing" the posterior column by tri-column osteotomy within one vertebra and is thus termed the closing osteotomy (CO), Thomassen Osteotomy or pedicular subtraction osteotomy; it was first described by Thomassen [10] in 1986. Modified versions of this procedure, such as the "egg-shell" osteotomy and transpedicular bivertebrae wedge osteotomy, are classified as closing osteotomies.
Both opening and closing osteotomies are effective in the treatment of non-angular kyphosis. Several non-comparative clinical trials have attempted to describe the efficacy of kyphosis correction and the associated risks, but many controversies remain. Until now, there has been neither a randomized controlled trial (RCT) nor a quantitative meta-analysis on this subject, so the evidence is insufficient to determine which strategy is better. The level of the available evidence does not exceed level 3 according to the "Oxford 2011 Levels of Evidence" [11], which was established by the OCEBM Levels of Evidence Working Group.
It is important to realize that simply summarizing each individual study without weighting the studies or adjusting them to a common baseline, as was done in some articles, is unacceptable. A meta-analysis of pairwise comparative studies weights each study by its quality and effectively solves the baseline problem. To serve this purpose, raise the level of evidence, and highlight outcomes that only become apparent when studies are pooled, the authors performed a meta-analysis comparing Smith-Petersen osteotomy, poly-segmental wedge osteotomy and pedicular subtraction osteotomy in terms of efficacy (overall correction of kyphosis), efficiency (correction per level), and safety (complications) in kyphosis correction.
Inclusion and exclusion criteria
Studies meeting all inclusion criteria but none of the exclusion criteria were enrolled. The criteria consisted of 4 parts: type of intervention, type of study, type of participant and type of outcome. A comparison between opening osteotomy and closing osteotomy was acceptable. Controlled studies were eligible. Blindness and allocation concealment were not restricted. Rigid thoracolumbar kyphosis caused by AS or other diseases was eligible. Neither non-rigid nor non-thoracolumbar kyphosis was included. Studies containing patients older than 80 years old or who had an accompanying severe systemic disease like organ failure, malignant tumors or psychosis were excluded, and studies in which patients did not sign consent forms were also excluded. Correction of kyphosis and the incidence of complications were the outcomes of interest. Correction of kyphosis included local correction and that of the whole spine. All information about complications had to be included.
Search
Without language restrictions, the first two authors filtered articles using the keywords osteotomy, osteotomies, ankylosing spondylitis, rigid, fixed, kyphotic, kyphosis, deformity, deformities, thoracic, lumbar, thoracolumbar, sagittal, imbalance, correction, and corrective in databases including PubMed, Web of Science, Journal Citation Reports (JCR), Derwent Innovations Index, BIOSIS Previews, MEDLINE, Essential Science Indicators (ESI), EMBASE, OVID, ACP Journal Club, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Cochrane Methodology Register, Database of Abstracts of Reviews of Effects, Health Technology Assessment, NHS Economic Evaluation Database, China National Knowledge Infrastructure (CNKI), VIP, and Wanfang Data. The third author adjudicated any disagreements. The search covered publications up to 27 July 2015.
Data extraction and assessment of study quality
The authors extracted the following information independently and contacted the original authors in case some critical data were found to be lacking: date, study type, patient quantity, gender, age, follow-up period, osteotomy type, operative segments, correction of kyphosis, type and incidence of complications. The second step was the assessment of their quality by means of Cochrane Collaboration's Tool for Assessing Risk of Bias [12] for RCT, and The Newcastle-Ottawa Scale (NOS) [13] for non-RCT. Subgroup analysis and sensitivity analysis were performed if necessary.
Heterogeneity and quantitative analysis
Heterogeneity analysis was used to evaluate whether the differences between studies were large or small, and the Chi² and I² statistics were used in this step. A p value from the Chi-square test of less than 0.05 and an I² greater than 50 % were considered substantial. Only comparative studies were pooled. Fixed-effect and random-effects models were used to merge homogeneous and heterogeneous data, respectively. Subgroup analysis was performed if necessary. The level of the test (α) was set at 0.05. Continuous variables were merged by the generic inverse-variance method. The risk ratio (RR) of dichotomous variables was merged by the Mantel-Haenszel method. All analyses were performed using Review Manager (Version 5.3) [14] software from the Cochrane Collaboration.
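For orientation, the following is an illustrative sketch (not the authors' code, which used Review Manager) of how a continuous outcome such as the correction angle can be pooled by inverse-variance weighting, with Cochran's Q, I² and a DerSimonian-Laird random-effects estimate; the per-study numbers in the example are hypothetical.

```python
# Hedged illustration of inverse-variance pooling of a mean difference.
import numpy as np

def pool_mean_difference(md, se):
    """md, se: per-study mean differences (degrees) and their standard errors."""
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2                                  # fixed-effect weights
    md_fixed = np.sum(w * md) / np.sum(w)

    # Heterogeneity: Cochran's Q and I^2
    q = np.sum(w * (md - md_fixed) ** 2)
    df = len(md) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # DerSimonian-Laird between-study variance and random-effects pooling
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)
    md_re = np.sum(w_re * md) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    ci = (md_re - 1.96 * se_re, md_re + 1.96 * se_re)
    return md_fixed, md_re, ci, q, i2

# Three hypothetical studies comparing PSO with SPO
print(pool_mean_difference(md=[10.2, 6.5, 9.1], se=[2.1, 3.0, 2.5]))
```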
Search results
The electronic database search and an additional hand search initially yielded 116 citations. All full texts were downloaded from the original database. Of these, 48 did not meet the criteria of participants, 35 were not the correct study type, and 22 did not meet the intervention criteria. Only 9 [15][16][17][18][19][20][21][22][23] papers were used for the final analysis ( Fig. 1). Of these 9 included papers, 8 were retrospective cohort studies and 1 [15] was a conference article. All were published studies, and none were ongoing.
Data extraction and bias analysis
The 9 studies included 539 patients and 3 types of osteotomies: SPO (n = 120), PWO (n = 119) and PSO (n = 300), without the involvement of other modified surgical types such as the "egg-shell" osteotomy. The SPO cases consisted of 111 single-level and 9 double-level procedures, whereas each PSO was performed at a single level. Four studies [17,19,20,23] reported that PSO was used more often to treat patients with severe stiffness.
The descriptive data of the studies are shown in Table 1. Local biomechanical complications were defined as biomechanical imbalance or instability of the local bone-instrument complex, such as vertebral body translation, pedicle fracture, pedicle screw loosening, instrument breakage, nonunion, and anterior cortex fracture. Neural complications included transient and permanent injury of the nerve roots and spinal cord. Others, such as superior mesenteric artery syndrome, abdominal compartment syndrome, paralytic ileus, intestinal perforation, dyspnea, pneumonia, cardiac infarction, visual field defect and infection, were classified as systemic complications. The Newcastle-Ottawa Scale was applied to assess whether the biases were too large to combine. The result was encouraging because each study received at least 4 points (moderate quality). The results are presented in Table 2.
Table 2 (fragment). Selection domain of the Newcastle-Ottawa Scale: the representativeness of the exposed cohort was not described in the included studies, while the non-exposed cohort was drawn from the same population in each study.
Local biomechanical complications
Seven articles reported local biomechanical complications (Fig. 3). Blood loss (Fig. 4) was more severe in closing osteotomy than in opening osteotomy.
Dural tear, neural complications, and systemic complications
Seven articles reported the incidence of dural tear, neural complications, and systemic complications. As shown in Fig. 5, the pooled RR of dural tear was 1.91 [95 % CI: 1.04-3.51], and the p value was less than 0.05 (I² = 0). The RR of systemic complications was 1.46 (not significant, I² = 11 %). The RR of neural complications was 0.6 (not significant, I² = 0). The authors attempted to separate radicular and spinal injuries to perform subgroup analysis but failed due to a shortage of suitable studies.
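As a companion to the pooled ratios above, here is a hedged sketch of the Mantel-Haenszel pooling of risk ratios named in the methods; the 2×2 counts are hypothetical and are not the extracted study data.

```python
# Mantel-Haenszel pooled risk ratio from per-study 2x2 tables (illustrative only).
import numpy as np

def mantel_haenszel_rr(a, n1, c, n0):
    """a, c: event counts; n1, n0: arm sizes (e.g. opening vs closing osteotomy)."""
    a, n1, c, n0 = (np.asarray(x, float) for x in (a, n1, c, n0))
    N = n1 + n0
    return np.sum(a * n0 / N) / np.sum(c * n1 / N)

# Three hypothetical studies: dural tears in the two arms
print(round(mantel_haenszel_rr(a=[4, 3, 5], n1=[20, 25, 30],
                               c=[2, 2, 2], n0=[22, 24, 28]), 2))
```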
Discussion
Four of the pooled studies used opening osteotomy (SPO or PWO) to treat the mildly rigid spine and closing osteotomy (PSO) for the severely rigid spine, because the anterior column of a mildly rigid spine can easily be bent backwards in the process of "opening", whereas a severely rigid spine can only accept "cutting and closing" in a closing osteotomy. This point of view is generally recognized as a consensus by most surgeons. In this research, the stiffness of the spine was simply divided into three categories based on the degree of stiffness and the surgical type doctors considered appropriate. For very soft or very rigid kyphosis, doctors can already choose the suitable operation easily based on their expertise, so operation selection guidance for those cases is not necessary. However, they usually hesitate over the intermediate stiffness between mild and severe, since both opening and closing methods seem suitable; we therefore define this state of stiffness as "median-rigid kyphosis". These cases are genuinely confusing to surgeons attempting to choose a suitable method. The efficiency of kyphosis correction in osteotomy can be defined as the correction of kyphosis per surgical level. As described above, the kyphosis correction of PSO was larger than that of SPO, so the efficiency of PSO was greater than that of SPO in treating median-rigid cases. The reasons can be explained from two aspects. First, every PSO was single-level, but 9 out of 120 SPO cases were double-level, so the correction angle of a single-level SPO would be smaller still if the double-level cases were excluded. Second, even when the severe-rigid cases were included, PSO still had a larger correction angle than SPO. If PSO is used to treat mild-to-median rigidity, the correction angle should not decrease. On the other hand, the correction angle of SPO is difficult to increase in the treatment of severe stiffness. PWO had a larger correction efficacy than SPO because PWO involves more segments. Because PWO wins by the sheer quantity of surgical levels, this outcome is unlikely to change in median-rigid cases (Fig. 6).
The correction angles of some individual cases were quite extreme, such as 52° for SPO [15] and 60° for SPO [2,24]. As the angle increases, the incidence of complications increases. Wide-angle SPO can cause lethal bleeding through aortic damage. In all 9 included studies, only Arun [22] reported one death. This reflects the fact that surgeons clearly know of the problem and pay great attention to avoiding such a danger. Arun [22] emphasized the importance of a slow and careful operation during the "opening" process; in other words, do not let the "click" sound occur. The "click" comes from the fracture of bone when the spine is pulled backwards rapidly. Chang [19] did not suggest performing SPO on the elderly, advanced AS patients, or patients with arteriosclerosis. Chang [19] and Arun [22] considered SPO to be relatively suitable for L2 and lower levels, and malleable rods were necessary to offer temporary stability in the case of a correction greater than 35°. Because each method has its limitations, seeking a one-step correction in a single level is very dangerous. Bridwell [18] suggested combining one PSO with several PWOs. SPO and PWO are not essentially different, except for the quantity of levels.
Fig. 4 Comparison of blood loss; CI, confidence interval. Both SPO and PWO had less blood loss than PSO.
Local biomechanical complications included vertebral body translation, pedicle fracture, pedicle screw loosening, instrument breakage, nonunion, and anterior cortex fracture. They are mostly caused by improper fixation, misoperation or excessive elastic stress. PWO has a higher incidence of local biomechanical complications than PSO because PWO needs to overcome a greater stress of the spine. Compared with mild-rigid cases, PWO would be more difficult in median-rigid cases. On the contrary, using PSO to treat median-rigid cases should be easier compared with the severe cases reported by the four pooled studies. In other words, the rigidity of the spine affects PWO much more than PSO. Of the fixation materials, the Universal Spine System (USS) was stronger [15,17] than the slender rod. Royen [25] reported that the instrumentation failure rate in PWO (6.5 %) was higher than in PSO (3.8 %), which was close to our findings. Zhu [20] reported that the rate of correction loss at follow-up for opening osteotomy was 6.1° ± 6.7°, while that of the closing osteotomy was 1.3° ± 5.4°. Surgeons [15,23] found that opening osteotomy was too difficult and dangerous to perform in cases of high anterior column tension, osteoporosis and progression of AS, because the implementation of SPO and PWO relies on the anterior column. Blood loss in PSO was 800 ml greater than in SPO and 550 ml greater than in PWO. The large amount of blood loss in PSO was due to deep cutting of the vertebrae and difficulty in hemostasis. The key steps of PWO and SPO are the resection of the disc and zygapophyseal joint, during which bleeding is not as much of a problem. In addition, PSO is more frequently used in severe-rigid cases. Thus, the more rigid the bone is, the more fragile the vessel becomes. As such, closing osteotomy should be carefully performed on the elderly and patients with a low tolerance to hemorrhage.
Some non-significant but highly consistent pooled outcomes appeared as well. In the analysis of local biomechanical complications, every individual study consistently showed that the incidence in SPO was greater than in PSO. It is very likely that a meaningful outcome would be found if more articles were included or if each included study had a smaller standard deviation (higher uniformity). We therefore consider the incidence of local biomechanical complications in SPO to be higher than in PSO in median-rigid cases.
Cho [18] found better improvement in the sagittal vertical axis (SVA) with PSO than with PWO after the same angle of correction, because PSO swings the upper vertebral column backward, whereas PWO twists it instead. He recommended using PSO to treat an SVA greater than 12 cm, while Daubs [26] considered 10 cm more reasonable.
The incidence of systemic complications was nearly the same in each individual study. The causes, such as inflammatory status of AS, medical co-morbidity, and perioperative management, might be complicated. Multifactor regression analysis and the inclusion of more related articles are needed for further analysis. Dura mater tearing is caused by vertebral translation [25] and careless clamping off of the calcified ligamentum flavum.
The spinal cord and nerve roots are likely to be injured during correction of the kyphosis. Royen [25] and Lazennec [15] believed that PSO did less harm to the nerve root because the previous steps, laminoplasty and intervertebroplasty, generally broaden the nerve root canal. However, PSO is not a good option for the spinal cord. A PSO greater than 40° would increase the probability of spinal cord compression by shortening the posterior column [27]. Chang [19] did not encounter such problems even during PSOs of greater than 45°. He thought careful manipulation and the high tolerance of the medullary cone to compression were helpful. By calculating the height of a normal lumbar vertebral body, some surgeons consider a PSO greater than 35° [19,28,29] theoretically difficult to achieve in one level due to the greater risk of nerve injury. This angle approaches our result (36°).
Conclusions
The "median-rigid" was not an exact quantified degree, but just the remaining cases without prominent features of stiffness that could not be recognized by doctors in decision making. The authors mainly discussed the operation choose for these cases. The authors closely combined surgeon's specialized knowledge with evidence-based data in order to help make right operation decisions. Median-rigid cases can be treated by either opening or closing osteotomy; PSO is more effective and efficient than SPO, and PWO is more efficient than SPO. PSO is suitable for high-degree kyphosis. Its advantages are little harm to the aorta, small demand on bone density, and low risk of instrumentation failure. However, substantial blood loss and the complicated surgical technique are two factors doctors should seriously consider. Comparing with SPO, PWO is proper for large angle kyphosis, but doctors must overcome such high risk of instrumentation failure. SPO is really good for small angle. Careful work is necessary to avoid aorta damage and middle column fracture. There are some limitations to this study. I 2 value more than 50 % is customarily considered to be high heterogeneity between studies. The I 2 value in the correction angle comparing PSO vs SPO is 92 %. By now it is inappropriate to remove any articles no matter from professional judgement or the weight each possesses, because too few are included. To reduce the confounding effect of high heterogeneity, we used random mode which was supposed to give a very conservative result that not easy to make sense. The correction angle of SPO and PSO was regarded to be different in case the result under random mode still showed significance, and that was the fact. However, low heterogeneity degrade the level of evidence. To recognize and eliminate the bias completely, an RCT was needed. With the publication and enrollment of more new studies, these conclusions will become more convincing.
Competing interests
The author (s) declare that they have no competing interests.
Authors' contributions HX and CZ: acquisition, analysis or interpretation of data. WP, HL, and TY took part in the research design. YJ, CK, and AJT revised the article. SH approved and submitted the final version. All authors have read and approved the final submitted manuscript. | 2016-05-12T22:15:10.714Z | 2016-01-22T00:00:00.000 | {
"year": 2016,
"sha1": "61d1c1e62d6d51088d7895a1e24b04eb61cc2494",
"oa_license": "CCBY",
"oa_url": "https://bmcsurg.biomedcentral.com/track/pdf/10.1186/s12893-015-0118-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "61d1c1e62d6d51088d7895a1e24b04eb61cc2494",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235538191 | pes2o/s2orc | v3-fos-license | Foliar-Supplied Molybdenum Improves Phyto-Nutritional Composition of Leaves and Fruits of Loquat (Eriobotrya japonica Lindl.)
This study was conducted to analyze the effect of molybdenum (Mo) on the levels of macronutrients (Mg, K, Ca), micronutrients (Na, Zn, B, Mo, Mn, Fe, Co, Ni, Cu) and toxic elements (Cd, Sn, As, Cr, Hg, Pb) in leaves and fruit tissues (peel, pulp and seed) of loquat using inductively coupled plasma-mass spectrometry (ICP-MS). Fruits were obtained from a loquat orchard located in Fujian (Yun Xiao/Zhangzhou). Loquat leaves showed 3.99, 23.90, 57.68 and 9.79-fold increases in the concentrations of Mg, K, Ca and B as affected by exogenously applied Mo, while a significant (p ≤ 0.05) reduction was observed in Na, Mo, Mn, Co, Ni, Cu, Sn, As, Cr, Hg, and Pb contents. In fruit peel, Mo induced 1.05, 1.56, 2.26, 1.15, 1.39, 1.30 and 1.89-fold increases in Mg, K, Na, Zn, B, Co and Ni, respectively, and Mo, Mn, Fe, Cu, Cr, and Pb contents were reduced. Fruit pulp showed elevated contents of Na, B, Co, Ni, and Sn by 25.65, 26.57, 70.09, 12.33 and 0.99%, respectively, and decreased concentrations of Mg, K, Ca, Zn, Mo, Mn, Fe, Cu, Cd, and Cr. Similarly, in loquat seeds, under the influence of Mo application, levels of K, Na, Zn, Fe, and Cr were elevated by 14.75, 41.54, 12.71, 536.47 and 656.27%, respectively, while Mg, Ca, Mn, Co, Cu, Sn, and Hg were reduced significantly (p ≤ 0.05). Overall, the mineral concentrations detected in the leaves and fruit tissues of loquat were greatly influenced by the application of Mo. Although the concentration of Sn in fruit pulp was increased by the foliar application of Mo, the fruits remained within safe limits for human consumption.
Introduction
Fruits and vegetables are low in energy content but have high nutritive value [1,2]. They are a rich source of vitamins, fibers, and minerals which are much essential for the human body [3]. The sufficient intake of fruits and vegetables can remove saturated fats and sugars from the body and enhance the consumption of healthy nutrients and dietary fiber [4]. To avoid certain cardiovascular diseases, it is recommended to consume 440 g fresh fruits and vegetables per day [5]. Heavy metals can influence the function of the nervous system, and cause mental illness by affecting the blood circulatory system [6]. Hence, it is very important to investigate the mineral contents of fruits and vegetables [7].
Loquat (Eriobotrya japonica Lindl.) is an evergreen fruit tree that originated in China. It belongs to the family Rosaceae, subfamily Maloideae. It is a rich source of vitamin A, vitamin B6, potassium, magnesium, and dietary fiber [8]. It is a very beautiful orange-colored fruit with a mild sweet and sour taste [9]. It is most widely grown in Japan, Korea, India, Pakistan, and the south-central region of China, and it is also grown as an ornamental shrub in California [10]. China is the leading producer and exporter of loquat and grows it on more than 100,000 hectares. The annual production of loquat in China reaches up to 380,000 tons [11]. More than 30 species of loquat are grown in temperate and subtropical regions of Asia [12].
Fertilizer use is one of the main factors influencing the mineral contents of plants [13][14][15]. Foliar application of fertilizers is 10-20% more effective than soil application [16][17][18]. Molybdenum is an essential nutrient for plant growth and is required in trace quantities. It is an integral component of nitrate reductase and nitrogenase, and is vital for the assimilation of nitrates from the soil. Apart from its role in nitrate reductase, its function in higher plants is not well known [19]. Therefore, the role of Mo is closely associated with nitrogen metabolism, and its deficiency leads to N deficiency in plants [20]. Foliar fertilization of Mo can effectively increase internal Mo and improve the performance of molybdo-enzymes [21].
Since little has been studied about the nutritional effects of foliar-applied Mo on leaves and fruits of loquat, a field experiment was conducted at Yun Xiao, Fujian, to investigate the nutritional response of loquat to exogenous application of Mo. The contents of macronutrients (Mg, K, Ca), micronutrients (Na, Zn, B, Mo, Mn, Fe, Co, Ni, Cu), and heavy metals (Cd, Sn, As, Cr, Hg, Pb) in different tissues of loquat (leaf blade, fruit peel, pulp, and seed) were analyzed at different maturity stages through inductively coupled plasma-mass spectrometry (ICP-MS).
Plant Material
The field experiment was carried out in a loquat orchard located at Yun Xiao, Fujian, China (23°57′13.5″ N, 117°20′36.0″ E). Twenty-four young loquat trees (cv. Jiefang zhong) were selected and tagged for periodic monitoring after foliar application of ammonium molybdate (CAS No. 12054-85-2, ≥99% purity, Sigma Aldrich, Milwaukee, USA). Loquat trees were 4-5.5 m tall with canopy diameters of 4-5 m. The planting distance between trees was approximately 6 m × 6 m. Loquat trees were systematically pruned and thinned, and fertilized with NPK (15:15:15) at a rate of 5 kg per plant per season for three growing seasons. The experiment contained two treatments: the control (water spray) and 0.05% ammonium molybdate. The foliar application was carried out twice at the immature green fruiting stage with a 3-week interval. Each treatment consisted of four replicates, and each replicate had three trees.
Sample Collection and Preparation
Samples of the leaves and fruits were taken from the south-exposed tree canopy, at about 3-5 m height, at three different fruit maturity stages: immature green (IMG, at the time of foliar spray), mature green (MG, 40 days after foliar spray) and fully ripe (FR, 70 days after foliar spray). Samples were washed (for approximately 10 s) with a solution of phosphate-free detergent, then with a 0.1 N HCl solution, and finally with distilled water. Leaves, fruit peel, pulp, and seeds were dried at 70 °C, ground, and passed through a 500-µm stainless-steel sieve. Soil samples were taken from the root zone of each tree, about 2 m from the trunk, and at two depths (0-25 and 25-40 cm) using a spiral auger with a 2.5 cm diameter. Three sub-samples around the trunk were taken to make a composite soil sample for each tree [7]. Samples were transported to the laboratory (Institute of Subtropical Fruits, College of Horticulture, Fujian Agriculture and Forestry University, Fuzhou), oven-dried at 40 °C, and crushed to pass through a 2-mm sieve, then ground to <60 µm for element determinations [7,22]. The soil texture was sandy loam with pH and EC of 6.2 and 0.396 dS m−1, respectively. The pH and EC were determined using a digital pH meter (Hanna HI-98107, Mauritius) and an EC meter (Hanna HI-98304, Hanna Instruments Inc., Mauritius), respectively. The nutritional status of the soil was determined using ICP-MS before the foliar spray of Mo (Table 1). The values in Table 1 show the mean ± standard error of four replicates.
Instrumentation and Reagents
The inductively coupled plasma mass spectrometer (Agilent 7700X, Agilent, Santa Clara, CA, USA) used in this study was combined with a high-efficiency sample introduction desolvating system equipped with a quartz cyclonic spray chamber. Before each analysis, the instrument was tuned for daily performance using the Agilent 7700X Sensitivity Detection Limit Solution (Agilent, USA). This is an aqueous multi-element standard solution of Li, Y, Co, Ce, and Tl for consistent sensitivity (7Li, 59Co, 89Y, 140Ce, 205Tl) and minimum doubly charged and oxide species levels (140Ce). The concentration of the elements in the solution was 10 µg L−1. The internal standard solution (Agilent, USA) had a concentration of 0.01 µg mL−1 for each element (226Re, 115In, 73Ge, 45Sc).
Standard solution (1000 µg L−1) for each element (i.e., Mg, K, Ca, Na, Zn, B, Mo, Mn, Fe, Co, Ni, Cu, Cd, Sn, As, Cr, Hg, and Pb) was purchased from the National Standard Material Research Centre. The nitric acid (HNO3) used in the experiment was analytically pure, obtained from CNW Technologies GmbH, Germany. The deionized water (1.83 × 10¹⁰ m·mhos) was prepared locally. In this experiment, the working parameters of the inductively coupled plasma mass spectrometer were optimized before the test [7].
Sample Preparation
After cleaning, glassware and a Pure Teflon (PTFE) digestion tank were soaked in 20% nitric acid (1 + 4, V + V) for more than 12 h, then rinsed with deionized water three times before use. After lyophilizing, 0.5 g of freeze-dried loquat tissue was added into a poly-tetra-fluoroethylene digestion tank. Then, 5 mL of concentrated nitric acid (analytical grade, CNW Technologies GmbH, Düsseldorf, Germany) was added. The digestion tank was gently shaken to completely immerse the sample, and microwave digestion (Mars5, HY−20−164, CEM, Matthews, NC, USA) occurred under the set conditions [7]. After microwave digestion, the solution was cooled naturally. The digestion solution was transferred to a 25-mL volumetric flask. The blank sample (control) was prepared simultaneously.
Determination
A single standard series solution was prepared. The mixed standard series solution and sample solution were measured by ICP−MS, and the standard curve method was used for quantification. The mixed internal standard was added to correct for matrix interference and instrument signal drift.
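To make the quantification step concrete, the following is a simple sketch of external standard-curve quantification; the calibration points and the sample reading are invented for illustration, and the internal-standard correction for matrix interference and drift mentioned above is omitted for brevity.

```python
# Hypothetical external calibration curve for one element (intensity vs. concentration).
import numpy as np

conc_std = np.array([0.0, 10.0, 50.0, 100.0, 500.0, 1000.0])        # standards, µg/L
signal_std = np.array([120, 1.5e4, 7.4e4, 1.49e5, 7.5e5, 1.49e6])   # counts per second

slope, intercept = np.polyfit(conc_std, signal_std, 1)              # linear fit

def quantify(sample_signal, dilution_factor=1.0):
    """Convert a blank-corrected sample signal into a concentration (µg/L)."""
    return (sample_signal - intercept) / slope * dilution_factor

print(round(quantify(3.2e5), 1))   # hypothetical sample reading
```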
Statistical Data Analysis
Treatment allocation to loquat trees was done according to randomized complete block design (RCBD) arrangement. Collected data were subjected to analysis of variance (ANOVA), and Fisher's LSD technique was used to compare means (p ≤ 0.05) using the analytical software package 'Statistix 8.1'. Principal component analysis (PCA) and correlation coefficient values were determined with the Pearson (n) method using XLSTAT Ver. 2019.
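The authors used Statistix 8.1 and XLSTAT; as a rough open-source equivalent of the same workflow (one-way ANOVA per element followed by PCA on standardized concentrations), the sketch below uses SciPy and scikit-learn. All data values in it are placeholders, not the measured concentrations.

```python
# Illustrative ANOVA + PCA pipeline; values are hypothetical.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# One-way ANOVA for a single element (e.g. leaf K) across the two treatments,
# four replicates each (with two groups this is equivalent to a t-test)
control = np.array([3.1, 3.4, 2.9, 3.2])
mo_treated = np.array([7.9, 8.3, 8.1, 7.6])
f_stat, p_value = stats.f_oneway(control, mo_treated)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# PCA on a (samples x elements) concentration matrix; standardizing the columns
# makes this a correlation-based (Pearson) PCA, as in XLSTAT
X = np.random.default_rng(0).normal(size=(24, 18))   # placeholder data matrix
pca = PCA(n_components=4)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("Explained variance (%):", np.round(pca.explained_variance_ratio_ * 100, 2))
```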
Macronutrients
The foliar application of 0.05% ammonium molybdate increased the concentrations of Mg, K, and Ca in loquat leaves by 248%, 672%, and 632% at the mature green stage, and by 299%, 2290%, and 5668% at the full ripe stage, as compared to the control. The results showed a gradual increase in macronutrients with the foliar application of 0.05% ammonium molybdate from the mature green fruiting stage to the full ripe stage, except in the case of Mg, whose contents were higher at the mature green stage than at the full ripe stage (Table 2).
Micronutrients
Overall concentration of Na was reduced at full ripe stage as compared to immature and mature green stage. At mature green stage, the Na level increased by 4.43% under the influence of Mo application as compared to control and recorded maximum among all observations. While at full ripe stage, Mo application reduced the Na concentration by 18% as compared to control. Zinc contents remained unchanged with the fruit maturity but significantly elevated at mature green stage in untreated plants. Plants treated with Mo exhibited a 25% decrease in Zn concentration at mature green stage, as compared to control, while remained unchanged at full ripe stage. The maximum level of B (33.9 mg·Kg −1 ) was observed in loquat leaves treated with 0.05% ammonium molybdate, at full ripe stage, followed by same treatment at mature green stage. The foliar application of Mo caused gradual increase in B concentration, while untreated plants showed constant level of B in leaves with fruit maturity. The plants receiving Mo application exhibited 9.8 and 7.7-fold decrease in Mo concentration at mature green and full ripe stage, respectively. Whereas untreated plants showed significantly increased Mo level with fruit maturity. Manganese concentration in the leaves of untreated plants reduced with fruit maturity (38.8%), whereas the plants treated with 0.05% ammonium molybdate showed further reduction in Mn level (Table 3). Untreated plants showed significantly increased Fe level at mature green stage (15.4 g·Kg −1 ) followed by full ripe stage (12.86 g·Kg −1 ) as compared to immature green stage (11.39 g·Kg −1 ). While, foliar application of Mo severely hampered the Fe concentration (106 and 247-fold reduction at mature green and full ripe stage, respectively, as compared to control). Similarly, Co concentration was also reduced with Mo application in comparison with control (12.64 and 14.42-fold reduction at mature green and full ripe stage, respectively). Whereas, untreated plants exhibited gradual decrease in Co level with fruit maturity. Nickle concentration was also reduced with fruit maturity, in untreated plants. But under the influence of Mo, maximum Ni (9.7 mg·Kg −1 ) was recorded at mature green stage, while minimum (1.0 mg·Kg −1 ) at full ripe stage. In case of Cu, untreated plants exhibited steady enhancement with fruit maturity from immature green to full ripe. While Mo treated plants showed maximum Cu level (13.8 mg·Kg −1 ) at mature green stage, while lowest at full ripe stage (1.8 mg·Kg −1 ) ( Table 3).
Heavy Metals
Cadmium concentration in the leaves of loquat increased with increase in fruit maturity. Maximum Cd contents (197.3 µg·Kg −1 ) were observed in the plants received foliar application of 0.05% ammonium molybdate, which was 2.5-fold higher than control. In contrast with Cd, Sn concentration decreased with increased fruit maturity, as well as with Mo application. Maximum Sn (7745.3 µg·Kg −1 ) was recorded in untreated plants at mature green stage, while minimum concentration (89.1 µg·Kg −1 ) was observed in Mo treated plants at same maturity stage. Foliar application of Mo significantly reduced As concentration in loquat leaves. At full ripe stage, untreated plants showed 42.3 mg·Kg −1 As, while in comparison, plants treated with Mo exhibited only 9.2 µg·Kg −1 As, which was 4700 times less than control, approximately. Similarly, Maximum Cr concentration (4.4 mg·Kg −1 ) was observed in the leaves of untreated loquat plants at mature green stage, while minimum (504.6 µg·Kg −1 ) found in Mo treated plants at same maturity stage. Although Hg concentration increased with increase in fruit maturity, Mo application significantly reduced (96 and 95.7% at mature green and full ripe stage, respectively) Hg contents at mature green and full ripe stage. In contrast with Hg, Pb concentration decreased with increase in fruit maturity. Loquat plants treated with Mo exhibited 545 and 235-fold reduction at mature green and full ripe stage, respectively (Table 4).
Macronutrients
Magnesium concentration in fruit peel of loquat significantly reduced with increase in maturity stage. The foliar application of 0.05% ammonium molybdate caused increase in Mg concentrations by 12.81% and 5.76% at mature green and full ripe stage, respectively. The results showed gradual increase in K contents with increase in maturity stage. The foliar application of 0.05% ammonium molybdate boosted up the increasing trend of K by 56.5% at full ripe stage. While, Ca concentration increased with increase in fruit maturity, regardless of treatment applied. At full ripe stage, Ca concentration was 3.5-fold higher than that at mature green stage (Table 5).
Micronutrients
The concentration of Na increased at full ripe stage as compared to mature green stage. The foliar application of 0.05% ammonium molybdate boosted up the increasing trend of Na by 126.8% at full ripe stage. Similarly, a gradual increase in Zn contents was observed with increase in maturity stage. At full ripe stage, the Zn level increased by 14.5% under the influence of Mo application as compared to control and recorded maximum among all observations. Similar trend was observed in case of B concentration in fruit peel of loquat. Leaf B elevated by 39.12% at full ripe stage in Mo treated plants. Untreated plants showed 208% increase in Mo concentration with increase in maturity (from mature green to full ripe stage). While Mo treated plants exhibited 30% decrease in leaf Mo concentration at full ripe stage. The plants receiving Mo application exhibited 1.08 and 1.32-fold decrease in Mn concentration at mature green and full ripe stage, respectively. Whereas untreated plants showed significantly increased Mn level with fruit maturity. Iron concentration in the leaves of untreated plants reduced with fruit maturity (56.6%). Whereas the plants treated with 0.05% ammonium molybdate showed further reduction (1.3-fold) in Mn level, at full ripe stage. Cobalt concentration increased with increase in fruit maturity, as further increase with the foliar application of Mo in comparison with control (1.4 and 1.3-fold increase at mature green and full ripe stage, respectively). Nickle concentration in fruit peel of untreated loquat plants also increased with increase in fruit maturity. Under the influence of Mo, maximum Ni (1.3 mg·Kg −1 ) was recorded at full ripe stage followed by 0.9 mg·Kg −1 at mature green stage. Regardless of treatments applied, plants exhibited significant reduction in Cu concentration with increasing fruit maturity. The foliar application of Mo enhanced Cu contents by 12.26% at mature green stage, while reduced by 3.87% at full ripe stage (Table 6). Table 6. Micronutrient concentrations in fruit peel of loquat as affected by the foliar application of molybdenum.
Heavy Metals
Cadmium concentration in fruit peel of loquat increased with increasing fruit maturity. Maximum Cd contents (190.5 µg·Kg−1) were observed in the plants that received foliar application of 0.05% ammonium molybdate, which was 1.03-fold higher than the control. Similarly, Sn concentration also increased with increasing fruit maturity, as well as with Mo application. Maximum Sn (1156.6 µg·Kg−1) was recorded in Mo-treated plants at the full ripe stage, while the minimum concentration was 160.14 µg·Kg−1 (Table 7).
Macronutrients
The foliar application of 0.05% ammonium molybdate enhanced the Mg concentrations at mature green stage by 8.6%, and reduced at full ripe stage by 10.6%. Overall Mg level was reduced with increase in fruit maturity. Like in case of Mg, Mo treated plants exhibited increased K concentration at mature green stage by 71.3%, and decrease at full ripe stage by 45.2%. The trend of K level in untreated plants was reciprocal to that of Mg level i.e., K level was enhanced with increase in fruit maturity. Similar to Mg contents, Ca contents reduced with increase in fruit maturity. Fruit pulp Ca was reduced up to 90% at full ripe stage as compared to control. The foliar application of Mo further reduced Ca level in fruit pulp of loquat by 27.8 and 33.5% at mature green and full ripe stage, respectively (Table 8).
Micronutrients
Overall concentration of Na remained unchanged at full ripe stage as compared to immature green stage, and reduced with respect to mature green stage. At mature green stage, the Na level increased (2.64-fold) as compared to immature green stage and recorded maximum among all observations. At same maturity stage, the foliar application of Mo reduced the Na concentration to same level as immature green stage had. While at full ripe stage, Mo application enhanced the Na concentration by 25.65% as compared to control. Zinc contents were reduced with the fruit maturity. Minimum Zn contents (21.58 mg. Kg −1 ) were observed at mature green stage in Mo treated plants. Plants treated with Mo exhibited 17% decrease in Zn concentration at full ripe stage, as compared to control. The maximum level of B (36.3 mg·Kg −1 ) was observed in fruit pulp of loquat treated with 0.05% ammonium molybdate, at mature green stage, followed by same treatment at full ripe stage. The untreated plants showed slight but significant increase in B contents in fruit pulp with fruit maturity. The foliar application of Mo caused further increase in B concentration. The plants receiving Mo application exhibited 1.37 and 1.01-fold decrease in Mo concentration at mature green and full ripe stage, respectively. The untreated plants also showed significant decrease in Mo level with fruit maturity. Manganese concentration in the fruit pulp of untreated plants reduced with fruit maturity (up to 82%). Whereas, the plants treated with 0.05% ammonium molybdate showed slight but significant increase in Mn level at mature green stage, and further reduction at full ripe stage as compared to control (Table 9). Untreated plants showed slight but significant increase in Fe level at full ripe stage (76.1 mg·Kg −1 ) as compared to immature green stage (68.9 mg·Kg −1 ). While, foliar application of Mo significantly hampered the Fe concentration (1.21 and 1.38-fold reduction at mature green and full ripe stage, respectively, as compared to control). Cobalt concentration was observed increasing with increased fruit maturity. Molybdenum application induced 1.4 and 1.7-fold increase in Co level of loquat fruit pulp at mature green and full ripe stage, respectively. The maximum Ni concentration (10.6 mg·Kg −1 ) was observed in fruit pulp of untreated plants at mature green stage. The foliar application of Mo induced 3.5-fold reduction in Ni level at mature green stage. While at full ripe stage, Ni contents were elevated by 12.33% as compared to control. In case of Cu, the maximum level (8.5 mg·Kg −1 ) was observed in fruit pulp of untreated plants at mature green stage. The foliar application of Mo induced 2 and 1.4-fold reduction in Cu level at mature green and full ripe stage, respectively (Table 9).
Heavy Metals
Cadmium concentration in fruit pulp of loquat decreased with increase in fruit maturity. Maximum Cd contents (298.1 µg·Kg −1 ) were observed in the untreated plants at immature green stage, which was 4.3-fold higher than those at full ripe stage. Foliar application of Mo induced 1.8 and 1.86-fold reduction in Cd level at mature green and full ripe stage. In contrast with Cd, Sn concentration increased with increased fruit maturity, as well as with Mo application. Maximum Sn (101.8 µg·Kg −1 ) was recorded in Mo treated plants at full ripe stage, while minimum concentration (58.4 µg·Kg −1 ) was observed in untreated plants at mature green stage. Foliar application of Mo significantly reduced As concentration in fruit pulp of loquat at mature green stage, while observed unchanged at full ripe stage as compared to control. At mature green stage, untreated plants showed 21.7 µg·Kg −1 As, while in comparison, plants treated with Mo exhibited only 0.5 µg·Kg −1 As, which was 47 times less than control, approximately. Similarly, Maximum Cr concentration (1.5 mg·Kg −1 ) was observed in fruit pulp of untreated loquat plants at immature green stage, while minimum (535.1 µg·Kg −1 ) found in Mo treated plants at full ripe stage. Mercury concentration remained unchanged throughout fruit maturity. Molybdenum application significantly enhanced Hg contents in fruit pulp of loquat at mature green stage, while recorded unchanged at full ripe stage as compared to control. Regardless of treatment applied, Pb concentration decreased with increase in fruit maturity (Table 10).
Macronutrients
The Mg and Ca concentrations in untreated loquat seeds increased by 91 and 46.4% at the mature green stage, and decreased by 52 and 90%, respectively, at the full ripe stage as compared to the immature green stage. The seeds of plants receiving foliar application of Mo exhibited 1.4 and 1.5-fold reductions in Mg and Ca contents at the full ripe stage, respectively. The K concentration showed the reciprocal trend to the Mg and Ca concentrations. Potassium concentration in untreated loquat seeds decreased by 48.74% at the mature green stage, and increased by 35.27% at the full ripe stage as compared to the immature green stage. This fluctuation indicated an extraordinary reduction in K level (10,956.3 mg·Kg−1) at the mature green stage. The seeds of plants receiving foliar application of Mo exhibited 1.03 and 1.14-fold increments in K contents at the mature green and full ripe stages, respectively (Table 11).
Micronutrients
Sodium contents in seeds were observed to decrease with increasing fruit maturity of loquat, whereas Mo application significantly improved the Na level in loquat seeds at the mature green (195%) and full ripe stage (41.5%) as compared to the control. Zinc concentration in untreated loquat seeds increased by 62.2% at the mature green stage, and decreased by 70% at the full ripe stage as compared to the immature green stage. This fluctuation indicated an extraordinary enhancement in Zn level (37 mg·Kg−1) at the mature green stage. The seeds of plants receiving foliar application of Mo exhibited 1.38 and 1.12-fold enhancements in Zn contents at the mature green and full ripe stages, respectively. Regardless of the treatment applied, B contents remained unchanged with fruit maturity, but were significantly suppressed at the mature green stage in untreated plants. The maximum level of Mo (4.7 mg·Kg−1) was observed in loquat seeds treated with 0.05% ammonium molybdate at the mature green stage. All other observations exhibited non-significant differences among each other. Manganese concentration in untreated loquat seeds increased by 668.4% at the mature green stage, and decreased by 93.4% at the full ripe stage as compared to the immature green stage. This fluctuation indicated an extraordinary enhancement in Mn level (402.2 mg·Kg−1) at the mature green stage. The seeds of plants receiving foliar application of Mo exhibited 2.02 and 1.85-fold reductions in Mn contents at the mature green and full ripe stages, respectively (Table 12). Untreated plants showed a significantly increased Fe level at the full ripe stage (205.4 mg·Kg−1), followed by the mature green stage (66.3 mg·Kg−1), as compared to the immature green stage (45.9 mg·Kg−1), while foliar application of Mo significantly enhanced the Fe concentration (6.36-fold) at the full ripe stage as compared to the control. Cobalt concentration in untreated loquat seeds increased by 176.4% at the mature green stage, and decreased by 18.35% at the full ripe stage as compared to the immature green stage. This fluctuation indicated an extraordinary enhancement in Co level (0.6 mg·Kg−1) at the mature green stage. The seeds of plants receiving foliar application of Mo exhibited 3.06 and 1.75-fold reductions in Co contents at the mature green and full ripe stages, respectively. The Ni concentration showed the reciprocal trend to the Co concentration. Nickel concentration in untreated loquat seeds decreased by 71.82% at the mature green stage, and increased by 196.83% at the full ripe stage as compared to the immature green stage. This fluctuation indicated an extraordinary reduction in Ni level (0.4 mg·Kg−1) at the mature green stage. The seeds of plants receiving foliar application of Mo exhibited a 1.9-fold reduction in Ni contents at the mature green stage. In the case of Cu, untreated plants exhibited a steady enhancement with fruit maturity from immature green to full ripe, while Mo-treated plants showed the maximum Cu level (2.5 mg·Kg−1) at the full ripe stage and the lowest at the mature green stage (1.7 mg·Kg−1) (Table 12).
Heavy Metals
Cadmium concentration in the seeds of loquat decreased with increase in fruit maturity. Maximum Cd contents (268.2 µg·Kg −1 ) were observed in the plants received foliar application of 0.05% ammonium molybdate, which was non-significantly different than control. In contrast with Cd, Sn concentration increased with increase in fruit maturity, while decreased with Mo application. Maximum Sn (484.2 µg·Kg −1 ) was recorded in untreated plants at full ripe stage, while minimum concentration (80.24 µg·Kg −1 ) was observed in untreated plants at immature green stage. Foliar application of Mo significantly reduced As concentration in loquat seeds at mature green stage. At full ripe stage, untreated plants showed 4.4 µg·Kg −1 As, while in comparison, plants treated with Mo exhibited 9.09 µg·Kg −1 As. In contrast, Maximum Cr concentration (80.7 mg·Kg −1 ) was observed in the seeds of Mo treated loquat plants at full ripe stage, while minimum (1.3 mg·Kg −1 ) found in untreated plants at immature green stage. Molybdenum application significantly increased Hg contents by 78.2% at mature green stage, while reduced by 68.8% at full ripe stage. Similarly, loquat plants treated with Mo exhibited 1.65-fold enhanced Pb at mature green as compared to control. While, there was no significant change in Pb contents observed by Mo application at full ripe stage (Table 13).
Principal Component Analysis
To delineate the effect of multiple variables, Principal Component Analysis (PCA) among Mo treatment and nutritional concentrations in leaves and fruits of loquat was determined ( Figure 1). Factor F1, covering 39.81% variability in data (eigenvalue 7.167), showed highest squared cosine values of Mg, K, Ca, Fe, Cu, Sn, As, Hg and Pb in leaves with both treatments (Control and 0.05% ammonium molybdate) at immature green and full ripe stage, as well as in seeds at immature green stage, suggesting positive relation of aforementioned response and treatment variables. Second factor, covering 24.06% variability in data (eigenvalue 4.331), showed highest squared cosine values of Zn, Mo, Mn and Cd in fruit peel with both treatments at mature green stage, in fruit pulp both treatments at immature green stage and Mo treatment at mature green and full ripe stage, in seed both treatments at mature green stage, and control treatment at full ripe stage. Third factor of PCA, covering 12.59% variability in data (eigenvalue 2.268), showed highest cosine values of B, Ni and Cu in leaves with foliar application of Mo at mature green stage, in fruit peel both treatments at full ripe stage, in pulp control treatment at mature green and full ripe stage. Factor 4, covering 9.44% variability in data (eigenvalue 1.7), showed highest cosine values of Na and Cr in loquat seeds with foliar application of Mo at full ripe stage. Thus, PCA delineated the effect of treatment variables on macro-nutrients, micro-nutrients and heavy metals in leaves and fruit tissues of loquat.
Discussion
There were differences in the concentrations of nutrients and heavy metals between leaves and fruit tissues of loquat observed on three different maturity stages i.e., immature green, mature green, and full ripe stage (Tables 1-13). The nutrient concentrations in plant tissues are known to fluctuate throughout the growing season due to environmental factors (i.e., soil water, temperature) between periods of adequate soil supply of micronutrients and periods of insufficient soil supply of the applied micronutrient [23,24]. Therefore, in present study, the fluctuation in concentrations of nutrients was observed throughout maturity of loquat. Current study suggests that Mg, K, and Ca content increased by the foliar application of 0.05% ammonium molybdate in leaves of loquat. In plants, K, Na, Ca, and Mg not only maintain the balance of potential energy, but are also components or key supporting elements of important functional enzymes [25]. The obtained results regarding the effect of foliar applied Mo on leaf Mg, K, and Ca content are in line with earlier reported studies. It has been reported that Mo application improved leaf Mg, K, and Ca content of sunflower by 12%, 3%, and 6%, respectively [26]. Foliar application of 1.5 cm·L −1 Mo increased the concentrations of Mg, K, and Ca by 6.6%, 5%, and 9.8% in the leaves of 'Le-Conte' pear. Same treatment, when applied along with 3 g·L −1 Fe, enhanced the Mg, K, and Ca level by 18.3%, 12.3%, and 30.3% in the leaves of 'Le-Conte' pear [27].
Molybdenum is an essential element for plant growth and plays an important role in different metabolic processes. Moreover, it has a vital role in the function of nitrate and nitrite reductase [21]. After the uptake of nitrates by roots, these nitrates are readily mobile in plants and accumulate in vacuoles. However, the reduction of nitrates to ammonium (NH4+) is necessary to support the synthesis of organic compounds and proteins in plants [28]. The reduction of nitrates to NH4+ is supported by different enzymes in two steps. In the first step, the transformation of NO3− to NO2− is carried out by nitrate reductase in the cytoplasm. In the second step, the conversion of NO2− to NH4+ occurs in proplastids or chloroplasts through nitrite reductase [29]. The reduction of NO3− to NO2− depends upon several factors including plant species, plant carbohydrates, the activity of nitrate and nitrite reductase, and environmental conditions, i.e., light, pH, CO2, and NO3− [30]. The foliar application of Mo is highly associated with N accumulation in plant leaves. The absence of Mo promotes nitrate accumulation and indicates less N assimilation by the plants [28]. Exogenously applied Mo improved Mo concentration in plant tissues of 'Le-Conte' pear [27], grapes (cv. Merlot) [31], sunflower [26], peanut [32] and lettuce [28]. Our findings are in line with the aforementioned studies, suggesting that foliar application of 0.05% ammonium molybdate increased the concentration of Mo in the seeds of loquat. The reduction of Mo in leaves can be associated with the availability of NH4+ along with Mo. It has been reported earlier that the availability of NH4+ lowers the activity of nitrate reductase, causing Mo deficiency in plants [33]. A similar observation was made by Phillips [34], who reported lower molybdenum concentrations in leaf petioles from previously treated grape vines than from untreated controls. Furthermore, the data presented by Williams et al. [35] over a three-year period also showed a successive decline in leaf molybdenum concentration.
Foliar application of 0.05% ammonium molybdate severely reduced the Fe concentration in the leaves, fruit peel, and pulp of loquat, whereas in 'Le-Conte' pear the Fe concentration was significantly increased by Mo application [27]. Similarly, Zn concentration was observed to increase in fruit peel and seeds under the influence of foliar-applied Mo. Our findings corroborate Abd-El-Latif et al. [27], who also reported a remarkable increase in the Zn concentration of pear leaves under the influence of 1.5 cm·L−1 Mo. Abd-El-Latif et al. [27] also reported an increase in Mn concentration as a result of Mo application on 'Le-Conte' pear, which was in contrast with our findings. Similarly, synergism between Mn and Mo was found in canola [36]. Loquat leaves have high medicinal value and have been used as folk medicine for thousands of years. Cough, inflammation, chronic bronchitis, cancer, and diabetes are treated with the extract of loquat leaves in Chinese folk medicine [37]. Loquat leaves are rich in phenolics and triterpenes [38]. Concerning human health, it is very important to investigate the effect of foliar-supplied nutrients on the phytonutritional composition of loquat leaves, as the uptake of elements by plants can be associated with the availability of other elements. Quadrupole inductively coupled plasma mass spectrometry (ICP-MS) is the most suitable method for the determination of trace elements in fruits and vegetables and has prevailed as the most appropriate practice for clinical quantification [39][40][41][42]. Element detection through ICP-MS has become a predominant methodology because of its rapidity, determination limits, and the small sample quantity needed for analysis [43].
Schroeder and Balassa [44] were the first to identify that fertilizers were implicated in raising heavy metal concentrations in food crops. Studies of heavy metal uptake by plants have often revealed their accumulation at levels toxic to human health. More than 70% of cadmium intake by humans is sourced from fruits and vegetables [45]. According to the findings of various scientists, cadmium is a mobile element in the soil whereas it is immobile in plants, which is why it is found abundantly in plant roots [46][47][48]. The results indicate a significant decrease in cadmium concentration in loquat fruits from the mature green to the fruit ripening stage. A reduction of cadmium concentration over time in the leaves of holm oak and the fruit pulp of olives has also been reported earlier [22]. Although Cd is an immobile element in the plant body, its reduction under the influence of Mo has also been reported in several studies [49,50].
Loquat is one of the fruit crops that are highly sensitive to the quality of irrigation water [51]. Arsenic (As) is a heavy metal that is present in the groundwater of many areas of the world and can contaminate fruits through irrigation water [52]. Many studies have reported high-level arsenic toxicity in fruits, vegetables, and cereal crops irrigated with arsenic-contaminated water [53]. Our findings on the arsenic concentration in different plant tissues of loquat indicate that arsenic toxicity decreased with the progression from the mature green stage to the full ripe stage. The results corroborate Madejon et al. [22], who reported a reduction in the concentration of As over time in the leaves and fruits of olive and holm oak. Overall, the results suggest that Mo improved the nutritional composition of loquat leaves and fruit tissues and alleviated the toxicity of heavy metals.
Conclusions
Concerning human health, it is very important to investigate the effect of foliar-supplied nutrients on heavy metal accumulation in fruits, as the uptake of elements by plants can be associated with the availability of other elements. Therefore, this study was conducted to determine the concentration and availability of phytonutrients in loquat leaves and fruit tissues under the influence of the foliar application of molybdenum. Exogenous application of molybdenum proved to be successful in improving the mineral nutrition of loquat, as evidenced by increased macro- and micro-nutrients and decreased heavy metals in leaves and fruit tissues. There is a need to investigate the Mo-modulated molecular mechanisms regulating mineral uptake and accumulation in leaves and fruit tissues.
"year": 2021,
"sha1": "7e890bbae6e02826c123f2e4312a5248a3e6566f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4395/11/5/892/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "50c9bebc6590748c4de6dfc5260f88214c4d49c3",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Wireless Handheld Computers in the Undergraduate Medical Curriculum
Wayne State University Medical School has implemented wireless handheld computers or PocketPCs (PPCs) into all four years of the undergraduate curriculum. A transition from a passive to an interactive learning environment in the classroom, as well as administrative solutions for monitoring patient encounter data by students in their clinical rotations, were fostered by this educational technology. Implementation of the wireless devices into the curriculum will be described in this report. This will include the technical specifications and justification for the required device, as well as a detailed discussion of the different applications used for educational and administrative purposes by the preclinical and clinical students. Outcomes from the educational and administrative aspects of the project will also be presented in this report.
In collaboration with CampusMobility, Wayne State University (WSU) Medical School introduced wireless PocketPCs (PPCs) into its undergraduate medical curriculum. A pilot project that provided wireless PPCs to a group of second year medical students was initiated in the summer of 2002. 10 A wireless infrastructure was established in the medical school and library to support this initiative.
There was full-scale implementation of wireless PPCs into the curriculum at WSU Medical School in 2004. Wireless PPCs and software applications developed by CampusMobility and WSU Medical School were used for educational and administrative purposes. Educational applications include interactive classroom sessions, computer-based exams, the delivery of digitized course content, and real-time feedback. The devices were also used to facilitate administrative processes such as clerkship encounters, course evaluations, and scheduling. This report will describe the development, implementation, and outcomes of integrating wireless handheld computers into the undergraduate medical curriculum.
Implementation
In 2003, second and third year medical students were notified by list-serve e-mail that they were required to purchase a PPC with the specifications listed in Table 1. The second and third year classes were chosen for several reasons: 1) current medical school enrollment at WSU is approximately 260 students per class and we believed that involving the second and third year students in the initiative would provide a manageable PPC-using group; 2) both groups were currently accessible as rising first and second year students and could be easily advised of the requirement; 3) the third year curriculum represented a transition from the classroom experience to clinical clerkships. Therefore, the educational and administrative applications for the two groups selected for the initiative would be diverse and provide insight for full-scale implementation in the fall of 2004.
PPC Purchase - Table 2 shows the distribution of devices purchased by the second and third year students in 2003. Toshiba was the most popular because the devices available at that time (e740 or e750) offered the most options (processor speed, memory, two expansion slots, and built-in wireless) for the price. The Dell Axim was another popular choice by the students in 2003, most likely due to its lower base price, but this device required the purchase of an interface card to access the wireless network.
Our experience with several different devices in 2003 led to the selection of Toshiba PocketPC e800 (http://www.Toshiba.com) as the required device for the freshman and sophomore classes. Toshibas consistently caused the least hardware support issues and the e800 offered the best operating system and features for the price. The School of Medicine Mobile Device Steering Committee believed that these devices were well suited for our planned applications and would serve the medical students throughout their time in medical school.
Student Orientation -All four classes of medical students at WSU were required to use PPCs for educational and administrative functions in 2004. Therefore, we faced the challenge of managing over 1000 students via a wireless PPC implementation. Our experience, as well as a student survey, revealed that our technical support infrastructure needed expansion. We conducted workshops and training sessions for faculty, support staff, and the incoming class of students. Our support infrastructure included personnel from Biomedical Communications, the Office of Academic and Student Programs, and Medical Library Services.
As with the 2003 implementation, the medical students were notified by e-mail that they were required to purchase a PPC. Incoming freshmen were sent notification along with their orientation material and an introductory session was held for them during the first week of classes. The second and third year classes attended separate orientation sessions conducted by members of the PPC support team and CampusMobility. Basic device operation and the specific application software were covered during these sessions. In 2004, access to the wireless network at the medical school used a wireless authentication system that eliminated the need for an encryption code. Student registration with CampusMobility was provided by the School's e-mail authentication system. It was not necessary for faculty to have PPCs, but they were registered with CampusMobility so that the evaluation database could be populated. Once they logged in to CampusMobility, students downloaded specific software applications for the second or third year classes. Second year students received links for the following applications: attendance, interactive lectures, evaluations, student-instructor instant messaging, and course content. Third year students could access these platforms plus the patient encounter application and third-party software licensed by the WSU School of Medicine and Medical Library (http://www.lib.wayne.edu/shiffman/pda/index.php). Figure 1 shows a screenshot of the daily start page that the medical students would access after logging into the CampusMobility site using their PPCs. All applications were available to the third year students, while the sophomores could not access the patient encounter monitoring applications and did not have access to licensed third party software in 2003 or 2004.

Table 2. Devices purchased by the second and third year students in 2003 (counts by class, with total): Toshiba 750/755: 46, 52 (98 total); Toshiba 740: 40, 41 (81 total); Toshiba 570*: 0, 20 (20 total); Dell Axim: 64, 37 (101 total); iPAQ: 4, 21 (25 total). *Toshiba 570 provided to the pilot group. 10
Application Platforms
1. Attendance tracking - This feature was used for the courses and third year didactic presentations with required attendance. Prior to this application, it was necessary to use a sign-in sheet that required manual tabulation. Using the PPC-based application, students entered the lecture hall or small group lab and tapped the check-in button. A time stamp was listed next to their name on the administration page and the icon switched to check-out on the start page. Students in attendance were reminded to check out at the end of the session so that a second time stamp was placed next to their name. The IP address from which the student checked in was also listed next to their name to verify attendance at the required location. Because the IP address was unique to the location at which the PPC established an internet connection, it was possible to identify those students attempting to check in from home, the library, or a coffee house.
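The check-in/check-out logic described above can be pictured with a short sketch. The code below is not the CampusMobility implementation; the network prefix, data layout, and function names are illustrative assumptions used only to show how paired time stamps and an on-site IP check can establish attendance.

```python
# Minimal sketch, under assumptions, of the attendance logic described above.
from datetime import datetime
from ipaddress import ip_address, ip_network

LECTURE_HALL_NET = ip_network("10.20.30.0/24")   # assumed on-site address range
attendance = {}                                   # student_id -> list of (event, time, ip)

def record_tap(student_id, event, client_ip):
    """Store a check-in or check-out tap with a server-side time stamp and the client IP."""
    attendance.setdefault(student_id, []).append((event, datetime.now(), ip_address(client_ip)))

def attended(student_id):
    """Credit attendance only if check-in and check-out both came from the hall's subnet."""
    events = {event: ip for event, _, ip in attendance.get(student_id, [])}
    return ("check-in" in events and "check-out" in events
            and all(ip in LECTURE_HALL_NET for ip in events.values()))

record_tap("s001", "check-in", "10.20.30.17")
record_tap("s001", "check-out", "10.20.30.17")
print(attended("s001"))   # True: both taps came from the on-site network
```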
2. Course evaluations - WSU medical students were required to complete monthly evaluations of clerkships, courses, individual lecturers and lab instructors. The PPC platform replaced the paper and pencil version of the evaluation forms. It was no longer necessary to score each evaluation form and manually distribute the results to the faculty. To initiate the process, a course administrator would open an evaluation on the server and the link then appeared on the students' task list on the start page (Figure 1). Once the evaluation was completed, the link disappeared from the students' task list. Faculty and course directors logged in to the administration page of the CampusMobility website to view their evaluation results in real time. We assured anonymity by hiding data until the number of submitted evaluations exceeded five, to prevent association of evaluation results with an individual student. Because PPCs have a note-taking feature, it was possible for students to enter free response comments in a text box at the end of the standard multiple choice evaluation form.

3. Interactive learning - Wireless PPCs were used to introduce interactivity into the didactic large group presentations at WSU Medical School. Multiple choice questions were prepared by the lecturers and posted on the portal for the PPC users to access in the classroom. Questions could be opened and closed to meet the needs of the presentation. Some lecturers preferred that their questions were made available to the students immediately before lecture and closed immediately after. Others left the interactive sessions open for several days to permit review by the students. Figure 2 shows an interactive question displayed on the PPC and the audience response graph that would be viewed on the administration page and projected to the class. Panel A shows a sample question viewed by the students and Panel B shows the audience response results that were viewed during lecture by the entire group. Students accessed the multiple-choice questions in the lecture auditorium and then responded in one to two minutes. It was possible to embed images into the questions (Figure 2A) and the PPC platform permitted images to be enlarged so that students could view fine detail if necessary. The instructor and the class viewed responses in real time during the presentation, allowing modification of the lecture content "on the fly" based on subject matter deficiencies demonstrated by the responses in the large group setting.
In 2004, 11 self-assessment modules were introduced. These self-assessment modules correlated with the different units of the Immunology, Microbiology, and Infectious Diseases course. These modules were comprised of approximately 10 multiple choice questions each that were only available to students by PPC. These sample questions could be accessed and answered anytime before the exam. After completing the self-assessment module the student would receive a score and could review their choices along with the correct answer.
4. Send comment -Because our class size has consistently been in excess of 260 students, effective real-time communication with the faculty has not been possible until now. Students could anonymously post a question to the lecturer using the send comment button on the PPC start page. Student comments were viewed through the administration portal from the desktop computer in the front of the lecture hall. Time permitting, the lecturer would privately review the questions and address responses to the entire class. If there was not enough time to permit response in class, then the faculty would forward the questions to their e-mail account and respond to the whole class during the next lecture period or via the list-serve.
Although this feature was ostensibly no different from a student sending a question to the lecturer by email after class, classroom use of the mobile devices made the process more dynamic. The send comment feature opened a live chat line between the student body and lecturer. The learning environment was enhanced by permitting the lecturer to respond immediately to areas of misunderstanding. This system created a connection between the student and teacher that did not otherwise exist in the large group format.
5. Lecture content - WSU used BlackBoard 6.1 in 2004 as a host for a variety of educational resources such as course content, on-line assignments, bulletin boards, and a grade book. Because CampusMobility and BlackBoard are application partners, it has been possible for the Medical School to provide the PPC-using students with access to course content developed for the web. The graphic user interface permitted filtering course content such as text, tables, and figures for the PPC. In addition, all lecture presentations were videotaped and streamed on the web using Windows Media. Students used their PPCs to access the streaming media to review lecture presentations or simply listen to the audio portion. Because the audio portion of the lectures contained no copyright-protected material, it was possible for our students to download these files and play them off-line.

6. Medical decision-making software - Licenses for medical decision-making software were available for our third year students to use in their clinical rotations. This content is being used to train our students to use mobile devices to aid diagnoses at the bedside.
In the future, the software will be made available for the preclinical first and second year students. This will permit integration of these various applications into the basic science curricula, albeit on a more remedial level. For example, a first year small group learning assignment may be to make a differential diagnosis for pneumonia from a case presentation. Students could use the diagnostic algorithm provided by one of these applications as part of the problemsolving exercise.
7. Patient encounter monitoring - Monitoring the clinical clerkship experiences of medical students is required for U.S. and Canadian medical schools to meet the accreditation standards of the Liaison Committee on Medical Education (LCME). 11 Accurate historical logs of a student's clinical experiences permit the learner and physician-teacher to reflect on the educational outcome in a more controlled setting. WSU Medical School addressed this task in 1999 by using Palm Pilots to track the clinical experiences of third year students in the obstetrics and gynecology clerkship. 4 The Palm Pilots eased the burden of logging encounters using paper and pencil by permitting students to maintain records of their patient encounters on the mobile devices loaded with software that had defined database fields. Student data would be transferred to a dedicated server using a phone line connection. Our PPC initiative was a more robust extension of this pioneering application.
Patient encounter software was downloaded from CampusMobility and stored on each of the third year students' PPC. WSU Medical School clerkship directors authored unique experience encounters based on the educational objectives defined for a particular discipline. Students would select from a drop-down menu one of the seven third-year clerkships: Family Medicine, Internal Medicine, Neurology, Obstetrics/ Gynecology, Pediatrics, Psychiatry, or Surgery. Students would then select the participating hospital or clinic, admission status (consult, inpatient, or outpatient), age, ethnicity, the diagnosis and procedure.
Students would also rate the encounter and their level of participation. Data collection was an educational exercise and remained within Health Insurance Portability and Accountability Act (HIPAA) compliance. 12 Free response fields were also included to permit the addition of unanticipated comments or experiences. Completed case logs were downloaded to the server via wireless connection or by cradling the PPC with a desktop computer that enabled pass-through to the internet.
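To make the structure of these logs concrete, the sketch below shows one possible shape for a single encounter record. It is not the CampusMobility software; the clerkship list and field meanings follow the description above, while the field names and example values are illustrative assumptions, and only de-identified educational data would be stored.

```python
# Minimal sketch, under assumptions, of a patient encounter log entry as described above.
from dataclasses import dataclass
from typing import List

CLERKSHIPS = ["Family Medicine", "Internal Medicine", "Neurology",
              "Obstetrics/Gynecology", "Pediatrics", "Psychiatry", "Surgery"]

@dataclass
class EncounterLog:
    clerkship: str                 # one of the seven third-year clerkships
    site: str                      # participating hospital or clinic
    admission_status: str          # consult, inpatient, or outpatient
    patient_age: int
    patient_ethnicity: str
    diagnosis: str
    procedure: str
    encounter_rating: int          # student's rating of the encounter
    participation_level: int       # student's rating of their own participation
    comments: str = ""             # free-response field
    synced: bool = False           # True once uploaded via wireless, cradle, or IR kiosk

    def __post_init__(self):
        assert self.clerkship in CLERKSHIPS, "unknown clerkship"

logs: List[EncounterLog] = [
    EncounterLog("Psychiatry", "DMC-affiliated hospital", "inpatient",
                 34, "not recorded", "major depressive disorder",
                 "diagnostic interview", 4, 3)
]
```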
During the 2003 implementation of PPCs in the clinical rotations, we anticipated that many of our students participated in clerkships at medical facilities that lacked wireless access. Therefore, the application developed for the patient encounter monitoring was downloaded during a syncing process and saved on the device. A log of patient encounter data was stored on the device until the student opened a sync gateway to the server. This was in contrast to the second year applications that required wireless capabilities with continuous internet access. Students could connect to the server to transmit data and receive software upgrades by cradling their devices to a desktop computer. In addition, we established an infrared (IR) kiosk to accommodate third year students without wireless cards. Because all PPCs have IR transmission capabilities, it was possible to provide a means for third year students to transmit their patient encounter logs to the server. This kiosk is currently available in the medical school, but WSU Medical School has plans to install IR kiosks at the medical library and at some of the clinical facilities that host our third and fourth year medical students. Until more facilities provide wireless capabilities to the students, these IR transmission portals provide a wireless means to synchronize data.
Use of mobile handheld devices for monitoring patient encounters eased the administrative burden on clerkship directors and is helping the Medical School administration meet LCME requirements. 11 Figure 3 (Appendix) is an example of a clerkship encounter summary for Psychiatry. In this sample, 37 students entered 647 encounters with 214 different diagnoses from 10 different Detroit Medical Center (DMC) affiliated hospitals as well as other non-affiliated sites. This system permitted identification of students who did not fulfill specific clerkship objectives. These students could be offered an independent training module or simulation to fulfill the requirement. Students also had access to the log of their clerkship experiences, which we expect will facilitate their preparation for step II of the USMLE.
Project Outcomes
Evaluations -In 2004, the paper and pencil evaluation forms were discontinued and all year one and two students were required to complete course, lecturer, and lab evaluations using PPCs. All students were required to complete the course and lab evaluations, but the class was divided into three groups of approximately 85 students for the lecturer evaluations. Using this method, each student was required to complete only one third of the lecturer evaluations, thereby lessening their burden of having to complete evaluations every month, while still generating a representative set of evaluations for the lecturers teaching portfolio.
Evaluation data was made available to the course directors and for the first half of 2004 (January 1 -June 1), 2,499 course evaluations were collected using this system. Evaluation of lecturers generated approximately 1,445 responses during this 6-month period while the number of lab evaluations was approximately 6,120. Formerly, this data would have been collected and tabulated using scantron sheets. An analysis of our return on investment (materials and labor) for the wireless evaluation system in comparison to the paper and pencil method will be the subject of a separate report.
Patient encounter data - WSU Medical School pioneered the use of handheld devices for the collection of obstetric and gynecology patient encounter data in 2001. 4 Therefore, it was a natural transition to use the PPCs and patient encounter software for all seven clerkships. The advantages of the mobile device monitoring application over the previous system were significant. Previously, students recorded patient encounters using paper logs that were individually evaluated by the clerkship department office. The paper-based system was not standardized and the logistics of managing the large number of encounters made it difficult to accurately assess fulfillment of the clerkship learning objectives. With the current PPC system, it is possible to view a spreadsheet with encounter data for an individual student, clerkship, or site. Students who missed a required clinical experience were assigned cases or simulations for individual review. It will be possible to use the data being collected with the mobile handhelds to measure outcomes and refine the clinical learning objectives.
Third year medical students collected patient encounter data in seven different clerkships (Psychiatry, Neurology, Family Medicine, Surgery, Pediatrics, Obstetrics/Gynecology, and Internal Medicine). During a 2004 test period from January 1 to June 1, the total number of cases collected by 60 third-year students was 1,972, which averages to 32.87 encounters per student. Figure 3 shows an example of a patient encounter database for 37 students in the Psychiatry Clerkship.
Student surveys
For the 2003 initiative, a voluntary survey was posted at the end of the semester on the second year students' PPC start page. Students responded to the survey that was designed to gather information about the educational and administrative applications, as well as technical aspects of using PPCs. In 2004, a web-based survey was also posted that allowed students to share their experiences with the interactive learning interventions provided as part of the PPC initiative.
Fifty-six percent of the students purchased Toshiba models while fewer students (42%) purchased the Dell Axim. The Dell device lacked internal wireless capabilities and was less expensive than the recommended Toshiba models. The third year students were not required to own a PPC with wireless capabilities so significant numbers purchased the Dell device (Table 2). Although most students expressed satisfaction with their PPC model in the survey, 39% expressed a desire to upgrade to internal wireless capabilities. These are most likely the same students who purchased the Dell PPCs that lacked internal WiFi cards.
Questions addressing technical support issues confirmed our experiences; a majority (60%) of the students required hands-on assistance with their PPC. We asked a specific question addressing accidental hard reset of the PPCs to gauge the incidence of lost data caused by battery drain; 44% of the students lost data due to accidentally resetting their PPC two or more times. We anticipated in 2003 that technical issues would decrease with increased implementation of interactive teaching sessions, evaluations, self-assessment modules, and the availability of PPC-based course content. Students began to assume greater ownership responsibilities, including charging their PPCs every day, as daily use increased. Another issue was the use of wired equivalent privacy (WEP) encryption on our wireless local area network (WLAN), which required manual entry of an alphanumeric key code. An infrastructure change in 2004 eliminated the WEP encryption so that access to our WLAN was through an authentication server.
In a previous report describing our PPC pilot project 10 , students were asked their preference for exams. In that project, 70% of the pilot group preferred paper and pencil exams. A primary complaint was that the PPC exams did not permit flagging questions for later review. Because each question was answered and submitted individually, students were not able to review their responses. A key revision to our application has been to allow students to download entire exams and self-assessment modules, select answers, and then submit the entire form at one time. This platform was first developed for the patient encounter application for clinical clerkships. Security measures such as the browser lockdown and proctoring console described in our previous report 10 will be used during the development of PPC-based exams. In comparison to the current paper and pencil exams, PPCs with appropriate security applications will better prepare our medical students for the computer-based USMLE step exam sequence. 13

Feedback from those students who participated in the survey pertaining to the PPC-based interactive content was supportive. Eighty percent of the respondents participated in at least seven of the eleven self-assessment modules. Of these students, 60.3% found these interventions "extremely useful", a ranking of 9 or 10 on a scale of 1 (not useful) to 10 (extremely useful). A majority of respondents would recommend the interactives to other students as a tool to prepare for exams (88%) and to organize course material (68%).
Conclusions
We chose to promote the use of PPCs by our undergraduate medical students based on the potential for mobile classroom and clinical applications. 5 This PPC initiative is training future physicians to use a tool that is becoming increasingly popular in the medical community for accessing patient monitoring applications, drug databases, and diagnostic algorithms. 14 Hospitals are migrating to mobile device collection of patient information and prescription writing. 1,2 Hospitals are also considering using handhelds to distribute, manage, and track graduate medical education requirements. Our 2002 pilot project permitted us to test a variety of wireless implementations in the undergraduate medical curriculum prior to the large-scale implementation described in this report. We gained insight into a variety of logistic issues, technical concerns, and student user styles.
Until we implemented PPCs, it was not possible to have true interactivity with the large class size. Introduction of interactive learning sessions into the curriculum proved to be a valuable educational tool. Students were motivated to participate and assess their own knowledge base prior to exams. Depending on the lecturers' preference, interactive sessions were conducted either during or immediately after a lecture, while others were saved for hour-long review sessions the day before an exam. Lecturers who participated in this activity discovered that the audience response system significantly enhanced the classroom experience. Student attention to the subject matter peaked at a time when concentration typically faded. An unexpected benefit of the interactive sessions was faculty training. A number of lecturers discovered that their problem solving questions were not challenging, thereby providing little benefit as teaching tools. An aim of our faculty development office is to improve the problem-based teaching and exam writing skills of our lecturers.
PPCs eased a number of administrative burdens including patient encounter monitoring, collecting evaluation data, and attendance. All of these activities were paper and pencil based in the past, requiring hundreds of person-hours for collection and tabulation of the data. In comparison to paper and pencil methods, handheld-collected evaluation and patient encounter data had enhanced validity. Because completion of the entire evaluation was required before the task would disappear from the start page, students were more inclined to select a response for each question. In the past, a number of students received false credit for turning in blank scantrons when paper and pencil forms were in use. The log-in feature has resulted in an unexpected outcome: the increased accuracy using the attendance-tracker has renewed interest in professionalism standards for our students. PPCs have also proved to be invaluable tools for patient encounter monitoring and as partial fulfillment of LCME accreditation standards. Reporting accuracy and compliance improved because students used diagnosis parameters that were specifically tailored for individual clerkships and provided as drop-downs in the encounter software. This system also permitted clerkship administrators to monitor student progress, as well as the quality of the learning environment at participating hospitals. A future application for the devices will include preceptor-based evaluations of student performance using handheld devices. These evaluations will be stored as part of the encounter databases.
We plan to introduce PPCs into the existing School of Medicine objective structured clinical examinations (OSCE) as administrative tools. PPCs will be used to score student performance by the graders during OSCEs and the assessment will be immediately entered into the database. As with the evaluations and patient encounter monitoring, we anticipate that using PPCs for OSCE administration will provide a return on our investment in the CampusMobility platform.
A value-added feature that raised support among the students was the medical decision-making software provided by the medical school. Students discovered that their PPCs could be used as medical reference tools in the clinic. PPCs and resident software are beginning to transform the clinical learning environment in Detroit metropolitan hospitals by providing our students with real-time access to contemporaneous information. We are witnessing a culture shift because at times, students gain an information advantage over their clinical faculty. Another feature under development is teach-me links associated with the different diagnoses. Content developed by our faculty or extracted from medical-decision making software will be used to enhance the educational experience of students monitoring their clinical encounters.
In addition to the third party medical software provided to our students, we are in the process of developing course content on the PPCs. In the past, our content has been available on desktop computers to our students using Blackboard. However, in 2003 Blackboard content was not universally accessible in the Windows CE format for PPCs. Implementation of Blackboard 6.1 in 2004 has made adaptation of existing course content for the PPCs feasible. Therefore, a current focus is to teach preclinical students to access course content during large or small group learning activities.
Student survey results reflected satisfaction with the effort put forth by the faculty who contributed to the educational interventions described in this report. We continue to develop the application platforms used by our undergraduate students and plan to expand usage in participating teaching hospitals. We will be tracking and reporting on the current class of PPC users into their clinical clerkships to assess the outcome that this initiative has on patient care. Reports of our findings may be useful for other medical schools considering the PPC-based educational interventions and administrative applications described in this report.
"year": 2005,
"sha1": "01da1396755b889168a8b99a5fa5cd00dcb21d7e",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.3402/meo.v10i.4386?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "01da1396755b889168a8b99a5fa5cd00dcb21d7e",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
User Care Level Audit of Information Data Security at PT XYZ Using Guttman Scale
The purpose of this research is to determine the level of care of information technology users regarding information data security. The utilization of information technology in supporting corporate objectives must be balanced with the level of infrastructure support and user care for data security. Companies are more likely to build and develop infrastructure that can support every business process, regardless of the level of care and knowledge of their users. Users play an important role in supporting the security of information data, whether personal data or the company's information data. Assessing and improving data security in a company therefore requires not only improved infrastructure, but also user care and knowledge in support of each business process that relies on information technology. The method used was to evaluate users by auditing their level of knowledge of, and concern for, information data security, yielding the level of user care for information security in support of each company's business processes. The results obtained in this study show that the overall level of user care corresponds to an average of 75.64% of users being concerned about data security. In detail, the company's support for data security is fairly high, with 98% of users following the procedures set by the company for using information technology. In contrast, reflecting the low level of user care, 42% of company users ignore data security, as demonstrated by not using the login facility. Based on these results, the company needs to increase the users' level of concern for data security by providing regulations related to the use of information technology, as well as support for increasing user knowledge, so that personal and corporate data can be maintained properly.
Introduction
Technological developments have led company management to continually demand solutions based on information technology. Information technology is used to encourage and provide business value and to increase utilization and productivity [1]. In its implementation, information technology has become a necessity for a company. Companies invest heavily in the development of information technology in order to support every business process of the company. A company's success is now measured by how well it manages information technology [2]. However, the company's focus on improving performance through information technology is not balanced with attention to the security of both personal and company data. Companies tend to allocate large investments to improving technology infrastructure compared to security concerns [3].
In fact, the security of information technology is very important, because security problems can trigger a mechanism to control access to various devices [4]. To improve the efficiency and effectiveness of the secure use of information technology in the company, a mechanism is required to evaluate each use of information technology, especially in relation to data security. Quality control of information technology can be evaluated using three instruments: application and infrastructure mapping, risk assessment, and separation of tasks [5]. Risk assessment is one approach that can be taken to evaluate the use of information technology, especially in terms of data security, through audit mechanisms.
Information technology audits typically focus on the audit object, indicators, processes, and management levels [6]. Previous results have shown that the most common evaluations carried out by auditors concern the control of application processing, data integrity, privacy, and security [7]. Any audit or evaluation of the use of information technology aims to ensure that each system or process fulfils its objectives [8]. However, the auditing processes used to date focus on the security of data on the computer network alone, regardless of the users and their level of care. So that data security is considered in terms of both infrastructure and users, this research aims to determine the level of user care for the security of information data on the technology that the company uses.
Methods
In this research, the audit process consists of several phases: pre-audit, on-site audit, and post-audit. The pre-audit, or preliminary stage, includes preparation of the initial audit scope and preparation of the questionnaire. The on-site audit phase is the audit process itself, including the direct survey, data collection, and the recording of findings and questionnaire responses. The final stage is the post-audit, which covers the study of findings and the analysis of the user survey results. The system user auditing process is shown in Figure 1, which depicts each stage of the information technology security auditing process, consisting of three steps: pre-audit, on-site audit, and post-audit. Each step has its own stages, as follows:
Pre-Audit
The pre-audit phase is the early stage in the data security auditing process, in which the audit scope and audit targets are determined. In this case, the scope of the audit is information data security at company XYZ. The audit targets are the users of the company's information system, amounting to 52 users divided across several departments of the company. The audit process begins with the auditor collecting information related to security audits through interviews and document review [9].
On Site Audit
The on-site audit is the phase of the information technology security audit in which questions are arranged and surveys are conducted directly with the users of the information technology. In this case, 52 users were surveyed, covering all departments that use information technology in company XYZ. The question items provided to the users are listed in Table 1. Table 1. Question items for the audit process
1. Does every user log in when using a computer?
2. Does each user use one computer?
3. Are all employees reminded about the importance of computer security?
4. Are access rights to shared data regulated according to employee functions?
5. Does the operating system on the computer in use often receive the latest patch updates?
6. Is antivirus software installed on the computer in use and always updated regularly?
7. Does the company use its own application?
8. Are there procedures for employees operating computers according to their functions and access rights?
9. Are there well-documented procedures for backing up data and applications?
10. Does every employee who uses a computer follow the procedures set by the company?
11. Are there standards for the prevention and detection of virus disturbances?
12. Are there standard procedures to limit the use of external storage (such as flash drives) in the office computer system?
13. Is there regular information technology training for employees?
14. Are there reporting procedures related to damage or loss of data in either the system or the computer equipment?
15. Are there standard procedures for computer-use ethics and access rights for employees or users?
Post Audit
The last process in the information technology security audit is the post-audit, which covers the results of the analysis and the findings. The results of the analysis are the end product of analysing the user data on information data security, as well as examining user behaviour towards maintaining data security.
Results and Discussion
This section presents the findings of the information technology security auditing process. Table 2 shows the results of the questionnaire responses, i.e., the users' answers to the given questions. These results vary per question according to the number of yes and no answers given by the users. Question no. 1 relates to the users' habit of logging in when using the computer: 30 respondents stated that they never used the login facility when using the computer. Questions no. 2 and 3, regarding the use of one computer per person and whether all employees are reminded about computer security, had the same number of respondents answering yes. In addition, for question no. 4, 51 respondents stated that each employee uses a computer according to their function. Users were then asked about device updates, antivirus, and applications; all respondents answered yes, stating that all devices, antivirus software, and applications were periodically updated. For question no. 8, regarding whether users follow the rules and access rights, 48 respondents answered yes and the rest answered no. Questions no. 9 through 11, related to data backup procedures, data loss prevention, and data security, showed more than 50 respondents stating that the procedures have been implemented properly. For the subsequent questions regarding the use of external storage media and training for users, 48 and 49 people, respectively, stated that there are no restrictions on the use of external storage media and no security training for users. The responses in Table 2 were then used to calculate the value, percentage, and category of each question relative to the number of respondents. The values, percentages, and categories were calculated using the Guttman scale, which was chosen because it provides a non-parametric method that is useful for evaluating quality ratings in a variety of situations [10]. The calculations for the values, percentages, and categories are given below, where the value is the number of respondents answering yes.
Percentage = (Value / Number of respondents) * 100%
As for the category, the Guttman scale provides two categories with the value ranges given in Table 3. The resulting values, percentages, and question categories are presented in Table 4. Of the 25 questions given to respondents, the results showed that 21 questions fell into the sufficient category and 4 questions into the low category. The 21 sufficient-category questions, which relate to computer use, data security warnings, access rights, application updates, antivirus, and data backup procedures, cover practices that have been carried out fully by the company and the information technology team. The 4 low-category questions concern user habits, namely the use of the login facility, the use of additional storage media, data security-related training activities for users, and computer-use procedures and permissions for the user. These four low-category questions show that the users' level of concern for data security in the company is low: users do not use the login facility, and the company does not provide users with data security training.
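For illustration, the calculation described above can be expressed in a few lines of code. The sketch below is not part of the original study: the 50% cut-off separating the two categories is an assumed placeholder for the ranges of Table 3 (which are not reproduced here), and the yes-counts are examples derived from the response counts reported in the text (22 of 52 respondents using the login facility for question 1, and 48 of 52 following access-right procedures for question 8).

```python
# Minimal sketch of the Guttman-scale value/percentage/category calculation.
# The 50% category cut-off is an assumption standing in for Table 3.
N_RESPONDENTS = 52
yes_counts = {1: 22, 8: 48}   # question number -> number of "yes" answers (examples from the text)

def guttman_row(value, n=N_RESPONDENTS, cutoff=50.0):
    """Return (value, percentage, category) for one question."""
    percentage = value / n * 100                     # Percentage = (Value / Respondents) * 100%
    category = "sufficient" if percentage >= cutoff else "low"
    return value, round(percentage, 2), category

for question, value in yes_counts.items():
    print(question, guttman_row(value))   # e.g. 1 -> (22, 42.31, 'low'); 8 -> (48, 92.31, 'sufficient')
```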
Conclusion
The results of this study show that the overall level of user care corresponds to an average of 75.64% of users being concerned about data security. In detail, the company's support for data security is fairly high, with 98% of users following the procedures set by the company for using information technology. In contrast, reflecting the low level of user care, 42% of company users ignore data security, as demonstrated by not using the login facility. Based on these results, the company needs to increase the users' level of concern for data security by providing regulations related to the use of information technology, as well as support for increasing user knowledge, so that personal and corporate data can be maintained securely.
"year": 2020,
"sha1": "6245e772f81ef554c4b4fb1fee21e081e20fc387",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/879/1/012050",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "343033c3de3e23b94e1254a683d09712dd3395f8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
How you get it matters: Moderating role of transfer method in guiding ownership assignments for modified objects
Introduction: Previous research has found that value change and creation drive people to support the laborer more than the original owner in ownership reasoning for modified objects; however, the transfer methods used to resolve conflicts have largely been ignored. In this work, two studies were designed to investigate the role of value change and creation in adults' labor-based ownership judgments under four transfer conditions (i.e., take/steal/borrow/find). Methods: Scenarios involving different extents of value change and creation and different transfer methods were presented to Chinese adult subjects, after which they were asked to judge who the owner is. Results: People were more likely to assign ownership to the original owner in the take, steal and borrow conditions but not in the find condition; this reasoning held regardless of whether the original materials showed high or low value appreciation or successful creation, and it applied to raw materials with low (Study 1) and high values (Study 2). In addition, the effect of value change and creation on ownership reasoning varied according to the transfer method. Conclusion: The results suggest a moderating role of transfer method in people's ownership assignments, which will provide insights for the real-life mediation of ownership conflicts.
Introduction
We are living in a world that involves property and ownership. Ownership is an important social institution that maintains human relationships (Epstein, 1978; Kanngiesser et al., 2016), and it is a concept used to protect individual property rights and respect the property rights of others (Kanngiesser et al., 2019, 2020). As the idiom goes, possession is nine-tenths of the law. Thus, many interpersonal conflicts are incurred by ownership problems. Studies have empirically explored the preferential resolution strategies of laymen when confronted with these problems (Friedman, 2008; Friedman and Neary, 2008, 2009; DeScioli and Karpoff, 2015; Fast et al., 2016, 2017; DeScioli et al., 2017) and found that their resolutions were not always consistent with the legal provisions (Fast et al., 2016, 2017) or the final verdict in court (DeScioli and Karpoff, 2015; DeScioli et al., 2017), which suggests that legal judgments regarding certain property issues may not correspond to the psychological expectations of laypeople.
When we intervene in property issues, labor is an important cue for resolving conflicts. According to Locke's property theory, the work of a man's body and his hands should be seen as rightfully owned by him (Locke, 1690/1978). Psychological research supports that both young children and adults tend to use labor rules to attribute ownership (Beggan and Brown, 1994; Kanngiesser and Hood, 2014a; Levene et al., 2015; Hartley et al., 2021). For example, 2-3 year-olds would protest when someone threatened to take away their homemade clay animals but would not protest when the original materials were taken away (Kanngiesser and Hood, 2014a; Hartley et al., 2021). In a study conducted by Beggan and Brown (1994), adults were more inclined to assign ownership to an agent when the agent modified a discovered branch into a plane than when he just played with the branch. Palamar et al. (2012) also revealed that people were more likely to assign ownership to the man who intentionally knocks a pineapple from a tree rather than the man who picks it up from the ground afterward. Occasionally, ownership disputes arise between the original owner of a material and the modifier of the material. Imagine a piece of wood originally owned by someone that is then modified by another person into a set of furniture. Should we assign ownership of the furniture to the modifier or to the original owner of the wood? Some scholars addressed this question with British 4-year-old children and adults (Kanngiesser et al., 2010). First, they gave subjects a piece of clay as their own and asked the subjects to make animal models out of the clay provided to the other subjects. The results showed that most preschoolers acknowledged that ownership transferred to the model maker, but many adults did not make such an assumption. Cross-cultural research showed that Japanese adults were more likely than their British counterparts to transfer ownership to the laborers when they saw third-party conflicts between the initial owners and the modifiers (Kanngiesser et al., 2014). However, another study revealed that Chinese adults supported the original possessor rather than the modifier as owner (Li et al., 2019). It can be seen that there are cultural differences in ascribing ownership based on the labor rule. Some studies have also shown that adults give priority to the laborers over the original creators when it comes to intellectual property (Fast et al., 2016, 2017; Burgmer et al., 2019). For example, they tend to ascribe ownership of an artwork to the person who actually made it rather than to the person who came up with the idea (Burgmer et al., 2019) and hold that the alteration should be permissible if the laborer has acknowledged the original creator of the artwork (Fast et al., 2016, 2017). Scholars have proposed that value change might be an underlying mechanism for people to reason about ownership with the labor rule, as ownership and value often go together (Pesowski et al., 2022). Previous research showed that there is a labor-valuation effect in people's preferential conflict resolution (Burgmer et al., 2019). For example, when a laborer puts another person's idea into practice, which makes the final artwork highly valued, people often allocate more money to the laborer than to the idea giver (Burgmer et al., 2019).
Kanngiesser and Hood (2014b) presented adult subjects with scenarios in which an artist takes some materials from another man and modifies them into artwork, and they found that people were more likely to transfer ownership when the artist's labor greatly increased the value of the original materials but less likely to do so when the value changed little. In addition, the subjects were more inclined to transfer ownership for low-value materials (i.e., plastic) than for the high-value materials (i.e., gold).
Creation is another important element that people refer to when making ownership judgments on the grounds of the labor cue (Kanngiesser et al., 2010, 2014; Levene et al., 2015; Faigenbaum et al., 2018; Davoodi et al., 2020). Children and adults often regard others as having stronger claims over things they created than things they found (Faigenbaum et al., 2018; Davoodi et al., 2020). Kanngiesser et al. (2010) showed that children would transfer ownership to the laborers when the modifier creatively made an animal model out of the original owner's clay, but they would not when the laborer just cut off a small piece of the clay using a knife. The same result was found in a cross-cultural study (Kanngiesser et al., 2014). Levene et al. (2015) established a scenario in which a man either crushed a can into an ashtray with a rock (creation condition) or just dented it without changing its function (no creation condition), and the ashtray or dented can was finally picked up by another man. The authors found that adult subjects were more likely to attribute ownership to the laborer when creation was involved, thus suggesting that creation affects people's support for the laborer.
Most of the aforementioned studies did not explicitly discriminate among the transfer methods by which the laborer obtained the original materials from the original owners. People may obtain others' objects through illegal (e.g., stealing) or legal (e.g., borrowing) means and process them in ways that greatly increase their value or creatively change them, thereby causing complex property conflicts. In Kanngiesser and Hood's (2014b) study, they depicted a scenario in which the artist takes the materials from the original owner. While take is often used as a legitimacy-neutral verb, we do not know whether the artist's taking behavior was permitted by the original owner. People's ownership assignments may change if the modifying behavior is not approved by the original owner or if the raw materials are obtained in some illegal way (such as stealing). Previous studies have shown that people would not transfer ownership for stolen objects and lost objects, but they would permit ownership transfer in the gift-giving context (Blake and Harris, 2009; Li et al., 2018; McDermott and Noles, 2018). Hook (1993) explored people's ownership assignments when someone borrowed another person's wood and modified it into a falcon without permission. They found that children above 10 years old and adults would not support the modifier as the owner of the falcon, even if the value of the wood had appreciated greatly. The same result was also found in Chinese adult samples (Li et al., 2019). Recently, Li et al. (2020) examined Chinese adults' ownership assignments when faced with disputes between the original owner and the modifier in three transfer contexts (i.e., keep, borrow, and find). In the keep context, the original owner keeps his wood in the modifier's house and the modifier turns it into a set of furniture without permission. In the borrow context, the wood was borrowed from the original owner by the modifier. In the find context, the wood was lost by the original owner and found by the modifier. These authors found that subjects tended to support the original owner as owner of the furniture in the keep context but tended to support the modifier in the find context; the subjects did not clearly support either the original owner or the modifier in the borrow context. In that study, the authors did not discriminate the degree of value change (e.g., large value change, small value change) in the scenarios. While the modification from wood to furniture can be regarded as a large increase in value, we do not know how people will attribute ownership when the value change is small. In addition, the study selected only one kind of raw material (wood) in the scenarios; however, the initial value of the raw material may affect subjects' ownership assignments, as Kanngiesser and Hood (2014b) discovered.
Current study
In summary, previous studies have acknowledged the role of value change and creation in people's resolution of ownership disputes between the original owner and the modifier, but most of these studies examined the two factors independently. Value change and creation may affect people's ownership judgments in interactive ways. For example, one person may transform another person's cup into a sound box, making a creative change, while the prices of the cup and the sound box remain at a similar level. Alternatively, a person may simply handle an item without creatively changing it but still increase its value. As a case in point, people often assign higher value to items (e.g., shirts) worn or used by celebrities, even if the shirts themselves have not been changed at all (Huang et al., 2017). Testing how people solve ownership problems when value change is pitted against creation can help us understand the relative importance of different cues in ownership representation.
Moreover, few studies have explored how the context in which a modifier obtains raw materials affects subjects' ownership judgments. Because the transfer methods differ in legitimacy, people's ownership judgments may be swayed to different extents when the modified objects gain different amounts of value or undergo creative changes. People are unlikely to transfer ownership for stolen objects even if the processing has made them appreciate greatly, because stealing others' items is commonly seen as seriously illegal behavior. But people may permit ownership transfer for found objects, because finding is a relatively neutral description and people tend to assign responsibility to the original owners for losing the objects (Li et al., 2020). They may also permit ownership transfer for borrowed objects, reasoning that the lenders did not declare beforehand that processing the original objects was not allowed. Such permissions are more likely to occur when the laborers make the original objects gain great value rather than little value.
To address these questions, we comprehensively investigated the roles of value change, creation and transfer method in people's labor-based ownership judgments. The results will help us uncover the relative importance of ownership cues in resolving property conflicts and provide insights into methods of settlement in real life. We performed two studies to investigate Chinese adults' preferred resolutions for modified objects with low initial value (Study 1) and high initial value (Study 2), using a scenario-based method as many previous studies have done (e.g., Kanngiesser and Hood, 2014b; Li et al., 2020). The value change (high, low) and creation (with/no creation) of an object and the transfer methods were manipulated in the scenarios. With regard to value change, a laborer processes an original owner's object, which causes a large or small change in its value. With regard to creation, a laborer either transforms an original owner's object into a new one, thus changing its function (creation condition), or just adds some traces to it without changing its original function (no creation condition). This follows previous studies (e.g., Levene et al., 2015), since people readily link creation with functional changes (Judge et al., 2020). We selected four transfer methods in the study, i.e., steal, borrow, find, and take. The steal condition was selected because stealing is obviously illegal behavior and the taking is not permitted according to common sense; even three-year-old children would deny that ownership was transferred in such a context (Blake and Harris, 2009; Li et al., 2019). The borrow condition was selected because the taking behavior is often approved in this context, although the modifying behavior may not be allowed by the lender. The find condition was selected because in this context the finder may not know that the discovered objects were originally owned, and so the objects were inadvertently modified. Finally, we set a take condition because it carries no clear license or legitimacy meaning, and it thus offers a baseline for comparisons with the results from the other three conditions and with those of a previous study (i.e., Kanngiesser and Hood, 2014b).
As in previous studies, we expected a significant effect of value change on subjects' ownership assignments, such that they would be more likely to transfer ownership in the large value change condition than in the small value change condition, and a significant effect of creation, such that they would be more likely to transfer ownership in the creation condition than in the no creation condition. In addition, we expected a significant effect of transfer method, such that subjects would be more likely to transfer ownership in the find condition than in the other three conditions, as previous research showed (Li et al., 2020). In particular, we expected that interactive effects might be found between transfer method and value change and between transfer method and creation. Due to the obvious illegality of theft, people are unlikely to transfer ownership even if the processing has given the stolen goods greatly added value or a creative change. But in the borrow and find conditions, people's ownership assignments may sway with the value change and creation.
Study 1
Subjects
We used G*Power 3.1 statistical software to estimate the sample size. A priori power analysis indicated that to reach a medium effect f = 0.25, alpha = 0.05, power = 0.80, the study needed to include 120 participants. Finally, we recruited 148 college students (age range: 17 ~ 26 years old, M age = 20.07, SD = 1.05, 61 males) from a university in Northwest China as subjects, with the aim to include a sufficient sample size given the possibility of invalid responses. Subjects learned about the study through recruitment posters and contacted us voluntarily by cell phone or other network communication tools. We excluded students majoring in law to avoid the possible effect of professional knowledge background in ownership judgments. The study was conducted with the approval of the Scientific Research Ethics Committee of our unit. All subjects provided signed informed consent before participation in the experiment.
Materials and procedure
Subjects were invited to the laboratory and asked to complete the study online. They were asked to read some scenarios and answer the questions after each scenario. The scenarios included materials that were originally owned by an agent but transferred to another agent in different ways, i.e., take/steal/borrow/find. Then, the new possessor of the material modified it into a new object, thereby greatly increasing the value (large value change condition) or only increasing the value to a small degree (small value change condition). Previous scholars have associated functional change with creation (e.g., Levene et al., 2015). Therefore, in this study, the modified object was depicted to have a new function in the creation condition while retaining the original function in the no creation condition. We selected three objects (i.e., a plastic cup, wood, and clay) that have relatively low values as the raw materials and implemented 48 scenarios that included the transfer mode (4), value change (2), creation (2) and object (3) as within-subjects factors. The following are the sample scenarios of the plastic cup.
Take-large value change-creation condition
Li Ming has a plastic cup. Zhao Lei takes this cup and modifies it into a sound box. The initial price of the cup is ¥10, while the price is estimated to be ¥10,000 after Zhao Lei modifies it into a sound box. Li Ming learns that the cup has been modified and claims the sound box. Finally, Li Ming and Zhao Lei quarrel about who owns the sound box.
Steal-small value change-creation condition
Li Ming has a plastic cup. Zhao Lei steals this cup and modifies it into a sound box. The initial price of the cup is ¥10, while the price is estimated to be ¥11 after Zhao Lei modifies it into a sound box. Li Ming learns that the cup has been modified and claims the sound box. Finally, Li Ming and Zhao Lei quarrel about who owns the sound box.
Borrow-large value change-no creation condition
Li Ming has a plastic cup. Zhao Lei borrows this cup and draws a picture on it. The initial price of the cup is ¥10, while the price is estimated to be ¥10,000 after Zhao Lei draws a picture on it. Li Ming learns that the cup has been modified and claims the cup. Finally, Li Ming and Zhao Lei quarrel about who owns the cup.
Lose-small value change-no creation condition
Li Ming loses a plastic cup. Zhao Lei finds this cup and draws a picture on it. The initial price of the cup is ¥10. It's estimated to be ¥11 after Zhao Lei draws a picture on it. Li Ming learns that the cup has been modified and claims the cup. Finally, Li Ming and Zhao Lei quarrel about who owns the cup.
For the other two raw materials, the wood was depicted as modified into a falcon model and the clay as molded into an animal figure (e.g., a Mickey Mouse figurine) in the creation condition. In the no-creation condition, Chinese characters were written on the wood, while the clay was just kneaded into an oval shape by the laborer. Across all three raw materials, the value was set to change from ¥10 to ¥10,000 in the large value change condition and from ¥10 to ¥11 in the small value change condition. The names of the protagonists of different scenarios were varied to ensure that the subjects believed that they were reading different stories.
After each scenario, the subjects were first asked to answer how the laborer got the object, the extent of the value change, and whether the object was changed into another item, to ensure that they understood and did not mix up the content of the stories. Finally, the subjects were asked to provide their own opinions on who should own the modified objects by choosing from two options, i.e., the original owner of the raw materials and the modifier. They were instructed: "Please make decisions on the ground of your own opinion and do not consider outside sources such as legal, economic, or political knowledge". To control for possible spillover effects, scenarios corresponding to the take condition were always presented first, because subjects' responses in the other three conditions might prime the answer in this condition. For scenarios with the other three transfer conditions, the presentation order was counterbalanced. The presentation order of value change and creation was also counterbalanced. The total duration of the experiment was about 15 min.
Results
All subjects correctly answered the content understanding questions. Subjects' choices were assigned a value of 1 if they selected the original owner as the owner. In contrast, they were assigned a value of 0 if they selected the modifier as the owner. A preliminary analysis showed that there were no significant differences across the three kinds of raw materials in ownership scores. Therefore, the ownership scores of three raw materials were summed up to form a composite score. The total score would be 3 if they selected the original owners in all three kinds of materials, and would be 0 if they selected the modifiers in all three kinds of materials. The midpoint of the ownership scores would be 1.5 [(0 + 3)/2] which represents the chance level of subjects' ownership assignments.
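To make the scoring scheme concrete, the sketch below reproduces the 0/1 coding, the 0-3 composite score per condition, and the one-sample t-test against the 1.5 midpoint in Python; the column names and the randomly generated choices are hypothetical placeholders, not the actual data.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical long-format choices: one row per subject x condition x material,
# with choice coded 1 = original owner, 0 = modifier (as described in the text).
rng = np.random.default_rng(0)
rows = []
for subj in range(148):
    for transfer in ["take", "steal", "borrow", "find"]:
        for value in ["large", "small"]:
            for creation in ["creation", "no_creation"]:
                for material in ["cup", "wood", "clay"]:
                    rows.append({"subject": subj, "transfer": transfer,
                                 "value": value, "creation": creation,
                                 "material": material,
                                 "choice": int(rng.integers(0, 2))})
df = pd.DataFrame(rows)

# Composite score per subject and condition: sum over the three materials (range 0-3).
composite = (df.groupby(["subject", "transfer", "value", "creation"])["choice"]
               .sum().reset_index(name="score"))

# One-sample t-test of one condition's scores against the 1.5 chance midpoint.
cond = composite.query("transfer == 'find' and value == 'large' and creation == 'creation'")
t, p = stats.ttest_1samp(cond["score"], popmean=1.5)
print(f"find / large value change / creation: t = {t:.2f}, p = {p:.3f}")
```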
A three-factor repeated measures analysis of variance (ANOVA) was conducted, with transfer type (4), value change (2) and creation (2) as the within-subjects independent variables. The results showed a significant main effect of transfer type (Greenhouse-Geisser test) [F(3, 441) = 80.02, MSE = 1.32, p < 0.001, η p 2 = 0.35], with ownership scores ordered from the steal condition > borrow condition > take condition > find condition, and significant differences between each type of transfer (ps < 0.001), with the exception of that between the steal condition and the borrow condition (see Figure 1A). The interaction between transfer type and creation was also significant [F(3, 441) = 23.16, MSE = 0.14, p < 0.001, η p 2 = 0.14]. A simple effect analysis revealed that under both the creation condition and the no creation condition, the transfer type effect was significant (see Figure 1B). We probed subjects' preference for the original owners or the modifiers with one-sample t-tests by comparing their ownership scores to the midpoint in each condition. The results showed that in the find condition, the ownership scores corresponded to the chance level (ps > 0.05) regardless of whether the value change was large or small and whether creation occurred. In contrast, in the other three conditions, the ownership scores were significantly above the chance level (ps ≤ 0.002).
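A three-way repeated-measures ANOVA of this kind could be run, for instance, with statsmodels on the per-condition composite scores from the previous sketch; note that AnovaRM reports uncorrected (sphericity-assumed) statistics, so a Greenhouse-Geisser correction, as used above, would have to be applied separately.

```python
from statsmodels.stats.anova import AnovaRM

# `composite` is the subject x condition table built in the previous sketch
# (one score per subject per transfer x value x creation cell).
res = AnovaRM(data=composite, depvar="score", subject="subject",
              within=["transfer", "value", "creation"]).fit()
print(res.anova_table)   # F value, numerator/denominator df, and p for each effect
```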
Study 2
Study 1 investigated how people would resolve ownership disputes when the original materials had low initial values. In Study 2, we investigated this issue using original materials with high initial values.
Subjects
One-hundred thirty-eight Chinese undergraduates (age range: 17.70 ~ 33.01 years old, M age = 20.71, SD = 1.76, with 46 males) from our university participated in this study. All subjects were asked to provide signed informed consent before they participated in the experiment. Students who majored in law were excluded from this study.
Materials and procedure
The original materials in Study 1 were replaced with a gold bullion, rosewood and a diamond. Before the study, 60 subjects were asked to evaluate the initial value of these materials and of the materials in Study 1 to ensure that the initial values were very different. The subjects were asked to rate the value of each material on a five-point scale from not at all valuable (1) to extremely valuable (5). The results showed that the materials in Study 2 (gold bullion, mean = 4.08, SD = 0.11; rosewood, mean = 4.20, SD = 0.13; and diamond, mean = 4.00, SD = 0.13) were rated as more valuable than the materials in Study 1 (plastic cup, mean = 3.15, SD = 0.12; wood, mean = 3.12, SD = 0.13; clay, mean = 2.50, SD = 0.12, ps < 0.001), and as having a value significantly above the scale midpoint (3) (ps < 0.001). Given their obviously high initial value, the price of the original materials was set to ¥10 thousand in this study; after modification, the value reached ¥10 million in the large value change condition and ¥11 thousand in the small value change condition. The value changes were designed to correspond with those in Study 1. In the creation condition, the gold bullion was modified into a boat model, the rosewood was modified into a set of furniture, and the diamond was modified into a ring. In the no-creation condition, a picture was painted on the gold bullion; Chinese characters were written on the rosewood; and an oval shape was chiseled into the diamond. Thus, 48 scenarios were generated, which was consistent with Study 1, and the transfer mode (4), value change (2), creation (2) and object (3) were used as within-subjects factors. The following sample scenarios were presented.
Take-large value change-creation condition
Li Ming has a gold bullion. Zhao Lei takes this gold bullion and modifies it into a boat model. The initial price of the gold bullion is ¥10 thousand, but it is estimated to be ¥10 million after Zhao Lei modifies it into a boat model. Li Ming learns that the bullion has been modified and claims the boat model. Finally, Li Ming and Zhao Lei quarrel about who owns the boat model.
Steal-small value change-creation condition
Li Ming has a gold bullion. Zhao Lei steals this gold bullion and modifies it into a boat model. The initial price of the gold bullion is ¥10 thousand, but it is estimated to be ¥11 thousand after Zhao Lei modifies it into the boat model. Li Ming learns that the bullion has been modified and claims the boat model. Finally, Li Ming and Zhao Lei quarrel about who owns the boat model.
Borrow-large value change-no creation condition
Li Ming has a gold bullion. Zhao Lei borrows this gold bullion and paints some pictures on it. The initial price of the gold bullion is ¥10 thousand, but it is estimated to be ¥10 million after Zhao Lei paints some pictures on it. Li Ming learns that the bullion has been modified and claims the gold bullion. Finally, Li Ming and Zhao Lei quarrel about who owns the gold bullion.
Lose-small value change-no creation condition
Li Ming loses a gold bullion. Zhao Lei finds this gold bullion and paints some pictures on it. The initial price of the gold bullion is ¥10 thousand, but it is estimated to be ¥11 thousand after Zhao Lei paints some pictures on it. Li Ming learns that the bullion has been modified and claims the gold bullion. Finally, Li Ming and Zhao Lei quarrel about who owns the gold bullion.
As with Study 1, scenarios in the take condition were always presented first. The presenting order of the other three transfer types, value change type and creation type were counterbalanced. The subjects were first asked to answer how the laborer got the object, the extent of the value change and whether the object changed to another item. Then they were asked to provide their own opinions according to their intuition on who should own the modified objects and choose from the original owners and the laborers. The total duration of the experiment is about 15 min. The participants were assigned a score of 1 if they selected the original possessors as the owner and a score of 0 if they selected the laborer as the owner.
Results
All subjects correctly answered the content understanding questions. Preliminary analysis indicated that there were no significant differences across the three kinds of raw materials in ownership scores. Composite scores were generated for further analysis. A repeated-measures ANOVA (4 transfer types × 2 value changes × 2 creation modes) was conducted with transfer type, value change and creation mode as the within-subject variables.
The results revealed that the main effect of transfer type was significant [F(3, 411) = 52.06, MSE = 1.56, p < 0.001, η p 2 = 0.28], with ownership scores ordered as steal condition > borrow condition > take condition > find condition, and significant differences were observed between each condition (ps ≤ 0.011). The interaction between transfer type and value change was significant [F(3, 411) = 14.36, MSE = 0.20, p < 0.001, η p 2 = 0.16]. A simple effect analysis revealed that significant transfer type effects (ps < 0.001) occurred in the large and small value change conditions. The value change effect was significant in the take, borrow and find conditions (ps ≤ 0.002) but was not significant in the steal condition (see Figure 2A).
The interaction between transfer type and creation was also significant [F(3, 411) = 21.64, MSE = 0.14, p < 0.001, η p 2 = 0.14]. A simple effect analysis revealed that the transfer type effect was significant (ps < 0.001) in the creation condition and the no creation condition. The creation effect was significant in the take condition (p < 0.001) and find condition (p = 0.022) but not significant in the steal condition and in the borrow condition. No other significant main effects or interactions were found. The interaction between value change and creation and the interaction between the three factors are not significant [F(1, 137) = 0.23, MSE = 0.02, p = 0.879, η p 2 = 0.00; F(3, 411) = 1.20, MSE = 0.01, p = 0.308, η p 2 = 0.01] (see Figure 2B).
One-sample t tests revealed that ownership scores were at the level of chance in the find condition, regardless of whether the value change was large or small or successful creation occurred. Ownership scores were also at the level of chance in the take-large value change-creation condition. Ownership scores were above the level of chance in other conditions (ps < 0.001).
Finally, we integrated the data from the two studies and tested the effect of the materials' initial value through a mixed-design analysis of variance, with the initial value (high or low) as the between-subject variable and transfer type, value change, and creation as the within-subject variables. Similar to the results of Study 1 and Study 2, the mixed-design analysis yielded a significant main effect of transfer type. A simple effect analysis showed that the transfer type effect was significant for materials with both low and high initial values (ps < 0.001). Ownership scores were significantly higher for materials with low initial values than with high initial values in the take (p = 0.007) and borrow conditions (p = 0.001) but were significantly higher for materials with high initial values than with low initial values in the steal condition (p = 0.029). No significant difference was found in the find condition (p = 0.978).
Study 3
In the above two studies, we found that subjects did not consider ownership to be transferred in the take condition. This finding seems to be inconsistent with Kanngiesser and Hood's (2014b) study, which found that an adult would transfer ownership to the modifier as laborer when the original materials' value appreciated greatly with modification. One possibility is that subjects' ownership judgments would still be affected by the other three conditions because subjects were allowed to revise their answers, although the initial presented scenarios were in the take condition. To address this issue, we conducted a third study in which we extracted from the above two studies the scenarios with the take condition and presented them solely to subjects. Such a design would make it impossible for subjects' ownership judgments in the take condition to be affected by other conditions.
Subjects
Ninety-six additional Chinese undergraduates (age range: 18.98 ~ 25.35 years old, M age = 22.14, SD = 0.98, 8 males) were recruited as subjects in this study. They provided signed informed consent before participation.
Materials and procedure
Twenty-four scenarios corresponding to the take condition in Study 1 and Study 2 were extracted and integrated to form the materials in this study. The presentation order of these materials was counterbalanced according to the value change and creation conditions. The subjects were assigned a score of 1 if they selected the original possessor as the owner and a score of 0 if they selected the laborer as the owner.
Results
Significant differences in ownership scores were not observed across the three kinds of raw materials in the preliminary analysis. A repeated-measures ANOVA (2 initial values × 2 value changes × 2 creation modes) was conducted with initial value, value change and creation as the within-subjects variables. The results showed that the effect of the initial value was not significant [F(1, 95) = 0.75, MSE = 0.63, p = 0.389, η p 2 = 0.008]; the effect of creation was not significant [F(1, 95) = 1.79, MSE = 1.00, p = 0.185, η p 2 = 0.02]; and the effect of value change was significant [F(1, 95) = 9.30, MSE = 0.87, p = 0.003, η p 2 = 0.09]. Ownership scores in the small value change condition were significantly higher than those in the large value change condition. None of the interactions were significant (ps > 0.05). A one-sample t test showed that the ownership scores were significantly higher than chance (1.5) in all eight conditions (ps < 0.001) (see Figure 3).
Discussion
This study examined how people would resolve ownership disputes between the original owner and a laborer when the original materials were transferred in different ways and modified. By pitting value change and creation against transfer type, the results showed that the transfer method could affect ownership assignments. Subjects were more likely to support the original possessor as owner in the take, steal and borrow conditions, whereas they did not support the original owner in the find condition. The effect was significant whether the original materials showed high or low appreciation in value and whether successful creation was observed. This outcome is applicable to raw materials with low (Study 1) and
high values (Study 2). In particular, we found that the transfer method interacted significantly with value change and with creation in people's ownership judgments. Subjects' support for the original owners was significantly reduced in the take, borrow and find conditions when the modified objects appreciated greatly, but it remained unchanged in the steal condition. Subjects' support for the original owners was significantly reduced in the take condition (and in the find condition, in Study 2) when there was successful creation, but it remained unchanged in the steal condition and in the borrow condition in both studies.
The study replicated the results of previous research showing that value change influences people's ownership reasoning based on the labor rule (Hook, 1993; Kanngiesser and Hood, 2014b; Burgmer et al., 2019), but there is also a difference, because previous research (e.g., Kanngiesser and Hood, 2014b) showed that people would transfer ownership to the laborers in the take condition, whereas we did not find this in the current study. This result held regardless of whether the value of the raw materials showed a high or low change and whether successful creation was observed. The finding was not due to the influence of the other three conditions, because it held even when the scenarios in the take condition were presented independently (in Study 3). One possible reason for this finding is that Asian people are more conservative in judging ownership with regard to labor cues. For example, Kanngiesser et al. (2015) found that when asking British, Japanese and Chinese 4-year-olds to resolve ownership questions between an agent who obtained a piece of paper first and another agent who took it and painted a picture on it, British children were more likely to assign the picture to the painter, while Japanese and Chinese children did not show such a tendency. In addition, they found that British adults favored the laborer more than the initial possessor, while Japanese adults showed the inverse pattern (Chinese adults were not included in that study). Future research should investigate the cultural factors that lead Asian adults to judge ownership more cautiously when labor cues are involved.
Previous research has revealed that creation affects adults' ownership resolution between the laborer and the subsequent possessor (Levene et al., 2015). This study demonstrates that creation also plays a role in solving conflicts between the laborer and the original owner. In addition, we found that both value change and creation interacted with transfer type, and the interactive effects were similar between the two: neither a great value addition nor successful creation was enough for subjects to change their ownership judgements for objects acquired in an illegal way (i.e., steal), but both shifted the extent of support for the original owners or the laborers when the acquisition method was relatively neutral (i.e., take, find). Although a value increase does not necessarily imply creation, and creation may not lead to value appreciation, it may be easy for subjects to associate the two and make judgments in the same way, because the two often change together in everyday life. Such an explanation is worth considering because this study did not find an interaction between value change and creation. Alternatively, value change and creation may have a common foundation in guiding human ownership decisions. For example, people may think that both value appreciation and creation make objects very different from their original state. Future research should examine these possible explanations further.
This study revealed that people consistently supported the original owners as owners of the newly made objects in the steal and borrow contexts, and the ownership scores in the steal condition and in the borrow condition were significantly higher than those in the take condition and in the find condition. This result suggests that people may not automatically represent taking behavior as stealing or borrowing, because ownership scores in the take condition were significantly lower than those in these two conditions. This finding warns that previous studies investigating people's ownership opinions only in the take context are not sufficient to reach a conclusion, and that we should distinguish among different transfer modes when exploring adults' support for the original owner and the modifier. Interestingly, we found that subjects' ownership scores in the find condition were at the level of chance in both studies, which contrasts with Li et al. (2020)'s study showing that people would support the modifiers in this condition. It should be noted that there are three differences between this study and Li et al. (2020)'s study. First, we distinguished different extents of value change and creation, which Li et al. (2020) did not consider. Second, Li et al. (2020) selected only wood as the raw material, whereas we chose three materials in each experiment. Third, and most importantly, Li et al. (2020) provided a reason why the laborer modified the wood into furniture (i.e., for a marriage), whereas we did not mention any reason, lest it introduce confounding factors. All these differences might have skewed subjects' responses more toward the modifiers in Li et al. (2020)'s study, which needs to be explored in future studies.
Limitations and implications
While our study addressed the role of transfer method in people's ownership decisions between laborers and original owners, some limitations should be noted. First, this study took ordinary adults as a sample to investigate the role of value change, creation and transfer types in labor-relevant ownership judgments. We did not determine whether young children and legal professionals would present similar reasoning on ownership issues. Due to their limited legal experience, children's answers may better reveal the naive weighing of the three elements during ownership resolution. Previous studies have found that young children place more emphasis on creative labor in ownership reasoning than adults do (Kanngiesser et al., 2010, 2014; Faigenbaum et al., 2018), and they may display different response patterns in such research. The inclusion of professionals would help us directly compare the preferred solutions of lay people and legal scholars and find the root of possible inconsistencies. Second, this study suggests that the intentions of the original owner and the laborer might play an important role in third parties' ownership assignments. Subjects were more likely to support the original owners when the laborers intentionally modified others' objects without consent (i.e., in the steal and borrow conditions) but less likely to support the original owners when the laborers modified others' objects without knowing that they were originally owned (i.e., in the find condition). We did not directly examine the effect of intent in this study; future research may clarify this effect with a direct experimental design. Third, although we did not mention the relationship between the laborers and the original owners to avoid introducing confounding variables, different transfer methods may lead to different perceptions of that relationship (e.g., friends are more likely to take and borrow each other's things without consent), which might introduce some unexpected effects into the results. Such scenarios should be strictly controlled in future studies. Finally, this study recruited Chinese adults as subjects.
While the Chinese population is often regarded as a non-WEIRD (i.e., Western, Educated, Industrialized, Rich, and Democratic; Henrich et al., 2010) sample, the generalizability of the results needs to be tested with more cross-cultural research.
The results of this research will provide insights for real-life mediation of ownership conflicts. Because many complicated ownership cases involve transfer scenarios, this research indicates that people have different opinions in different transfer situations and these opinions may be contrary to the law. Although the law indicates that ownership of stolen/borrowed/found property cannot be transferred (Ma, 2003;Simeone, 2009), it did not consider the occasions when the property is modified which makes it appreciate greatly. At least in this study, we found that people did not absolutely support the original owner for found property. When mediating conflicts of ownership in real life, we should not only respect the rights of the original owners but also consider the interests of the creative laborers, especially when the laborers processed the objects inadvertently and increased the value of the objects greatly.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by Scientific Research Ethics Committee of Xi'an Jiaotong University. The patients/participants provided their written informed consent to participate in this study.
Author contributions
ZL designed the study, collected and analyzed the data, and drafted the manuscript. DD revised the manuscript critically. All authors contributed to the article and approved the submitted version. | 2023-01-11T17:24:02.147Z | 2023-01-10T00:00:00.000 | {
"year": 2022,
"sha1": "63a4c36dd0352349a89f0fc61c3ff101df94fa8f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "63a4c36dd0352349a89f0fc61c3ff101df94fa8f",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18247494 | pes2o/s2orc | v3-fos-license | Impurity Effect as a Probe for the Gap Function in the Filled Skutterudite Compound Superconductor PrOs$_{4}$Sb$_{12}$: Sb-NQR Study
We have carried out nuclear quadrupole resonance (NQR) measurements in the filled skutterudite compounds Pr(Os$_{1-x}$Ru$_x$)$_4$Sb$_{12}$ ($x=0.1, 0.2$), in order to gain insights into the symmetry of the superconducting gap function. Upon replacing Os with Ru, the spin-lattice relaxation rate $1/T_1$ becomes proportional to temperature ($T$) at low $T$ far below $T_c$, and the magnitude of $(1/T_1T)_{low-T}$ increases with increasing Ru content. These results indicate that a finite density of states is induced at the Fermi level by the impurity, and thus suggest that there exist nodes in the gap function of PrOs$_{4}$Sb$_{12}$.
The superconductivity at T c =1.85 K discovered in the filled skutterudite compound PrOs 4 Sb 12 has attracted much attention. 1,2 This is the first Pr-based heavy fermion superconductor, in which the heavy mass has been suggested by the large specific heat jump ∆C/T c ∼500 mJ/K 2 · mol at T c , 1, 2 and directly confirmed by de Haas-van Alphen (dHvA) experiments. 3 The crystal electric field (CEF) ground state is a Γ 1 singlet, which is separated by the first excited state of the Γ (2) 4 triplet by a gap of ∆ CEF =10 K. [4][5][6][7][8][9] Because of this small ∆ CEF , the role of quadrupole moment fluctuations arising from the Γ (2) 4 state has attracted much attention. It has been speculated that the interaction between the quadrupolar moments and the conduction-electron charges may be responsible for the heavy mass, thus representing a charge-scattering version of the Kondo effect. 10 The contrasting behavior that the isostructural compound PrRu 4 Sb 12 with ∆ CEF =70 K has a much smaller m * 11, 12 has increased such expectation.
One of the first steps toward understanding the Pr-based heavy fermions may be the determination of the gap symmetry of the superconductivity. Previous nuclear quadrupole resonance (NQR) measurements have revealed the unconventional nature of the superconductivity. 13 No coherence peak was found in the spin-lattice relaxation rate 1/T 1 just below T c , which suggests non-BCS-type superconductivity and is in contrast to the s-wave behavior subsequently observed in PrRu 4 Sb 12 . 14 At low temperatures, 1/T 1 does not follow a power-law T -dependence, which is in contrast to the observation of a T 3 variation in other known heavy fermion superconductors. 15 Together with the exponential decrease in the penetration depth found by muon spin relaxation, 16 it has been suggested that the superconducting gap is isotropic. However, oscillations with respect to the magnetic field angle have been found in angle-resolved thermal conductivity measurements, which suggests that the gap is anisotropic. 17 In this study, we use an impurity as a probe for the gap function. We replace Os with Ru, and study its effect on the superconducting density of states (DOS) using the NQR technique.
It has been known for some time that a non-magnetic impurity is a powerful "smoking gun" for the gap function. In s-wave superconductors, a paramagnetic impurity produces a bound state within the gap (Yu-Shiba state), 18,19 and non-magnetic impurities smear out the gap anisotropy, if any. 20 In contrast, if there are nodes in the gap function, a finite DOS is brought about at the Fermi level, 21,22 which can be detected experimentally. 15 Single crystals of Pr(Os 1−x Ru x ) 4 Sb 12 , (x=0.1 and 0.2) were grown by the Sb-flux method.
T c 's are 1.6 K and 1.4 K for x=0.1 and 0.2, respectively, in agreement with those previously reported. 23,24 For NQR measurements, the single crystals were powdered in order to allow the radio-frequency magnetic field to penetrate into the sample. The NQR measurements were performed by using a phase coherent spectrometer. The spin-lattice relaxation rate, 1/T 1 , for 123 Sb was measured using the saturation-recovery method. Data below 1.4 K were collected using a 3 He/ 4 He dilution refrigerator. Figure 1 shows the NQR spectra for the x=0.1 and 0.2 samples. For comparison, the data for the two end-member compounds 13,14 are also shown in Figs 1(a) and 1(d). In the Ru-doped samples, in addition to the spectrum peaks that are located at the same positions as those for pure PrOs 4 Sb 12 , there appear new sets of peaks that are characterized by ν Q =26 MHz and the asymmetry parameter η=0.43 for 123 Sb. Here, ν Q and η are defined as ν Q ≡ [3eQ/2I(2I−1)h] ∂ 2 V/∂z 2 and η = |ν x − ν y |/ν z , where ν α ∝ ∂ 2 V/∂α 2 (α = x, y, z), Q is the nuclear quadrupolar moment, and ∂ 2 V/∂α 2 is the electric field gradient at the position of the nucleus. 25 It should be emphasized that these peaks (labelled as "peaks 2" in Table I) are different from those for pure PrRu 4 Sb 12 . These new peaks are due to the alloying effect, which indicates that the alloying is homogeneous and there is no phase separation. Such a feature has been seen before in other heavy fermion compounds. For example, in Ir-doped CeRhIn 5 , upon replacing Rh with Ir, there emerges a new set of NQR peaks that is different from that for CeRhIn 5 or CeIrIn 5 . 26 Nonetheless, 1/T 1 measured at the new set of peaks shows the same T -dependence as that measured at those corresponding to CeRhIn 5 or CeIrIn 5 , indicating that the new set of peaks is not due to phase-separated regions but rather to the homogeneous alloying.
The nuclear spin-lattice relaxation rate 1/T 1 was measured at the peaks at 49.9 MHz (±3/2 ←→ ±5/2 transition) and 42.2 MHz (±1/2 ←→ ±3/2 transition). Both measurements yield the same results. Even for x=0.2, no influence on T 1 due to the overlapping of the 121 Sb line was seen in the measurements at the 49.9 MHz peak, probably because the H 1 we used for observing the echo is much smaller than the frequency difference between the 121 Sb and 123 Sb peaks, which is about 0.8 MHz, and the 121 Sb peak has a small full width at half maximum (FWHM) of 0.5 MHz. Above T ∼ 100 K, the nuclear magnetization can be fitted excellently using the expected theoretical rate equation. 27 However, below T ∼ 100 K, it cannot be fitted using the theoretical curve with a single component of T 1 . This is also true at the peak at 42.2 MHz that is free from overlapping by other transition lines.
Table I. NQR frequency ν Q and asymmetry parameter η for 123 Sb in Pr(Os 1−x Ru x ) 4 Sb 12 (columns: Peaks 1 and Peaks 2). Also shown are the CEF gap ∆ CEF and the heavy quasiparticle enhancement factor β (see text).
Figure 2 shows the decay curve of the nuclear magnetization at T =6 K. As will become clearer later, the inhomogeneous T 1 is an intrinsic property of the alloyed samples. The one-component behavior for T ≥ 100 K is rather a coincidence, since T 1 is the same for both PrOs 4 Sb 12 and PrRu 4 Sb 12 .
We then attempt to fit the low-T nuclear magnetization with two T 1 components, namely as a weighted sum, with fractions a S and a L , of two theoretical recovery curves, where T S 1 denotes the short component due to fast relaxation and T L 1 the long component due to slow relaxation. The solid curve in Fig. 2 shows such a fit for x=0.1, with a S =0.8 and a L =0.2 that do not depend on temperature appreciably.
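A minimal sketch of such a two-component fit is shown below. For brevity it uses a simple exponential kernel exp(−t/T 1 ) instead of the full multi-exponential recovery function appropriate for the 123 Sb NQR transition, so it only illustrates the weighted-sum structure of the fit; the data points are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, a_s, t1_s, t1_l):
    """Two-component recovery of (M(inf) - M(t)) / M(inf).
    Simplified single-exponential kernel; a_l is constrained to 1 - a_s."""
    a_l = 1.0 - a_s
    return a_s * np.exp(-t / t1_s) + a_l * np.exp(-t / t1_l)

# Hypothetical recovery data versus delay time t (seconds).
t = np.logspace(-4, 1, 30)
data = 0.8 * np.exp(-t / 0.01) + 0.2 * np.exp(-t / 0.5)
data += 0.01 * np.random.default_rng(1).normal(size=t.size)

popt, pcov = curve_fit(recovery, t, data, p0=[0.5, 0.001, 0.1],
                       bounds=([0, 0, 0], [1, np.inf, np.inf]))
a_s, t1_s, t1_l = popt
print(f"a_S = {a_s:.2f}, T1_S = {t1_s:.3g} s, T1_L = {t1_l:.3g} s")
```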
We first discuss the main component, 1/T S 1 , whose temperature dependence is shown in Fig. 3. Compared to the results for the end-member compounds, several trends can be seen. First, at high temperatures the values of 1/T S 1 for all x merge into a single line. This suggests that the relaxation at high T is not governed by the electronic state that is sensitive to the transition metal element, 28 but by other degrees of freedom. Second, below T ∼ 100 K, 1/T S 1 is close to that for pure PrOs 4 Sb 12 , but decreases as x increases. Since the slow component 1/T L 1 is very close to that for pure PrRu 4 Sb 12 , as will be shown later, we consider that 1/T S 1 originates from the Sb sites that are located far from Ru. Third, and most importantly, below T ∼ 1 K, 1/T S 1 is proportional to T ; the magnitude of 1/T S 1 T increases with increasing x. The 1/T S 1 T =const. relation indicates that a finite DOS is induced by the impurity. 1/T 1 in the superconducting state may be expressed as 1/T 1 ∝ ∫ N s (E) 2 f (E)[1 − f (E)] dE, where N s (E) = N 0 E/(E 2 − ∆ 2 ) 1/2 is the superconducting DOS, ∆ is the superconducting gap, and f (E) is the Fermi distribution function. The results in the superconducting state are shown in Fig. 4. The curves indicate the calculated results obtained by Miyake for the gaps with line-nodes (axial) and point-nodes (polar). 21,22 The experimental results agree qualitatively with the theoretical results. That the experimental data fall below the theoretical curve is probably due to the depression of the pairing force by the substitution of Ru, which is not included in the theoretical results. We will discuss this point later. Although we are unable to distinguish between line nodes and point nodes in Fig. 4, point nodes nonetheless seem more plausible, in view of the relatively weaker suppression of T c by the impurity with respect to x. 23 For completeness, we show the temperature dependence of 1/T L 1 in Fig. 5. Above T c , 1/T L 1 shows a temperature variation very close to that of PrRu 4 Sb 12 . This 1/T L 1 can be assigned to come from the Sb sites that are located close to Ru. Below T c , 1/T L 1 decreases rapidly, with no coherence peak.
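The qualitative content of this expression can be checked numerically. The sketch below is not the Miyake calculation cited above: it evaluates the same integral for a line-node gap ∆(θ) = ∆ 0 cos θ, crudely models the impurity effect as an energy-independent residual DOS fraction, keeps the gap temperature independent, and neglects the coherence term, so only the trend is meaningful — with a residual DOS, 1/T 1 becomes proportional to T at low T and its magnitude grows with the residual DOS, as argued above.

```python
import numpy as np
from scipy.integrate import trapezoid

def nodal_dos(E, delta0, n_res=0.0, n_theta=400):
    """Angle-averaged DOS N_s(E)/N_0 for a line-node gap delta0*cos(theta),
    crudely mixed with an energy-independent residual DOS fraction n_res."""
    theta = np.linspace(0.0, np.pi, n_theta)
    gap = delta0 * np.abs(np.cos(theta))
    integrand = np.where(E[:, None] > gap[None, :],
                         E[:, None] / np.sqrt(np.maximum(E[:, None]**2 - gap[None, :]**2,
                                                          1e-12)),
                         0.0)
    clean = trapezoid(integrand * np.sin(theta), theta, axis=1) / 2.0
    return (1.0 - n_res) * clean + n_res

def t1_ratio(T, delta0, n_res):
    """1/T1 normalized to the normal-state value at the same T
    (coherence term neglected, k_B = 1, gap taken temperature independent)."""
    E = np.linspace(1e-6, 25.0 * T, 3000)   # f(1-f) cuts the integral off at E of order 10*T
    f = 1.0 / (np.exp(E / T) + 1.0)
    w = f * (1.0 - f)
    ns = nodal_dos(E, delta0, n_res)
    return trapezoid(ns**2 * w, E) / trapezoid(w, E)

delta0 = 2.0                      # illustrative gap amplitude in units of k_B*T_c (T_c = 1)
for n_res in (0.0, 0.2, 0.4):     # growing impurity-induced residual DOS
    vals = [t1_ratio(T, delta0, n_res) for T in (0.4, 0.2, 0.1, 0.05)]
    print(f"n_res = {n_res}:", "  ".join(f"{v:.3e}" for v in vals))
```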
Finally, we discuss the high-temperature behavior of 1/T S 1 . Figure 6 shows the temperature variation of the quantity 1/T 1 T . Below T ∼ 6 K, 1/T 1 T decreases, leaving a peak at around T ∼ 6 K. Such a decrease becomes mild as x increases. Since the CEF gap ∆ CEF is small, Kotegawa et al. have analyzed the low-T data by decomposing 1/T 1 T into two parts, 13 namely, the contribution due to the excitation to the Γ (2) 4 state, and that due to heavy quasiparticles: 1/T 1 T = α×exp(−∆ CEF /k B T )+β×0.7 s −1 K −1 , where 0.7 s −1 K −1 is the 1/T 1 T for LaOs 4 Sb 12 and the parameter √ β represents the mass enhancement factor of the heavy quasiparticles. The values of ∆ CEF and β obtained from such fits are listed in Table I, which are in good agreement with those reported by Frederick et al., who deduced ∆ CEF from the susceptibility and the mass enhancement factor from the specific heat coefficient γ. 23 As has been discussed by several authors, 14, 23 the increase in ∆ CEF is responsible, at least partly, for the decrease in T c in going from PrOs 4 Sb 12 to PrRu 4 Sb 12 ; thus, the discrepancy between the theoretical curve and the experimental data seen in Fig. 4 can be ascribed to the increase in ∆ CEF .
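The two-term decomposition quoted above can be fitted directly, for example as in the sketch below; the temperature grid and the "data" are synthetic placeholders, and ∆ CEF is expressed in kelvin so that k B drops out.

```python
import numpy as np
from scipy.optimize import curve_fit

def invT1T(T, alpha, delta_cef, beta):
    # 1/T1T = alpha * exp(-Delta_CEF / T) + beta * 0.7   (Delta_CEF in K, rates in s^-1 K^-1)
    return alpha * np.exp(-delta_cef / T) + beta * 0.7

T = np.linspace(2.0, 40.0, 25)                        # hypothetical temperature points (K)
y = invT1T(T, alpha=60.0, delta_cef=10.0, beta=2.0)   # synthetic "data" for illustration
popt, _ = curve_fit(invT1T, T, y, p0=[10.0, 5.0, 1.0])
print("alpha = %.1f, Delta_CEF = %.1f K, beta = %.1f" % tuple(popt))
```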
In conclusion, we have studied the impurity effect on the superconductivity in the filled skutterudite heavy fermion superconductor PrOs 4 Sb 12 using the NQR technique. We find that replacing Os with Ru brings about a finite density of states at the Fermi level, which increases with increasing Ru content. Our results provide strong evidence for the existence of nodes in the gap function.
Acknowledgment
We are grateful to Y. Kitaoka, H. Kotegawa, Yogi and Y. Imamura for helpful discussions and contribution. We would also like to thank K. Miyake for providing the unpublished calculated results shown in Fig. 4. This work was supported in part by a research grant from MEXT on the Priority Area "Skutterudites" (No. 15072204). | 2014-10-01T00:00:00.000Z | 2005-07-15T00:00:00.000 | {
"year": 2006,
"sha1": "7176dfbb12bcc242697f7268999e6ce199d6152a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0608698v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4fea95e62f24d17665fd5558bfe04cf9ad09313d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
245353841 | pes2o/s2orc | v3-fos-license | Mining Drifting Data Streams on a Budget: Combining Active Learning with Self-Labeling
Mining data streams poses a number of challenges, including the continuous and non-stationary nature of data, the massive volume of information to be processed and constraints put on the computational resources. While there is a number of supervised solutions proposed for this problem in the literature, most of them assume that access to the ground truth (in form of class labels) is unlimited and such information can be instantly utilized when updating the learning system. This is far from being realistic, as one must consider the underlying cost of acquiring labels. Therefore, solutions that can reduce the requirements for ground truth in streaming scenarios are required. In this paper, we propose a novel framework for mining drifting data streams on a budget, by combining information coming from active learning and self-labeling. We introduce several strategies that can take advantage of both intelligent instance selection and semi-supervised procedures, while taking into account the potential presence of concept drift. Such a hybrid approach allows for efficient exploration and exploitation of streaming data structures within realistic labeling budgets. Since our framework works as a wrapper, it may be applied with different learning algorithms. Experimental study, carried out on a diverse set of real-world data streams with various types of concept drift, proves the usefulness of the proposed strategies when dealing with highly limited access to class labels. The presented hybrid approach is especially feasible when one cannot increase a budget for labeling or replace an inefficient classifier. We deliver a set of recommendations regarding areas of applicability for our strategies.
Introduction
Contemporary data sources generate new information at both tremendous size and speed. Therefore, modern machine learning systems must deal not only with the volume but also velocity issues [46]. Stock exchange, sensor networks, or social media are among the examples of scenarios in which new instances continuously arrive at high speed over time, creating a demand for adaptive, real-time data mining algorithms. This has led to the emergence of data streams notion and the development of a family of dedicated algorithms. New challenges needed to be addressed, such as the potentially unbounded size of the data that may quickly overflow computational resources [10], learning schemes that are able to use both new and historic instances, as well as approaches for managing the evolving nature of data [36]. A phenomenon known as concept drift is embedded in the streaming scenario, as characteristics of data may change over time [17].
Supervised learning has gained significant attention in data stream mining, allowing for efficient classification and prediction from non-stationary data [13]. However, the vast majority of research in this area assumes that class labels become available right after the incoming instance was processed. Then a label is obtained and used to update the learning system. While this is true to the overall stream mining principles, it completely neglects the issue of how to actually obtain every single label. If we would have access to a theoretical oracle that would provide us with such information every time we query for it, then what is the purpose of a classification system? In reality, a class label should be provided by a domain expert in order to maintain the highest certainty with regard to the used information. However, an expert's service is related to the cost, both in monetary and time terms [47], and thus cannot be called upon every time a new instance becomes available [4,32]. This holds especially true for massive and high-speed data streams (e.g., Twitter can produce around 350 000 new tweets every 60 seconds). Therefore, methods that would allow for mining data streams on a budget are in high demand [41].
Summary. We propose a novel hybrid framework for low-cost data stream mining. It is based on combining active learning and self-labeling for using available limited labels most effectively. These two strategies work in a complementary fashion, boosting their advantages while minimizing their weak sides. Active learning allows for an informative selection of instances that will be most useful for adjusting the classifier to the current state of the stream. However, each such query reduces the available budget. Self-labeling exploits discovered data structures and improves the competency of a classifier at no cost, yet offers no quality validation. The proposed hybrid approach allows active learning to explore the incoming data stream for new emerging concepts, while self-labeling offers their further exploitation without depleting the budget. We introduce 7 hybrid strategies, divided into two groups: blind and informed. The former one consists of methods that use the classifier output for deciding if a new instance should be used for active learning or selflabeling. The latter one combines the information from both the classifier and drift detector modules to better adapt decisions to the evolving nature of data streams. The analysis of the performance of these proposed hybrid algorithms sheds light on their areas of applicability, as well as allows us to identify shortcomings of state-of-the-art active learning methods.
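To make the general idea tangible, the sketch below combines an uncertainty-based active-learning query (which spends labeling budget) with confidence-based self-labeling (which adds pseudo-labels at no cost) around an incremental classifier. It is a simplified illustration of the hybrid principle, not the exact strategies proposed in this work; the thresholds, the budget accounting, the oracle interface, and the choice of classifier are arbitrary assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def hybrid_stream_learner(stream, oracle, budget=0.1,
                          query_threshold=0.6, selflabel_threshold=0.95):
    """Process instances one by one; the true label is only obtained by calling oracle(i)."""
    clf, classes = GaussianNB(), np.array([0, 1])
    seen_labels, spent, seen = set(), 0, 0
    for i, x in enumerate(stream):
        x = np.asarray(x).reshape(1, -1)
        seen += 1
        if len(seen_labels) < len(classes):
            y = oracle(i)                          # cold start: query until every class was seen
            spent += 1
        else:
            p_max = clf.predict_proba(x).max()
            if p_max < query_threshold and spent / seen < budget:
                y = oracle(i)                      # uncertain region: spend budget (exploration)
                spent += 1
            elif p_max > selflabel_threshold:
                y = int(clf.predict(x)[0])         # confident region: self-label at no cost (exploitation)
            else:
                continue                           # neither certain nor worth a query
        seen_labels.add(y)
        clf.partial_fit(x, [y], classes=classes)
    return clf, spent / max(seen, 1)
```

Any incremental learner exposing partial_fit and predict_proba could replace the naive Bayes model; the sketch only fixes the vocabulary of budget, exploration, and exploitation used throughout the paper.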
Main contributions. This work offers the following contributions to the data stream mining domain.
-A novel hybrid framework for mining data streams on a budget, using the combination of active and semi-supervised learning.
-Two families of methods based on blind and informed approaches, leading to seven algorithms for empowering active learning with self-labeling.
- Thorough experimental study on real data streams under various labeling budgets, showcasing the advantages of using hybrid solutions when available class labels are scarce, especially in extremely small budget cases.
-Analysis and recommendations on areas of applicability of the proposed algorithms.
Data stream mining
A data stream can be defined as a sequence of ordered instances arriving over time and of potentially unbounded volume. Mining data streams imposes certain specific requirements on the classifiers used. Contrary to static scenarios, a predefined training set is not available, as instances become available at given time intervals one by one (online case) or in the form of chunks (block case). The entire stream cannot be stored in memory due to its unknown and ever-expanding size. Thus only a limited number of the most recent instances can be stored, while the old ones are to be discarded to limit the computational resources being used. Characteristics of the stream may evolve over time and this must be accommodated during the continuous learning process. More technically, a data stream is a sequence < S 1 , S 2 , ..., S n , ... >, where each element S j is a set of instances (or a single instance in the case of online learning), each of them being independent and randomly generated according to a probability distribution D j . If a transition between states S j → S j+1 satisfies D j = D j+1 , then we deal with a stationary stream. However, in most real-life scenarios incoming data is subject to change, leading to the notion of non-stationary streams and concept drift [17]. As concept drift may affect various properties of the stream, we may analyze it from various perspectives. When taking into account the influence on learned decision boundaries, one may distinguish between real and virtual concept drift. The former has an effect on posterior probabilities and may impact unconditional probability density functions. This forces the learning system to adapt to change in order not to lose competence. The latter drift does not have any effect on posterior probabilities, but only on conditional probability density functions. It may still cause difficulties for the learning system, leading to false alarms and unnecessary computational expenses due to rebuilding the classifier. Another view on concept drift comes from the severity of ongoing changes. Sudden concept drift appears when S j is being suddenly replaced by S j+1 , where D j ≠ D j+1 . Gradual concept drift is a transition phase where examples in S j+1 are generated by a mixture of D j and D j+1 with their proportions continuously changing. Incremental concept drift is characterized by a smooth and slow transition between distributions, where the differences between D j and D j+1 are not significant. Additionally, we may face recurring concept drift, in which a state from the k-th previous iteration may suddenly reemerge (D j+1 = D j−k ), which may take place once or periodically.
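The taxonomy above is easy to illustrate with a synthetic generator. The sketch below yields a two-dimensional binary stream whose class-conditional means either switch abruptly at a chosen point (sudden drift) or shift smoothly over a window (incremental-style drift); all distribution parameters are arbitrary illustrative choices.

```python
import numpy as np

def drifting_stream(n=10_000, drift_start=5_000, drift_width=1, seed=0):
    """Yield (x, y) pairs drawn from two Gaussian concepts D_j and D_{j+1}.
    drift_width=1 gives sudden drift; a large width gives a smooth transition."""
    rng = np.random.default_rng(seed)
    means_before = {0: np.array([0.0, 0.0]), 1: np.array([2.0, 2.0])}   # concept D_j
    means_after  = {0: np.array([2.0, 0.0]), 1: np.array([0.0, 2.0])}   # concept D_{j+1}
    for i in range(n):
        y = int(rng.integers(0, 2))
        mix = np.clip((i - drift_start) / drift_width, 0.0, 1.0)        # 0 -> old, 1 -> new
        mean = (1 - mix) * means_before[y] + mix * means_after[y]
        x = rng.normal(mean, 1.0)
        yield x, y
```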
Learning systems designed for data stream mining must take into account the presence of concept drift and embed a solution to tackle it [13]. Three general approaches are possible: (i) rebuilding a classifier from scratch whenever new instances become available; (ii) using a mechanism to detect change occurrence and guide the rebuilding process more effectively; and (iii) using an adaptive classifier that will adjust automatically to the current state of the stream. The first approach is impractical, due to prohibitive costs connected with continuous model deleting and rebuilding. Therefore, the two remaining directions are commonly utilized in this domain. Concept drift detectors are external tools that allow for monitoring the characteristics of a stream, in order to anticipate and detect the change point [24]. Adaptive classifiers are based either on sliding windows [43] or on online learners [40]. Finally, ensemble solutions are widely used for mining drifting data streams [26].
Labeling constraints in data streams
While supervised learning from drifting data streams has gained significant attention in recent years, the vast majority of developed algorithms assume working in test-then-train mode. This means that each new instance (or chunk) is first used to test the current classifier and then utilized for the updating procedure. This scheme easily translates into real-life scenarios, where we use the model at hand to predict labels for new data and then try to update the learner. However, many of these algorithms work under the assumption that true class labels become available the moment after we obtain predictions from the model. While such a set-up is convenient for an experimental evaluation of a classifier in a controlled environment, it completely fails to capture the real-life situation of dealing with limited ground truth availability [30]. Therefore, many supervised classifiers, while delivering excellent performance on benchmark data, cannot be directly used in practical applications [26]. This is caused by the fact that obtaining a true class label for a new instance is far from trivial. If we had access to a theoretical oracle that provided such information every time a new instance becomes available, there would be no need for any classification procedure. In most real-life applications, a domain expert is required to analyze a given instance and label it. While one may theorize that a company developing a specific data stream mining system should have such an expert at its disposal, we cannot forget the costs connected with such a procedure. These can be viewed as monetary costs, as an expert requires payment for sharing their knowledge, as well as time costs, as an expert needs to spend some time analyzing each instance. Therefore, in a real-life scenario, neither constant label querying (as a given company would quickly use up its budget) nor instant label availability is possible [20]. Even if these factors, for various reasons, play a less important role, human throughput must also be considered. A given expert cannot work non-stop and will have limited responsiveness per given time unit. Thus, in the case of massive and high-speed data streams, the assumption of continuous label availability cannot hold.
There are specific application areas where true class labels can be obtained at no cost. Let us consider stock exchange or weather prediction. Here one may observe the current state of the environment in order to verify the prediction. However, one still cannot assume that the ground truth will become instantly available. Usually, such predictions are made forward into the future, whether we consider hours, days or months. Therefore, a given amount of time must pass before the ground truth can be observed. Although here we do not have the problem of associated cost, we still need to deal with label latency [33].
This has led to works on learning from data streams under limited access to class labels. The most popular approaches include active learning solutions that allow selecting only a limited number of instances for labeling [25]. They are usually selected to offer new information to the classifier, instead of reinforcing old concepts [31]. Although there is a good amount of research on active learning for static scenarios [35], there exist but a few solutions that take into account the drifting and evolving nature of streams [22,41]. Another branch of works focuses on semi-supervised learning [32], usually using clustering-based solutions [28]. The most common assumption here is that with every time interval a subset of instances arrives labeled and one may use them to guide the learning process. This is fundamentally different from active learning, as it concentrates on how to make use of unlabeled instances, instead of selecting the proper ones for labeling. There also exist algorithms working under the assumption that only the initial data is labeled and no further ground truth access is to be expected [15].
As we deal with limited access to class labels, any techniques that may allow us to better exploit the given budget and take advantage of unlabeled instances without paying the cost for querying them are highly attractive for data stream mining [45]. Therefore, we focus our attention on combining active and semi-supervised [39] learning paradigms. Many hybrid frameworks have been proposed for batch mode, offline settings and successfully applied to different domains. Combining self-labeling based on uncertainty with active learning strategies is just one possible approach, where the least certain instances are chosen to be queried and the most certain are labeled by a classifier on its own. Graph-based methods build a similarity graph to determine which samples are the most informative and should be queried. The labeled objects are then used to propagate their classes to the closest neighbors in the graph.
The area of hybrid frameworks for mining evolving data streams is relatively recent and still weakly explored. As far as we know, while there are many pure active learning [29,38] or semi-supervised learning [34,44] methods for streaming data, very few hybrid solutions have been proposed. One of them uses variance reduction on data chunks as an active learning strategy and the same instance selection method as a self-labeling step [21]. Another solution is based on an incremental graph building algorithm, Adaptive Incremental Neural Gas (AING), which provides the information needed to decide if a sample should be queried or if a classifier can teach itself with it [9]. An interesting framework was proposed in [18]. It is described as a highly scalable, parallelizable and optimized online Bayesian framework using sequential Monte Carlo and the so-called gap assumption. In our previous work, we proposed the first study on a combination of active and self-labeling solutions for data streams [23]. However, the algorithm described there used a fixed threshold and therefore was unable to efficiently adapt to the presence of concept drift. Addressing this limitation is the cornerstone of the study proposed in this manuscript.
Due to the lack of comprehensive works on hybrid frameworks for data streams, we investigate a framework that combines active learning with different self-labeling strategies based on uncertainty thresholding, in the presence of concept drift and under labeling constraints. While most of the existing solutions take fixed thresholds for granted, even though these thresholds are core elements of proper decisions, we focus on adapting the self-labeling threshold to the incoming data stream.
Combining active learning with self-labeling for drifting data streams
The reduction of the labeling cost can be intensified by using different methods concurrently. Both active learning and semi-supervised learning algorithms aim to enhance the learning process when few labeled instances are available and there is a vast number of unlabeled samples. However, they focus on largely disjoint facets of data utilization. The former group of methods uses limited supervision for selecting only the most informative samples to be labeled, while the latter exploits already gained knowledge to adjust a model in an unsupervised manner. Therefore, active learning can be interpreted as an exploration process, since it asks for the unknown, and semi-supervised learning as an exploitation step, since it exploits the known [37]. The idea behind such a combination is illustrated in Fig. 1, which depicts a situation in which the decision boundary is successively moved away from the green cluster's center. At first, the labeled instances selected by the active learning algorithm delineate a general decision boundary. Since this dataset fulfills the smoothness assumption [12], during the self-labeling step it is more likely that objects will be taken from the dense cluster on the left, so the classification will be correct and the border will be moved in the right direction at no additional cost. This can be explained by the fact that active learning itself, especially on low budgets, will not be able to exhaustively sample the properties of each class. Therefore, we may deal with the issue of underfitting, where we do not have enough labeled instances to correctly estimate the decision boundary. Self-labeling allows for correcting this problem by using the small number of labeled instances to enrich the training set for each of the classes. We therefore find it intuitive to consider hybrid frameworks that combine both techniques to reduce the overall number of labeled instances needed for effective model construction under drifting data streams.
Framework
In this section, we present our generic online framework for hybridizing active and semi-supervised learning. The framework consists of four flexible components.
-Adaptive classifier: a backbone of our framework that can be realized by any learning algorithm capable of incremental/online processing of instances.
-Concept drift detector: a module for monitoring the changes in the stream and informing our framework when it is necessary to take a drifting nature of a distribution into account.
-Active learning strategy: a module for selecting the most valuable instances for the label query process that allows us to obtain labeled instances under the given budget constraints.
-Self-labeling strategy: a module that addresses the underfitting of current concepts by using instances acquired through active learning to increase the size of the labeled training set at no additional cost imposed on the budget.
The adaptive classifier is updated with incoming data samples. It is important that the algorithm be able to work in an online setting, since the whole framework is designed to do so. Another constraint is that the classifier should return a probabilistic outcome, or values which can be approximately treated as posterior probabilities; Naïve Bayes and the k-NN algorithm are examples of such classifiers. A state-of-the-art classifier, which is able to classify data streams efficiently and is responsive to a non-stationary distribution, is the Adaptive Hoeffding Tree (AHT) [8]. It is a modification of the incremental decision tree called Very Fast Decision Tree (VFDT) [14], which uses the Hoeffding bound to expand its structure. AHT applies adaptive windows on every node to adjust local statistics independently and replace outdated nodes, so it is able to adapt to concept drifts. There are a few other variants of the Hoeffding Tree; one of them is the Randomized Hoeffding Tree (RHT) [3], which introduces additional randomization to the learning process. More complex classifiers like online ensembles can also be used in our framework.
For example, there is the Recurring Concept Drift framework (RCD), which handles drifts by storing a separate base learner for each of the distinguished concepts [19], or the Accuracy Weighted Ensemble (AWE), which maintains a committee of models that are weighted by their accuracy on incoming data [42]. We find our hybrid framework for learning on a budget highly flexible, so it can be adjusted to many real-life data stream mining scenarios.
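To make the classifier requirements concrete, the following Python sketch implements one of the simplest learners that satisfies them: an incremental Gaussian Naïve Bayes providing online updates and approximate posteriors. The class and method names (partial_fit, predict_proba_one) are our own illustrative conventions, reused in later sketches; any incremental learner with equivalent operations (e.g., a Hoeffding tree implementation) could take its place.

    import math
    from collections import defaultdict

    import numpy as np

    class IncrementalGaussianNB:
        # Minimal online Gaussian Naive Bayes: incremental updates + posterior estimates.

        def __init__(self):
            self.count = defaultdict(int)   # instances seen per class
            self.mean = {}                  # running mean per class
            self.m2 = {}                    # running sum of squared deviations per class

        def partial_fit(self, x, y):
            x = np.asarray(x, dtype=float)
            if y not in self.mean:
                self.mean[y] = np.zeros_like(x)
                self.m2[y] = np.zeros_like(x)
            self.count[y] += 1
            delta = x - self.mean[y]
            self.mean[y] += delta / self.count[y]            # Welford's running update
            self.m2[y] += delta * (x - self.mean[y])

        def predict_proba_one(self, x):
            x = np.asarray(x, dtype=float)
            total = sum(self.count.values())
            log_scores = {}
            for y, n in self.count.items():
                var = self.m2[y] / n + 1e-6                  # smoothed per-feature variance
                log_lik = -0.5 * float(np.sum(np.log(2 * math.pi * var) + (x - self.mean[y]) ** 2 / var))
                log_scores[y] = math.log(n / total) + log_lik
            if not log_scores:
                return {}                                    # nothing learned yet
            m = max(log_scores.values())
            exp_scores = {y: math.exp(s - m) for y, s in log_scores.items()}
            z = sum(exp_scores.values())
            return {y: v / z for y, v in exp_scores.items()}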
The concept drift detector is responsible for providing indications of changes in a data stream. This information can be used by different strategies to adjust their parameters and boost the overall performance. The most popular drift detectors are the Drift Detection Method (DDM) [16] and the Early Drift Detection Method (EDDM) [6], which will be described later.
The active learning strategy can be realized via any online strategy from the group of stream-based selective sampling methods [37]. In our work, we incorporate an uncertainty sampling strategy [5] which makes its decision based on some uncertainty measure, usually the maximum a posteriori probability or an entropy. The measure is compared with a threshold; if it is low enough, the sample should be queried. For the former measure the inequality is given as follows:

p(ŷ|X) = max_y p(y|X) < θ,

where X is a vector of an instance's features, p(y|X) is the conditional class probability and θ is the threshold. The lower the threshold, the less certain the samples chosen to be queried. Therefore, lower values are preferred. However, one should notice that too strict conditions may favor only those objects which are close to a decision boundary, so changes that occur far from the boundary may never be detected. The issue of determining the appropriate threshold for uncertainty sampling strategies has been comprehensively studied in [41]. We construct our active learning module based on the results from this work. One of the proposed reliable strategies is the Variable Uncertainty Strategy with Randomization (RandVar). In Alg. 1, each incoming instance X is forwarded to a classifier L which returns posterior probabilities for the sample. The uncertainty measure is determined through the maximum a posteriori rule. There is a threshold θ that is used for decision making. These are the usual steps. The strategy introduces two additional operations.
Algorithm 1: Variable Uncertainty Strategy with Randomization

Data: incoming instance X, trained classifier L, threshold adjustment step s ∈ (0, 1], variance of the threshold randomization σ
Result: labeling ∈ {true, false}
Initialization: θ ← 1, store the latest value during operation
ŷ ← arg max_y p(y|X), where y ∈ {1, ..., c};
θ_rand ← θ × η, where η is drawn from N(1, σ);
if p(ŷ|X) < θ_rand then
    θ ← θ(1 − s);
    return labeling ← true
else
    θ ← θ(1 + s);
    return labeling ← false

Firstly, it modifies the threshold value with an adjustment step s. The idea is to lower the value when a concept drift occurs and the classifier returns uncertain labels more frequently, and to increase it when there is a stable period and the model is more sure about its decisions. This step leads to more balanced budget spending, so the labeling is more uniformly distributed over time. Secondly, the threshold is randomized by multiplying it by a variable drawn from the Gaussian distribution with mean m = 1 and predefined variance σ. This step ensures that the decision space is sampled more uniformly, since instances which are far from the decision boundary are more likely to be queried. We find the strategy universal and, according to the presented results, reliable, so we choose it as our base active learning strategy. We do not perform any further investigation on the matter. Instead, we focus on selecting the appropriate self-labeling strategy which will cooperate well in the hybrid framework.
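A compact Python rendering of this querying logic, following the interface conventions from the classifier sketch above, is given below. It is our reading of the verbal description (threshold lowered after a query, raised otherwise, and randomized by a Gaussian factor centered at 1), so it should be treated as an illustrative sketch rather than the reference implementation of RandVar.

    import random

    class RandVarQueryStrategy:
        # Variable Uncertainty Strategy with Randomization (sketch of Alg. 1).

        def __init__(self, s=0.01, sigma=1.0):
            self.theta = 1.0     # labeling threshold, kept between calls
            self.s = s           # threshold adjustment step
            self.sigma = sigma   # spread of the multiplicative Gaussian randomization

        def query(self, posteriors):
            # posteriors: dict mapping class -> p(y|X) from the current classifier
            max_posterior = max(posteriors.values())
            theta_rand = self.theta * random.gauss(1.0, self.sigma)
            if max_posterior < theta_rand:    # uncertain instance: ask the expert
                self.theta *= (1.0 - self.s)  # lower the threshold to slow budget spending
                return True
            self.theta *= (1.0 + self.s)      # stable period: raise the threshold
            return False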
The self-labeling strategy complements the selected active learning strategy, acting as our semi-supervised learning approach. We assume that unsupervised learning based on the knowledge provided by the queries will lead to a more exhaustive and accurate adaptation on the same budget. In addition, we expect that the active learning strategy will guide self-labeling properly, thus reducing the risk of error amplification. This module should also work in an online setting. Building on the previous method, we propose different strategies for changing the self-labeling threshold in response to the incoming data stream. We investigate self-reliant approaches, as well as approaches using feedback from the active learning strategy and drift detectors. To the best of our knowledge, such research has not been done before.
Algorithm 2: The generic hybrid algorithm for combining active learning and self-labeling

Data: labeling budget B, QueryStrategy, SelflabelingStrategy, DriftDetector
Result: classifier L at every iteration
Initialization: b̂ ← 0
repeat
    receive incoming instance X;
    get class label ŷ predicted by classifier L;
    if b̂ < B and QueryStrategy(X) = true then
        request the true label y of instance X;
        update labeling expenses b̂;
        update drift indications using DriftDetector(y, ŷ);
        update classifier L with (X, y);
    else if SelflabelingStrategy(X, feedback) = true then
        update classifier L with (X, ŷ);
until stream ends;

The proposed generic algorithm is presented in pseudo-code form as Alg. 2. For each incoming sample, the actual budget spending b̂ is checked. For potentially infinite streams it can be estimated as the ratio of already labeled instances to all received samples. If the value does not exceed a given budget B and the response from the active learning strategy (QueryStrategy) is positive, the sample is queried. After receiving a label, the expenses, drift indications and classifier are updated. If the query strategy rejects the sample, it can still be subject to the self-labeling approach (SelflabelingStrategy). This depends on the response of the second strategy we add to the classification flow. The method may also be optionally supplied with feedback from the active learning strategy or the drift detector. It is worth noting that the chosen strategies and their measures are, in fact, complementary. Very uncertain data samples will be forwarded to an expert and, obviously, they will not be used for self-training. On the other hand, the objects for which the algorithm is very confident will not be queried and will instead be used for training without additional supervision. We put the active learning step before the self-labeling, because we assume that the latter requires sufficient guidelines from the former, thus it is better to spend the whole available budget. The proposed self-labeling strategies for the hybrid online approach are described in the next sections.
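The control flow of Alg. 2 can be paraphrased in a few lines of Python using the hypothetical interfaces introduced above (partial_fit/predict_proba_one on the classifier, query on the active learning strategy, and an accept method plus an optional update feedback hook on the self-labeling strategies sketched in the following sections). This is an illustrative skeleton of the loop, not the authors' implementation.

    def run_hybrid_stream(stream, clf, query_strategy, self_label, budget):
        # Process a stream with combined active learning and self-labeling (sketch of Alg. 2).
        # `budget` is the allowed fraction of labeled instances, e.g. 0.1 for B = 10%.
        labeled, seen = 0, 0
        for x, y_true in stream:              # y_true is only 'paid for' when actually queried
            seen += 1
            posteriors = clf.predict_proba_one(x)
            if not posteriors:                # cold start: label the very first instance
                clf.partial_fit(x, y_true)
                labeled += 1
                continue
            y_hat = max(posteriors, key=posteriors.get)
            spent = labeled / seen            # running estimate of the budget usage
            if spent < budget and query_strategy.query(posteriors):
                labeled += 1
                if hasattr(self_label, "update"):      # informed strategies consume error feedback
                    self_label.update(y_hat == y_true)
                clf.partial_fit(x, y_true)             # supervised update with the true label
            elif self_label.accept(posteriors):
                clf.partial_fit(x, y_hat)              # self-labeled update with the prediction
        return clf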
Blind self-labeling strategies
The first group of strategies comprises blind approaches [17]. These methods do not use any explicit information about changes that occur in a data stream. They make some assumptions and use heuristics to handle data evolution. Classifiers guided by them adapt gradually to concept drifts, slowly forgetting old concepts and learning new ones. This takes time, as there is no clear and direct indication of a change. Some blind strategies for active learning were proposed in the previously mentioned work [41]. Another one is, for example, the adaptation method used in VFDT that incorporates a sliding window technique for its statistics [14].
Fixed
The first and simplest blind strategy we use in our hybrid framework is based on a fixed threshold; we call it the Fixed strategy. To determine a confidence measure, which is compared with the self-labeling threshold γ, the maximum a posteriori rule is used, since it returns the value that is chosen by a model to classify an instance:

p(ŷ|X) = max_y p(y|X) ≥ γ.

The inequality is opposite to the active learning condition. It prefers those samples for which a classifier displays the highest certainty, so there is a higher possibility that the classifier's own decision will be correct. This strategy is the most popular one in the semi-supervised streaming frameworks discussed before, as the research on the self-labeling module is usually very limited.
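In code, the Fixed strategy reduces to a single comparison against a predefined confidence value (γ = 0.95 is used in the experiments reported later); the class below follows the accept interface assumed in the loop sketch of Alg. 2 and is, again, only an illustration.

    class FixedSelfLabeling:
        # Blind strategy: self-label whenever the classifier is confident enough.

        def __init__(self, gamma=0.95):
            self.gamma = gamma                          # fixed confidence threshold

        def accept(self, posteriors, feedback=None):    # feedback is ignored (blind strategy)
            return max(posteriors.values()) >= self.gamma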
Uniform and Randomized Uniform
The main weakness of the previous strategy is the need to predefine the threshold value. During a concept drift the reliability of a classifier may change significantly, so the distribution of posterior probabilities will also be different: there will be more low values than during a stable period. It is not obvious which values, and when, can be considered high enough, or even that they should actually be high.
Our next two strategies are based on the methods presented in [41]. The first one is Uni. It incorporates the idea of balancing budget spending over time (Alg. 1). Since there is no budget for self-labeling, we simply aim to ensure that the process will be uniformly frequent, regardless of the stream's state.
Algorithm 3: Uni self-labeling strategy

Data: incoming instance X, trained classifier L, threshold adjustment step s ∈ (0, 1]
Result: self-labeling ∈ {true, false}
Initialization: γ ← 1, store the latest value during operation
ŷ ← arg max_y p(y|X), where y ∈ {1, ..., c};
if p(ŷ|X) ≥ γ then
    γ ← γ(1 + s);
    return self-labeling ← true
else
    γ ← γ(1 − s);
    return self-labeling ← false

We achieve this by increasing the threshold when the classifier L becomes more confident, so we prevent it from being overfitted. On the other hand, we decrease the threshold when some changes occur and decisions are less confident; in that case the model should be sufficiently supplied with additional labels and adjust itself to changes faster. The second strategy is the Randomized Uniform (RandUni) strategy. In fact, it is the RandVar strategy (Alg. 1) with the threshold increased and decreased as in the previous algorithm. We assume that the randomization step may assure some helpful diversification, especially after abrupt changes, when γ is usually far from a more stable value.
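Our reading of Uni in code follows the same throttling idea: γ rises after every accepted self-label and falls when the classifier is uncertain; RandUni would additionally multiply γ by a Gaussian factor exactly as in the RandVar sketch above. As before, this is an interpretation of the verbal description, not the original implementation.

    class UniSelfLabeling:
        # Blind strategy balancing the self-labeling rate over time (sketch of Alg. 3).

        def __init__(self, s=0.01):
            self.gamma = 1.0                            # self-labeling threshold
            self.s = s                                  # adjustment step

        def accept(self, posteriors, feedback=None):
            if max(posteriors.values()) >= self.gamma:
                self.gamma *= (1.0 + self.s)            # confident period: self-label less eagerly
                return True
            self.gamma *= (1.0 - self.s)                # uncertain period: lower the bar
            return False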
Inverted Uncertainty
The last blind strategy is Inverted Uncertainty (InvUnc). In this approach, we use the active learning threshold θ from the RandVar algorithm to control the self-labeling threshold γ using a simple transformation. We assume that γ should be higher during a concept drift and lower for stable periods. This is motivated by the assumption that when a concept is changing and a model is insufficiently adapted, there are no formed internal class structures to be exploited, so only the most certain decisions should be made. On the other hand, when a concept is stable these structures are more likely to be present, therefore we should intensify self-labeling in order to be able to exploit them. In RandVar the threshold is higher during stable periods and lower when drifts occur. The self-labeling condition is then given as follows:

p(ŷ|X) ≥ (1 − θ) + 1/c,

where p(ŷ|X) is the maximum a posteriori probability and c is the number of possible classes. The 1 − θ factor is self-explanatory; that is why we call the method Inverted Uncertainty. We add the 1/c factor due to the fact that p(ŷ|X) is never lower than it, so regardless of small disturbances introduced by the randomization step for small values of s (which should be used [41]), the threshold θ is in the range [1/c, 1]. We want γ to be in the same range.
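Under this reading, the two thresholds are coupled directly: the self-labeling threshold is recomputed from the current active learning threshold as γ = 1 − θ + 1/c. A minimal sketch, reusing the RandVar class from above:

    class InvertedUncertaintySelfLabeling:
        # Blind strategy deriving gamma from the current active learning threshold theta.

        def __init__(self, query_strategy, n_classes):
            self.query_strategy = query_strategy    # RandVarQueryStrategy instance owning theta
            self.n_classes = n_classes

        def accept(self, posteriors, feedback=None):
            gamma = (1.0 - self.query_strategy.theta) + 1.0 / self.n_classes
            return max(posteriors.values()) >= gamma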
Informed self-labeling strategies
To improve the responsiveness of the adaptation process, dedicated change detectors can be introduced. Those algorithms are directly oriented towards indicating changes, thus they provide accurate information about stream evolution. Strategies that use such indicators are called informed [17]. The simplest drift detector can be based on a classification error rate. The two most popular algorithms, already mentioned, are DDM and EDDM. We use them as the basis of our next three self-labeling strategies, in order to improve their adaptation capabilities in the presence of concept drift.
Continuous DDM
In this approach, we use DDM indications to create continuous control over the self-labeling threshold. We call the method Continuous DDM (cDDM). The standard DDM is designed to indicate three discrete states of a data stream: stable, warning and change. The core assumption of this method is that the error rate p of a classifier should be approximately constant and low when a concept is stable. When a drift occurs, the error should be significantly higher than the value registered for the static distribution, since the classifier has not adapted to the new concept yet. Therefore, changes can be detected by tracking the actual error rate p along with its standard deviation s and comparing it with the registered error for the stable period. The algorithm makes decisions based on the condition:

p + s ≥ p_min + α·s_min,

where p_min and s_min are the mean error and its standard deviation registered for a stable concept after at least 30 samples. The α parameter is used to determine thresholds for the warning (α = 2, the confidence interval is 95%) and change (α = 3, the confidence interval is 99%) states [16]. Such discrete DDM can be used to reset a classifier and itself when a drift occurs [41].
Since we want to control the self-labeling threshold continuously and in accordance with the DDM indications, we take the whole algorithm as it is, excluding the classifier reset, and we simply extract the tracked, continuous error measure ε = p + s. We use this value analogously to the InvUnc idea (Sec. 4.2.3): the threshold should be higher during a concept drift and lower during a stable period. Decisions are based on the following condition:

p(ŷ|X) ≥ tanh(2(ε + 1/c)).

We add 1/c to additionally penalize a situation when a classifier simply guesses labels, for which ε = 1 − 1/c. For example, when ε = 0.5 for two classes, then ε + 1/c = 1, which is the maximum possible posterior probability returned by the model. Since the error can be higher than 1 − 1/c, we have to normalize the overall value; we use tanh 2x for this, which is approximately correct, so γ is never higher than 1. At the same time, the lowest threshold γ is always higher than 1/c, therefore the condition in this strategy is a bit stricter than in the others. This can be justified since for posterior probabilities close to 1/c a classifier is very uncertain about its decisions.
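The sketch below is our illustrative rendering of cDDM: a small DDM-style tracker of the error rate on queried instances feeding the reconstructed threshold formula. Its update method plays the role of the drift-detector feedback step in the loop of Alg. 2, and the warm-up length, reset behavior and other details are simplified relative to the full DDM.

    import math

    class ContinuousDDMSelfLabeling:
        # Informed strategy: threshold driven by a DDM-style running error estimate.

        def __init__(self, n_classes):
            self.n_classes = n_classes
            self.n = 0        # number of queried instances with known correctness
            self.p = 0.0      # running error rate
            self.s = 0.0      # standard deviation of the error estimate

        def update(self, correct):
            # Called with the correctness of the prediction on every queried instance.
            self.n += 1
            error = 0.0 if correct else 1.0
            self.p += (error - self.p) / self.n                    # incremental mean error
            self.s = math.sqrt(self.p * (1.0 - self.p) / self.n)   # Bernoulli deviation, as in DDM

        def accept(self, posteriors, feedback=None):
            if self.n < 30:                                        # DDM-style warm-up period
                return False
            epsilon = self.p + self.s
            gamma = math.tanh(2.0 * (epsilon + 1.0 / self.n_classes))
            return max(posteriors.values()) >= gamma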
Continuous EDDM
Another drift detector, EDDM, presents a slightly different approach. We use it for our second informed strategy, called Continuous EDDM (cEDDM). Instead of considering the mean error, the algorithm calculates the average distance p between two misclassified objects and also its standard deviation s. When a concept is stable, the average distance increases due to the model adaptation process. When the concept starts to change, the gaps between two mistakes become shorter. The algorithm compares the current mean distance with the maximum registered:

(p + 2s) / (p_max + 2s_max) < β,

where p_max and s_max are the statistics registered for a stable period after at least 30 errors. β is a threshold for the similarity between two error distance distributions. It is empirically recommended to set β = 0.95 for warning and β = 0.9 for change [6]. The standard EDDM is used just like DDM, including resetting. Again, to control self-labeling in a continuous way, we remove the classifier reset step and extract the similarity measure ζ = (p + 2s) / (p_max + 2s_max) as the reference value. One should notice that ζ ∈ [0.9, 1], because when a drift is detected at ζ = 0.9 the EDDM is reset and ζ stays unchanged until the next update is possible. The idea of the control is the same as for InvUnc and cDDM. The condition that is used to check if we should use an instance in the self-labeling process is given as follows:

p(ŷ|X) ≥ f(ζ),

where f(ζ) is any decreasing function that maps the range [0.9, 1] of ζ onto the range [1/c, 1] of the threshold. In our case we choose a simple linear function which fulfills the above requirements.
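A matching sketch for cEDDM, under the same caveats as the cDDM one: the tracker follows the distance (in instances) between consecutive errors on queried data, and the similarity ratio ζ is mapped onto a threshold by a decreasing linear function over [0.9, 1], which is one admissible choice consistent with the description above.

    import math

    class ContinuousEDDMSelfLabeling:
        # Informed strategy: threshold driven by the EDDM distance-between-errors statistic.

        def __init__(self, n_classes):
            self.n_classes = n_classes
            self.since_last_error = 0
            self.n_errors = 0
            self.p = 0.0                   # mean distance between consecutive errors
            self.s = 0.0                   # its standard deviation
            self._m2 = 0.0                 # running sum of squared deviations
            self.p_max, self.s_max = 0.0, 0.0

        def update(self, correct):
            self.since_last_error += 1
            if correct:
                return
            d = self.since_last_error      # distance to the previous error
            self.since_last_error = 0
            self.n_errors += 1
            delta = d - self.p
            self.p += delta / self.n_errors
            self._m2 += delta * (d - self.p)
            self.s = math.sqrt(self._m2 / self.n_errors)
            if self.p + 2 * self.s > self.p_max + 2 * self.s_max:
                self.p_max, self.s_max = self.p, self.s   # remember the best (stable) state

        def accept(self, posteriors, feedback=None):
            if self.n_errors < 30:
                return False
            zeta = (self.p + 2 * self.s) / (self.p_max + 2 * self.s_max)
            zeta = min(1.0, max(0.9, zeta))               # EDDM resets below 0.9
            # decreasing linear map from [0.9, 1] onto [1/c, 1] (our choice of f)
            gamma = 1.0 - (zeta - 0.9) / 0.1 * (1.0 - 1.0 / self.n_classes)
            return max(posteriors.values()) >= gamma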
Windowed Error
The last strategy is Windowed Error (WinErr). This method is, indeed, a sliding window for the mean error. Instead of resetting drift indications p and s, like in cDDM, we track the strictly continuous error that changes dynamically within the window. We use the same self-labeling condition as in cDDM.
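WinErr differs from cDDM only in how the error statistic is maintained: a sliding window (width 100 in the later experiments) instead of cumulative statistics. A minimal sketch of the windowed tracker, whose output ε = p + s would be plugged into the same threshold formula as in the cDDM sketch:

    import math
    from collections import deque

    class WindowedErrorTracker:
        # Mean error and its deviation over the most recent queried instances.

        def __init__(self, width=100):
            self.window = deque(maxlen=width)

        def update(self, correct):
            self.window.append(0.0 if correct else 1.0)

        def error_level(self):
            if not self.window:
                return 0.0
            p = sum(self.window) / len(self.window)
            s = math.sqrt(p * (1.0 - p) / len(self.window))
            return p + s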
It is worth noting that a limited labeling budget may have a significant influence on the indications generated by drift detectors. Due to the lack of a sufficient number of error indicators (coming with labels), the internal estimators used in the detectors will be inaccurate, impeding the process of drift detection. This may be a serious problem in the case of discrete (binary) drift detection, when one wants to use these indications to reset a model and retrain it from scratch: if there is not enough information, the detectors may never be triggered or they may act at random. However, in our work, we do not use drift detectors in such a way. Instead of working in the retraining mode, we continuously update our strategies in an informed manner using the error (DDM) or the distance between errors (EDDM) as a continuous input to our self-labeling strategies. By doing this we alleviate the mentioned problem, since we do not have to rely on rare, binary and unreliable drift detections. Obviously, we may still be forced to use imperfect estimations, especially for very low budgets, but this is something we have to accept, due to the assumption of strictly limited access to ground truth. There is very little that can be done without more labels.
Still, the drift detectors used could be further investigated in the given settings, and it is possible that some improvements could be made to their performance while working on a budget. However, since that would require a broader in-depth statistical analysis in various scenarios, we consider it beyond the scope of this work.
Experimental study
In this section, we describe the experiments conducted to prove the validity of the introduced strategies for active learning and self-labeling hybridization. Firstly, we present the data streams that are used in the evaluation process. Next, we briefly delineate the problem of measuring performance in a streaming data environment and describe the chosen set-up. Finally, the obtained results are presented together with a commentary and conclusions.
Set-up
Data streams. To examine the adaptation capabilities of the presented algorithms we used a set of drifting data streams. We attempted to explore our strategies in the context of a variety of data streams and concept drifts. Therefore, we utilized real streams from different domains and with diversified properties. Real data streams allow for conducting relatively reliable tests of adaptive methods in real-world environments. They are characterized by mixed types of drifts coming from the underlying nature of the datasets, thus leading to more realistic learning difficulties. Details of the used real streams are given in Tab. 1.
Evaluation methods. Evaluation in streaming data environments enforces a different approach than those used in standard batch mode scenarios. Reliable methods like k-fold cross validation may turn out to be impractical due to their time-consuming nature. Furthermore, other techniques which are dedicated to data streams, like holdout [27], may be inappropriate for streams with concept drifts. Simpler error-estimation procedures that cope with dynamic online settings are possible. One method, which can be successfully used for the considered evaluation, is the prequential evaluation p_e with a sliding window over the most recent instances:

p_e = (1/ω) Σ_{i=t−ω+1}^{t} L(y_i, y*_i),

where ω is the window size and L(y, y*) is a chosen loss function for predicted outputs y and true labels y*. Using this approach, each sample is first utilized for testing and later for learning. The chosen accuracy measure is recalculated instance by instance. It provides a relatively good measurement sensitivity, but it depends highly on the chosen window size. We chose this approach with ω = 1000 to estimate the average error within the window for time series measurements and, in addition, to calculate a global average for the whole stream.
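For clarity, the helper below computes this windowed prequential accuracy in the fully labeled case (test-then-train on every instance); in the budgeted experiments the training step would be replaced by the hybrid loop of Alg. 2 while the testing step stays the same. All names follow the illustrative conventions used in the earlier sketches.

    from collections import deque

    def prequential_accuracy(stream, clf, window=1000):
        # Test-then-train evaluation: accuracy over a sliding window of recent instances.
        losses, curve = deque(maxlen=window), []
        for x, y in stream:
            posteriors = clf.predict_proba_one(x)
            y_hat = max(posteriors, key=posteriors.get) if posteriors else None
            losses.append(0.0 if y_hat == y else 1.0)        # test first ...
            clf.partial_fit(x, y)                            # ... then train (fully labeled case)
            curve.append(1.0 - sum(losses) / len(losses))    # windowed accuracy estimate
        return curve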
Examined strategies. In our experiments, we examined the proposed strategies: Fixed, Uni, RandUni, InvUnc, cDDM, cEDDM and WinErr on the presented real data streams, using the chosen evaluation method. As a baseline, we selected the pure active learning strategy RandVar introduced in Sec. 4.1. The threshold adjustment step was set to s = 0.01 and the standard deviation of the randomization to σ = 1. We call this strategy ALRV in the experiments. In addition we include results for random selection (ALR) [41] and sampling (ALS) [11]. For the Fixed strategy we empirically found that very high threshold values are preferable on average, therefore we set γ = 0.95. The parameters of Uni and RandUni were the same as in ALRV. The internal configuration of the drift detectors for cDDM and cEDDM was set to default, as stated in [16,6], so the minimum number of registered samples was n = 30 and, for errors, also n_e = 30. The width of WinErr's window was set to w = 100, based on empirical observations that smaller windows are more reactive and accurate when the rate of information is low (a limited labeling budget), which is indicated to some extent in the literature [7].
Labeling budgets. We compared the effectiveness of these strategies for different but generally low and very low budgets B = {1%, 5%, 10%, 20%, 50%}. As base learners, we chose different classifiers to show that the presented framework is indeed generic. We picked two single classifiers and two ensembles that have already been mentioned in Sec. 4.1. These are two different Hoeffding Trees, AHT and RHT, and two ensembles, RCD with Naïve Bayes as base learners and AWE with perceptrons. They were tested with all the presented strategies.
Results
The results are presented in two forms: tables and performance series. For the former, the overall average accuracy is given. The best results are in bold. In addition, cells with scores higher than the best active learning strategy are highlighted in green. For the graphs presenting performance series, the accuracy within the sliding window is included. These are the best AL and self-labeling results obtained for each presented case. The included results for single classifiers (Tab. 2 and 3) and ensembles (Tab. 4 and 5) clearly show that the proposed strategies were able to enhance the accuracy for a wide range of budgets. The single classifiers achieved similar performance on average: for some cases, the AHT algorithm performed better, for example on Cover, Spam or Elec, but for the rest, like Sensor or Gas, the RHT classifier was more efficient. However, it can be easily noticed that the latter was improved much more frequently than the former. Although several improvements can also be seen for both RCD and AWE, there are some cases, the Sensor stream for instance, in which they were not able to learn concepts properly with or without a self-labeling module, so they perform slightly worse in general than the single classifiers. This is especially interesting since the diversity of the ensembles should, theoretically, provide better adaptivity. Although it may be trivial, one should also notice that the overall accuracy rises for all classifiers when the budget increases.
Both single learners performed well on the Sensor stream. AHT using the Fixed or WinErr strategy improved results for almost all budgets from nearly 3% for B = 1% to more than 15% for B = 10%. One should notice that there are over 50 classes for the stream, so the improvement is very significant. In Fig. 2 it can be clearly seen that for AHT with WinErr and budget values equal to 5% or 10% the accuracy within the sliding window is more frequently on a higher level than for ALR. RHT boosted the learning process for almost all strategies on 1% and 5% budget, achieving the best results on average for such settings. Neither of the committees was able to learn the concept properly. Even for the basic approaches without self-labeling they worked practically at random, so there was no chance that a semi-supervised learning step could improve something without a sufficiently reliable model. The most impressive improvement was obtained for the Spam stream. In the case of AHT, the enhancement is definitely present on low budgets for all strategies, excluding Uni. Among others, the InvUnc strategy gave almost 60% gain in accuracy for B = 1% and more than 30% for B = 5%. cEDDM was able to increase the accuracy by 10% even for B = 10%. The graph for the stream and the cEDDM strategy (Fig. 2) shows that ALRV was not able to learn new concepts quickly enough for both 1% (after the second drift) and 10% (after the first change). For B = 50% there is no noticeable gain. RHT provides slightly lower accuracy for low budgets, however, it was able to improve the learning process for the whole range of budgets when the InvUnc, cDDM, cEDDM or WinErr strategy was used. The InvUnc strategy gave almost 30% gain on B = 10%, cDDM gave nearly 20% on B = 20% and more than 12% on B = 50%, to name a few. Out of the two ensembles, AWE performed much better than RCD. The former achieved as good results as the single classifiers and even the best result (90.54%) overall, using WinErr on B = 50%. The latter improved adaptivity mainly for the lowest budget.
A very similar relation between AHT and RHT, regarding average accuracy and the rate of improvements, can be noticed for the Cover stream. AHT worked best with self-labeling approaches for B = 1%, while RHT did so for all budgets. RCD was able to significantly boost the learning mainly for high values of the budget. For B = 50% almost all self-labeling strategies provided a gain from 10% to 15%. In Fig. 4 it can be seen that for B = 20% the accuracy was more frequently between 50% and 75% than in the case of AL, and for B = 50% above 75% most of the time. The ALRV strategy remained unaffected by the increasing budget, even when five times more labels were provided. In the case of the Power data stream, the hybrid framework using AHT was stable for practically all strategies and budgets, providing some improvements at the same time, mainly for the Fixed and WinErr strategies. However, the boost in performance was rather minor, between 1% and 2%. When RHT was used instead of AHT, it can be seen that the active learning strategies alone were not capable of maintaining relatively efficient models in the dynamic environment. The accuracy dropped drastically to less than 5% and in Fig. 3 we can observe that the model is completely useless without a self-labeling module. After adding the InvUnc, cDDM, cEDDM or WinErr strategy the average accuracy rose by about 10%. In the given example for InvUnc we can see that the algorithm starts reacting to the changes even for a very small B = 1%. A few improvements can be noticed for the Gas stream, especially when the informed strategies are combined with both single classifiers. A more significant gain is present for RHT and it is another case in which a model using the last four strategies provided significant boosts of accuracy - from 2% for InvUnc on B = 10% to almost 15% for cEDDM on B = 50%. In Fig. 3 we can see that the cDDM strategy elevates the accuracy for the 10% and 50% budgets, especially in the second half of the stream. For B = 1% the learning process is slightly more intensified, so it looks a bit more dynamic; however, it does not result in better performance on average. The RCD ensemble improved learning only for the lowest budget. In the case of AWE we can see many more improvements; however, the classifier did not learn the concepts as well as all the rest of the models, being about three times worse than them.
For the Usenet stream, all algorithms worked well on average. Self-labeling strategies with AHT and RCD improved the accuracy only for low budgets up to 10%, with RHT they did so for the higher ones, and the Uni, RandUni and cEDDM strategies worked well with AWE for all budgets. In the case of the Poker data stream, only the AHT and RCD algorithms with the Fixed strategy were able to convincingly enhance the performance for almost all budgets. The ensemble provided a notable gain with all self-labeling strategies on B = 50%. For the Elec stream, we can see improvements mainly for the committees; however, once again, only RCD performed stably and can be compared with AHT, which did not cooperate well with self-labeling algorithms but achieved the best accuracy on average. On the graph below (Fig. 4) we can observe how the Fixed self-labeling strategy improved learning for all presented budgets, for example at the very beginning, where the accuracy was elevated above 75% for B = 10%. Pure active learning approaches on low budgets are sometimes not able to exploit new concepts sufficiently, since there are not enough labeled instances to reinforce a new concept discovery. The hybrid approach that uses a self-labeling step helps with exploiting the concept and creating a more adequate model, without affecting the available budget. This is especially valuable when taking into account that the aim of this approach was to develop methods for learning with very limited access to true class labels. For higher values of the budget, there are somewhat fewer improvements, but there are still many of them. Well-modeled concepts may be more easily exploited by self-labeling, since there is a lower risk that an error will be amplified when class boundaries are more or less correct. One must, however, remember that very large budgets are unrealistic and prohibitive in most real-life scenarios, so we cannot increase the budget as much as may be necessary. This is why we did not consider budgets higher than 50%. On the other hand, when a lot of labeled instances are available, the active learning strategy may be more likely to dominate the learning process and it may be sufficiently good in sampling incoming concepts, while a self-labeling strategy may only impede the process. Such scenarios lead to the situation where the feasibility of a hybrid approach on higher budgets is highly dependent on the stream to which it is applied.
Aggregated results and conclusions
Let us summarize the findings from this manuscript and formulate a set of observations and recommendations regarding the usefulness and applicability areas of the proposed hybrid methods for mining drifting data streams on a budget. The following two measures are used for this purpose. The first one is the average accuracy (Acc) for all self-labeling strategies over all examined data streams. We calculate it for each classifier and budget separately. The second measure is the fraction (Fh) of cases (each cell for a self-labeling strategy in the result tables is a single case) in which a hybrid approach achieved a better result than any reference method relying only on active learning. The fraction is calculated for each budget and classifier individually. As long as our baseline classifier achieved better than random accuracy, we can assume that improvements are relevant. Therefore, we excluded the AWE results for the Sensor and Power streams. The fraction measure tells us whether a strategy works well with a given base classifier. A comprehensive summary of all results regarding classifiers, strategies and budgets is presented below.
Budget matters. For real data streams, we can observe that improvements occur for all budgets, but they are present mainly on lower budgets (Fig. 5). The most frequent enhancements can be seen for RCD (in more than half of the cases on B = 1%) and RHT (from about 40% on B = 1% to almost 55% on B = 20%). This is very encouraging, since low budgets are the most realistic and practical ones. The RHT classifier reinforced with self-labeling was able to achieve results comparable with the very solid AHT. We distinguish the RHT classifier on low budgets as the best for the proposed framework. The good influence of the hybrid approach was also observed for the highest considered budget, on which improvements are relatively frequent. The explanation of why self-labeling works for very low and very high budgets has been presented at the end of Sec. 5.2. On the other hand, we can notice a local minimum on B = 10% and B = 20%. It might mean that the hybrid framework is not well suited to in-between cases, when models are not very reliable and, at the same time, there are fewer chances for improvements. We can also observe that the average accuracy increases with the budget. This is very intuitive, as more labeled instances support more accurate learning.
Generic wrapper. We can see that the hybrid framework was able to improve results in many cases and for all of the examined classifiers. For most of the considered real scenarios at least one better hybrid solution could be found (see Sec. 5.2). Since the framework has been designed as a wrapper (the classifier is a modular component), this is a very important observation. One can easily apply the solution to any online classifier in order to boost its performance, especially when facing very limited access to class labels. We recommend experimenting with the framework if results are not satisfying and the classifier cannot be changed. Although the framework is able to improve all considered classifiers, some differences can be noticed. As already mentioned, the highest impact was registered for the RHT classifier. Moreover, both Hoeffding Trees achieved the highest average accuracy in almost all cases (see Fig. 5). In general, single classifiers were able to integrate with self-labeling better than ensembles, which, besides the worse results, are also slower. However, we suppose that committees can be improved in the context of semi-supervised learning. The main reason why they perform worse than single classifiers is probably the fact that for low budgets there are not enough labeled samples to generate sufficiently diversified ensembles. One can notice that the accuracy of the RCD classifier on higher budgets is closer to RHT than on lower ones (Fig. 5).
Informed over blind. The informed self-labeling strategies, cDDM, cEDDM and WinErr, generally performed better for real streams than the blind approaches, regarding both accuracy and the fraction of enhancements (see Fig. 6). They worked well with all classifiers, but the most significant difference can be observed for the RHT classifier, for which improvements were registered in between 60% and 65% of cases. The most straightforward explanation is that the information from the drift detectors appropriately and efficiently supported the adjustment of the self-labeling threshold. The only exception is the blind Fixed strategy that uses a very high confidence threshold. It cooperated effectively with AHT and RCD on our examined real-world datasets. The InvUnc strategy was able to significantly improve results only for RHT on real data streams.
More for free. Last but not least, we want to emphasize the most important fact: our hybrid framework can significantly reduce the cost of maintaining online classifiers that work with drifting streaming data. As we could notice, for the Spam data stream the AHT classifier using the active learning strategy without self-labeling was able to achieve nearly 90% accuracy only if 50% of objects were labeled. Applying the InvUnc strategy to this case provided a very similar performance with only 1% of data being annotated, while for the same budget all the active learning strategies were correctly classifying less than 30% of samples. Let us consider an illustrative example. It has been estimated that more than 500 million tweets are created every day [1] and, according to CrowdFlower's offer [2], annotation of 100 000 rows costs 1500 dollars every month. If one wanted to have 50% of these tweets labeled, this would cost 3.75 million dollars monthly, while for 1% it is 50 times less, so only 75 000 dollars monthly. This is only a theoretical reduction that self-labeling connected with active learning may provide, but it shows very clearly why the hybrid approach should be considered whenever better accuracy is required and increasing the budget is not feasible.
Summary and future works
In this paper, we have introduced a novel hybrid framework for learning from drifting data streams on a budget. In real-life scenarios, unlimited access to ground truth cannot be assumed, as a cost is connected with obtaining such information from a domain expert. Therefore, we discussed the set-up of a learning system under scarce access to labels. We have proposed a combination of information coming from active learning and self-labeling, in order to obtain more efficient usage of the very few available instances. Active learning allowed for selecting the proper instances for label queries, thus leading to the exploration of new concepts emerging from a data stream. These seeds were then utilized by a self-labeling module that offered exploitation of previously discovered structures at no additional cost. We developed two families of algorithms that relied only on classifier outputs or empowered them with additional information from a drift detection module. Seven algorithms were proposed in total, offering a choice between complexity and performance. We recommend applying the hybrid approach if results for all available classifiers are insufficient and one cannot increase the budget, or when a weak classifier cannot be replaced. Experimental analysis showed the usefulness of the proposed hybrid approach, especially for realistic scenarios with a highly limited budget. In such cases the inclusion of self-labeling allowed the classifier performance to be improved by increasing its competence over a discovered concept, while saving the small budget at hand for adapting to changes. We showed that the hybrid framework is a flexible wrapper, so it can work with different classifiers, including online ensembles.
We also observed that informed strategies are preferable to blind approaches, with the exception of the strategy that uses a high fixed threshold for self-labeling. The proposed hybrid solutions displayed excellent performance for real-life data streams.
The obtained results encourage us to continue our work on learning on a budget from drifting data streams. As a next step, we envision work on another semi-supervised approach, namely oversampling. It may be used to enhance the adaptation process by providing more labeled instances without additional cost. We suppose that such an approach may reinforce the construction of diversified online ensembles. | 2021-12-22T02:15:35.216Z | 2021-12-21T00:00:00.000 | {
"year": 2021,
"sha1": "c3d1ce4efd600d7b18fd049d9fa83d078dd862fd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c3d1ce4efd600d7b18fd049d9fa83d078dd862fd",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
256105046 | pes2o/s2orc | v3-fos-license | High-Field THz Source Centered at 2.6 THz
We present a semiconductor-based high-field terahertz (THz) source with a spectral peak at 2.6 THz. A peak field of 303 kV/cm is produced through optical rectification of a 0.57 mJ near-infrared (NIR) generation beam. A surface-etched grating allows a tilted-pulse-front phase matching geometry, optimizing THz generation >2 THz. This configuration has the potential to yield a THz peak field around 1 MV/cm when the ultrafast NIR laser delivers 5 mJ pulses. Our scheme enables the exploration of coherent control and nonlinear effects in the region between 2 THz and 4 THz.
Introduction
High-field terahertz (THz) sources [1] are widely applied to explore nonlinear THz properties of various materials including semiconductors [2], 2D materials [3], and gases [4]. These sources also allow experimentalists to access new regimes of high-harmonic generation [5], free electron acceleration [6], and coherent control [7][8][9]. Most research groups working on these topics have employed the tilted-pulse-front technique based on an ultrafast near-infrared (NIR) laser to generate high-field THz pulses by optical rectification inside a LiNbO3 nonlinear crystal [10,11]. Generally, this table-top technique provides a THz electric field on the order of several hundreds of kV/cm, with a record value of 6.3 MV/cm [12], featuring a spectrum which peaks around 1 THz and then gradually decreases in amplitude as frequencies approach the phonon resonance of LiNbO3 at 4.5 THz [13,14]. Modifications to this standard configuration have recently led to a more efficient generation of high-field THz pulses with spectral components up to 4 THz [15]. However, 98% of the pulse energy in this configuration is still contained within the frequency range below 2 THz, preventing most high-field applications requiring a spectrum centered at higher frequencies. Cooling the LiNbO3 crystal down to cryogenic temperatures can also extend the spectrum towards higher frequencies by reducing losses due to the phonon absorption tail. For example, this approach has enabled the generation of spectral lines peaking at 2.7 THz, with a peak field strength around 25 kV/cm, and at 4 THz, with a peak field strength of about 13 kV/cm, with a THz beam size around 1 mm² [16]. However, the use of cryogenic equipment complicates these experiments. There is therefore a general lack of a straightforward table-top experimental configuration able to generate high-field THz pulses with a spectrum centered at frequencies exceeding 2 THz. Such a system is crucial for providing experimentalists with access to a new range of phenomena such as phonon-assisted nonlinearities [2,17], coherent control of Bose-Einstein condensation in semiconductor microcavities [18] and saturable transitions in molecular gases [4]. Although other high-field THz generation techniques using air plasma or metallic spintronic emitters have reported peak fields above 1 MV/cm [19][20][21], the spectral bandwidth of these sources extends from 0.5 to 30 THz. This large bandwidth limits many nonlinear applications as the energy available to address a specific transition at a given frequency remains relatively weak [22]. Finally, organic materials, such as DAST and DSTMS, also display great potential for generating high-field THz up to 30 THz [23][24][25][26]. However, organic crystals typically have a damage threshold below 20 mJ/cm² [25][26][27], which is at least 60 times lower than that of semiconductor crystals, such as GaP [28,29]. This relatively low damage threshold may be a limiting factor for long-term stability at high incident power. In this work, we demonstrate a scheme to generate high-field THz pulses centered at 2.6 THz using a phase grating directly etched onto the surface of a nonlinear crystal to enable noncollinear phase-matched tilted-pulse-front THz generation. This technique has been proposed theoretically [30][31][32][33][34][35][36] and then demonstrated experimentally [37][38][39] with surface gratings on ZnTe and LiNbO3 crystals to achieve high THz generation efficiencies at frequencies <2 THz. In comparison to the standard tilted-pulse-front technique in
LiNbO3, this configuration eliminates imaging distortions of the diffracted generation pulse, hence improving THz beam quality as well as THz generation efficiency [30,33,37]. Recently, broadband THz generation was demonstrated from 0.1 to 6 THz with a phase grating etched on the surface of a 2 mm thick GaP crystal [39]. Here we use the same material, but we rely instead on a different geometry using a collimated 0.57 mJ NIR pulse to generate a THz pulse with a peak field reaching 303 kV/cm at a central frequency of 2.6 THz. More importantly, there are two indications that our configuration is able to yield much larger THz peak fields: (1) the NIR pulse energies used to generate THz radiation can be increased by two orders of magnitude before reaching the GaP damage threshold, and (2) we observe a quasi-linear relationship between the incident NIR pulse energy and the emitted THz amplitude, with a slight saturation onset only observed at the highest energies in our experiments.
Experiments
Fig. 1 shows a diagram of the experimental apparatus for high-field THz generation in a GaP crystal. This crystal is patterned with a surface phase grating to efficiently diffract the incident NIR beam and enable tilted-pulse-front phase-matching conditions [10,40]. The optical source is a commercial Yb:KGW regenerative amplifier system generating 265 fs pulses with a center wavelength of 1035 nm, a pulse energy of 1 mJ and a repetition rate of 3 kHz. The output laser beam is focused in air with a 1 m focal length lens. A 1 mm thick BK7 window is placed before the focus to broaden the NIR spectrum from a bandwidth of 3.5 THz to 7.2 THz through self-phase modulation (SPM). The NIR spectra of the pulse before and after the BK7 window are shown in Fig. 1(a). The laser beam is then guided through a set of chirped mirrors, providing a total dispersion of -3000 fs², to compensate for the positive dispersion in the SPM process and compress the pulse to ~80 fs in the time domain. The autocorrelation traces of the NIR pulses before and after the BK7 window and chirped mirrors are shown in Fig. 1(b). The NIR beam in the first arm is collimated onto a 1-mm thick (110)-oriented GaP semiconductor crystal with a surface-etched phase grating [39,41] to enable efficient THz generation by optical rectification. This NIR beam has a 1/e² diameter of 3.2 mm and a pulse energy of 0.57 mJ. A grating pitch of Λ = 1.6 µm, which yields a diffraction angle of 11.7° inside the GaP crystal, enables non-collinear phase-matching conditions leading to broadband THz generation from 0.5 to 6 THz with a geometry allowing both the incident NIR pulses and generated THz pulses to propagate along a direction normal to the crystal plane. In this geometry, the 1st order diffraction angle θd of the NIR pulse is also equal to its pulse-front-tilt angle [42]:

cos(θd) = ng(ωNIR) / n(ω4THz),

where ng(ωNIR) is the group index of the NIR generation beam centered at 1035 nm and n(ω4THz) is the phase index at 4 THz inside of the generation crystal. The grating filling ratio is 50%, and the target height modulation is 245 nm. This corresponds to an optical π-phase difference between optical light rays passing through the top and bottom sections of the grating, which reduces the 0th diffraction order because of destructive interference. In a transmission geometry, we measure 44% of the incident NIR power in both ±1 diffraction orders, while only 0.7% of the incident power remains in the 0th diffraction order. Note that the presence of the grating also creates an effective index layer at the air-GaP interface, reducing the Fresnel reflection coefficient by about 5% at the front crystal surface [41,42]. The generated THz radiation is collected by an off-axis gold parabolic mirror PM1, as shown in Fig. 1(c).
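A short numerical check of this phase-matching relation and of the grating equation is given below (in Python). The refractive-index values are our own assumed figures, chosen to be representative of GaP and consistent with the 11.7° angle quoted above; they are not values stated in the text.

    import math

    # Assumed GaP parameters (illustrative values, not taken from the paper)
    n_phase_nir = 3.19     # phase index at 1035 nm
    n_group_nir = 3.33     # group index at 1035 nm
    n_thz = 3.40           # phase index near 4 THz

    wavelength_um = 1.035  # NIR center wavelength
    pitch_um = 1.6         # grating pitch (Lambda)

    # First-order diffraction angle inside the crystal from the grating equation
    theta_diff = math.degrees(math.asin(wavelength_um / (n_phase_nir * pitch_um)))

    # Tilt angle required for velocity matching: cos(theta) = n_g(NIR) / n(THz)
    theta_tilt = math.degrees(math.acos(n_group_nir / n_thz))

    print(f"internal diffraction angle ~ {theta_diff:.1f} deg")   # about 11.7 deg
    print(f"required tilt angle       ~ {theta_tilt:.1f} deg")    # about 11.6 deg, close to the quoted value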
Results and Discussion
Fig. 2 shows the measured high-field THz transient and corresponding spectral amplitude, obtained with the Fourier transform.The spectral bandwidth extends up to 6.3 THz with a peak centered at 2.6 THz.The multi-cycle pulse is attributed to group velocity dispersion in the 1 mm-thick GaP generation crystal stretching the pulse in time.We evaluate the electric field strength of the THz pulse at the focus between PM3 and PM4 since this is a practical position to insert a sample in the setup.We evaluate the THz electric field ETHz with the following equation [43]: where A and B are the voltages on the PDs, λgat is the central wavelength of the NIR gating beam (1035 nm), r41 = 1 pm/V is the electro-optical coefficient of GaP at 1035 nm [44], n0 is the refractive index of the GaP at 1035 nm, L = 0.1 mm is the thickness of the GaP detection crystal, and ttot is the transmission coefficient taking into account the THz transmission through the GaP detection crystal, the Si wafer after PM4 and the Ge wafer after PM2.Considering this geometry, we obtain a peak field of 303 kV/cm at the focus of PM1.The dynamic range of our system reaches 80 dB at 2.6 THz and remains above 60 dB between 0.6 THz to 5.3 THz (Fig. 2b).This figure of merit is calculated by dividing the THz spectral intensity by the noise floor, which is measured by blocking the THz beam and then fitted to a model A*(1/f+B), where f is the frequency, and A and B are fitting parameters [45].To confirm the THz field strength, we also measure the THz power with a thermal detector.A Golay cell, calibrated using a blackbody source, measures a power corresponding to a THz pulse energy of 2.8 nJ immediately after the detection crystal.Considering the Fresnel transmission coefficients of the GaP detection crystal and the Si wafer, the THz pulse energy at the focus between PM3 and PM4 corresponds to 16.8 nJ.Based on the energy contained in the measured time-resolved transient, and the measured THz beam diameter of 317 µm (1/e 2 ), the Golay cell measurement corresponds to a peak field of 446 kV/cm, which is 1.5 times higher than the value measured with the EOS.This discrepancy of the THz pulse energy between the different measurement techniques is well known [46] and has also been observed in other work [43].We consider that the THz peak field calculated from the EOS measurement is a more reliable value.Finally, we look into the possibility to combine our configuration with a more powerful NIR source to achieve even higher THz peak fields.We first determine the crystal damage threshold to the incident NIR pulse, which is a critical parameter to allow the use of higher pulse energies.We perform this test at a wavelength of 1035 nm and a repetition rate of 50 kHz by placing a non-patterned and patterned GaP window at the focus of a 15 cm lens.For both samples, we only notice visible damage on the crystal accompanied by an abrupt drop in the generated THz signal when the incident power exceeds 0.8 W, which corresponds to a peak fluence of 5.6 J/cm 2 .For comparison purposes, the highest peak fluence used in this work is 14.2 mJ/cm 2 , which is a factor of 400 lower than this damage threshold.These results indicate that it is indeed possible to significantly increase the incident NIR pulse energy to enable the generation of higher THz peak fields.We also investigate the dependence of the THz peak field as a function of the NIR generation pulse energy ℰNIR.Fig. 
3(a) shows the THz signal as ℰNIR is varied from 0.06 mJ to 0.57 mJ.The experimental results are displayed along with two simple models: (1) a linear relationship (red dashed line): = aℰNIR, where the slope a = 658 kVcm -1 mJ -1 is related to the conversion efficiency, and (2) a modification of the first model (blue dashed curve): = aℰNIR / (1+bℰNIR), where b = 0.59 mJ -1 accounts for saturation effects, which can be caused by two-photon absorption [10].We observe that the generated THz peak amplitude is linear with ℰNIR until 0.2 mJ, which corresponds to a NIR peak fluence of 5.1 mJ/cm 2 .This saturation onset is 2.5 times higher than the one reported in previous work using the same excitation wavelength, but a non-patterned GaP crystal, and similar to the saturation onset observed with a LiNbO3 crystal in a tilted-pulse front configuration [47].Fig. 3(b) shows THz spectra generated with different NIR incident pulse energies, which all peak at 2.6 THz.However, we observe a gradual decrease of the THz spectral amplitude around 3.5 THz as ℰNIR increases from 0.06 mJ to 0.18 mJ.Similar THz spectral changes at high ℰNIR have also been observed previously [48], but further investigation is still required to fully understand this effect.Considering the high damage threshold of GaP and the saturation behavior of observed in our experiments, we predict that a NIR pulse energy of 5 mJ could be used to access a THz peak field of 886 kV/cm with the spectrum centered at 2.6 THz.
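To make the two calculations in this section concrete, the following sketch shows (i) how a peak field can be inferred from the balanced photodiode voltages, assuming the standard electro-optic sampling relation for a (110)-cut GaP detector (the exact equation cited from ref. [43] is not reproduced in the text), and (ii) how the linear and saturating models of Fig. 3(a) can be fitted. The material constants other than r41 and L, and the energy-scan data points, are illustrative assumptions rather than values measured in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

# --- (1) Peak-field estimate from balanced electro-optic sampling ---
# Assumes the standard GaP EOS relation (A - B)/(A + B) = sin(2*pi*n0^3*r41*L*E/lambda),
# rescaled by the total THz transmission t_tot of the optics between the focus and the
# detection crystal; this is a stand-in for the equation of ref. [43].
def field_from_eos(A, B, wavelength=1.035e-6, n0=3.2, r41=1e-12, L=1e-4, t_tot=0.5):
    """THz field (V/m) inferred from photodiode voltages A and B.
    r41 = 1 pm/V and L = 0.1 mm follow the text; n0 and t_tot are assumed values."""
    modulation = (A - B) / (A + B)
    return np.arcsin(modulation) * wavelength / (2 * np.pi * n0**3 * r41 * L * t_tot)

# --- (2) Energy scaling of the generated THz field, as in Fig. 3(a) ---
def linear_model(E_nir, a):
    return a * E_nir

def saturation_model(E_nir, a, b):
    return a * E_nir / (1 + b * E_nir)

# Illustrative (made-up) data points standing in for the measured energy scan.
E_nir = np.array([0.06, 0.1, 0.2, 0.3, 0.4, 0.57])          # mJ
E_thz = np.array([40.0, 65.0, 125.0, 180.0, 230.0, 303.0])  # kV/cm

(a_lin,), _ = curve_fit(linear_model, E_nir, E_thz)
(a_sat, b_sat), _ = curve_fit(saturation_model, E_nir, E_thz, p0=[650.0, 0.5])
print(f"linear slope a = {a_lin:.0f} kV cm^-1 mJ^-1")
print(f"saturation fit: a = {a_sat:.0f} kV cm^-1 mJ^-1, b = {b_sat:.2f} mJ^-1")
```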
Conclusion
We demonstrate a high-field THz system using collimated NIR pulses impinging on a GaP crystal with a surface phase grating to generate pulses with 303 kV/cm peak field centered at 2.6 THz.This peak field is confirmed by measurements performed with a calibrated Golay cell.We also show that our system operates significantly below the GaP damage threshold and close to a linear regime, allowing more powerful NIR sources to produce even higher peak fields, potentially reaching 1 MV/cm.Also, a laser source at a longer wavelength could be used to reduce the multi-photon absorption processes in GaP, which may help to reduce saturation effects observed in the THz generation process as well as increase the crystal damage threshold.Considering a tilted-pulse-front configuration with an optimal phase grating, this latter approach could, according to numerical models, produce peak fields reaching up to 17 MV/cm at a central frequency of 3 THz [36].Finally, although our experiment focuses exclusively on GaP as the THz generation crystal, the same optical configuration could be used to generate high-field THz in other materials with a surface phase grating to gain access to different spectral ranges or to increase THz generation efficiencies.This work will pave the way towards a new class of high-field THz sources able to access a spectral range departing from the conventional region below 2 THz and will enable novel nonlinear and coherent control experiments in condensed matter systems.
) with 1/2" diameter and 1/2" focal length.In the same figure, the subsequent parabolic mirrors, with a 2" diameter and 2" focal length (PM2-PM5), are arranged in a standard terahertz time-domain spectroscopy (THz-TDS) configuration.The gating pulse in the second arm is overlapped with the focused THz transient inside a 0.1 mm-thick 110-oriented GaP detection crystal to resolve the oscillating THz transient with electro-optical sampling (EOS) detection.
Fig. 1.(a) and (b) Spectra and corresponding autocorrelation traces of the NIR laser pulses measured at the laser output (black line) and after spectral broadening in BK7 and temporal compression with chirped mirrors (CMs) (red line).We observe a spectral broadening from 3.5 THz to 7.2 THz (FWHM) and a reduced pulse duration from 265 fs to 81 fs (FWHM).The reference frequency ν0 corresponds to the center wavelength of 1035 nm.(c) Schematic of the highfield THz setup.The focused NIR pulses are first passed through a 1-mm thick BK7 window to broaden the spectrum by self-phase modulation.A THz-TDS scheme is then used to generate and detect THz radiation where the NIR generation beam is collimated onto the THz generation crystal.The system is operated in a dry-air purged environment.Optical components to build the setup are labelled above as follows: L1: lens, f = 100 cm; L2: lens, f = 70 cm; CMs: -250 fs 2 each; BS: beamsplitter; TS: translation stage; PG on 1 mm GaP: 110-oriented 1 mm-thick GaP crystal with a phase grating on the incident surface; Ge: germanium wafer; Si: silicon wafer; PM: Parabolic mirror; 0.1 mm GaP: 110-oriented 0.1 mm-thick GaP crystal; L3: lens of a focal length of 5 cm; λ/4: quarter-wave plate; WP: Wollaston prism; PD: photodetector.
Fig. 2. (a) Time-resolved high-field THz transient.The inset shows the corresponding spectral amplitude calculated with the Fourier transform.(b) THz spectral intensity (black curve) and noise floor (blue curve).The dashed blue line is the noise floor fitted with the model A*(1/f+B), where f is the frequency and both A and B are fitting parameters.The dynamic range (red curve) is calculated based on [45].
Fig. 3. (a) Measured THz peak electric field strength versus NIR pump pulse energy ℰNIR. The red dashed line is a linear fit while the blue dashed line corresponds to a saturation model: aℰNIR /(1+bℰNIR), where both a and b are fitting parameters. (b) Corresponding THz spectra measured with NIR generation pulses with different pulse energy. We normalize these measurements to the maximum amplitude obtained with a 0.57 mJ NIR generation pulse.
Robert’s Intragastric Alcohol-Induced Gastric Lesion Model as an Escalated General Peripheral and Central Syndrome, Counteracted by the Stable Gastric Pentadecapeptide BPC 157
We redefined Robert’s prototypical cytoprotection model, namely the intragastric administration of 96% alcohol in order to generate a general peripheral and central syndrome similar to that which occurs when major central or peripheral veins are occluded in animal models. With this redefinition, we used Robert’s model to examine the cytoprotective effects of the stable gastric pentadecapeptide BPC 157. The intragastric administration of alcohol induced gastric lesions, intracranial (superior sagittal sinus) hypertension, severe brain swelling and lesions, portal and vena caval hypertension, aortal hypotension, severe thrombosis, inferior vena cava and superior mesenteric vein congestion, azygos vein failure (as a failed collateral pathway), electrocardiogram disturbances, and heart, lung, liver and kidney lesions. The use of BPC 157 therapy (10 µg/kg or 10 ng/kg given intraperitoneally 1 min after alcohol) counteracted these deficits rapidly. Specifically, BPC 157 reversed brain swelling and superior mesenteric vein and inferior vena caval congestion, and helped the azygos vein to recover, which improved the collateral blood flow pathway. Microscopically, BPC 157 counteracted brain (i.e., intracerebral hemorrhage with degenerative changes of cerebral and cerebellar neurons), heart (acute subendocardial infarct), lung (parenchymal hemorrhage), liver (congestion), kidney (congestion) and gastrointestinal (epithelium loss, hemorrhagic gastritis) lesions. In addition, this may have taken place along with the activation of specific molecular pathways. In conclusion, these findings clarify and extend the theory of cytoprotection, offer an approach to its practical application, and establish BPC 157 as a prospective cytoprotective treatment.
Introduction
This study attempts to clarify and extend the theory of cytoprotection, offer an approach to its practical application, and establish a prospective cytoprotective treatment. Cytoprotection is one of the most important concepts in gastroenterology and pharmacology [1][2][3]. The term and concept of cytoprotection were pioneered by Robert et al. in 1979 in response to the noxious gastric effect of the application of intragastric alcohol and non-steroidal anti-inflammatory drugs (NSAIDs) [1][2][3]. Robert's breakthrough in gastroenterology was cytoprotection, which he defined in general terms as the counteraction of the lesions arising from direct detrimental contact with a noxious agent. The specific model involves the counteraction of intragastric alcohol-induced gastric lesions [1][2][3].
However, this eventuality has not been investigated, nor appropriately combined with Robert's cytoprotection [1][2][3]8], in the determination of whether BPC 157 therapy could counteract the peripheral and central deficits mentioned above.
Conceptually, screening the ability of compounds to modulate multiorgan pathology is consistent with Robert's original proposal, which examined direct injury to the epithelial cells in the stomach by various noxious agents (i.e., intragastric absolute alcohol) [1][2][3] and led to the smooth extension of cytoprotective effects to other epithelia (i.e., wound-healing effects). In other words, a cytoprotective agent would be released and have a marked range of protective effects, including organ protection [1][2][3]. Multiorgan pathology induced by intragastric alcohol has been shown in experimental animals [18][19][20] and in patients [21][22][23][24].
The present study is the first to examine the treatment options for the full occlusive syndrome, including peripheral and central deficits similar to those described in rats with major vessel occlusion, that occurs following the intragastric administration of absolute alcohol (epithelial and endothelial injuries and thrombosis), as in Robert's prototypical cytoprotective model [11][12][13][14][15][16][17]. We aimed to determine whether the administration of BPC 157 activated bypassing collateral pathways to provide cytoprotection. We hypothesized that, in accordance with its previous therapeutic effect in occlusive syndromes [11][12][13][14][15][16][17], the administration of BPC 157 counteracts gastric lesions, peripheral (portal and vena caval hypertension, aortal hypotension) and central (i.e., brain swelling and intracranial hypertension) disturbances and organ lesions. In addition, this may take place along with the activation of the specific molecular pathways, i.e., eNOS, mTOR and VEGFa, known to interact with BPC 157 and the administration of alcohol [7,10,58,59].
Finally, our findings could demonstrate the benefits of this model for screening potentially cytoprotective compounds.
Animals
This study was conducted with 12-week-old, 200 g male albino Wistar rats, randomly assigned at six rats/group/interval. The rats were bred in-house at the Pharmacology Animal Facility, School of Medicine, Zagreb, Croatia. The animal facility was registered by the Directorate of Veterinary (Reg. No: HR-POK-007). The laboratory rats were acclimated for 5 days and then randomly assigned to their respective treatment groups. They were housed in polycarbonate cages under conventional laboratory conditions at 20-24 • C, with a relative humidity of 40-70% and a noise level of 60 dB. Each cage was identified by dates, the study number, the group, the dose and the number of the animal. Fluorescent lighting provided illumination 12 h per day. A standard good laboratory practice (GLP) diet and fresh water were provided ad libitum. The animals' care was in compliance with the standard operating procedures (SOPs) of the Pharmacology Animal Facility and the European Convention for the Protection of Vertebrate Animals used for Experimental and other Scientific Purposes (ETS 123).
This study was approved by the local Ethics Committee. The ethical principles of the study complied with the European Directive 010/63/E, the Law on Amendments to the Animal Protection Act (Official Gazette 37/13), the Animal Protection Act (Official Gazette 135/06), the Ordinance on the protection of animals used for scientific purposes (Official Gazette 55/13), the Federation of European Laboratory Animal Science Association (FELASA)'s recommendations and the recommendations of the Ethics Committee of the School of Medicine, University of Zagreb. The experiments were assessed by observers blinded to the treatment.
Drugs and Experimental Protocol
The medication was administered as described in previous studies [11][12][13][14][15][16][17], without the use of a carrier or a peptidase inhibitor for stable gastric pentadecapeptide BPC 157, a partial sequence of the human gastric juice protein BPC, which is freely soluble in water at pH 7.0 and in saline. The BPC 157 (GEPPPGKPADDAGLV, with a molecular weight 1419 Da; Diagen, Slovenia) was prepared as a peptide with 99% high-performance liquid chromatography (HPLC) purity, with the peptide 1-des-Gly as the main impurity. The dose and application regimens were the same as described in previous studies [11][12][13][14][15][16][17]. Briefly, deeply anaesthetized rats received 1 mL of 96% alcohol delivered intragastrically. One minute after the injection, BPC 157 (10 µg/kg or 10 ng/kg), or an equal volume of saline (5 mL/kg), was administered intraperitoneally. The rats were euthanized 1, 5, 15 or 30 min following the injection.
Gross Lesion Presentation
The gross lesions were recorded in deeply anaesthetized laparatomized rats, with a camera attached to a VMS-004 Discovery Deluxe USB microscope (Veho, Dayton, OH, USA). Hemorrhagic lesions in the stomach were assessed as the percentage of the total area of the glandular stomach at 1 min after alcohol administration and following medication, at 1, 5, 15 and 30 min; the rats were then euthanized. The other investigated features included the gross presentation of the brain, the superior mesenteric vein, the inferior vena cava and the heart, as well as a thrombus assessment.
Assessment of the Change in the Brain, Vein or Heart Volume Proportional to the Change in the Brain, Vein or Heart Surface Area
As described in previous studies [13][14][15][16][17], we recorded the presentation of the brain, peripheral veins (superior mesenteric vein, inferior vena cava and azygos vein) and heart of deeply anaesthetized rats that had undergone laparotomy or complete calvariectomy, using a camera attached to a VMS-004 Discovery Deluxe USB microscope (Veho, Dayton, OH, USA). This was performed before intragastric alcohol administration to the healthy rats, then 1 min after the introduction of 1 mL of 96% alcohol in the stomach, before therapy initiation (1 min injury time), and 1, 5, 15 and 30 min after therapy (saline or BPC 157); the rats were then euthanized. The border of the brain, vein or heart in the photographs was marked using ImageJ software (National Institutes of Health, Bethesda, MD, USA). Next, the surface area (in pixels) of the brain, vein or heart was measured using a measuring function. This was performed with the brain, vein or heart photographs at each of the above-mentioned time-points for both the control and alcohol-treated animals. In the rats administered alcohol, the brain, vein or heart area before application was marked as 100%, and the ratio of each subsequent brain, vein or heart area relative to the first area was calculated as A2/A1. Starting from the square-cube law, Equations (1) and (2), an equation for the change in the brain, vein or heart volume proportional to the change in the brain, vein or heart surface area (6) was derived. For expressions (1)-(5), an arbitrary one-dimensional length l on the photograph was defined (e.g., the rostro-caudal length of the brain, or any arbitrary length of a vein or the heart). It was used only to define the one-dimensional proportion (l2/l1) between two observed brains, veins or hearts and as an inter-factor (and therefore not measured) for deriving the final expression (6). The procedure was as follows:
(1) A2 = A1 (l2/l1)² (square-cube law),
(2) V2 = V1 (l2/l1)³ (square-cube law),
(3) A2/A1 = (l2/l1)², from (1), after dividing both sides by A1,
(4) l2/l1 = (A2/A1)^(1/2), from (3), after taking the square root of both sides,
(5) V2/V1 = (l2/l1)³, from (2), after dividing both sides by V1, and
(6) V2/V1 = (A2/A1)^(3/2), after incorporating (4) into (5).
This measuring procedure, followed by the calculation of the volume ratios, was performed separately for the brain, veins and heart.
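A minimal sketch of the volume-ratio calculation in expression (6); the pixel counts below are hypothetical stand-ins for the actual ImageJ measurements.

```python
def volume_ratio_from_areas(area_before_px, area_after_px):
    """Volume change inferred from the traced surface areas (in pixels) of the brain,
    vein or heart, using expression (6): V2/V1 = (A2/A1)**(3/2)."""
    return (area_after_px / area_before_px) ** 1.5

# Example: a traced brain outline growing from 120,000 px to 150,000 px (hypothetical values)
ratio = volume_ratio_from_areas(120_000, 150_000)
print(f"relative volume = {ratio * 100:.0f}% of the pre-alcohol volume")  # ~140%
```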
Brain swelling was recorded in separate rats 15 min after complete calvariectomy. Briefly, 6 burr holes were drilled in three horizontal lines, all of them medial to the superior temporal lines and temporalis muscle attachments. The two rostral burr holes were placed just basally from the posterior interocular line, the two basal burr holes were placed just rostrally to the lambdoid suture (and the transverse sinuses) on both sides, respectively, and the middle two burr holes were placed in the line between the basal and rostral burr holes.
Superior Sagittal Sinus, Portal Vein, Vena Caval and Abdominal Aortal Pressure Recording
Recordings were made in the deeply anaesthetized rats with a cannula (BD Neoflon™ Cannula, BD Switzerland, Eysins, Switzerland), connected to a pressure transducer (78534C MONITOR/TERMINAL; Hewlett Packard, Palo Alto, CA, USA), which was inserted into the superior sagittal sinus, the portal vein, the inferior vena cava and the abdominal aorta at the level of the bifurcation before the administration of alcohol 1 min after alcohol application and 1, 5, 15 and 30 min after therapy (saline or BPC 157). Each recording lasted 1 min. For the superior sagittal sinus pressure recording, we made a single burr hole in the rostral part of the sagittal suture, above the superior sagittal sinus, and cannulated the anterior portion of the superior sagittal sinus using Braun intravenous cannulas. We then laparatomized the rats in order to record portal vein, inferior vena caval and abdominal aortal pressure.
Of note, the normal rats exhibited a superior sagittal sinus pressure of −24 to −27 mmHg and a portal pressure of 3-5 mmHg, which was similar to that of the inferior vena cava, although it was at least 1 mmHg higher in the portal vein. By contrast, the abdominal aorta blood pressure was 100-120 mm Hg at the level of the bifurcation [11][12][13][14][15][16][17].
ECG Recording
ECGs were recorded continuously in the deeply anaesthetized rats for all three main leads by positioning stainless steel electrodes on all four limbs using an ECG monitor with a 2090 programmer (Medtronic, Minneapolis, MN, USA) connected to a Waverunner LT342 digital oscilloscope (LeCroy, Spring Valley Village, NY, USA) at 15 min, 24 h or 48 h after ligation. This arrangement enabled precise recordings, measurements and analysis of the ECG parameters at the level of the bifurcation [11][12][13][14][15][16][17].
Tissue Preparation
The stomach (from grossly intact tissue), brain, lungs, liver, kidneys and heart were rapidly removed and fixed in 10% neutral buffered formalin at room temperature for 24 h. Tissue blocks were embedded in paraffin, sectioned at 4 µm, stained with hematoxylin and eosin (H&E) and evaluated by light microscopy using semiquantitative scoring.
Brain Histology
As described [13][14][15][16][17], two coronal sections of each brain were prepared according to NTP-7, Levels 3 and 6, considering the neuroanatomic subsites present in certain brain sections [60]. At NTP-7 Level 3, we observed the areas of the fronto-parietal cortex and hippocampus. At NTP-7 Level 6, we analyzed the cerebellar cortex. Brain coronal blocks were embedded in paraffin, sectioned at 1 µm, stained with H&E and evaluated by light microscopy using neuropathological scoring [61]. The number of dark neurons in the temporal cortex and in the hippocampus was counted in an area of 26,406 µm 2 , with 80 measurements for each sample of cortex and hippocampus. The dark neurons were subjected to hypoxia; they featured eosinophilic cytoplasm, pyknotic nuclei and loss of Nissl substance.
Lung Histology
We used a scoring system to grade the degree of lung injury, including observations of focal thickening of the alveolar membranes, congestion, pulmonary edema, intra-alveolar hemorrhage, interstitial neutrophil infiltration and intra-alveolar neutrophil infiltration. Each feature was assigned a score from 0 to 3, based on its absence (0), or presence to a mild (1), moderate (2) or severe (3) degree. A final histology score was determined [13][14][15][16][17]62].
Renal, Liver and Heart Histology
The assessment of renal injury was based on the degeneration of Bowman's space, glomeruli and proximal and distal tubules, vascular congestion, and interstitial edema. The criteria for liver injury were the vacuolization of hepatocytes and pyknotic hepatocyte nuclei, the activation of Kupffer cells and the enlargement of sinusoids. Each specimen was scored using a scale from 0 to 3 (0, none; 1, mild; 2, moderate; 3, severe) for each criterion [13][14][15][16][17]63]. The myocardium was graded for the severity of necrosis exhibited in the ventricles. The mean value of their scores is presented. The pathological criteria for grading the severity of necrosis were: score 1 (mild), one or two small foci; score 2 (slight), several small foci; score 3 (moderate), multiple small foci or several large foci; score 4 (severe), multiple large foci or diffuse area of necrosis [13][14][15][16][17]64].
Gene Expression Analysis
To illustrate the possible involvement of pathways, gene expression analysis assessment (Table 1) was performed on the rats at 5 min after intragastric alcohol administration, and intraperitoneal administration of 5 mL/kg of saline or 10 ng/kg of BPC 157. The total RNA was extracted from different tissues, rapidly dissected, snap-frozen in liquid nitrogen and stored at −80 • C. The tissues were homogenized using a Bio-Gen PRO200 homogenizer (PRO Scientific, Willenbrock Rd, Oxford, CT, USA) in 1000 µL of TRIzol (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA), and RNA extraction was performed using a TRIzol-based reagent method according to the manufacturer's instructions.
A High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Thermo Fisher Scientific, Waltham, MA, USA) was used to perform reverse transcription following the manufacturer's instructions and using a ProFlex PCR System machine (Applied Biosystems, Thermo Fisher Scientific, Waltham, MA, USA).
TaqMan Gene Expression Assays (Applied Biosystems, Thermo Fisher Scientific, Waltham, MA, USA) with a TaqMan Gene Expression Master Mix were used for the gene expression analysis of selected genes (Table 1). Quantitative PCR was carried out in duplicate for every sample. A Cobas z 480 instrument (Hoffmann-La Roche Ltd., Basel, Switzerland) was used to perform qPCR under the following conditions: 2 min at 50 • C, 10 min at 95 • C, 45 cycles of 15 sec at 95 • C and 1 min at 60 • C.
Actb was chosen as a reference gene for the normalization of the Nos3, Mtor and Vegfa gene expression data.
The difference in gene expression between the treated and non-treated samples was analyzed using the formula 2^−ΔΔCt, where ΔΔCt is the difference between the ΔCt of the treated sample and the ΔCt of the non-treated sample. The results were expressed as fold change and as percentages. Fold change values lower than 70% indicated decreased gene expression in BPC-157-treated animals (downregulation), fold change values between 70% and 130% were considered as biological variability (no change in gene expression), and fold change values higher than 130% indicated increased gene expression in BPC-157-treated animals (upregulation).
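A minimal sketch of the 2^−ΔΔCt calculation and the 70%/130% classification described above; the Ct values are hypothetical and only illustrate the arithmetic.

```python
def fold_change(delta_ct_treated, delta_ct_control):
    """2^-ddCt fold change, where each dCt is Ct(target) - Ct(Actb reference)."""
    ddct = delta_ct_treated - delta_ct_control
    return 2 ** (-ddct)

def classify(fc_percent):
    """Apply the 70% / 130% thresholds used in the text."""
    if fc_percent < 70:
        return "downregulated"
    if fc_percent > 130:
        return "upregulated"
    return "no change (biological variability)"

# Hypothetical Ct differences for one gene in one tissue (not measured data):
dct_bpc157 = 6.1   # Ct(Nos3) - Ct(Actb) in a BPC 157-treated sample
dct_saline = 5.4   # Ct(Nos3) - Ct(Actb) in a saline-treated sample
fc = fold_change(dct_bpc157, dct_saline) * 100
print(f"fold change = {fc:.0f}% -> {classify(fc)}")  # ~62% -> downregulated
```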
Statistical Analysis
The statistical analysis was performed by using the parametric one-way analysis of variance (ANOVA) with the post hoc Newman-Keuls test or the non-parametric Kruskal-Wallis test, followed by the Mann-Whitney U test to compare the groups. The values were presented as the mean ± standard deviation (SD) or as the minimum, median and maximum. To compare the frequency difference between the groups, the chi-square test or Fischer's exact test was used. A value of p < 0.05 was considered statistically significant.
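For illustration, the non-parametric part of this analysis could be run as follows; the scores are hypothetical lesion scores (0-3) for six rats per group, not data from this study.

```python
from scipy import stats

# Kruskal-Wallis across groups, followed by a pairwise Mann-Whitney U test,
# mirroring the non-parametric workflow described above (hypothetical data).
control = [3, 3, 2, 3, 3, 2]
bpc157_ug = [1, 0, 1, 1, 0, 1]
bpc157_ng = [0, 1, 1, 0, 1, 1]

h_stat, p_kw = stats.kruskal(control, bpc157_ug, bpc157_ng)
u_stat, p_mw = stats.mannwhitneyu(control, bpc157_ug, alternative="two-sided")
print(f"Kruskal-Wallis p = {p_kw:.4f}; control vs. 10 ug/kg Mann-Whitney p = {p_mw:.4f}")
```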
Results
We found that an assault on the stomach (96% alcohol administered intragastrically) produced gastric lesions and rapidly escalated to a generalized syndrome with peripheral and central deficits similar to those that have been observed when major vessels are occluded in rats [11][12][13][14][15][16][17]. This syndrome was completely counteracted by BPC 157 therapy, using either the ng/kg or µg/kg regimen. A likely interpretation is that BPC 157 administration may have extended the innate cytoprotective effect, particularly the vascular effect that is rapidly activated by intragastric alcohol exposure. The beneficial effects then extended peripherally and centrally.
Stomach
Alcohol insult to the stomach produced large gross hemorrhagic lesions and severe pathology in the stomach (i.e., mucosal surface erosion, even in areas that were macroscopically intact) (Figure 1a-d). In the stomachs of the rats that received BPC 157, the lesions were markedly attenuated, and in the areas that were macroscopically intact, the microscopic presentation showed no gross changes and no microscopic congestion of the stomach mucosa (Figure 1a-g).
Liver
Within 1 min of intragastric alcohol administration, there was moderate congestion in the liver (Figure 2a,b). After 5 min, there was further progression of liver congestion. After 15 min, we noted prominent congestion. After 30 min, there was prominent congestion and a ballooning of hepatocytes in zone 3 of the liver lobules (Figure 2a,c). For the rats that received BPC 157, in the 30 min post-injury period, there were no changes found in the liver (Figure 2a,d,e).
Kidney
Alcohol insult to the stomach led to kidney lesions within 1 min, moderate congestion and progression after 5 min and prominent congestion after 15 and 30 min (Figure 3a-c). In the rats treated with BPC 157, there were no changes in the kidneys 30 min after treatment (Figure 3a,d,e). Stomach lesions (% of total glandular stomach area). Controls (white bars) with the administered 1 mL of the 96% alcohol into the stomach showed marked hemorrhagic lesions within the stomach, with an ascending course from the first minute until the end (1, 5, 15, and 30 min following therapy, which was administered at 1 min following the introduction of 96% alcohol in the stomach). The course was markedly counteracted with BPC 157 therapy (10 µg (light gray bars) or 10 ng/kg (dark gray bars) IP). Six rats/group/interval. * p < 0.05, at least vs. the corresponding control. (b,c). Histological (HE, ×100, scale bar 200 µm) presentation of stomach mucosa. One mL of 96% alcohol was applied directly into the rats' stomachs, and saline 5 mL/kg IP was administered at 1 min following the introduction of 96% alcohol in the stomach. Microscopically (specimens taken at the areas grossly intact), the control rats presented marked congestion and erosive gastritis, as illustrated in (b) (1 min; mucosal surface erosions marked), and c (30 min; mucosal surface erosions) with full brace). (d). Alcohol lesions in controls at 15 min (camera attached to a VMS-004 Discovery Deluxe USB microscope (Veho, Dayton, OH, USA)). (e,f). Histological (HE, ×100, scale bar 200 µm) presentation of stomach mucosa. One mL of 96% alcohol was applied directly into the rats' stomachs, and BPC 157 therapy (10 µg or 10 ng/kg) IP was administered at 1 min following the introduction of 96% alcohol in the stomach, with marked attenuation at 1 min (e), and at 30 min (f). (g). Alcohol lesions markedly attenuated in BPC-157-treated rats at 15 min (camera attached to a VMS-004 Discovery Deluxe USB microscope (Veho, Dayton, OH, USA)). . Liver lesions, scored 0-3, assessed in the rats with 1 mL of the 96% alcohol introduced in their stomachs, at 1, 5, 15, and 30 min following therapy. Therapy (BPC 157 (10 µg (light gray bars) or 10 ng/kg (dark gray bars); saline (5 mL/kg (control, white bars)), IP) was administered at 1 min following the introduction of 96% alcohol in the stomach. The course was markedly counteracted with BPC 157 therapy. Six rats/group/interval. * p < 0.05 at least vs. the corresponding control.
(b-e). Histology of the liver parenchyma (b-e). After the immediate period following the introduction of 96% alcohol in the stomach, the controls presented with congestion (dilatation of central veins, sinusoids and blood vessels in portal tracts), which progressed in the liver tissue (b (1 min), c (30 min), arrows). The rats treated with BPC 157 showed no changes in their liver tissues (d (1 min), e (30 min)); (HE, ×100, scale bar 200 µm). The pictures obtained using BPC 157 10 ng/kg IP are representative of the observations for both doses.
Figure 3. (a). Kidney lesions, scored 0-3, assessed in the rats with 1 mL of 96% alcohol introduced in their stomachs, at 1, 5, 15, and 30 min following therapy. Therapy (BPC 157 (10 µg (light gray bars) or 10 ng/kg (dark gray bars); saline (5 mL/kg (control, white bars)), IP) was administered at 1 min following the introduction of 96% alcohol in the stomach. The course was markedly counteracted with BPC 157 therapy. Six rats/group/interval. * p < 0.05 at least vs. the corresponding control. (b-e). Histology of the renal parenchyma. After the immediate period following the introduction of 96% alcohol in the stomach, the controls presented with congestion, and its progression in the renal tissues presented with dilated and congested small, medium and large blood vessels, as well as glomeruli
Lung
In the control rats, there was moderate congestion in the lungs with hemorrhage in the lung parenchyma 1 min after intragastric alcohol administration (Figure 4a,b). After 5 min, there was further progression of tissue congestion with persistent hemorrhage in the lung parenchyma. By 15 and 30 min, we noted prominent congestion and hemorrhage in the lung parenchyma (Figure 4a,c). In the rats treated with BPC 157, there were no changes found in the lung after 1 and 5 min (Figure 4a,d). After 15 and 30 min, only mild congestion was found in the lung (Figure 4a,e).
Heart
Heart Lesions and Dilatation and ECG Recording Intragastric alcohol rapidly produced heart dilatation and lesions and ECG disturbances (Figure 5a-k). In the control rats, within 1 min of intragastric alcohol administration, there was moderate congestion in the heart (Figure 5b,k), and this deficit progressed over time (Figure 5c,k). After 5 min, there was tissue congestion and persistent hemorrhage. After 15 min, we noted prominent congestion and, due to low aortic pressure, passive congestion occurred in the myocardium, with acute subendocardial infarct. After 30 min, we found prominent congestion and acute subendocardial infarct in the control group ( Figure 5k). The rats treated with BPC 157 showed no changes in the heart (blood vessels marked with arrows) (Figure 5j (1 min) and Figure 5i (30 min)). In the rats treated with BPC 157, there were no gross changes in the heart after 30 min (Figure 5h). BPC 157 counteracted heart dilatation ( Figure 5a); this proportional change in the heart surface area was used to assess the development of heart failure.
One minute after the intragastric introduction of alcohol, ECG recordings showed marked tachycardia with prolonged PQ and QTc intervals (Figure 5e-g). Furthermore, along with the rapid appearance of heart lesions, the rats presented ST elevation that was highest at the earliest time point (1.3 ± 0.1 at 1 min) and remained high (0.7 ± 0.1) until the end of the experiment (30 min). Treatment with BPC 157 completely counteracted the ST elevation (p < 0.05 compared with saline-treated rats). The only abnormality was peaked T waves in the third limb lead at all time points.
Blood Vessels
A proportional change in the vessel was used to assess the development of peripheral vessel failure after intragastric alcohol administration (Figure 6a-g). There were rapidly induced peripheral vessel disturbances: the inferior vena cava and superior mesenteric vein volumes increased (Figure 6a,c,e) and the azygos vein completely failed, with a volume close to zero (Figure 6b,d). Therapy with BPC 157 rapidly attenuated these disturbances. The superior mesenteric vein and the inferior vena cava appeared normal (Figure 6c,e,g) and the complete failure of the azygos vein was reversed (Figure 6d,f). These benefits may have been due to the activated collateral pathways and the re-established blood flow.
Portal, Superior Mesenteric, Vena Caval, Abdominal Aortal and Superior Sagittal Sinus Pressure Recording
The intragastric administration of 96% alcohol led to severe portal and vena caval hypertension, with the former more pronounced than the latter, and aortal hypotension that persisted throughout the entire experimental period. In addition, the normal (negative) pressure in the superior sagittal sinus was increased (positive; Figure 7a). Therapy with BPC 157 rapidly resolved the severe portal and vena caval hypertension and aortal hypotension, and the increased (positive) pressure in the superior sagittal sinus was immediately restored to the negative pressure (Figure 7a).
Figure 5.
Heart lesions. (a) Heart, volume (% of volume in healthy). Controls (white bars) with 1 mL of 96% alcohol introduced in the stomach showed increased heart dilatation from the first minute until the end (1, 5, 15, and 30 min following therapy, which was given at 1 min following 96% alcohol instillation in the stomach (intragastric alcohol before medication)). The course was markedly counteracted with BPC 157 therapy, which consisted of 10 µ g (light gray bars) or 10 ng/kg of 96% alcohol introduced in the stomach showed increased heart dilatation from the first minute until the end (1, 5, 15, and 30 min following therapy, which was given at 1 min following 96% alcohol instillation in the stomach (intragastric alcohol before medication)). The course was markedly counteracted with BPC 157 therapy, which consisted of 10 µg (light gray bars) or 10 ng/kg ((dark gray bars) IP). Six rats/group/interval. * p < 0.05 at least vs. the corresponding control, # p < 0.05 at least vs. normal healthy (100%). (b,c). Histology of the myocardium in saline-treated rats. Afyer the immediate period following the introduction of 96% alcohol in the stomach, the controls presented with congestion of the myocardium (b (1 min), HE, ×100, scale bar 200 µm), progressing to the subendocardial infarct (necrotic myocytes (arrows)) (c (30 min), HE, ×200, scale bar 200 µm). (d). Gross presentation of the heart in rats that received 1 mL of 96% alcohol introduced directly into the stomach, and saline 5 mL/kg IP, presentation at 30 min thereafter, before euthanasia (camera attached to a VMS-004 Discovery Deluxe USB microscope (Veho, Dayton, OH, USA)). ECG changes (e-g). PQ-(a), QT-intervals (b) and frequency (c) in the rats with who had 1 mL of 96% alcohol introduced in the stomach, at 1, 5, 15, and 30 min following therapy. Therapy (BPC 157 10 µg (light gray bars) or 10 ng/kg (dark gray bars); saline (5 mL/kg (control, white bars)), IP) was administered at 1 min following the introduction of 96% alcohol in the stomach. The course was markedly counteracted with BPC 157 therapy. Six rats/group/interval. * p < 0.05 at least vs. the corresponding control. (h). Gross presentation of the heart in the rats that received 1 mL of 96% alcohol introduced directly into the stomach, and BPC 157 10 ng/kg IP at 1 min following alcohol, presentation at 30 min thereafter, before euthanasia (camera attached to a VMS-004 Discovery Deluxe USB microscope (Veho, Dayton, OH, USA)). (i,j). Histology of the myocardium in rats treated with BPC 157. These rats showed no changes in the heart (j (1 min, HE, ×100, scale bar 200 µm), (i) (30 min, HE, ×200, scale bar 200 µm)). (k). Heart necrosis, scored 0-3, assessed in the rats that had 1 mL of 96% alcohol introduced in the stomach, at 1, 5, 15, and 30 min following therapy. Therapy (BPC 157 10 µg (light gray bars) or 10 ng/kg (dark gray bars); saline (5 mL/kg (control, white bars), IP) was administered at 1 min following the introduction of 96% in the stomach. The course was markedly counteracted with BPC 157 therapy. Six rats/group/interval. * p < 0.05 at least vs. the corresponding control. The pictures obtained using BPC 157 10 ng/kg IP are representative of the observations for both doses.
Thrombosis
In the rats that received intragastric alcohol, thrombosis rapidly appeared, largely in the periphery, especially in the inferior vena cava, and then in the portal vein, the superior mesenteric vein (Figure 7b), the hepatic and superior mesenteric artery and the abdominal aorta (Figure 7c). It eventually progressed centrally and was visible in the superior sagittal sinus (Figure 7b). Treatment with BPC 157 markedly counteracted and reversed the thrombosis presentation (Figure 7b,c).
Brain Damage
The rats without BPC 157 treatment exhibited brain swelling (Figure 8a,c,g). Without this counteraction (Figure 8a,c,g), there was a > 120% increase in the brain volume relative to the brain surface area compared with healthy rats (Figure 8a,g). In contrast to this intragastric alcohol-induced brain swelling (Figure 8c), the prominent effects of BPC 157 application appeared quite rapidly (Figure 8a,d-f). Azygos vein (red arrow) (AV), failed (white letters) ((b), controls) in rats that received 1 mL of 96% alcohol introduced directly into the stomach, and saline 5 mL/kg IP (b) (left) at 1 min following alcohol. Presentation at 30 min thereafter, before euthanasia (camera attached to a VMS-004 Discovery Deluxe USB microscope (Veho, Dayton, OH, USA)). (c-e). Inferior caval vein (c), azygos vein (d) and superior mesenteric vein (e) relative volume (% of volume in healthy rats). Controls (white bars) who had 1 mL of 96% alcohol introduced in the stomach showed marked congestion of the inferior caval vein and superior mesenteric vein, and failed azygos vein presentation from the first minute until the end (1,5,15, and 30 min following therapy, which was given at 1 min following the introduction of 96% alcohol in the stomach (intragastric alcohol before medication)). The course was markedly counteracted with BPC 157 therapy (10 µg (light gray bars) or 10 ng/kg (dark gray bars)) IP). Six rats/group/interval. * p < 0.05 at least vs. the corresponding control, # p < 0.05 at least vs. normal healthy rats (100%). (f,g). Gross presentation of the vessels (f,g) in rats treated with BPC 157. (f). Azygos vein (red arrow) (AV), functioning (black letters). (g). Superior mesenteric vein (full black arrow) and inferior caval vein (dashed black arrows) (SMV, ICV), no congestion (black letters), in rats that received 1 mL of 96% alcohol introduced directly into the stomach, and BPC 157 10 ng/kg IP (f,g) (right) at 1 min following alcohol. Presentation at 30 min thereafter, before euthanasia (camera attached to a VMS-004 Discovery Deluxe USB microscope (Veho, Dayton, OH, USA)). The pictures obtained using BPC 157 10 ng/kg IP are representative of the observations for both doses.
Figure 8. (a). Brain swelling relative volume (% of volume in healthy). Controls (white bars) with 1 mL of 96% alcohol introduced in the stomach showed increased swelling of the brain from the first minute until the end (1, 5, 15, and 30 min following therapy, which was given at 1 min following the introduction of 96% alcohol in the stomach (intragastric alcohol before medication)). The course was markedly counteracted with BPC 157 therapy (10 µg (light gray bars) or 10 ng/kg (dark gray bars)) IP). Six rats/group/interval. * p < 0.05 at least vs. the corresponding control, # p < 0.05 at least vs. normal healthy rats (100%). (b-e). When given to healthy rats (healthy brain presentation (H)), intragastric alcohol (A) induced brain swelling and BPC 157 (B) induced the reversal of the brain swelling, as per the timeline. One mL of 96% alcohol introduced directly into the rats' stomachs, BPC 157 10 ng/kg IP at 1 min following alcohol. Brain presentation as follows: b. Healthy, (H), before the introduction of alcohol. (c). At 1 min following alcohol, but before therapy (A 1 min). (d). At 1 min following BPC 157 (B 1 min). (e). At 15 min following BPC 157 (B 15 min), (camera attached to a VMS-004 Discovery Deluxe USB microscope (Veho, Dayton, OH, USA)). The pictures obtained using BPC 157 10 ng/kg IP are representative of the observations for both doses. (f,g). Gross presentation of the brain in rats that received 1 mL of 96% alcohol introduced directly into the stomach, BPC 157 10 ng/kg IP (f) or saline 5 mL/kg IP (g) at 1 min following alcohol. Presentation at 30 min thereafter (BPC-157-treated rat, B 30 min (f), saline (control)-rat, C 30 min (g)), immediately after euthanasia (camera attached to a VMS-004 Discovery Deluxe USB microscope (Veho, Dayton, OH, USA)).
Furthermore, microscopically, the course was markedly counteracted with BPC 157 therapy (Figure 9a-p). The rats treated with saline after intragastric alcohol presented significant lesions in the cortex (Figure 9a-f) and hippocampus (Figure 9g-i). By contrast, the rats that received BPC 157 presented a structurally normal cortex (Figure 9a,j-n) and hippocampus (Figure 9i,o,p). The rats treated with saline after intragastric alcohol exhibited brain edema after 1 and 5 min, with vascular congestion. Furthermore, after 15 and 30 min, these rats showed generalized congestion, edema, and intracerebral hemorrhage, with degenerative changes in the cerebral and cerebellar neurons indicating toxic changes created by the ethanol. Regularly, the rats treated with BPC 157 after intragastric alcohol administration showed no cerebral or cerebellar tissue changes, with only mild congestion after 1 and 5 min. After 30 min, the rats showed minimal edema, with no degenerative changes to the cerebral, cerebellar and dark line hippocampus neurons. Within the 15-30 min period, the number of 'dark' neurons in the temporal cortex and in the hippocampus was markedly increased in the control group. Indeed, 'dark' neurons represented one fifth (temporal cortex) or of more (hippocampus) of the total neurons. By contrast, there was only a very small percentage of 'dark' neurons in the rats treated with BPC 157 (Figure 9a,i).
Gene Expression Analysis: eNOS, mTOR, and VEGFa
As an illustration of the complexity of the beneficial effect, when assessed at 5 min following the intragastric introduction of alcohol, eNOS, mTOR, and VEGFa expression showed a particular presentation (Figure 10). It is likely that this particular presentation depended on the affected organ, and the particular processes initiated while all of these organs (the stomach, and then the brain, heart, lung, liver and kidneys) were fairly protected, grossly and microscopically, by the administration of BPC 157. eNOS showed decreased expression in the stomach and liver and increased expression in the lung, while the brain, heart, and kidneys were not affected. mTOR showed decreased expression in the heart, lung, brain and liver, while the stomach and kidneys were not affected. VEGFa showed decreased expression in the heart, stomach and liver and increased expression in the lung, while the brain and kidneys were not affected.
Summary
In summary, we found that in rats subjected to intragastric alcohol administration, BPC 157 therapy counteracted the gastric lesions as well as the peripheral and central deficits. There was rapid resolution that adequately reversed the anatomical imbalance of venous drainage and improved the peripheral and central deficits. The outcomes were the attenuation of thrombosis and the counteraction of brain, heart, lung, liver, kidney and gastrointestinal lesions. In addition, gene expression analysis showed a particular presentation of eNOS, mTOR, and VEGFa in the heart, stomach, kidneys, lung, brain and liver (see Figure 10 as summarized background).
Figure 10. Summarizing the essential epithelium and endothelium protection interplay known in Robert and Szabo's cytoprotection, and the role of the stable pentadecapeptide BPC 157 as a likely mediator, we suggest that BPC 157 may be useful as cytoprotective therapy. Hopefully, the huge theoretical importance of all aspects of the cytoprotection concept may finally be realized in practice. Conceptually, taking intragastric alcohol as a Virchow triad occlusive syndrome, there is a new point (bypassed occluded or ruptured vessel, equation endothelium maintenance → epithelium maintenance = blood vessel recruitment and activation towards defect or bypassing vessel occlusion), the recruitment of collateral blood vessels to compensate for vessel occlusion and reestablish blood flow. BPC 157 counteracted various venous occlusion-induced syndromes: inferior caval vein syndrome, superior sagittal sinus occlusion syndrome, superior mesenteric artery occlusion syndrome, superior mesenteric vein occlusion syndrome, superior mesenteric artery and vein occlusion syndrome, Pringle maneuver ischemia, reperfusion, and Budd-Chiari syndrome in rats. The activation of the alternative collateral pathways to bypass occlusion, and the reestablishment of alternative blood flow, resulted in the counteraction of the full range of consequent perilous syndromes. Thus, the intragastric alcohol application-induced syndrome comparable to the effects of major vessel occlusion was also counteracted with the application of BPC 157. As an illustration of the complexity of these beneficial effects, when assessed at 5 min following intragastric alcohol challenge, eNOS, mTOR, and VEGFa expression showed a particular presentation. Gene expression analysis: eNOS (black bars), mTOR (horizontally dashed white bars), and VEGFa (vertically dashed gray bars) at 5 min following intragastric alcohol challenge. Percentage of changes relative to control samples in the affected organs (heart, stomach, kidney, lung, brain and liver). Therapy (BPC 157 (10 ng/kg); saline (5 mL/kg (control)), IP) was administered at 1 min following the introduction of 96% alcohol in the stomach.
Six rats/group/interval. * p < 0.05, at least vs. the corresponding control.
Discussion
Of note, the prototypical model of cytoprotection, namely Robert's absolute alcohol intragastric application, is usually described in terms of epithelial and endothelial injury and thrombosis [1][2][3]. However, our study was the first essential extension to describe the full intragastric alcohol application-induced occlusive syndrome, which includes peripheral and central dysfunction resembling the deficits that have been described in rats with major vessel occlusion [11][12][13][14][15][16][17]. We claimed that in both noxious events, the intragastric introduction of alcohol and major vessel occlusion [11][12][13][14][15][16][17], an effective therapy is the activation of bypass collateral pathways, which is key to cytoprotection studies and the screening of the cytoprotective activity of agents. The administration of BPC 157 has been shown to counteract the deficits induced by the occlusion of major vessels [11][12][13][14][15][16][17]. In this study, likely in the same way, BPC 157 counteracted the gastric lesions, peripheral (portal and vena caval hypertension and aortal hypotension) and central (brain swelling and intracranial hypertension) disturbances and organ lesions induced by intragastric alcohol administration. In addition, this may have taken place along with the activation of the specific molecular pathways, i.e., eNOS, mTOR and VEGFa, known to interact with alcohol and the application of BPC 157 [7,10,58,59] (Figure 10).
Intragastric alcohol application-induced syndrome comparable to the effects of major vessel occlusion follows the initial insult to the stomach. Marked gastric lesions alongside Virchow's triad, inside and outside the stomach, appear with the same characteristics as the previously mentioned occlusive syndromes produced by the particular occlusion of one or two major vessels [11][12][13][14][15][16][17]. Since BPC 157 exerts beneficial effects against permanent vessel occlusion syndromes by activating collateral bypassing loops [11][12][13][14][15][16][17], the benefits of BPC 157 against intragastric alcohol application likely involve a similar activation of the relevant collateral pathway(s). As a likely rescue pathway, we identified the activated azygos vein pathway and the inferior vena cava-azygos vein-left superior vena cava pathway. In the rats treated with saline after intragastric alcohol, the azygos vein had completely failed, a phenomenon that also occurs in the rat model of Budd-Chiari syndrome [14] and central venous occlusion [17]. In the intragastric alcohol-treated rats administered BPC 157, the inferior vena cava and superior mesenteric vein congestion was reversed, reflecting the elimination of the otherwise severe vena caval and portal hypertension. Thus, as has been noted in vessel occlusion syndromes [11][12][13][14][15][16][17], the principle of endothelium maintenance → epithelium maintenance [1,8] was upgraded to endothelium maintenance → epithelium maintenance = blood vessel recruitment and activation ('running') towards the site of injury, or bypassing occlusion via alternative pathways [5,6] in the intragastric alcohol-treated rats.
There may be, however, other activated bypassing loops. For example, in the rats with an occluded superior sagittal sinus, we identified central shunts through the ophthalmic vein, the angularis vein, the facial anterior and posterior veins, and the facial vein, as well as the superior cerebral veins, the superior and inferior sinus cavernosus and sinus petrosus, the sinus transversus, the external jugular vein, the subclavian vein and the superior vena cava [17]. Thereby, with BPC 157 therapy delivered topically on the swollen brain, intraperitoneally or intragastrically, there was rapid attenuation of the brain swelling [17]. Therapy with BPC 157 rapidly eliminated the increased pressure in the superior sagittal sinus, the severe portal and vena caval hypertension and aortal hypotension, it quickly recruited collateral vessels, abrogated venous and arterial thrombosis, and it helped in the recovery of the organ lesions [11][12][13][14][15][16][17]. Evidently, BPC 157 offers therapeutic benefits in the resolution of damage due to intragastric alcohol or peripheral [11][12][13][14][15][16] or central [17] vessel occlusion.
It appears that after the intragastric introduction of alcohol, BPC 157 therapy offers strong beneficial effects to the stomach and brain regions, including the cerebral and cerebellar cortices, the hypothalamus, the thalamus and the hippocampus. For example, BPC 157 attenuates intracranial hypertension and thereby counteracts edema and intracerebral hemorrhage due to intragastric alcohol, which lead to hypoxic/dark neurons and degenerative changes in the cerebral and cerebellar neurons. It also helps to maintain (negative) blood pressure in the superior sagittal sinus [17]. The shunts activated by BPC 157 provide for a well-functioning and rapidly established alternative equilibrium to compensate for the injurious course of intragastric alcohol administration, similarly to central venous occlusion [17], as well as peripheral superior mesenteric artery or superior mesenteric artery and vein occlusion [13,15,16]. If this resolution of impaired venous drainage does not occur centrally, the harmful build-up of venous blood leads to venous and intracranial hypertension [65][66][67][68][69][70]. Indeed, this rise occurred in essentially the same high range in the controls with the intragastric alcohol and a permanently occluded superior sagittal sinus [17], and to a slightly lesser extent in the controls with an occluded superior mesenteric artery or an occluded superior mesenteric artery and vein [13,15,16]. Of note, this particular vascular network organization, which BPC 157 induces to maintain undisturbed intracranial pressure (superior sagittal sinus), may work against further worsening that could appear with either peripheral or central challenges.
In one study, BPC 157 affected Egr, Nos, Srf, Vegfr, Akt1, Plc and Kras gene expression in the vessel that provided an alternative operating pathway (i.e., the left ovarian vein as the key to the infrarenal occlusion-induced inferior vena cava syndrome in rats) [11]. In the hippocampus, BPC 157 strongly elevated Egr1, Akt1, Kras, Src, Foxo, Srf, Vegfr2, Nos3 and Nos1 expression and decreased Nos2 and Nfkb expression; these changes may indicate how BPC 157 exerts its effects [40]. Therefore, in the rats that received intragastric alcohol, BPC 157 maintained the integrity of the affected organs; the distinctive, evidenced changes to eNOS, mTOR and VGFRa expression may illustrate particular points at which BPC 157 therapy worked simultaneously in each of the organs. An illustrative example may be the simultaneous increase (eNOS, VGFRa expression) and decrease (mTOR expression) in the lung. Interestingly, we noted that eNOS, mTOR and VGFRa expression changed together in the liver (decrease) and in the kidneys (no change). Overall, BPC 157 can induce and interact with a host of molecules to exert beneficial effects. The extent of the benefits of BPC 157, and the mechanisms by which it exerts these benefits, remain to be fully explored.
As a final note, it is true that animal studies per se, especially cytoprotection studies, must be interpreted with caution [96]. In addition, there is a relative paucity of BPC 157 clinical data [5,6]. However, BPC 157 has proved to be efficacious in the treatment of ulcerative colitis [5,6], in both clinical settings [97,98] and in experimental animal models [36,[80][81][82][83][84][85]99,100], as well as in complications (for review, see [101]). An important point regarding the application of cytoprotection in practice in various species [100] is the very safe profile of BPC (the lethal dose (LD1) could not be achieved) [7,45], a point recently confirmed in a large study by Xu et al. [102]. In this context, and also for practical purposes, given that the therapeutic effects speak for themselves, especially in cytoprotection studies [1][2][3][8], animal models offer indispensable substantiation of the actions and principles by which BPC 157 exerts its benefits in multiple settings, including alcohol intoxication.
Conflicts of Interest:
The authors declare that there are no conflicts of interest. | 2021-10-19T16:01:52.316Z | 2021-09-23T00:00:00.000 | {
"year": 2021,
"sha1": "ec7ab24b14a857a168d0cb6272a5c6cc7c659758",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/9/10/1300/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2272dfca2a63b6a6ef3acf5bf4c4b26d43d5087",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248866874 | pes2o/s2orc | v3-fos-license | How is your carbon footprint?
"It is far from easy to determine whether Nature has proved to man a kind parent or a merciless stepmother" Gaius Plinius Secundus (AD 23/24-79), called Pliny the Elder was a Roman natural philosopher (Fig. 1).
There is no doubt that, whether you are a climate change believer or a climate-change denier, there has been a definite change in the zeitgeist, with personal responsibility playing a much more prominent role than here-to-fore. It's not that long since frequent flyers, usually executives who like to travel in business class, were boasting about their status, and virtually ignoring their 'carbon footprint.' There has been a bewildering amount of claim and counter-claim in the media with the majority of blame being laid at the feet of the large multinationals.
I don't know about you, but weather extremes, fierce storms, forest fires [1] (Fig. 2) flooding, and drought, seem to be much more frequent occurrences than I can remember in my youth.
Although forest fires produce human misery and destruction, it seems that 'smoke taint' in wine is not a major problem as in most cases the grape harvest had been completed before the fires. In some areas of the world, climate change appears to herald benefits [2], whereas in others, the opposite is the experience. I don't know if you believe in climate change reports but they exist in abundance [3]. The latest United Nations (UN) report on climate change should be a cause for concern.
According to Ben Spencer, writing in the Sunday Times [4] in February 2022, it is already too late to reverse some of the effects: some options for climate resistant development have already been forfeited by past societal choices.
What has this to do with wine? Most wine is not consumed in the area/country in which it is produced. Therefore, wine, like many other commodities, is often transported a long distance before consumption. There are a number of ways to transport wine: in bulk, often by ship, in cardboard containers or, most commonly, in bottles [5]. Bulk transport of wine is often to countries like The Netherlands, where it is blended, bottled and labelled for large supermarkets, often with surprisingly good results. 'Wine in a box' [6] may be of reasonably good quality but there seems to be a psychological resistance to its consumption. However, many people to whom I have spoken find 'wine in a box' quite acceptable for everyday consumption. I presume snobbery plays a part for those who don't! Is the weight of a wine bottle important? Yes, wine bottles, full or empty, are commonly transported by air, thus adding to the carbon footprint. For some bizarre reason, some wine makers are bottling their wine in larger and heavier bottles than the standard 75 cl. I presume some consumers think wine in such bottles is better!! I and others find it irritating. These bottles are more expensive to transport and add more to CO2 emissions. They take up more space in the cellar and many modern wine storage systems cannot easily accommodate them.
What about using something other than glass to store wine? Well, believe it or not, wine bottles made from cardboard (Fig. 3) have entered the market [7]. However, tradition, snobbery and resistance to change will make it very difficult, in my view, to replace glass bottles. Cardboard wine bottles also require a liner to hold the liquid. Although gaining a little foothold in China, as yet cardboard wine bottles have not had a huge impact in Europe or North America. Somehow I can't imagine building a collection of cardboard wine bottles and the question of longevity has not been adequately addressed, although I suppose cardboard wine bottles will be used for young wines.
In the meantime I think I'll continue to drink wine from a glass bottle and eschew wine from the dreaded plastic bottles served in 'coach' class. | 2022-05-19T14:33:52.495Z | 2022-05-19T00:00:00.000 | {
"year": 2022,
"sha1": "cdd275445878cbb33f0c194681fc1fd455ffc39e",
"oa_license": "CCBYSA",
"oa_url": "https://www.nature.com/articles/s41409-022-01710-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "e2ea069aa1df6ba52d9b4b4800e627950ea18a4c",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234662731 | pes2o/s2orc | v3-fos-license | Remedial Strategies Against an Emerging Threat of Ticks in Goats: A Review
The current study was conducted in order to explore the prevalence of tick infestation as well as different curative strategies against tick infestation in goats. In this regard, a detailed review was carried out and the results obtained were found to be interesting and useful. It was noticed that the prevalence of tick infestation in goats is 70% compared with other diseases. Female goats suffer more (82.84%) compared to males (56.25%). All the tick-infested goats show the clinical signs of weakness, anaemia, anorexia, hair loss and skin damage. It was further reported that various applications are used for controlling this emerging threat in goats; however, the efficacy rate of Trichlorfon remains significantly higher (78.4 to 85.6%) compared to other applications. Further, Cypermethrin was reported as the second most effective application (68 to 81.6%) after Trichlorfon. In conclusion, Trichlorfon has proved to be the most effective drug against tick infestation in goats compared to all other applications. It may safely be used against the emerging threat of ticks.
Introduction
Tick infestation is a dangerous threat to farmers in goat-raising areas of developing countries, where people depend on goats for their livelihood. Economically, ticks reduce the quality of hides and skins, tick injuries produce secondary infections of the skin, irritated and allergic skin has no economic value, and the toxicosis caused by excessive chemical use decreases production [1]. Tick prevalence has been observed in goats worldwide. Tick prevalence and factors related to the ectoparasitism problem were studied in goats of two separate agro-climatic regions in India [2]. Tick infestations in livestock cause huge economic losses worldwide, as reported by the United Nations, and ticks transmit haemoprotozoan parasites from animal to animal and to humans during blood-sucking [3].
Goats play a significant role in the livestock sector. Around 90% of the world's goat population is found in developing or underdeveloped countries such as Pakistan, South Africa, Nigeria, India, Bangladesh and Sudan [4]. Goats fulfil the demand for meat, milk, skin and by-products for national and international export. Goats are well known for their good adaptation and are resistant to endemic diseases. There are many diseases of the goat, including bacterial, viral, protozoal and metabolic disorders, that cause economic losses [5]. Amongst all of these, tick infestation is very common in goats and is a major constraint on production, causing morbidity and mortality with clinical signs of anaemia, weight loss, retarded growth, paralysis, reduced production and intense irritation leading to skin damage, which causes huge economic losses by reducing the quality and market value of the skin [6]. It also causes weakness and decreases production, which has a serious economic impact on goat farming and on individual and national goat production in underdeveloped countries. Although treatment of infestation is attempted using different drugs, in most cases therapy fails to cure the animals. Major causal factors could be drug resistance or the misuse of acaricide formulations with improper concentrations [7]. Keeping all these facts in view, the current study was planned with the main motive of investigating the curative strategies against the emerging threat of ticks in goats worldwide.
Threat of tick infestation in goats
Ticks cause losses in productivity through blood loss, hide damage and the transmission of different bacterial, viral and protozoal diseases [8]. Ectoparasite infestation reduces economic gains and affects the welfare of farm animals [9]. An increased incidence of blood protozoan diseases is seen where ticks act as intermediate hosts [10]. Ticks have few natural predators and quickly develop resistance to drugs, so controlling tick infestation is very difficult [11,12]. Disease surveillance and diagnosis are very limited, which is an obstruction to livestock health [13]. The highest incidence of tick infestation is recorded among cattle (60.50%), followed by goats (25.90%) [14]. There are many reports from around the world on tick infestation: for example, [9] reported an incidence of tick infestation in cattle as high as 81.25 percent, and the study by Prakasan & Ramani [10] reported about 58.60 percent tick infestation in South India.
It has been further stated that ticks cause different clinical signs and lesions in goats, which include anaemia, weight loss, decreased blood protein levels, reduced fertility, abortion, an unthriftiness condition, increased susceptibility to bacterial and viral disease, and death. The most common tick infestation sites on the body surface of the goats were recorded on the ears (53.57%), followed by the anus and external genitalia (14.58%) and around the eyes (6.25%). The highest tick infestation (54.16%) was observed during the summer season [16]. The rate of infestation was higher for Haemaphysalis and lower for Ixodes ticks [17]. The prevalence of Ixodid ticks from July 1991 to June 1992 in the Chittagong hilly area of Bangladesh was reported. Figure 2 shows the histological versus macroscopic lesions caused by ticks. It was indicated that 65.45% of cattle and 44.4% of goats were infested with different ectoparasitic tick species such as Boophilus microplus, Rhipicephalus appendiculatus and Haemaphysalis, and it was also reported that the rate of tick infestation was higher in the summer season and lower in the winter season [18]. Figure 3 shows some tick species of concern for goats.
Curative strategies against tick infestation
Cypermethrin has been reported as a compound that kills external parasites and has been used to control ticks, lice and flies of livestock. There are many reports on the efficacy of cypermethrin and on the drug resistance adopted by ticks such as Rhipicephalus microplus, as reported in [24], where systemic and topical drugs such as acaricides had previously failed to control Rhipicephalus (Boophilus) microplus infestations. Initially, a single subcutaneous injection of Ivermectin 630 μg/kg was used; although there was a 50%-75% reduction in ticks that persisted for 30-40 days, Cypermethrin (150 ppm) was found completely ineffective. The organophosphate Trichlorfon exhibited intermediate efficacies of approximately 60%. Therefore, it was concluded that there was a high degree of resistance to Ivermectin, and it was unable to eliminate tick infestations from cattle. The comparative efficacy of Ivermectin and Cypermethrin against tick infestation in bovines was reported in [25]. A total of 480 ticks were exposed to Ivermectin (IVM) in a study on R. microplus [26]. The efficacy of an Amitraz/Cypermethrin pour-on preparation against natural tick infestations of goats and sheep was also reported in [27], where Amitraz/Cypermethrin was effective against different species of ticks such as Amblyomma, Rhipicephalus, and Hyalomma in goats.
In another study, tick infestation in different goat breeds was observed [28] in Muzaffargarh and Layyah, Punjab, Pakistan.
They observed the prevalence of tick infestation in 800 goats; the overall prevalence was 60.1% (481/800) across both districts of Punjab. The highest prevalence of Hyalomma ticks was observed in Layyah as compared to district Muzaffargarh. The most predominant species recorded in both districts was Rhipicephalus sanguineus.
Female goats were more highly infested (72.8%) as compared to males (47.5%), while younger animals had more tick infestation (63.5%) as compared to older ones (56.7%). Further, the efficacy of four acaricide drugs, Deltamethrin, Amitraz, Diazinon, and Ivermectin, was compared against tick infestation in sheep and goats [29]. The first three products, Cypermethrin, Amitraz, and Diazinon, were used for dipping at concentrations of 0.05%, 0.05%, and 0.04%, respectively. Ivermectin (1%) was injected at the dose rate of 0.2 mg/kg subcutaneously. A total of 1054 ticks were examined [30]. In another study, a Trichlorfon efficacy trial proved that this drug is very effective against ticks and other skin parasites [31]. Trichlorfon is an organophosphate insecticide used to control bedbugs, fleas, cattle grubs, flies, ticks, leaf miners, and leaf hoppers, as well as for treating domestic animals for the control of internal parasites. In Argentina, Trichlorfon was very effective in controlling ticks and external parasites, with an efficacy of 83.45% against cattle ticks, as reported in [33]. It has further been shown that Trichlorfon is very successful for treating resistant ticks. Dong et al. [34] investigated 17 cattle farms in Inner Mongolia for tick infestations of Boophilus microplus. The adult ticks were controlled by using a 1% trichlorfon solution applied to the body surface or by spraying. Both methods were found effective in killing Boophilus microplus at the herd level. Neguvon R trichlorfon 98% (Bayer, Germany) was also effective in controlling all external parasites, including ticks, when it was applied as a 0.15% solution on the external body surface by means of dipping or spraying [35].
Conclusion and Recommendations
The study concludes that tick infestation is a common threat to goats throughout the world. This threat may be efficiently controlled by using Trichlorfon. Further, Trichlorfon has proved to be the most effective drug against tick infestation in goats compared to all other applications. It may safely be used against the emerging threat of ticks.
A tick control program should be implemented worldwide using the drug Trichlorfon in the form of dipping or spraying. Further studies should be carried out with different formulations in combination to determine the synergistic effect of different drug combinations and to decrease the development of drug resistance in ticks. | 2021-05-17T00:03:47.138Z | 2020-09-21T00:00:00.000 | {
"year": 2020,
"sha1": "95edd574e117fe1459d03e98baf94d4db78d2ca8",
"oa_license": null,
"oa_url": "https://doi.org/10.34297/ajbsr.2020.10.001517",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "2270111458b7ecf8b432641b1e68f592f6e2ab6e",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Geography"
]
} |
267989811 | pes2o/s2orc | v3-fos-license | A Rare Recurrence of a Thyroglossal Duct Cyst Accompanied by Several Postoperative Complications in an Eight-Year-Old Boy: A Case Report
A thyroglossal duct cyst (TGDC) is a fluid-filled mass in the neck resulting from the persistence of a duct from fetal development that typically regresses spontaneously. When it persists, it is most often removed in a surgical procedure known as a Sistrunk operation. This case study presents the intriguing case of an eight-year-old boy who presented to an otolaryngology clinic with both a recurrence of his TGDC and several postoperative complications after the Sistrunk operation was performed. After the initial procedure resulted in an incomplete removal of the TGDC, the patient was referred to Interventional Radiology for sclerotherapy. After several rounds of this treatment technique, the cyst remnants still persisted along with their associated symptoms. Due to the very low likelihood of a recurrence being observed after surgical removal with subsequent sclerotherapy, the reappearance of the cyst raised several clinical questions. This report underscores the significance of a thorough evaluation and consideration of unique presentations when confronted with recurrent TGDCs.
Introduction
The Sistrunk operation is a widely accepted surgical intervention within the field of otolaryngology due to its comprehensive approach in removing thyroglossal duct cysts (TGDCs) by not only removing the cyst, but also the middle part of the hyoid bone and the surrounding tissue. This is especially true when comparing its recurrence rate of 5.3% to a simple excision, which has a recurrence rate of 55.6% [1]. The most noteworthy difference between these two procedures is the fact that the Sistrunk operation involves the removal of the body of the hyoid bone to ensure that the full cyst is excised from the body. This is in contrast to a simple excision, where recurrence was common due to TGDC remnants that would often remain anchored to the hyoid bone [1]. The aforementioned statements all contributed to the procedure developed by Dr. George Sistrunk in the mid-1900s to eventually become the standard of care for this pathology within the field of otolaryngology [2]. Due to the typically high rate of success attributed to this procedure, physicians are especially concerned with the exceedingly rare occurrence of persistence of postoperative complications. This case examines the emerging technique of sclerotherapy in the treatment of recurrent TGDCs. Multiple attempts were made using this approach, but due to the persistence of the target tissue the more traditional approach of an "extended" Sistrunk procedure was ultimately performed.
Case Presentation
This case delineates the clinical course of an eight-year-old Caucasian male patient presenting with a recurrent TGDC to an otolaryngology clinic in a hospital outside of Detroit, Michigan, United States.
The initial manifestation of the TGDC occurred two years prior, characterized by a non-painful midline mass at the hyoid bone level, exacerbated with tongue protrusion, prompting suspicion of TGDC. Aside from these findings, the rest of the physical exam and review of systems was normal as he was noted to have speech appropriate for his age, a good quality of sleep, a good appetite, normal vitals, and a height and weight that fall within the normal range. This patient also did not have any prior significant past medical or surgical history.
The midline mass prompted a subsequent ultrasound which confirmed the diagnosis (Figure 1), leading to the implementation of a Sistrunk procedure for definitive intervention, involving the excision of the hyoid bone. The wound was closed with Steri-Strips and no drainage or redness was observed. The excised mass was sent to pathology for examination and the results were consistent with normal findings for an excised TGDC. The report described the mass as "fibromuscular tissue with patchy areas showing granulation tissue with mixed acute and chronic inflammatory cells associated with degenerated skeletal muscle fibers". His mother was instructed to give the patient 9.3 mL of ibuprofen by mouth every six hours as needed for postoperative pain management, along with 5 mL of clindamycin by mouth every eight hours for 10 days for bacterial prophylaxis.
FIGURE 1: Ultrasound Demonstrating the Presence of a Thyroglossal Duct Cyst (TGDC)
Six days after the procedure, the patient was reported to have "significant submental swelling" that was out of proportion to what is normally observed at this stage of the healing process. This prompted the ENT to supplement the drug regimen with sulfamethoxazole-trimethoprim at a dose of 10 mL by mouth every 12 hours for seven days. Three days after updating the drug regimen, he presented to the otolaryngology clinic for a follow-up visit displaying a "greenish" drainage at the incision site despite adherence to the sulfamethoxazole-trimethoprim antibiotics, necessitating a Penrose drain placement. A soft tissue biopsy showed the presence of a rare Neisseria lactamica (Figure 2) and the final diagnosis was a submental abscess. The abscess was drained, new dressings were placed, and the antibiotic regimen was continued.
PMN: Polymorphonuclear neutrophil
Persistent drainage and granulation tissue several weeks later prompted an ultrasound of the neck, which showed 2.5 cm of fluid that had collected in the area of the previous TGDC resection (Figure 3). The ENT then referred the patient to pediatric Interventional Radiology, where intravenous sclerotherapy with 100 mg doxycycline suspended in 8 mL sterile water with 1 mL contrast was initiated to shrink the residual TGDC that was determined to be present due to the persistent collection of fluid. This procedure occurred three months after the excision of the TGDC. A subsequent round of sclerotherapy was indicated four months after the first round due to persistent swelling of the neck with intermittent cyst rupture.
FIGURE 3: Ultrasound Demonstrating Fluid Collection at Site of Surgery
Despite these interventions, symptoms endured, accompanied by mild weight loss over the next eight months. Ultrasound confirmation of TGDC recurrence (Figure 4) led to a second surgical removal procedure, performed successfully two months later. Notably, postoperative complications manifested as an apparent allergic reaction, managed with Benadryl (7 mg intravenous injection), steroids (4 mg intravenous injection), and subsequent antibiotic therapy with cefdinir (250 mg/5 ml by mouth).
FIGURE 4: Ultrasound Demonstrating Areas of Thyroglossal Duct Cyst (TGDC) Persistence
Two weeks after the second TGDC excision surgery, the patient has a slight accumulation of fluid at the surgical site but is otherwise noted as healthy with a normal physical examination.
Discussion
Thyroglossal duct cysts are often initially discovered as a visible midline mass on the anterior neck below the level of the hyoid bone [3]. This presentation is distinguished from other cervical masses, such as a branchial cleft cyst, due to the fact that it moves with both tongue protrusion and swallowing [3]. They occur in approximately 7% of the population worldwide and are distributed evenly between males and females [4]. They appear as a result of the failure of the closure of the embryological thyroglossal duct, which is an outgrowth of the primitive pharynx that develops during the third week of gestation [3]. The nonobservance of this involution is what leads to the subsequent clinical findings that are associated with a TGDC.
The Sistrunk procedure, developed by Dr. George Sistrunk, gained prominence in the medical community because it provided a robust solution to thyroglossal duct cysts by comprehensively addressing the issue, thus reducing the chances of recurrence [2]. Aided by the success of this mid-20th-century invention, this groundbreaking contribution has become the standard of care and has been widely accepted among healthcare professionals [2].
Due to the low rates of recurrence following the Sistrunk procedure, the circumstances of these reappearances are always carefully and holistically scrutinized. The most common cause for recurrence is an incomplete removal of the thyroglossal tract and its branching ductules during the primary procedure [3]. These ductules branch from the main duct and are difficult to detect during the operation [3]. Even a small amount of remaining tissue can lead to recurrence for several reasons. This includes the ability of the remaining tissue to grow and expand, the transformation of either pre-existing or dormant cysts, or the development of de novo cysts [3].
The most common method for treating all of the various forms of TGDC recurrence is an "extended" Sistrunk procedure. In a study by Pastore and Bartoli, this procedure was demonstrated to have a 100% success rate in eliminating postoperative complications and recurrences [5]. This study goes on to mention that this follow-up operation is both "highly effective and safe". An alternate approach to treating recurrences, and the one that was initially attempted in this case study, is the emerging technique of sclerotherapy. This technique involves an interventional radiologist injecting a sclerosing agent into the cyst to induce sclerosis and resolve the cyst. It is less invasive than another procedure and involves a shorter postoperative recovery time.
Existing literature advises against the use of sclerotherapy as the primary method of treatment for pediatric TGDCs [6]. Kim and Chung describe cases where this was attempted, and ultimately failed, in a literature review that looked into the success rates of this approach [6]. However, current literature generally supports the use of sclerotherapy when treating the recurrence of the TGDC [7]. A case where this was attempted successfully in a pediatric patient is outlined by Ibrahim and Daniel, where they treat the recurrent cyst with doxycycline as the sclerosing agent [7].
Our case is unique because the approach that was taken is fully backed by existing literature, but the TGDC took years to ultimately resolve and required multiple modalities of treatment. To the best of our knowledge, there are no existing cases where each of these lines of action were subsequently attempted without a successful resolution of the cyst. It was not until the final follow-up procedure that the symptoms finally ceased. The patient was not only resistant to the initial interventional treatments, but also required multiple changes in antibiotic regimens and was found to have an infection with a rare strain of Neisseria lactamica at the site of the wound.
Conclusions
Based on a thorough review of existing literature, this is the first case of a recurrence of a TGDC that persisted beyond all of the initial interventional methods.Typically the cyst and its associated symptoms are resolved with the initial Sistrunk procedure.This procedure has a small chance of failing and in cases that it fails physicians can choose to either refer the patient to an interventional radiologist for sclerotherapy or perform a revision operation themselves.For this case, sclerotherapy was chosen and even after multiple rounds the TGDC and its symptoms persisted.Ultimately, a revision excision procedure was performed and the residual tissue was removed leading to the cessation of symptoms.Overall from the time of initial presentation until the final resolution of the patient's symptoms, the treatment process spanned the course of 19 months.This case demonstrates the importance of a comprehensive evaluation and a careful consideration of alternative treatments when addressing the case of a recurrent TGDC.This is especially true when considering the extended treatment timeline and protracted duration of symptoms that are observed in this case. | 2024-02-27T16:04:08.878Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "e63e13d9f66744c4326669dcfb44410dd6dc48ee",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "74d302ae1ae15b617c40c79a3cb42216c00bd6fb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
31009781 | pes2o/s2orc | v3-fos-license | Regional lymphadenectomy strongly recommended in T1b gallbladder cancer
Abstract
This article discusses the adequate treatment of early gallbladder cancer (T1a, T1b) and is based on published studies extending over nearly 3 decades. Randomized studies and meta-analyses comparing different surgical treatments do not exist. The literature shows that in up to 20% of patients lymph node metastases are found in T1b gallbladder cancer. Due to the high malignancy with early angiolymphatic spread and resistance to chemotherapy and radiation on the one hand, and the relatively low operative risk of extended cholecystectomy (cholecystectomy and regional lymphadenectomy) on the other hand, we believe that this procedure is mandatory in early gallbladder cancer.
TO THE EDITOR
We read with interest the systematic review by Lee and colleagues, which compares the results of surgical treatment of T1a and T1b gallbladder cancer with cholecystectomy alone or with extended cholecystectomy [1]. In this study, the published results of 1266 patients were evaluated. The authors conclude that there is no evidence to show that extended cholecystectomy is advantageous over simple cholecystectomy for T1b cancer. However, they recommend performing regional lymphadenectomy in T1b gallbladder cancer. The argument for extended cholecystectomy was mentioned at the end of the discussion: via lymphadenectomy, material can be obtained for correct staging, which is the basis for a possible re-resection.
In fact, the debate on adequate treatment of so-called early gallbladder cancer has extended over nearly 3 decades [2] . Randomized controlled studies and consecutive meta-analyses comparing different surgical treatment strategies for T1a and T1b gallbladder cancer do not exist.
The generally accepted opinion of visceral surgeons, as the leading physicians for this disease, is to perform regional lymphadenectomy in patients from T1b status onwards [3].
In their retrospective evaluation, Lee and colleagues found that 10.8% of patients had lymph node metastasis in T1b gallbladder cancer, and 9.3% of patients with T1b gallbladder cancer died due to tumour recurrence. 12.5% of T1b patients had recurrence after simple cholecystectomy, and only 2.7% had recurrence after extended cholecystectomy. These data are consistent with recent studies. In contrast, the rate of postoperative morbidity was 28% and postoperative mortality was 1.5% after extended cholecystectomy in the evaluation by Lee et al [1] which, in our opinion, may not be representative [4] .
The detection rate of early gallbladder carcinoma has increased in recent years, due to the high frequency of laparoscopic cholecystectomy. Currently, the detection rate of T1a and T1b tumours exceeds the 10% cited by Lee and colleagues, who evaluated data from 1991 on. The high rate of open cholecystectomy (e.g., 54.4% in T1a gallbladder carcinoma) argues for a past era in the treatment of benign gallbladder disease.
In the debate on surgical strategies for early gallbladder cancer, an exact comparison of operating procedures is mandatory as there is a wide variation in the procedures carried out by surgeons during "regional lymphadenectomy".
Regional lymphadenectomy in our group consists of lymphadenectomy in the hepatoduodenal ligament, pericholedochal, periportal and along the celiac axis.
When considering current experience in the interdisciplinary treatment of solid gastrointestinal cancer, abandonment of lymphadenectomy in submucosal T1b esophageal, gastric, and colorectal cancer is no longer defensible [5].
The characteristics of gallbladder cancer are high malignancy with aggressive direct, lymphatic and hematogenous-venous spread, and extensive resistance to chemotherapy and radiation [6]. Combined with the low morbidity and mortality of regional lymphadenectomy, this procedure is mandatory in T1b cancer of the gallbladder [4].
The study by Lee and colleagues is important and commendable. It shows that oncosurgical treatment recommendations for patients with highly malignant cancer are not only based on literature studies. | 2018-04-03T01:49:28.506Z | 2011-10-14T00:00:00.000 | {
"year": 2011,
"sha1": "749f84cdb3d2aa729d772d3a25148438a91993c0",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v17.i38.4347",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "51e8dfc86c8e985aae7c350e2929ae426ff138ce",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244584699 | pes2o/s2orc | v3-fos-license | Stator Winding Second-Order Thermal Model including End-Winding Thermal Effects
This paper proposes a second-order thermal model for electrical machines. The goal of this model is the prediction of the average winding temperature during short and long thermal transients up to the steady-state conditions. First, the thermal parameters of the electrical machine are determined by a DC test. Then, the proposed model is characterized and validated using AC tests. The accuracy of the proposed thermal model has been verified by comparing the computed temperatures with the measured ones. The maximum error found during the thermal transient is lower than 3%, an excellent result considering the complexity of a totally enclosed fan-cooled induction motor and the simplicity of the proposed model.
Introduction
Modern electric drive applications feature stressful operation profiles, characterized by repeated sequences of fast and short transients. This kind of operation makes it impossible to define the duty cycle in a classical way [1]. The most immediate example of these working conditions is the traction motor for e-Mobility. Due to the characteristics of the driving cycles [2], the e-Drives are subject to continuous accelerations and braking, thus making it difficult to evaluate the instantaneous thermal condition of the motor [3]. Moreover, as is well known, the components most sensitive to heat are the stator windings, due to the limited thermal performance of their insulating materials [4,5]; therefore, it is of utmost importance to develop suitable motor thermal models, which are capable of accurately predicting the instantaneous temperature of the stator winding. Classical thermal models are often intended for off-line thermal studies of electrical machines [6] and can provide accurate results for the electrical machine temperature distribution; however, these models are not feasible for real-time implementation inside the electric drive control system and require a certain degree of knowledge of the electrical machine geometry and parameters.
In general, the real-time implementation of these models on industrial microcontrollers sets the following requirements:
• Limited number of thermal elements (resistances and capacitances) in the circuit to be solved in real-time;
• Accuracy of the predicted stator winding temperature to avoid damaging of the machine;
• Definition of the thermal circuit parameters from experimental tests without detailed knowledge of the machine geometry and materials, as they are not often available to the electric drive manufacturer.
In the technical literature, several thermal models have been proposed for various types of electrical machines. In [7], a lumped-parameter model derived from geometrical data is proposed. Other lumped-parameter models are proposed in [8,9], and their parameters are tuned from nodal and computational fluid dynamics (CFD) simulations. A second-order thermal model is proposed in [10] for permanent magnet (PM) machines, but its parameters are determined by means of a reference set of temperatures and it lacks validation in short thermal transient tests. A computationally efficient lumped-parameter model is proposed in [11], which also takes into account the rotor thermal parameters. In this work, however, the thermal parameters are computed using an analytical tuning procedure and do not start from experimental measurements on the machine. Finally, Ref. [12] proposes a reduced-order thermal model, but its parameters are still based on simulations and the validation is performed only for the long transient. The goal of this paper is, therefore, to improve the second-order thermal model proposed for the first time in [13,14] and to highlight the measurement issues found during the thermal parameter determination. As will be shown in Figure 5, the second-order thermal model proposed in [13,14] lacks precision during the long transient (i.e., the time range before reaching the steady-state temperature). For this reason, to improve the performance of that model, the heat transfer due to the end winding has been considered, and this new second-order thermal model is deeply analyzed in this paper.
This paper is organized as follows. In Section 2, the basics of the adopted secondorder model are presented and described, as well as the necessary experimental tests to define its thermal parameters. Then in Section 3, the accuracy of the second-order model is discussed and validated on an experimental test bench. Section 4 presents the proposed modification of the basic second-order thermal model. Finally, some more considerations and the conclusions are drawn in Sections 5 and 6.
Second-Order Thermal Model
In the literature, several examples of first-order thermal models of electrical machines are available [15][16][17]; however, despite being simple and reliable, these models are only able to predict the winding temperature evolution during short thermal transients. To predict the temperature evolution of the other parts of the machine, a more advanced thermal model has to be defined. To maintain the compromise of a simple and yet reliable model, the second-order thermal model depicted in Figure 1 was first presented in [13]. The goal of this second-order thermal model is to predict the winding temperature both in short transients and in long transients up to the thermal steady-state conditions, considering the conduction and the forced convection heat transfer. This latter heat path is the most relevant in motors with forced ventilation such as totally enclosed fan-cooled (TEFC) induction motors. Moreover, the second-order thermal model shown in Figure 1 can be easily identified by means of simple experimental tests, as presented in [13]. In particular, the thermal parameters can be determined by performing two tests, each necessary to identify some of the thermal circuit parameters.
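A minimal numerical sketch of this two-node network is given below in Python. The element names follow those used in the next subsections (R eq,w and C eq,w for the winding, R eq,sr and C eq,sr for the stator lamination, frame and rotor, R fc for the forced convection path), but the interconnection coded below is only one possible reading of Figure 1, which is not reproduced in this text, and all numerical values are placeholders rather than data of the motor under test.

```python
# Sketch of the two-node lumped network of Figure 1 (one possible reading of its topology).
# Node 1: winding (T_w, C_eq_w), heated by the stator Joule losses P_Js.
# Node 2: stator lamination / frame / rotor (T_sr, C_eq_sr), heated by the other losses P_ol.
# Heat paths: R_eq_w (winding -> stator node), R_eq_sr (stator node -> ambient),
# R_fc (stator node -> ambient, forced convection, active only when the machine rotates).

def simulate(params, losses, T0=25.0, t_end=36000.0, dt=1.0, rotating=True):
    """Forward-Euler integration of the two-node thermal model.

    params: dict with R_eq_w, C_eq_w, R_eq_sr, C_eq_sr, R_fc (K/W and J/K)
    losses: function of time t returning (P_Js, P_ol) in watts
    Returns a list of (time, winding temperature, stator node temperature).
    """
    Tw = Tsr = T0
    t, history = 0.0, []
    while t <= t_end:
        P_Js, P_ol = losses(t)
        q_w_sr = (Tw - Tsr) / params["R_eq_w"]           # winding -> stator node heat flow
        q_sr_amb = (Tsr - T0) / params["R_eq_sr"]        # natural path to ambient
        if rotating:                                     # fan active only while rotating
            q_sr_amb += (Tsr - T0) / params["R_fc"]
        Tw += dt * (P_Js - q_w_sr) / params["C_eq_w"]
        Tsr += dt * (P_ol + q_w_sr - q_sr_amb) / params["C_eq_sr"]
        history.append((t, Tw, Tsr))
        t += dt
    return history

# Illustrative load profile similar to the validation test (50% then 100% of the rated torque);
# the loss and parameter values below are placeholders, not the data of the motor in Table 1.
def losses(t):
    return (60.0, 90.0) if t < 5 * 3600 else (120.0, 180.0)

params = {"R_eq_w": 0.05, "C_eq_w": 2500.0, "R_eq_sr": 0.25, "C_eq_sr": 40000.0, "R_fc": 0.06}
history = simulate(params, losses)
print(f"final winding temperature: {history[-1][1]:.1f} degC")
```

With thermal time constants of minutes to hours, a one-second forward-Euler step is more than adequate, which is also what keeps a model of this size compatible with real-time execution on an industrial microcontroller.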
DC Heating Test
In this first test, the machine phases were connected in series (see Figure 2) and supplied by a constant DC current set in the range of 40-50% of the nominal machine current. This connection ensures balanced Joule losses in each phase of the motor and thus uniform heating. The voltage V dc and current I dc must be measured and logged during the complete duration of this first test; however, thanks to the long thermal time constants involved, low sampling rates are possible. Since the machine is not rotating during this test, the second-order thermal circuit is simplified as shown in Figure 3. In fact, the forced cooling effect, modeled by R f c , is not present (machine in standstill conditions). The values of R eq,w and C eq,w , which represent the thermal parameters of the winding and its insulation system, can be considered constant during this first DC test. The product of R eq,w and C eq,w defines the winding thermal time constant. As already discussed in detail in [13], R eq,sr and C eq,sr can assume different values depending on the length of the thermal transient that the model has to predict. If the second-order thermal model has to be accurate up to the time required to reach the thermal steady-state, the value of C eq,sr will include the complete stator lamination, frame and rotor thermal capacitances. At the end of this first DC test, the R eq,w and C eq,w values are immediately obtained according to the short transient thermal test procedure described in [17]. In particular, the short transient is extracted from the first minutes of this test, calculating the electrical energy fed by the DC supply from the measured voltage and current. A complete and detailed description of the test procedures for the determination of the thermal parameters of Figure 1 can be found in [13,14], where the elaboration technique for the computation of the thermal parameters is also described.
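A plausible way to post-process the logged DC-test data is sketched below; it is not necessarily the exact elaboration technique of [13,14,17], but it follows the same idea: the average winding temperature is tracked through the winding resistance V dc /I dc , and a single-exponential fit of the first minutes of the test yields R eq,w and C eq,w under the assumption that the rest of the machine is still close to ambient temperature. The function names and the numbers in the example are illustrative only.

```python
# Plausible post-processing of the DC heating test (a sketch, not the exact procedure of [13,17]).
import numpy as np
from scipy.optimize import curve_fit

K_CU = 234.5  # copper constant used in the resistance/temperature relation

def winding_temperature(V_dc, I_dc, R_ref, T_ref):
    """Average winding temperature from the measured DC resistance of the series-connected phases."""
    R = np.asarray(V_dc) / np.asarray(I_dc)
    return (R / R_ref) * (K_CU + T_ref) - K_CU

def fit_short_transient(t, T_w, P_dc, T_amb):
    """Fit T_w(t) = T_amb + dT*(1 - exp(-t/tau)) over the first minutes of the test.

    Assumes the rest of the machine is still near ambient, so dT ~ P_dc * R_eq_w
    and tau ~ R_eq_w * C_eq_w.
    """
    model = lambda t, dT, tau: T_amb + dT * (1.0 - np.exp(-t / tau))
    (dT, tau), _ = curve_fit(model, t, T_w, p0=[20.0, 300.0])
    R_eq_w = dT / P_dc
    C_eq_w = tau / R_eq_w
    return R_eq_w, C_eq_w

# Example with synthetic data (placeholder numbers): 350 W of DC losses, 25 degC ambient.
t = np.arange(0.0, 600.0, 5.0)
T_meas = 25.0 + 18.0 * (1.0 - np.exp(-t / 280.0)) + np.random.normal(0.0, 0.1, t.size)
print(fit_short_transient(t, T_meas, P_dc=350.0, T_amb=25.0))
```

The helper winding_temperature is included because, in a DC test, the winding resistance itself is the most direct temperature indicator; the reference resistance R_ref at temperature T_ref must be measured before the test starts.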
AC Load Test
The determination of the thermal resistance due to the forced convection R f c (see the complete second-order thermal model of Figure 1) requires an AC load test, where the machine is rotating and loaded by an external mechanical load. The mechanical load applies a constant loading torque. A load torque in the range of 50-100% of the rated one is suggested to maximize the temperature increase. Once the machine has reached steady-state thermal conditions, it is necessary to measure the stator input (AC voltage and current), the winding temperature and the mechanical quantities at the motor shaft (torque and rotating speed). In fact, when the machine is in steady-state conditions, the thermal circuit of Figure 1 can be simplified as in Figure 4. Two loss contributions can be found and reconstructed. First, the heat source P Js , modeling the stator Joule losses, can be computed by (1), where R sT is the phase stator winding resistance at the measured stator operative temperature T s , reconstructed from the resistance at ambient temperature. The other losses of the machine (iron, rotor, etc.) are modeled by the heat source P ol , which can be computed as (2), where P el is the absorbed electrical power from the stator (√3 V s I s cos φ), T and ω are the mechanical torque and speed, and P mech are the mechanical losses not involved in the heat production inside the machine. In the considered totally enclosed fan-cooled (TEFC) motor, whose rated values are listed in Table 1, these mechanical losses are mainly related to the ventilation losses of the fan, positioned on the rear side of the machine. The bearing friction losses, always very small, can be neglected with respect to the ventilation ones. From the thermal circuit point of view, the mechanical losses P mech cannot be considered as thermal losses because they do not produce heat as the other loss contributions (Joule and iron losses) do. Instead, the ventilation losses, due to the axial fan, represent a load for the motor, increasing the requested torque. Consequently, these losses do not produce heat and must be separated from the power term P ol . It is important to correctly estimate the mechanical losses from the classical no-load test, as suggested by the International Standards for the determination of the electrical machine efficiency [1,18]. Finally, using the thermal resistances calculated previously in the DC test, R eq,w and R eq,sr , the thermal network of Figure 4 can be solved, obtaining the value of the forced convection resistance R f c .
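Since the display forms of (1) and (2) are not reproduced in this text, the sketch below uses plausible expressions consistent with the surrounding definitions (three stator phases with per-phase resistance R sT and current I s ), and it solves the steady-state network under one possible reading of Figure 4: the winding-to-stator temperature drop equals P Js R eq,w and the total heat P Js + P ol reaches the ambient through R eq,sr in parallel with R fc . All numerical values are illustrative and are not taken from Table 1.

```python
# Sketch of the AC load test post-processing (assumed forms of (1) and (2), illustrative numbers).
import math

def stator_joule_losses(R_sT, I_s):
    # Assumed form of (1): three phases, per-phase resistance at the operating temperature.
    return 3.0 * R_sT * I_s ** 2

def other_losses(V_s, I_s, cos_phi, torque, omega, P_Js, P_mech):
    # Assumed form of (2): absorbed power minus shaft power, stator Joule losses and the
    # ventilation (mechanical) losses that do not generate heat inside the machine.
    P_el = math.sqrt(3.0) * V_s * I_s * cos_phi
    return P_el - torque * omega - P_Js - P_mech

def forced_convection_resistance(T_w, T_amb, P_Js, P_ol, R_eq_w, R_eq_sr):
    # Steady state (one reading of Figure 4): the winding-to-stator drop is P_Js * R_eq_w,
    # and the parallel of R_eq_sr and R_fc carries the total heat to ambient.
    T_sr = T_w - P_Js * R_eq_w
    G_total = (P_Js + P_ol) / (T_sr - T_amb)   # total conductance to ambient, W/K
    G_fc = G_total - 1.0 / R_eq_sr             # what is left once the natural path is removed
    return 1.0 / G_fc

P_Js = stator_joule_losses(R_sT=0.45, I_s=14.0)
P_ol = other_losses(V_s=400.0, I_s=14.0, cos_phi=0.82, torque=49.0, omega=152.0,
                    P_Js=P_Js, P_mech=60.0)
print(forced_convection_resistance(T_w=92.0, T_amb=25.0, P_Js=P_Js, P_ol=P_ol,
                                   R_eq_w=0.05, R_eq_sr=0.25))
```

With these illustrative numbers the routine returns a forced-convection resistance of roughly 0.23 K/W, which only shows that the procedure yields a physically plausible value, not a figure of the motor under test.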
Second-Order Thermal Model Accuracy
The proposed thermal model reported in Figure 1 was implemented in MATLAB/Simulink for the simulation of the motor thermal transients. The thermal transient used for the accuracy evaluation of the proposed thermal model was defined as the combination of two torque step variations. The first torque step starts from a no-load condition at ambient temperature and goes up to 50% of the rated torque (the same torque used for the determination of the thermal resistance R f c ). When the motor reaches the steady-state temperature condition at 50% of the rated torque, a new torque step variation from 50% to 100% of the rated torque is applied to the motor. The transient load test was concluded when the motor reached the new thermal steady-state condition. During the transient load test, the electrical and mechanical quantities and the temperatures were acquired by means of the data recorder HBM Gen7. Since two thermal steady-state conditions had to be reached, the load test was 10 h long. Figure 5 shows the comparison between the measured and the computed winding temperature for the complete torque transient. The good agreement between the predicted and the measured temperature is evident. In the steady-state condition at 50% of the rated torque, the difference between the computed and the measured temperature is equal to 0.44 °C, with a percentage error equal to 0.67%. In the steady-state condition at 100% of the rated torque, the difference between the computed and the measured temperature is equal to 2.15 °C and the percentage error is equal to 1.81%. The better values obtained at 50% of the rated torque are an expected result, since 50% of the rated torque was the load condition used for the determination of the thermal resistance R f c . The accuracy of the second-order thermal model has been verified during the short transient as well. Figures 6 and 7 show the predicted and the measured winding temperature for the torque step variation from 0 to 50% and from 50% to 100%, respectively. Both figures put in evidence a delay of the measured temperatures with respect to the computed ones, as shown by the dashed ellipses.
This time delay can be justified by an intrinsic thermal time constant of the thermocouple used for the measurement of the winding temperature. Since the thermocouple is glued onto the winding, the glue (epoxy resin) plus the thermocouple itself have a thermal capacitance and thermal resistance that can introduce the observed time delay. This hypothesis is supported by the different initial derivatives of the measured temperature with respect to the computed one. Inside the ellipses, it is evident that the measured temperature starts with a horizontal trend (derivative equal to zero) while the predicted temperature starts with a positive derivative. In order to confirm the authors' hypothesis by means of an experimental approach, the load test was repeated using a specific instrument that can measure the winding resistance by injecting a very small DC current while the three-phase motor is running connected to the grid. The instrument used, shown in Figure 8, is an Elettrotest RHM 60A/2 [19]. It is important to underline that the use of the RHM 60A/2 is limited to a sinusoidal supply and it cannot be used with an inverter supply.
As shown in Figure 9, a 2-channel resistance meter has been used, with three inlets and three outlets for the power supply (for AC decoupling) and two measurement channels (RHM1 and RHM2), each made up of two injection (INJ) and reading (SENSE) terminals. The measured resistance is provided by the display on the front panel and by an analog signal on the rear panel. The analog signal was connected to the data logger HBM Gen4tb. In this way, the complete evolution of the winding resistance, and consequently its temperature transient, can be recorded. The results of the new load test are shown in Figures 10 and 11, where the excellent agreement between the temperatures measured using the RHM60A/2 and the model ones is evident for both short transients. These results validate the authors' hypothesis about the time delay introduced by the thermocouple. It is important to highlight that these results show that the use of a thermal sensor during temperature transients has to be properly assessed, taking into account its time constant. On the basis of the previous considerations, the temperatures measured using the RHM60A/2 have been considered as the reference ones. Figure 12 shows the predicted temperatures and those measured by the RHM60A/2. The good agreement between the two curves is evident, confirming that the proposed second-order thermal model well predicts the winding temperature both in short and in long thermal transients.
Second-Order Thermal Model Improvement
As discussed in Section 3, the second-order thermal model shows good performance both in short transients and in steady-state conditions; however, Figure 12 shows discrepancies during the long transients. The maximum percentage error is around 5% of the measured temperature in the time interval from 5 to 6 h. Consequently, a revision of the initially proposed model has been considered in order to improve the accuracy of the second-order thermal model. As discussed in Section 2, the thermal resistance R eq,w takes into account the conduction heat transfer between the stator winding and the lamination; however, the stator winding copper can be divided into two sections. The first one is the copper inside the slots (defined as active conductors) and the second one is the copper of the end winding. The Joule losses due to the active conductors and the end winding can be separated taking into account the geometrical dimensions of the stator stack shown in Figure 13, where D is the average diameter corresponding to the stator slots and L is the stator stack length. The resistance of a stator winding coil R s,coil is proportional to its length, given by the length of the active conductors and of the coil external connections L c,ec . In a first approximation it is possible to write (3), where N p is the number of poles. Defining the shape factor K s as L/D, the ratio α between the stator end winding resistance R s,ew and the coil resistance can be considered proportional to (4). As a consequence, the ratio between the stator end winding Joule losses P ew and the total stator Joule losses P js can be considered proportional to the coefficient α, as expressed by (5). The stator end winding losses P ew and the slot copper losses P slot can then be written as in (6). The variation of α versus K s is reported in Figure 14 for different values of the pole number N p .

From the thermal point of view, the copper Joule losses of the slot P slot move from the slots to the ambient through the stator lamination, as shown by the orange arrows in Figure 15. Due to the fins always present in the rotor short-circuit rings, the end winding losses move directly from the end winding copper to the frame due to the forced convection effect, as shown by the red arrows of the same figure. The previously discussed stator Joule losses due to the end winding have been included in an improved second-order thermal model reported in Figure 16. A thermal resistance R ew,a that takes into account the forced convection heat exchange of the end winding has been added between the heat source P js and the ambient temperature T 0 . The new thermal network is still a second-order thermal model and all the thermal parameters can be measured and computed in an easy way. If the value of K s is known (from the stator lamination geometrical data), the value of α can be computed by (4) and the values of P ew and P slot can be computed by (6). Using the values measured in steady-state conditions, the value of R ew,a can be computed by (7).

It is important to underline that the value of the thermal resistance R eq,w can be considered constant, because it takes into account the conduction heat transfer and its value has been obtained using the values measured during the DC test. During the DC test the rotor speed is equal to zero and therefore there is no forced convection heat exchange for the end winding. As a consequence, during the DC test, the thermal resistance R ew,a can be considered equal to infinity and P ew equal to zero.
In other words, during the DC test both the losses P ew and P slot are transferred to the ambient by the stator lamination only. On the basis of these considerations, the computed value of the thermal resistance R eq,w can be considered constant.
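The loss split and the steady-state identification of R ew,a described above can be summarised in a few lines. In the sketch below, P ew = α·P js and P slot = (1 − α)·P js follow directly from the text, whereas the identification R ew,a ≈ (T w,ss − T 0)/P ew is only a plausible reading of the omitted equation and should be treated as an assumption; all numerical values are hypothetical.

def split_joule_losses(P_js, alpha):
    """Split total stator Joule losses P_js into end-winding and slot parts (P_ew = alpha * P_js)."""
    P_ew = alpha * P_js
    P_slot = (1.0 - alpha) * P_js
    return P_ew, P_slot

def estimate_R_ew_a(T_w_ss, T_0, P_ew):
    """Plausible steady-state estimate of R_ew,a as (T_w,ss - T_0) / P_ew (assumption)."""
    return (T_w_ss - T_0) / P_ew

# Hypothetical numbers for illustration only.
P_ew, P_slot = split_joule_losses(P_js=600.0, alpha=0.5)
print(P_ew, P_slot)                                       # 300.0 300.0
print(estimate_R_ew_a(T_w_ss=95.0, T_0=25.0, P_ew=P_ew))  # ~0.233 K/W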
The temperature transients with the new thermal model have been computed using a value of α equal to 0.5, obtained from the design data of the motor under test.
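The paper does not report how the transients are integrated numerically. The following sketch shows a forward-Euler integration of one plausible reading of the improved network: the winding node (T w, C eq,w) receives P js and is coupled to a stator/rotor node (T sr, C eq,sr) through R eq,w; R ew,a shunts heat from the winding node directly to ambient; and a single lumped resistance stands for the lamination-to-ambient path (possibly combining R eq,sr and R fc). The topology of that last path, all parameter values and the function names are assumptions, not the authors' model data.

def simulate_transient(P_js, P_ol, T0, hours, dt=1.0,
                       R_eq_w=0.02, R_ew_a=0.15, R_sr_a=0.05,
                       C_eq_w=4.0e3, C_eq_sr=40.0e3):
    """Return lists of time [s] and winding temperature T_w [degC], forward-Euler integration."""
    T_w = T_sr = T0
    times, temps = [0.0], [T_w]
    steps = int(hours * 3600 / dt)
    for k in range(1, steps + 1):
        q_w_sr = (T_w - T_sr) / R_eq_w      # heat flow winding -> lamination [W]
        q_w_amb = (T_w - T0) / R_ew_a       # end-winding forced convection to ambient [W]
        q_sr_amb = (T_sr - T0) / R_sr_a     # lamination/rotor to ambient [W]
        T_w = T_w + dt * (P_js - q_w_sr - q_w_amb) / C_eq_w
        T_sr = T_sr + dt * (P_ol + q_w_sr - q_sr_amb) / C_eq_sr
        times.append(k * dt)
        temps.append(T_w)
    return times, temps

# Hypothetical usage: constant losses applied from ambient temperature for 6 h.
t, Tw = simulate_transient(P_js=600.0, P_ol=900.0, T0=25.0, hours=6.0)
print(f"winding temperature after 6 h: {Tw[-1]:.1f} degC")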
Comparing the results obtained using the original model (Figure 12) with the improved ones (Figure 17), it is evident that the temperatures computed with the improved model fit the measured ones better, even if a discrepancy of 2.1 °C (1.61% error) in the steady-state condition at 100% torque can be seen. The response of the improved second-order thermal model is reported in Figures 18 and 19 for the short transients from 0 to 50% and from 50% to 100% of the rated torque, respectively. The agreement between the measured and the predicted temperatures is good in both transients. In particular, comparing the results shown in Figures 11 and 19 for the 50% to 100% transient, the better performance of the improved model with respect to the original one is evident. Figure 18. Comparison between measured (RHM60A/2) and computed winding temperatures during the short transient (0% to 50% of the rated torque) using the improved second-order thermal model.
The results obtained with the original model [13,14] and the proposed improved one are highlighted in Figure 20, where the two models have been simulated and compared with the experimental measurements during both thermal transients (50% and 100% of the rated torque). Moreover, to further show the improvement of the proposed model during the thermal transient, a magnification of Figure 20 is available in Figure 21, with a focus on the second transient (50% to 100% of the rated torque). The improvement is also quantified in Figure 22, where the relative temperature estimation error is displayed for the original and the improved model. It can be clearly seen that the percentage error during the transient is roughly halved using the proposed model, at the cost of a slightly larger error at steady state. Finally, the thermal parameters of the original model and the improved one are listed in Table 2. Figure 20. Comparison between measured (RHM60A/2) and computed winding temperatures during the whole test using the original [13,14] and the improved second-order thermal model. Figure 21. Comparison between measured (RHM60A/2) and computed winding temperatures during the long transient (50% to 100% of the rated torque) using the original [13,14] and the improved second-order thermal model. Figure 22. Relative temperature estimation error during the thermal test of the original [13,14] and the improved second-order thermal model.
Considerations for a Correct Use of the Model
The results discussed in this paper have shown that a second-order model can represent a viable solution for the thermal prediction of an electrical machine over a time interval that starts from a short transient, where only the stator winding is involved, up to the thermal steady-state condition. Two second-order thermal models have been compared. The first one does not include the forced convection heat exchange at the end winding and shows an excellent accuracy in predicting the steady-state temperature but a worse fitting accuracy during the long temperature transient. The improved model allows an excellent temperature fitting during the long transient but has a worse accuracy in the prediction of the steady-state temperature. In any case, the maximum temperature errors found for both models are lower than 2%, and the authors consider this an excellent result taking into account how complex the thermal system is and how simple the proposed models are. Even though the proposed second-order thermal models have been calibrated and validated on a TEFC induction motor, they can be used for all electrical machines with distributed windings, taking into account the following considerations.
Electrical machines with natural convection or constant fluid cooling: The proposed models can be used after excluding the thermal resistance R f c , while R ew,a has to be retained for induction motors because the ventilation fins are present on the short-circuit rings only.
Electrical machines with separated/assisted ventilation (separated/assisted forced convection cooling): The thermal models can be used as they are. R f c is constant because the forced convection cooling is constant and does not depend on the rotor speed.
Electrical machines with self ventilation (self-forced convection cooling, i.e., TEFC motors): The models can be used as they are, but R f c and R ew,a will depend on the rotor speed. In order to evaluate the variation of the two thermal resistances with the motor speed, load tests at different supply frequencies have been performed. Since a reduction in speed corresponds to a reduction in the cooling air speed, the load torque has been reduced to avoid a winding over-temperature. The obtained values are reported in Table 3, where the increase in the thermal resistance values with the reduction in frequency is well evident. Figure 23 shows the variation of the thermal resistances R f c and R ew,a with the supply frequency. Even if it is possible to find an equation correlating the values, such an equation cannot be considered of general validity. Consequently, the values have to be measured motor by motor. As a final consideration, the authors have used a TEFC induction motor for the definition of the second-order thermal model, well aware that this motor type is the most complex from the thermal point of view (forced convection on the frame, rotor Joule losses in the cage). The good results obtained on the TEFC induction motor therefore indicate that the proposed thermal model can be extended to other motor typologies.
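In practice, the per-motor measured values of R f c and R ew,a at a few supply frequencies can simply be interpolated for intermediate operating points. The sketch below uses linear interpolation; the frequency and resistance values are placeholders, not the values of Table 3.

import numpy as np

freq_meas = np.array([25.0, 35.0, 50.0])       # measured supply frequencies [Hz] (hypothetical)
R_fc_meas = np.array([0.090, 0.070, 0.050])    # measured R_fc [K/W] (hypothetical)
R_ewa_meas = np.array([0.260, 0.200, 0.150])   # measured R_ew,a [K/W] (hypothetical)

def thermal_resistances(freq_hz):
    """Linearly interpolate R_fc and R_ew,a at an intermediate supply frequency for this motor."""
    return (np.interp(freq_hz, freq_meas, R_fc_meas),
            np.interp(freq_hz, freq_meas, R_ewa_meas))

print(thermal_resistances(40.0))   # e.g. (0.0633, 0.1833) with the placeholder values above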
Conclusions
In this paper, a second-order thermal model is presented and experimentally validated. The thermal tests necessary for determining the thermal parameters are analyzed and the procedure to determine them is discussed. Starting from an original second-order thermal model that does not include the cooling effect of the end winding, an improved model that includes this cooling effect has been derived and discussed. The advantage of this more accurate model is a higher accuracy during the long transient (i.e., the time interval before reaching the steady-state temperature) at the cost of a slightly lower accuracy at steady state.
The comparison between the measured and computed winding temperatures demonstrates that the proposed second-order thermal models are a simple and accurate approach for predicting the stator winding temperature, with an error always lower than 3% in short transients, long transients and steady-state conditions.
Funding: This research received no external funding.
Conflicts of Interest:
The authors declare no conflict of interest.
Glossary
The following symbols are used in this manuscript:
P Js Stator copper losses
P ol Other machine losses (iron, rotor Joule losses)
P JsT Stator Joule losses at the temperature T
R eq,w Equivalent thermal resistance between the winding copper and the stator lamination
C eq,w Equivalent thermal capacitance of the winding, including copper and insulating material
R eq,sr Equivalent thermal resistance between the stator lamination and the ambient
C eq,sr Thermal capacitance of the stator and the rotor, not including the stator winding capacitance
R f c Thermal resistance due to forced convection
R ew,a Thermal resistance due to forced convection at the end winding
T w Stator winding temperature
T 0 Ambient temperature
T sr Temperature of the thermal capacitance C eq,sr
W Winding stored thermal energy
∆T Winding temperature rise
I dc (t) DC current sample at the time t
V dc (t) DC voltage sample at the time t
P slot DC Joule losses in the conductors inside the slots
P ew Joule losses in the end winding | 2021-10-16T15:12:03.897Z | 2021-10-13T00:00:00.000 | {
"year": 2021,
"sha1": "bd7df2be269133f711afdd88fd3a8f909968d86e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/14/20/6578/pdf?version=1634115697",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "7b8093c27ea73fbe35865fdc7dfef5aa7e65e1e0",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
13681347 | pes2o/s2orc | v3-fos-license | Derivative chromosomes involving 5p large rearranged segments went unnoticed with the use of conventional cytogenetics
Background In countries where comparative genomic hybridization arrays (aCGH) and next generation sequencing are not widely available due to accessibility and economic constraints, conventional 400–500-band karyotyping is the first-line choice for the etiological diagnosis of patients with congenital malformations and intellectual disability. Conventional karyotype analysis can rule out chromosomal alterations greater than 10 Mb. However, some large structural abnormalities, such as derivative chromosomes, may go undetected when the analysis is performed at less than a 550-band resolution and the size and banding pattern of the interchanged segments are similar. Derivatives frequently originate from inter-chromosomal exchanges and sometimes are inherited from a parent who carries a reciprocal translocation. Case presentation We present two cases with derivative chromosomes involving a 9.1 Mb 5p deletion/14.8 Mb 10p duplication in the first patient and a 19.9 Mb 5p deletion/ 18.5 Mb 9p duplication in the second patient. These long chromosomal imbalances were ascertained by aCGH but not by conventional cytogenetics. Both patients presented with a deletion of the Cri du chat syndrome region and a duplication of another genomic region. Each patient had a unique clinical picture, and although they presented some features of Cri du chat syndrome, the phenotype did not conclusively point towards this diagnosis, although a chromosomopathy was suspected. Conclusions These cases highlight the fundamental role of the clinical suspicion in guiding the approach for the etiological diagnosis of patients. Molecular cytogenetics techniques, such as aCGH, should be considered when the clinician suspects the presence of a chromosomal imbalance in spite of a normal karyotype.
Background
In Mexico and other developing countries, the genetic approach for patients with intellectual disability (ID) and congenital malformations (CM) uses conventional G-banded karyotyping as the first-choice diagnostic test. It is usually performed at a 500-band level that allows the detection of 5-10 Mb abnormalities and results in an etiological diagnosis in approximately 3-10% of patients with a suspected chromosomopathy [1][2][3]. However, some structurally abnormal chromosomes with rearrangements larger than 5 Mb may go unnoticed by conventional cytogenetics. This may occur with derivative chromosomes, which are unbalanced intra-or inter-chromosomal rearrangements, in which the exchanged segments share a similar size and banding pattern, making them difficult to identify by conventional karyotyping [3,4].
When a patient presents with a derivative chromosome, phenotypic evaluation and chromosome analysis of the parents are mandatory to rule out the presence of a balanced translocation in one of them [1]. In fact, 70% of derivatives are inherited and this information has a significant impact on genetic counseling [5,6]. Comparative Genomic Hybridization arrays (aCGH) may uncover this type of abnormalities, because the finding of a combined deletion/duplication in the same patient points towards the presence of a derivative chromosome. Here, we describe the cytogenetic and clinical findings of two patients in whom a clinical phenotype consisting of ID and CM prompted the completion of aCGH despite having a normal 450-band karyotype. The two patients presented here, were ascertained through aCGH during the study of a cohort of 152 patients that presented with ID or CM and a normal conventional karyotype (manuscript in preparation).
Genomic DNA from the two patients and their parents was amplified and labeled using the CGH-labeling kit for oligo arrays (Enzo Life Sciences, New York, USA) and then analyzed with 60 K oligonucleotide arrays according to the manufacturer's protocol (Agilent, Santa Clara, USA). The slides were scanned using a microarray scanner with Surescan High Resolution Technology (Agilent, Santa Clara, USA). Image quantification, array quality control and aberration detection were performed using the Agilent Feature Extraction and DNA Analytics software (Agilent, Santa Clara, USA) according to the manufacturer's instructions. Changes identified in the samples were visualized using the UCSC Genome Browser online tool (http://genome.ucsc.edu) and were compared to the Database of Genomic Variants (http://projects.tcag.ca/variation) to exclude copy number changes considered to be benign variants. The DECIPHER (Database of Chromosomal Imbalance and Phenotype in Humans using Ensembl Resources) (https://decipher.sanger.ac.uk/) and ECARUCA (European Cytogeneticists Association Register of Unbalanced Chromosome Aberrations) (http://umcecaruca01.extern.umcn.nl:8080/ecaruca/ecaruca.jsp) databases were used to assist with the genotype-phenotype correlation.
The rearrangements were validated using Fluorescence in situ hybridization (FISH) probes (Sure-FISH, Agilent, Santa Clara, USA). Molecular cytogenetic techniques, and GTG-banded karyotypes were performed on patients and their first-degree relatives to establish the origin (inherited or sporadic) of the chromosomal rearrangement and provide appropriate genetic counseling.
Case presentation
Both patients were males, born to non-consanguineous parents. There was no family history of congenital diseases, intellectual disability, autism, seizures, neurological disorders, metabolic diseases, infertility or recurrent pregnancy loss. Physical examination revealed that both patients had weight, height and head circumference below the 5th percentile. An informed consent letter for each patient was obtained from the parents.
Patient 1
An eight-year-old boy, born from the fourth pregnancy of a 33-year-old mother and a 40-year-old father; he has 3 healthy sisters. During pregnancy, decreased fetal movements were noted, and two ultrasound studies (USG) were reported as normal. He was delivered by cesarean section due to breech presentation at 37 weeks of gestation. He weighed 2800 g (between the 10th and 50th percentile), his height was 48 cm (50th percentile), and he received an Apgar score of 7/8. He required hospitalization for hypoglycemia and seizures. Delayed psychomotor development was noted at 9 months of age. Physical examination at the age of 3 years showed that the patient had posterior flattening of the skull, carp-shaped mouth (downturned corners of mouth), low-set ears, short neck, appendicular hypotonia, hands with prominent finger pads and multiple palmar creases, and a non-palpable right testicle. The computed tomography (CT) and magnetic resonance imaging (MRI) of the brain were normal. He also had brainstem auditory evoked potentials (BAEP), with bilateral severe hearing loss. USG of the gonads reported both testes at the proximal third of the inguinal canal. The conventional karyotype at 450-band resolution was reported as normal; however, aCGH demonstrated the presence of a 9.1 Mb deletion of chromosome 5 and a 14.8 Mb duplication of chromosome 10 [Fig. 1a-e]. FISH analysis showed a normal pattern of the probes in the father's sample, while a balanced translocation was observed in the mother and in two of his sisters. Following the 2016 International System for Human Cytogenomic Nomenclature (ISCN 2016) [7], the patient's karyotype was 46,XY,der(5)t(5;10)(p15.2;p13)mat. ish der(5)t(5;10)(p15.2;p13)(wcp5+,wcp10+). arr[GRCh37/hg19] 5p15.33p15.2(151737_9215425)×1, 10p15.3p13(148,206_14,869,993)×3 mat.
Patient 2
A seven-year-old boy, the first child of 30-year-old parents. Two USGs were performed during the second trimester of pregnancy and intrauterine growth retardation was detected. He was delivered at term by vaginal childbirth with a weight of 2175 g (below the 3rd percentile). Birth height and Apgar score are unknown. He did not breathe spontaneously and required supplemental oxygen and hospitalization for 20 days. At the age of 8 months he was diagnosed with developmental delay. Physical examination revealed disproportion at the craniofacial-body level, round face, telecanthus, epicanthus, ears with posterior rotation, prominent helix and antihelix, short and wide philtrum, downturned corners of the mouth, micrognathia, heart murmur, abdomen with gastrostomy catheter, axial hypertonia, upper limbs with hypoplastic nails and bilateral clinodactyly of the 5th fingers, bilateral transverse palmar creases, and male genitalia but neither testis was palpable. Renal ultrasound reported left renal ectopia and crossed fused renal ectopia; BAEP indicated severe right hearing loss and moderate left hearing loss. The conventional GTG karyotype was reported to be normal, but aCGH showed a 19.9 Mb deletion of chromosome 5 and an 18.5 Mb duplication of chromosome 9 [Fig. 2a-e]. The patient's karyotype was defined following ISCN 2016 [7] as: 46,XY,der(5)t(5;9)(p14.3;p22.1). ish der(5)t(5;9)(p14.
In Fig. 2d, it is evident that the derivative chromosome 5 shows a very similar G-band pattern than the normal chromosome 5, which explains why it was not detected in the conventional karyotype despite the large segment involved in the rearrangement. FISH with the specific probes did not show a balanced chromosomal rearrangement in either parent.
Discussion
The prevalence of derivative chromosomes is unknown, although the estimated frequency of balanced reciprocal translocations ranges from 1 in 500 to 1 in 1000 in live births [4,6]. The carriers of a balanced rearrangement may have decreased fertility, miscarriages and children with ID and CM. All of these are consequences of pregnancies with genomic imbalances due to the fertilization with a gamete that received the derivative chromosome during meiotic segregation [8][9][10].
Genotype-phenotype correlation
We describe two patients with derivative chromosomes that despite having large chromosomal imbalances were not identified by conventional karyotyping. Furthermore, even though these rearrangements resulted in a 5p deletion, the patients did not present a phenotype in which the Cri-du-chat syndrome was readily recognized. The first patient's karyotype is 46,XY,der(5)t(5;10)(p15.2;p13) mat carrying a chromosome 5 derivative that arose from a t(5;10)(p15.2;p13). The derivative chromosome results in a partial monosomy of 5p15.2 → pter, producing haploinsufficiency of the SDHA (5p15.33) and SEMA5A (5p15.31) genes that have been related to psychomotor retardation, microcephaly, pachygyria and microgyria. All these symptoms were present in our patient [11]. The der(5)t(5;10) (p15.2;p13) also results in partial chromosome 10 trisomy 10p13 → pter. This region contains the AKR1C3 gene (10p15.1) that is involved in male gonadal development [12] although its overexpression is yet to be related to an altered function. Thus, it is possible that the cryptorchidism found in this patient is a phenotype due to the 5p deletion [13]. In addition, the GATA3 gene (10p14) is involved in ear development, and abnormalities at this level have been linked to ear defects [14]. Our patient shares ID, growth delay, microcephaly, low-set ears, downturned corners of mouth, cryptorchidism and abnormal palmar creases with previously reported 5p deletion patients [13]. Patients with the 10p duplication present with ID, microcephaly, low-set ears, hearing impairment, cryptorchidism and abnormal palmar creases [15,16].
The second patient also has a chromosome 5 derivative with a karyotype 46,XY,der(5)t(5;9)(p14.3;p22.1)dn leading to a partial monosomy of 5p14.3 → pter. This karyotype explains why this patient shared clinical manifestations with Cri du chat syndrome, such as microcephaly, round face, epicanthus, telecanthus, downturned corners of mouth and dysplastic pinnae [17]. This derivative also results in partial trisomy of 9p22.1 → pter and the patient presents with clinical manifestations that have been previously reported in patients with a 9p duplication. These include ID, growth delay, microcephaly, hypertelorism, epicanthic folds, low-set ears, transverse palmar fold, short stature, hearing loss, and hypoplastic nails [18,19].
Cri du chat syndrome due to a translocation is rare (10-15%) [13] and the typical phenotype of this syndrome also depends on the extent and composition of the trisomic region involved. Both our patients have some clinical manifestations of Cri du chat syndrome, which was not initially suspected because the phenotype was modified by the combination of the trisomic segments and the deletion. Therefore, we reviewed the literature and compared our patients' phenotype with the clinical manifestations of classical Cri du chat syndrome, and with those of dup(10)(p13p15.3) and dup(9)(p22.1p22.3) (Table 1). We found that, when the case is not detected neonatally, the classical phenotype is masked by the clinical manifestations of the trisomic regions, and the deletion of the Cri du chat region is detected only when aCGH is done [20][21][22][23]. It is worth mentioning that we could not find other cases reported in the literature in which partial trisomies 10p13 → p15.3 or 9p22.1 → p22.3 coexisted with deletion 5p14.3 → pter or 5p15.2 → pter.
Genetic counseling
We were able to offer precise genetic counseling to these two families regarding the origin of these derivative chromosomes. It has been reported that 62% of rearrangements are paternal in origin, and only 38% are maternal [24]. However, when we analyzed the parents and siblings of patient 1 with specific FISH probes, we found that his mother and two of his sisters were carriers of a balanced translocation. This puts them at an increased risk of having offspring with ID and CM, as well as of recurrent abortions, due to the generation of a gamete with an unbalanced rearrangement. In this family with identified carriers, knowing the breakpoints of the chromosomes involved in the rearrangement allowed us to build a pachytene cross (Fig. 3) and determine the percentages of possible normal, balanced, and unbalanced gametes. In addition to complementing the medical history, it was possible to anticipate the viability of the unbalanced products and the products of the 2:2 segregation [8]. In this family, the theoretical risk of having a healthy child is 2 out of 6 (33%), with the healthy child receiving either the normal chromosomes or the balanced translocation (Fig. 3), and the empiric risk is 2 out of 4 (50%), considering only the viable products.
Table 1 Comparison of the phenotype of patients 1 and 2 vs Cri du chat syndrome and clinical manifestations in patients with trisomy 10p or 9p
Genetic counseling for the second family is different because neither parent is a carrier. Thus, the theoretical risk of recurrence is zero. However, we cannot rule out the presence of germline mosaicism in one of the parents or a non-paternity situation.
The use of molecular cytogenetics methodology should be mandatory in patients with intellectual disability and multiple congenital malformations
Currently, it is well accepted that conventional karyotyping is an excellent and inexpensive methodology for detecting aneuploidies, low-level mosaicism and large -more than 10 Mb-rearrangements; however, detection of chromosomal structural abnormalities largely depends on the skills and experience of the cytogeneticist and yet, some alterations may go unnoticed [25]. To improve the detection of chromosomal abnormalities, molecular methodologies such as FISH or Multiplex Ligation-dependent Probe Amplification (MLPA) and chromosomal microarrays have been used [1]. Several groups have highlighted the economical and medical convenience of using microarrays as a first-line diagnostic test for detecting constitutional genomic imbalances [26][27][28][29][30][31][32][33]. The high cost of microarrays keeps these methodologies from being the first-line diagnostic tests in some developing countries, forcing a careful selection of the patients in whom such studies are performed. Often, the few available microarrays are used in patients in whom a submicroscopic alteration is suspected, overlooking other patients who could also benefit from these types of analyses. Our cases support the benefit of offering microarray analysis to patients with a more severe phenotype such as ID and developmental delay, dysmorphic features or multiple CM, and when the clinical picture does not point toward a specific target region of the genome. A high rate of detection of unbalanced rearrangements using microarray methodology has been observed in patients with these referral indications [32]. It should be noted that microarrays can only detect unbalanced genomic regions, while the chromosomal locations of the duplicated or deleted material cannot be defined. In order to do so, the analysis must be complemented with the use of FISH to localize the segments in the karyotype.
Conclusions
The phenotype of a patient with a derivative chromosome is unique, since it combines the clinical features of both the partial deletion and the partial duplication.
Noticeably, only when the clinical geneticist makes a detailed phenotype-genotype correlation does it become evident that the patient has clinical manifestations of a specific syndrome, such as Cri du chat syndrome in this study. The typical gestalt is modified by non-classical manifestations resulting from the new genome created by the chromosomal rearrangement, and the clinical diagnosis is not apparent.
Patients presented here are clear examples in which large chromosomal rearrangements go unnoticed by experienced cytogeneticists when using conventional cytogenetics. These cases highlight the importance of performing complementary analyses in patients with developmental delay or ID associated with CM using molecular cytogenetics techniques. The clinician's suspicion of a chromosomal etiology for the patient's condition, despite a normal conventional karyotype, is fundamental to support the need and demand the funding to perform further molecular cytogenetic testing that could positively impact the patient's diagnosis, and to provide information regarding the biology of the disease. | 2018-05-09T13:24:39.713Z | 2018-05-09T00:00:00.000 | {
"year": 2018,
"sha1": "4f8213dc58c9a4549a343d5d0bc6508f09f8315c",
"oa_license": "CCBY",
"oa_url": "https://molecularcytogenetics.biomedcentral.com/track/pdf/10.1186/s13039-018-0374-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "86d94e91c768e6f5bfaf92413cac22493b9c06a1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
213843283 | pes2o/s2orc | v3-fos-license | Implementation of OPTIKU pocket book based Android for enhancing problem solving ability
Problem-solving ability is one of the abilities that must be possessed by students in the 21st century. Field studies show that some students have low problem-solving abilities. This study aims to determine the feasibility of the learning, the improvement of problem-solving abilities, and the differences in students' problem-solving abilities between classes using Android-based pocketbook learning media and classes that do not use them, for the optical instrument concept. The method used in the study was a quasi-experiment with a non-equivalent control group design. The sample was selected using a purposive sampling technique. The results showed that the implementation of the learning was classified in the good category and that there were differences in the problem-solving abilities of the two classes of students, since the N-gain of the experimental class was 0.63 and that of the control class was 0.53. This shows that Android-based pocketbook learning media can improve students' problem-solving abilities in the optical instrument concept.
Introduction
Technology is growing rapidly in the 21st century, so science and technology are important foundations in nation building. The 21st century requires students to have high-level abilities; it is hoped that students can associate the learning outcomes that they obtain with everyday life, so they can solve problems in everyday life. One of the high-level abilities needed in the 21st century is the ability to solve problems [1]. On the other hand, [2] adds that the factors that influence problem-solving ability are the knowledge structures possessed by students to solve problems and the characteristics of the problems. Therefore, in the learning process it is necessary to train students' problem-solving skills so that they are able to adapt to the needs of the 21st century.
Learning cannot be separated from the process of transferring information, so media are needed for the delivery process. Books are one solution for delivering this information. However, books tend to be easily damaged, so they are less efficient [3]. One of the requirements for good learning media is that they must be in accordance with the characteristics of the students and with the concepts to be delivered. The use of appropriate learning media will increase students' interest in learning. The learning process becomes easier and more interesting, so students feel more comfortable and the lesson does not seem boring or monotonous [4].
Today people, including students, cannot escape technology. At present, students spend more time with cell phones or Android devices than with textbooks. Quite a few students rarely open textbooks; most students prefer to read information from digital media in the form of computers or cell phones [5].
Supported by this, the researchers implemented an Android-based pocket book as a learning medium in physics learning. Pocket books were selected as learning media because Android pocket books tend to be more accessible and more practical than textbooks. [6] states that pocket book learning media can increase students' learning motivation. Based on the results of previous studies, [7] states that, according to media experts, physics pocket books obtain good ratings and are very feasible to use as learning media, with feasibility percentages of 88.44% and 91.11%.
In [8], the results showed higher learning outcomes for students using pocket books than for those not using them.
[9] also mentions that using pocket book learning media leads to an increase in students' cognitive abilities both in the experimental class and in the control class. In addition, Android-based pocket book learning media are very suitable for the concept of optical instruments, because this concept contains several types of problems that can be solved with the help of such media, such as image formation by a lens, which requires a concrete picture so that students do not have to rely on imagination alone. Optical instruments are also closely related to everyday life, which helps students to improve their problem-solving skills [10].
Experimental method
The research method used was a quasi-experimental method with a non-equivalent control group design. It was conducted by comparing an experimental and a control group, each consisting of 30 students. Both groups were given a pre-test, then given treatment, and finally a post-test. The research design can be seen in table 1.
Table 1. Research design (column headings: Group, Before, Treatment, After).
The sample was selected using a purposive sampling technique; the considerations for sampling were the availability of the learning media to be used (Android devices) in the experimental class, and the fact that the ability level of students in the experimental class was more homogeneous than in other classes. The test instrument used in this study was a test of problem-solving ability in the form of essays. This test was given before and after treatment, with the goal of determining the improvement in students' problem-solving abilities. To quantify the improvement in problem-solving skill, we used the normalized gain equation.
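The normalized gain equation itself is not written out above. The sketch below assumes the standard Hake formulation, g = (post − pre)/(max − pre) with a maximum score of 100, which reproduces the N-gain values reported later; the function names are illustrative.

def normalized_gain(pre, post, max_score=100.0):
    """Hake normalized gain <g> = (post - pre) / (max_score - pre)."""
    return (post - pre) / (max_score - pre)

def gain_category(g):
    """Common interpretation: low < 0.3 <= medium < 0.7 <= high."""
    if g < 0.3:
        return "low"
    return "medium" if g < 0.7 else "high"

g_exp = normalized_gain(26.84, 73.07)    # experimental class averages reported in table 3
g_ctrl = normalized_gain(27.9, 65.8)     # control class averages reported in table 3
print(round(g_exp, 2), gain_category(g_exp))    # 0.63 medium
print(round(g_ctrl, 2), gain_category(g_ctrl))  # 0.53 medium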
Result and discussion
Based on the research that has been done, data have been obtained in the form of media validation results, observation results of the implementation of the learning process, and results of the pre-test and post-test. Before the Android-based pocket book learning media were used with students, a validation process needed to be carried out, because good media are verified [12]; the verification process covered validation and reliability. The pocket book was first validated by three experts: a media expert, a content expert and a physics teacher. The media expert validation gave a value of 82.67, which is in the good category, while the content expert gave a value of 64.8, which is in the medium category, and the physics teacher gave a value of 88.71, which is in the good category. Based on the validation results, it can be concluded that the Android-based pocket book learning media are worthy of use as learning media.
The increase in students' problem-solving skill is determined from the increase between the students' pre-test and post-test scores. The pre-test is conducted before giving treatment as a test of initial skill. The post-test is conducted after giving treatment as a test of final skill. The analysis of the pre-test, post-test and N-gain data of the experimental and control classes for the optical instruments material is presented in table 3. Table 3 can be presented as a graph in the figure below: Figure 1. Enhancing of student problem solving ability.
Based on table 3 and figure 1, an increase in problem-solving skill was obtained for both the experimental and control classes. The experimental class shows a pre-test average of 26.84 and a post-test average of 73.07, corresponding to an N-gain of 0.63, which is in the medium category. The control class shows a pre-test average of 27.9 and a post-test average of 65.8, corresponding to an N-gain of 0.53, which is also in the medium category. Analysed by sub-indicators of problem-solving ability, the highest N-gain value was found for the useful concept description sub-indicator. These results show that the Android-based pocket book can help students in the learning process and can also increase their cognitive skill.
Conclusion
The difference in students' problem-solving skill with and without Android-based pocket book learning media for the optical instruments material can be determined from the N-gain of both classes. The class using the Android-based pocket book obtained an N-gain of 0.63, while the class without it obtained an N-gain of 0.53. The increase in problem-solving skill in the class with the Android-based pocket book learning media is therefore higher than in the class without it.
"year": 2019,
"sha1": "30a0dd002d2a24df9f9531b58f640b2083012195",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1402/4/044100",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "1d269d3564322b086631d3251d0a8303a64456b9",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
250009580 | pes2o/s2orc | v3-fos-license | Increase in Lower Limb Strength after Multimodal Pain Management in Patients with Low Back Pain
Background and Objectives: The aim of the present study was to evaluate the efficacy of a multimodal pain therapy (MPM) regarding the objective parameter muscle strength of segment-dependent lower limb muscle groups before and after such a treatment. Materials and Methods: 52 patients with a history of low back pain and/or leg pain received standardized multimodal pain management. Strength of segment indicating lower limb muscles were assessed for each patient before and after ten days of treatment by handheld dynamometry. Results: Overall strength increased significantly from 23.6 kg ± 6.6 prior to treatment to 25.4 ± 7.3 after treatment, p ≤ 0.001. All muscle groups significantly increased in strength with exception of great toe extensors. Conclusions: Despite lower basic strength values at the beginning of treatment, all investigated muscle groups, except for the great toe extensors, showed a significant increase of overall strength after completion of the multimodal pain management concept. Increased overall strength could help with avoiding further need of medical care by supporting patients’ autonomy in daily life activities, as well as maintaining working abilities. Thus, our study is the first to show a significant positive influence on lower limb strength in patients with low back pain after a conservative MPM program.
Introduction
Chronic low back pain (LBP) is of increasing socio-economic importance [1,2]. In 2016, it was one of the five leading causes of years lived with a disability [3]. Furthermore, a prevalence of 1 in 5 in Germany, and 1 in 10 in the United States of America was reported [4,5]. Low back pain, and chronic back pain, is associated with a higher Body Mass Index (BMI) and old age [6,7]. In addition, pain in general is associated with higher immobilization and reduced strength to master daily life activities or the professional life [2]. To avoid persistent need of medical care and to maintain patients' autonomy, the treatment of this disease is an important issue.
Chronic low back pain is often described in the literature as mixed pain. This means that most patients have both neuropathic and nociceptive pain components [8].
Unless there is an absolute surgical indication such as cauda equina syndrome (CES), conservative therapy should then be carried out first. Since, as mentioned before, chronic low back pain is normally a mixed pain, it should be addressed in a multidisciplinary approach. Different treatment options are described in the literature for the non-surgical treatment [9,10]. In this context, spinal injection therapy has an important role in the treatment of LBP. In addition, it has already been shown that lumbar injection therapy is an effective method for nerve root irritations [11,12].
In combination with accompanying treatments such as physical therapy and psychological counselling in a multimodal analgesia management, injection therapy can be used to control chronic pain symptoms and avoid surgery [13,14].
Up to now, the visual analogue scale (VAS) or the numeric rating scale (NRS) have mostly been used in the literature as indicators for successful therapy [14]. However, these scales are strictly subjective parameters that can be influenced by a variety of confounders. Patient dependent criteria have mostly been collected through a questionnaire [14][15][16][17]. Interestingly, up to now, no data has been available yet concerning the influence of multimodal pain management (MPM) on muscle strength.
The aim of the present study was to evaluate the efficacy of a multimodal pain therapy regarding the objective parameter muscle strength of segment-dependent lower limb muscle groups before and after such a treatment.
Study Population
This prospective study was approved by the local ethics committee on 21 March 2018 (Nr. 18-931-101). Written informed consent was given by all patients participating in this study. Patients were included in the study if they were indicated for inpatient, multimodal pain management in a single center (Department of Orthopedic Surgery, Regensburg University Medical Center, Bad Abbach, Germany) from June 2018 to April 2019 and met the following inclusion criteria: age between 18 and 80 years, radicular pain originating from a specific nerve root or clearly attributable muscle strength impairment of the lower limb without surgical indication, and a pain level of at least 5 on the numeric rating scale.
Exclusion criteria were tumors with spinal implication, congenital spinal malformations, former spinal surgery, and rheumatic or inflammatory spinal diseases. Demographic data are shown in Table 1.
Treatment Algorithm
Inpatient care was provided for each patient for a total of 10 days. The concept consisted mainly of lumbar spinal nerve root analgesia (LSPA) in the affected region, using a freehand technique, which has already been described elsewhere [18]. The injections were administered twice a day in the morning and at noon. In addition, the patient received a bilateral facet joint infiltration at the level L4 to S1 and an epidural injection, as well as a nerve root block. Undiluted Mepivacaine 1% was used and 10 mL were applied as LSPA. For the lumbar epidural injection, 8 mg Dexamethasone were combined with sterile saline solution. Again, 10 mL were applied. Facet joint injections contained 8 mL undiluted Mepivacaine 1% and 8 mg Dexamethasone. For nerve root blocks, 4 mL Mepivacaine 1% and 4 mg Dexamethasone were injected. The aforementioned injections were accompanied by physiotherapeutic and sports medical exercises, which included isometric strengthening of the back muscles on specific training equipment. A specific training program for lower limb muscles was not carried out. Additionally, the patients received proprioceptive training, electrotherapy, thermotherapy, instruction in progressive muscle relaxation, and if indicated, a psychotherapeutic program, which was embedded in a group therapy.
Measurement
A hand-held dynamometer (microFET2, Hoggan Health Industries, Salt Lake City, Utah, USA) was used to measure strength. All values were documented in kilogram. The measurements were performed on two different days. The first measurement took place on the day of admission to determine the basic strength values, and the second on the day of discharge. Each measurement was performed independently by two examiners for each assessment. Both examiners received instruction and training in the use of the measurement device prior to the study.
The contact surfaces were marked with a skin-friendly pencil to ensure equal measurement points. The marked areas were repainted by the patient himself if necessary (e.g., after personal hygiene acts). The contact surfaces were chosen in a manner that allowed for comfortable measurement with the highest achievable leverage. The best testing position for each muscle group was described by Mentiplay [19]. While in a sitting position, the following movements were then carried out, one after the other, against the resistance of the dynamometer using the "break method" as described by Burns and Spanier [20]: hip flexion, knee extension, knee flexion. Then, the patient was moved into a lying position. Starting from the neutral joint position, the following movements were performed: foot extension, foot flexion, big toe extension. Each measurement was carried out bilaterally and all movements were measured in every patient. Three different types of transducer pads were available for the device in order to achieve proper skin contact: a flat one, a curved one and a small flat one for toes. Figure 1 illustrates exemplarily the testing positions for hip flexion, knee flexion, knee extension and ankle flexion. To record the patients' pain, a numerical rating scale (NRS) from 0 to 10 was used. The patients were interviewed before each measurement regarding their pain level. The strength values were documented immediately in standardized, electronic form. In addition, the descriptive data of every individual were collected (Tables 1 and 2).
Statistical Analysis
For statistical analysis, continuous data are presented as mean values and standard deviation. Group comparisons were performed by two-sided t-tests for dependent variables. Absolute and relative frequencies were given for categorical data. Inter-observer agreement was assessed using the intraclass correlation coefficient (ICC). The following values were determined according to Koo et al. [21]: less than 0.5 poor, between 0.5 and 0.75 moderate, between 0.75 and 0.9 good, and greater than 0.90 excellent reliability. Differences of p < 0.05 were considered statistically significant. IBM SPSS Statistics 25 (SPSS Inc, Chicago, IL, USA) was used for analysis.
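As an illustration of the group comparison described above, the snippet below runs a two-sided t-test for dependent samples on hypothetical before/after strength values (the numbers are not the study data); interrater agreement would additionally require an ICC computation, which common statistics packages provide.

import numpy as np
from scipy import stats

pre = np.array([21.5, 24.0, 19.8, 26.3, 22.1])    # strength before treatment [kg] (hypothetical)
post = np.array([23.0, 25.9, 21.2, 27.8, 24.0])   # strength after treatment [kg] (hypothetical)

t_stat, p_value = stats.ttest_rel(pre, post)       # two-sided paired (dependent-samples) t-test
print(f"mean change: {np.mean(post - pre):.2f} kg, t = {t_stat:.2f}, p = {p_value:.4f}")
# Differences with p < 0.05 would be considered statistically significant.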
Results
In total, 52 patients were included and analyzed, with a mean age of 63 years.
Strength Development
Hip flexor strength increased significantly (p = 0.013) by 7.2% from the first day to discharge.
For the knee extensors, the power of the muscle group increased significantly by 12.6% (p < 0.001).
Knee flexor strength showed an increase of 7.4% (p < 0.001). Ankle extensor strength increased by 4.75% (p = 0.041). Ankle flexor strength increased by 9.5% (p < 0.001). For the great toe extensors, a reduction in strength of 4.0% was noted, which was not statistically significant, p = 0.103. Mean values for strength development are shown in Table 2. Figure 2 shows boxplots of mean overall strength prior to and after treatment.
Side-to-Side Differences
An overall significant side-to-side difference prior to treatment, considering all measurements on one side, was not noticed, p = 0.294. Overall strength on the right side was 23.96 kg ± 7.4, and on the left was 23.24 kg ± 6.7. After treatment, a significant side-to-side difference was recognized, p = 0.039. On the right side, overall strength was 25.87 kg ± 7.6, on the left side, it was 24.95 kg ± 7.3.
No adverse events were found in the study.
Discussion
The most important finding of this study was that lower limb strength increased significantly within a few days of non-operative treatment in patients with low back pain.
Up to now, objective data regarding the course of segment-dependent muscle strength in patients undergoing MPM are missing. Hence, the primary goal of this study was to objectify the influence of multimodal pain management with regard to the muscle strength development of lower limb muscles using a hand-held dynamometer.
There are several limitations of this study. First, the strength values were assessed only twice, at the start of the injection therapy and at its end. Therefore, our records do not contain any information about the course of strength development over the entire duration of therapy. Second, a handheld dynamometer has been used in the current study. However, a free hand technique cannot be as accurate as a fixed system due to imbalances in motion control of the raters and variances due to the raters themselves. Third, there is no comparison group, especially comparing injections to other treatment options like analgesia or physiotherapy alone in a multimodal analgesia concept. This might limit the interpretation of the data. In addition, the study group is very limited. Bigger study groups are needed to confirm the findings of this investigation. Since there were only two measurements in a very short period, it is not possible to predict a long-term result and how long the effect of MPM on muscle strength might last.
Researchers have to be aware of this when comparing different testers. As Wadsworth and Wikholm showed in their studies, intertester reliability is reduced when the strength of the subject is superior to that of the examiner [22,23]. Moreover, the present study does not contain any information about the patients' dominant side. Compared to the results in the literature, the presented measuring method could be unintentionally influenced by the examiner himself, based on the different force, gender and, of course, levers of the examiner [24,25].
In the present study, the interrater reliability results for both before treatment and after treatment proved to be excellent with values of 0.94 and 0.96, respectively. These findings are comparable to literature, where a total interrater reliability of 0.94 using the same device can be found [26]. Nonetheless, there are also inhomogeneous results, ranging from moderate to excellent interrater reliability between assessors testing muscle strength with a hand-held device [27][28][29].
In general, the comparison of the available data with the literature does not appear to be unambiguous due to inhomogeneous measurement methods. For hip flexion, a value of 24.8 kg ± 9.7 was found after treatment, which is a good average of the literature [32]. The increased hip flexion muscle strength in healthy adults compared to patients with low back pain might be the result of pain-free range of motion. In the present study, knee extension and flexion force were 30.3 kg ± 12.09 and 21.75 kg ± 7.31, respectively, both after treatment. These findings are more or less in line with Stoll et al., who found slightly more muscle strength in these groups [32]. As they investigated healthy participants, the results in this study suggest a normalization of high pain levels resulting in low strength levels towards those of asymptomatic adults due to MPM. Also, a significant rise in knee strength was found after treatment, suggesting patient relief from pain-related functional restrictions. Lee et al. reported that patients with a history of low back pain have significantly lower total knee strength than a control group [28].
For ankle dorsal extension and plantar flexion, a significant increase was also found, showing a considerably higher base value for the plantar flexors of about 14% in comparison to the dorsal extensor, which increased even after treatment to about a 21% difference. Mentiplay et al., could show higher levels of baseline values, ranging from about 21 kg and 31 kg for dorsal extensors and about 48 kg and 51 kg for plantar flexors [19]. This indicates an even higher plantar flexion and dorsal extension strength ratio, which highlights the increasing strength ratio after treatment in the present study.
The results for great toe extensors in this study slightly decreased over the treatment period. However, clinical testing of the extension force of the big toe is difficult due to the small contact surface on the one hand and the rather low extension force of the big toe compared to the force of the examiner's hands on the other. Interestingly, the extension force of the big toe has rarely been studied in literature, despite the fact that the extensor hallucis longus is the segment-dependent muscle of the L5 nerve root. In 1995, Jönsson et al., reported an improvement in the function of the extensor hallucis longus after surgical treatment of disc herniation and affection of the L5 nerve root, but the classification of the force ratios was only categorical and not absolute [33]. Further studies also did not present absolute data obtained by dynamometry, such as Hara et al., in 2011, who assessed extensor muscle strength by placing the middle finger of the investigator on the toe [34]. Riandini et al., used a Hoggan Microfet 3 but presented their data only relative [35]. The discrepancy in measurement should be the content of further research to compare these findings, since L5 nerve root irritations are common.
After treatment, the patients showed a significant side-to-side difference, which was not the case prior to the multimodal pain management. According to Lanshammer et al., normally existing side-to-side differences reoccur, which emphasizes the positive effect of the multimodal pain therapy [36]. However, side differences are controversially discussed in the literature. Andrews et al. did not see significant side-to-side differences in the lower limbs in asymptomatic patients [30]. The clinical relevance of the study is based on the fact that patients with low back pain seem to benefit highly from MPM. Handheld dynamometry is a useful instrument to monitor the success.
Conclusions
Despite lower basic strength values at the beginning of treatment, all investigated muscle groups, with the exception of the great toe extensors, showed a significant increase of overall strength after completion of the multimodal pain management concept. Increased overall strength could help with avoiding further need of medical care by supporting patients' autonomy in daily life activities, as well as maintaining working abilities. Thus, to our best knowledge, the present study is the first to show a significant positive influence on lower limb strength in patients with low back pain after a conservative MPM program. However, our findings are of limited value (low level of evidence), and it is unknown whether these effects are sustained in the long term.
Institutional Review Board Statement: The study was approved by the local Ethics Committee (No. 18-943-101) prior to the initiation of this study.
Informed Consent Statement: Informed written consent was obtained from all study participants.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest. | 2022-06-25T15:08:07.720Z | 2022-06-22T00:00:00.000 | {
"year": 2022,
"sha1": "601ebb6e3047666cd97288b95c6298afad9ee887",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1648-9144/58/7/837/pdf?version=1655887223",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "784610e5ebf5d7ca21fedcb4c06aca552cf4dfd0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250662250 | pes2o/s2orc | v3-fos-license | Attachment Style and Its Relationships with Early Memories of Separation Anxiety and Adult Separation Anxiety Symptoms among Emerging Adults
Emerging adulthood concerns the transition from adolescence to adulthood. It foresees the separation from the family and the creation of new significant relationships, whereby specific attachment styles might be triggered when facing these challenges. The present study investigates the influence of retrospective memories associated with childhood separation anxiety symptoms upon emerging adults’ romantic avoidant vs. anxious attachment styles including adult separation anxiety symptoms as mediators. Age and gender were included as covariates. A community sample of N = 394 Italian emerging adults (Mage = 23.64, SD = 4.00, 70% females) completed self-report measures. The results showed that the participants presented a greater anxious attachment rather than an avoidant attachment style. Moreover, both adult separation anxiety and the memories of early separation anxiety were positively and significantly associated with anxious attachment and not with avoidant attachment. A mediation model conducted and focused on anxious attachment showed that, although not directly associated, child separation anxiety did show a significant positive indirect effect on anxious attachment as mediated by adult separation anxiety symptoms. Thus, the findings highlighted the influence of adult separation anxiety symptoms and retrospective childhood memories of separation anxiety upon anxious adult romantic attachment, yet not on avoidant attachment style. Clinical implications are discussed and suggestions for future research are provided.
Romantic Attachment and Adult Separation Anxiety
Attachment theory has long acknowledged that the need to form and maintain close bonds is fundamental to humans throughout the course of life [1]. Attachment theory postulates that the early experiences of dyadic interactions with parents give rise to patterns of relational expectations, emotions, and behaviors regarding oneself and others that are embedded within the attachment system, then by guiding subsequent social competencies [1]. Bowlby [2,3] defined these cognitive schemas as Internal Working Models (IWM), which are mental representations resulting from two components: (1) a model of significant others (e.g., parents, close friends, and/or romantic partners), characterized by the expectations and beliefs concerning the availability, dependability, responsiveness, supportiveness, proximity, and comfort of attachment figures; and (2) a model of oneself, which includes information about whether one is worthy of attention, care, and support; one's ability to get sufficient proximity/comfort; and one's worth as a relationship partner. Once developed, these IWM operate as a guide to the development of later interpersonal relationships and specifically to how people think, feel, and relate to their close partners, as well as to the interpersonal world around them [4]. Accordingly, a distinction between secure, anxious, and avoidant attachment was made and then reflected in adult romantic attachment. Avoidant romantic attachment describes the degree to which individuals are uncomfortable with closeness and emotional intimacy in relationships. Specifically, avoidant individuals strive to create and maintain independence, control, and autonomy in their relationships [5], as they hold the belief that seeking psychological/emotional proximity with romantic partners is neither possible nor desirable. These beliefs motivate avoidant people to employ distancing strategies [6], whereby they defensively suppress negative thoughts and emotions to promote independence and autonomy. On the contrary, anxious romantic attachment concerns the degree to which individuals worry about being underappreciated or abandoned by their romantic partners. People experiencing heightened anxious attachment are heavily invested in their relationships, and they yearn for closeness with their partners in order to feel more secure [7]. Moreover, these individuals harbor negative self-views, while being both guarded and optimistic toward their romantic partners. These conflicting perceptions lead anxious individuals to question their worth, they worry about losing their partners, and they remain vigilant to signs that their partners might leave them [8,9]. Accordingly, people with anxious attachment are more prone to use emotion-focused and hyper-activating coping strategies when distressed, which sustain or escalate their worries and often keep their attachment systems chronically activated [10]. Avoidant and anxious attachments could thus play a role in facilitating or impeding adult tasks, which include dealing with separation from family and other attachment primary figures and facing attachment romantic bonds: during the challenging period of emerging adulthood, specific attachment patterns might be particularly critical as they are pivotal for social relationships [11].
Emerging Adulthood and Separation Anxiety
"Emerging Adulthood" is defined as a developmental period ranging approximately from 18 to 30 years of age and refers primarily to young adults who do not have children, do not live in their own home, or do not have sufficient income to become fully independent [12]. Although young adults are expected to leave home, which leads to further redefinitions of the relationship with their original family, about 50% of Italian young adults live at home up until the age of 35 and 30% are involved in long-distance romantic relationships, thereby delaying a stable commitment to a partner [13]. Indeed, the mean age of first marriages in Italy increased to 37.91 years of age in 2017 [14]. Moreover, in the Italian context, the family is the main care and welfare provider for emerging adults, who are facing a transition towards the uncertainty of a labor market, that is exacerbated by crucial social policies. Thus, they are most often financially sustained by their parents, that means permitting them to more easily attend school and/or find a job, resulting in prolonged co-residence [15]. Moreover, culturally bound psychosocial factors play a role in delaying young adults' separation from their parents. The literature reports that both parents and their emerging adult offspring perceive living together as a positive experience [16,17]. So, young people, still living at home with their parents, then demonstrated difficulty in becoming independent and in taking on responsibilities [18], because they relied on their comfort zone. Although chronologically adulthood has been achieved, adult psychosocial maturity has not, thus this period can be considered as a delay in reaching developmental tasks, leading to difficulty in facing adult responsibilities and romantic relationships [19]. Accordingly, the prolongation of this transition can become "a combined developmental undertaking" [20] for both parents and their children. Therefore, emerging adulthood can become a phase of uncertainty, so much so that the term "feeling in-between" was adopted to describe this [17]. Indeed, emerging adults may struggle with identity issues, with difficulty in finding a place in the adult world, and in making satisfying choices regarding their romantic relationships and work environment, all of which can subsequently result in experiencing symptoms of anxiety [21].
In this regard, it is noteworthy that the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) [22] has included Adult Separation Anxiety Disorder (ASAD), thereby acknowledging that adulthood can be a period in which separation anxiety may appear and thus that it might play a role across the whole lifespan. ASAD is characterized by an intense fear of actual or potential separation from close attachment figures and a consequent worry regarding the safety and whereabouts of such persons [22]. Among young adults, anxiety symptoms might be experienced when they separate from their parents or intimate partners, or when they are faced with life changes, such as moving or getting married [22]. The literature shows that separation anxiety correlates with a reduced differentiation of oneself, and that it heavily impacts different aspects of life, such as work, close relationships, and social and private leisure activities [23]. Moreover, genetic, familial, bonding, and attachment factors may be involved in the genesis of ASAD [24]. The changes introduced by the DSM-5 have led to increased research into the study of ASAD regarding its epidemiology, prevalence, age of onset [25,26], correlates [27], and comorbidities [25,28], with a mostly categorical approach. As emerged from epidemiological studies, childhood Separation Anxiety Disorder (SAD) may persist into adulthood, suggesting continuities between SAD and ASAD [29]. However, other studies also support the onset of separation anxiety symptoms in adulthood [26]. In attachment theory, the construct of separation anxiety has always played a pivotal role. From an evolutionary point of view, the attachment system, beyond other roles, promotes survival by maintaining proximity between parents (or other caregiving figures) and children, which allows children to experience reduced fear, anxiety, and distress when facing separation. Accordingly, heightened anxiety symptoms associated with separation from the primary caregiver can be an indicator of insecure attachment, thereby referring to issues related to the child's IWM as shaped by previous and ongoing bonding experiences with the caregiver [2,3]. Research on adult separation anxiety symptoms and, for instance, current studies relevant to the COVID-19 psychological impact, suggest that relying on adults' memories can be considered valid and reliable, in particular when investigating subjective experiences rather than specific behavioral manifestations or detailed instances of separation anxiety [30,31]. Studies on early memories of separation anxiety symptoms are mainly focused on the connection with the developmental trajectories of SAD, while the investigation of the link between early memories of separation anxiety symptoms and adult romantic attachment styles is still in its infancy.
Notably, all of the aforementioned literature regarding separation anxiety focuses on the identification of early memories of separation anxiety symptoms and/or adult separation anxiety in clinical samples. Very little attention has been given to community samples whereby an ASAD clinical diagnosis is not considered a milestone [26,32]. To our current knowledge, no studies have tried to analyze the peculiarity of emerging adulthood, by distinguishing between retrospective memories of separation anxiety symptoms and adult separation anxiety symptoms according to a dimensional perspective, while also investigating their association with romantic attachment. Few papers have focused on these issues, and those that did have concentrated on research mainly carried out in clinical populations [28,33].
Objective
The present study is aimed at exploratively investigating how retrospective memories of separation anxiety symptoms and adult separation anxiety can be associated with emerging adults' avoidant and anxious attachments. In this regard, the first aim is to investigate which romantic attachment style (anxious vs. avoidant) is prevalent among Italian emerging adults. The second aim is to evaluate how romantic attachment is associated with adult separation anxiety profiles based on the symptom levels observed in adult separation anxiety as well as the retrospective memories of separation anxiety symptoms. Last, the influence of retrospective memories associated with early memories of separation anxiety symptoms upon emerging adults' avoidant vs. anxious romantic attachment styles, including adult separation anxiety symptoms as mediators, is explored.
Procedure and Participants
A total of N = 394 Italian emerging adults participated in the study, of which 276 (70%) were female, aged between 18 and 30 years (Mage = 23.64, SD = 4.00). The participants were all university students (undergraduate and postgraduate). All participants indicated that they had not been hospitalized with symptoms of psychiatric disorders in the past two years. A small minority (<3% of the total sample) reported previous psychological counselling or intervention in the past two years for mild problems, unrelated to separation anxiety. Only 23 participants (5.8%) were in a common-law relationship and only 14 (3.6%) were married. The study procedure was conducted in compliance with the Declaration of Helsinki (Italian law 196/2003, EU GDPR 679/2016) and approved by the Ethical Interdepartmental Committee of Padova University (1266/2013). The participants provided their consent prior to their participation. All were informed that their data were confidential, that they could refuse to disclose information they did not want to provide, and that they could withdraw from the study at any moment. Trained master's students assisted the participants in the completion of the self-report measures during regular university hours.
Separation Anxiety Symptom Inventory
The Separation Anxiety Symptom Inventory (SASI) [29] is a 15-item self-report questionnaire that captures retrospective memories of separation anxiety symptoms and their frequency prior to/up until the age of 18. It generates dimensional rather than categorical responses. Items are rated on a four-point Likert scale (from 0 = "this has never happened" to 3 = "this happens very often"). Individual items are summed to obtain a total SASI score, which is considered in the current study. Higher scores indicate greater retrospective memories of separation anxiety symptoms in childhood. The SASI has been translated and validated in Italy [34]. In the present study, the internal consistency shows α = 0.83 (95% CI = [0.80, 0.85]).
Adult Separation Anxiety-27
The Adult Separation Anxiety-27 (ASA-27) [35] is a 27-item self-report inventory designed to assess separation anxiety symptoms in adulthood. Items are rated on a 4-point Likert scale (from 0 = "this has never happened" to 3 = "this happens very often"). Scores across items are summed to obtain a total ASA-27 score, which is considered in the current study. Higher scores indicate a greater severity of adult separation anxiety symptoms. The ASA-27 has been translated and validated in Italy on a sample of Italian university students, showing good psychometric properties [36]. In the present study, the ASA-27 shows a satisfactory internal consistency of α = 0.89 (95% CI = [0.89, 0.91]).
The Experiences in Close Relationships Questionnaire-Revised
The Experiences in Close Relationships Questionnaire-Revised (ECR-R) [37] is a 36-item measure of adult romantic attachment style. The statements concern how the individual feels in emotionally intimate relationships, and how they generally experience relationships, not merely what is happening in a current relationship. More specifically, the ECR-R is designed to assess individual differences regarding two dimensions of attachment (18 items for each scale). The first dimension refers to attachment-related anxiety (ECR-Anxiety), which indicates the extent to which people are insecure vs. secure about the availability and responsiveness of romantic partners. The second dimension refers to attachment-related avoidance (ECR-Avoidance), which designates the extent to which people are uncomfortable being close to others vs. secure depending on others. Both dimensions are considered in the current study. Each item is rated on a 7-point Likert scale (from 1 "strongly disagree" to 7 "strongly agree"). Higher mean scores indicate greater degrees of anxious and/or avoidant romantic attachment. The ECR-R has been translated and validated in Italy [38]. In the present study, the internal consistency is α = 0.90 (95% CI = [0.89, 0.91]) for ECR-Anxiety, and α = 0.92 (95% CI = [0.91, 0.93]) for ECR-Avoidance.
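The three instruments share the same scoring logic: item responses are aggregated into a scale score, and internal consistency is summarized with Cronbach's alpha. The following Python sketch illustrates that logic only; the original analyses were run in R and SPSS, and the response matrix below is simulated, so the printed alpha is not meaningful in itself.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)

# Simulated responses: 394 respondents, 15 SASI items rated 0-3.
# (Random, uncorrelated items, so alpha will be near zero; real items covary.)
sasi_items = rng.integers(0, 4, size=(394, 15))
sasi_total = sasi_items.sum(axis=1)                 # SASI and ASA-27 use summed totals

# ECR-R: 18 items per dimension on a 1-7 scale, summarized as item means.
ecr_anxiety_items = rng.integers(1, 8, size=(394, 18))
ecr_anxiety_score = ecr_anxiety_items.mean(axis=1)

print("SASI alpha:", round(cronbach_alpha(sasi_items), 2))
print("first SASI totals:", sasi_total[:5])
print("first ECR-Anxiety means:", np.round(ecr_anxiety_score[:3], 2))
```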
Data Analysis
The analyses were performed using R and SPSS v.20. The preliminary analysis covered demographic (N, %) and descriptive information (median, quartiles) and the assessment of the variables' distributions through the Shapiro-Wilk test, applied to all study variables (i.e., SASI, ASA-27, ECR-Anxiety, ECR-Avoidance). All tests were significant (p < 0.001), indicating that none of the variables presented a normal distribution.
Subsequently, the sample levels of anxious (ECR-Anxiety) vs. avoidant (ECR-Avoidance) attachment were compared using the Wilcoxon test for paired samples. Furthermore, to explore the variables' associative patterns, Spearman rho correlations were performed and interpreted when the effect size was at least medium (>0.30), following Cohen's guidelines [39]. All analyses were considered significant when p < 0.05.
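A minimal sketch of these non-parametric steps (normality screening, the paired comparison of the two attachment dimensions, and a Spearman correlation) is given below. It is illustrative only: the study used R and SPSS, and the arrays here are simulated placeholders rather than the actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 394
sasi = rng.normal(12, 6, n)            # placeholder scores for each measure
asa27 = rng.normal(20, 8, n)
ecr_anxiety = rng.normal(3.5, 1.0, n)
ecr_avoidance = rng.normal(3.0, 1.0, n)

# 1. Normality screening with the Shapiro-Wilk test
for name, x in [("SASI", sasi), ("ASA-27", asa27),
                ("ECR-Anxiety", ecr_anxiety), ("ECR-Avoidance", ecr_avoidance)]:
    w, p = stats.shapiro(x)
    print(f"{name}: W = {w:.3f}, p = {p:.3f}")

# 2. Paired comparison of anxious vs. avoidant attachment (Wilcoxon signed-rank)
w_stat, w_p = stats.wilcoxon(ecr_anxiety, ecr_avoidance)
print(f"Wilcoxon: stat = {w_stat:.0f}, p = {w_p:.4f}")

# 3. Spearman rank correlation, e.g. ECR-Anxiety with ASA-27
rho, rho_p = stats.spearmanr(ecr_anxiety, asa27)
print(f"Spearman rho = {rho:.2f}, p = {rho_p:.4f}")
```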
A mediational model (PROCESS Model 4) [40] was performed including SASI as the predictor variable and ECR-Anxiety as the dependent variable, with ASA-27 as the mediator. Age and gender were included as covariates. The bootstrapping method with 5000 resamples was then applied. The effects' significance was assessed based on a 95% Confidence Interval (CI); the effects were considered significant when the CI excluded 0. It should be noted that PROCESS mediational models adopt the Ordinary Least Squares (OLS) regression method to estimate the models' effects [40], meaning that associations between variables are assessed while controlling for the influence of all others included in the model. Through the OLS regression method, it is thus possible to control for the variables' shared variance, thereby justifying performing mediational analysis on cross-sectional data in which no causal relations can be implied. Accordingly, terms such as "effect", "influence", or "mediate" will be used in line with the model performed, but not to suggest causality.
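The PROCESS analysis itself was run as described above; the sketch below only illustrates, with simulated data and without the covariates, how a simple OLS mediation and a percentile-bootstrap confidence interval for the indirect effect can be computed. Variable names mirror the study's measures, but the values and effect sizes are assumptions made for demonstration.

```python
import numpy as np

def ols_coefs(X_cols, y):
    """Least-squares coefficients of y on an intercept plus the given predictor columns."""
    X = np.column_stack([np.ones(len(y))] + list(X_cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # beta[0] is the intercept

def indirect_effect(x, m, y):
    """Product-of-paths indirect effect: (x -> m) times (m -> y, controlling for x)."""
    a = ols_coefs([x], m)[1]
    b = ols_coefs([m, x], y)[1]
    return a * b

rng = np.random.default_rng(2)
n = 394
sasi = rng.normal(12, 6, n)                       # predictor (X)
asa27 = 0.9 * sasi + rng.normal(0, 6, n)          # mediator (M), assumed to depend on X
ecr_anx = 0.05 * asa27 + rng.normal(3.5, 1.0, n)  # outcome (Y), assumed to depend on M

point = indirect_effect(sasi, asa27, ecr_anx)

# Percentile bootstrap with 5000 resamples, as in PROCESS Model 4
boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(sasi[idx], asa27[idx], ecr_anx[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```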
Preliminary Analysis
The descriptive information is reported in Table 1. The participants consisted of N = 394 Italian emerging adults aged between 18 and 30 years (Mage = 23.64, SD = 4.00, 70.1% females). At the time of the study, most of the participants were university students (n = 385, 97.72%), and only n = 9 (2.3%) were university students in paid employment. With regard to the sample's relationship status, the great majority of the participants (n = 347, 88.1%) were single, n = 23 (5.8%) were cohabiting with their partner, and only n = 14 (3.6%) were married. Ten participants (n = 10, 2.5%) did not provide any information. The Wilcoxon test for paired samples was performed to investigate differences in the participants' levels of ECR-Anxiety vs. ECR-Avoidance. The results showed that the current sample presented significantly higher scores for ECR-Anxiety compared to those for ECR-Avoidance (stat = 9630; p < 0.001).
Correlations
The correlations are shown in Table 2, highlighting a significant negative association between the participants' age and all of the considered variables, although all show a small effect size. ECR-Anxiety presented positive and significant correlations with ASA-27 and SASI with medium effect sizes, while ECR-Avoidance showed negligible or insignificant correlations with ASA-27 and SASI. ECR-Anxiety and ECR-Avoidance were positively and significantly correlated with medium effect sizes, while SASI and ASA-27 were positively and significantly correlated with a large effect size. To further explore and to visually inspect the correlational results, four plots displaying the distribution and the linear associations between ECR-Anxiety and both ASA-27 and SASI (Figure 1, top plots), as well as the associations between ECR-Avoidance, ASA-27, and SASI (Figure 1, bottom plots) have been produced.
The Mediational Model
In line with the obtained correlational results, only one mediational model was performed in order to investigate the direct and indirect effects of SASI on ECR-Anxiety as mediated by ASA-27, as displayed in Figure 2. Since neither SASI nor ASA-27 was shown to significantly contribute to ECR-Avoidance, no mediational model with these variables was performed. Moreover, as reported in the Data Analysis section, only results with at least a medium effect size (>0.30) were considered in the present study. The overall model comprising the predictors (i.e., SASI and ASA-27) as well as the covariates (i.e., age and gender) accounted for 13.67% of the ECR-Anxiety total variance. This model highlighted a significant total effect (β = 0.95; t = 6.96; CI = 0.68, 1.22).
In particular, as shown in Figure 2, SASI showed a significant positive association with ASA-27 (β = 0.94; t = 13.65; CI = 0.81, 1.08) but no significant direct effect on ECR-Anxiety. On the other hand, ASA-27 showed a significant positive association with ECR-Anxiety (β = 0.84; t = 9.23; CI = 0.66, 1.02). Accordingly, although SASI was not directly associated with ECR-Anxiety, it showed a significant indirect effect on ECR-Anxiety mediated by ASA-27 (β = 0.79; CI = 0.56, 1.04). Age and gender, when included as covariates, were significant when assessing the effects of SASI and ASA-27 on ECR-Anxiety, suggesting that both age and gender significantly contribute to these associations. On the other hand, only gender emerged as a significant covariate when calculating the association between SASI and ASA-27, suggesting significant gender differences in this association.
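As a rough consistency check, note that in a simple OLS mediation the indirect effect equals the product of the two constituent paths; here 0.94 × 0.84 ≈ 0.79, which matches the reported indirect coefficient.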
Discussion
The present study aimed at exploratively investigating how retrospective memories of separation anxiety symptoms and adult separation anxiety could be associated with emerging adults' romantic avoidant versus anxious attachment styles. Few studies have investigated these relationships and those that did, were mainly focused on clinical populations [28,33]. Thus, little attention has been paid to the associations of the above-mentioned variables in a community sample, in particular among Italian emerging adults. Following Silove and colleagues' studies [28], the present study adopted a dimensional perspective instead of a categorical approach in the interpretation of the results that emerged from the analyses.
The results related to the first and second aims of the present study pointed out that emerging adults showed a significantly higher anxious romantic attachment rather than an avoidant romantic attachment style. However, a positive significant correlation was found between anxious and avoidant attachment styles, which can indicate an increase in the use of anxious strategies as well as an increase in the use of avoidant strategies in romantic attachment styles. The results showed that early memories of separation anxiety and separation anxiety symptoms in adulthood are related to a more anxious attachment style. These findings are further supported by the plots showing the linear association The overall model comprising all criterion variables (i.e., SASI and ASA-27) as well as the covariates (i.e., age and gender) accounted for the 13.67% of ECR-Anxiety total variance. This model highlighted a significant total effect (β = 0.95; t = 6.96; CI = 0.68, 1.22).
In particular, as shown in Figure 2, SASI showed a significant positive association with ASA-27 (β = 0.94; t = 13.65; CI = 0.81, 1.08) but no significant direct effect on ECR-Anxiety. On the other hand, ASA-27 showed a significant positive association with ECR-Anxiety (β = 0.84; t = 9.23; CI = 0.66, 1.02). Accordingly, although not directly associated, ASA-27 showed a mediating role in the significant indirect association between SASI and ECR-Anxiety (β = 0.79; CI = 0.56, 1.04). Age and gender, when included as covariates, were significant when assessing SASI and ASA-27 effects on ECR-Anxiety, suggesting that both age and gender significantly contribute to these associations. On the other hand, only gender emerged as a significant covariate when calculating the association between SASI and ASA-27, suggesting significant gender differences in this association.
Discussion
The present study aimed at exploratively investigating how retrospective memories of separation anxiety symptoms and adult separation anxiety could be associated with emerging adults' romantic avoidant versus anxious attachment styles. Few studies have investigated these relationships and those that did, were mainly focused on clinical populations [28,33]. Thus, little attention has been paid to the associations of the above-mentioned variables in a community sample, in particular among Italian emerging adults. Following Silove and colleagues' studies [28], the present study adopted a dimensional perspective instead of a categorical approach in the interpretation of the results that emerged from the analyses.
The results related to the first and second aims of the present study pointed out that emerging adults showed a significantly higher anxious romantic attachment rather than an avoidant romantic attachment style. However, a positive significant correlation was found between anxious and avoidant attachment styles, which can indicate an increase in the use of anxious strategies as well as an increase in the use of avoidant strategies in romantic attachment styles. The results showed that early memories of separation anxiety and separation anxiety symptoms in adulthood are related to a more anxious attachment style. These findings are further supported by the plots showing the linear association between variables, which highlight how emerging adults with anxious romantic attachment are distributed evenly regarding both separation anxiety symptoms and early memories of separation anxiety. This distribution was not observed for avoidant attachment, which has no significant association with separation anxiety symptoms among emerging adults. Therefore, no mediation model was performed considering avoidance attachment as the dependent variable, whereas it was carried out considering anxious attachment and including both adults' reported separation anxiety symptoms and early memories of separation anxiety. This latter mediational model, as the third aim of the present study, shows that early memories of separation anxiety symptoms influence anxious romantic attachment, only as a contribution to the presence of separation anxiety symptoms. In other words, the more that young adults report early memories of separation anxiety and report having experienced adult separation anxiety symptoms, the more they display an anxious attachment style in their close relationships. These results are consistent with the available literature regarding the developmental challenges that emerging adults have to face, characterized, in particular, by the experience of "feeling in-between" [17]. Emerging adults have to face a process of exploration in several key areas of their lives, such as the search for a romantic relationship, which represents one of their intrinsic developmental tasks [9]. However, in the Italian context, this exploration usually occurs within the parents' household, where most emerging adults continue to live [13], thereby relying on their family nucleus as their safe zone even in adulthood. The process of leaving the parents' house represents a real-life separation experience for young adults, thereby delaying taking responsibility and their commitment to romantic relationships [19]. The experience of increased adult separation symptoms can increase difficulties in developing romantic attachment relationships and, as a consequence, increase the chances of adopting an anxious attachment style [21]. Indeed, it is in this context that the way an individual responds behaviorally and emotionally to close relationships is influenced by their mental representations of attachment relationships, developed through experiences with significant others [9] (i.e., early memories of separation anxiety). In line with this, Brennan's model suggests that adults with high levels of romantic anxious attachment overvalue their relationships, show a negative image of themselves, and fear separation [9]. 
Therefore, this mediational model may provide additional knowledge to the literature, which, in turn, can be useful for clinical practice in understanding how retrospective memories of separation anxiety experienced during childhood relate to current adult separation anxiety symptoms. Moreover, this pattern can guide clinicians and researchers in gaining an in-depth understanding of the way emerging adults relate to romantic partners, particularly when an anxious attachment style is involved. To conclude, these results suggest a possible continuity between early memories of separation anxiety symptoms and adult separation anxiety symptoms in emerging adults with anxious romantic attachment, for whom the separation-individuation process is one of the main challenges of this transitional phase. With regard to the limitations, the present study used a cross-sectional design and thus the presence of a cause-effect relationship between the considered variables cannot be inferred. Furthermore, only retrospective memories of separation anxiety symptoms among emerging adults were considered; future research should therefore develop longitudinal studies in order to investigate the trajectory of the associations between these variables from childhood to adulthood. Moreover, the self-report questionnaires administered may not reveal the unconscious features of emerging adults' romantic attachment, and thus, future studies should combine quantitative data with the use of semi-structured interviews, such as the Adult Attachment Interview [41,42], which has already been used as a tool to predict behavior in romantic relationships [43,44]. Notwithstanding these limitations, the mediational model that emerged from this study may help in the understanding of the underlying challenges that this life period brings, specifically with regard to the influence of adult separation anxiety symptoms in the relationship between early memories of separation anxiety and anxious romantic attachment.
Conclusions
Extensive research has explored the impact of early experiences, particularly within the family [45,46], upon adult development, including personal adjustment [47] and attachment ties [48]. Instruments administered to adults typically focus on current functioning, while early experiences are investigated retrospectively by posing the same questions used with children and adolescents, but with the items phrased in the past tense [49,50]. In this way, early influences on adults' current adjustment can be studied consistently without the need for costly longitudinal studies [49,50], relying instead on instruments such as the SASI. Following this line, to our knowledge, the current study is the first to investigate how childhood memories of separation anxiety symptoms and adult separation anxiety may be linked to the romantic avoidant versus anxious attachment styles among emerging adults. This study highlights how early memories of separation anxiety symptoms influence how emerging adults interact in romantic relationships, only when they report experiencing separation anxiety symptoms in adulthood. In this regard, one can speculate about a possible continuity between early memories of separation anxiety symptoms and adult separation anxiety symptoms in emerging adults, in particular within the context of anxious romantic attachment. Therefore, this research adds knowledge on the importance of supporting young adults in the process of separation-individuation and on the importance of establishing healthy romantic relationships, which is one of the main developmental tasks of emerging adulthood. In this transitional period, the role of the family, both directly and indirectly, continues to be a key developmental component for young adults, which should be considered in the design of future work. | 2022-07-20T15:27:07.436Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "2909b491bbbbc59fcb72e17b84be5b3ef79d8f40",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/14/8666/pdf?version=1657964840",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "05abb314ff599cd9649bec098e3d672d689f9b8d",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
263217831 | pes2o/s2orc | v3-fos-license | Structural mechanism of cytosolic DNA sensing by cGAS
Cytosolic DNA arising from intracellular bacterial or viral infections is a powerful pathogen-associated molecular pattern (PAMP) that leads to innate immune host defence by the production of type I interferon and inflammatory cytokines. Recognition of cytosolic DNA by the recently discovered cyclic-GMP-AMP (cGAMP) synthase (cGAS) induces the production of cGAMP to activate the stimulator of interferon genes (STING). Here we report the crystal structure of cGAS alone and in complex with DNA, ATP and GTP along with functional studies. Our results explain the broad DNA sensing specificity of cGAS, show how cGAS catalyses dinucleotide formation and indicate activation by a DNA-induced structural switch. cGAS possesses a remarkable structural similarity to the antiviral cytosolic double-stranded RNA sensor 2′-5′-oligoadenylate synthase (OAS1), but contains a unique zinc thumb that recognizes B-form double-stranded DNA. Our results mechanistically unify dsRNA and dsDNA innate immune sensing by OAS1 and cGAS nucleotidyl transferases.
Recognition of pathogen- or danger-associated molecular patterns (PAMPs or DAMPs) is crucial for host defence. Innate immunity ensures this recognition through germline-encoded pattern recognition receptors (PRRs) and triggers signalling cascades that result in production of proinflammatory cytokines and type I interferons (IFN-α and IFN-β) 1,2 . Cytosolic DNA arising from intracellular bacteria or viral infections is a powerful PAMP and is also implicated as a DAMP in autoimmune diseases 1,3,4 . Over the past years, a variety of PRRs for cytosolic DNA have been reported: DNA-dependent activator of IFN-regulatory factors (DAI, also known as ZBP1) 5 , absent in melanoma 2 (AIM2) [6][7][8] , RNA polymerase III 9,10 , leucine-rich repeat (in Flightless I) interacting protein-1 (LRRFIP1) 11 , DExD/H box helicases (DDX41, DHX9 and DHX36) 12,13 and IFN-inducible protein IFI16 14 . However, these PRRs are either cell-type- or DNA-sequence-specific, are possible accessory factors (DExD/H proteins), or trigger different pathways such as caspase-1 activation (AIM2) or a β-catenin-dependent signalling pathway (LRRFIP1) 15 .
Although the DNA sensor for type I IFN production with broad specificity and cell distribution was not identified until recently, it was known that IRF3 and NF-κB activation in response to DNA requires STING (stimulator of interferon genes, encoded by the gene TMEM173; the protein is also known as MITA, MPYS or ERIS), a transmembrane protein that is resident on the endoplasmic reticulum [16][17][18] . STING colocalizes with DNA in vivo but binds DNA only with low affinity in vitro 19 , suggesting the presence of an additional sensor. Furthermore, STING is a direct PRR for cyclic dinucleotides such as c-di-AMP and c-di-GMP 20 , which are signalling molecules in prokaryotes and trigger IFN in response to, for example, intracellular bacteria 21,22 .
In the presence of DNA cGAS produces cGAMP, which is an endogenous second messenger that activates STING 18 , explaining how STING can stimulate IFN in response to both cyclic dinucleotides and DNA. To reveal the mechanism of DNA-stimulated cGAMP synthesis, we determined the crystal structure of porcine cGAS Mab21 (residues 135-497, comprising the highly conserved, DNA-stimulated nucleotidyl transferase (NTase) domain) with and without a 14-mer dsDNA ligand and nucleotide substrates, along with functional studies in vitro and in living cells.
Crystal structure of cGAS Mab21
cGAS is a 60 kDa protein composed of an unstructured, not well conserved amino-terminal stretch of approximately 130-150 residues followed by a highly conserved Mab21 domain that belongs to the nucleotidyl transferase (NTase) superfamily 24 . To overproduce and crystallize cGAS, it was necessary to genetically remove the unstructured N-terminal tail. The resulting cGAS Mab21 used in this study (residues 155/161-522 for human cGAS and residues 135-497 for porcine cGAS) possesses DNA-dependent dinucleotide synthesis activity in the presence of a 50-mer dsDNA that induces IFN in THP1 cells ( Fig. 1a and Supplementary Fig. 1a, b). Whereas cGAS also produces cGAMP in the presence of a 40-mer dsDNA, no activity was observed when we omitted either GTP or ATP from the reaction mixture or substituted dsDNA with single-stranded DNA ( Supplementary Fig. 1a).
We determined the crystal structure of porcine cGAS Mab21 by single-wavelength anomalous dispersion to 2.5 Å resolution using a selenomethionine derivative. After density modification, we could build an initial model, which was completed and refined against the 2.0 Å resolution native data, resulting in good R-factors and stereochemistry (Supplementary Fig. 1c and Supplementary Table 1).
The Mab21 domain of cGAS comprises two lobes, separated by a deep cleft (Fig. 1b). Lobe 1 possesses the NTase fold with a two-leaved, highly twisted β-sheet (β1-β8) that is flanked on the outside by two long α-helices (αA and αB). At the inner side, lining the cleft, β1 and β6 harbour the signature catalytic site residues (E200, D202, D296) of the NTase superfamily that coordinate the catalytic Mg2+ ions and nucleotides. Lobe 2 is a bundle of four α-helices (αE-αH), connected to lobe 1 by a long 'spine' (αA), two linker helices (αC, αD) and by a long active site loop connecting αA and β1.
The molecular surface opposite the active site is a fairly flat, slightly concave 'platform', formed predominantly by αA, αC, αD and the nucleotide-binding loop. An intriguing protrusion (residues 367-382) is situated at one end of the platform. This protrusion contains highly conserved histidine and cysteines (H367, C373, C374 and C381), which together coordinate a Zn2+ ion (Fig. 1c). We denote this loop 'Zn thumb'. Its sequence is inserted between lobes 1 and 2 and is a highly conserved and characteristic feature of cGAS orthologues (Supplementary Fig. 1d), indicating an important functional role.
The cGAS-DNA-GTP-ATP complex
To reveal the structure of the activated conformation of cGAS, we co-crystallized cGAS Mab21(td) with a self-complementary 14-mer oligonucleotide, ATP, GTP and MgCl2. To trap an activated conformation of cGAS Mab21 with DNA and bound nucleotides we mutated the NTase catalytic residues E200 and D202 to Q and N, respectively, thereby preventing catalysis during crystallization. The resulting transferase-deficient (td) variant is denoted cGAS Mab21(td) . The structure of the cGAS Mab21(td) -DNA-GTP-ATP complex was determined by molecular replacement using the coordinates for apo cGAS Mab21 as search model. 2Fo − Fc and Fo − Fc maps revealed interpretable density for 13 out of 14 base pairs of the dsDNA duplex and for both nucleotides bound at the active site (Supplementary Fig. 2). The structure was refined at 3.1 Å resolution, resulting in a model with good R-factors and stereochemistry (Supplementary Table 1).
DNA is bound along the platform between the spine on one side and the Zn thumb on the other side (Fig. 2a). cGAS binds DNA predominantly by sequence-independent interactions to both phosphate-sugar backbone strands along the minor groove (Fig. 2b, c). Hereby, cGAS binds seven nucleotides at the core of the platform, which are recognized by at least eleven residues via specific side-and/or main-chain contacts. In addition to the phosphate and sugar contacts, two arginine fingers (R150 and R192) are inserted into the minor groove, additionally stabilizing the interaction in a fairly sequence-independent manner. Besides binding to the array of conserved positively charged residues at the bottom of the platform, DNA is also bound by the spine and the Zn thumb. The continuous helix of the spine in apo-cGAS Mab21 is interrupted in the DNA complex and a DNA backbone phosphate is bound at the central kink of the spine helix. On the other side of the platform, the Zn thumb contacts the DNA backbone near the major groove. We do not see close, direct polar contacts between Zn thumb and DNA, but do not want to rule out water-mediated interactions here (Supplementary Fig. 2a).
The Zn thumb does not substantially change conformation or location between apo and DNA-bound cGAS. It seems to be a rather rigid element, in which the zinc ion serves as a structural stabilizer of the protruding loop, similar to Zn2+ in regulatory domains of RIG-I-like receptors 25 . The location of the Zn thumb at the backbone near the major groove suggests that it may assist in binding to B-form DNA. In support of this, we do not see a substantial perturbation of the bound DNA from canonical B-form DNA.
Altogether, our structure suggests a specific recognition of B-form dsDNA by cGAS through an extended B-DNA binding platform and
flanking 'Zn thumb' across both lobes of the enzyme. The observed mode of binding is consistent with the key role of cGAS in sensing very different types of DNA in a sequence-independent manner 18,23 .
Structure-function analysis
To validate the structural results, we mutated several conserved positively charged residues at the DNA-binding platform of human cGAS, two active site residues, two zinc ligands in the Zn thumb, or the entire Zn thumb and tested for nucleotidyl-transferase activity in vitro by thin-layer chromatography (TLC) (Fig. 3a). cGAS produces a product that migrates approximately in the range of c-di-AMP synthesized by DisA 26 , consistent with formation of a dinucleotide. The conserved active site residues of NTases (human E225+D227; porcine E200+D202 and human G212+S213) are essential for in vitro activity of cGAS Mab21 . Moreover, mutation of conserved positively charged residues at the centre and flanking regions of the platform (K173+R176 and K407+K411) either diminishes or abolishes activity, in accordance with this site being important for DNA sensing. Finally, disruption of the zinc-binding site of the thumb (human C396+C397, Zn thumbless) abolishes DNA-induced NTase activity in vitro, highlighting the functional importance of the conserved Zn thumb in DNA binding.
To test the effect of active site, platform and thumb mutations in living cells, we measured the transactivation of an IFN-β promoter reporter by transiently expressing human cGAS variants in HEK293T cells that stably expressed murine STING (Fig. 3b). Induction of IFN-β by cGAS Mab21 (human cGAS 155-522) in these cells is only moderately reduced compared to wild-type cGAS, showing that the Mab21 domain structurally addressed in this study is the catalytically active functional core of the sensor. The activity of full-length cGAS was abolished when residues of the NTase active site were mutated (E225Q/A+D227N/A or G212A+S213A). Mutating charged platform residues (K173A+R176A; K407A+K411A) substantially reduced the activity of cGAS in living cells. Likewise, disrupting the zinc-binding site of the thumb (C396A+C397A, Zn thumbless) severely compromised cGAS activity. These data validate the in vitro biochemical data and emphasize the importance of the structure-derived motifs and elements in living cells.
To see whether Zn thumb and conserved platform surface residues are important for dsDNA binding and activity, we performed electrophoretic mobility shift assays (Fig. 3c). Both porcine and human wild-type cGAS Mab21 bind efficiently to dsDNA and, surprisingly, also to dsRNA (Supplementary Fig. 3a, c). The mutations in platform and thumb either did not affect DNA/RNA binding under these conditions, or reduced but did not abolish it ( Supplementary Fig. 3b). However, both mutants fail to show DNA-stimulated activity under conditions where they still bind DNA, and dsRNA fails to stimulate activity under conditions where it binds robustly to the protein ( Supplementary Fig. 3c, d). Thus, although these analyses validate the functional relevance of the DNA binding platform and Zn thumb on activating cGAS, they suggest that DNA or RNA interactions per se are not sufficient to activate the enzyme, indicating for instance the necessity for a precise DNA-induced structural switch.
NTase and DNA-induced structural switch
To reveal the mechanism of activation of cGAS by DNA, we first analysed the NTase mechanism. We see clear electron density for two nucleotide triphosphate moieties (Supplementary Fig. 2b). The two bases partially stack in an approximately 90° rotated orientation and are inserted into a hydrophobic/aromatic pocket, sandwiched between
I298 (lobe 1) and Y413 (lobe 2). The current resolution of the diffraction data does not allow us to unambiguously determine which base is adenine and which guanine. Binding of R353 at nucleobase 1 (the 'receiving substrate' of NTases) near O6 and N7 would argue for this being guanine. In general, nucleobase 1 (interpreted as guanine here) is in hydrogen bonding distance to S355, S357 and T186, suggesting that this nucleotide is specifically recognized. In contrast, we do not observe direct hydrogen-bonding contacts of the protein to nucleobase 2 (the 'transferred' nucleotide in NTases; interpreted as adenine here). Nevertheless, this recognition might be mediated via water molecules such as in 3′ terminal uridylyl transferases 27 .
The structure provides a mechanism for attack of nucleotide 1 on nucleotide 2, consistent with the mechanism of other NTases, for example, CCA adding enzyme 28 . The triphosphate chain of nucleotide 2 is well coordinated via S188 (lobe 1), S412 (lobe 2) and Mg2+ bound to E200 (Q in cGAS Mab21(td) ) and D202 (N in cGAS Mab21(td) ). As a consequence, the relative orientation of lobes 1 and 2 is important for the phosphate coordination of nucleotide 2. In our conformation, the α-phosphate of nucleotide 2 is well placed and oriented to promote nucleophilic attack of the sugar 2′ OH from nucleotide 1 to form the 2′-5′ linkage (Fig. 4a, see ref. 29). The attacking OH of nucleotide 1 is polarized and activated by D296, consistent with the conserved features of NTases 24 . A second Mg2+ could be important for this catalytic step. However, distinct localization will require higher resolution.
cGAS is proposed to form a cyclic dinucleotide, which would require a second catalysis step and an additional attack of the OH of nucleotide 2 on the phosphate of nucleotide 1. Such an attack will require an almost 180° flip of the sugar moiety of nucleotide 2 to place its α-phosphate appropriately. In principle this is possible: in the course of our studies we determined the crystal structure of cGAS Mab21 bound to UTP in the absence of DNA and do observe an appropriate flip of the sugar moiety (Supplementary Fig. 4). In any case, our structure satisfactorily explains the catalysis of formation of a specific (at present linear) dinucleotide by cGAS, but formation of a cyclic dinucleotide needs to be addressed in future studies.
To reveal a potential activation mechanism of cGAS, we superimposed apo-cGAS, cGAS Mab21 -UTP and the cGAS Mab21(td) -DNA-GTP-ATP complex (Fig. 4b, c and Supplementary Fig. 5a, b). We used cGAS Mab21 -UTP because UTP binding orders the β-sheets on lobe 1 and we can also visualize conformational changes specifically induced by dsDNA rather than the nucleotides.
Although UTP binding to cGAS ordered the nucleotide-binding loop in the active site to some extent, it did not substantially change the overall structure and active site geometry of cGAS (Supplementary Fig. 5b). In contrast, DNA phosphate binding to the spine (Fig. 4b) triggers a substantial structural switch in the spine helix (Fig. 4c) that closes lobes 1 and 2 and rearranges the active site loop, allowing magnesium coordination by E200 to position and activate nucleotide 2.
To test the role of this DNA-induced structural switch we mutated human L174 to N. L174 (porcine L148) is repositioned in response to DNA binding to stabilize the nucleotide-binding loop, but does not directly bind DNA or NTPs (Supplementary Fig. 5c). Although L174N shows fairly normal DNA binding ( Fig. 3c and Supplementary Fig. 3b), it lacks DNA-stimulated cGAMP synthetase activity in vitro (Fig. 3a) and shows decreased interferon stimulation in cells (Fig. 3b). Thus, the structural and biochemical data suggest that cGAS is activated by a DNA-induced structural switch that rearranges the NTase active site.
Conclusion
Here we provide the structure and mechanism of activation of the cytosolic DNA sensor cyclic-GMP-AMP synthase that readily explain the synthesis of a linear dinucleotide intermediate by cGAS in response to DNA binding. The backbone binding of a canonical B-DNA by cGAS is consistent with a broad specificity innate immune PRR for cytosolic DNA and the structural elements of cGAS such as the positioning of residues involved in minor-groove binding, arginine fingers and the Zn thumb suggest that cGAS specifically responds to B-form DNA. This might explain the function of other innate immune DNA sensors to detect non-canonical DNA structures, such as DAI 5 . A structural switch transmitted by proper B-form DNA binding to the active site could also explain the lack of activation by dsRNA or in mutants that still bind DNA: slightly different conformations of RNA-bound or DNA-bound mutant cGAS would not trigger robust cGAMP synthesis as even small differences in the active site geometry can strongly affect catalytic rates of enzymes.
In future, it will be important to address the specificity for other DNA structures in the activation of cGAS in more detail to see which types of DNA structures can activate cGAS. It will also be important to investigate additional requirements for efficient DNA sensing in vivo, because although shorter dsDNA molecules can stimulate cGAS Mab21 in vitro, DNA larger than 50-mer is required for efficient IFN stimulation in vivo 14,19 . One possibility is that fraying of shorter DNA molecules prevents efficient stimulation or that the positively charged N terminus contributes to sensing of longer DNA molecules. In addition, STING might have a direct role in DNA binding in a larger context in vivo 19 , although we do not see strong DNA binding in vitro and IFN stimulation in response to DNA in HEK293T cells in the absence of cGAS ( Supplementary Fig. 6).
Interestingly, cGAS has remarkable fold similarity to the antiviral protein oligoadenylate synthase 1 (OAS1) 30,31 (Fig. 5), the sensor of cytosolic dsRNA. The structural similarity not only embraces the overall fold, several active site features and arrangement of lobes 1 and 2, but also certain structural elements of the platform, including the long 'spine' helix. Like cGAS, OAS1 binds dsRNA along the 'platform' and triggers a structural change that is transmitted to the active site 31 . However, whereas OAS1 is activated by A-form RNA, cGAS is activated by B-form DNA. The Zn thumb in cGAS, missing in OAS1, probably acts as a molecular 'ruler' to specifically trigger activation in response to B-form but not A-form nucleic acids (Fig. 5). Despite these differences, cGAS shows a structural switch induced by dsDNA that is very similar to that of OAS1 induced by dsRNA 31 (Fig. 5). Thus, our results structurally unify dsDNA and dsRNA sensing by cGAS and OAS1 NTases, respectively, in the innate immune system and suggest that both processes are evolutionarily connected. Note added in proof: After submission of the revised version of this manuscript, Gao et al. 32 reported related structures of cGAS and its complexes with DNA and nucleotides.
METHODS SUMMARY
Proteins were produced in Escherichia coli and purified by affinity, ion exchange and size exclusion chromatography. Apo, UTP-and DNA-ATP-GTP-bound cGAS Mab21 and its catalytic inactive form were crystallized by hanging or sitting drop vapour diffusion. The structure of apo cGAS Mab21 was determined by singleanomalous dispersion phasing on selenomethionine derivatized protein. The other structures were determined by molecular replacement using apo cGAS Mab21 as search model. NTase assays were performed by thin layer chromatography and phosphor imaging. DNA and RNA binding were assessed by electrophoretic mobility shift assays. Analysis of cGAS mutants in living cells were performed in HEK 293T cells stably expressing full-length murine STING and transfected with an IFN-b promoter reporter plasmid. | 2016-03-01T03:19:46.873Z | 2013-05-30T00:00:00.000 | {
"year": 2013,
"sha1": "3fdc4daecbf5975c0d1a6861036af925bb608dd4",
"oa_license": null,
"oa_url": "https://bib-pubdb1.desy.de/record/168643/files/DESY-2014-02579.pdf",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "e086e42c336e4410edc79c9fd00b08b94fb8ff3e",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
201963996 | pes2o/s2orc | v3-fos-license | Treatment-Induced Brain Plasticity in Psychiatric Disorders
In tandem with a better-informed neurobiological model of mental illness, psychiatry has progressively been shaped into its current state of clinical neuroscience. The traditional dichotomy of organic versus endogenous mental disorders has been replaced by the growing recognition that all changes in mental processes are accompanied by changes in structures or functions of the brain. Thus, all psychiatric interventions are deemed to have a biopsychosocial nature, whereby drugs in addition to their effect on the brain have a psychological effect, and psychotherapies beyond their psychological effects may alter the brain. In this view, the ultimate goal of any psychiatric treatment is to induce neural plasticity in a manner that restores the full original function and potential of the injured brain. The present chapter gives an insight into how evidence-based treatments achieve their therapeutic effects on the level of cerebral reorganization across a host of psychiatric disorders. The main theme of this work is the posited mechanism of neuroplasticity at the neural-systems level for each treatment modality.
Neural parameters of therapeutic change
Mechanisms of neuroplasticity constitute fundamental processes behind learning and memory that determine the ability of neuronal systems to incorporate novel environmental stimuli and to make appropriate adaptive responses. Delineating cerebral processes of recovery from an insult to the brain holds promise for developing more refined and novel treatment modalities to target specific areas of pathology. Functional neuroimaging studies provide a means to characterize changes in brain function related to psychiatric interventions. Well established in indexing biomarkers of psychiatric disorders, novel neuroimaging techniques are now used to depict patterns of neural plasticity mediating post-treatment amelioration of symptoms. Various modalities provide indices of brain activity by measuring cerebral blood flow or glucose metabolism, including functional magnetic resonance imaging (fMRI), 18F-fluorodeoxyglucose positron emission tomography (FDG-PET), and 99m-technetium hexamethylpropyleneamine oxime single photon emission computed tomography (99mTc-HMPAO SPECT) [see ref. 1 for a detailed review]. One powerful imaging modality that has significantly advanced our knowledge in this field is task-based functional magnetic resonance imaging (t-fMRI).
It consists of a paradigm defined by a functional measurement that includes stimulation adjusted to the brain area under investigation. The subject is required to perform a defined motor or sensorimotor, language, or other cognitive or visual task in the MRI scanner while, typically, GRE T2*-weighted echo planar images (EPI) are rapidly acquired [for a more in-depth description of fMRI see ref. 1]. Local changes in cerebral blood flow (CBF) during task execution relative to resting state are used to infer brain regions/networks functionally involved in specific tasks. To ultimately determine the specificity and amount of therapy-induced neuroplasticity, multiple pre- and post-therapy scans are compared against activity pattern changes in other active treatment groups and a no-treatment waiting-list group [2].
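As a minimal illustration of this pre-/post-therapy comparison logic, the sketch below contrasts change in a single hypothetical region-of-interest (ROI) activation estimate between a treated group and a waiting-list group. It is not drawn from any cited study: the ROI values, group sizes, and the simple change-score t-test are assumptions made for demonstration, whereas real analyses are run voxel- or cluster-wise within general linear model frameworks.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical task-related ROI activation estimates (arbitrary units) per subject
pre_treat = rng.normal(1.0, 0.3, 25)
post_treat = pre_treat - rng.normal(0.25, 0.2, 25)  # assumed attenuation after therapy
pre_wait = rng.normal(1.0, 0.3, 25)
post_wait = pre_wait - rng.normal(0.02, 0.2, 25)    # little change without treatment

# Within-group pre vs. post comparison (paired t-test)
t_within, p_within = stats.ttest_rel(pre_treat, post_treat)

# Treatment-specific effect: compare change scores between groups
change_treat = post_treat - pre_treat
change_wait = post_wait - pre_wait
t_between, p_between = stats.ttest_ind(change_treat, change_wait)

print(f"treated group, pre vs. post: t = {t_within:.2f}, p = {p_within:.4f}")
print(f"change scores, treated vs. waiting list: t = {t_between:.2f}, p = {p_between:.4f}")
```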
Putative neuroplastic mechanisms of pharmacotherapy
Pharmacotherapy constitutes the first-line treatment modality for the majority of psychiatric disorders, and various theories exist as to how drug-induced neurochemical changes reverse different psychiatric symptoms. The posited purely neurotransmitter-based mechanism of action postulates either increased or reduced synaptic concentration of a target neurotransmitter that is implicated in a given disorder. This model is challenged by a disjunction in the timescale of the onset of neurochemical versus therapeutic effects, wherein the potentiation or attenuation of neurotransmitter function often occurs within hours of administration and the clinical improvement is typically seen days or weeks after [3]. In the quest for new rapid-acting agents, contemporary approaches to the understanding of drug action focus on the role of adaptive neuroplastic processes that correlate in time with the onset of clinical improvement, and hence are hypothesized to represent a more direct treatment target.
Antidepressant drugs and mechanisms of neuroplasticity
Current national and international guidelines recommend selective serotonin reuptake inhibitors (SSRIs) as first-line treatment for most patients with major depression, and the use of serotonin-norepinephrine reuptake inhibitors (SNRIs) in patients resistant to the former [4,5]. Although novel, better tolerated and more selective inhibitors of serotonin and norepinephrine reuptake are continuously being developed, the efficacy of tricyclic antidepressants such as amitriptyline for severe depression has never been surpassed [6].
Most currently licensed antidepressants act to enhance monoamine neurotransmission, where they are believed to achieve therapeutic effects by increasing availability of serotonin and/or norepinephrine, at least initially [7]. On the neural level, antidepressants normalize aberrant neural activity patterns underlying the negative bias in affective information processing, posited to play a central role in the etiology and maintenance of the depressed state [8]. Thus, antidepressants were shown to attenuate hyperactivity in limbic areas of the brain (amygdala, insula, anterior cingulate), and enhance regulatory activity in the dorsolateral and medial prefrontal cortex as measured by functional magnetic resonance imaging [9,10]. It was demonstrated that 7 days of treatment with the SSRI citalopram or the SNRI reboxetine reversed abnormal patterns of neural response to affective information, and induced a similar direction of change in healthy individuals [11,12]. Noteworthy, short-term SSRI administration normalized amygdala hyperactivity in response to negative emotional stimuli prior to clinical changes in mood ratings in placebo-controlled studies. These findings allow one to speculate that treatment-induced early reversal of negative emotional bias sets the scene for therapeutic recovery over time by reducing the influence of this key maintaining factor [14].
Antipsychotic drugs and mechanisms of neuroplasticity
Antipsychotic medication is the mainstay of the effective management of psychosis, with schizophrenia being the most prevalent psychotic disorder. Most of what we know about the action of antipsychotic drugs is at the receptor level, where abnormalities in neurotransmission constitute either an excess or a deficiency of neurotransmitters, including dopamine, serotonin, and glutamate. First-generation drugs act as antagonists of dopamine D2 receptors and target mainly positive symptoms such as hallucinations and delusions. The receptor-binding profile of second-generation drugs extends beyond D2 antagonism to other neuroreceptors, including serotonin 5-HT2A in the frontal lobe, thus accounting for the superior efficacy of these drugs against negative symptoms and cognitive disorganization [15,16]. Overall, treatment response has been shown to be associated with the level of D2 occupancy, which is the target of all currently licensed antipsychotics [17]. To delineate the therapeutic mechanism of clinically effective drugs beyond the receptor level, research has focused on neural systems effects before and after pharmacotherapy in medication-naïve patients with first-episode psychosis. Functional MRI studies revealed pre-treatment functional alterations within frontostriatal circuitry, marked by patterns of hypoactivity within the dorsolateral/medial prefrontal cortex coupled with hyperactivity in the hippocampus and striatum [18][19][20]. Thus, aberrant frontostriatal circuitry might represent a potential system-level mechanism of psychosis and a candidate treatment target for antipsychotics. Post-treatment findings lend some evidence to validate this model, showing increases in task-related frontal cortical activation in patients who underwent 12 weeks of quetiapine fumarate treatment compared to a drug-naive group [21,22], and in a small group of patients with schizophrenia medicated with risperidone [23]. A similar study on cortical structure and function alterations within 1 year of psychosis onset in unmedicated schizophrenia patients versus patients under short-term therapy with atypical antipsychotics revealed a more complex relationship [20]. Although the treatment was associated with enhanced cognitive control and increased prefrontal, middle temporal, parietal, and occipital activity, it also revealed post-treatment prefrontal cortical thinning in the treatment group. The mechanism by which antipsychotics are associated with the loss of gray matter remains unclear; however, neuroinflammatory models posit elevations in proinflammatory cytokine levels [24], microglia activation [25], and increased extracellular volume in white and gray matter [26]. Thus, the study adds to the growing literature on therapeutic mechanisms of antipsychotics, mediated by normalization of aberrant frontal cortical function, and suggests that caution must be exercised in interpreting neuroanatomical changes as being potentially deleterious to brain function.
Mood stabilizers and mechanisms of neuroplasticity
Lithium and anticonvulsants with mood-stabilizing properties (lamotrigine, valproate) constitute first-line drug treatment for episodes of depression and mania with variable inter-episode remission [27][28][29]. Whilst different compounds may differentially target specific facets of bipolar disorders, lithium is effective for all phases including acute depression [30]. On the neural level, functional imaging studies consistently point to pre-treatment frontolimbic dysfunction during cognitive control and emotion paradigms in bipolar disorder patients [31][32][33]. Thus, abnormal emotion regulation and impaired cognition might be attributed to interference with cognitive control within the medial prefrontal cortex through overactivity in subcortical structures (amygdala, ACC, insula) involved in emotion generation and appraisal. Findings on mood-stabilizer-induced neural plasticity are less consistent, owing to methodological limitations that make it difficult to draw firm conclusions. Whilst some studies find no significant effects of pharmacotherapy upon functional measures of cerebral reorganization in bipolar patients [34][35][36][37][38][39][40][41], others reported increased task-related prefrontal cortical activity coupled with normalized subcortical limbic activity during emotional processing [38,39,42,43]. Typically, individuals recruited in these studies are able to tolerate medication withdrawal and washout and are therefore likely to have a milder form of the disorder. Given that it is not clinically feasible to withdraw all patients with bipolar disorder from medication, individuals with a more severe form of the disorder are likely to be underrepresented in many studies, and therefore findings might not be generalizable to the group most in need of new treatments.
Putative neuroplastic mechanisms of psychotherapy
Although studies of the neural parameters of therapeutic change under psychotherapy are under-represented relative to analogous studies of medications, the emerging literature supports the thesis that changes in affect, cognition and behavior mediated by psychotherapy have demonstrable neuroplastic underpinnings. Since the call for more neuroscientifically informed approaches to psychotherapy [44], studies have elucidated the neural mechanisms of psychotherapy-induced changes in brain activity profiles across a range of psychiatric disorders.
Cognitive behavioral therapy and mood disorders
Psychotherapy processes appear to target maladaptive cognitive and emotional patterns by engaging their biological analogues that are responsive to a discrete mode of treatment [45]. One salient example involves the re-appraisal technique in cognitive behavioral therapy (CBT) for depression, where patients are invited to re-interpret their negative perceptions of unpleasant occurrences in a more positive light. Mood ratings before and after re-thinking negative events revealed improved positive affect, mediated by elevated activity in the dorsolateral and dorsomedial PFC coupled with decreased activity in the amygdala and orbitofrontal cortex [46]. To delineate the CBT-induced mechanism of neuroplasticity in depression, FDG-PET scans before and after psychotherapy relative to paroxetine treatment were acquired from patients instructed to 'avoid ruminating on any one topic' during scanning [47]. Although the efficacy of both treatments was comparable, differential activity patterns emerged in frontal and limbic regions, implying that medication and psychotherapy might achieve their therapeutic effects in different ways. Whilst CBT was associated with decreased metabolism in multiple frontal regions, including the dorsolateral PFC, together with increased activity in the hippocampus, parahippocampal gyrus, and dorsal cingulate gyrus, paroxetine induced increased PFC metabolism and decreased hippocampal, parahippocampal, posterior cingulate and ventral subgenual cingulate activity. This modality-specific mechanism of neuroplasticity posits that CBT exerts 'top-down' changes in cognitive processing in favor of engaging ventral and limbic regions, which mediate attention to personally salient stimuli, whereas antidepressant drugs prompt 'bottom-up' disengagement of ventral, frontal and limbic regions. Although this model runs counter to the aforementioned emotion regulation model, the divergent findings might result from the use of healthy subjects in the former study and patients with depression in the latter, invoking the notion that brain activation results from the interaction between the underlying brain state and the treatment modality [48].
In an effort to elucidate the CBT-induced neural mechanisms in anxiety disorders, a functional neuroimaging study examined pre- and post-CBT brain activity patterns in non-medicated patients with spider phobia and in healthy subjects [49]. The former exhibited elevated activation in the parahippocampal gyrus and right dorsolateral PFC prior to the treatment, which was normalized after successful group CBT sessions focused on exposure therapy. Given that the parahippocampal gyrus mediates contextual memory, the authors suggested that after CBT less demand was placed on the dorsolateral PFC to construct a cognitive defense against the perceived threat. Moreover, a therapy-induced shift of activity to the ventral PFC was indexed, which might play a role in the down-regulation of limbic activity and thereby in dampening the fear reaction. Collectively, these studies depict a neuroplastic model of cognitive behavioral therapy which posits altered engagement of dorsal prefrontal circuitry to down-regulate limbic and ventral prefrontal structures, thereby improving affect in response to emotionally significant contexts.
Dialectic behavioral therapy and borderline personality disorder
Given that psychotherapy is the gold-standard treatment modality for borderline personality disorder [50], extensive research efforts have focused on measuring brain changes induced by specific modes of therapy. To date, dialectic behavioral therapy is the most researched, refined and evidence-based therapy, informed by a deficit model of self-regulation, distress tolerance and interpersonal skills, deemed to arise from the transaction between highly sensitive individuals and invalidating environments [51,52]. DBT purports to render individuals more mindful and able to manage relationships effectively by incorporating the concept of dialectics and the strategy of validation into an approach focused on skills acquisition and behavioral shaping.
Consistent with the skills deficit model of BPD, neuroimaging evidence supports the view that the acquisition of affective control strategies under DBT rebalances the neural substrates of emotion regulation. One salient example indexed neural activity alterations under re-appraisal and reported dampened insula and ACC activity, together with enhanced connectivity of the latter to the medial and superior frontal gyrus, superior temporal gyrus, and inferior parietal cortices [53]. Notably, a treatment-induced increase in dorsal ACC activity during exposure to negative stimuli was associated with improvement in self-reported BPD symptoms, suggesting a possible biomarker of improved affect regulation. In a similar study, Winter et al. [54] set out to establish whether the neural correlates of distraction might be amenable to successful DBT. To this end, 31 BPD patients under constant medication were scanned before and after a 12-week residential DBT-based treatment while performing a distraction task. When compared to 15 BPD control patients under non-DBT-based treatment or no treatment at all, and 22 healthy participants, the 16 DBT responders exhibited attenuated activity in the right inferior parietal lobe/supramarginal gyrus. Notably, this pattern of brain activity was correlated with improvement in self-reported borderline symptom severity (ZAN-BPD). Furthermore, treatment was associated with a reduction in right perigenual ACC activity and increased activity in these regions during distraction in the context of aversive stimuli. These findings might reflect a shift from emotional to more cognitive processing in the context of aversive stimuli, thereby suggesting an improvement in emotional susceptibility under DBT.
Taken together, the aforementioned studies support the notion that DBT processes target maladaptive emotional patterns by altering their biological analogues, which are responsive to discrete cognitive strategies. DBT normalizes frontolimbic imbalances within the disturbed circuitry, which appears to mediate the amelioration of BPD symptomatology. Caution must be exercised, however, when interpreting these results, as medications may attenuate emotional responses in BPD patients [55], and giving combinations of drug subtypes makes it impossible to isolate the effect of a single agent.
IPT and depression
IPT is a short-term treatment that typically consists of 12-16 one-hour weekly sessions focused on improving interpersonal relationships. Drawing directly on identifiable issues between patients and therapists, it purports to instil the ability to make the necessary adjustments in interpersonal situations that will help to reduce symptoms of depression. Several imaging studies have examined biomarkers of cerebral reorganization induced by IPT relative to pharmacotherapy. One of them compared the effects of IPT and venlafaxine (37.5 mg daily) on regional CBF using 99mTc-HMPAO SPECT in 28 drug-naive or drug-free patients with MDD [56]. Whilst comparable clinical improvements were mediated by elevated activity in the right basal ganglia in both treatment groups, patients in the IPT group also exhibited an increase in right posterior cingulate activity. However, drawing firm conclusions from these findings is hampered by methodological issues, as four patients with a strong preference for venlafaxine could choose that treatment, while one preferred IPT. Moreover, subjects in the latter group evidenced greater striatal perfusion, potentially reflecting a design limitation. The brief duration of IPT and the relatively low dose of venlafaxine give rise to the possibility that both treatments were suboptimal, thereby underscoring the engagement of limbic and paralimbic recruitment in the psychotherapy-induced changes reported in parallel research [56].
A similar study on the effects of IPT and paroxetine relative to healthy controls [57] reported results analogous to the CBT effects described by Goldapple et al. [47]. Whilst treatment response in both groups was associated with an increase in metabolism in limbic and paralimbic regions (the right insula and left inferior temporal lobe) relative to controls, unlike CBT the effects of IPT were mediated by a decrease in dorsal and ventral prefrontal cortical metabolism. A follow-up study set out to correlate treatment-mediated changes in brain activity patterns with the amelioration of mood symptoms measured by the Hamilton Depression Rating Scale and the tension/anxiety and fatigue clusters of the Profile of Mood States [58]. A cohort of 39 patients under either paroxetine or IPT for MDD exhibited post-treatment reductions in ventral and dorsal frontal lobe metabolism, which were associated with improvements in the anxiety/somatization and psychomotor retardation symptom clusters. Unlike previous findings of a negative correlation between activity in the dorsolateral PFC and improvement on global depression scores under CBT, in the present study alterations in dorsolateral PFC activity positively correlated with improvement in cognitive disturbance. These findings suggest that each treatment modality engages dorsolateral PFC function differently to achieve a specific therapeutic effect. While CBT appears to engage this region to attenuate 'over-thinking' in depression, IPT might engage it to improve general cognitive abilities.
Psychoeducation and euthymic bipolar disorder
Given that pharmacotherapy is often ineffective for the treatment of residual depressive, dysthymic and dysphoric symptoms [59], researchers have shown interest in psychoeducation for targeting emotional and cognitive processes [60][61][62][63]. Psychoeducation is a treatment option for bipolar disorder focused on improving coping strategies to manage symptoms in everyday life, compliance with medication to prevent thymic relapses, quality of life and social functioning [64,65]. Whilst a wealth of research exists to support its efficacy in improving clinical symptoms, less is known about how therapeutic change is achieved at the level of neural functioning.
Favre and collaborators [66] set out to index neural processes before and after psychoeducation therapy in 16 euthymic bipolar patients (EBP) matched against 16 healthy subjects. Pre-treatment fMRI scans revealed reduced activity of cognitive control regions (bilateral inferior and left superior frontal gyri, right insula, right fusiform gyrus and bilateral occipital gyri) and elevated activity of emotion-related processing regions (bilateral hippocampus, parahippocampal gyri and the left middle temporal gyrus) in the treatment group. Thus, the aberrant cognitive and emotion processing that characterizes acute episodes in bipolar disorder appears to persist during the euthymic phase. Post-treatment clinical improvement was mediated by increased activity of the inferior frontal gyri and a pattern of decreased activity of the right hippocampus and parahippocampal gyrus. These findings suggest that psychoeducation improves cognitive control by engaging prefrontal networks and normalizes the generation of emotional responses by quieting activity within limbic networks.
Cognitive remediation therapy and schizophrenia
Cognitive remediation therapy (CRT) is an evidence-based treatment for neuropsychological deficits in memory, attention, executive function, social cognition or metacognition across a host of neuropsychiatric disorders [67][68][69]. There is a growing literature focused on the neurobiological changes that mediate cognitive recovery under this type of intervention in patients with schizophrenia [70][71][72][73], mood disorders [74], mild cognitive impairment [75] and in healthy adults [76]. The majority of studies have examined the effects of cognitive remediation on brain functioning in patients with schizophrenia and have been summarized in several systematic reviews and meta-analyses [76][77][78]. The findings lend support to the frontal hypoactivation mechanism of cognitive impairment and suggest that cognitive remediation improves the efficiency of these networks. The most commonly reported areas of post-treatment improvement in efficiency involved prefrontal and thalamic regions. Meusel and collaborators [73] set out to describe the functional correlates of cognitive remediation in patients with bipolar disorder or depression versus healthy controls. Thirty-eight subjects completed 10 weeks of treatment and were scanned (fMRI) during an n-back working memory task and a recollection memory task to investigate the potential for change within these networks. Pre-to-post improvements correlated with functional activation in lateral and medial prefrontal, superior temporal, and lateral parietal regions, suggesting neural correlates of improved working memory under cognitive remediation.
Discussion
The predominant paradigm of modern psychiatry posits that advances in the neurosciences can unravel the mysteries of mental illness. Since the 1990s were declared the decade of the brain, imaging evidence has taught us a great deal about the neural correlates of symptom expression and recovery from an insult to the brain [79]. Despite remarkable neuroscientific advances, specific mechanisms behind major mental illnesses have, thus far, not been identified [80]. Moreover, whilst neurotransmitters are known to mediate synaptic pathways, research has not yet been able to explain any psychiatric disorder in terms of chemical imbalances [81]. Various reasons exist as to why neuroscience is unlikely to provide a definitive understanding of the disordered mind. First and foremost, what prevents the scientific strategy of reducing psychiatry to neuroscience is the fact that the diagnoses listed in the Diagnostic and Statistical Manual of Mental Disorders are not diseases but merely syndromes without a precise endophenotype [82]. Moreover, the pathways from temperamental vulnerabilities to illness cannot be understood without taking into account psychosocial adversities [83]. In this view, associations between biomarkers of pathological and treatment processes are unlikely to be strong or linear. Pharmacotherapy, whilst useful in severe mental disorders, is not in any way curative, and psychosocial interventions continue to play an important role in psychiatric treatment, evoking multiple risk factors and complex interactive pathways to the disordered mind [84].
Research efforts in tandem with more powerful imaging techniques will further unravel the intricacy of cerebral organization behind pathological and treatment processes. Nonetheless, the scientific strategy to reduce psychiatry to neurosciences is hindered by a discrepancy between a clinical phenomenon and its neural substrate, which is rooted in a conceptual mind and brain gap.
Conclusion
Long before the era of functional neuroimaging, it was suggested that intervention-driven changes in affect, cognition and behavior have measurable biological analogues [85]. To date, efforts to characterize the neural mechanisms of recovery processes have amassed vast neuroimaging data on treatment-induced brain plasticity. Pharmacotherapy and psychotherapy appear to engage neural circuits that are responsive to a discrete treatment modality. Although both have similar effects on brain activity patterns in patients who share the same diagnosis, their neural systems profiles are not identical. While the former appears to act in a bottom-up manner on a subcortical level to regulate higher cortical structures, the latter acts top-down on cortical activity to subsequently impact subcortical regions. Although neuroimaging techniques have revolutionized our biological insight into recovery processes, little can be concluded about the precise neurobiological mechanisms of these changes. The remaining question is whether these changes elucidate a neural mechanism of treatment action or simply reflect correlates of symptom amelioration. Despite methodological and theoretical limitations, the neuroimaging literature holds promise to strengthen the credibility and utility of mainstay psychiatric treatments and to improve clinical decision-making.
Conflict of interest
The author has no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties. No writing assistance was utilized in the production of this manuscript. | 2019-09-09T18:27:57.501Z | 2019-08-01T00:00:00.000 | {
"year": 2019,
"sha1": "d21d5b75229911f24b342198b4c33879cd242103",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/66419",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "edc3472d89e12e2048e71fb991124998e42a2f2f",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
35580652 | pes2o/s2orc | v3-fos-license | Impact of Dynamic Path Loss Models in an Urban Obstacle Aware Ad Hoc Network Environment
This study highlights the importance of the physical layer and its impact on network performance in Mobile Ad Hoc Networks (MANETs). This was demonstrated by simulating various MANET scenarios using Network Simulator-2 (NS-2) with enhanced capability obtained by adding propagation loss models (e.g., a modified Two-Ray Ground model, the ITU Line of Sight and Non-line of Sight (ITU-LoS and NLoS) model in street canyons, and a combined path loss and shadowing model (C-Shadowing)). The simulation results were then compared with those of the original Two-Ray Ground (TRG) model already available in NS-2. The scenario primarily simulated was that of a mobile environment using the Random Way Point (RWP) mobility model with a variable number of obstacles in the simulation field (such as buildings, etc., causing variable attenuation) in order to analyze the extent of communication losses under the various propagation loss models. The performance of the Ad Hoc On-demand Distance Vector (AODV) routing protocol was also analyzed in an ad hoc environment with 20 nodes.
Introduction
Mobile Ad Hoc Networks are formed by a collection of mobile nodes that can establish wireless communication links among each other without any infrastructure. This feature gives such networks a distinguished edge over other networks (such as GSM, UMTS, CDMA, and LTE). However, it also brings new challenges and performance compromises (in terms of reliability, quality of service, scalability, etc.). To date, the majority of published research on MANETs has used simulation tools as the prime means of performance analysis. This is primarily due to the high cost involved in the realization of real ad hoc test beds. NS-2 [1] is the most popular simulation tool in the MANET research community [2]. This tool accommodates various routing, mobility, and propagation features key to analyzing the performance of MANETs. However, it assumes a flat terrain for simulation and does not accommodate geographical features of the simulation field that may affect the received signal strength at the receiver. In [3], the author introduced specialized mobility models, which restrict the mobility of nodes due to obstacles in the simulation area, and some variations have been suggested in [4] for the NS-2 environment. However, the focus of that work is mainly the mobility aspect of the nodes. Contrarily, a vast majority of simulation studies have used simplistic radio propagation models [5], such as Two-Ray Ground (TRG), for the performance analysis of MANET routing strategies, which results in more optimistic rather than realistic network performance. This study accommodates several new propagation models (i.e., the ITU Line of Sight (LoS) and Non-line of Sight (NLoS) path loss models in street canyons [6] and a combined shadowing-path loss model [7], along with a modified TRG model) in an obstacle-aware mobility environment. By varying the number of obstacles in the simulation field, the performance of the Ad Hoc On-Demand Distance Vector (AODV) routing protocol [8] has been analysed on the basis of various quantitative performance metrics. The comparison of simulation-based studies in the literature [9][10][11][12] also reveals that the use of simplistic propagation models (i.e., TRG) results in highly overestimated performance. This is simply because these models do not accommodate wireless signal propagation losses that mainly occur due to reflection, diffraction, scattering, and multipath effects (commonly caused by urban structures). Moreover, in [3], the authors discussed a modified TRG model that accounts for signal attenuation between two nodes obstructed by a wall or building, reducing the effective signal strength received at the receiver by a random value. However, this model does not cater for the increasing level of attenuation (i.e., due to an increase in obstacles) among communicating nodes. In [13], the author used the Attenuation Factor propagation model [14] in a constrained mobility environment, which accounts for the number of walls between transmitter and receiver and calculates the attenuation based on rgb values (i.e., scanning the picture and calculating the colour intensity) between them. The primary objective of the abovementioned work is obstacle-based mobility analysis, with little attention to the propagation perspective, specifically with increasing obstacles. The propagation impact with varying obstacles has not been examined in detail. This study investigates the impact of various path loss models in an obstacle-aware ad hoc environment by considering the effect of increasing obstacles (i.e., buildings, walls, etc.) on routing performance for AODV. The rest of this paper is organized as follows. Section 2 briefly summarizes the AODV protocol used in our simulation analysis. Sections 3 and 4 describe the propagation loss models and the mobility environment, respectively. Section 5 covers the performance metrics and Section 6 presents the motivation and methodology adopted for this study. Section 7 contains the results and a discussion of them, and Section 8 summarizes the key conclusions of this study.
Overview of Routing Protocol
Due to bandwidth constraints, it is readily understandable that an on-demand approach (i.e., route discovery is initiated only if there is a demand for it) is commonly used in wireless ad hoc network scenarios. AODV is a uniform and destination-based reactive protocol. It uses a table-driven routing strategy and destination sequence numbers to form an on-demand protocol. AODV maintains routing tables on the nodes. The source node initiates the route discovery process only in the presence of data packets in need of a route. An intermediate node may reply with a Route Reply (RREP) only if it knows a more recent path to the destination than the one known by the sender node. A destination sequence number is used to indicate how recent the path is, as follows. A new route request generated by the sender node is tagged with a higher sequence number, and an intermediate node that knows a route to the destination with a smaller sequence number cannot send the RREP message. Forward links are set up when a RREP travels back along the path taken by the Route Request (RREQ), so the routing table entries are used to forward the data packet. If an intermediate node is unable to forward the packet to the next hop or destination due to link failures, it generates a Route Error (RERR) message, tagging it with a higher destination sequence number. When the sender node receives the RERR message, it initiates a new route discovery for the destination node. Figure 1 shows the AODV mechanism implemented between a sender (S) and a destination (D) node using intermediate nodes 1 and 2.
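To make the sequence-number rule concrete, the following is a minimal Python sketch of the intermediate-node decision logic described above. It is an illustrative simplification written for this description (class and method names such as AodvNode and handle_rreq are ours, and reverse-route sequence bookkeeping is omitted); it is not the NS-2 AODV implementation.

```python
from dataclasses import dataclass

@dataclass
class RouteEntry:
    next_hop: int
    hop_count: int
    dest_seq: int            # destination sequence number (route freshness)

class AodvNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.routes = {}     # destination -> RouteEntry

    def handle_rreq(self, src, dest, requested_seq, reverse_hop):
        """Return 'RREP' if this node may answer the request, else 'FORWARD'."""
        # Record a reverse route towards the originator of the RREQ
        # (sequence bookkeeping for the reverse route is simplified away).
        self.routes.setdefault(src, RouteEntry(reverse_hop, 1, 0))
        if self.node_id == dest:
            return "RREP"                        # the destination always replies
        entry = self.routes.get(dest)
        if entry is not None and entry.dest_seq >= requested_seq:
            return "RREP"                        # known route is at least as fresh
        return "FORWARD"                         # otherwise rebroadcast the RREQ

    def handle_rerr(self, dest, new_seq):
        """Invalidate a broken route so the source triggers a new discovery."""
        entry = self.routes.get(dest)
        if entry is not None and new_seq > entry.dest_seq:
            del self.routes[dest]
```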
AODV is widely used in MANET simulations. We used this protocol with the anticipation that other routing strategies would be affected similarly by the obstacles and their impact on the propagation environment (which is the focal point of this study), and that the choice of protocol would therefore not change the results significantly.
Propagation Loss Models
Radio propagation models considerably influence the performance of wireless communication networks. Radio propagation loss models are used in simulations to estimate the received signal strength of each packet received by a node. NS-2 uses threshold values (i.e., the Carrier Sense (CS) and Receive (RX) thresholds), which define the minimum value of the received signal strength at which a node is still able to communicate successfully. If the value is smaller than the threshold, NS-2 considers that the receiving node did not receive the packet successfully. The following subsections present the deterministic and probabilistic propagation models used in our simulation scenarios.
Two-Ray Ground Path Loss Model
This model takes into consideration both the direct and the ground-reflected path between the transmitting and the receiving node. It shows better performance than the free space path loss model [15] for longer distances [14]. This is an empirical model, which uses the following equation to calculate the approximate received power:

P_r(d) = P_t G_t G_r h_t^2 h_r^2 / (d^4 L),

where P_t is the transmission power in watts, G_t and G_r are the transmitter and receiver antenna gains, h_t and h_r are the transmitter and receiver antenna heights, respectively, d is the communication distance, and L is the system loss. The Two-Ray model does not give a good result for short distances due to the oscillation caused by the constructive and destructive combination of the two rays. The free space model is a better choice for smaller distances. The NS-2 simulator therefore uses a cross-over distance d_c when this model is used. If d < d_c, the path loss is calculated with the Friis equation, and if d > d_c, the TRG model is used. At the cross-over distance both models produce the same result, so d_c can be calculated as

d_c = 4π h_t h_r / λ,

where λ is the carrier wavelength. This model has been found to be reasonably accurate in predicting the large-scale signal strength over distances of several kilometres for mobile radio systems that use tall towers (i.e., heights exceeding 50 m) as well as for LoS microcell channels in urban environments [16]. However, this is not a typical case in MANET scenarios (i.e., an infrastructureless environment). This model is readily available in NS-2 and was implemented by the Monarch group. We have modified this model in such a way that if an obstacle (i.e., a wall) exists between communicating nodes, the model subtracts 6 dB of power (i.e., the attenuation due to a single brick wall [14]) from the received signal strength. Thus, depending upon the number of walls that exist between two communicating nodes, a modified received signal strength is calculated from the existing TRG model.
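A minimal Python sketch of the modified TRG computation described above is given below. The helper names, the 2.4 GHz carrier frequency and the default antenna parameters are illustrative assumptions for an 802.11b-like setup, not the actual NS-2 C++ code; the 6 dB-per-wall term follows the modification described in the text.

```python
import math

C = 3e8  # speed of light (m/s)

def received_power_modified_trg(pt_w, d, n_walls, gt=1.0, gr=1.0,
                                ht=1.5, hr=1.5, freq_hz=2.4e9, loss=1.0):
    """Received power (W) under the modified Two-Ray Ground model.

    Uses the Friis equation below the cross-over distance and TRG above it,
    then subtracts 6 dB per intervening wall (single brick wall attenuation).
    """
    lam = C / freq_hz
    d_c = 4 * math.pi * ht * hr / lam                 # cross-over distance
    if d < d_c:                                       # free-space (Friis) region
        pr = pt_w * gt * gr * lam ** 2 / ((4 * math.pi * d) ** 2 * loss)
    else:                                             # Two-Ray Ground region
        pr = pt_w * gt * gr * ht ** 2 * hr ** 2 / (d ** 4 * loss)
    return pr * 10 ** (-0.6 * n_walls)                # -6 dB per wall

def packet_received(pr_w, rx_thresh_dbm=-85.0):
    """NS-2-style check: the packet is decodable only above RXThresh."""
    pr_dbm = 10 * math.log10(pr_w * 1e3)
    return pr_dbm >= rx_thresh_dbm
```

For example, packet_received(received_power_modified_trg(0.0316, 300.0, 2)) checks whether a 15 dBm transmission still closes the link over 300 m through two walls.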
ITU LoS-NLoS Model in Street Canyons
This path loss model is recommended by the ITU [6] for typical urban areas. It is a statistical model that calculates the path loss in the LoS and NLoS regions and models the sharp decrease in signal strength over the transition distance (i.e., going from the LoS to the NLoS region), known as the corner loss (see Figure 2). This model was originally developed by an Ofcom project [17] based upon measurements taken in two cities (London and Reading) in the U.K. It was called a "Low Height Model", with the aim of developing a model for propagation between low-height terminals (see Figure 3) where both terminals are located within clutter (primarily, but not exclusively, urban and suburban clutter) [17]. Although multihop communication scenarios were not implemented during the development of this propagation model, it appears to be the most suitable model for MANETs (with pedestrian mobility), where the nominal antenna height of transmitter and receiver is between 1 and 1.5 meters (i.e., similar to human height).
The median LoS loss is calculated as

L_LoS,median(d) = 32.45 + 20 log10(f) + 20 log10(d/1000),

where d (m) is the distance between BS and MS and f (MHz) is the operating frequency. For the required location percentage, p (%), this model calculates the LoS location correction factor by using the following Rayleigh cumulative distribution function:

ΔL_LoS(p) = 1.5624 σ (sqrt(−2 ln(1 − p/100)) − 1.1774),

where σ is the standard deviation (sd), recommended as 7 dB through measurements. The total LoS loss is then calculated as

L_LoS(d, p) = L_LoS,median(d) + ΔL_LoS(p).

The median NLoS loss is calculated as

L_NLoS,median(d) = 9.5 + 45 log10(f) + 40 log10(d/1000) + L_urban.

L_urban depends upon the urban category and is 0 dB for a suburban, 6.8 dB for an urban, and 23 dB for a dense urban region. The NLoS location correction for the required location percentage is calculated as

ΔL_NLoS(p) = σ N^(−1)(p/100),

where σ is recommended as 7 dB and N^(−1)(·) is the inverse normal cumulative distribution function. The total NLoS loss can be calculated as

L_NLoS(d, p) = L_NLoS,median(d) + ΔL_NLoS(p).

For the required location percentage, p (%), the distance d_LoS for which the LoS fraction equals p is calculated from the expression given in [6]. This model suggests that if the mobile node's distance from the corner is known, then d_LoS(p) is set to that distance [6].

Finally, the path loss at distance d is calculated according to the following three conditions: (a) if d < d_LoS, then L(d, p) = L_LoS(d, p); (b) if d > d_LoS + w, then L(d, p) = L_NLoS(d, p); (c) otherwise the loss is linearly interpolated between L_LoS(d_LoS, p) and L_NLoS(d_LoS + w, p), where the width w is the street width that introduces a transition region between LoS and NLoS conditions and is typically recommended as w = 20 m [6]. We have implemented this model in NS-2 in a hybrid way (i.e., depending upon the locations of the transmitting and receiving nodes in the simulation field, NS-2 selects the appropriate path loss expression).
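The following Python sketch illustrates how the hybrid LoS/NLoS selection above can be evaluated. It assumes the corner distance d_LoS is supplied by the caller (e.g., the known distance of a node from the street corner), since the piecewise expression for d_LoS(p) is deferred to [6]; the function names and the use of statistics.NormalDist for the inverse normal CDF are our choices, not the NS-2 implementation.

```python
import math
from statistics import NormalDist

SIGMA = 7.0          # location-variability standard deviation (dB)
STREET_WIDTH = 20.0  # transition-region width w (m)

def loss_los(d_m, f_mhz, p=50.0):
    """Median LoS loss plus the Rayleigh location correction (dB)."""
    median = 32.45 + 20 * math.log10(f_mhz) + 20 * math.log10(d_m / 1000.0)
    corr = 1.5624 * SIGMA * (math.sqrt(-2 * math.log(1 - p / 100.0)) - 1.1774)
    return median + corr

def loss_nlos(d_m, f_mhz, p=50.0, l_urban=6.8):
    """Median NLoS loss plus the normal location correction (dB)."""
    median = 9.5 + 45 * math.log10(f_mhz) + 40 * math.log10(d_m / 1000.0) + l_urban
    corr = SIGMA * NormalDist().inv_cdf(p / 100.0)
    return median + corr

def path_loss(d_m, d_los_m, f_mhz, p=50.0, l_urban=6.8):
    """Hybrid LoS/NLoS loss, linearly interpolated over the street width."""
    if d_m < d_los_m:
        return loss_los(d_m, f_mhz, p)
    if d_m > d_los_m + STREET_WIDTH:
        return loss_nlos(d_m, f_mhz, p, l_urban)
    lo = loss_los(d_los_m, f_mhz, p)
    hi = loss_nlos(d_los_m + STREET_WIDTH, f_mhz, p, l_urban)
    return lo + (hi - lo) * (d_m - d_los_m) / STREET_WIDTH
```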
Combined Path Loss and Shadowing Model
This model combines the simplified path loss model with shadow fading (i.e., log-normal shadowing) [7]. For this combined model, the ratio of received to transmitted power in dB is given as

P_r/P_t (dB) = 10 log10(K) − 10 γ log10(d/d_0) − ψ_dB,

where K is a unitless constant that depends on the antenna characteristics and the average channel attenuation, γ is the path loss exponent, d_0 is a reference distance for the antenna far-field region, and ψ_dB is a Gauss-distributed random variable with zero mean and variance σ²_ψ,dB.
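A minimal sketch of drawing one received-power sample from the combined model is shown below; the numeric defaults (k_db, gamma, sigma_db) are placeholders that would normally be fitted to measurements or taken from simulator defaults, and the function names are illustrative.

```python
import math
import random

def combined_pathloss_shadowing_db(d, k_db=-31.5, gamma=3.0, d0=1.0, sigma_db=4.0):
    """One sample of Pr/Pt (dB) from the combined path loss + shadowing model.

    k_db stands for 10*log10(K); all default values are placeholders only.
    """
    psi_db = random.gauss(0.0, sigma_db)          # log-normal shadowing sample
    return k_db - 10.0 * gamma * math.log10(d / d0) - psi_db

def received_power_dbm(pt_dbm, d, **kwargs):
    """Draw one shadowed received-power sample (dBm) at distance d (m)."""
    return pt_dbm + combined_pathloss_shadowing_db(d, **kwargs)
```

Averaging many draws at a fixed distance recovers the deterministic path-loss curve, while individual draws model the random fluctuations of the received signal level.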
Mobility Model
Mobility plays an important role in network stability in MANETs. Routes between communicating nodes can change rapidly due to mobility. Mobility can affect not only the communicating nodes but also the intermediate nodes, and thus can have a significant influence on the network's topology and hence on the performance of routing protocols. The classification of mobility and mobility models can be done on the basis of controllability and model construction [18]. In synthetic mobility models, nodes move according to a random probabilistic process, whereas trace-based mobility models are based on mobility patterns that are observed experimentally. Trace-based models have problems with scalability and are generally difficult to implement, so synthetic models are frequently used in MANET simulations. This study implements the Random Way Point (RWP) model in an obstacle-aware environment. The RWP mobility model is the simplest and most widely used model for MANET studies [19].
In this mobility model, the nodes choose a random destination anywhere in the network area and start moving towards it with a velocity chosen from a speed interval [0, v_max]. After reaching the destination, the node stops there for a duration specified by the "pause time" parameter, which is the same for all nodes. All nodes repeat this process until the simulation ends. This model has some known characteristics, such as non-uniform node distribution and speed decay. These characteristics have a strong influence on routing protocol performance, and many variations have been suggested by researchers to cope with these issues [20]. Figure 4 shows the various scenarios used in our simulation for different sets of obstacles in the simulation field. We have used the obstacle-based mobility model described by [3] for this simulation work. Obstacles of various sizes with random positions in the network field have been introduced in a rectangular area of 1000 × 600 meters. By varying the number of obstacles in the simulation field, the performance of the Ad Hoc On-Demand Distance Vector (AODV) routing protocol has been analyzed on the basis of various quantitative performance metrics.
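As an illustration of the RWP mechanics described above, the following sketch generates waypoints for a single node while rejecting destinations that fall inside rectangular obstacles. It is a simplified stand-in for the obstacle-aware mobility of [3]: straight-line segments between waypoints may still cross obstacles, and all parameter values are illustrative.

```python
import math
import random

def rwp_waypoints(area=(1000.0, 600.0), obstacles=(), v_max=5.0,
                  pause=10.0, sim_time=900.0, seed=None):
    """Generate (time, x, y) waypoints for one node under Random Way Point.

    obstacles is an iterable of axis-aligned rectangles (xmin, ymin, xmax, ymax);
    destinations inside an obstacle are simply re-drawn.
    """
    rng = random.Random(seed)

    def free_point():
        while True:  # assumes the obstacles do not cover the whole area
            x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
            if not any(xmin <= x <= xmax and ymin <= y <= ymax
                       for (xmin, ymin, xmax, ymax) in obstacles):
                return x, y

    t = 0.0
    x, y = free_point()
    points = [(t, x, y)]
    while t < sim_time:
        nx, ny = free_point()
        speed = rng.uniform(0.1, v_max)              # avoid near-zero speeds
        t += math.hypot(nx - x, ny - y) / speed      # travel time to the waypoint
        points.append((t, nx, ny))
        t += pause                                   # pause at the destination
        points.append((t, nx, ny))
        x, y = nx, ny
    return points
```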
Network Performance Analysis
The following three quantitative performance metrics are used for this study; a short sketch of how they can be computed from simulation traces is given after the list.
(1) Packet Delivery Ratio.This is the ratio of data packets successfully delivered to the number of data packets sent by the CBR sources.
(2) Normalized Routing Load.This is the ratio of the total number of routing packets generated to the number of data packets successfully delivered to destination.
(3) Mean End-to-End Delay.The delays caused by latency, buffering, queuing, retransmission, and route discovery are all included in this performance analysis; this delay is measured in milliseconds.
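The sketch below shows one way these three metrics can be computed once a simulation trace has been parsed into send/receive/routing events; the event dictionary format and field names are assumptions for illustration, not the actual NS-2 trace format.

```python
def compute_metrics(events):
    """Compute PDR, NRL and mean end-to-end delay from parsed trace events.

    events is assumed to be a list of dicts such as
        {"type": "send" | "recv" | "route", "pkt_id": int, "time": float}
    produced by some trace parser (not shown); the field names are illustrative.
    """
    sent = {e["pkt_id"]: e["time"] for e in events if e["type"] == "send"}
    recv = {e["pkt_id"]: e["time"] for e in events if e["type"] == "recv"}
    routing_pkts = sum(1 for e in events if e["type"] == "route")

    delivered = [pid for pid in recv if pid in sent]
    pdr = len(delivered) / len(sent) if sent else 0.0
    nrl = routing_pkts / len(delivered) if delivered else float("inf")
    delays_ms = [(recv[pid] - sent[pid]) * 1000.0 for pid in delivered]
    mean_delay_ms = sum(delays_ms) / len(delays_ms) if delays_ms else 0.0
    return pdr, nrl, mean_delay_ms
```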
Motivation and Methodology
Through a literature survey, it was found that no propagation model has yet been proposed that was derived purely from MANET scenario experiments. This is mainly because there is no known reported test-bed study undertaken with the scope of modelling the propagation environment for MANETs. Harrold et al. [21] have investigated the additional attenuation contributing to the propagation loss in cases where the antenna is very close to the ground and close to objects such as the human body (i.e., a typical case in MANETs). Green and Obaidat [22] have suggested a propagation model for ad hoc networks under LoS conditions based upon signal strength measurements using WLAN traces in a university campus scenario. Patwari et al. [23] have presented a path loss model for peer-to-peer communication systems based upon measurements with an antenna height of 1.7 m and an operating frequency of 1.8 GHz in rural and urban areas. This model considers the path loss with respect to distance and does not accommodate the corner-loss effects typically found in urban NLoS cases. Harrold and Nix [24] have shown that forming a mobile-to-mobile connection (using relaying) can be useful to achieve significant benefits such as a reduction in transmitted power and an increase in network capacity.
Wang et al. [25] state that path loss increases with lower terminal height, as does the probability of LoS. It is important to consider a suitable path loss model when simulating peer-to-peer communication with low antenna heights, such as in MANETs. However, a generic, standalone propagation model addressing general MANET characteristics (i.e., infrastructure independence, low antenna heights, multihopping, mobility, etc.) is still a challenge for the MANET research community. Wu et al. [26] have introduced an obstacle-aware mobility model in ONE (a Delay Tolerant Network (DTN) simulator), introducing obstacles of various shapes (i.e., round, hexagonal, etc.), and results have been obtained for DTN. However, the propagation model used by [26] is similar to the one mentioned in [3]. So this work covers the shortcomings in that aspect and analyses MANET performance under the effect of increased attenuation due to increased obstacles. The selection of propagation models for our simulation work was done on the basis of their relevance to the simulation environment and to MANET characteristics. The TRG model has been adapted to the obstacle environment.
Furthermore, the ITU models were selected as they have been developed through experimental work closely resembling the MANET environment (i.e., frequency, low antenna height, lack of tall base stations, mobility, etc.). The C-Shadowing model was selected to test MANET performance under the fading conditions typically found in urban environments. At this stage, it is hard to comment on the practicality of any specific model for MANET performance analysis. However, through the simulation results it has been observed that the choice of propagation model has a substantial impact on the measured network performance. The main aim of this study is to analyze the impact of propagation loss with varying obstacle levels in the simulation field. With this purpose, we generated ten mobility files for each mobility scenario. This was done in order to normalize the impact of mobility on the simulation results. Each result is an average of ten simulation runs with identical input parameters but with different random seeds. We used IEEE 802.11b radios with omnidirectional antennas (height of 1.5 m and with unity gain in all directions) and a receiver threshold of −85 dBm, with a maximum transmission power of 15 dBm at a maximum of 11 Mbit/s variable data rate. These parameters were chosen to approximate commercially available 802.11b compliant radios such as the Lucent WaveLAN [27].
Results and Discussion
The parameters that are used in our simulation are given in Table 1.
Figure 5 shows the PDR for the different propagation loss conditions. The results indicate that all the path loss models (except TRG) show an almost linear degradation in performance as the number of obstacles increases from 0 to 10 in steps of 2. Although the ITU and C-Shadowing models do not accommodate obstacles directly in their formulation, an increasing obstruction level means that there is a greater possibility of NLoS conditions among communicating nodes, which increases the use of the ITU-NLoS model during a simulation run; hence nodes experience higher attenuation when the channel behaves like an ITU-NLoS environment. Furthermore, increasing the number of obstacles decreases the movement area for the nodes (i.e., nodes move only in the available free space), which results in a reduced mobility level and hence increases the possibility of more communication failures among nodes. It is worth mentioning that the TRG model (although heavily used in MANET performance analysis studies) shows almost no impact on network performance with increasing obstacles.
From Figure 6, it can be readily observed that AODV suffers a considerably higher routing load with increasing obstacles under C-Shadowing conditions. Under fading conditions such as C-Shadowing, most packets are dropped because the interface queue is full while the transmitting node is waiting for an available route. Due to the random power fluctuations in the signal level caused by multipath propagation effects, a route found in a route discovery process may not remain valid, which leads to more retransmission attempts and thus increases the routing load significantly.
A log scale has been used on the y-axis because of the extremely high routing load experienced with the modified propagation models; the TRG results are present but have very small values and therefore appear suppressed.
From Figure 7, it is evident that the Mean Delay is very high when the radio channel behaves like a C-Shadowing fading environment. With increasing obstacles in the simulation environment, nodes experience less connectivity due to buildings and so forth, and the increasing routing load leads to longer communication delays.
Conclusion
This study analyses the impact of various propagation loss models on the performance of AODV in an obstructed environment. Two new propagation models have been added to the NS-2 simulator, and the existing Two-Ray Ground propagation model has been modified in order to accommodate increasing levels of attenuation due to an increase in obstacles.
The results indicate that network performance is affected by the increase in obstacles. AODV suffers from lower PDR and higher NRL and Mean Delay when the communication channel behaves like the C-Shadowing, ITU, or modified-TRG models, in comparison with the TRG model. This is mainly because obstacle levels are not considered by the TRG model. Moreover, the ITU model incorporates fading margins for LoS/NLoS scenarios and hence degrades the received signal strength. This study verifies that underestimating the physical layer in MANETs will lead to more optimistic rather than realistic network performance analysis. Furthermore, it is hard to identify any particular propagation model as more suitable for MANET performance analysis. However, it can be said that the use of a simple propagation model such as TRG may lead to overly optimistic network performance. In future, this study will lead to the investigation of propagation effects in MANETs based upon test-bed work using ray tracing algorithms.
Figure 2: Typical trend of propagation along street canyons with low base station height for frequency range from 2 GHz to 16 GHz [6].
Figure 4: Visualization of mobility scenarios with various obstacle levels and random node movements.
Figure 7: Number of obstacles versus Mean Delay. | 2018-04-03T02:54:24.042Z | 2015-03-08T00:00:00.000 | {
"year": 2015,
"sha1": "6e2d9ad74b5ae74e3831970fe837369f0c33e894",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/js/2015/286270.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6e2d9ad74b5ae74e3831970fe837369f0c33e894",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
24630693 | pes2o/s2orc | v3-fos-license | Swept source optical coherence microscopy using a 1310 nm VCSEL light source
We demonstrate high speed, swept source optical coherence microscopy (OCM) using a MEMS tunable vertical cavity surface-emitting laser (VCSEL) light source. The light source had a sweep rate of 280 kHz, providing a bidirectional axial scan rate of 560 kHz. The sweep bandwidth was 117 nm centered at 1310 nm, corresponding to an axial resolution of 13.1 μm in air, corresponding to 8.1 μm (9.6 μm spectrally shaped) in tissue. Dispersion mismatch from different objectives was compensated numerically, enabling magnification and field of view to be easily changed. OCM images were acquired with transverse resolutions between 0.86 μm 3.42 μm using interchangeable 40X, 20X and 10X objectives with ~600 μm x 600 μm, ~1 mm x 1 mm and ~2 mm x 2 mm field-of-view (FOV), respectively. Parasitic variations in path length with beam scanning were corrected numerically. These features enable swept source OCM to be integrated with a wide range of existing scanning microscopes. Large FOV mosaics were generated by serially acquiring adjacent overlapping microscopic fields and combining them in post-processing. Fresh human colon, thyroid and kidney specimens were imaged ex vivo and compared to matching histology sections, demonstrating the ability of OCM to image tissue specimens. ©2013 Optical Society of America OCIS codes: (110.4500) Optical coherence tomography; (140.3600) Lasers, tunable; (170.3880) Medical and biological imaging; (180.1790) Confocal microscopy; (180.6900) Three-dimensional microscopy. References and links 1. J. A. Izatt, M. R. Hee, G. M. Owen, E. A. Swanson, and J. G. Fujimoto, “Optical coherence microscopy in scattering media,” Opt. Lett. 19(8), 590–592 (1994). 2. A. D. Aguirre, P. Hsiung, T. H. Ko, I. Hartl, and J. G. Fujimoto, “High-resolution optical coherence microscopy for high-speed, in vivo cellular imaging,” Opt. Lett. 28(21), 2064–2066 (2003). 3. A. D. Aguirre, Y. Chen, B. Bryan, H. Mashimo, Q. Huang, J. L. Connolly, and J. G. Fujimoto, “Cellular resolution ex vivo imaging of gastrointestinal tissues with optical coherence microscopy,” J. Biomed. Opt. 15(1),
Introduction
Optical coherence microscopy (OCM) uses coherence-gated detection to remove out-of-focus light and improves contrast and imaging depth over conventional reflectance confocal microscopy for imaging scattering tissues [1,2]. When used in conjunction with high numerical aperture (NA) objectives, OCM can achieve sufficient transverse resolution to provide real-time visualization of both structural and cellular morphology without the need for extrinsic contrast agents. The utility of OCM for identifying pathologies has been demonstrated in ex vivo studies using human breast, thyroid and renal tissue [3][4][5][6]. OCM may be a promising alternative to frozen section analysis for intraoperative assessment of cancer margins. Breast, lung, thyroid, and head and neck cancers would benefit from an imaging modality that enables real-time assessment of surgical specimens and could reduce the rates of second surgeries from positive or close surgical margins [7][8][9]. OCM also has a broad range of applications for research and biological microscopy, ranging from cellular-level imaging of the cortex in small animals to in vivo imaging of developmental biology specimens [10][11][12][13][14].
To date, most OCM systems have used time-domain OCT detection, which enables video-rate imaging of en face planes [3,4]. However, one limitation of time-domain OCM (TD-OCM) is that only a single coherence-gated depth is acquired. This increases the complexity of the system because both the confocal and coherence-gate depths must be carefully matched, and either the specimen or the microscope must also be axially stepped to acquire data at different depths. The majority of commercial scanning microscopes use two-axis galvanometer scanners in which the mirrors are separated by a small distance and therefore produce a parasitic path length delay variation when the beam is scanned. This causes a curvature in the en face image surface which does not match the image plane of the objective [15]. It is possible to design a custom scanning system where the two galvanometer mirrors are relay-imaged onto each other to produce a path-length-invariant scan. However, the majority of microscopes do not use this design. The requirement of a specially designed microscope system with a path-length-invariant scanner has prevented TD-OCM from being used with many existing microscope platforms. Furthermore, the requirement of dispersion-matched sample and reference arms makes changing microscope objectives and magnifications difficult with TD-OCM, because the reference arm dispersion must be changed when the objective is changed.
Fourier domain detection has significant advantages over time domain detection which overcome these limitations. Fourier domain detection allows simultaneous acquisition of multiple depths, simplifying the acquisition of en face images [16]. Post-processing algorithms may be applied to Fourier domain OCM data in order to compensate for both path length variations across the scan field as well as dispersion mismatch between sample and reference arms [17,18]. This enables swept source or spectral domain OCM to be integrated with a wide range of existing scanning microscopes.
In addition, spectral information enables methods such as light scattering spectroscopy (LSS) and Fourier domain low coherence interferometry (fLCI), in which spectroscopic information is used to enhance OCM image contrast and provide quantitative information, such as nuclear size and density [19][20][21]. Furthermore, functional imaging methods, such as Doppler OCT, can be applied to extract blood flow and enhance visualization of vasculature [22,23]. Finally, access to phase enables numerical correction of aberrations, as demonstrated in methods such as interferometric synthetic aperture microscopy (ISAM) [24].
Swept source OCM (SS-OCM) has several advantages over spectral domain OCM (SD-OCM). SS-OCM does not require a spectrometer or line scan camera and can be more readily extended to longer wavelengths such as 1.3 µm where tissue scattering is reduced and image depths are improved compared with shorter wavelengths. In addition, swept source enables dual-balanced detection which removes common-mode noise, and optical clocking to eliminate the need for wavenumber calibration and improve processing speeds. Since the imaging speed of swept source detection is determined by the laser sweep repetition rate rather than camera read rates as in spectral domain detection, swept source can generally achieve much higher imaging speeds. High speed is critical for OCM imaging because each pixel in the en face image requires an axial scan. A key limitation for the development of SS-OCM systems has been the lack of high speed, broadly tunable swept source light sources.
Vertical Cavity Surface-Emitting Laser (VCSEL) sources can operate at MHz sweep rates, are broadly tunable at 1 µm and 1.3 µm wavelengths, and are well suited for SS-OCM [25,26]. Recently, our group demonstrated VCSEL technology for ophthalmic imaging and achieved adjustable sweep rates from 60 kHz up to 1.2 MHz [27]. In this study, we demonstrate SS-OCM using a high-speed prototype VCSEL light source operating at 1310 nm with a 280 kHz sinusoidal sweep frequency and a bidirectional A-scan rate of 560 kHz (Thorlabs, Inc. and Praevium Research, Inc.). Data was acquired using optical clocking, eliminating the need for spectral resampling and significantly reducing the effects of drift and variations in the laser sweep. A tuning range of 117 nm was achieved, which provided an axial resolution of 13.1 µm in air, corresponding to ~8.1 µm (9.6 µm spectrally shaped) in tissue. Four interchangeable objectives (40X, 20X and 10X) were used to achieve transverse resolutions between 0.86 µm and 3.42 µm, over ~600 µm x 600 µm, ~1 mm x 1 mm and ~2 mm x 2 mm fields of view (FOV), respectively. Ex vivo SS-OCM images acquired from fresh human colon, thyroid and kidney specimens were compared with corresponding histological sections to demonstrate visualization of cellular and morphological features. Tissue samples were mounted in a histological cassette modified to include a coverglass imaging window. Specimens were compressed against the glass window in order to provide a flat imaging surface. After imaging, the specimens were fixed in 10% neutral buffered formalin in the cassette in order to maintain tissue orientation for histological processing.
System setup
The optical clock arm of the SS-OCM system consisted of a Mach-Zehnder interferometer (MZI) with a single-pass air path length mismatch of ~5.6 mm that generated a peak sampling frequency of 400 MHz. The MZI signal was detected by a second dual-balanced detector, amplitude normalized using a Thorlabs prototype clocking circuit, and used to clock the data acquisition card (DAQ Alazar 500 MHz, ATS9350, AlazarTech). The optical path lengths and electrical cable and circuit delays in the imaging arm were carefully matched to those of the MZI clock arm to ensure that the clock timing was correct. The SS-OCM fringes were sampled linearly in wavenumber, thus eliminating the need for computationally intensive time-to-wavenumber resampling and interpolation in post-processing [28]. Furthermore, optical clocking provides stable acquisition of fringes and is insensitive to slow drift or sweep-to-sweep variations of the light source.
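As a rough, back-of-the-envelope consistency check (not part of the original analysis), the quoted MZI mismatch and sweep parameters can be used to estimate the clock behaviour. The sketch assumes an ideal, purely sinusoidal sweep over the full 117 nm range, so the resulting numbers are only indicative: the peak clock frequency comes out somewhat below the quoted 400 MHz, and the number of clock cycles per sweep is close to the ~357 samples per sweep mentioned later.

```python
import math

C = 3e8            # speed of light (m/s)
DL = 5.6e-3        # single-pass MZI path mismatch (m)
LAMBDA0 = 1310e-9  # centre wavelength (m)
DLAMBDA = 117e-9   # sweep range (m)
F_SWEEP = 280e3    # sinusoidal sweep frequency (Hz)

tau = DL / C                                          # MZI differential delay (s)
dnu = C * DLAMBDA / LAMBDA0 ** 2                      # optical frequency span (Hz)
cycles_per_sweep = tau * dnu                          # MZI fringe (clock) cycles per sweep
peak_clock = tau * (dnu / 2) * 2 * math.pi * F_SWEEP  # tau * max |d(nu)/dt|

print(f"clock cycles per sweep ~ {cycles_per_sweep:.0f}")     # a few hundred
print(f"peak clock frequency   ~ {peak_clock / 1e6:.0f} MHz")
```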
An analog I/O card (NI PCI-6751, National Instruments) was used to drive the scanning mirrors and synchronize the mirror scanning with DAQ acquisition. Data acquisition and image preview were performed using custom-developed C++ software, and image post-processing was performed using MATLAB (MathWorks). Figure 2 shows a summary of the system performance. A transverse resolution better than 2.2 µm was measured for the 40X/W objective by identifying the smallest resolvable element in a USAF 1951 resolution test chart [Fig. 2(a)], and a more accurate 1/e² transverse resolution of 0.86 µm was measured using a knife edge [29]. Using the same method, the transverse resolutions of the 20X/W, 20X/air and 10X/W objectives were measured as 1.71 µm, 1.62 µm and 3.42 µm, respectively. The confocal parameter of the 40X/W objective was measured to be 9.4 µm (in water), which yields a very high axial sectioning resolution but gives a very limited depth of field. The confocal parameters of the 20X water and air objectives and the 10X/W objective were measured to be 34 µm in water (20X/W), 23 µm in air (20X/air) and 119 µm in water (10X/W), respectively, which are 4 (20X/W), 3 (20X/air) and 15 (10X/W) times the axial resolution. The FOVs for the 40X, 20X and 10X objectives were measured to be 600 µm x 600 µm, 1 mm x 1 mm and 2 mm x 2 mm, respectively.
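The knife-edge measurement cited above ([29]) is commonly evaluated by fitting an error-function profile to the measured power versus knife position; the sketch below shows such a generic fit and is not the authors' actual analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def knife_edge_1e2_diameter(x_um, power):
    """Estimate the 1/e^2 beam diameter from a knife-edge power scan.

    Fits P(x) = P0/2 * (1 + erf(sqrt(2) * (x - x0) / w)), where w is the
    1/e^2 radius of a Gaussian focal spot, and returns the diameter 2*w.
    """
    def model(x, p0, x0, w):
        return 0.5 * p0 * (1.0 + erf(np.sqrt(2.0) * (x - x0) / w))

    p0_guess = float(power.max())
    x0_guess = float(x_um[np.argmin(np.abs(power - p0_guess / 2.0))])
    w_guess = (x_um.max() - x_um.min()) / 10.0
    popt, _ = curve_fit(model, x_um, power, p0=[p0_guess, x0_guess, w_guess])
    return 2.0 * abs(popt[2])
```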
Characterization of system performance
The VCSEL swept source laser used in these experiments was centered at 1310 nm with a full sweep range of 117 nm [Fig. 2]. Figure 2(c) shows an OCT fringe signal acquired from a mirror. The signal has no frequency chirp, consistent with optically clocked wavenumber linearization and matched dispersion between the sample and reference arms. An axial resolution of 11.4 µm in air, corresponding to ~8.1 µm (9.6 µm spectrally shaped) in tissue, was measured using the full width at half maximum (FWHM) of the point spread function (PSF) of a single reflector [Fig. 2(e)]. It should be noted that due to the re-arming time requirement of the acquisition card (256 sample clock cycles), it was not possible to acquire the full spectrum of the fringe signal, which contributed to the side lobes seen in Fig. 2(e). To mitigate this effect, spectral reshaping was performed with a Gaussian window, which resulted in the fringe signal and PSF shown in Figs. 2(d) and 2(e), respectively. The axial resolution was reduced to 13.5 µm in air, corresponding to ~9.6 µm in tissue, due to the spectral reshaping process. Finally, Figs. 2(f) and 2(g) show the sensitivity falloff characteristics of the VCSEL source for the raw and spectrally reshaped fringes, where a uniform sensitivity can be observed throughout the imaging range of the system, demonstrating the long coherence length of the light source.
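For context (a standard OCT relation, not stated explicitly in the text): for a Gaussian spectrum with center wavelength \(\lambda_0\) and FWHM bandwidth \(\Delta\lambda\), the theoretical axial resolution is

\[ \delta z = \frac{2\ln 2}{\pi}\,\frac{\lambda_0^{2}}{\Delta\lambda} \approx 0.44 \times \frac{(1310\ \text{nm})^{2}}{117\ \text{nm}} \approx 6.5\ \mu\text{m in air}. \]

The measured 11.4 µm is larger mainly because only part of the sweep could be digitized, as noted above; dividing the measured value by a tissue refractive index of ~1.4 reproduces the quoted ~8.1 µm in tissue.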
The output power of the VCSEL swept source was about 40 mW, of which 19.8 mW was incident on the sample arm. The throughput of the 40X/W Olympus objective was 35%, while all Zeiss objectives had a throughput of 45%; the measured sensitivities were 98.1 dB and 99 dB, respectively.
OCT signal processing and display
Since OCT fringes were acquired using optical clocking, the signals were already linearly spaced in wavenumber and no resampling was necessary. To remove fixed pattern noise and any spectrally-dependent fiber optic coupler imbalance, a background spectrum, calculated by averaging 200 fringes acquired with the beam scanned off the sample, was first subtracted from each fringe spectrum. Each OCT fringe spectrum was then divided by a median-filter-smoothed, averaged background spectrum envelope to remove any spectral modulations. Next, the fringe spectra were multiplied by a phase term to numerically correct for dispersion imbalance and Gaussian reshaped to reduce side lobes in the axial PSFs [30]. Using optical clocking with a VCSEL sweep rate of 280 kHz and a 400 MHz maximum clock frequency, 357 samples were acquired on each forward and backward sweep, which were zero padded to 2048 points prior to Fourier transformation. Finally, the fringe spectra were Fourier transformed to generate A-scans and cross-sectional OCM images. En face OCM images were generated by selecting a depth-of-interest from all cross-sectional images in a volumetric data set and a square root compression was applied to the intensity for visualization [31].
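The processing chain above maps naturally onto a few array operations. The sketch below is an illustrative reimplementation, not the authors' code: the array shapes, the median-filter size, and the form of the dispersion phase are assumptions made for the example.

```python
# Illustrative per-sweep SS-OCM processing chain (sketch, not the authors' code).
import numpy as np
from scipy.signal import hilbert, medfilt

N_SAMPLES, N_FFT = 357, 2048   # valid samples per sweep, zero-padded FFT length

def process_sweep(fringe, background, disp_phase, gauss_window):
    """fringe, background : (N_SAMPLES,) real fringes, already linear in wavenumber.
    disp_phase           : (N_SAMPLES,) phase (rad) correcting dispersion imbalance.
    gauss_window         : (N_SAMPLES,) window used for spectral reshaping."""
    # 1. Remove fixed-pattern noise and coupler imbalance.
    f = fringe - background
    # 2. Normalize by a median-filter-smoothed background envelope.
    envelope = medfilt(background, kernel_size=15)
    f = f / np.maximum(envelope, 1e-12)
    # 3. Complex representation, numerical dispersion compensation, Gaussian reshaping.
    f = hilbert(f) * np.exp(1j * disp_phase) * gauss_window
    # 4. Zero-pad and Fourier transform to obtain the A-scan magnitude.
    return np.abs(np.fft.fft(f, n=N_FFT))[: N_FFT // 2]
```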
Scan delay variation correction and generation of large field mosaic images
It has been difficult to adapt TD-OCM to existing commercial scanning microscopes because the majority of these instruments use a scan design where the optical path length delay varies as the beam is scanned. One of the key advantages of Fourier domain detection is that it acquires a range of delays simultaneously and therefore it is possible to correct for this scan delay curvature in post processing, enabling use in existing scanning microscope systems. In this study, we specifically chose a beam scanning design which is similar to standard scanning microscopes in order to demonstrate this concept.
Scan delay was compensated using a calibration to extract the delay curvature of the scan, similar to the methods previously demonstrated in the literature [17,18]. For this purpose, the reflection from the cover slip was segmented from each cross sectional frame, then a third order smoothing median filter was applied to construct the three dimensional surface profile. This profile was then used to shift each individual A-line in the image to generate an image corresponding to a flattened delay scan. As an example, Fig. 3(d) shows a surface plot from the cover slip surface taken with the 40X/W objective, where a total delay variation of >50 µm can be observed throughout the FOV. Figures 3(a)-3(c) show the impact of the scan delay curvature on images generated from an ex vivo human colon sample using the 40X/W objective. Note that in Figs. 3(a) and 3(b), which are selected from two different depths of the same data set (~30 µm apart), different portions of the field of view are in focus. This can be clearly seen by the visualization of goblet cells at the regions where the sample is in focus (indicated by the arrows). By compensating the scan delay variation, the image shown in Fig. 3(c) is generated, which shows uniform focus throughout the FOV.
The main disadvantage of using high magnification is that the FOV is small. One approach to overcome this limitation is to initially survey the specimen with a low magnification objective, then zoom into regions of interest by switching to a higher magnification objective. The ability to change objectives and numerically compensate dispersion is important for many applications. However, there are also applications when one wants to obtain high resolution images from the entire surface of the specimen. One method to overcome the FOV limitation is to acquire a number of small FOV images from different regions of the sample and then combine these images to generate a large FOV stitched image. This is a well-known research problem in image processing and there are many algorithms to construct seamless composite images by combining individual frames [32]. Image Composite Editor (ICE), which is a closed source, but freeware program from Microsoft, provides a reliable and robust environment for generating large field stitched images. For generating the mosaic images, ICE performs a two-step process consisting of (1) alignment of frames with each other based on image features, and (2) composition of the large field mosaic image by using several image blending algorithms such as graph-cut optimization and gradient domain image-fusion algorithms, which account for possible intensity and rotational variations between the overlapping fields [33,34]. After the stitching is completed, images can be converted to a deep zoom format which produces a hierarchical scheme with different magnification levels. This format can be opened using the HD View program from Microsoft which seamlessly allows the user to pan around the images and zoom to regions of interest, producing the native high resolution view. Furthermore, the HD View program is integrated with internet browsers allowing the user to send the links of the stitched data set or the specific views to satellite locations, eliminating the need for physically transporting large amounts of data. In this study we have implemented mosaic imaging by using high speed motorized stages (MTS50-Z8, Thorlabs).
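A minimal sketch of the scan-delay flattening step is given below; it is not the authors' implementation. The volume layout, the use of a simple argmax to segment the coverslip reflection, and the median-filter size are illustrative assumptions, and np.roll stands in for a proper shift for brevity.

```python
# Sketch of scan-delay flattening (illustrative, not the authors' code).
import numpy as np
from scipy.ndimage import median_filter

def flatten_scan_delay(volume, search_depth=200):
    """volume: (n_z, n_y, n_x) linear-intensity OCM volume.
    Returns the volume with each A-line shifted so the coverslip surface is flat."""
    # 1. Segment the coverslip reflection as the brightest voxel near the top of each A-line.
    surface = np.argmax(volume[:search_depth], axis=0)          # (n_y, n_x) depth indices
    # 2. Smooth the surface profile (the paper uses a smoothing median filter).
    surface = median_filter(surface, size=7)
    # 3. Shift every A-line so the detected surface lands at a common depth.
    flat = np.zeros_like(volume)
    ref = int(surface.min())
    for iy in range(volume.shape[1]):
        for ix in range(volume.shape[2]):
            shift = int(surface[iy, ix]) - ref
            flat[:, iy, ix] = np.roll(volume[:, iy, ix], -shift)  # wraps; fine for a sketch
    return flat
```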
Individual small field images are acquired with a 50% overlap to ensure a seamless stitching by providing sufficient corresponding features between overlapping frames.
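A quick check of what the 50% overlap implies for mosaic size (an editorial estimate using numbers quoted elsewhere in the text, not from the paper's methods): with 50% overlap the stage step equals half the tile width, so ~0.6 mm 40X fields tiled over the ~1.8 mm x 2.1 mm kidney mosaic described later (Fig. 9) require 5 x 6 = 30 tiles, matching the 30 frames quoted there. The helper function below is hypothetical.

```python
# Hypothetical helper: tiles needed to cover an extent with a given tile FOV and overlap.
import math

def tiles_needed(extent_mm, fov_mm=0.6, overlap=0.5):
    step = fov_mm * (1.0 - overlap)
    return math.ceil((extent_mm - fov_mm) / step) + 1

print(tiles_needed(1.8), tiles_needed(2.1))   # -> 5 6, i.e. 30 tiles in total
```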
Results and discussion
Studies were performed under an IRB protocol approved by the Beth Israel Deaconess Medical Center and Massachusetts Institute of Technology. OCM imaging was performed ex vivo on freshly excised specimens of normal human colon as well as normal and neoplastic kidney and thyroid. All specimens used in this study were discarded and not required for clinical diagnosis. The specimens were stored in chilled Dulbecco's Modified Eagle Medium (DMEM) and imaged within several hours after removal from the body. Prior to imaging, the specimens were immersed in a 6% acetic acid solution for ~1 minute followed by a ~30 second wash with water, in order to increase the nuclear contrast by condensing nuclear material in the cell and increasing the nuclear scattering [35]. After imaging, specimens were fixed in 10% neutral buffered formalin for a minimum of 24 hours, and then sent for standard histological processing. Histological sections were obtained along the en face planes at different depths in order to ensure correspondence with OCM images. OCM images were obtained by raster scanning, acquiring 1024 B-scans, each consisting of 1024 A-scans. The volumetric data contained 1024 x 1024 x 1024 voxels and was acquired in ~2 seconds per data set. For the high power objectives (40X and 20X), the specimen was moved in the axial direction to set the focal plane at various depths within the first 200 µm below the tissue surface and imaging was repeated at those depths. Figure 4 shows ex vivo OCM images of a normal human colon specimen taken with three different magnification objectives, together with corresponding histology. The architectural morphology of the crypt pattern can be appreciated with the 10X objective over a relatively large FOV, whereas the 20X and 40X objectives resolve the finer details of the crypt structures. Close agreement can be observed between OCM images and the corresponding histology shown in Fig. 4(g). Note that OCM images clearly delineate the mucin secreting goblet cells residing in the crypts.
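Returning briefly to the acquisition parameters above, a quick editorial check of the stated volume acquisition time:

\[ \frac{1024 \times 1024\ \text{A-scans}}{560{,}000\ \text{A-scans/s}} \approx 1.9\ \text{s}, \]

consistent with the quoted ~2 seconds per volumetric data set.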
In addition, Fig. 4 also demonstrates the advantage of coherence gating in OCM over confocal gating employed in confocal microscopy. Simulated confocal microscopy images can be generated from the OCM volumes by summing the OCM signal over all depth ranges. This produces an image where unwanted scattered light is rejected only by the confocal gate. Figures 4(d)-4(f) show simulated confocal images which were generated from the OCM data shown in Figs. 4(a)-4(c) by summing over all axial depths. Image blurring and loss of contrast is observed in the simulated confocal images obtained with all of the magnifications. OCM can achieve much better contrast because unwanted scattered light is rejected by the coherence gating [1,16]. The coherence gate is very effective in rejecting multiply scattered light, even in the limit where a high NA objective is used and the confocal gate is short. Although the axial resolution and the confocal parameter for the 40X objective are comparable (9.6 µm vs. 9.4 µm, respectively), there is a significant amount of unwanted scattered light which is rejected by the coherence gate. Coherence gating improves sharpness and contrast for all magnifications, although the effect is especially dramatic for low magnifications because the confocal gate is very long. Figure 5 shows ex vivo OCM images of a fresh normal human thyroid specimen taken with three different objectives, together with corresponding histology. The architectural morphology of normal thyroid, characterized by the presence of well-organized large circular and oval follicles filled with colloid of varying densities can be seen on both OCM images and H&E histology. Similar to the images of the colon specimen, the 10X objective provides a large field of view for the general architecture of the thyroid, whereas finer details can be appreciated in the images obtained with the higher magnification objectives (20X and 40X) over a smaller FOV.
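The two displays discussed above can be produced directly from a processed OCM volume. The snippet below is an illustrative sketch (assumed volume layout, not the authors' code); the square-root display compression follows the processing section above.

```python
# En face plane and simulated confocal projection from an OCM volume (sketch).
import numpy as np

def en_face(volume, z_index):
    """Single en face OCM plane with square-root compression for display."""
    return np.sqrt(volume[z_index])

def simulated_confocal(volume):
    """Simulated confocal image: sum the OCM signal over all depths, so only
    the confocal gate (not the coherence gate) rejects scattered light."""
    return np.sqrt(volume.sum(axis=0))
```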
Although OCM images shown in Figs. 3 to 5 are from a single depth, it is possible to extract different depths from the 3D data sets. Depending on the objective NA, multiple high resolution en face images from a ~50-150 µm depth range can be acquired simultaneously. This is analogous to serial sectioning in histology and can provide information on the three dimensional structure of tissue. This is a significant advantage over TD-OCM, as well as confocal and multiphoton systems that require mechanical translation of the delay or objective to generate images from different depths. As an example, Fig. 6 shows a volumetric OCM data set from the thyroid in Fig. 5. In Figs. 6(c)-6(e) three different en face planes are selected from the same data set, 50 µm, 130 µm and 180 µm below the specimen surface. Depth-dependent features of the follicular architecture can be clearly observed in the OCM images. As shown in Fig. 6(b), cross sectional images can also be displayed. Figure 7 shows ex vivo OCM images of a fresh human thyroid specimen with a histological diagnosis of multinodular goiter, taken with the 40X/W and 20X/air objectives, together with corresponding histology. The images in Fig. 7 differ from the normal thyroid in Fig. 5 and exhibit large variation in follicle size, a significant number of microfollicles, as well as numerous irregularly shaped follicles. Figure 9 shows a large field of view ex vivo OCM image from a fresh normal human kidney specimen. The OCM image shown in this figure is generated by combining 30 frames taken with the 40X/W objective to obtain a 1.8 mm by 2.1 mm total field. Glomeruli and convoluted tubules can be observed throughout the imaging field, consistent with the characteristics of normal renal cortex. As previously described, the deep zoom display format allows the user to pan around the images and zoom to regions of interest enabling a more detailed examination. Figure 9(b) shows an example where a region with a glomerulus surrounded by convoluted tubules is enlarged to show finer details. One disadvantage of acquiring individual square fields to construct the mosaic image is that the acquisition speed will be limited by the translation stages, which have to perform a discrete movement after acquiring each frame. For our study, this added an additional ~1 second to the acquisition of each volumetric data set. An alternative to acquiring individual square fields is to obtain long strip images by continuously translating along one axis while acquiring the image [36]. In this case the acquisition speed for the total mosaic will be mainly limited by the A-scan rate of the system. For the system employed in this study, the total time required to acquire the whole field of view shown in Fig. 9 was ~3 to 4 minutes. This also included the time required to save the volumetric data sets to the hard drive, which constituted the main bottleneck in the total imaging time. Nevertheless, using solid state disc technology and improved data saving algorithms we anticipate that this time can be significantly reduced. Finally, generating the mosaic image using the ICE took ~2 minutes on a standard dual core personal computer.
Conclusion
In conclusion, this study demonstrates swept source OCM imaging using a high speed swept VCSEL light source. SS-OCM has powerful advantages compared to TD-OCM. Almost all commercial scanning microscopes have a varying path length delay as the beam is scanned.
This causes TD-OCM to fail because it is not possible to keep the coherence gate in the image plane as the beam is scanned. In order to use TD-OCM, it is necessary to custom design a microscope which relay images the two galvo scanner axes in order to maintain constant delay while scanning. SS-OCM solves this problem because it acquires a volumetric image over a range of path delays. As shown, it is possible to compensate path length delay variation. This means SS-OCM can be adapted to a wide range of commercial microscopes.
TD-OCM requires using matched glass thickness in the reference and sample arm. This makes it difficult to change objectives with TD-OCM, because glass in the reference arm must be changed when the objective is changed. SS-OCM has the advantage that dispersion matching can be performed numerically so that objectives can be readily changed to vary magnification and field of view. When lower NA objectives are used, SS-OCM has the advantage that it acquires multiple depth en face images simultaneously.
The study also investigates SS-OCM for pathology imaging in human tissues ex vivo. Normal and pathologic colon, thyroid and renal specimens were imaged and compared with corresponding histology to understand which features could be visualized using OCM. En face imaging has the advantage that it achieves excellent transverse image resolutions of 1 to 2 µm, enabling visualization of cellular features. Coherence gating was shown to improve image sharpness and contrast compared with confocal gating only. This improvement was observed even at high magnification, high NA imaging which had a short confocal gate. We demonstrated imaging at 1.3 µm because this wavelength has reduced scattering and improved imaging depth in biological tissue. Microscope objective transmissions are lower at this wavelength, however this is the most commonly used wavelength for imaging tissues. It is also possible to perform SS-OCM using 1 µm VCSEL light sources. This would improve the transmission through the objectives, but image penetration depth would be reduced because of increased scattering.
Although the axial resolution of the system was limited to 13.1 µm in air, corresponding to 8.1 µm in tissue (9.6 µm spectrally shaped), due to the sweep range of the laser, next generation VCSELs are expected to have ~160 nm sweep range which will improve the axial resolution to <6 µm in tissue. The current study was performed with 560 kHz axial scan rates, but improvements in VCSEL sweep rates should enable >1 MHz axial scan rates. These high speeds are important because each pixel in the en face OCM image requires an axial scan.
In summary, SS-OCM is a powerful technology for real time, microscopic volumetric imaging. The ability to improve image sharpness and contrast over confocal microscopy in scattering tissues as well as the adaptability of this technology to existing commercial scanning microscope platforms are important features which promise to enable many new applications. | 2018-04-03T03:55:22.686Z | 2013-07-29T00:00:00.000 | {
"year": 2013,
"sha1": "2d6f5b770257792d8422a39320a8f1b001ad30ab",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.21.018021",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ab031209615d0c77dd29a8bf9904e51bb12484b3",
"s2fieldsofstudy": [
"Engineering",
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
92986057 | pes2o/s2orc | v3-fos-license | THYROGLOBULIN AS A TUMOR MARKER IN DIFFERENTIATED THYROID CANCER – CLINICAL CONSIDERATIONS
SUMMARY – Initial treatment of the majority of patients with differentiated thyroid cancer (DTC) includes total thyroidectomy. Postoperative ablation therapy with radioactive iodine (I-131) is indicated in all high-risk patients, however, there is disagreement regarding its use in low- and intermediate-risk patients. Over the last few decades, thyroglobulin (Tg) has been established as the primary biochemical tumor marker for patients with DTC. Thyroglobulin can be measured during thyroid hormone therapy or after thyroid-stimulating hormone (TSH) stimulation, through thyroid hormone withdrawal or the use of human recombinant TSH. In many studies, the cut-off value for adequate Tg stimulation is a TSH value ≥30 mIU/L. However, there is an emerging body of evidence suggesting that this long-established standard should be re-evaluated, bringing this threshold into question. Recently, a risk stratification system of response to initial therapy (with four categories) has been introduced and Tg measurement is one of the main components. The relationship between the Tg/TSH ratio and the outcome of radioiodine ablation has also been studied, as well as clinical significance of serum thyroglobulin doubling-time. The postoperative serum Tg value is an important prognostic factor that is used to guide clinical management, and it is the most valuable tool in long term follow-up of patients with DTC.
Introduction
Thyroid cancer is the most common type of endocrine malignancy 1 . About 90% of thyroid neoplasms are differentiated thyroid cancers (DTC) with low malignant potential and a very good prognosis 2 . Over the last few decades, the incidence of thyroid cancer has considerably increased. The age-standardized incidence rate of thyroid cancer has increased in Croatia 22.1 times in women and 5.6 times in men in the period from 1968 to 2014, with a decrease in mortality 3 . A similar trend has been observed around the world. In the United States, the yearly incidence has almost tripled from 4.9 in 1975 to 14.3 per 100,000 in 2009, without associated increase in mortality 4 . This perceived increased incidence of DTC is mostly caused by more intensive imaging with neck ultrasonography (US) and the aggressive use of US guided fine needle aspiration biopsy (FNA), which has led to detection of disease in early stages 5,6 . In the majority of patients with DTC, initial treatment consists of total thyroidectomy. However, lobectomy may be performed in patients with microcarcinoma without nodal metastases on imaging 7,8 . Postoperative ablation therapy with radioactive iodine (I-131) is indicated in all high-risk patients, however, there is disagreement regarding its indication and dosage in low- and intermediate-risk patients [9][10][11] . The introduction of L-levothyroxine substitution or suppression therapy is the next step in the treatment of these patients, which depends on balancing the degree of response to initial therapy and increasing the risk of adverse effects related to thyroid-stimulating hormone (TSH) suppression 12 . With this treatment approach, the majority of patients have an excellent prognosis with normal life expectancy 13 . However, in some patients, persistent disease is present after initial therapy, or disease recurrence is detected during follow-up.
Thyroglobulin (Tg) is a glycoprotein with a molecular weight of approximately 660 kDa, which is synthesized by thyrocytes and released into the lumen of thyroid follicles 14 . Production of Tg is stimulated by TSH, intrathyroidal iodine deficiency or excess 15 , and the presence of thyroid-stimulating immunoglobulins. Thyroglobulin plays a crucial role in the synthesis of the peripheral thyroid hormones triiodothyronine (T3) and thyroxine (T4), containing tyrosine residues which are iodinated using tyrosine oxidase in monoiodotyrosine and diiodotyrosine forms (MIT and DIT), which then form T3 and T4. Over the last few decades, the role of Tg as the primary biochemical tumor marker in patients with DTC has been established. Serum Tg measurements, neck ultrasonography, and occasionally diagnostic I-131 whole body scintigraphy are used in the follow up of DTC patients [16][17][18] . Neck ultrasonography is a readily available, noninvasive, cost-effective method, which can guide diagnostic procedures with low complication rates 19 . Recently, some authors have proposed the combined use of the above mentioned modalities with computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) including [F18] FDG PET/CT when necessary 11,20,21 . Patients at low risk of disease recurrence may be followed up only by neck US and serum Tg measurement 22 .
Measurement of Thyroglobulin and Anti-Thyroglobulin Antibodies
Thyroglobulin has been measured by three different classes of methodology: radioimmunoassay (RIA), used since the 1970s; immunometric assay (IMA), used since the 1980s; and liquid chromatography-tandem mass spectrometry (LC-MS/MS), which was developed in 2008. Immunometric assays are more sensitive than RIA and have shorter incubation time, wider working range and a stable labeled antibody reagent 23 . The proposed functional sensitivity of current assays is ≤0.1 μg/mL 24 . Serum Tg should be measured by validated immunoassays calibrated against the Certified Reference Material (CRM 457, now described as BCR 457, European Commission, Institute for Reference Material and Method). Laboratories providing Tg measurements are required to participate in certified national or international programs of quality assurance. Ideally, follow-up of DTC patients with Tg and anti-thyroglobulin antibodies (TgAb) concentrations should be performed in the same laboratory using the same assay each time 24 .
Anti-thyroglobulin antibodies are 330 kDa molecules which are often undetectable using older techniques. The TgAb is polyclonal, belongs to the IgG class of antibodies, and is not restricted to a particular subclass, although IgG2 is the predominant class in DTC 25 . Anti-thyroglobulin antibodies falsely lower serum Tg in immunometric assays and are present in approximately 25% of thyroid cancer patients and 10% of the general population 26,27 . They are the most serious problem that limits the clinical value of Tg determination. Anti-thyroglobulin antibodies should be measured in conjunction with serum Tg assay by the same method to increase accuracy. After total thyroidectomy and I-131 remnant ablation, TgAb usually disappear after 3 years in patients without persistent disease. Thyroglobulin antibody levels that decline over time are considered as a good prognostic sign, while rising levels, or a new appearance of TgAb, increase the risk of persistent or recurrent disease [28][29][30][31] .
Preoperative Role of Thyroglobulin
Many studies have evaluated preoperative serum Tg values as a possible predictor of malignancy in thyroid nodules. Currently, the American Thyroid Association does not recommend routine preoperative measurement of serum Tg and TgAb since there is no definite evidence that this impacts patient management or outcome. Meanwhile, the most recent EPIC study demonstrated a strong positive association between thyroid cancer risk and blood levels of Tg, but did not support the use of serum Tg level for screening and early detection of DTC 11,32 .
Postoperative Measurement of Thyroglobulin
The half-life of serum Tg is 1-3 days, and the postoperative nadir is reached in almost all patients 3-4 weeks after operation 33,34 . After therapy with I-131, it takes several months for Tg to completely disappear from the circulation 35 . The measurement of Tg can be done during thyroid hormone therapy or after TSH stimulation, through thyroid hormone withdrawal or the use of human recombinant TSH (rhTSH) [36][37][38] . Reports have questioned the possibility of a shorter duration of thyroid hormone withdrawal prior to Tg measurement 39 . Recombinant human thyrotropin is approved in many countries for preparation of patients without distant metastases for radioiodine ablation. This exogenous method of elevating TSH levels may also be used in patients who are unable to achieve adequate TSH elevation following thyroid hormone withdrawal, or in patients with significant comorbidities 40 . The rhTSH stimulated Tg levels tend to be lower than those following thyroid hormone withdrawal 11,41 .
Cut-off TSH Threshold of 30 mIU/L
In many studies, the cut-off value for adequate thyroglobulin stimulation is a TSH value ≥30 mIU/L 11,42 . This value has its diagnostic (elevation of Tg) and therapeutic (increased I-131 uptake in tumors) implications 43 . However, there is an emerging body of evidence suggesting that this long-established standard should be re-evaluated, thus bringing this threshold into question. Although the value of TSH of 30 mIU/L has been generally used in stimulation for I-131 ablation, some authors postulate different values. For example, Vrachimis et al. published a report on patients who received ablation with TSH <30, with the same ablation outcome 44 . On the other hand, some authors advocate the use of higher TSH values, suggesting them to be connected with better treatment outcome 45,46 . In our two previous studies, no differences were observed comparing TSH levels and the ablation outcome 47,48 .
Thyroglobulin and Radioiodine Ablation
The postoperative serum Tg value is an important prognostic factor that can be used to guide clinical management, especially in guiding the decision-making process leading to radioiodine ablation treatment, as well as predict successful ablation of the thyroid remnant. Postoperative serum Tg values >10 ng/mL increase the probability of persistent or recurrent disease, failing I-131 ablation, presence of distant metastases, and mortality, therefore prompting additional evaluation and treatment [49][50][51][52] . It is very important to emphasize that Tg level is not the only criterion for successful ablation, and that patient risk group should also be carefully considered. In low-risk patients, suppressed or stimulated Tg <1 ng/mL confirms low recurrence risk 51,53 . In intermediate-risk patients, the same Tg values do not completely rule out the presence of small-volume I-131-avid metastatic disease. In high-risk patients, even postoperative Tg values <1 ng/mL do not rule out RAI-avid disease and therefore do not influence the decision to proceed with I-131 ablation 11 .
In conclusion, there is no optimal cut-off for postoperative serum Tg values to help determine indication for I-131 ablation in the American Thyroid Association (ATA) guidelines 11 . However, some authors recommend omission of I-131 ablation in patients with low postoperative stimulated (≤1 ng/mL) and non-stimulated Tg levels (0.2-0.3 ng/mL) with negative TgAb [54][55][56] . A recent study by Mourão et al. concluded that patients with papillary thyroid carcinoma with low non-stimulated Tg levels (Tg <0.3 ng/mL) and negative nodal status in the neck following thyroidectomy did not require postoperative I-131 treatment 57 .
Thyroglobulin/Thyrotropin Ratio as a Predictive Factor for Radioiodine Ablation Outcome
In recent literature, the relationship between thyroglobulin/thyrotropin ratio and the outcome of radioiodine ablation has been extensively studied. For example, Trevizam et al. bring into question the hypothesis that thyroglobulin levels and the thyroglobulin/thyrotropin ratio may accurately predict the success of radioiodine ablation therapy in patients with DTC. The Tg and Tg/TSH ratio cut-off values that predicted success of I-131 therapy were 4.41 ng/mL and 0.093, respectively. After multivariate analysis, only serum Tg emerged as an independent factor predicting ablation outcomes 60 . Further evidence for Tg/TSH as a predictive factor was confirmed in studies assessing radioiodine ablation failure and the need for re-ablation in low- and intermediate-risk groups of patients 48,61 . For example, one study evaluated 740 such patients, with the Tg/TSH ratio determined to be a more powerful prognostic factor than Tg alone, and higher Tg/TSH ratios were found to be an independent predictor of treatment failure in a multivariate logistic regression analysis. Patients with Tg/TSH >0.126 had a higher probability of radioiodine ablation failure 48 .
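For illustration only (the patient values below are hypothetical; the cut-offs are those reported in the cited studies), the ratio itself is a simple bedside calculation:

```python
# Hypothetical example of the Tg/TSH ratio; cut-offs 0.093 and 0.126 are from the cited studies.
def tg_tsh_ratio(tg_ng_ml, tsh_miu_l):
    return tg_ng_ml / tsh_miu_l

ratio = tg_tsh_ratio(5.0, 35.0)        # hypothetical stimulated Tg and TSH values
print(round(ratio, 3), ratio > 0.126)  # 0.143 True -> higher probability of ablation failure
```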
In another study, which compared Tg and Tg/TSH ratio, also in low and intermediate groups of patients, and re-ablation outcomes, TSH significantly increased, Tg decreased, and Tg/TSH ratio decreased from 0.115±0.217 to 0.034±0.071 prior to re-ablation. A positive association was observed between treatment failure and Tg2 levels, as well as between treatment failure and Tg2/TSH2 ratio 61 . In a study by Lin et al., the Tg/TSH ratio was considered as a predictive factor for treatment failure in patients with DTC 62 . Further studies are needed to evaluate the predictive and prognostic value of this easily calculable variable in patients undergoing TSH stimulation.
Value of Thyroglobulin in Dynamic Risk Stratification
The 2015 ATA guidelines have classified patients with DTC into low-, intermediate- and high-risk groups 11 . The risk of recurrence is associated with risk stratification, with the risk of recurrence in the ATA intermediate-risk group ranging from 3% to 9%, while the risk of recurrence in ATA high-risk patients ranges from 23% to 40%. However, the initially estimated recurrence risk should be continually modified during follow up, since it can change over time as a function of disease status and response to therapy 11,63 . The majority of DTC patients show elevated preoperative Tg, but the predictive role has not been clearly outlined, since immunoassays cannot differentiate between Tg secreted by normal tissue and by malignant thyroid cancer cells 64 .
All patients classified as high-risk according to ATA should receive I-131 treatment, as this therapy improves both disease-free and disease-specific survival. In ATA low-risk patients with tumor size ≤1 cm, there is no indication for postoperative I-131 ablation therapy. However, in low-risk group patients with other risk factors, as well as in ATA intermediate-risk patients, data regarding indications for I-131 ablation are conflicting, and therefore I-131 ablation therapy is selectively utilized 11 . The response to initial therapy (with four categories of treatment response) was introduced by Tuttle et al. and modified in another study by Vaisman et al. 65,66 . Thyroglobulin measurement is one of the main factors in "response to therapy reclassification" in patients receiving radioiodine remnant ablation after total thyroidectomy 11 .
Patients with suppressed Tg <0.2 ng/mL, TSH stimulated Tg <1 ng/mL, and with negative imaging studies fall into the excellent response to therapy subgroup, with a very low risk of recurrence, and almost no disease-specific mortality 65,[67][68][69] . The "biochemical incomplete response" category includes patients with negative imaging studies, but with elevated Tg levels following treatment (suppressed Tg ≥1 ng/mL or stimulated Tg ≥10 ng/mL), or with rising TgAb values. The mortality rate in this subgroup is minimal, but a significant number of patients in this group develop structural disease recurrence 65,70 .
In the group with functional or structural evidence of disease (structural incomplete response), the Tg level and the presence (or absence) of TgAb do not change the response classification of these patients. The indeterminate response group includes patients with biochemical or structural findings that cannot be classified as either benign or malignant. Patients with non-stimulated detectable levels (Tg <1 ng/mL), or stimulated Tg detectable levels (Tg <10 ng/mL) fall into this subgroup. The patients with stable or declining anti-Tg antibodies without structural or functional disease are part of this "response" group 65,70 .
Thyroglobulin in Long-Term Follow-up of Patients with Differentiated Thyroid Cancer
Long-term monitoring of patients with DTC is guided by the patient's response to therapy during the first year of follow-up. Most cases with recurrences of DTC occur during the first years of follow-up, but may occasionally also occur many years after initial treatment 71 . At each follow-up, clinicians reclassify ongoing management recommendations in accordance with the patient's current clinical status. Detection of the possible persistent or recurrent disease during the first year after thyroidectomy is obtained with neck ultrasonography and measurement of TSH, as well as serum Tg levels. Additional imaging such as MRI, CT 72 , and FDG-PET-CT 73 is usually reserved for high-risk patients who typically have either biochemical or structural incomplete response to initial therapy and follow-up of intermediate-risk patients who demonstrate structural or biochemical incomplete response to treatment during the first year of follow-up. Diagnostic I-131 whole body scintigraphy has a role in the follow-up of these patients with high or intermediate risk of disease, but not in the follow-up of low-risk patients.
Serum Tg and TgAb in patients on thyroid hormone suppression are generally measured every 3 to 6 months for the first year following initial treatment. The interpretation of serum Tg depends on the type of initial therapy. For patients with total thyroidectomy and radioiodine remnant ablation, non-stimulated Tg should be <0.2 ng/mL and stimulated Tg <1 ng/mL in the absence of interfering antibodies. In patients with lobectomy, serum Tg levels are less useful because they will not reflect the presence or absence of malignant tissue, but will depend on the remaining thyroid lobe volume, current iodine status and TSH concentration. In these patients, follow-up is performed by neck ultrasonography and, when necessary, US-guided fine needle aspiration (FNA) of any suspected metastatic foci 24 .
Neck US is performed at 6- to 12-month intervals depending on the patient's risk stratification. Cervical lymph nodes are the most common site of recurrence in patients with papillary carcinoma. The most sensitive technique for localization of recurrent tumor in the neck is US. Metastatic lymph nodes on US demonstrate a cystic appearance, microcalcifications, bulging shape, loss of normal hilum, and peripheral vascularization 19,74 . US-guided FNA of suspected lymph nodes with Tg measurement in aspirates (FNA-Tg) should be performed in patients with suspected nodal metastases 19 . Values under 1 ng/mL are considered normal, values of 1-10 ng/mL should be compared with the results of cytology, and values above 10 ng/mL suggest the presence of malignancy 19 . Some authors suggest comparison of FNA-Tg and serum Tg levels as worthwhile. FNA-Tg is highly reliable in the diagnosis of neck nodal metastases 75 .
Serum Tg may fail to identify patients with a relatively small volume of metastatic disease. These small metastases are often located in the neck region and US is the method of choice for detection of nodal disease in these patients 11 . In cases with elevated serum Tg levels and negative radioiodine scan, the explanation for such findings could be the presence of tumor deposits too small to be detected by a scintillation camera, or loss of iodine uptake through tumor de-differentiation 76 . In such patients, neck ultrasound and neck/chest CT may be performed in order to detect metastatic disease. If negative, other imaging methods should be performed. In high-risk patients and in those with serum Tg >10 ng/mL, [18F]-FDG PET/CT should also be performed 73,77 .
Thyroglobulin Doubling Time
In medullary thyroid cancer, measurements of serum calcitonin and carcinoembryonic antigen (CEA) doubling-times have prognostic implications 78,79 . Miyauchi et al. have described the prognostic impact of serum Tg doubling time (Tg-DT) during follow up and under thyrotropin suppression. In this study, patients were divided into several groups: Tg-DT <1 year, 1-3 years, and ≥3 years. In the group with a Tg-DT of <1 year, disease-specific survival at 10 years was 50%, in the group with Tg-DT of 1-3 years it was 95%, and in other groups disease-specific survival was 100%. Thyroglobulin DT was found to be a statistically significant indicator of survival by univariate analysis. Multivariate analysis also revealed Tg-DT to be an independent predictor of distant metastases and loco-regional treatment failure 80 .
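The doubling time itself is usually computed assuming exponential growth between serial measurements, analogous to calcitonin and CEA doubling times; the sketch below uses hypothetical Tg values and is not taken from the cited studies.

```python
# Doubling time from two serial tumor-marker measurements (hypothetical values).
import math

def doubling_time(tg1, tg2, delta_t_days):
    """Days for Tg to double, assuming exponential growth between two
    measurements taken delta_t_days apart (tg2 > tg1 > 0)."""
    return delta_t_days * math.log(2) / math.log(tg2 / tg1)

print(round(doubling_time(2.0, 5.0, 365)))   # ~276 days
```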
Several studies addressed the issue of advanced/recurrent thyroid cancer and Tg-DT values. In a study by Rossling et al., the Tg-DT and other prognostic markers in patients with recurrent/progressive DTC were evaluated in uni- and multivariate analysis. The median observed Tg-DT was 212 days. Mortality risk was two times higher in patients with Tg-DT <5 months as compared with patients with Tg-DT >14 months 81 . In a study by Giovanella et | 2019-04-01T11:55:27.586Z | 2018-09-01T00:00:00.000 | {
"year": 2018,
"sha1": "2601d141673055a8f5070589eb4ca268e70788a3",
"oa_license": "CCBYNCND",
"oa_url": "https://hrcak.srce.hr/file/315436",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2601d141673055a8f5070589eb4ca268e70788a3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247520896 | pes2o/s2orc | v3-fos-license | Rheumatoid Arthritis in Spine Surgery: A Systematic Review and Meta-Analysis
Study Design: Systematic review and meta-analysis. Objective: The purpose of this study is to synthesize recommendations for perioperative medical management of RA patients and quantify outcomes after spine surgery when compared to patients without RA. Methods: A search of available literature on patients with RA and spine surgery was performed. Studies were included if they provided a direct comparison of outcomes between patients undergoing spine surgery with or without RA diagnosis. Meta-analysis was performed on operative time, estimated blood loss, hospital length of stay, overall complications, implant-related complications, reoperation, infection, pseudarthrosis, and adjacent segment disease. Results: Included in the analysis were 9 studies with 703 patients with RA undergoing spine surgery and 2569 patients without RA. In RA patients compared to non-RA patients undergoing spine surgery, the relative risk of infection was 2.29 times higher (P = .036), overall complications 1.61 times higher (P < .0001), implant-related complications 3.93 times higher (P = .009), and risk of reoperation 2.45 times higher (P < .0001). Hospital length of stay was 4.6 days longer in RA patients (P < .0001). Conclusions: Treatment of spinal pathology in patients with RA carries an increased risk of infection and implant-related complications. Spine-specific guidelines for perioperative management of antirheumatic medication deserve further exploration. All RA patients should be perioperatively co-managed by a rheumatologist. This review helps identify risk profiles in RA specific to spine surgery and may guide future studies seeking to medically optimize RA patients perioperatively.
Introduction
Rheumatoid arthritis (RA) is a chronic inflammatory autoimmune disease affecting 1-2% of the global population. 1,2 Initial treatment is medical management with nonsteroidal anti-inflammatory drugs (NSAIDs), corticosteroids, and others, 2 including novel disease modifying antirheumatic drugs (DMARDs) and biologics that directly affect the host autoimmune response. Although DMARDs and biologics have the potential to slow the progression of rheumatoid arthritis and protect the joints from permanent damage, 3,4 infectious and potentially other side effects have been attributed to use of DMARDs. 5,6 Many patients with RA will require surgical treatment for advanced disease even with a proper use of medication regimen, 3,7,8 though this may be decreasing over time. 9 As the spine is a frequent location for RA involvement, surgical intervention may be indicated in patients with myelopathy, radiculopathy, instability, or deformity. 7,8,[10][11][12] The systemic inflammatory nature of this autoimmune disease as well as the subsequent medical treatments have been implicated in increasing complication rates in orthopedic and other surgery. Numerous studies have shown that RA may increase the risk of developing a variety of postoperative complications, including wound infection and instrument failure, ultimately necessitating revision after initial spine surgery, [3][4][5][6][7] though other reports presented no appreciable differences in surgical outcomes or complications. 8,9 Prior studies have suggested the perioperative continuation of some DMARDs, biologics, and other antirheumatic medications may increase infection rates after surgery; [15][16][17][18][19] however, current evidence is varied based on type of medication, power of available studies to detect low probability events such as infection or other complications, and sometimes conflicting results. 15,16,[18][19][20][21][22] Additional concern exists for precipitating RA disease exacerbation while off medication. 20 Although this question is studied extensively in elective hip and knee arthroplasty and other elective orthopedic procedures, 15,18,20,23,24 there is a paucity of literature on spine surgery.
Considering these knowledge gaps, the objectives of this study were to synthesize recommendations for perioperative medical management of RA patients and quantify outcomes and complications after spine surgery when compared to patients without RA.
Data Sources and Search Strategy
Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and checklists were used for this systematic review. 10 A certified, experienced librarian carried out an extensive search of electronic databases in PubMed, Ovid/MEDLINE, Cochrane, and Scopus for published articles for all available years. The search strategy included the keywords "spine" or "spinal" or "rheumatoid" or (rheumatic adj arthritis) or "inflammatory arthritis" or "lumbar" or "lumbosacral" or "occipital-cervical" or "atlantoaxial" or "atlanto-axial" or "occipitoatlantoaxial" or "occipito-atlantoaxial" or "craniovertebral junction" or "subluxation" or "vertebral" or "intervertebral" or "disc" or "discs" or "sacral" or "sacrum" or "fusion" or "fused" or "fusing" or "fixation" or "decompression" or "arthrodesis" or "reconstruct". Two authors (C.O. and Y.Y) screened the studies for eligibility after a list of articles was obtained. The senior author (M.B.) was consulted in final decision-making for any discrepancies.
Eligibility Criteria
The following inclusion criteria were used for eligibility: (i) studies that included patients with a diagnosis of rheumatoid arthritis undergoing spine surgery compared to a control group of patients without RA undergoing spine surgery; (ii) studies that reported any complications or made an assessment regarding complications for patients with RA directly compared to patients without RA; and (iii) studies in English language and with available full text. Reviews and studies that did not present specific information about spine surgery were excluded from our analysis. In addition, (i) studies that included patients with a diagnosis of rheumatoid arthritis prior to spine surgery, (ii) studies that provided details regarding antirheumatic medication use prior to surgery (name, dose, timing), and (iii) studies that reported any complications or made an assessment regarding complications (including when no complications were present) were included for a separate systematic review on perioperative medication use.
Data Extraction and Processing
The following information was collected for included studies: (i) author name and study year, (ii) average age, gender, and follow-up, (iii) number of patients specific to RA and Non-RA cohorts, (iv) location, type and specific levels of the spine surgery, (v) operative parameters including blood loss and operative time, (vi) any complications seen in patients undergoing spine surgery including wound infection, (vii) reoperation at the same level, and (viii) diagnosis of adjacent segment disease. For the systematic review of perioperative medication use, information on (i) author name and study year, (ii) study design, (iii) number of patients (total and spine-specific numbers if other procedures are included), (iv) location and specific levels of the spine surgery, (v) name and dose of the medication used in rheumatoid arthritis treatment, (vi) whether the medication was stopped and restarted perioperatively, and (vii) any complications seen in patients undergoing spine surgery was collected.
Statistical Analysis
Mean differences (MDs) were used to summarize continuous variables, and categorical outcomes were presented using risk ratios (RRs) with 95% confidence intervals (CI). The outcomes of interest were compared between patients undergoing spine surgery with and without RA diagnosis. Heterogeneity was represented with the Higgins I-squared statistic (I²). A random effects model was used when meta-analyses indicated greater than 50% heterogeneity. Pooled estimates and effect sizes were represented by forest plots. Statistical analyses were conducted using R 4.0.5 (R Foundation for Statistical Computing). P values <.05 were considered significant.
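To make the pooling procedure concrete, the sketch below reproduces the standard inverse-variance approach on hypothetical 2x2 counts. It is illustrative only: the actual analyses were performed in R 4.0.5, a fixed-effect pooling is shown here, and the review switched to a random-effects model when I² exceeded 50%.

```python
# Inverse-variance pooled risk ratio on hypothetical study-level 2x2 counts.
import numpy as np

ra_events, ra_n   = np.array([10, 7, 15]), np.array([80, 60, 120])     # hypothetical RA arms
ctl_events, ctl_n = np.array([12, 9, 20]), np.array([300, 250, 400])   # hypothetical non-RA arms

log_rr = np.log((ra_events / ra_n) / (ctl_events / ctl_n))             # per-study log risk ratio
se     = np.sqrt(1/ra_events - 1/ra_n + 1/ctl_events - 1/ctl_n)        # SE of log RR
w      = 1 / se**2                                                     # inverse-variance weights

pooled = np.exp(np.sum(w * log_rr) / np.sum(w))
ci     = np.exp(np.log(pooled) + np.array([-1.96, 1.96]) / np.sqrt(np.sum(w)))
Q      = np.sum(w * (log_rr - np.log(pooled))**2)                      # Cochran's Q
I2     = max(0.0, (Q - (len(log_rr) - 1)) / Q) * 100                   # Higgins I-squared (%)
print(round(pooled, 2), ci.round(2), round(I2, 1))
```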
Level of Evidence
Levels of evidence were assessed by use of the Oxford Centre for Evidence-Based Medicine Levels of Evidence. 11 It was assessed by 2 authors in parallel with arbitration by a third author in cases of disagreement.
Search Results and Study Characteristics
An initial search of the electronic databases revealed 2016 studies which were filtered further to 391 relevant articles. From these, 9 full-text articles were included in the qualitative assessment. (Figure 1). 3,5,8,9,[12][13][14][15][16] Included studies were published between 2008 and 2019 with a cumulative patient number of 703 diagnosed with RA and 2569 patients without RA. A total of 550 (78%) of all RA patients and 1259 (49%) of all Non-RA patients were female. Seven studies were retrospective cohorts, utilizing various databases comparing RA to Non-RA patients undergoing spine operations 3,5,8,9,12,13,15 1 was a prospective cohort, 14 and the other was a case-control study. 16 The average age for RA patients was 65; for Non-RA patients, this was 63.4. The average follow-up for RA patients was 40.4 months and 42.6 months for the Non-RA group. These characteristics have been summarized further in Table 1.
Perioperative Details
In the RA and non-RA groups, operative time (95% CI: À28.78 to 17.5, P = .633, Figure 2) and estimated blood loss (95% CI: À20.43 to 137.30, P = .147, Figure 3) were not significantly different. Hospital length of stay was 4.62 days longer on average in the RA group (95% CI: 3.97-5.26, P < .0001, Figure 4). Due to the heterogeneity of reporting, number of operative levels and perioperative mortality in each group could not be examined or compared.
Due to the varied nature or absence of reporting in the included studies, several variables of interest could not be reliably examined or compared, including: distinction between deep and superficial infection, the distinction between medical and surgical complications, and patient reported outcome measures.
Perioperative Antirheumatic Medication Management
After applying the inclusion/exclusion criteria, 7 full-text articles were included in the qualitative assessment of perioperative antirheumatic medication management for spine surgery.
Guidelines and Recommendations From Surgical Literature
Few studies specifically investigate perioperative RA medication management for patients undergoing spine surgery. 24 Time to stop these medications ranges from 9.7 days (methotrexate) to 26.4 days (infliximab). Regarding time to restart, the earliest medications to be readministered were methotrexate (10.4 days) and tacrolimus (10.5 days), and the latest medication to be readministered was infliximab (24.8 days). 24 More recently, the American College of Rheumatology and American Association of Hip and Knee Surgeons published guidelines for perioperative management of antirheumatic medication, in which medications were mostly assessed in groups rather than individually. 25 Commonly used RA medications (biologic and non-biologic) are listed in Table 3. Guidelines suggested that DMARDs can be continued, while biologics were recommended to be stopped prior to surgery with different timing (range: 2 days to 7 months) depending on the medication. In addition, mycophenolate mofetil, azathioprine, cyclosporine and tacrolimus were advised to be withheld if the patient does not have severe systemic lupus erythematosus. 25
Discussion
The primary purpose of this review was to evaluate the differences in peri-and postoperative outcomes with RA diagnosis among patients undergoing spine surgery and guide management of the medical treatments for rheumatoid arthritis in the perioperative period for spine surgery. The included studies compared RA and non-RA patients with respect to clinical outcomes, though the variable reporting of these outcomes did limit the quantitative analysis. The qualitative review of medication management revealed additional variability.
Spine surgery for patients with RA is indicated for myelopathy, radiculopathy, instability, or deformity. [26][27][28][29][30] Existing literature has shown differences in surgical outcomes for patients previously diagnosed with RA, often affected by the complication profile seen in RA. 3,6,31 Studies reporting on spine-specific outcomes in RA patients have shown some mixed results. In a retrospective review included in this study, Crawford et al. found no statistically significant difference between RA and non-RA patients in complications or outcomes after surgery. 32 Kang, in a retrospective matched cohort comparison of RA and non-RA posterolateral lumbar fusion patients, found higher complication rates, including infection, nonunion, implant failure, and overall reoperation in those patients with RA. 5 Horowitz performed a database review of Medicare patients with RA undergoing 1- or 2-level ACDF, finding increased medical, surgical, and infectious complications when compared to those without RA, although information on the perioperative medical treatment of these patients was unavailable. 6 Valuable details regarding perioperative medical management, such as the dosing or timing of weaning or cessation, however, were not available in any of the above studies. While some studies have shown higher rates of infection in RA patients, details of perioperative medical management are often not well reported, and the causative factors remain unclear. Zhang et al. in a previous systematic review assessed the effect of RA on infection and complications after spine surgery, finding significantly greater rates of complications in the RA cohort. 33 However, the review included 6 studies, 2 of which included information from databases and posed a risk of overlapping patients. Moreover, only complications and infections were included in the study. The present review included 9 independent studies and assessed a broader spectrum of parameters, including estimated blood loss, operative time, and implant-related complications in addition to the rates of overall complications and infection.
The increased risk of postoperative complications, including infections and wound breakdown, previously has been attributed to increased comorbidities in patients undergoing spine surgery. Several studies have found increased postoperative complication rates in patients with comorbidities including BMI, smoking, and diabetes, while others have demonstrated that increased operative times are associated with worse postoperative outcomes. [34][35][36] In this review, there was no significant difference in operative time between RA and non-RA patients and overall comorbid burden was unable to be compared.
In this review, implant-related complications were significantly associated with the RA group. A multicenter prospective study by Soroceanu, which assessed 245 patients for the incidence, risk factors, and impact of implant-related complications and quality of life measures after adult spinal deformity correction, identified that nearly a third of the cohort experienced this complication, with over half of them needing reoperation after 2 years. 37 Seki investigated the differences in rates of adjacent segment disease (ASD) and clinical outcomes in RA patients undergoing lumbar decompression. Results from this study showed a significantly increased rate of ASD in RA patients undergoing lumbar fusion. 28 Existing literature provides some direct and indirect clues to the safety of continuing certain medications in and around the time of spine surgery. Unfortunately, some studies have produced contradictory results. There is evidence that perioperative continuation of prednisone, 12,38,39 hydroxychloroquine, 38 leflunomide, 40 and DMARDs 18,19,41,42 increases the risk of infection; however, other studies have shown no increase of infection with prednisone, 22,32 methotrexate 38 or those same or different DMARDs. 32,42,43 Due to long-term corticosteroid use, many patients with RA may also carry increased risk of osteoporosis, present in up to 30% in some populations. 44 Theoretical risks also exist for decreased healing capacity of bone due to chronic inflammation or prednisone use. 45 Vertebral fractures and implant-related issues stemming from osteoporosis are noted in several studies. 5,46,47 Excluding infection or wound problems, other complications appear similar in some studies 8,12 but increased in RA populations in others. 3,5 Regarding clinical outcomes, most studies show that spinal surgery provides predictable improvements in outcome measures for patients with RA, similar to those in patients without an RA diagnosis. 5,8,12 Progression of rheumatic pathology in the spine is noted in several studies, which may affect longer-term outcomes, 5,30 including those specific to the spine. 40,41
Table 3. Different types of non-biologic and biologic DMARDs.
Non-Biologic DMARDs
- Conventional synthetic: methotrexate, leflunomide, hydroxychloroquine sulfate, sulfasalazine
- Targeted synthetic (Janus kinase (JAK) inhibitors): baricitinib, tofacitinib, upadacitinib
Biologic DMARDs
- Tumor necrosis factor (TNF) inhibitors: adalimumab, certolizumab, etanercept, golimumab, infliximab
- Anti-B cell (CD-20): rituximab
- Anti-T cell stimulation: abatacept
- Interleukin-6 (IL-6) inhibitors: sarilumab, tocilizumab
- Interleukin-1 receptor (IL-1) inhibitors: anakinra
Newer antirheumatic medications appear to be decreasing the burden of spinal disease over time 26,48 ; however, medical treatments both old and new are not without risk. Terashima reported on a 10-year prospective cohort study enrolling RA patients without initial cervical instability, noting that corticosteroid use correlated with development of more severe cervical pathology. 27 Despite the high quality and long-term follow-up of this study, it is unknown whether prednisone treatment indicates a more severe disease burden or if treatment is associated with this poorer outcome.
Limitations
Several limitations exist in this systematic review. First, the included studies were retrospective in nature; as a result, the influence of selection and recall biases cannot be fully excluded. Second, there was significant heterogeneity across studies. The types of operations were different, and the types of reported complications also varied greatly across studies; therefore, understanding the details surrounding individual complications was not possible. Significant heterogeneity also existed between studies in the reporting of operative parameters such as EBL and length of surgery for the respective operative types. Finally, several studies reported on preoperative use of anti-RA medications; however, this reporting was inconsistent across studies. Moreover, frequency of use and specific medications or dosing were often not reported.
Conclusion
Rheumatologic disease continues to afflict patients with spinal pathology, and patients often require surgical treatment despite recent advances in medication regimens. There is a dearth of current guidance on the relative risks for patients with RA undergoing spine surgery. The results from this meta-analysis suggest that patients with RA are at a significantly increased risk of postoperative complications, including surgical site infections, following spine surgery. Furthermore, the perioperative medication management for these patients appears to have significant variability in timing, cessation or continuance, and types of medication. All RA patients should be perioperatively co-managed by a rheumatologist. As in other areas of surgical practice, further investigation is needed to provide specific, patient-centered recommendations for perioperative medication management in spine surgery, to both optimize patients and minimize risk.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article. | 2022-03-19T06:23:27.484Z | 2022-03-18T00:00:00.000 | {
"year": 2022,
"sha1": "48e878e6ff3c0f57d9f506a09cd8965e9df27851",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/21925682211057543",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "d0d6849ab06618da2497bb6ae919337852d8a2d6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268143178 | pes2o/s2orc | v3-fos-license | Modeling of finned flat tube heat exchangers and search of Nusselt-Reynolds numbers correlations
Using a wind tunnel, a series of model finned flat tube copper radiators was studied. All geometrical parameters of the studied radiators were fixed except for the height of the fin, which was varied in the range from 5 to 20 mm. It was found that the global water-air heat transfer coefficient weakly depends on the water flow rate and significantly depends on the air flow rate. Correlations between the Nusselt and Reynolds numbers for fins with different heights are found. Based on the obtained correlations, a procedure for scaling the heat exchanger, verified experimentally, is proposed. The experiments performed demonstrate the efficiency and time-saving potential of the proposed method for choosing the optimal finned flat tube heat exchanger for adsorption heat conversion systems.
Introduction
Due to the worsening environmental situation on the planet, there is a growing interest in energy-saving technologies in the global community. Adsorption heat transformation (AHT) is an energy-saving technology that allows the use of heat from both alternative energy sources and waste heat that is otherwise uselessly dissipated in industry, transport, etc. AHT is a technology that makes it possible to offer the consumer both heat production (adsorption floor heating systems) and cold production (adsorption cooling, air conditioning and ice production systems) [1-4]. The main difficulty in using alternative energy sources and waste heat is their relatively low temperature potential of about 100 °C, so it is quite difficult to return such heat to a useful cycle. Indeed, solar collectors of the simplest designs heat the thermal fluid up to 80-100 °C, and the temperature of a large number of industrial heat losses is in the same range. Adsorption heat transformers are able to work with heat of low temperature potential, but despite significant progress in recent years, commercially available AHT devices are still rare on the market and need to be optimized [5-7]. One of the most important directions of AHT optimization is the acceleration of sorption/desorption processes, resulting in an improvement of the specific power. In the last two decades, the world scientific community has paid increased attention to the development of adsorption methods for low-temperature heat transformation [8] as a real alternative to current compression and absorption technologies. Indeed, in recent decades, there has been a growing interest in adsorption heat transformers as an alternative to similar compression systems. The results of research in this area can be summarized as follows:
- Due to the extremely low consumption of electrical energy, adsorption heat transformers are an environmentally friendly alternative to compression devices for heat and cold generation [8]. To date, various adsorption materials have been widely developed, which makes it possible to implement the working cycles of AHT with high efficiency. For AHT systems, both traditional (silica gels, zeolites, activated carbons) and innovative sorbents (aluminophosphates, "salt in a porous matrix" composites) interacting with a number of working fluids (water vapor, methanol, and ammonia) can be used [9].
- Despite the development of AHT technology, due to the low specific power of heat conversion (100-300 W per kg of adsorbent) the devices are characterized by impressive dimensions and cost and, as a result, occupy a small market share [33]. Previously, it was shown that the low power values of AHT devices [2] are due not to the thermodynamic properties of the sorbent, but rather to the conditions of the cycle (for example, the duration of the adsorption/desorption stages [10]) and the organization of heat and mass transfer [11] in the system. Thus, in order to increase the power of AHT units, it is necessary to improve the heat and mass transfer in the system under consideration. The process of heat conversion occurs in the adsorber-heat exchanger (AdHex), which is why this unit should be optimized first of all [12]. At the moment, commercial radiators are generally used as heat exchangers for AHT creation. Such modules are initially designed for processes other than AHT, in most cases for cooling automobile/motorbike engines by dissipating heat into the environment. At present, only the first steps towards an appropriate procedure for choosing the optimal heat exchanger for AHT among commercial units have been undertaken. Recently it was shown that finned flat tube (FFT) heat exchangers (Hexes) [13-19] are more promising for AHT than units with a different geometry [20] (Fig. 1). On the basis of numerical analysis, it was demonstrated that FFT Hexes show the best efficiency [21,22].
The main parameters required to characterize the core of an FFT Hex are (Figure 1): δf, δw - thicknesses of the fins and of the channels' walls, c - channel internal thickness, Hf - fin height, Δf - distance between fins, D - width of the Hex. Other important parameters characterizing heat transfer from medium 1 to medium 2 are: A (m2) - the primary surface of the Hex (channels' area), Af (m2) - the secondary surface (fins' area), λ - thermal conductivity of the Hex material (W/(m·K)), and the heat transfer coefficients h1 / h2 (W/(m2 K)) between the Hex material and medium 1 / medium 2. The global heat transfer coefficient UA (W/K) of the whole heat exchanger, providing heat transfer between media 1 and 2, can be found according to formula (1) of [23,24], where E is the coefficient of fin effectiveness and K = (A + Af)/A is the surface extension coefficient [23]. In the case of rectangular fins placed between two flat channels, the E coefficient can be written as in expression (2) of [25]. For an FFT Hex with thin ducts, when the flow of medium 1 is laminar, the Nusselt number for medium 1 is close to a constant, Nu ≈ 8 [26]. Taking this into account, the expression for the heat transfer coefficient h1 between the metal and medium 1 can be written as h1 = Nu·λ1/Φc (equation 3), where λ1 is the thermal conductivity of medium 1 and Φc = 4Dc/(2D + 2c) is the channel's hydraulic diameter. The same equation can be used for air.
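Equations (1) and (2) themselves are not reproduced in the text extracted here. Purely as an illustration, the following Python sketch assembles UA from the quantities defined above, assuming the common series-resistance form 1/UA = 1/(h1·A) + 1/(h2·(A + E·Af)) for equation (1) and the usual tanh-type effectiveness of a rectangular fin clamped between two flat channels for equation (2); these specific functional forms are assumptions, not taken from the source.

```python
import math

def hydraulic_diameter(D, c):
    # Phi_c = 4*D*c / (2*D + 2*c), as given in the text
    return 4.0 * D * c / (2.0 * D + 2.0 * c)

def h_liquid(lambda_1, phi_c, Nu=8.0):
    # h1 = Nu * lambda_1 / Phi_c, with Nu ~ 8 for laminar flow in thin ducts
    return Nu * lambda_1 / phi_c

def fin_effectiveness(h2, lam_metal, delta_f, H_f):
    # Assumed standard form for a rectangular fin spanning two flat channels:
    # E = tanh(m*H_f/2) / (m*H_f/2), with m = sqrt(2*h2 / (lam_metal*delta_f))
    m = math.sqrt(2.0 * h2 / (lam_metal * delta_f))
    x = m * H_f / 2.0
    return math.tanh(x) / x

def global_UA(h1, h2, A, A_f, E):
    # Assumed series-resistance form: 1/UA = 1/(h1*A) + 1/(h2*(A + E*A_f))
    return 1.0 / (1.0 / (h1 * A) + 1.0 / (h2 * (A + E * A_f)))

# Illustrative (hypothetical) values: copper fins, water inside, sorbent outside
phi_c = hydraulic_diameter(D=0.05, c=0.001)                      # m
h1 = h_liquid(lambda_1=0.6, phi_c=phi_c)                         # W/(m2 K)
E = fin_effectiveness(h2=125.0, lam_metal=400.0, delta_f=50e-6, H_f=0.02)
print(round(global_UA(h1, 125.0, A=0.05, A_f=0.5, E=E), 1), "W/K")
```

None of the numerical inputs above correspond to the tested radiators; they merely indicate the order of magnitude of the quantities involved.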
When a heat exchanger loaded with sorbent is used in AHT, the adsorbent grains can be considered as medium 2 with a constant heat transfer coefficient h2 between the metal surface and the granules. The values of h2 for typical sorbents are in the range 50-200 W/(m2 K) [27]. On the other hand, when the Hex is used as an air-to-liquid radiator and medium 2 (air) flows through the finned volume, the air-side Nu and h2 correlate with the air velocity uair or, in other words, with the Reynolds number of the air, Reair, which is proportional to uair. The correlation formula between Nu and Re can be found in the classical literature [28] as equation (4), where Pr is the Prandtl number of medium 2. In the case of non-ideal plates, for example perforated or wavy plates, instead of (4) the correlation between the Nusselt and Reynolds numbers can be presented in the more general form Nu = a·Re^b·Pr^c (equation 5), where a, b and c are empirical coefficients. For more complicated situations, for example non-steady flow and the presence of turbulence, a huge number of correlations has been presented in the literature [30-33]. Knowing these correlations makes it possible to obtain Nu and h2 for any given geometry of the Hex elements.
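As a companion to the sketch above, the air-side coefficient can be estimated from a correlation of the form of equation (5). The Re definition used below (air velocity times hydraulic diameter over kinematic viscosity) and the default coefficients a = 0.664, b = 0.5, c = 1/3 (the classical laminar values) are assumptions made for illustration; the coefficients actually fitted for each radiator are reported later in the paper.

```python
def reynolds(u_air, phi, nu_air=1.6e-5):
    # Re = u * Phi / nu; nu_air ~ 1.6e-5 m^2/s for room-temperature air (assumed value)
    return u_air * phi / nu_air

def nusselt_air(Re, a=0.664, b=0.5, c=1.0 / 3.0, Pr=0.71):
    # Empirical correlation Nu = a * Re**b * Pr**c (the general form of equation 5)
    return a * Re ** b * Pr ** c

def h_air(Nu, lambda_air, phi):
    # Air-side coefficient, same form as on the liquid side: h2 = Nu * lambda_air / Phi
    return Nu * lambda_air / phi

# Example: air at 2 m/s through a passage with a 3 mm hydraulic diameter
Re = reynolds(u_air=2.0, phi=0.003)
Nu = nusselt_air(Re)
print(round(Re), round(Nu, 1), round(h_air(Nu, lambda_air=0.026, phi=0.003), 1))
```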
The results of [34] confirm that if all the parameters of equations (1, 2) are known, the global heat transfer coefficient UA for a given Hex can be found easily. In [34], a small but representative Hex with known geometry was manufactured from the core of a Yamaha Aerox YQ50 radiator and tested under the conditions of an adsorption air conditioning cycle. The experimentally found UA and the maximal power transferred from sorbent to water for the tested Hex were very close to the theoretical values of the same parameters calculated according to (1, 2). In [35], eight commercial radiators were considered and detailed information on their cores' geometry was comprehensively analysed. On the basis of this information, the UA coefficients for heat exchangers with a fixed volume were estimated. After that, three small Hexes were manufactured and tested under adsorption heat storage cycle conditions. For their production, the commercial radiators that differed most from each other from the UA point of view were used. The coincidence of experimental and theoretically estimated UA values was observed again. So, there are two ways of choosing the Hex that is optimal for a particular AHT cycle among the great number of radiators produced by industry. The first possibility is direct measurement of the UA coefficient under the conditions of the considered AHT application. The second one is calculation of the appropriate UA coefficient on the basis of the Hex geometric parameters and the heat transfer coefficients h1 and h2. The latter way seems much easier than the former. At the same time, one should take into consideration that information about a radiator's geometry and the sorbent-metal heat transfer coefficient is not widely available. Indeed, in order to get data about a Hex's geometry, it has to be dismantled. Moreover, measurement of the heat transfer coefficient between metal and sorbent is a complicated and time-consuming experiment.
On the other hand, it is important to remember that the difference in roles (air cooling of an automobile engine versus adsorption heat transformation) of commercial radiators lies in the nature of the h2 coefficient. In the case of air-to-liquid engine cooling, the h2 coefficient characterizes convective heat transfer between air and metal; in the case of adsorption heat transformation, the corresponding coefficient represents conductive heat transfer between the sorbent and the metallic support. It is evident that the liquid side (contact with medium 1) of the Hex is the same for both roles. The question is: "Will the Hex with the highest value of UA for the air-liquid engine cooling process (among a number of commercial radiators) also be optimal for adsorption heat transformation?". In [36] it was shown that: "Yes, testing of commercial radiators in a wind tunnel can give information about their prospects for AHT". The heat exchangers considered in the abovementioned article were manufactured from commercially produced radiators. That is why all the geometric parameters of those Hexes differed from each other.
The purpose of this work is to investigate, by means of a wind tunnel, a series of model Hexes with the same geometric parameters except for the fin height. Fixing the other parameters makes it possible to analyse the impact of fin height on the global heat transfer coefficient. Moreover, the possibility of predicting the global heat transfer coefficient when scaling the heat exchanger will be considered.
Heat exchangers
When manufacturing model heat exchangers with the same core volume (Hexes 1M-3M, Table 2), but with a different number of channels, copper was used, as it is a highly heat-conducting and easily processed material (λCu = 400 W/(m·K)). Additionally, heat exchanger 4M, with geometry similar to that of Hex 3M but twice its volume, was manufactured in order to check the correctness of the applicability of expression (1) at scaling. When the number of channels changes, the height of the Hex fins changes too (Table 2). The parameters that remained the same for all exchangers were: the channel wall thickness δw = 500 µm, the inner height of the channel h'c = 1 mm, the distance between the fins Δf = 1 mm, and the fin thickness δf = 50 µm. For a correct comparison of volumetric power, radiators 1M-3M were manufactured with almost the same core volume V = 160 ± 3 cm3. This is important because it is precisely the volumetric conductance (UA related to the volume of the unit) that is one of the key parameters of adsorbers in AHT applications.
Wind tunnel set up
The main parts of the experimental setup for measuring the heat fluxes in the tested Hexes are: a wind tunnel, a water circuit and a data recording system (Figure 3a). The tested Hex was situated in the wind tunnel, which is a thermally insulated channel of rectangular cross-section with a ventilator, a flowmeter and a flow regulator. The air circulates through this tunnel. The water circuit consists of the Hex channels, a flow regulator, a thermostat, a flowmeter, pipelines and valves. The temperatures of the water and air streams were monitored by 8 T-type thermocouples. Five of them were situated in the outlet part of the wind tunnel, spaced 5 cm apart (Figure 3b). For data collection, an ADAM 4018 analog-to-digital converter and a personal computer were used.
Experimental procedure and data evaluation.
After setting certain values of the air and water flows, the measurements were carried out. The inlet water temperature was kept almost constant at Tin(w) = 40.0 ± 0.1 °C, whereas the air inlet temperature was not stabilized and was approximately equal to room temperature. All flow temperatures, both water and air, were recorded during the measurements. Among the signals from the five thermocouples characterizing the outlet air flow, the maximal temperature value Tout(air) was considered.
The heat flows of air and water (Q(air) and Q(w), respectively), as well as the average flow Q(av), can be calculated from expressions of the form Q = Cp·ρ·f·ΔT (with Q(av) taken as the mean of Q(air) and Q(w)), where Cp is the heat capacity, ρ the density, f the flow rate and ΔT the inlet-outlet temperature difference of the air/water stream [37-40]. The temperature driving force of the process can be characterized by the logarithmic mean temperature difference, LMTD. Using the LMTD and the heat flow, the global heat transfer coefficient can be calculated as UA = Q/LMTD. For further analysis, an average value UA(av) was used.
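A minimal sketch of this data reduction is given below. The counter-flow pairing of the temperature differences in the LMTD and the use of the mean of the air-side and water-side heat flows are assumptions made for illustration, and the readings are hypothetical.

```python
import math

def heat_flow(cp, rho, flow, T_in, T_out):
    # Q = Cp * rho * f * |T_in - T_out|  (cp in J/(kg K), rho in kg/m^3, flow in m^3/s)
    return cp * rho * flow * abs(T_in - T_out)

def lmtd(T_hot_in, T_hot_out, T_cold_in, T_cold_out):
    # Logarithmic mean temperature difference (counter-flow pairing assumed)
    dT1 = T_hot_in - T_cold_out
    dT2 = T_hot_out - T_cold_in
    if abs(dT1 - dT2) < 1e-9:
        return dT1
    return (dT1 - dT2) / math.log(dT1 / dT2)

# Hypothetical steady-state readings: water 40.0 -> 38.5 C, air 22.0 -> 30.0 C
Q_w   = heat_flow(cp=4180.0, rho=998.0, flow=1.9e-5,  T_in=40.0, T_out=38.5)   # water side
Q_air = heat_flow(cp=1005.0, rho=1.2,   flow=1.25e-2, T_in=22.0, T_out=30.0)   # air side
Q_av  = 0.5 * (Q_w + Q_air)
UA = Q_av / lmtd(T_hot_in=40.0, T_hot_out=38.5, T_cold_in=22.0, T_cold_out=30.0)
print(round(Q_av, 1), "W,", round(UA, 1), "W/K")   # roughly 120 W and 9 W/K here
```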
Error analysis.
Instrumental errors (air flow meter - 3%, water flow meter - 2%, thermocouples and ADC - ±0.1 K) must be taken into account to obtain the error of the final result. From these, the absolute and relative errors for Q, LMTD and UA can be found. The cumulative relative error in the determination of the UA coefficient is 11.7%.
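One common way to propagate such instrumental errors, shown below purely as an illustration, is to combine independent relative contributions in quadrature; the source does not state which propagation rule was applied, so this sketch will not necessarily reproduce the quoted 11.7%.

```python
def combined_relative_error(*rel_errors):
    # Quadrature sum of independent relative errors
    return sum(e ** 2 for e in rel_errors) ** 0.5

# Hypothetical contributions: air flow meter plus the temperature-difference term
# arising from the +/-0.1 K thermocouple/ADC uncertainty over an assumed 8 K difference
err_flow_air = 0.03
err_dT_air   = 0.2 / 8.0
print(round(combined_relative_error(err_flow_air, err_dT_air) * 100, 1), "%")
```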
Results and discussion
Typical dependences of the temperatures at the inlet and outlet of the radiator, and of the logarithmic mean temperature difference (LMTD), on time are presented in Figure 3. This figure shows that about 500 s after the beginning of the experiment the temperatures approach their stationary values and then remain constant within the specified uncertainty (Table 2). The same tendency is clearly reflected in the temporal behaviour of the heat fluxes (Figure 4a). After the experimental run begins, the fluxes change rapidly during the first ca. 200 s and then slowly approach a certain value. After approximately 500 seconds, the fluxes become constant and remain stable for the rest of the run. This indicates that the system reaches a stationary state. As both the heat fluxes and the LMTD values reach a constant magnitude, the values of UA also become constant, as shown in Figure 4b. The mean value of the global heat transfer coefficient UAav, averaged over the whole steady-state period (ca. 600 s), was taken for further evaluation. The experimental data on the dependences of UA on the air and water flows for the studied heat exchangers are summarized in Figure 6. One can see that the values of UA for radiator 1M lie in the range UA = 8.5-18 W/K, while the UA coefficients for radiators 2M and 3M are lower, UA = 6-12 W/K. It can also be seen that the global heat transfer coefficients for all the studied heat exchangers depend only weakly on the water flow. This confirms the theoretical finding that the Nusselt number in thin planar channels is independent of the water flow velocity. At the same time, the heat transfer coefficients depend strongly on the air flow rate. This is in agreement with both theoretical considerations [33] and previously observed experimental results [36]. This makes it possible to find the correlations between the Nusselt and Reynolds numbers based on the experimental data obtained for the model heat exchangers considered in this paper. Indeed, for a heat exchanger with a fixed geometry, assuming that the metal-to-liquid heat transfer coefficient is constant, the global heat transfer coefficient is a function of only one parameter - the metal-to-air heat transfer coefficient h2 (Figure 7). On the other hand, for a fixed heat exchanger geometry, the air-to-metal heat transfer coefficient is a single-valued function of the Nusselt number, as shown by equation (3). It should also be noted that for each heat exchanger of fixed geometry, the Reynolds numbers corresponding to certain air flow rates can be calculated using the relationship Re = u·Φ/ν (relation 12), where u is the velocity of the air, Φ is the hydraulic diameter and ν is the (kinematic) viscosity of the air. Thus, the experimentally measured global heat transfer coefficients, the corresponding air-to-metal heat transfer coefficients and the corresponding Nusselt numbers can be unambiguously correlated with the Reynolds numbers of the air flow. The difficulty of this procedure is that, since the direct dependence of the experimentally measured global heat transfer coefficient UA on the air-to-metal heat transfer coefficient is a hyperbolic function, it is hard to find the inverse dependence in analytical form. Therefore, the problem of finding the inverse function was solved graphically, as shown in Figure 7. In this way, one-to-one correspondences between the experimental values of UA, h2 and Nu and the appropriate Re values were established for all the studied radiators. These correspondences are presented in Figure 8 in double logarithmic coordinates. This kind of data presentation allows obtaining the empirical coefficients a and b of the Nu-Re correlations (equation 5).
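The graphical inversion of the hyperbolic UA(h2) dependence described above can equally be performed numerically. The sketch below uses simple bisection and relies on the fin_effectiveness() and global_UA() helpers from the earlier sketch, so it inherits the same assumed forms of equations (1) and (2).

```python
def h2_from_measured_UA(UA_measured, h1, A, A_f, lam_metal, delta_f, H_f,
                        lo=1.0, hi=5000.0, tol=1e-6):
    # UA(h2) is monotonically increasing in h2, so bisection is sufficient
    def ua(h2):
        E = fin_effectiveness(h2, lam_metal, delta_f, H_f)
        return global_UA(h1, h2, A, A_f, E)
    a, b = lo, hi
    for _ in range(200):
        mid = 0.5 * (a + b)
        if ua(mid) < UA_measured:
            a = mid
        else:
            b = mid
        if b - a < tol:
            break
    return 0.5 * (a + b)
```

Each recovered h2 value can then be converted to Nu via equation (3) and paired with the Re computed from the corresponding air flow rate.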
The obtained coefficients are summarized in Table 3 together with the ranges of Re, Nu and the corresponding h2 values in which the abovementioned correlations are valid. The data presented in Table 3 demonstrate that the Nu-Re correlations found for the tested heat exchangers are very close to the classical formula with a = 0.64 and b = 0.5. However, some difference between the obtained coefficients and the theoretically predicted ones may indicate, for example, the non-ideal shape of the fins in the real heat exchangers tested in this paper. In order to demonstrate the scaling possibility, a double-sized radiator was manufactured with fin and channel geometry identical to that of heat exchanger 3M. This up-sized radiator is designated in Table 1 as 4M and is two times longer than radiator 3M, whereas all the other geometric parameters remained unchanged. The surface of the 4M radiator is also two times larger than that of the 3M radiator (Table 1). This enlarged radiator was tested in the same way as the 1M-3M heat exchangers were tested before. The raw results of the direct UA measurements for this enlarged radiator (green symbols) are presented in Figure 10 and compared with the data for the single-sized heat exchanger (red symbols). The red 3D surface interpolates the UA data for the small heat exchanger. Multiplying the UA values of this surface by 2 gives a tentative up-scaled 3D surface (green). Figure 10a clearly shows that a simple proportional up-scaling is not a proper way of calculating the parameters of the radiator when its external sizes are scaled. Indeed, the experimental points measured for the 4M radiator visibly differ from this tentative up-scaled surface. What, then, is the correct way of calculating the dependences of UA on the fluid flow velocities for a radiator with known core geometry when the dimensions of the heat exchanger are varied? Figure 10b answers this question and gives the correct scaling procedure. The green 3D surface in Figure 10b is constructed in the following way: a) for the selected flow rate, the corresponding Re number is calculated using relation (12) and the appropriate Nu number is found from the Nu-Re correlation; b) the value of the heat transfer coefficient h2 is calculated from the Nu number using (3); c) the global heat transfer coefficient UA is assessed for the radiator of known dimensions using (1-3, 5). It is worth noting that precisely the correlations found for the small radiator 3M were used for finding the appropriate h2 and UA values (green plane in Figure 10b) for the up-scaled radiator 4M. Recall that the core geometry of these radiators is identical (Table 1). Figure 10b reveals that the results of the theoretical scaling (green 3D surface) and the results of the direct UA measurements (green symbols) coincide with reasonable accuracy. This confirms the correctness of the proposed method for calculating the heat transfer rate when scaling up the radiators using the obtained Re-Nu correlations.
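A compact sketch of this three-step scaling procedure is given below; it reuses the reynolds(), nusselt_air(), h_air(), fin_effectiveness() and global_UA() helpers introduced earlier and therefore also their assumed functional forms. The Prandtl-number exponent c is likewise an assumption, since only a and b are reported in Table 3.

```python
def predict_scaled_UA(u_air, phi, a, b, h1, A_scaled, Af_scaled,
                      lam_metal, delta_f, H_f,
                      lambda_air=0.026, Pr=0.71, c=1.0 / 3.0):
    Re = reynolds(u_air, phi)                           # step a): flow rate -> Re (relation 12)
    Nu = nusselt_air(Re, a=a, b=b, c=c, Pr=Pr)          # step a): Re -> Nu (fitted correlation)
    h2 = h_air(Nu, lambda_air, phi)                     # step b): Nu -> h2 (equation 3)
    E = fin_effectiveness(h2, lam_metal, delta_f, H_f)
    return global_UA(h1, h2, A_scaled, Af_scaled, E)    # step c): UA of the up-scaled unit
```

For the 4M radiator, A_scaled and Af_scaled would simply be twice the corresponding surfaces of 3M, while the correlation coefficients a and b are those fitted to the 3M data.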
Another advantage of the proposed methodology of testing radiators in a wind tunnel and finding Re-Nu correlations, in the context of determining AdHex efficiency, is that this method provides a simple and time-efficient way to experimentally estimate UA for heat exchangers over a given range of sorbent-to-metal heat transfer coefficients h2. Thus, it is not necessary to load adsorbent granules into the heat exchanger and perform measurements under the realistic conditions of a given adsorption cycle. It is only necessary to find the air flow rate whose Re and Nu correspond to the exact h2 (sorbent-metal) value of the specified adsorption cycle and adsorbent. Then, at this flow rate, UA measurements can be carried out in the wind tunnel. In other words, a change in flow rate in the wind tunnel is the analogue of a change in h2 for the adsorbent. Thus, to find the best heat exchanger for a given adsorption cycle, relatively simple and time-efficient experiments can be performed if the properties of the adsorbent are known. For example, in [35] it was shown that the heat transfer coefficient h2 for the desorption stage of a daily heat storage cycle is as high as h2 = 125 W/(m2 K). Which radiator will be better for this cycle, and how much higher will its global heat transfer coefficient UA be? And how can this be confirmed experimentally? Table 3 shows that this value of the heat transfer coefficient h2 corresponds to a Reynolds number of about Re ~ 600. At the air flow corresponding to this value of Re, radiator 3M demonstrates an experimental value of UA = 10 W/K, whereas radiator 1M shows UA = 17 W/K (Figure 9). Considering that one experimental run takes no more than 30 min, it is safe to say that within about an hour it is possible to obtain direct experimental confirmation that reducing the fin height from 2 cm to 0.5 cm allows an increase in the global heat transfer coefficient of more than 50%. This demonstrates the promise and time-saving potential of the proposed methodology for testing FFT heat exchangers, evaluating their conductance, and finding the optimal radiator to be used as an AdHex for a given application in adsorption heat transformers. The predictive ability of the proposed methodology for heat exchanger scaling is also demonstrated.
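The "wind tunnel as analogue of the sorbent" idea amounts to inverting the Nu-Re correlation for a target h2. A hedged, self-contained sketch of that inversion (with the same assumed Pr exponent and an illustrative hydraulic diameter) is:

```python
def reynolds_for_target_h2(h2_target, phi, a, b, lambda_air=0.026, Pr=0.71, c=1.0 / 3.0):
    # Invert Nu = a*Re**b*Pr**c together with h2 = Nu*lambda_air/phi (equation 3)
    Nu_target = h2_target * phi / lambda_air
    return (Nu_target / (a * Pr ** c)) ** (1.0 / b)

# Example: sorbent-side h2 of 125 W/(m2 K) and an assumed 3 mm air-side hydraulic diameter
print(round(reynolds_for_target_h2(125.0, phi=0.003, a=0.64, b=0.5)))
```

With these illustrative numbers the result lands near the Re ~ 600 quoted above, but the exact value depends on the fitted a, b and on the hydraulic diameter of the specific radiator.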
Conclusions
This paper considers a series of model water-air heat exchangers with FFT geometry. The fin height in the Hexes was varied systematically (from 0.5 to 2.0 cm) while the other geometric parameters were kept constant. The heat exchangers were investigated using a wind tunnel, and the dependences of the global heat transfer coefficient UA on the flow rates of water (from 10 to 35 L/s) and air (from 0 to 0.05 L/s) were determined. Based on the data obtained, correlations of the Nusselt numbers with the Reynolds numbers were determined, which allow scaling of heat exchangers of the considered geometry. The possibility and correctness of the scaling procedure were confirmed experimentally. A time-saving procedure for choosing the optimal FFT heat exchanger for a given adsorption heat transformation cycle is proposed and discussed.
Figure 3 .
Figure 3. (a) The wind tunnel rig scheme; (b) outlet part of wind tunnel channel.
Figure 9 .
Figure 9 demonstrates that the correlations obtained allow fitting the experimental data using equations (1-3, 5) with reasonable accuracy. This opens the possibility of estimating the performance of heat exchangers at scaling up.
Table 1 .
Geometric parameters of the tested Hexes.
Table 2 .
The error analysis. | 2024-03-03T19:34:23.252Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "06cbefabb2abdbee30818492a90fdd9fdc89ac4d",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/2701/1/012140/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0508f077e580813deb10bfc863d4849b8d8a837f",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": []
} |
252816864 | pes2o/s2orc | v3-fos-license | Why do ambulance employees (not) seek organisational help for mental health support? A mixed-methods systematic review protocol of organisational support available and barriers/facilitators to uptake
Introduction The COVID-19 pandemic is exacerbating a wide range of symptoms of poor mental health among emergency medical service (EMS) ambulance populations. Evidence suggests that using organisational support can improve employee outcomes and in turn, patient outcomes. Understanding why EMS staff do and do not use support services is therefore critical to improving uptake, ensuring equitable access, and potentially influencing workforce well-being, organisational sustainability and patient care delivery. This systematic review aims to identify what support is available and any perceived barriers and facilitators to accessing and utilising organisational support. Methods and analysis Searches performed between 18 February 2022 and 23 February 2022 will be used to identify studies that report barriers and facilitators to EMS employee support among all government/state commissioned EMS ambulance systems. Electronic databases, AMED, CINAHL, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, EMBASE, EMCARE, HMIC, Medline and PsycINFO will be searched. All relevant English-language studies of adult employees of government/state commissioned EMS ambulance organisations published since December 2004 will be screened and relevant data extracted by two independent reviewers. A third reviewer will resolve any disagreements. The primary outcome is the identification of perceived barriers or facilitators to EMS staff using organisational support for mental health. The secondary outcome is the identification of supportive interventions offered through or by ambulance trusts. Study selection will follow Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, and the methodological quality of included studies will be appraised by administering rating checklists. A narrative synthesis will be conducted to report qualitative and quantitative data and will include population characteristics, methodological approach and information about barriers and facilitators. Ethics and dissemination Ethical approval is not required because only available published data will be analysed. Findings will be disseminated through peer-reviewed publication and conference presentation. PROSPERO registration number CRD42022299650.
INTRODUCTION
Emergency medical service (EMS) employees save lives. They respond to emergency and urgent care needs to reduce anxiety, pain, and suffering. EMS is called to work in a range of environments and with a range of patient populations, undertaking autonomous life and death decisions. They frequently work long, irregular hours, while contending with staff shortages and exposure to distressing and traumatic events. These factors can result in severe consequences for some staff; with an increased risk of post-traumatic stress, early retirement on medical grounds, accidental injury or death. 1 EMS employees are over four times more likely to experience mental ill health compared with the general workforce. 2 3 The COVID-19 pandemic further exacerbates risk of poor mental health. A recent survey of UK emergency responders
STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ This systematic review addresses a gap in the current evidence-base by providing an overview and critical appraisal of studies that report emergency medical service employee perceptions of the barriers and facilitators to organisational mental health support, which may influence employee uptake of such support. ⇒ By following the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols and Synthesis without meta-analysis in systematic reviews reporting guidelines and by registering and publishing this protocol, the transparency of systematic review methods and findings is improved. ⇒ Restricting the study to English-language only publications may exclude relevant information written in other languages. ⇒ There is potential for heterogeneous and low-quality reporting of barriers and facilitators in the studies identified for review.
A recent survey of UK emergency responders identified that ambulance staff (77%) were the most likely to report that their mental health has worsened since the pandemic began. 4 Suicide is a particular concern, 5 with Mars et al 6 identifying a 75% increased risk among male paramedics compared with the general population. A number of risk factors contribute to EMS employee mental ill health, including those shared with the general population, such as genetics, loneliness, stressful life events and physical ill health. 7 A recent systematic review identified a higher prevalence of alcohol and drug misuse compared with the general population, 3 and evidence suggests a high prevalence of adverse childhood experiences among EMS employees, such as abuse and neglect. 8 However, research by the mental health charity Mind 9 found that EMS employees were twice as likely as the general population to identify problems at work as the main cause of their mental ill health. Poor employee mental health can have a detrimental impact on EMS capability, with some areas reporting a 50% staff attrition rate, with poor staff mental health and organisational culture cited as primary contributing factors. 10 11 Evidence suggests that utilising organisational support when needed is related to improved employee and patient outcomes. 12 If support is not available, or employees are unable to or will not access support, staff may feel isolated and unsupported, and this can lead to poor mental health and an inability to thrive at work. 13 EMS employees report reluctance to disclose mental health problems at work, citing perceived stigma associated with mental health and feeling unsupported by employers to address mental well-being. To help prevent workforce burnout, action is needed to better support EMS employee mental well-being. With the right support, staff experiencing mental ill health can successfully continue to work, the severity of symptoms can be reduced and suicide prevented. 14 15 In addition, the frequency and length of sickness absence reduce, increasing workforce productivity, capability and safety. 16 Current EMS employee assistance programme uptake is improving, but it is vital that EMS organisations make improvements to ensure all employees can access support when needed. 17 This is vital not only because of the impact of poor mental health on individual employees, but also because of the critical impact of prehospital care on patient outcomes. 18 Therefore, understanding what EMS employees perceive to be barriers and facilitators to utilising support services is key to improving their uptake. This systematic review aims to improve our understanding of why some employees access organisational support and why others do not. This protocol aims to provide a transparent method of identifying current support provision and barriers and facilitators to utilising support, while assessing the quality and risk of bias of the currently available evidence.
Review aim
Our primary aim is to identify and review previously conducted studies which include reports of EMS ambulance employees' perceptions of the barriers or facilitators to the provision of organisational mental health support for their own psychological well-being.
Objectives
The objective is to establish what support is available and identify any element/s perceived as effective and/or ineffective for the uptake of organisational support for EMS employee mental well-being. For the purposes of this review, organisational support is defined as any programme, pathway or signposting that is provided, funded or facilitated by the employing organisation in support of mental health. This review will seek to: ► Identify and report the range of the distinct types of supportive interventions available for EMS ambulance employee well-being. ► Establish the proportions of participants that report barriers and/or facilitators and/or other key factors. ► Identify attitudes, perceptions and experiences relating to any barriers, facilitators and other key factors.
METHODS AND ANALYSIS
This protocol was prepared following the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA) and Synthesis without meta-analysis (SWiM) in systematic reviews reporting guidelines. 19 20 The protocol was then registered with the International Prospective Register of Systematic Reviews (PROSPERO) on 2 February 2022 (ref CRD42022299650).
Inclusion and exclusion criteria
Types of studies
All study types that examine factors relating to organisational mental health support for prehospital EMS ambulance organisation employees will be included. Primary papers from relevant systematic reviews, alongside quantitative and mixed-methods studies, will be included to establish what interventions are offered and to assess barriers, facilitators, and any associated benefits and/or harms linked to reported interventions. Qualitative, cross-sectional and survey studies that report any barriers and/or facilitators relating to organisational employee mental health support will also be examined. Only articles published after 1 December 2004 will be examined, since this date coincides with a shift in focus on the well-being of first responders across the globe. This shift likely relates to the terrorist attack in New York on 11 September 2001. Legislative and guidance changes were introduced to ambulance organisations across the globe: 'Agenda for Change' (2004) 21 in the UK; in the USA, the 'EMS Workforce for the 21st Century' project 22 commenced in the fall of 2004; and in Australia the 'Emergencies Act (ACT)' (2004) 23 promoted responder welfare and described employer responsibility. Articles not written in English will be excluded. Any study samples that consist of mixed emergency employees (ambulance/coastguard/fire/police), where results are combined
and ambulance staff make up less than 50% of the sample, will also be excluded from this systematic review.
Types of participants
All studies involving adults (18+) employed by government or state commissioned EMS organisations in clinical or non-clinical roles will be included. Employees will be eligible for inclusion if contracted to full or part-time roles or hold a bank contract that requires a minimum number of regular working hours. Employees could include paramedics, Emergency Medical Technicians, Emergency Care Assistants, EMS ambulance nurses and doctors, emergency medical number call centre and dispatch staff, operational managers, support and central function staff such as Human Resources and patient safety teams, as well as senior leadership. Paramedic students, EMS apprentices, non-government/state commissioned/ private EMS ambulance employees and volunteers, including volunteer first responders will be excluded, since any available supportive interventions may differ from those offered to employed staff.
Interventions
The review will include studies which report on EMS ambulance employees' perceived barriers or facilitators to seeking or accessing help from their organisation for mental health support. This may include individual-level factors relating to the decision to engage in employee support, the acceptability of the support offered, perceptions and experiences of support, as well as organisational level factors such as, culture, and finally, policy level factors such as targeted campaigns and regulation of professional standards. Organisational factors examined in this review will include interventions reported to be offered for employee mental health and well-being. Any intervention, regardless of the mode of delivery (face to face, e-learning, virtual, etc), is eligible for inclusion if the employer was involved in any element such as development, design, delivery, funding, signposting. Studies that only examine social support (support outside of the employee context, such as non-organisational family and friend support) and organisational support in response to isolated specialist occurrences, such as natural disaster and terrorist events, will be excluded.
The main outcome will be the identification of EMS ambulance employees' perceived barriers or facilitators to accessing organisational support for their mental health (including formal peer-support networks, manager support and employee assistance programmes). This will include elements of organisational factors identified by participants as being effective or ineffective for the provision and uptake of support. The presence of any factor that promotes the development, implementation, adoption, uptake of, or participation with, organisational employee mental health support will be considered a facilitator. Any factor that limits or restricts the development, implementation, adoption, uptake of, or participation with, organisational employee mental health support will be considered a barrier. The same factor may be both a barrier and a facilitator.
Information sources
The following electronic databases were searched between 18 February 2022 and 23 February 2022 (and will be rerun 6 weeks before review completion): AMED, CINAHL, Cochrane Central Register of Controlled Trials and the Cochrane Database of Systematic Reviews via the Cochrane Library, EMBASE, EMCARE, HMIC, Medline, PsycINFO, Scopus and Web of Science. An example search strategy for Medline is presented in online supplemental appendix 1. Searches were tailored to each database using the Polyglot Search Translator 24 and conducted using keywords and relevant thesauri such as MeSH and EMTREE. To ensure that all the available and relevant research is captured, grey literature will also be sought from the OpenGrey, MedNar and ProQuest databases and through the webpages of industry and charitable organisations active in supporting EMS ambulance employee mental health. A full list of webpages to be manually searched will be developed by the research team and will include sites such as the Global Ambulance Leadership Alliance (which covers the UK, USA, Canada and Australasia), The Ambulance Staff Charity (UK), the Royal Foundation, and the mental health charity, Mind. The reference lists of all studies selected for critical appraisal will be hand searched for further material for inclusion. The searches will be rerun 6 weeks prior to the final analyses to identify and retrieve any other studies for inclusion.
Study records Data management
References identified from electronic and hand searches, including title and abstracts, will be imported into Mendeley citation manager software and any duplicates removed.
Selection process
Two reviewers will independently screen a subset (10%) of titles and abstracts. Full-text screening will be based on a PICoT concept: ► Population: Adults (18+) employed by government/state commissioned EMS ambulance services. ► Phenomena of Interest: Types of organisational interventions offered to support ambulance staff mental health and any barriers and/or facilitators to utilising such support. ► Context: Government/state commissioned prehospital EMS ambulance organisations. ► Types of studies/study design: All types of research studies. Studies scoring 4/4 for all the above criteria will be included. Any reviewer uncertainty will be rated as 'unsure' and discussed by the independent reviewers with reference to the full text if required. If not resolved through reviewer discussion, disagreements will be settled through discussion with an independent third reviewer. The inter-rater reliability of consensus will be calculated.
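The protocol does not specify which inter-rater reliability statistic will be used; Cohen's kappa is one common choice for two reviewers' screening decisions, and a minimal sketch under that assumption is shown below with hypothetical decisions.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Agreement between two reviewers' decisions, corrected for chance agreement
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(counts_a[k] * counts_b[k] for k in set(counts_a) | set(counts_b)) / (n * n)
    return 1.0 if p_expected == 1 else (p_observed - p_expected) / (1 - p_expected)

# Hypothetical screening decisions on ten abstracts
a = ["include", "exclude", "exclude", "include", "unsure", "exclude", "include", "exclude", "exclude", "include"]
b = ["include", "exclude", "include", "include", "unsure", "exclude", "include", "exclude", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 2))
```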
Data extraction process
To identify papers for inclusion, the full text of the remaining studies will be retrieved and screened. Again, the inter-rater reliability will be calculated to ensure consistency and clarity. From this final selection, all potentially relevant data will be extracted and collated in an Excel spreadsheet including: ► Primary author. ► Publication details. ► Country of study. ► Study methods. ► Setting. ► Sample characteristics (sample size, age range, EMS job role). ► Phenomenon of interest (self-reported barriers and/or facilitators). ► Intervention (where relevant).
Outcomes measured will include: ► Primary outcome measures (self-reported barriers and/or facilitators). ► Assessment tool names. ► Reported statistics. ► Reported significance levels. ► Reported effect sizes. ► Secondary outcome measures. ► Relevant findings.
To ensure sufficient detail capture to enable replication, any described intervention content will be extracted using Hoffman et al's 25 Template for Intervention Description and Replication checklist. If data are missing or additional information is required, we will contact authors by email as per Cochrane recommendations and document the frequency of contact and authors' responses. 26 Search results will be reported in full and presented in a PRISMA flow diagram.
Quality assessment
The quality, alongside the trustworthiness, relevance and findings, of each of the studies identified for final selection will be assessed by two independent reviewers using two rating checklists (Standard Quality Assessment Checklists) developed by Kmet et al. 27 One checklist is designed to assess the quality of quantitative studies (and will also be applied to the quantitative components of mixed-methods studies) and the other that of qualitative studies (which will also be applied to the qualitative components of mixed-methods studies). Each checklist item will be rated on a quality scale from 0 to 2: ► Criteria not met=0. ► Criteria partially met=1. ► Criteria fully met=2.
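For illustration, a summary quality score can be derived from these 0-2 item ratings. The convention below (sum of applicable item scores divided by the maximum possible, with 'not applicable' items excluded) follows the commonly used Kmet et al procedure; this aggregation rule is an assumption here, since the protocol does not state how item ratings will be combined.

```python
def summary_quality_score(item_scores):
    # item_scores: 0 / 1 / 2 per checklist item; None marks "not applicable"
    applicable = [s for s in item_scores if s is not None]
    if not applicable:
        raise ValueError("no applicable items")
    return sum(applicable) / (2 * len(applicable))

# Hypothetical ratings for one quantitative study on a 14-item checklist
ratings = [2, 2, 1, 2, None, None, 1, 2, 2, 1, 2, 2, 2, 1]
print(round(summary_quality_score(ratings), 2))  # 0.83 for this example
```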
Any included grey literature will be assessed using Tyndall's 28 'Authority, Accuracy, Coverage, Objectivity, Date, Significance' checklist. Reviewer discrepancies will be resolved through discussion and, when necessary, consultation with the third reviewer. All study types will be included in this review, regardless of methodological quality, since it is anticipated that the availability of high-quality evidence will be limited. However, a sensitivity analysis will be conducted by testing whether removing any studies rated zero for methodological quality from the analysis changes the thematic results. Critical appraisal results will be displayed in a predetermined assessment of methodological quality table. The narrative synthesis will include a summary of the relative impact of missing data and of methodological flaws on the findings.
Data synthesis
Mixed-methods systematic reviews are an emerging field of enquiry, useful for enhancing the credibility of findings. This is particularly important for this review as, although quantitative evidence suggests that ambulance staff report high rates of mental ill health and want organisational support, 9 29 evidence from qualitative studies indicates that negative experiences and perceptions of such support can affect the acceptability of utilising support. 30 By using a mixed-methods approach, both the experience and the effectiveness of organisational support initiatives can be captured, factors that are vital for informing the research question. The mixed-methods procedure will follow Joanna Briggs Institute guidance for a convergent integrated approach. 31 This involves transforming extracted data from quantitative papers (and quantitative aspects of mixed-methods papers) by qualitising (creating a textual description of) quantitative findings. This enables findings from all studies to then be combined during the analysis phase. It is anticipated that data from the included studies will be heterogeneous since they are likely to include different approaches to design and use of different outcome measures. Heterogeneity will be determined by summarising: ► Population characteristics (eg, sample size, age, type of mental health problem/disorder). ► Methodological approach (eg, qualitative, survey, experiment). ► Assessment (the measures used to assess staff perceptions of organisational support, barriers or facilitators where relevant). ► Intervention characteristics (eg, type of intervention, frequency, duration, uptake). It is therefore unlikely that it will be possible to undertake a meta-analysis. Instead, a narrative review and synthesis approach will be taken by conducting inductive thematic analysis, using NVivo software and data from the Excel data extraction sheets, in the following steps: 1. Key data (data from the results sections of included papers) and quotations will be transposed from data extraction sheets to NVivo for coding by two reviewers, who will agree a coding structure for coding of participant data. The third reviewer will arbitrate any conflict. 2. Using the agreed coding structure, two reviewers will undertake thematic analysis of the coded data and will meet regularly to ensure the coding structure is
appropriate and can be applied to the conclusions being drawn from the identified themes. 3. Factors impacting on participation with EMS organisational employee mental health support, with a focus on information from studies relating to employee experiences and/or perceptions of barriers against, or facilitators to, accessing and using support, will be synthesised in this systematic review (there will be no minimum number of studies). 4. A combined narrative (descriptive) synthesis will be used following Campbell et al's 20 SWiM guideline. 5. The certainty of evidence will also be synthesised using the quality checklist described above. 27
Amendments
If any protocol amendments are required, the date, description and rationale will be made available on the PROSPERO registration.
Patient and public involvement
To enhance the meaningfulness and robustness of findings, an EMS staff reference group and an EMS specific patient involvement group in the UK reviewed and supported the development and design of this protocol. These groups will review and provide an employee and public perspective on the interpretation of the findings and will support dissemination.
Ethics and dissemination
Ethical approval is not required because only available published data will be analysed and this is a protocol for a systematic review. Findings will be disseminated through publication in a relevant peer-reviewed journal. The findings will also be communicated at research conferences, symposia, congresses and via social media to ensure dissemination to a wide range of interested parties.
DISCUSSION
EMS employee mental well-being can influence the care given to patients. A number of initiatives are provided to support EMS employee mental health, although evidence suggests that some staff do not seek help or feel unable to disclose their mental health status when needed. With this in mind, a strength of this systematic review will be the presentation of barriers and facilitators specific to the uptake of employee mental health support in the EMS context, identified through robust, replicable methods and critical appraisal of the available literature. Limitations will be addressed through transparent reporting and appraisal of study quality, the involvement of EMS staff in the development of the inclusion and exclusion criteria, and grading of the quality of the studies included. | 2022-10-12T06:18:02.099Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "aace7de60fd6e764d1b4c05aa4d27c20429a4f2b",
"oa_license": "CCBY",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/12/10/e062775.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6cbdc3e7ee6ff25d6f807a222fdbdd9dc7674ce2",
"s2fieldsofstudy": [
"Medicine",
"Political Science",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
218600598 | pes2o/s2orc | v3-fos-license | Effectiveness of a hydrophobic dressing for microorganisms’ colonization of vascular ulcers: Protocol for a randomized controlled trial (CUCO‐UV Study)
Abstract Aim To determine the effectiveness of a hydrophobic dressing (Cutimed Sorbact®) against a silver dressing (Aquacel® Ag Extra) in the level of colonization of chronic venous leg ulcers. The secondary endpoints are health‐related quality of life, level of pain, and time to complete healing. Design Open randomized controlled trial, with blinded endpoint. Methods Patients with chronic venous leg ulcers with signs of critical colonization will be randomized in a concealed sequence using computer software to receive one of the alternative dressings. A total of 204 participants recruited in Primary Health Care and nursing homes will be necessary to assure statistical power. Measures will include sociodemographic variables, wound‐related variables (area, exudate, and time to healing), level of pain, adverse effects, and health‐related quality of life. Smear samples will be collected from the ulcers and will be subject to DNA‐typing technique through polymerase chain reaction to obtain the level of colony‐forming units. Measures will be collected at baseline, 4, 8, and 12 weeks. Discussion Elevated levels of microorganisms prevent wound healing and favour its chronification. The main target when colonization is present is to reduce the bacterial load to levels that promote immune system mobilization. Hydrophobic dressings prevent the formation of biofilm in the wound by means of physical effect, so that the possibility of antimicrobial resistance is significantly reduced. Impact Current evidence about the effectiveness of dressings to minimize venous leg ulcers colonization is very limited. Previous studies have important methodological flaws. This study will permit to obtain the effectiveness of hydrophobic dressings against silver dressings with a robust design based on conditions of routine clinical practice in Primary Health Care and nursing homes.
| INTRODUCTION
Chronic wounds of torpid evolution are currently a great challenge for health services worldwide. In Europe, they can account for up to 2-4% of health expenditure (Posnett, Gottrup, Lundgren, & Saal, 2009).
In particular, venous ulcers are estimated to affect at least 1% of the adult population at some point in their lifetime. Studies have estimated venous ulcer frequency at around 0.2-1.1% (Graham, Harrison, Nelson, Lorimer, & Fisher, 2003), though this figure can rise up to 4% in people over 65 years old (Pugliese, 2016).
Clinical nursing has a key role in the detection, diagnosis, treatment, and tracking of venous leg ulcers. It is important to identify the different types of wounds and their characteristics that nurses can encounter in their clinical practice, and the appropriate treatments that can be carried out. Furthermore, the healing of these chronic wounds depends on the choice of the most suitably designed dressing. Most of the time, this decision relies on the clinical nurse's judgment about the evolution of the venous ulcer (Davies, Turton, Woolfrey, Elley, & Taylor, 2005).
| Background
More than 80% of venous leg ulcers may be colonized or infected by bacteria (O'Meara et al., 2014). One of the leading causes of delay in healing is the presence of a pathogenic bioburden. In fact, bacterial biofilms (an exopolysaccharide extracellular matrix that provides structural bacterial defence) over the wound have been associated with interference in chronic wound recovery (Clinton & Carter, 2015; Gompelman, van Asten, & Peters, 2016; Hurlow et al., 2015).
Moreover, biofilm provides a key resistance to local and systemic antibiotic treatments, especially in the case of venous ulcers (Bianchi et al., 2016). The major representative strains are Staphylococcus aureus and Pseudomonas aeruginosa, although anaerobic microorganisms (Peptostreptococcus species and gram-negative bacilli), Enterobacteria, or group A Streptococcus can also be found (Etchebarne et al., 2017).
The presence of colonization or infection in chronic wounds has a direct effect on their healing and recovery when colony-forming units (CFUs) exceed 10^5 per gram of tissue. Thus, wound colonization by pathogenic bacteria promotes its chronification.
Furthermore, pathogenic bacteria can also generate intra-wound synergistic effects with other non-pathogenic microorganisms, increasing their action (Rahim et al., 2017). This problem has usually been addressed by using local dressings, with the aim of reducing the total wound bacterial load. Although the most common therapeutic strategy is the use of silver dressings due to their antimicrobial effect, there is a lack of evidence about their effectiveness in venous ulcers (O'Meara et al., 2014). Moreover, there is some uncertainty over adverse effects derived from systemic absorption of silver molecules, in addition to the generation of silver resistance in many bacterial strains (Maillard & Hartemann, 2013; Mijnendonckx, Leys, Mahillon, Silver, & Van Houdt, 2013; Percival et al., 2008). Additionally, these inconveniences can involve an elevated cost when long-term treatments are needed.
A novel approach to dealing with the problem of chronic wound colonization is the use of hydrophobic dressings. It has been extensively described that when two hydrophobic molecules contact each other, they increase their molecular entropy through hydrophobic interactions, finally expelling residual water molecules (Curtis, Steinbrecher, Heinemann, Blanch, & Prausnitz, 2002). Thus, microorganisms with hydrophobic surfaces have a high probability of binding to hydrophobic dressings, allowing hydrophilic bacteria to remain at the wound bed to promote healing. In vitro experiments have also shown that most pathogenic microorganisms present highly hydrophobic membrane molecules (Ljungh, Yanagisawa, & Wadström, 2006).
The hydrophobic mechanism is based neither on active absorption of exudate nor on placing chemical agents on the surface of the ulcer. In fact, dialkylcarbamoyl chloride (DACC)-coated dressings attract hydrophobic fungi and bacteria, avoiding the formation of biofilm in the wound (Derbyshire, 2010). In this sense, the antimicrobial properties of hydrophobic dressings are based on physical rather than chemical effects. Consequently, the possibility of generating antimicrobial resistance is significantly reduced (Ljungh et al., 2006). Furthermore, it has been analysed how local antiseptic administration could affect the expression of the hydrophobic bacterial membrane.
Thus, the use of a eutectic mixture of local anaesthetics (EMLA) or glycol-carbohydrate-modified polymers (Askina ® ) can interfere with hydrophobicity, whereas this property is not affected by other products commonly used in wound care, such as povidone-iodine or saline solutions (Ljungh et al., 2006).
Furthermore, the study population in these studies did not include patients in home or ambulatory care, the most frequent clinical settings with a high demand for this type of wound. In addition, the trials carried out have evaluated the results over short follow-up periods. All these issues limit the available evidence on hydrophobic dressings, as has been stated in a recent systematic review (Totty et al., 2017).
Impact: Current evidence about the effectiveness of dressings to minimize venous leg ulcer colonization is very limited, and previous studies have important methodological flaws. This study will make it possible to determine the effectiveness of hydrophobic dressings compared with silver dressings, with a robust design based on conditions of routine clinical practice in Primary Health Care and nursing homes.
KEYWORDS: bacterial adhesion, bacterial load, leg injuries, nursing, varicose ulcer, wound infections
| Aim
The main aim of the study is to determine the effectiveness of a hydrophobic dressing (Cutimed Sorbact ® ) compared with a silver dressing (Aquacel ® Ag Extra) on the level of colonization of chronic venous leg ulcers.
The secondary aims will be to analyse the effect of Cutimed Sorbact ® versus Aquacel ® Ag Extra on total healing, the proportion of wound reduction, time of recovery, level of pain during recovery, and health-related quality of life. In addition, the safety of Cutimed Sorbact ® will be evaluated.
The null hypothesis is that there is no difference in the level of bacterial colonization at 12 weeks of follow-up in chronic ulcers treated with Cutimed Sorbact ® versus those treated with Aquacel ® Ag Extra.
| Design
An open, randomized, controlled trial with blinded evaluators will be carried out.
| Sample
Assuming a 95% confidence level and 80% statistical power, 186 participants would be necessary (93 for each study arm) to detect a 20% reduction in bacterial load (Mosti et al., 2015). This sample will be increased to 204 participants to allow for possible patient dropouts.
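As a rough illustration of how such a figure can be reproduced, the sketch below applies a standard normal-approximation formula for comparing two independent proportions and then inflates the evaluable sample for dropouts. The control and intervention proportions shown are purely illustrative assumptions; the protocol derives its effect size from Mosti et al. (2015), and the dropout allowance is back-calculated from the protocol's own figures (186 evaluable vs. 204 recruited).

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(p_control, p_intervention, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sided comparison
    of two independent proportions (pooled-variance formula)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p_control + p_intervention) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_control * (1 - p_control)
                    + p_intervention * (1 - p_intervention)) ** 0.5)
    return ceil((num / abs(p_control - p_intervention)) ** 2)

# Hypothetical proportions, chosen only to illustrate the formula; the
# protocol's 93 participants per arm come from the effect size in Mosti et al.
print(n_per_arm(0.50, 0.30))

# Dropout allowance implied by the protocol: 186 evaluable of 204 recruited.
print(f"implied dropout ~ {1 - 186 / 204:.0%}")
```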
| Allocation
Participants will be randomized according to an allocation sequence concealed from participants, clinical nurses, and investigators. Once a participant meets the inclusion criteria, the collaborating investigators will be informed by telephone of the study group assigned according to the random sequence. The random sequence will be generated by a computerized system. Although the study groups are not blinded for patients and nurses, the members of the team who carry out the analyses will be blinded to the allocation groups.
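To illustrate how a computer-generated allocation sequence of this kind could be produced, the sketch below implements simple permuted-block randomization for a 1:1 allocation. The block size, random seed, and group labels are assumptions made for the example and are not specified in the protocol.

```python
import random

def permuted_block_sequence(n_participants, block_size=4, seed=2020):
    """Generate a 1:1 allocation sequence in permuted blocks.
    Block size and seed are illustrative assumptions, not protocol values."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = (["Cutimed Sorbact"] * (block_size // 2)
                 + ["Aquacel Ag Extra"] * (block_size // 2))
        rng.shuffle(block)          # randomize treatment order within the block
        sequence.extend(block)
    return sequence[:n_participants]

allocation = permuted_block_sequence(204)
print(allocation[:8])
```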
Inclusion criteria
The study will be conducted in adult patients attended in health centres or nursing homes belonging to the District of Primary Health Care of Málaga (Spain) who present a chronic venous vascular ulcer located in the lower limbs with signs of critical colonization, according to the criteria of Lazareth and Moore (Lazareth et al., 2007; Moore, 2013). These criteria imply the presence of at least three of the signs described by these authors, including (a) severe pain during dressing change. If the mentioned criteria are fulfilled, an exudate sample will be collected from the wound by the primary healthcare nurse. Ulcer infection will be assumed when the level of microorganisms per gram of tissue, or the microbial count, exceeds 10⁵ CFUs per microlitre (μL) (Pugliese, 2016).
Exclusion criteria
The exclusion criteria will be: age under 18 years, venous ulcers that do not meet the Lazareth and Moore criteria, arterial ulcers, participants with diabetes and absent pedal pulses, immunosuppressed patients, patients with rheumatoid arthritis in the acute phase, patients with dermatitis prior to the appearance of the ulcer, and patients with neuropathy or lack of sensitivity of any aetiology.
| Data collection
Patients will be recruited by clinical nurses in health centres and nursing homes belonging to the District of Primary Health Care of Málaga (Spain). After obtaining the patients' informed consent, sociodemographic variables (age, gender), wound location and characteristics, and current treatment will be collected at baseline, together with the endpoints, by the nurse who usually gives care to the patient. This will reinforce patient adherence to the study.
Primary endpoint
Level of colonization: this will be evaluated by a quantitative real-time polymerase chain reaction (qPCR) identification technique and quantified in CFUs and in nanograms (ng) of bacterial deoxyribonucleic acid (DNA) per μL of vascular ulcer exudate.
Two samples per wound (to keep a reserve sample in case of possible contamination) will be collected by cotton swab directly from the wound bed. The samples will be collected before cleaning the wound to obtain a reliable microorganism concentration. The swab tip will be rotated over a 1 cm² area of the wound for 5 s. Next, the cotton swabs will be placed in a sterile container at 4 °C (Pugliese, 2016) until qPCR processing.
Then, to determine the total quantity of microorganisms, samples will be subjected to a thermal shock, which consists of freezing them at −80 °C for 5 min, followed by heating at 95 °C for an additional 5 min. With this procedure, gram-positive and gram-negative bacteria will be efficiently lysed. After spin centrifugation, DNA strands will be extracted for the following steps. The total quantity of the isolated DNA will be measured using a NanoDrop™ spectrophotometer. Microorganisms will be quantitatively determined by qPCR (referring to the number of DNA copies amplified by this technique) (Rhoads, Wolcott, Sun, & Dowd, 2012).
The Fast EvaGreen ® qPCR Master Mix (2X) kit will be used for qPCR amplification reactions, with specific primers for the microorganisms usually present in vascular ulcers (Table 1; Etchebarne et al., 2017). Serial dilutions of purified genomic DNA will be performed to generate a concentration calibration curve against which sample data will be compared. From this, it will be possible not only to detect the presence of microbial species but also to quantify the variation of these strains in samples from the same patient, as well as to compare treatments. Results will be standardized to CFUs per μL of exudate.
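To illustrate how the calibration curve can be used, the sketch below fits Ct values from an assumed 10-fold serial dilution of purified genomic DNA and back-calculates the load of an unknown sample. All numerical values are invented for the example and do not come from the protocol.

```python
import numpy as np

# Illustrative standard curve: Ct values for a 10-fold serial dilution of
# purified genomic DNA (concentrations and Ct values are assumptions).
standard_conc = np.array([1e6, 1e5, 1e4, 1e3, 1e2])   # CFU-equivalents per uL
standard_ct = np.array([15.1, 18.4, 21.8, 25.2, 28.6])

# Linear fit of Ct against log10(concentration): Ct = slope*log10(C) + intercept
slope, intercept = np.polyfit(np.log10(standard_conc), standard_ct, 1)

def quantify(ct_sample):
    """Back-calculate the concentration of an unknown sample from its Ct."""
    return 10 ** ((ct_sample - intercept) / slope)

print(f"amplification efficiency ~ {10 ** (-1 / slope) - 1:.0%}")  # ~98% here
print(f"{quantify(23.0):.2e} CFU-equivalents per uL")              # ~4.5e+03 here
```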
Secondary outcome measures
Wound size will be evaluated by planimetry using PictZar ® 7.6.1 software (Wendelken et al., 2011). Photographs of the affected area will be taken with the camera parallel to the ulcer and must include a 3 cm calibration ruler (standardized procedure according to PictZar ® planimetry). The characteristics and clinical evolution of the wound will be evaluated using the Resvech scale (Restrepo-Medrano & Verdú Soriano, 2011).
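The planimetry arithmetic behind such measurements is straightforward. The sketch below shows a generic pixel-to-area conversion based on the 3 cm calibration ruler; it is not PictZar's actual implementation, and the pixel counts are illustrative.

```python
def wound_area_cm2(wound_pixels, ruler_pixels_per_3cm):
    """Convert a traced wound area in pixels to cm^2 using the 3 cm
    calibration ruler included in each photograph (generic calculation,
    not the PictZar implementation)."""
    pixels_per_cm = ruler_pixels_per_3cm / 3.0
    return wound_pixels / (pixels_per_cm ** 2)

# Example: a traced wound of 54,000 pixels with the ruler spanning 600 pixels
print(f"{wound_area_cm2(54_000, 600):.2f} cm^2")   # -> 1.35 cm^2
```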
Healing time will be measured as the number of days until complete healing. This outcome will be considered fulfilled if re-epithelialization of the skin is confirmed in two consecutive visits over 2 weeks. Pain will be measured using the Numerical Rating Scale (NRS).
Health-related quality of life will be measured by means of the Charing Cross Venous Ulcer Questionnaire (CCVUQ) (González de la Torre, Quintana-Lorenzo, Perdomo-Pérez, & Verdú, 2017; González-Consuegra & Verdú, 2010). Adverse events related to the treatment will be collected by both nurses and patients. Follow-up will be carried out at 4, 8, and 12 weeks, except for quality of life, which will be evaluated only at baseline and at 12 weeks. Adherence to the treatment will be checked with the nurse and through the evaluation of consumed dressings.
| Intervention
After obtaining the baseline data, the nurse who usually gives care to the patient will proceed to treat the venous ulcer using a standardized procedure. They will wash the area with saline solution and evaluate whether mechanical debridement is necessary.
Next, they will apply the Cutimed Sorbact ® dressing (intervention) or the Aquacel ® Ag Extra dressing (control) to the wound. Finally, they will cover the wound with a secondary gauze dressing and a dou-
| Data analysis
Data analysis will include descriptive statistics and exploratory analysis to determine the normality of the distributions and to check the range of data values. Bivariate analyses will be performed using the chi-square test, Student's t test, the Mann-Whitney U test, the Wilcoxon W test, ANOVA, and the Kruskal-Wallis test, depending on the normality of the distributions, the type of variable analysed, and their homoscedasticity, which will be tested by Levene's test. Intra-group analysis will be performed with general linear models, with the corresponding corrections for multiple comparisons by the Bonferroni test. To test the null hypothesis (inter-group analysis), generalized linear models for repeated measures will be developed, with the CFU count as the dependent variable and allocation to the control or experimental group as a factor. If baseline differences are significant, they will be included as adjustment factors, following the recommendations of Mallinckrodt and Lipkovich (Mallinckrodt, Lipkovich, & Lipkovich, 2016).
Sphericity will be tested using Mauchly's test, applying the Greenhouse-Geisser or Huynh-Feldt correction when sphericity is not met.
All analyses will be performed by an intention-to-treat approach.
For this purpose, missing data will be handled by multiple imputation methods.
All the analyses will be performed in a blinded way.
| Ethical considerations
The study has been approved by the Ethics and Research Committee of Malaga. The study will meet the requirements of the Declaration of Helsinki and its later revisions and the principles of good clinical practice. Informed consent will be requested from the participants, in accordance with current Spanish law, using an application form and an information sheet designed for this purpose.
Due to the characteristics of the target study population, it is possible to find patients with cognitive impairment.
In those cases, informed consent will be requested from their caregivers.
The Ethics and Research Committee of Malaga will be informed in advance of any modification in the eligibility criteria, outcomes, or analyses. All adverse-effect monitoring procedures will be governed by the regulations referred to in the MEDDEV 2.7/3 guideline.
| Validity and reliability
In accordance with the ISO 14971 standard, several procedures have been established to manage potential risks arising from the use of the study products.
The scientific literature on the risks of Cutimed Sorbact ® dressings has been extensively reviewed, with no adverse effects reported to date (Totty et al., 2017).
A key strength of this study is that it will be performed under routine clinical conditions, in multiple centres, in both Primary Health Care and nursing homes. This feature will give the study strong external validity. In addition, the use of the PictZar ® planimetry software ensures objectivity in evaluating the wound area.
The trial will be supervised by an independent monitoring committee composed of external members belonging to the District of Primary Health Care of Málaga and IBIMA. This committee will carry out audits every 6 months. In addition, some potential confounding variables will be recorded, such as antibiotic and/or analgesic therapy, nutritional status, and level of dependence (Barthel index). Moreover, this trial will follow a rigorous procedure for reporting adverse events, to avoid the usual under-reporting of adverse events in venous leg ulcer research (Jull & Biggs, 2019).
| DISCUSSION
The prevalence of venous leg ulcers tends to rise owing to its association with ageing. In Western countries, chronic venous conditions consume up to 2% of the healthcare budget (Davies, 2019). The impact of venous leg ulcers on quality of life affects multiple dimensions, such as difficulties with activities of daily living, social isolation, pain, or disability (Joaquim, Silva, Garcia-Caro, Cruz-Quintana, & Pereira, 2018). Multiple approaches using local dressings, combined with other interventions, are available to treat venous leg ulcers (O'Meara et al., 2014). Bacterial colonization or infection is one of the complications that hinder adequate healing, since wound-bed microorganisms increase their resistance by forming a biofilm (Maillard & Hartemann, 2013; Mijnendonckx et al., 2013).
This trial attempts to evaluate a physically based approach instead of the current chemical one based on silver dressings. The use of dressings that mimic biofilm conditions aims to produce bacterial migration into the dressing. Thus, no chemical intervention in the wound bed is carried out, avoiding possible side effects (Ljungh et al., 2006). It is important to note that neither very high levels nor the complete absence of microorganisms helps wound healing; both favour chronification. Consequently, the main aim of these dressings is not to eliminate the bacterial load but to reduce it to limited levels that activate the immune system at the wound site, providing growth factors that facilitate wound healing (Ojalvo et al., 2017).
If the results of this trial show greater effectiveness of Cutimed Sorbact ®, in terms of reduced colonization, without adverse events, it is presumable that this will translate into shorter healing times, with a substantial reduction in the use of health staff time. Additionally, the study will be able to evaluate the impact on health-related quality of life in aspects such as pain, exudate, odour, mobility, activities of daily living, or personal relationships (Phillips et al., 2018).
| Limitations
This study does not include patients with venous ulcers and compromised distal arterial blood circulation, or with a lack of sensitivity of any aetiology in the lower limbs. Further studies should be carried out in people with these clinical features to evaluate the effectiveness of Cutimed Sorbact ® against Aquacel ® Ag Extra. Likewise, patients with rheumatoid arthritis in an acute exacerbation phase, or those with local dermatitis prior to the existence of the ulcer, will not be included.
| CONCLUSIONS
New approaches are required to improve venous ulcer treatment, avoid antimicrobial resistance, and limit the use of chemical agents. One of these new approaches would be the use of hydrophobic dressings. A large sample and routine conditions of clinical practice will give this study strong external validity and generalizability.
"year": 2020,
"sha1": "4131f32813fc1d59e86e1696803c7613c4a03f82",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jan.14412",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "feda738f4c1ca5ccee08a165d6bf930e80ee3408",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Comparison of MRI and VQ-SPECT as a Screening Test for Patients With Suspected CTEPH: CHANGE-MRI Study Design and Rationale.
The diagnostic strategy for chronic thromboembolic pulmonary hypertension (CTEPH) is composed of two components required for a diagnosis of CTEPH: the presence of chronic pulmonary embolism and an elevated pulmonary artery pressure. The current guidelines require that ventilation-perfusion single-photon emission computed tomography (VQ-SPECT) is used for the first step diagnosis of chronic pulmonary embolism. However, VQ-SPECT exposes patients to ionizing radiation in a radiation sensitive population. The prospective, multicenter, comparative phase III diagnostic trial CTEPH diagnosis Europe - MRI (CHANGE-MRI, ClinicalTrials.gov identifier NCT02791282) aims to demonstrate whether functional lung MRI can serve as an equal rights alternative to VQ-SPECT in a diagnostic strategy for patients with suspected CTEPH. Positive findings are verified with catheter pulmonary angiography or computed tomography pulmonary angiography (gold standard). For comparing the imaging methods, a co-primary endpoint is used. (i) the proportion of patients with positive MRI in the group of patients who have a positive SPECT and gold standard diagnosis for chronic pulmonary embolism and (ii) the proportion of patients with positive MRI in the group of patients with negative SPECT and gold standard. The CHANGE-MRI trial will also investigate the performance of functional lung MRI without i.v. contrast agent as an index test and identify cardiac, hemodynamic, and pulmonary MRI-derived parameters to estimate pulmonary artery pressures and predict 6-12 month survival. Ultimately, this study will provide the necessary evidence for the discussion about changes in the recommendations on the diagnostic approach to CTEPH.
INTRODUCTION
The Clinical Problem
Chronic thromboembolic pulmonary hypertension (CTEPH) occurs in 0.1-4.0% of patients with acute pulmonary embolism (PE) within 2 years (1). If left untreated, the outlook for patients with CTEPH is dismal. Median survival is <2 years in patients who have a mean pulmonary artery pressure higher than 30 mm Hg at diagnosis (2). Right-heart failure is the most frequent cause of death. However, when treated successfully with endarterectomy, CTEPH patients have a good prognostic outcome (3); timely diagnosis is therefore of great importance. Advances in management have improved outcomes, but CTEPH remains a potentially fatal condition, especially when surgery is not an option (4,5). The diagnostic approach according to the current guidelines for CTEPH starts with transthoracic echocardiography (6) to assess the likelihood of pulmonary hypertension, followed by ventilation-perfusion single-photon emission computed tomography (VQ-SPECT) to detect or rule out perfusion defects (7) (Figure 1). VQ-SPECT is currently the preferred imaging tool for screening because of its high sensitivity and a negative predictive value of virtually 100% (9). In addition, a noncontrast, free-breathing ventilation-perfusion MRI technique, known as the Fourier decomposition MRI method (13), has recently shown encouraging initial results in diagnosing chronic pulmonary embolism (14). In the CHANGE-MRI study we implemented phase-resolved functional lung (PREFUL) MRI as a secondary outcome parameter; PREFUL is a further development of Fourier decomposition MRI that depicts the whole breathing and cardiac cycle (15). These novel functional MRI techniques hold significant potential to be an equal rights, non-ionizing alternative to VQ-SPECT in the near future, if they can demonstrate robust test performance in a prospective multicenter setting (Figure 2). MRI techniques can be an equal rights alternative to VQ-SPECT as a screening test in a diagnostic strategy for CTEPH in which the positive findings of the screening test are verified with catheter pulmonary angiography or computed tomography pulmonary angiography (CPA/CTPA) in the diagnostic work-up.
Study Information
Investigators from Hannover Medical School (MHH) designed the trial; MHH acts as the study sponsor, and the study is funded by the German Centre for Lung Research (DZL). A trial management group for the study comprises specialists from the disciplines of Pneumology, Nuclear Medicine, Radiology, Biostatistics, Medical Documentation, and Medical Imaging and Computing. The study started recruitment in 2016. Ethical approval for the study was granted by the MHH ethics committee (No. 2678-2015). The trial is registered with ClinicalTrials.gov, identifier NCT02791282.
Objectives
The diagnostic strategy for CTEPH is composed of two components leading to a final diagnosis for or against CTEPH: (1) the presence of chronic pulmonary embolism and (2) the presence of an elevated pulmonary artery pressure.
The tests to be compared in this study (MRI vs. SPECT) are diagnostic interventions for chronic pulmonary embolism (chronic PE). Diagnosis of elevated pulmonary artery pressure follows the routine clinical assessment with right heart catheterization. As such, the study primarily deals with the first-step diagnosis of chronic pulmonary embolism. The medical hypothesis of the CHANGE-MRI trial is that the diagnostic performance of novel functional lung MRI is sufficient for it to be an equal rights alternative to VQ-SPECT, the current clinical standard, in the detection of chronic PE in the diagnostic algorithm for CTEPH, and that it can thus reduce the radiation burden for patients. Therefore, MRI should identify almost all correct cases in the SPECT strategy and should not increase the rate of false positive findings. The study will be considered successful in demonstrating that MRI can replace SPECT if the following co-primary hypotheses can be confirmed for the interrelation of MRI and SPECT: (i) the probability that MRI is positive in patients who are SPECT positive and who have a positive gold standard is larger than 95%, and (ii) the probability that MRI is positive in patients who are SPECT negative and who have a negative gold standard is smaller than 10%.
To determine the sensitivity and specificity of functional MRI as co-primary endpoints of this diagnostic study, an alternative strategy would have been to send all study patients for a verification step with CPA/CTPA. However, this strategy was considered to imply an unacceptable increase in the diagnostic burden and radiation for patients who are not usually subjected to further diagnostic interventions. The secondary objectives are:
- to demonstrate an additional benefit of MRI in comparison with SPECT, with correct MRI results (according to the gold standard) where SPECT showed false diagnoses;
- to compare the performance of SPECT and MRI using conventional diagnostic measures: sensitivity, specificity, and positive and negative predictive value;
- to compare the performance of SPECT and MRI in subgroups of male and female patients and of obese (BMI ≥ 28 kg/m²) and non-obese (BMI < 28 kg/m²) patients;
- to evaluate the inter-rater reliability of the SPECT, MRI, and CPA/CTPA assessments between the local in-place read and the blinded read;
- to compare SPECT and MRI regarding safety and procedure-related limitations.
Patients
The inclusion criteria are chosen to reflect exactly the clinical setting in the diagnostic algorithm for CTEPH. Here, MRI is positioned for an in-place validation compared with SPECT, without any exclusion of subgroups relative to SPECT except for inability to undergo MRI and pregnancy (see Table 1).
Index Test
A robust functional cardio-pulmonary MRI exam that can be conducted within 30 min without radiation burden is used as the novel index test. In brief, the MRI protocol consists of anatomical MR sequences to depict thoracic pathology (ECG-gated steady-state free precession sequences and half-Fourier acquisition single-shot turbo spin echo imaging covering the whole thorax in both axial and coronal planes); coronal 2D fast low-angle shot stacked time series of 50 s each covering the whole thorax, plus 2 additional sagittal 2D fast low-angle shot planes (one for each lung), acquired in free breathing for PREFUL MRI; contrast-enhanced pulmonary MRA (4D time-resolved gradient echo MRA and 3D pulmonary MRA, total gadolinium dose 1.5 mmol/kg); as well as cardiac function (short-axis cine stack and 4-chamber cine view). While retrospectively ECG-gated cardiac cine sequences are used in the study protocol, novel fast real-time cardiac cine sequences (i.e., compressed-sensing MRI techniques) may be used, especially in cases of arrhythmia or shortness of breath (16). For a detailed description of the MRI protocol see Figure 3. A central read for cardiac function and strain analysis is performed using dedicated cardiac software (CMR42, Circle Cardiovascular Imaging). For the primary read, all MRI data are available except PREFUL MRI. In a sub-study, all MRI data are available except the 4D time-resolved gradient echo MRA and 3D pulmonary MRA, in order to test PREFUL, as a secondary outcome parameter, for CTEPH diagnosis without the need for i.v. contrast.
Reference Test
VQ-SPECT is used as the reference test in this study. In clinical practice, several CTEPH expert centers use perfusion single-photon emission computed tomography (Q-SPECT/CT) without concomitant ventilation SPECT (V-SPECT) to exclude or diagnose CTEPH, although this is not in compliance with the current ERS/ESC guidelines (8,17). Therefore, in this study Q-SPECT/CT is also accepted as a reference test, reflecting current clinical practice and real-world test performance. In this article, the reference test of VQ-SPECT or Q-SPECT/CT is termed SPECT. The duration of the intervention is ∼60 min per patient. The radiation exposure applied by the reference test is ∼1.2-2 mSv (17).
Diagnostic Gold Standard
Catheter pulmonary angiography or computed tomography pulmonary angiography (CPA/CTPA) is considered the current clinical gold standard for confirming chronic pulmonary embolism in a diagnostic algorithm (7). CTPA and CPA are well suited as a verification step for the diagnosis of chronic pulmonary embolism (18). Nevertheless, both impose a considerable radiation burden on the patient. Therefore, in current diagnostic practice, in many expert centers only patients with a positive SPECT undergo a verification step with CPA/CTPA, whereas at some sites CPA/CTPA is used for verification in all patients.
In this study, patients are verified only if verification with CPA/CTPA is clinical routine at the respective study site. It is the responsibility of the treating pneumologist to decide whether verification by CPA/CTPA is clinically indicated. In any case, verification with CPA/CTPA is not part of the study directives but a clinical decision.
A clinical follow-up after 6-12 months is implemented, in which all patients are contacted and asked whether there has been any further intervention indicative (or contraindicative) of CTEPH, in order to identify cases falsely diagnosed as negative (or positive, respectively) in the diagnostic work-up. Additionally, cases falsely diagnosed as negative by the screening test (SPECT-negative patients are not verified by CPA/CTPA on a regular basis) but subsequently identified as positive can be captured by incorporating the clinical follow-up. Since the gold standard should capture the true diagnosis at inclusion, the clinical follow-up diagnosis is restricted to within 1 year of inclusion, to ensure that CTEPH, if diagnosed in the clinical follow-up, was already present at inclusion and did not develop afterwards.
With this trial design we avoid an increase in the diagnostic burden (i.e., the increased radiation of CPA/CTPA) for SPECT-negative patients without clinically indicated CPA/CTPA. Verification of all patients (to precisely estimate the sensitivity and specificity of functional MRI) was considered infeasible and ethically indefensible because of the additional radiation dose required for CPA/CTPA without any clinical justification. Consequently, a composite gold standard for pulmonary embolism is used in this trial, combining SPECT, CPA/CTPA, and follow-up. The composite gold standard is a diagnostic strategy with SPECT as the screening test and CPA/CTPA as the verification test, corrected by the clinical diagnosis after 6-12 months:
• If CPA/CTPA is performed during the initial diagnostic work-up, the gold standard is set to the result of CPA/CTPA corrected for the clinical diagnosis after 6-12 months. In cases of mismatch, the clinical 6-12 month follow-up diagnosis overrules the CPA/CTPA diagnosis (e.g., if the CPA/CTPA diagnosis is negative but the 6-12 month diagnosis is positive, the gold standard diagnosis is positive).
• If CPA/CTPA has not been conducted during the initial diagnostic work-up, the gold standard is set to the clinical diagnosis after 6-12 months.
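The decision logic of this composite gold standard can be summarised in a few lines of code. The sketch below is an illustrative reading of the rules above, with the handling of missing values kept deliberately simple; it is not part of the trial's statistical analysis plan.

```python
def composite_gold_standard(cpa_ctpa, follow_up):
    """Composite reference diagnosis for chronic PE.
    cpa_ctpa: True/False result of CPA/CTPA, or None if it was not performed.
    follow_up: clinical diagnosis at 6-12 months (True/False), or None if unavailable.
    """
    if cpa_ctpa is not None:
        # CPA/CTPA result, overruled by a discordant 6-12 month follow-up diagnosis
        return follow_up if follow_up is not None else cpa_ctpa
    # No verification during the work-up: fall back on the clinical follow-up diagnosis
    return follow_up

print(composite_gold_standard(cpa_ctpa=False, follow_up=True))   # -> True (corrected)
print(composite_gold_standard(cpa_ctpa=None, follow_up=False))   # -> False
```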
Clinical Routine and Study Flow
Figure 4 illustrates the current diagnostic work-up for pulmonary embolism in the diagnostic strategy for CTEPH, the additionally conducted study procedures, and the path of verification and diagnostic decision in the CHANGE-MRI trial.
In-place Assessment
In this study, standard operating procedures are implemented at each site with the aim that the in-place assessments of SPECT (following the clinical routine) and MRI are performed and evaluated independently by two different readers from different teams. The respective diagnoses are termed in-place assessments and are used for secondary analyses. Both readers are aware of the medical history of the patient under investigation but are not allowed to know the diagnosis made with the aid of the other test, as the reference test is already clinically established and, as such, bears some information. Accordingly, the system for electronic Case Report Forms is set up in such a way that each reader's access is restricted to the respective imaging method. If required by the treating pneumologist, CPA/CTPA is performed and evaluated at the study site as well.
Blinded-Reader Assessment
A centrally organized blinded reader assessment of SPECT and functional MRI will be implemented to ensure that no information carry-over occurs from the clinical SPECT diagnosis to the functional MRI diagnosis or vice versa. Therefore, these blinded reader assessments will be used for the primary analysis. All SPECTs and functional MRIs will undergo a blinded read, for which independent expert readers are randomly selected for each center and each imaging method from one of the other centers. The second reader is not aware of the findings from the initial in-place read or of the respective other imaging method, but has access to the patient's clinical information used in the in-place SPECT read. In addition to ensuring a truly blinded assessment of the imaging methods, the standardized blinded reader assessment allows assessment of the inter-rater agreement and context sensitivity of the diagnostic evaluations. If there is a discrepancy between the in-place read and the second reader, a final consensus read will be performed by a third expert reader.
Outcome Measures
SPECT and MRI will be compared based on a standardized blinded reader assessment of the imaging methods, as well as on the in-place assessments made as part of the clinical workflow. The co-primary endpoint will also be analyzed based on the in-place assessment, and the results will be compared to assess the inter-rater agreement and context sensitivity of the diagnostic evaluations and to evaluate the external validity and generalizability of the primary results. Additional secondary endpoints based on the in-place assessment and the blinded reader assessment include the diagnostic measures sensitivity, specificity, and positive and negative predictive values of MRI and SPECT as compared with the defined gold standard. To assess the additional benefit of MRI compared with SPECT, the proportion of patients with positive MRI in the group of patients who have a negative SPECT but are positive in the gold standard, and the proportion of patients with negative MRI in patients who are SPECT positive but have a negative gold standard, will be evaluated.
To assess safety, the occurrence of allergic reactions or other adverse events, non-completed procedures and the reasons for non-completion, non-diagnostic procedures and the reasons for non-diagnostic results, and the quality of the images will be evaluated for both imaging techniques.
Sample Size
The sample-size calculation was based on a substudy from the Sheffield ASPIRE registry with 132 patients suspected of CTEPH (12). The prevalence of CTEPH was around 60%; sensitivities and specificities were 96% and 90% for Q-SPECT and 97% and 92% for functional MRI. It was not appropriate to use these numbers directly for sample size estimation: first, the primary analysis of the CHANGE-MRI trial uses different, unconventional endpoints owing to the incomplete verification by CPA/CTPA; second, it was unclear to what extent unverified (by CPA/CTPA) patients would occur in the study setting and how large the resulting differential verification bias might be.
Thus, a simulation study was conducted based on the ASPIRE study, simulating 1,000 patients in different scenarios. Estimates for the primary endpoint proportions were taken from the mean diagnostic table of 5,000 simulation runs (see Table 2): the proportion of patients with positive MRI among those with positive SPECT and positive gold standard was estimated to be 98%, and the proportion of patients with positive MRI among those with negative SPECT and negative gold standard was estimated to be 6%. These estimates were used for sample size calculations with a Chi²-test. For both co-primary hypotheses, a one-sided type-I error of 2.5% and a power of 80% were used. Sample size calculations for the individual hypotheses resulted in 331 patients with positively verified SPECT and 388 patients with negative SPECT and negative gold standard. To calculate the overall sample size needed to achieve these sample sizes in the subgroups, the expected diagnostic table was calculated based on the simulation study. For each hypothesis, the expected overall sample size was calculated taking into account the expected prevalence of each subgroup (58% and 36%, respectively; see Table 2). Consequently, the final sample size was determined by the higher of the two, which is associated with the second primary objective [Table 2, part (b)]. A total of N = 1,080 patients is anticipated.
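The reported figures can be approximately reproduced with a normal-approximation (chi-square equivalent) one-sample test of a proportion, as sketched below. This is a simplified re-derivation for illustration, not the trial's actual sample-size program; small differences (e.g., 330 vs. 331) may reflect rounding or continuity corrections.

```python
from scipy.stats import norm

def n_one_sample_proportion(p0, p1, alpha=0.025, power=0.80):
    """Normal-approximation sample size for a one-sided one-sample test of
    H0: p = p0 against the anticipated true proportion p1."""
    z_a = norm.ppf(1 - alpha)
    z_b = norm.ppf(power)
    num = z_a * (p0 * (1 - p0)) ** 0.5 + z_b * (p1 * (1 - p1)) ** 0.5
    return (num / abs(p1 - p0)) ** 2

# (i) MRI positive in >95% of SPECT-positive/gold-positive patients (expected 98%)
n_pos = n_one_sample_proportion(0.95, 0.98)    # ~330 (protocol: 331)
# (ii) MRI positive in <10% of SPECT-negative/gold-negative patients (expected 6%)
n_neg = n_one_sample_proportion(0.10, 0.06)    # ~388

# Scale up by the expected subgroup prevalences (58% and 36%, Table 2)
print(round(n_pos / 0.58), round(n_neg / 0.36))  # ~569 and ~1078 -> N ~ 1,080
```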
Statistical Analysis
The primary analysis will be performed in a modified intention-to-treat population, i.e., patients will be excluded from the analysis only if no assessment (neither in-place nor blinded) is available for any of the three diagnostic tests and no further follow-up data on the clinical diagnosis after 6-12 months are available. The overall two-sided type-I error probability is set to 5%. Point estimates and 95% Wilson confidence intervals will be calculated for both co-primary endpoints based on the blinded reader assessments. Since both endpoints are evaluated co-primarily, no correction of the overall type-I error probability is necessary.
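For reference, a 95% Wilson score interval for a proportion can be computed as sketched below; the counts in the example are invented purely for illustration.

```python
from scipy.stats import norm

def wilson_ci(successes, n, conf=0.95):
    """Wilson score confidence interval for a binomial proportion."""
    z = norm.ppf(1 - (1 - conf) / 2)
    p_hat = successes / n
    denom = 1 + z ** 2 / n
    centre = (p_hat + z ** 2 / (2 * n)) / denom
    half = z * ((p_hat * (1 - p_hat) + z ** 2 / (4 * n)) / n) ** 0.5 / denom
    return centre - half, centre + half

# Illustrative numbers only: 325 of 331 SPECT-positive/gold-positive patients MRI positive
print(wilson_ci(325, 331))   # approximately (0.96, 0.99)
```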
Imputation of missing values depends on whether any assessments of the SPECT, MRI, or CPA/CTPA images and the clinical follow-up diagnosis are available. If either in-place or blinded reader assessments are available, missing values of MRI or SPECT will preferably be replaced by the results of the respective available assessments. If a test has not been performed at all, or neither an in-place nor a blinded assessment is available, missing values of MRI or SPECT will be replaced using a conservative strategy in favor of SPECT, in which missing information for the experimental functional MRI is in all instances counted such that sensitivity and specificity are diminished as compared with SPECT.
If values are missing for all three diagnostics, MRI, VQ-SPECT, and the gold standard (e.g., because the patient decided against the study directly after giving written informed consent), and no further follow-up information is available for the patient, the patient is omitted from the analysis population and no imputation is performed.
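One possible reading of this conservative replacement rule is sketched below. It is an interpretation offered for illustration only and may differ from the trial's statistical analysis plan, in particular for patients in whom the gold standard itself is unavailable.

```python
def impute_missing_mri(mri, spect, gold):
    """Conservative replacement of a missing MRI result 'in favor of SPECT':
    the imputed value is chosen so that MRI cannot appear more sensitive or
    more specific than SPECT (illustrative interpretation only)."""
    if mri is not None:
        return mri
    if gold is not None:
        # gold-positive -> count MRI as negative (lowers MRI sensitivity);
        # gold-negative -> count MRI as positive (lowers MRI specificity)
        return not gold
    # Assumption: without a gold standard, fall back on the SPECT result
    return spect

print(impute_missing_mri(None, spect=True, gold=True))    # -> False
print(impute_missing_mri(None, spect=False, gold=False))  # -> True
```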
Secondary analyses will be performed in line with the primary analyses, using point estimates and 95% Wilson confidence intervals for the various proportions of interest. Safety endpoints will be evaluated descriptively, with absolute and relative frequencies of complications and malfunctions of MRI and SPECT, and will be compared using risk differences and Chi²-tests. As sensitivity analyses, the primary analysis will be repeated in the per-protocol population based on the blinded reader assessment, comprising all patients without missing co-primary endpoints. Additionally, a sensitivity analysis will be conducted in the per-protocol population based on the in-place assessment, comprising all patients for whom the in-place MRI diagnosis was entered into the eCRF before the SPECT assessment, so that the blinding can be considered intact.
Methods Against Bias
All centers document the age and gender of all patients with suspected CTEPH undergoing SPECT in a screening log. There is agreement that information will be made available by the hospital administrations of all centers to assess how representative the investigated patient population is of the overall CTEPH population.
It is the primary aim of this diagnostic study to demonstrate perfect agreement between SPECT and functional MRI in all cases where the SPECT can be considered correct. Therefore, it is of utmost importance that the two diagnostic tests are conducted and evaluated independently. To ensure the internal validity of the co-primary endpoint, the diagnoses used in the primary analysis are determined in a standardized blinded reader assessment of the SPECT and MRI images.
Standard operating procedures are developed for each study site to ensure that the in-place assessments of MRI and SPECT are performed independently and blinded against each other.
Before commencement of the CHANGE-MRI study at the respective centers, local readers are trained on the basis of ∼15 MRI test cases that are provided by the centers to exclude effects of learning curves on the outcome measures.
DISCUSSION
In this study, Q-SPECT/CT is also accepted as an approximation of VQ-SPECT as the reference test, reflecting current clinical practice and real-world test performance, although this is not in compliance with the current ESC/ERS guidelines (8,17). This omission of V-SPECT will in particular affect the specificity of the reference test in cases of Q-SPECT/CT, since CT cannot be regarded as a fully equivalent substitute for V-SPECT.
For the primary evaluation, the blinded reader assessments of SPECT and MRI are used to ensure the independence of the assessments and to maximize internal validity. On the other hand, the external validity of the results might be reduced by the standardized blinded assessment, which does not necessarily reproduce the clinical field of application completely. Consequently, to analyze the external validity and generalizability of the results based on the primary blinded assessment, the in-place assessments of SPECT and MRI are used as secondary endpoints. While the independence of these assessments cannot be guaranteed, the clinical setting is well represented.
To avoid an increased radiation burden, in this trial the current gold-standard instrument CPA/CTPA is routinely applied only in SPECT-positive patients, which might induce differential verification bias (6,19,20). To mitigate this limitation, the follow-up information up to 1 year after study inclusion will be included in the final diagnosis of PE, by correcting negative cases that are subsequently identified as positive in the follow-up period or (although less probable) correcting positive cases that are subsequently identified as negative.
Ethics votes from the Institutional Review Boards of all participating centers were obtained. The CHANGE-MRI study is designed so that no radiation in addition to the clinically indicated radiation-based imaging tests is applied to the study participants. Thus, approval by the Federal Office for Radiation Protection (Bundesamt für Strahlenschutz) was not required.
This study is conducted in compliance with the Declaration of Helsinki and ICH E6 guidance and, for design issues regarding diagnostic validation studies, with the EMA Guideline on the Clinical Evaluation of Diagnostic Agents.
The multicenter CHANGE-MRI study could pave the way for lung MRI methods to be an equal rights alternative to VQ-SPECT in the diagnostic pathway for CTEPH. By generating robust evidence from both in-place assessments and a standardized blinded reader assessment, this trial will generate the data needed for a thorough comparison of SPECT and MRI technology.
"year": 2020,
"sha1": "fd3111ad3c9afde8cdf79d8d1a1a7d5b45fdee8d",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcvm.2020.00051/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fd3111ad3c9afde8cdf79d8d1a1a7d5b45fdee8d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
MiR-30a and miR-379 modulate retinoic acid pathway by targeting DNA methyltransferase 3B in oral cancer
Background: Epigenetic silencing of retinoic acid (RA) signaling-related genes has been linked with the pathogenesis and clinical outcome of oral squamous cell carcinoma (OSCC). However, the precise mechanisms underlying the abnormal silencing of RA signaling-related genes in OSCC have not been well investigated. Methods: Using a combined analysis of genome-wide gene expression and methylation profiles from 40 matched normal-tumor pairs of OSCC specimens, we found that a set of retinoid signaling-related genes is frequently hypermethylated and downregulated in OSCC patient samples, including alcohol dehydrogenase, iron containing 1 (ADHFE1) and aldehyde dehydrogenase 1 family, member A2 (ALDH1A2), which are important rate-limiting enzymes in the synthesis of RA. The expression of ADHFE1 and ALDH1A2 in OSCC patients was determined by quantitative real-time PCR (qRT-PCR) and immunohistochemistry. The binding sites of miR-30a and miR-379 in DNA methyltransferase 3B (DNMT3B) were predicted using a series of bioinformatic tools and validated using dual luciferase assays and Western blot analyses. The functions of miR-30a, miR-379, and DNMT3B were assessed by growth and colony formation analyses using gain- and loss-of-function approaches. Chromatin immunoprecipitation (ChIP) was performed to explore the molecular mechanisms of arecoline and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) treatment. Results: We demonstrated that deregulated miR-30a and miR-379 could represent a mechanism for the silencing of ADHFE1 and ALDH1A2 in OSCC through targeting of DNMT3B. Ectopic expression of miR-30a and miR-379 could induce re-expression of methylation-silenced ADHFE1 and ALDH1A2 and lead to growth inhibition in oral cancer cells. Furthermore, the dysregulation of these miRNAs and DNMT3B may result from exposure to tobacco smoking and betel quid chewing. Conclusions: Our results demonstrate that tobacco smoking and betel quid chewing could repress miR-30a and miR-379, which upregulates DNMT3B expression, in turn leading to the hypermethylation of the ADHFE1 and ALDH1A2 genes and, consequently, promoting oncogenic activity. These findings highlight the potential use of retinoids in combination with epigenetic modifiers for the prevention or treatment of oral cancer.
Background
Oral squamous cell carcinoma (OSCC) is the most common cancer of the head and neck worldwide. Even though outstanding diagnostic and therapeutic improvements are available, mortality associated with OSCC is still extremely high [1,2]. Therefore, it is urgent to identify reliable prognostic biomarkers for treatment failure, as well as to develop innovative drug targets for more effective and less toxic treatment. Recent evidence indicates that epigenetic alterations, apart from genetic alterations, are linked with the pathogenesis and clinical outcome of OSCC [3,4]. The most common epigenetic alteration in OSCC is aberrant DNA methylation, which can silence gene expression, regulate biological processes, and play an important role in cancer initiation, progression, and metastasis [5,6]. However, over the last decade, our understanding of how epigenetic alterations affect the tumor response, and its clinical application in OSCC carcinogenesis, has remained limited.
Previously, we conducted a global DNA methylation analysis of 40 OSCC primary tumor samples, deposited in the Gene Expression Omnibus (GEO) database (accession number GSE45238), which showed a specific signature of gene promoter methylation [7]. Among these genes, we found that a group of retinoic acid (RA) signaling-related genes is silenced by promoter hypermethylation in OSCC patients, including alcohol dehydrogenase, iron containing 1 (ADHFE1) and aldehyde dehydrogenase 1 family, member A2 (ALDH1A2). ADHFE1 can oxidize retinol to retinaldehyde, which is further metabolized to RA by ALDH1A2. ADHFE1 and ALDH1A2 are rate-limiting enzymes responsible for the synthesis of RA, which exerts its biological functions, including cell differentiation, cell-cycle arrest, and apoptosis under physiological and pathological conditions, through binding to nuclear RA receptors (RARs) that form heterodimers with retinoid X receptors (RXRs) to activate downstream target genes [8]. RA and related synthetic products are approved for the treatment of T-cell lymphoma and acute promyelocytic leukemia [9,10]; however, the chemopreventive and therapeutic effects of retinoids in solid tumors, including head and neck cancer, have failed to show a significant advantage [11]. One possible mechanism is that DNA methylation leads to aberrant RA signaling [8]. Several studies have demonstrated that the methylation-silenced expression of ADHFE1 and ALDH1A2 may be responsible for reduced RA levels and altered RA signaling [12][13][14]. The aberrant RA signaling caused by epigenetic silencing could contribute to tumor development and, consequently, render cells resistant to retinoids [15].
Recently, increasing evidence has suggested that microRNAs (miRNAs) play important roles in the promoter methylation of CpG islands by targeting the DNA methylation machinery [16,17]. Aside from miRNAs, epidemiological studies and experimental evidence suggest that many environmental chemical carcinogens, such as cigarette smoking and betel quid chewing, can also affect epigenetic mechanisms [18,19]. However, the molecular links among these chemical carcinogens, miRNAs, and reduced ADHFE1 and ALDH1A2 remain unclear in OSCC. In this study, we demonstrated that exposure of oral cancer cell lines to 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK, one of the major components of tobacco) or arecoline (Are, a major betel nut alkaloid) significantly reduced the expression of miR-30a and miR-379. Deregulated miR-30a and miR-379 could represent a mechanism for the silencing of ADHFE1 and ALDH1A2 in OSCC through modulation of DNA methyltransferase 3B (DNMT3B). These results highlight a role of miRNAs in the aberrant DNA methylation of OSCC and support a pharmacological rationale for combining RA and DNA methylation inhibitors in the prevention of oral carcinogenesis.
Tissue specimens
Paired tumor specimens and their adjacent nontumorous epithelia were obtained from OSCC patients as previously described [20]. The study protocol was reviewed and approved by the Institutional Human Experiment and Ethics Committee of the National Cheng Kung University Hospital (No: HR-97-100). These matched pairs of oral tumor/adjacent normal (T/N) tissues were grouped into two sets: a training set containing 40 samples for the genome-wide microarray study and a validation set containing 33 samples for ADHFE1, ALDH1A2, DNMT3B, and miRNA quantitative PCR analysis. For the immunohistochemical study, 36 matched pairs of oral T/N specimens were used for ADHFE1 staining and 38 matched pairs for ALDH1A2 staining. Clinical profiling and microarray data are available in the Gene Expression Omnibus (GEO) under accession numbers GSE37991 for gene expression, GSE45238 for miRNA expression, and GSE38823 for methylation analysis.
Immunohistochemistry (IHC)
For the immunohistochemical study, OSCC tissues were deparaffinized using xylene and then rehydrated through an ethanol series. Antigens were retrieved by autoclaving the slides in Dako retrieval buffer (Dako, Carpinteria, CA, USA). After cooling to room temperature, the slides were incubated with a primary ADHFE1 antibody (Sigma-Aldrich, St. Louis, MO, USA) or ALDH1A2 antibody (Santa Cruz Biotechnology, Santa Cruz, CA, USA) at 4°C overnight. Specific signals were then developed with the LSAB+ kit (Dako) using diaminobenzidine as the chromogen. Sections were then counterstained with hematoxylin and observed under a light microscope. Tumor ADHFE1 and ALDH1A2 levels were scored according to staining intensity as follows: 0, negative; 1, weak; 2, intermediate; and 3, strong. Two pathologists independently assessed all scorings.
RNA extraction, reverse-transcription PCR (RT-PCR)
Total RNA was extracted from OSCC cell lines using TRIzol reagent (Life Technologies, Gaithersburg, MD, USA) according to the manufacturer's instructions. RNA concentration was checked with a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Wilmington, DE, USA). For mRNA analysis, cDNA was synthesized using random hexamer primers and SuperScript III reverse transcriptase (Invitrogen, Carlsbad, CA). Gene expression analyses were assayed on a Biometra T3000 thermocycler (Biometra GmbH, Göttingen, Germany), and GAPDH was used as a loading control. PCR products were subjected to electrophoresis on a 2% agarose gel and visualized on a UVP GDS-8000 Bioimaging System (UVP, Upland, CA, USA) with 0.01% SYBRSafe (Invitrogen) in-gel staining.
Quantitative real-time PCR (qPCR)
For miRNA analysis, cDNA was synthesized using specific stem-loop RT primers and the TaqMan MicroRNA Reverse Transcription Kit (Applied Biosystems, Carlsbad, CA, USA). qPCR analysis was used to detect ADHFE1, ALDH1A2, and DNMT3B using Omics Green EvaGreen qPCR Master Mix (OMICS Biotechnology, New Taipei City, Taiwan) and to measure the expression levels of miR-30a and miR-379 using the QuantiTect SYBR Green PCR System (Qiagen, Hilden, Germany), respectively, according to the manufacturers' instructions, on the ABI StepOnePlus Real-time PCR system (Applied Biosystems). GAPDH and RNU44 were used as the internal controls. All reactions were run in triplicate, and relative expression levels were calculated as 2^(−ΔΔCt) after normalization to the internal control. All primers used in this study are summarized in Additional file 1: Table S1.
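For clarity, the 2^(−ΔΔCt) calculation can be expressed as in the sketch below. The Ct values shown are invented for illustration; the reference gene (GAPDH or RNU44) and the calibrator (e.g., the matched adjacent normal tissue) follow the description above.

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_calibrator, ct_ref_calibrator):
    """Relative quantification by the 2^-ddCt method, normalized to the internal
    control and expressed relative to the calibrator sample."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    return 2 ** -(d_ct_sample - d_ct_calibrator)

# Illustrative Ct values only (target vs. internal control in tumor and normal tissue)
print(relative_expression(28.0, 18.0, 25.5, 18.2))   # ~0.15-fold, i.e., downregulated
```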
Plasmids and transfection
The entire 3′-UTR fragment of DNMT3B, containing the target sequences of miR-30a and miR-379, was PCR-amplified and cloned into the pmirGLO firefly luciferase-expressing vector (Promega, Madison, WI, USA) according to the manufacturer's instructions. The miR-30a and miR-379 binding-site mutation vectors were constructed using a Site-Directed Mutagenesis Kit (Stratagene, La Jolla, CA, USA), and all constructs were verified by DNA sequencing. For miRNA expression, the pri-form miRNA sequence was amplified and subsequently cloned into the pLemiR miRNA expression vector (Open-Biosystem, Rockford, IL, USA). For transfection of the plasmids, cells were transiently transfected with 2 μg of plasmid using Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocol. The miRNA inhibitors (AM) and miRNA mimics (PM) were chemically modified RNA oligonucleotides obtained from Ambion. Nucleotide transfection was performed using Lipofectamine RNAiMAX (Invitrogen) according to the manufacturer's instructions.
Chromatin immunoprecipitation (ChIP)
The ChIP assay was performed as previously described [24]. For miRNA treatment, OEC-M1 cells were treated with control mimics (NC, 20 nM) or miRNA mimics (PM-30a or PM-379, 20 nM) for 48 h. For chemical treatments, DOK cells were treated with vehicle control (DMSO, 10 nM), arecoline (50 μM), or NNK (10 μM) for 5 days. For the demethylation assay, wild-type (Wt), Are-L, and NNK-L DOK cells were treated with 5-aza-dC (5 μM) for 5 days. Cells were then fixed with formaldehyde to cross-link chromatin-associated proteins to genomic DNA, lysed, and sonicated to generate DNA fragments between 200 and 1,000 base pairs (confirmed by agarose gel electrophoresis). The cell lysates were then subjected to overnight immunoprecipitation with a DNMT3B antibody (ab2851, Abcam, Cambridge, MA, USA) and subsequently analysed by PCR. The primers used in this study are summarized in Additional file 1: Table S1.
Bioinformatics and statistical analyses
Group differences were analyzed by the two-tailed Student's t test. All statistical analyses and graph presentation were performed using GraphPad Prism software, version 5.01 (GraphPad, San Diego, CA, USA). Correlations between pairs of data were assessed by Pearson or Spearman correlation analyses, as appropriate. A value of p < 0.05 was considered statistically significant.
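As an illustration of the matched tumor/normal comparisons and correlation analyses described above, the sketch below uses SciPy with invented expression values; it is not the original GraphPad Prism workflow, and the choice of a paired t test for matched pairs is an assumption made for the example.

```python
from scipy import stats

# Illustrative paired expression values (log2) for tumor vs. matched normal tissue
tumor = [2.1, 1.8, 2.6, 3.0, 1.2, 2.4]
normal = [3.5, 3.1, 3.8, 4.2, 2.9, 3.6]

t, p = stats.ttest_rel(tumor, normal)     # two-tailed paired t test
r, p_r = stats.pearsonr(tumor, normal)    # Pearson correlation between the pairs
print(f"paired t = {t:.2f}, p = {p:.3g}; Pearson r = {r:.2f}, p = {p_r:.3g}")
```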
Retinoid pathway is downregulated and methylated in oral cancer
Upon analysis of the gene expression array (GEO accession number GSE37991) and the DNA methylation array (GEO accession number GSE45238) in 40 patients with OSCC, we found that a set of retinoid signaling-related genes, including ADHFE1, ALDH1A2, cellular retinol-binding protein 1 (CRBP1), paired box gene 9 (PAX9), growth differentiation factor 10 (GDF10), transforming growth factor beta receptor 3 (TGFBR3), and peroxisome proliferator-activated receptor gamma (PPARγ), was not only highly downregulated but also methylated in OSCC cancer tissues compared with paired noncancerous tissues (Fig. 1a, Table 1). Because both ADHFE1 and ALDH1A2 are rate-limiting enzymes that participate in retinoid metabolism by oxidizing retinol via retinaldehyde to retinoic acid [25], we concentrated our attention on these two enzymes in the following study. Using qRT-PCR to further validate the expression levels of ADHFE1 and ALDH1A2, we found that ADHFE1 and ALDH1A2 levels were significantly lower in tumors than in corresponding normal samples (p < 0.0001) (Fig. 1b). Moreover, immunohistochemical analysis of representative specimens from patients with OSCC demonstrated that ADHFE1 and ALDH1A2 expression was greater in nontumor squamous epithelium than in tumor tissues (Fig. 1c-d).
DNMT3B is involved in ADHFE1 and ALDH1A2 silencing
ADHFE1 and ALDH1A2 have been reported to be downregulated and hypermethylated in cancers [12,13]. From our microarray data, we also found that DNMT1, DNMT3A, and DNMT3B were overexpressed in oral cancer (Fig. 1a). Next, we attempted to determine which DNMT member is involved in methylation-mediated ADHFE1 and ALDH1A2 silencing. As shown in supplementary Fig. S1, DNMT3B is not only differentially expressed in tumor tissues (Additional file 2: Fig. S1a) but also shows a significant correlation with ADHFE1 and ALDH1A2 expression (Additional file 2: Fig. S1b). We further used an independent cohort of 33 OSCC tissues to validate the identified DNMT and confirmed that DNMT3B displayed higher levels in OSCC tumors (Fig. 2a) and that its expression negatively correlated with the expression of ADHFE1 and ALDH1A2 (p < 0.05) (Fig. 2b). Consistent with this, Western blotting and RT-PCR analysis revealed that DNMT3B levels are relatively higher in most OSCC cell lines compared with the normal human keratinocyte line HOK (Fig. 2c). To examine the role of aberrant methylation in the deregulation of ADHFE1 and ALDH1A2 in OSCC, we evaluated the effect of the methylation inhibitor 5-aza-2′-deoxycytidine (5-aza-dC) on ADHFE1 and ALDH1A2. We found that ADHFE1 and ALDH1A2 expression was increased by incubation with 5-aza-dC in both SCC-15 and OEC-M1 cells (Fig. 2d). To better study the methylation-dependent mechanism of the ADHFE1 and ALDH1A2 changes, we used RNA interference to knock down DNMT3B expression (Fig. 2e). After incubation with si-DNMT3B, the expression of ADHFE1 and ALDH1A2 mRNA was upregulated in SCC-15 and OEC-M1 cells (Fig. 2f). These data indicate that ADHFE1 and ALDH1A2 can be modulated by an epigenetic mechanism and that DNMT3B plays a role in ADHFE1 and ALDH1A2 promoter methylation.
Fig. 1 (legend, continued): Bottom left, ADHFE1 scoring grades for the 36 OSCC specimen pairs and ALDH1A2 scoring grades for the 38 OSCC specimen pairs. Bottom right, comparison of staining intensity between nontumor (N) and tumor (T) OSCC specimens.
MiR-30a and miR-379 directly target DNMT3B in OSCC cells
To test whether DNMT3B is targeted by miRNAs, predicted miRNAs were retrieved from the microRNA.org database and combined with miRNA microarray data from OSCC patients (GSE45238) [20]. MiR-30a and miR-379 were two of the potential candidates with highly conserved putative binding sequences in the DNMT3B 3′-UTR (Additional file 2: Fig. S2). To test the hypothesis that miR-30a and miR-379 can directly target DNMT3B, we constructed dual-luciferase reporter plasmids containing either the wild-type or a mutated 3′-UTR of DNMT3B for miR-30a or miR-379 (wt-3′-UTR, mt-30a-3′-UTR, mt-379-3′-UTR) (Additional file 2: Fig. S2). We observed a remarkable reduction of the wild-type 3′-UTR reporter activity in the presence of miR-30a or miR-379, whereas no obvious change in the activity of the mutant 3′-UTR reporter plasmids was observed (Fig. 3a). Transfecting OSCC cells with miR-30a and miR-379 mimics (PM) resulted in a significant repression of endogenous DNMT3B expression; conversely, depletion of miR-30a and miR-379 with miRNA inhibitors (AM) caused upregulation of DNMT3B at both the mRNA and protein level (Fig. 3b). Meanwhile, overexpression of miR-30a and miR-379 also increased ADHFE1 and ALDH1A2 mRNA expression in SCC-15 and OEC-M1 cells (Fig. 3c). We also analyzed the expression levels of miR-30a and miR-379 in OSCC clinical samples. The results revealed that the levels of miR-30a and miR-379 were not only significantly reduced in tumors (Fig. 3d) but also strongly negatively correlated with the expression of DNMT3B (Fig. 3e). Taken together, our data suggest that miR-30a and miR-379 influence ADHFE1 and ALDH1A2 expression by directly targeting DNMT3B in OSCC.
MiR-30a and miR-379 regulate OSCC cell proliferation through the retinoid pathway
To identify the tumorigenic roles of miR-30a and miR-379 in OSCC, we determined whether upregulation of these miRNAs affects the viability of OSCC cells. Ectopic overexpression of miR-30a and miR-379 (Fig. 4a) decreased colony-forming ability in clonogenic proliferation assays (Fig. 4b) and reduced the growth rates of OSCC cells in MTT assays (Fig. 4c). Abundant evidence shows that retinoids inhibit cell-cycle progression and cell proliferation in a variety of human cancer cells through binding to RAR/RXR heterodimers [26][27][28]. Moreover, miR-30a and miR-379 could increase ADHFE1 and ALDH1A2 expression and thereby affect the retinoid metabolic pathway (Figs. 2 and 3). To verify this hypothesis, RXR transcriptional activity was measured by luciferase assay. We showed that overexpression of miR-30a and miR-379 significantly enhanced RXR transcriptional activity relative to the control (NC) in OEC-M1 cells (Fig. 4d). Retinoic acid was used to stimulate RXR transcriptional activity and served as a positive control (Fig. 4d). Furthermore, using chromatin immunoprecipitation (ChIP), we demonstrated that miR-30a and miR-379 treatment reduced DNMT3B binding to the ADHFE1 and ALDH1A2 promoter regions in OEC-M1 cells. Simultaneously, we also found that miR-30a and miR-379 treatment not only increased acetylation of histone 3 (H3Ac) but also decreased trimethylation of histone 3 lysine 9 (H3K9me3) and histone 3 lysine 27 (H3K27me3) in the ADHFE1 and ALDH1A2 promoter regions (Fig. 4e). These results indicate that miR-30a and miR-379 relieve DNMT3B-mediated epigenetic repression of ADHFE1 and ALDH1A2 and thereby inhibit OSCC cell proliferation through the retinoid pathway.
MiR-30a and miR-379 are involved in arecoline- and NNK-induced epigenetic silencing
Smoking and betel quid chewing are two of the most important risk factors for oral cancer in Taiwan [19], and DNA hypermethylation has been reported to be related to smoking and betel quid chewing [18,29]. Next, we attempted to determine the effect of arecoline, a major component of betel nut alkaloids, and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), one of the major components of tobacco, on the expression of miR-30a, miR-379, DNMT3B, ADHFE1, and ALDH1A2. As shown in Fig. 5a, arecoline treatment significantly decreased the expression levels of miR-30a and miR-379 in DOK cells. Notably, under these conditions, arecoline not only increased the expression level of DNMT3B but also decreased the expression of ADHFE1 and ALDH1A2 (Fig. 5b). Using chromatin immunoprecipitation (ChIP), we demonstrated that arecoline treatment increased DNMT3B binding to the ADHFE1 and ALDH1A2 promoters (Fig. 5c). On the other hand, treatment with 5-aza-dC (a DNMT inhibitor) significantly rescued arecoline-repressed ADHFE1 and ALDH1A2 expression (Fig. 5d). In addition, miR-30a and miR-379 treatment significantly increased arecoline-repressed ADHFE1 and ALDH1A2 expression (Fig. 5e). Similar results were observed in NNK-treated DOK cells (Fig. 5a-e). Taken together, these findings demonstrate that arecoline or NNK exposure can downregulate miR-30a and miR-379 in oral cancer cells, consequently increasing the DNMT3B protein level, recruiting DNMT3B to the ADHFE1 and ALDH1A2 promoters, and causing DNA methylation.

Fig. 2 Silencing of ADHFE1 and ALDH1A2 through DNMT3B. a Relative DNMT3B expression levels in 33 OSCC tumors (T) compared with their own adjacent normal tissues (N). b Correlation analysis of DNMT3B and ADHFE1 or ALDH1A2 in OSCC patients (n = 33) by qRT-PCR analysis. Pearson correlation coefficients and p-values were calculated as indicated. Red, tumor part; green, normal part. c Expression level of DNMT3B by RT-PCR and Western blot (W.B.) analysis in human oral keratinocytes (HOK) and OSCC cell lines. GAPDH and α-tubulin were used as internal controls, respectively. d RT-PCR analysis of ADHFE1 and ALDH1A2 expression after 5-aza-dC (5 μM) treatment for 5 days. GAPDH was used as an internal control. e RT-PCR and Western blot analysis of DNMT3B in SCC-15 and OEC-M1 cells following DNMT3B knockdown (si-DNMT3B) or non-targeting siRNA control (si-SC) for 48 h. GAPDH and α-tubulin were used as internal controls, respectively. f qRT-PCR analysis of ADHFE1 and ALDH1A2 expression after DNMT3B knockdown (si-DNMT3B) compared with siRNA control (si-SC) in SCC-15 and OEC-M1 cells. All data are presented as mean ± SD; ***p < 0.001
DNA methylation-mediated silencing of ADHFE1 and ALDH1A2 is reversible
Persistent smoking has lasting effects on DNA methylation, and methylation levels correlate with the cumulative dose of smoking [18,30]. To assess the impact of long-term exposure to arecoline and NNK on DNA methylation and miRNA expression, we generated arecoline long-term-treated DOK cells (Are-L) and NNK long-term-treated DOK cells (NNK-L) for further assays. As shown, miR-30a and miR-379 exhibited lower expression in Are-L and NNK-L cells than in wild-type DOK cells (Fig. 6a). Moreover, Are-L and NNK-L cells also had higher amounts of DNMT3B protein and stronger DNMT3B binding to the ADHFE1 and ALDH1A2 promoters (Fig. 6b-c). As expected, the expression of ADHFE1 and ALDH1A2 was silenced in both Are-L and NNK-L cells (Fig. 6d). Conversely, 5-aza-dC treatment of Are-L and NNK-L cells caused significant DNMT3B repression and, consequently, a decrease in DNMT3B binding to the ADHFE1 and ALDH1A2 promoters and induction of ADHFE1 and ALDH1A2 (Fig. 6b-d). These data suggest that persistent exposure to arecoline and NNK has a long-lasting effect on ADHFE1 and ALDH1A2 DNA methylation. Even so, inhibiting DNA methylation can still restore the expression of ADHFE1 and ALDH1A2 in cells treated long term with NNK or arecoline.

(Fig. 3 legend fragment: e Correlation analysis of DNMT3B and miR-30a or miR-379 in OSCC patients (n = 33) by qRT-PCR analysis. Pearson correlation coefficients and p-values were calculated as indicated. Red, tumor part; green, normal part.)
Discussion
ADHFE1 and ALDH1A2 can participate in retinoid metabolism by oxidizing retinol to retinaldehyde and then to RA. RA binds a nuclear retinoic acid receptor and transcriptionally regulates genes involved in several biological processes, including cell growth, differentiation, and carcinogenesis [8]. In the past, promoter hypermethylation of ADHFE1 or ALDH1A2 was identified as a common event in cancers and served as a risk factor associated with poor prognosis [12,13,31,32]. However, the epigenetic changes and the regulatory mechanisms of the ADHFE1 and ALDH1A2 genes in human OSCC remain unclear. Previously, we found that a set of retinoid signaling-related genes, including ADHFE1, ALDH1A2, CRBP1, PAX9, GDF10, TGFBR3, and PPARγ, was frequently hypermethylated and downregulated in OSCC patient samples [7], suggesting severe molecular defects in RA metabolism in oral cancer. RA signaling defects often cause resistance in solid tumors and result in RA treatment failure [15,33]. Potential mechanisms of RA resistance in solid tumors have been proposed, such as the loss of RAR coactivators [34], impaired RAR signaling [15], increased RA metabolism [35] and decreased RA availability [36]. In addition, RARβ is frequently lost early in carcinogenesis through epigenetic silencing, which is probably an important reason for RA resistance in carcinoma [37]. However, aberrant expression of RARβ was not observed in our cohort of OSCC patients (GSE37991). Instead of RARβ, our data demonstrate that epigenetic disruption of ADHFE1 and ALDH1A2 is a common event in human OSCC. ADHFE1 and ALDH1A2 catalyze irreversible steps in the synthesis of RA and thereby regulate distinct cellular functions [8]. It has been reported that cancer cells are unable to synthesize RA from retinol owing to loss of expression of ALDH6 [38,39], and this conclusion is consistent with our findings. Our findings suggest that the methylation-silenced expression of ADHFE1 and ALDH1A2 may be responsible for lower RA levels and, ultimately, for RA resistance in OSCC. In view of this, delivery of retinoids alone to patients is challenging because of the rapid metabolism of retinoids in the blood circulation and because epigenetic changes can render cells RA resistant [40]. Therefore, the pharmacologic rationale is likely to require a combination of retinoids and DNMT inhibitors for the treatment of OSCC patients lacking ADHFE1 and ALDH1A2 expression.

Fig. 4 The effects of miR-30a and miR-379 overexpression on OSCC cells. a qRT-PCR analysis showing the expression level of miR-30a (pLemiR-30a) and miR-379 (pLemiR-379) compared with vector control (pLemiR-NS) in SCC-15 and OEC-M1 cells. b Colony formation assay after miR-30a and miR-379 transfection in SCC-15 and OEC-M1 cells for 7 days (left). The mean number of colonies for each well was determined from three independent assays (right). c Growth rates of OEC-M1 cells measured by MTT assay after vector control, miR-30a or miR-379 transfection. d OEC-M1 cells co-transfected with 1 μg of empty control vector and pRXR vector were incubated with the vehicle control (DMSO, 10 nM), 9-cis-RA (25 nM), control mimics (NC, 20 nM), miR-30a (20 nM) or miR-379 (20 nM). The relative luciferase activity of each sample was measured at 48 h after transfection and normalized to Renilla luciferase activity. e ChIP assay of the ADHFE1 and ALDH1A2 promoter regions was performed with OEC-M1 cells using anti-DNMT3B antibody, anti-acetyl-histone H3 (H3Ac) antibody, anti-histone H3 trimethylation of lysine 9 (H3K9me3) antibody, anti-histone H3 trimethylation of lysine 27 (H3K27me3) antibody, control mouse IgG (mIgG) antibody and control rabbit IgG (rIgG) antibody after treatment with control mimics (NC, 20 nM) or miRNA mimics (PM-30a or PM-379, 20 nM) for 48 h. All data are presented as mean ± SD; **p < 0.01; ***p < 0.001

Fig. 5 Arecoline and NNK induced DNMT3B activity and repressed ADHFE1, ALDH1A2 and miRNA expression. a qRT-PCR analysis of miR-30a and miR-379 expression levels after treatment with arecoline (50 μM) or NNK (10 μM) for the indicated days. b RT-PCR analysis of ADHFE1 and ALDH1A2 levels and western blot analysis of DNMT3B levels in DOK cells after treatment with arecoline (50 μM) or NNK (10 μM) for the indicated times. GAPDH and α-tubulin were used as internal controls. c ChIP assay of the ADHFE1 and ALDH1A2 promoter regions was performed with DOK cells using anti-DNMT3B antibody after treatment with vehicle control (DMSO, 10 nM), arecoline (50 μM) or NNK (10 μM) for 5 days. Mouse IgG (mIgG) antibody was used as a negative control. d RT-PCR analysis of ADHFE1 and ALDH1A2 levels in DOK cells after treatment with arecoline (50 μM) or NNK (10 μM) alone or combined with 5-aza-dC (5 μM) for 5 days. GAPDH was used as an internal control. e qRT-PCR analysis of ADHFE1 and ALDH1A2 levels in DOK cells after treatment with vehicle control (C), arecoline (50 μM) plus control mimics (NC), miR-30a (20 nM) or miR-379 (20 nM), or NNK (10 μM) plus control mimics (NC), miR-30a (20 nM) or miR-379 (20 nM) for 5 days. GAPDH was used as an internal control. All data are presented as mean ± SD; **p < 0.01; ***p < 0.001
Interestingly, aberrant DNA methylation has been reported to be involved in oral cancer associated with tobacco smoking and betel quid chewing [41,42]. Tobacco smoking and betel quid chewing are the most common environmental risk factors for the development of oral cancer in Taiwan [43]. In this study, we found that NNK and arecoline treatment recruited DNMT3B to the ADHFE1 and ALDH1A2 promoter regions and subsequently repressed the expression of ADHFE1 and ALDH1A2 in oral cancer. Epidemiological studies reveal that epigenetic modifications, such as DNA methylation, may represent a biological marker of lifetime accumulation of environmental exposures related to aging [44], alcohol [45], smoking [18,46], and perhaps many others. Here, our experiments demonstrated that long-term exposure to NNK and arecoline has lasting effects on DNA methylation, especially on the promoter regions of ADHFE1 and ALDH1A2. Notably, DNMT inhibitor treatment of OSCC cells that had been exposed long term to NNK and arecoline caused a significant repression of DNMT3B and a dramatic re-expression of ADHFE1 and ALDH1A2, suggesting that the lasting effects of NNK and arecoline on DNA methylation are reversible. These results highlight that loss of ADHFE1 and ALDH1A2 expression could serve as an attractive biomarker to stratify OSCC patients and that a combination of retinoids and demethylating agents may be preferable for therapeutic/preventive strategies in patients with oral cancer.
In addition, we found that reduction of miR-30a and miR-379 relieves their inhibitory effect on DNMT3B, which contributes to hypermethylation of the ADHFE1 and ALDH1A2 genes. It has been reported that miR-30a and miR-379 are downregulated in many cancers and act as tumor suppressors that regulate various biological processes, including proliferation, cell cycle, apoptosis, and metastasis [47,48]. The present study showed that miR-30a and miR-379 can downregulate DNMT3B, which in turn inhibits methylation of the promoters of the ADHFE1 and ALDH1A2 genes, leading to higher ADHFE1 and ALDH1A2 expression and growth inhibition. Therefore, we conclude that miR-30a and miR-379 exert anti-proliferative activity through their effect on increasing ADHFE1 and ALDH1A2 expression. Thus, miR-30a and miR-379 could potentially be regarded as promising biomarkers of oral cancer development and progression. Although the detailed mechanisms of miR-30a and miR-379 downregulation in oral cancer remain unclear, tobacco smoking and betel quid chewing seem to play a role in regulating the expression of these miRNAs.

Fig. 6 Effects of long-term treatment with arecoline and NNK. a Expression levels of miRNAs in wild-type DOK cells and DOK cells treated long term with 50 μM arecoline (Are-L) or 10 μM NNK (NNK-L) for 12 months. The data are presented as mean ± SD; **p < 0.01; ***p < 0.001 versus wild-type DOK cells. b Western blot analysis of DNMT3B levels in wild-type DOK cells (Wt), Are-L and NNK-L DOK cells after treatment with 5-aza-dC (5 μM) for 5 days. α-Tubulin was used as an internal control. c ChIP assay of the ADHFE1 and ALDH1A2 promoter regions performed in Wt, Are-L and NNK-L DOK cells using anti-DNMT3B antibody after treatment with 5-aza-dC (5 μM) for 5 days. Mouse IgG (mIgG) antibody was used as a negative control. d qRT-PCR analysis of ADHFE1 and ALDH1A2 levels in Wt, Are-L and NNK-L DOK cells after treatment with 5-aza-dC (5 μM) for 5 days. GAPDH was used as an internal control.
Conclusion
In summary, the results of the current study demonstrate that tobacco smoking and betel quid chewing can repress miR-30a and miR-379, which upregulates DNMT3B expression, in turn leading to hypermethylation of the ADHFE1 and ALDH1A2 genes and, consequently, promoting oncogenic activity. Our findings implicate ADHFE1 and ALDH1A2 as tumor suppressor genes in oral cancer and provide a rationale for further investigation of retinoids in combination with epigenetic modifiers for the prevention or treatment of oral cancer.
Additional file 1: Table S1. List of primer sequences.
Additional file 2: Figure S1. Expression level of DNMTs in OSCC tissues. a Microarray analysis of DNMT1, DNMT3A and DNMT3B expression levels in OSCC tumors (n = 40) compared with their own adjacent normal tissues or compared with patients' stage. Expression levels are expressed as log2 ratios. b Correlation analysis of DNMT1, DNMT3A and DNMT3B with ADHFE1 or ALDH1A2 in human OSCC patients (n = 40). Each spot indicates the value of the tumor/normal ratio. Figure S2. Schematic representation of the putative miR-30a (a) and miR-379 (b) binding sequences in the 3′-UTR of DNMT3B in wild-type form (wt-3′-UTR) and mutant form (mt-30a-3′-UTR or mt-379-3′-UTR). The mutated nucleotides are labeled in black with underline.
"year": 2020,
"sha1": "ea095187a4a527987cb0986c58fc8571c5c1dca6",
"oa_license": "CCBY",
"oa_url": "https://jbiomedsci.biomedcentral.com/track/pdf/10.1186/s12929-020-00644-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ad58b3f499967c045b93b2df3741d99ea047bd63",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
252853166 | pes2o/s2orc | v3-fos-license | Numerical Solution of Reaction–Diffusion Equations with Convergence Analysis
In this manuscript, we implement a spectral collocation method to find the solution of the reaction–diffusion equation with given initial and boundary conditions. We approximate the solution of the equation by a two-dimensional interpolating polynomial based on the Legendre–Gauss–Lobatto collocation points. We show that the approximate solutions converge to the exact solution as the number of collocation points increases. We demonstrate the capability and efficiency of the method by providing four numerical examples and comparing them with other available methods.
Introduction
One important special case of partial differential equations (PDEs) is the reaction–diffusion equation (RDE), which has recently attracted the attention of many researchers [1,20,28,32,33]. RDEs are mathematical models that correspond to physical and chemical phenomena. Often they describe the change in space and time of the concentration of one or more chemical substances: chemical reactions in which the substances are converted into each other, and diffusion which causes the substances to spread out in space. RDEs are also applied in sciences such as biology [14], geology [15], ecology [20] and physics [23].
The general form of the RDEs considered here can be described as

∂u(t, x)/∂t = K ∂²u(t, x)/∂x² + g(t, x, u(t, x)), (t, x) ∈ [0, T] × [0, L],  (1)

and here we consider the following initial and boundary conditions

u(t, 0) = φ₁(t), u(t, L) = φ₂(t), u(0, x) = φ₃(x),  (2)

where K is the diffusion coefficient and φ₁ ∶ [0, T] → ℝ, φ₂ ∶ [0, T] → ℝ and φ₃ ∶ [0, L] → ℝ are given sufficiently smooth functions. The target of this manuscript is to present an effective numerical method for solving the RDE (1) with conditions (2) and to analyze the convergence of the method.
There are several methods for solving this class of PDEs, such as the traveling wave method [19], finite elements [6], fixed-node finite-difference schemes [7] and spectral methods [4]. Another method for solving RDEs was presented by Reitz [22], who applied several different methods with good numerical stability that can be used for multidimensional cases. Sharifi and Rashidian [24] applied an explicit finite difference scheme combined with an extended cubic B-spline collocation method for solving RDEs. Wang et al. [27] used the compact boundary value method (CBVM) for solving RDEs. Their method is a combination of the compact fourth-order differential method (CFODM) and the P-order boundary value method (POBVM); it is locally stable, has a unique solution, and achieves fourth-order accuracy in space and P-order accuracy in time. Wu et al. [29] applied the variational iteration method (VIM) to construct integral equations for solving RDEs. In this method, Lagrange multipliers and a discrete numerical integral formula are used to solve the RDE; the method was first proposed by He [11]. Biazar and Mehrlatifan [3] solved the RDE using the compact finite difference method. Diaz and Puri [8] applied the explicit positivity-preserving finite-difference method for solving RDEs. Lee et al. [17] investigated and found exact solutions of the derivative RD system and then showed some exact solutions of the derivative nonlinear Schrödinger equation (DNLS) via the Hirota bilinearization method. Gaeta and Mancinelli [9] analyzed the asymptotic scaling properties of anomalous RDEs; their numerical results showed well-defined scaling properties for large t. Another method for solving RDEs is the lifted local Galerkin method, presented by Xiao et al. [30]. Yi and Chen [31] introduced a new method based on repeated character mapping of traveling waves for solving RDEs. Toubaei et al. [26] presented one of the most widely applied forms of the RDE in chemistry and the biological sciences and solved it using collocation and finite difference methods. Koto [16] applied the implicit-explicit Runge–Kutta method to RDEs. Diaz [7] utilized a logarithmic numerical model; he considered the monotonicity, boundedness and positivity of the approximations and showed for the first time that the logarithmic schemes are stable and convergent. The nonclassical symmetries method was used by Hashemi and Nucci [10] to solve diffusion–reaction equations. An et al. [2] suggested a method that computes numerical approximations of both solutions and gradients, whereas the other methods compute only the numerical solutions; moreover, their method works element by element instead of solving the whole system, which can decrease the computational cost.
Despite the existence of the above-mentioned numerical methods, a convergent numerical method with a simple structure and high accuracy for solving RDEs is still required. Hence, we extend a spectral collocation method to estimate the solution of RDEs. Spectral methods are among the most powerful methods for solving ordinary and partial differential equations [5,25]. In this method, we apply a two-dimensional Lagrange interpolating polynomial to estimate the solution of the RDE. We apply the Legendre–Gauss–Lobatto (LGL) nodes as interpolating or collocation points and convert the RDE with its initial and boundary conditions into a system of algebraic equations. By solving this system, the coefficients of the interpolating polynomial can be obtained. We show that the approximate solutions converge to the exact solution when the number of collocation points tends to infinity. Note that spectral collocation methods have high accuracy and exponential convergence, and, up to now, many researchers have utilized them to solve different continuous-time problems involving ordinary and partial differential equations [12,13,18].
The paper is structured as follows: in Sect. 2, we implement the spectral collocation method for approximating the solution of the RDE. In Sect. 3, we study the convergence of the approximations to the exact solution of the RDE. In Sect. 4, four numerical examples are given to show the efficiency and accuracy of the method in comparison with other methods. Finally, conclusions and suggestions are presented in Sect. 5.
Approximating the Solution by Spectral Collocation Method
We approximate the solution of system (1)–(2) by a two-dimensional Lagrange interpolating polynomial u_N(·, ·) whose coefficients are the point values ũ_mn at the LGL collocation points (relation (3)). The time derivative and the second spatial derivative of u_N at the collocation points are expressed through the first- and second-order differentiation matrices D_mi and D^(2)_nj (relations (9)–(11)). By replacing relations (9), (10) and (11) in (1), we obtain an algebraic system (14) in which ũ_mn for m, n = 0, 1, …, N are the unknowns. By solving the algebraic system (14), we achieve the point-wise approximate solutions ũ_mn (m, n = 0, 1, …, N) and the continuous approximate solution u_N(·, ·) defined by (3).
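Because relations (3)–(13) are not reproduced above, the following sketch shows only the standard building blocks such a scheme rests on: the LGL collocation nodes and the first- and second-order differentiation matrices of the Lagrange interpolant through them. It is a minimal illustration based on standard formulas, not the authors' implementation, and the helper names are ours.

```python
import numpy as np
from numpy.polynomial import legendre as L

def lgl_nodes(N):
    """Legendre-Gauss-Lobatto nodes on [-1, 1]: the endpoints plus the
    roots of P_N'(x), where P_N is the Legendre polynomial of degree N."""
    c = np.zeros(N + 1)
    c[-1] = 1.0                      # coefficients of P_N in the Legendre basis
    interior = L.legroots(L.legder(c))   # the N-1 roots of P_N'
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

def diff_matrix(x):
    """First-order differentiation matrix of Lagrange interpolation at the
    nodes x, assembled from barycentric weights."""
    X = x[:, None] - x[None, :]
    np.fill_diagonal(X, 1.0)
    w = 1.0 / X.prod(axis=1)         # barycentric weights
    D = (w[None, :] / w[:, None]) / X
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))   # each row of D sums to zero
    return D

N = 16
x = lgl_nodes(N)
D = diff_matrix(x)
D2 = D @ D                            # second-derivative matrix

# sanity check: spectral differentiation of f(x) = sin(pi x) is very accurate
f = np.sin(np.pi * x)
print(np.max(np.abs(D @ f - np.pi * np.cos(np.pi * x))))      # tiny (spectral accuracy)
print(np.max(np.abs(D2 @ f + np.pi**2 * np.sin(np.pi * x))))  # small as well
```

In a full solver, the same construction would be applied in both the time and space directions (after mapping [-1, 1] to [0, T] and [0, L]), and the resulting algebraic system in the unknown nodal values would be passed to a nonlinear solver, as the paper does with MATLAB's FSOLVE.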
Convergence Analysis
In this section we analyze the convergence of the proposed method. We set Λ = [0, T] × [0, L] and let C^k(Λ) denote the set of all k-times continuously differentiable functions on Λ. To check the convergence of the method, we begin with the following definition.
Definition 3.1
The continuous function F ∶ ℝ⁺ → ℝ⁺ with the following properties is called a modulus of continuity [21]: 1. F is increasing. A special case of a modulus of continuity is given below. Here we consider O² to be the unit circle in ℝ². The continuous function f on Λ accepts F(·) as a modulus of continuity when the following quantity is finite, where the set of first continuously differentiable functions on the unit circle O² is equipped with the following norm. According to the above, if for some maps the required conditions hold, we define P(N, N, Λ), the space of all polynomials, as follows. Proof The proof is obtained from Theorem 2.1 in Ragozin [21]. ◻ Related to the existence of a solution, we convert system (14) into the following system, where N is large enough and F(·) is a function which satisfies Definition 3.1, since (21) has a solution for system (14) as N → ∞. We now define the quantities needed below. In the following, we show that system (21) is feasible; that is, we prove that ũ_N = (ũ_mn ; m, n = 0, 1, …, N) satisfies system (21), by (24), (25) and (26). Here we give the convergence theorem for the solutions. Assume that ũ_N satisfies (21) and that {u_N(·, ·)}_{N=N̄}^{∞} is the sequence of polynomials defined in (3). We assume that for any x ∈ [0, L], the sequence {(u_N(0, x), u_N(·, ·))}_{N=N̄}^{∞} has a subsequence converging uniformly to a limit (·, p(·, ·)), where p(·, ·) ∈ C²(Λ), the limit function on [0, L] belongs to C²([0, L]), and lim_{i→∞} N_i = ∞. Then ũ(·, ·) satisfies the system (1)–(2).
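Since the itemized properties and formulas of Definition 3.1 did not survive extraction, the LaTeX block below records one standard formulation of a modulus of continuity that is consistent with the surviving fragment; the paper's exact statement may differ, so this should be read as an assumed reference point rather than a reproduction.

```latex
% Standard definition (assumed, for orientation only):
% F : \mathbb{R}^{+} \to \mathbb{R}^{+} is called a modulus of continuity if
\begin{enumerate}
  \item $F$ is increasing,
  \item $\lim_{\delta \to 0^{+}} F(\delta) = 0$,
  \item $F(\delta_1 + \delta_2) \le F(\delta_1) + F(\delta_2)$ for all $\delta_1, \delta_2 \ge 0$.
\end{enumerate}
% A typical special case is the H\"older-type modulus
% $F(\delta) = C\,\delta^{\alpha}$ with $0 < \alpha \le 1$, and a function $f$
% admits $F$ as a modulus of continuity when
% $\sup_{x \neq y} \lvert f(x) - f(y) \rvert \,/\, F(\lvert x - y \rvert) < \infty$.
```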
Hence, by (35) we get
On the other hand, taking the limit as i → ∞ in (21), we obtain an expression that contradicts relation (36). So ũ(·, ·) satisfies Eq. (1). Moreover, it is easy to show that ũ(·, ·) satisfies the initial and boundary conditions (2), and this completes the proof. ◻
Examples
In this section, we provide four examples to illustrate the efficiency of the method in solving RDEs. The first example is constructed by the authors to test the method. The next three examples compare the suggested method with other available methods. We solve the corresponding system (14) using the FSOLVE command in MATLAB. The absolute error of the obtained approximate solution u_N(·, ·) is defined by E_N(t, x) = |u(t, x) − u_N(t, x)|. We also calculate the L² and L^∞ errors of the approximations, denoted E_N^2 and E_N^∞, from the pointwise errors at the collocation points.

Example 4.1 Consider the RDE (1)–(2) with g(t, x, u) = u + e^t sin x, K = 1 and the conditions u(t, 0) = 0, u(t, 1) = e^t sin 1, u(0, x) = sin x. The exact solution is u(t, x) = e^t sin x, (t, x) ∈ [0, 1]². We solve this equation for N = 10 using the suggested method. Figure 1 shows the obtained approximate solution and its absolute error. Also, Fig. 2 illustrates that by increasing N, the L² and L^∞ errors decrease. This shows that the presented method has good accuracy and stable behaviour.

Example 4.2 Consider the RDE (1)–(2) with K = 1 and the following conditions: u(t, 0) = φ₁(t) = e^t sin t, u(t, 1) = φ₂(t) = e^(t−1) sin(t + 1), u(0, x) = φ₃(x) = e^(−x) sin x. The exact solution for this example is u(t, x) = e^(t−x) sin(t + x), (t, x) ∈ [0, 1]². We illustrate the obtained approximate solution and its absolute error for N = 10 in Fig. 3. The E_N^2 and E_N^∞ errors are presented in Fig. 4. It can be seen that these errors decrease as N increases and that our method is stable. We also compare the presented method with the IMEX Runge–Kutta method [16] in Table 1. These results show that the E_N^2 error of the suggested method is smaller than that of the method of [16].

Example 4.3 Consider the RDE (1)–(2) whose exact solution is u(t, x) = (1 + e^(x−5t))^(−2), (t, x) ∈ [0, 1]². We solve this equation for N = 20 using our approach. Figure 5 shows the obtained approximate solution and its absolute error. Also, Fig. 6 illustrates that by increasing N, the E_N^2 and E_N^∞ errors decrease and the presented method has good accuracy. We then compare with the VIM method [29] in Table 2. (Table 2 comparison with the VIM method [29]: our approach, N = 20: −8.5, −18; VIM method [29], N = 40: —, −3.5.)

Example 4.4 The exact solution is u(t, x) = e^((−0.5−0.1π²)t) sin(πx), (t, x) ∈ [0, 1]². We illustrate the obtained results, for N = 9, in Fig. 7. The E_N^2 and E_N^∞ errors, for N = 9, are presented in Fig. 8. It can be seen that the errors decrease when N increases. We also give the absolute errors of the suggested method, the compact finite difference method [3], the explicit finite difference method [8] and the collocation method [24] in Table 3. The results show that the error of the suggested method is smaller than that of the others.
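The error formulas themselves did not survive extraction, so the sketch below computes discrete L² and L^∞ errors in an assumed but conventional way (a grid root-mean-square and a maximum of the pointwise absolute error over the nodes); it is illustrative and does not reproduce the authors' MATLAB code.

```python
import numpy as np

def error_norms(u_approx, u_exact):
    """Discrete L2 and L-infinity errors of an approximate solution evaluated
    on a tensor grid of nodes (assumed, conventional definitions)."""
    E = np.abs(u_approx - u_exact)      # pointwise absolute error E_N(t_i, x_j)
    e_inf = E.max()                      # L-infinity error
    e_l2 = np.sqrt((E ** 2).mean())      # grid root-mean-square as the L2 error
    return e_l2, e_inf

# usage with Example 4.1, whose exact solution is u(t, x) = e^t sin(x) on [0, 1]^2
t = np.linspace(0.0, 1.0, 11)
x = np.linspace(0.0, 1.0, 11)
T, X = np.meshgrid(t, x, indexing="ij")
u_exact = np.exp(T) * np.sin(X)
# stand-in for a computed solution: the exact values plus a small perturbation
u_approx = u_exact + 1e-8 * np.random.default_rng(0).standard_normal(u_exact.shape)
print(error_norms(u_approx, u_exact))    # both errors are of order 1e-8
```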
Conclusions and Suggestions
In this paper we showed that the spectral collocation method can be utilized to find a solution of the RDE with a simple structure. We analyzed the convergence of the approximate solutions to the exact solution by utilizing the theory of the modulus of continuity and a normed space of polynomials. We presented two main theorems, related to the feasibility of the obtained approximate solutions and to their convergence. We solved several numerical examples and illustrated the capability of the presented method. For future work, we will utilize this powerful method and its convergence results for other types of PDEs involving delay and fractional derivatives.
Author Contributions MH carried out the research, study, methodology and writing. MG contributed to the methodology and acted as supervisor. MHNS participated in the MATLAB programming and methodology. DB participated in the validity confirmation and acted as advisor.

Table 3 Comparison of the maximum of E_N(·, ·) for Example 4.4 at t = 0.8: our method, N = 9: 3.4708 × 10⁻¹⁰; collocation method [24], N = 9: 1.25216 × 10⁻⁶; method [8]: 3.1835 × 10⁻²; method [3], N = 9: 1.09 × 10⁻⁴.

Funding There are no funders to report for this submission.
Availability of Data and Materials
There are no data or materials outside the article.
Declarations
Conflict of interest The authors declare that they have no competing interests.
Consent to participate Not applicable.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2022,
"sha1": "d2d80776e7a937817babf6c6ac2f2271186b678d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s44198-022-00086-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "3fb620bbde02e0b6438f6d0b2406c6e7bb4cbd49",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
41697235 | pes2o/s2orc | v3-fos-license | The Effects of Glutaredoxin and Copper Activation Pathways on the Disulfide and Stability of Cu,Zn Superoxide Dismutase*
Mutations in Cu,Zn superoxide dismutase (SOD1) can cause amyotrophic lateral sclerosis (ALS) through mechanisms proposed to involve SOD1 misfolding, but the intracellular factors that modulate folding and stability of SOD1 are largely unknown. By using yeast and mammalian expression systems, we demonstrate here that SOD1 stability is governed by post-translational modification factors that target the SOD1 disulfide. Oxidation of the human SOD1 disulfide in vivo was found to involve both the copper chaperone for SOD1 (CCS) and the CCS-independent pathway for copper activation. When both copper pathways were blocked, wild type SOD1 stably accumulated in yeast cells with a reduced disulfide, whereas ALS SOD1 mutants A4V, G93A, and G37R were degraded. We describe here an unprecedented role for the thiol oxidoreductase glutaredoxin in reducing the SOD1 disulfide and destabilizing ALS mutants. Specifically, the major cytosolic glutaredoxin of yeast was seen to reduce the intramolecular disulfide of ALS SOD1 mutant A4V SOD1 in vivo and in vitro. By comparison, glutaredoxin was less reactive toward the disulfide of wild type SOD1. The apo-form of A4V SOD1 was highly reactive with glutaredoxin but not SOD1 containing both copper and zinc. Glutaredoxin therefore preferentially targets the immature form of ALS mutant SOD1 lacking metal co-factors. Overall, these studies implicate a critical balance between cellular reductants such as glutaredoxin and copper activation pathways in controlling the disulfide and stability of SOD1 in vivo.
The copper-and zinc-containing superoxide dismutase (SOD1) 6 (1) protects eukaryotic cells against oxidative stress by scavenging toxic superoxide anions. Enzyme catalysis is carried out by the copper co-factor, and in cells, this copper ion is delivered to SOD1 through regimented metal trafficking pathways. One pathway involves the CCS copper chaperone (2) that physically docks with SOD1 (3,4) and transfers its copper cargo in an oxygen-dependent fashion (5). Metazoan SOD1 can also acquire copper by a second "CCS-independent pathway" (6,7). Although the precise copper donor in this case is not understood, the CCS-independent pathway shows a dependence on glutathione and is particularly sensitive to certain perturbations in the SOD1 structure (6,7). Specifically, prolines at SOD1 positions 142 and 144 (based on Saccharomyces cerevisiae and human SOD1) will block activation by the CCS-independent pathway but not by CCS. S. cerevisiae SOD1 naturally contains these prolines and shows total dependence on CCS (2). SOD1 molecules from higher organisms generally lack prolines 142 and 144 and can be activated independent of CCS (6,7). How CCS-independent activation is blocked by prolines 142 and 144 is uncertain, but these residues have been proposed to disrupt the monomer-dimer equilibrium of apoSOD1 (8).
Although SOD1 is normally a protective enzyme, dominant mutations throughout the SOD1 polypeptide have been linked to the fatal motor neuron disease amyotrophic lateral sclerosis (ALS). The underlying mechanism is incompletely understood, but a well-accepted model involves misfolding of SOD1 mutants and the consequent accumulation of toxic SOD1 aggregates (9-11). The misfolding and instability of SOD1 mutants are curious in that SOD1 is normally a highly stable enzyme (12). Studies with purified enzyme indicate that the bound metal co-factors as well as an intramolecular disulfide in SOD1 help stabilize the structure of the SOD1 homodimer, and this is true for both wild type (WT) and ALS mutant polypeptides (13-18). During disease, loss of the intramolecular disulfide correlates with misfolding of mutant SOD1 (19), and formation of improper intermolecular disulfides helps aggregate the protein (20-22). As such, the cellular factors that promote faithful oxidation of the correct intrasubunit disulfide in SOD1 should promote SOD1 stability.
Very little is known regarding the cellular factors that impact on the disulfide of human SOD1. O'Halloran and co-workers (23) found that in the case of yeast SOD1, Cu-CCS promotes oxidation of the disulfide. The role of the CCS-independent pathway in oxidizing the disulfide of metazoan SOD1 has not been addressed previously. Without copper activation, the disulfide cysteines may be reduced, as was shown for yeast SOD1 in vivo (23); however, the thiol reductants that promote this process are unknown. The low redox potential of the cytosol should favor cysteine reduction. Additionally, the thiol oxidoreductases thioredoxin and glutaredoxin (GRX) are known to target a limited number of cysteines in polypeptides (24,25). GRX can resolve mixed disulfides between GSH and a polypeptide cysteine (an S-thionylated polypeptide) (24,26), and can also directly reduce intramolecular disulfides in proteins without an S-thionylated intermediate. Such reduction of intramolecular disulfides has been described for Escherichia coli Grx1 (24,27,28). But to date, no intramolecular disulfide target has been described for the GRXs of eukaryotes. Could SOD1 represent such a target?
Here we describe cellular factors that control the status of the SOD1 disulfide in vivo. By using a yeast expression system, we find that in addition to CCS, the CCS-independent pathway for copper activation helps oxidize the disulfide of human SOD1. When both pathways are blocked, the SOD1 disulfide cysteines are reduced, and in the case of ALS SOD1 mutants A4V, G93A, and G37R, the protein is highly unstable and is subject to degradation. We also demonstrate for the first time that cytosolic glutaredoxins (GRX) can reduce the SOD1 disulfide cysteines and thereby affect stability of certain ALS SOD1 mutants.
For most biochemical analyses, 50 ml of yeast cells were propagated overnight at 30°C in either minimal synthetic dextrose (SD) selecting media (starting A 600 ϭ 0.15) or in enriched YPD (yeast extract, peptone, dextrose) medium (starting A 600 ϭ 0.05). In studies with methionine repression, overnight cultures were diluted in a volume of 200 ml to an ϷA 600 ϭ 0.6 and allowed to grow for an additional 2 h to early log phase. 1 mM methionine was then added, and aliquots of 50 ml were harvested at various time points for cell lysis.
For expression of Grx2p in yeast, the S. cerevisiae GRX2 gene was amplified with primers that introduced BamHI and SalI sites at Ϫ962 and ϩ987 and inserted at these same sites in either pRS413 (HIS3 CEN), creating pCO147, or pRS414 (TRP1 CEN), creating plasmid pVCO147. For expression in E. coli, GRX2 was amplified without the mitochondrial targeting signal (residues 1-34) (32) from pCO147 using primers that introduced an NdeI site at Met-35 and a BamHI site 15 bp after the stop codon. The fragment was inserted at the NdeI and BamHI sites of pET21a (Novagen), creating pCO150.
Purification of Recombinant Yeast Grx2p and Human SOD1-For production of recombinant Grx2p, E. coli strain BL21(DE3) (Novagen) transformed with plasmid pCO150 was grown in 5 liters of LB media and induced for 2.5 h with 1 mM isopropyl -D-thiogalactopyranoside when the A 600 reached 0.7. Harvested cells were stored at Ϫ80°C. A frozen cell paste equivalent to ϳ800 ml of cell culture was lysed by freezingthawing and resuspended in 50 ml of 50 mM Tris-HCl, pH 8.0, 5 mM DTT. Cell debris was removed by centrifugation, and proteins were precipitated with 65-90% (NH 4 ) 2 SO 4 followed by resuspension in 20 mM Tris-HCl, pH 8.0, 25 mM NaCl, 5 mM DTT to a final volume of ϳ3 ml. The protein solution was desalted using a Hi-Trap TM desalting column (GE Healthcare) and then loaded onto a Hi-Trap TM Q-Sepharose XL column (GE Healthcare) equilibrated with 20 mM Tris-HCl, pH 8.0, 5 mM DTT. Recombinant Grx2p was eluted with 0 -50 mM NaCl and concentrated to 37 mg/ml (3.1 mM). Protein purity was demonstrated by SDS-PAGE and Coomassie staining (supplemental Fig. S3A). Protein was stored at Ϫ80°C with 5% glycerol added to the buffer. This recombinant Grx2p was found to efficiently catalyze the reduction of 2-hydroxyethyl disulfide (HED) by GSH with activity (ϳ300 units/mg protein) comparable with E. coli Grx1 (33).
Recombinant human WT and ALS mutant SOD1 proteins were obtained from an S. cerevisiae expression system as described previously (18). Purity of these proteins was demonstrated by a single band on SDS-PAGE and are of the correct mass as demonstrated by electrospray ionization mass spectrometry using a Sciex API III triple quadrupole mass spectrometer (PerkinElmer Life Sciences). Highly purified SOD1 proteins purified in this manner have been used in single crystal x-ray diffraction analyses (15,17), differential scanning calorimetry analyses (34), and in vitro proteasomal digestion assays (35). The latter study further verified the purity of wild type and pathogenic SOD1 proteins isolated in this fashion by using reverse phase high pressure liquid chromatography immediately prior to mass spectrometric analysis. By inductively coupled plasma mass spectrometry, WT and A4V SOD1 contained equal amounts of copper and zinc in the copper-binding site and only zinc in the zinc-binding site (36). Metal-free SOD1 proteins were generated by dialysis of this material at low pH in the presence of EDTA as described (18,37), resulting in apoSOD1 containing Ͻ0.05 eq of both copper and zinc per dimer (18,38), and an oxidized disulfide (as determined by AMS analysis; see Fig. 6) (35).
Biochemical Assays-4-Acetamido-4Ј-maleimidylstilbene-2,2Ј-disulfonic acid (AMS) was used to monitor the disulfide status of SOD1. Yeast cells from a 50-ml culture were washed twice in deionized water, and a 100-l cell pellet was resuspended in 200 l of a GdnHCl buffer (6 M guanidine-HCl, 3 mM EDTA, 0.5% Triton X-100, 50 mM Tris-HCl, pH 8.3) that contained, as needed, 15 mM AMS (Molecular Probes). 100 l of glass beads (425-600 m; Sigma) were added and cells lysed by three cycles of vortexing at room temperature for 2 min, interspersed by 1-min incubations on ice. Extracts were clarified by centrifugation at 10,000 ϫ g for 5 min, and the supernatant was incubated at 37°C for 1 h in the dark. A 50-l aliquot was then applied to a MicroSpin G-25 gel filtration column (Amersham Biosciences), and 32 l of flow-through was prepared for SDS-PAGE by incubating with SDS-DTT loading buffer at room temperature for 7 min, followed by quick clarification by centrifugation.
For AMS modification of SOD1 from fibroblasts, cells were plated at a density of 6.67 ϫ 10 5 cells per 100-mm tissue culture dish and cultured for 24 h at 37°C. Media were aspirated, and cells were washed once in phosphate-buffered saline, followed by cell lysis through addition of 200 l of GdnHCl buffer (see above) containing, as needed, 15 mM AMS. The AMS reaction and subsequent gel filtration proceeded as above. 20 l of column flow-through was boiled in SDS-DTT gel loading buffer and clarified by centrifugation prior to analysis by SDS-PAGE on 14% precast gels (Invitrogen).
Immunoblotting with fibroblasts used an antibody (1:1000 dilution) that only recognizes human SOD1 (6), although detection of SOD1 from yeast lysates generally employed a peptide-derived antibody (39) that recognizes both human and mouse SOD1. Cross-reactivity with nonspecific yeast products of Ϸ39 and Ϸ69 kDa was occasionally observed with early preparations of the antibody (e.g. see supplemental Fig S1A), but not with later preparations of higher titer (e.g. see Fig. 5A). Standard immunoblots (no AMS) and native gels for SOD1 activity used 30 -50 g of yeast cell lysate protein. In nonreducing gels (as in Fig. 5A), the gel was pre-soaked in tris(2-carboxyethyl) phos-phine (TCEP) according to published methods (21) prior to electroblotting. SOD1 activity was monitored by native gel electrophoresis on 12% precast gels and by nitro blue tetrazolium staining as described (40,41).
GRX activity of yeast cell lysates was monitored by the HED assay (33). Yeast cells grown to confluency in selecting media were subjected to glass bead lysis. Cell lysates were heated at 85 °C to inactivate glutathione reductase and thioredoxin reductase (42). 15 μg of cell lysate protein was added to a 1-ml quartz cuvette containing a 300-μl HED reaction mixture (100 mM Tris-HCl, pH 8.0, 2.0 mM EDTA, 1 mM GSH, 0.4 mM NADPH, 6 μg/ml glutathione reductase, and 0.7 mM HED). GRX reduction of the HED substrate was measured by continuous monitoring of NADPH consumption at 340 nm over 2 min.
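As a worked illustration of the activity units used in this assay (nanomoles of NADPH consumed per min per mg of lysate protein), the following sketch converts a measured rate of A340 decrease into specific activity. The 1-cm path length and the NADPH extinction coefficient of 6,220 M⁻¹ cm⁻¹ at 340 nm are standard assumptions, not values taken from this paper.

```python
# Illustrative conversion (not from the paper): turning a measured rate of
# A340 decrease into GRX specific activity, assuming a 1-cm path length and
# the standard NADPH extinction coefficient of 6,220 M^-1 cm^-1 at 340 nm.
EPSILON_NADPH_340 = 6220.0   # M^-1 cm^-1 (assumed standard value)
PATH_LENGTH_CM = 1.0         # assumed cuvette path length

def grx_specific_activity(delta_a340_per_min, assay_volume_ml, lysate_protein_mg):
    """Return nmol NADPH consumed per min per mg of lysate protein."""
    # concentration change per minute (mol/L/min) from Beer-Lambert
    dc_per_min = delta_a340_per_min / (EPSILON_NADPH_340 * PATH_LENGTH_CM)
    # moles per minute in the assay volume, converted to nanomoles
    nmol_per_min = dc_per_min * (assay_volume_ml / 1000.0) * 1e9
    return nmol_per_min / lysate_protein_mg

# e.g. a 0.020/min decrease in A340 in a 0.3-ml assay containing 15 ug protein
print(grx_specific_activity(0.020, 0.3, 0.015))   # ~64 nmol/min/mg
```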
An in vitro assay for reduction of the human SOD1 disulfide by recombinant Grx2p was carried out in a 300-l reaction containing the aforementioned GRX assay constituents with 0.2 mM NADPH rather than 0.4 mM NADPH and no HED. 2-3 M purified human SOD1 with an oxidized disulfide was added as substrate, and following the addition of recombinant Grx2p to a concentration of 0.05 M, the reaction was incubated at 30°C. At specific time points, 10-l aliquots were mixed with 60 l of GdnHCl buffer (see above) containing 15 mM AMS. Following incubation for 1 h at 37°C, samples (Ϸ100 ng of SOD1) were subject to gel filtration and analysis of SOD1 by SDS-PAGE and immunoblot. It is noteworthy that in certain preparations of recombinant SOD1, thiol modification at Cys-111 was incomplete. For example, human WT SOD1 (purchased from Sigma) showed no AMS reactivity at Cys-111 unless pretreated with reducing agents. Apparent oxidation at Cys-111 was also observed upon storage (Ͼ2 weeks at 4°C or Ͼ6 months Ϫ70°C) of apo but not metallated A4V and G93A human SOD1. Similar problems with Cys-111 have been reported elsewhere (35,43). As such, it is important to carry out analysis of the disulfide with freshly prepared samples of purified apoSOD1.
The Disulfide in Human SOD1 Is Oxidized by Both the CCS-dependent and -independent Pathways for Copper Loading—
We sought to understand how copper trafficking pathways affect the disulfide of human SOD1. Disulfide status was probed by AMS, which forms stable thioether linkages with free polypeptide cysteines, but is nonreactive toward cysteines bridged in a disulfide. Unlike S. cerevisiae SOD1 and the Cu,Zn-SOD of Caenorhabditis elegans that contain only two cysteines (7), human SOD1 contains four cysteines as follows: the disulfide Cys-57 and Cys-146 cysteines, and Cys-6 and Cys-111. To discern between these, we introduced single and combined mutations at these cysteines. The corresponding SOD1 variants were expressed in yeast, and lysates prepared in 6 N guanidinium were treated with AMS and analyzed by immunoblot. The goal of such a "redox Western" is not to determine total protein levels but to compare the level of oxidized versus reduced cysteines within a particular sample. Typically, the 15.9-kDa human SOD1 migrates on SDS gels at the position of Ϸ21.5 kDa (Fig. 1A, lane 1) (44). When WT SOD1 from CCS1ϩ yeast is reacted with AMS, a major band corresponding to a shift in mobility of Ϸ5.0 kDa is observed (Fig. 1, thin arrow, lane 2). This reflects AMS modification at the nondisulfide Cys-6 and Cys-111 cysteines, because no mobility shift is observed with C6S/C111S SOD1 expressed in CCS1ϩ yeast (Fig. 1, lane 5). The lack of AMS reactivity at Cys-57 and Cys-146 demonstrates that the disulfide is oxidized in CCS1ϩ yeast cells, as would be expected for active SOD1. It is noteworthy that AMS modification at Cys-6 and Cys-111 produces a larger shift in mobility than the 1.07 kDa expected for two AMS moieties. This is due to AMS modification at Cys-111, because C6S SOD1, but not C111S, also produces an aberrantly large shift in mobility (Fig. 1B, also see supplemental Fig. S1B and Tables S1 and S2). Certain small modifications in SOD1 composition can effect anomalous mobility on SDS gels, such as the C57S mutation (Fig. 1A, lane 11) and several other substitutions reported for SOD1 (31,39,41,44). AMS modification at Cys-111 likewise results in anomalous mobility on SDS-PAGE. A more detailed description of the effects of AMS modification at each of the four cysteines is presented in supplemental Tables S1 and S2.
When human SOD1 is expressed in ccs1⌬ null yeast cells lacking the CCS1 gene, a second AMS product bearing increased mobility shift appears (Fig. 1A, heavy arrow, lane 3). This corresponds to additional AMS reactivity at the disulfide Cys-57 and/or Cys-146 because the same shift is seen with C6S/C111S SOD1 expressed in ccs1⌬ cells (lane 6), but not with C146S and C57S SOD1 affecting the disulfide (Fig. 1A, lanes 9 and 13). AMS reactivity at Cys-57 and/or Cys-146 indicates that the disulfide cysteines have been reduced in cells lacking CCS.
We also monitored the disulfide of human SOD1 expressed in mammalian cells. These studies employed skin fibroblasts derived from CCS ϩ/ϩ and CCS Ϫ/Ϫ null homozygous mice that are transgenic for human WT SOD1 or the ALS SOD1 mutant G37R (6,45). When expressed in CCS ϩ/ϩ fibroblasts, human WT SOD1 exhibits the identical AMS reactivity pattern seen with disulfide-oxidized SOD1 from CCS1ϩ yeast (Fig. 1C, compare lanes 1 and 3). In CCS Ϫ/Ϫ fibroblasts, two products are observed, representative of both oxidized and reduced states of the disulfide cysteines (Fig. 1C, lane 6). With the G37R ALS mutant SOD1, there is evidence for disulfide reduction even in CCS ϩ/ϩ fibroblasts (Fig. 1C, lane 8), consistent with the notion that ALS mutants are more susceptible to disulfide reduction (46). As has been shown for the endogenous SOD1 of yeast (23), CCS helps oxidize the disulfide of human SOD1 expressed in yeast and mammalian cells.
We tested whether the CCS-independent pathway affects the human SOD1 disulfide (6). This auxiliary pathway for activating SOD1 is blocked by introducing prolines in SOD1 at positions corresponding to amino acids 142 and 144 (6,7). Human SOD1 contains Ser and Leu at these positions, and an S142P/ L144P mutant of human SOD1 is only activated by CCS (6). When S142P/L144P human SOD1 is expressed in CCS1ϩ yeast cells, the enzyme is active (6), and the disulfide is oxidized (Fig. 2A, lane 3). However, when expressed in ccs1⌬ yeast cells, S142P/L144P human SOD1 cannot obtain copper by either pathway; the enzyme is inactive (6), and the disulfide cysteines are completely reduced (Fig. 2A, lane 4). Hence, both CCS-dependent and -independent pathways for activating SOD1 contribute to disulfide oxidation with human SOD1.
The Disulfide and Stability of ALS Mutant SOD1—We explored how changes in copper loading in vivo affect ALS mutants of SOD1. Human SOD1 mutants G37R, G93A, and A4V were expressed in yeast under conditions where copper activation by CCS and/or the CCS-independent pathway was blocked. CCS was inhibited by expression in a ccs1Δ null yeast strain, and CCS-independent activation was blocked by introducing prolines 142 and 144 into SOD1. As seen in Fig. 2B, lane 4, loss of the CCS-independent pathway alone through an S142P/L144P substitution resulted in some lowering of the steady state level of the ALS mutants, particularly A4V (also see supplemental Fig. S2). Moreover, when the S142P/L144P variants of G37R, G93A, and A4V were expressed in ccs1Δ cells, no polypeptide could be recovered (Fig. 2B, lane 3). By comparison, the S142P/L144P variant of WT SOD1 stably accumulated in ccs1Δ cells (Fig. 2B, lane 3, top panel, also see supplemental Fig. S2). The ALS mutants, but not WT SOD1, appeared highly unstable and degraded when both the CCS-dependent and -independent pathways for copper loading and disulfide oxidation were eliminated.

(Fig. 1 legend, continued: lanes 10 and 11) as has been reported for other SOD1 mutants (31,39,41,44). B and C, lane 1, human SOD1 expressed in the CCS1+ strain KS107. C, lanes 2-9, analysis of immortalized fibroblasts from CCS+/+ and CCS−/− mice that were transgenic for either WT or G37R human SOD1 as described (6,45). Y indicates CCS1+ yeast expressing WT human SOD1; M indicates fibroblasts from CCS+/+ mice expressing WT human SOD1. It is noteworthy that with long exposures, disulfide-oxidized SOD1 often runs as a doublet, with a minor band ≈1.5 kDa smaller than the major disulfide-oxidized product (e.g. A, lane 2). Because this doublet is observed with WT, C146S, C57S, and C6S, but not with C111S SOD1 (also see supplemental Fig. S1B), AMS modification at Cys-111 might effect both ≈4.5 kDa (major) and ≈3.0 kDa (minor) shifts in apparent molecular mass. However, we cannot exclude other SOD1 modifications, such as oxidative products and retention of some metal binding during electrophoresis, as has been reported previously to cause multiple SOD1 isoforms during electrophoresis (23,56).)
Loss of CCS alone can also affect the stability of certain ALS mutants expressed in yeast. For example, G41D accumulates to very low steady state levels in ccs1⌬ yeast (Fig. 2C). A4V SOD1 also appears somewhat unstable in ccs1⌬ cells, but to a lesser degree ( Fig. 2C and also see Figs. 4C and 5B). It is noteworthy that this same pattern of instability has been observed in mammalian expressions systems; G41D and A4V are more prone to degradation and aggregation than other ALS mutants (39,47). It is important to note that unlike mammalian expression systems where SOD1 misfolding can lead to both degradation and aggregation of the polypeptide, protein degradation is the primary end point of SOD1 misfolding in yeast. There is no evidence of SOD1 aggregation in yeast expression systems (see "Discussion").
To monitor turnover of disulfide-reduced versus disulfide-oxidized polypeptides, SOD1 synthesis in yeast cells was controlled by the methionine-repressible MET25 promoter. In this manner, A4V SOD1 is actively synthesized in yeast cells not treated with methionine but is repressed upon addition of methionine to the growth medium, allowing us to monitor loss of the SOD1 polypeptides over time. In the absence of methionine, a good fraction of A4V SOD1 expressed in ccs1Δ cells is seen in the disulfide-reduced state, and the ratio of reduced to oxidized disulfide remains relatively constant over 3 h (Fig. 3A, lanes 1 and 2; also see supplemental Fig. S2). But when A4V expression was repressed by methionine supplements, the disulfide-reduced form of A4V SOD1 was lost, whereas the disulfide-oxidized form was more stable (Fig. 3A, lanes 4-6; also see supplemental Fig. S2). This result is consistent with in vitro studies with recombinant SOD1 showing that disulfide-reduced SOD1 is more prone to degradation by the proteasome than disulfide-oxidized SOD1 (35).
We similarly examined stability of WT human SOD1. In the experiment of Fig. 3B, the disulfide of WT SOD1 at steady state is more oxidized than that of A4V SOD1 examined in parallel (compare t ϭ 0 samples for A4V and WT SOD1). Following 3 h of methionine repression, both the reduced and oxidized pools of WT SOD1 were retained, compared with A4V SOD1 that exhibited instability particularly with the disulfide-reduced fraction (Fig. 3B, also see supplemental Fig. S2). The disulfidereduced form of WT SOD1 is not subject to the same dramatic turnover as the ALS mutant.
The Role of Cytosolic Glutaredoxins in Reducing the SOD1 Disulfide and Destabilizing ALS Mutant Polypeptides—In the absence of copper activation, what cellular factors favor reduction of the SOD1 disulfide cysteines? We tested the possible role of GRX. S. cerevisiae expresses two GRXs in the cytosol, namely Grx1p and Grx2p. Double grx1 grx2 null mutations do not alter cellular redox or GSH/GSSG ratios (38). We tested how loss of GRX affects the disulfide of SOD1 expressed in ccs1Δ null yeast strains, where the SOD1 is normally a mixed population of disulfide-reduced and -oxidized forms (Fig. 4A, lanes 2 and 5). Loss of GRX through grx1Δ grx2Δ null mutations shifted the disulfide of SOD1 toward the oxidized state, and the effects were particularly pronounced with A4V SOD1 (Fig. 4A, lane 6).

FIGURE 2. The effects of copper activation on the disulfide and steady state levels of human SOD1 variants expressed in yeast. Yeast strains expressing WT human SOD1 or the indicated mutant variants were analyzed as follows: A, for AMS modification of SOD1 cysteines as in Fig. 1; B and C, for total human SOD1 protein by immunoblot. A, strains as described in Fig. 1A expressed either WT or S142P/L144P human SOD1 under control of the S. cerevisiae PGK1 promoter. Heavy and light arrows indicate disulfide-reduced and disulfide-oxidized SOD1. B, the ccs1Δ sod1Δ strain LS101 was transformed where indicated (CCS+) with the human CCS expressing plasmid pPS015 and also with the indicated variants of human SOD1 under control of the S. cerevisiae SOD1 promoter. In lanes 3 and 4, the S142P/L144P substitution was introduced in WT, G37R, G93A, and A4V human SOD1. Quantification of the levels of SOD1 can be found in supplemental Fig. S2. C, human WT, G41D, and A4V SOD1 under control of the PGK1 promoter were expressed in strains described in Fig. 1A.
Loss of yeast GRXs not only affected the SOD1 disulfide but also the stability of ALS mutants. In Fig. 4B, A4V SOD1 stability was monitored through methionine repression. The disulfide-oxidized form of SOD1 that accumulates in grx1Δ grx2Δ ccs1Δ strains is quite stable over 3 h of methionine repression (Fig. 4B, lanes 3 and 4). As such, the steady state levels of total A4V SOD1 increase, as do the levels of G41D SOD1 expressed in grx1Δ grx2Δ ccs1Δ yeast (Fig. 4C, lanes 2 and 3).
Of the two cytosolic GRXs in yeast, Grx2p is the predominant form (38). Single grx2Δ mutations were sufficient to increase steady state levels of G41D SOD1 (Fig. 4C, lane 5), whereas grx1Δ mutations were not (not shown). To examine the effects of Grx2p further, grx1Δ grx2Δ ccs1Δ cells were transformed with a low copy plasmid expressing GRX2 under its native promoter. Plasmid-borne Grx2p was indeed enzymatically active, as monitored by the standard in vitro assay for GRX activity using HED as substrate (33) (Fig. 4D, top). Expression of Grx2p in grx1Δ grx2Δ ccs1Δ yeast also correlated with reduction of the disulfide in A4V SOD1 (Fig. 4D, bottom).
Loss of GRX clearly affects oxidation of the intramolecular disulfide in SOD1, but what about non-native disulfides? Recently, ALS SOD1 mutants have been shown to oligomerize and form intermolecular disulfide cross-links that can be visualized by electrophoresis under nonreducing conditions (19–22, 48). However, when analyzed under nonreducing conditions ("−DTT"), A4V SOD1 expressed in grx1Δ grx2Δ ccs1Δ yeast exists only as a monomer, and there were no unique high molecular weight species consistent with intermolecular disulfides (Fig. 5A, right). Therefore, GRXs appear to target only the chief intramolecular disulfide of SOD1.
We also examined the effects of GRX loss on SOD1 activity. As seen in Fig. 5B, there was no significant change in SOD1 activity with WT, G93A, or A4V variants expressed in grx1Δ grx2Δ ccs1Δ cells compared with ccs1Δ single mutants. A4V SOD1 shows poor CCS-independent activity in ccs1Δ cells, and this does not change with additional grx1Δ grx2Δ mutations (Fig. 5B). Despite oxidation of the A4V SOD1 disulfide in this strain (shown in Fig. 4, A, B, and D), the SOD1 remains largely inactive. Presumably, the disulfide-oxidized SOD1 is still copper-deficient in grx1Δ grx2Δ ccs1Δ cells (see "Discussion").
To more directly test whether GRX can reduce the SOD1 disulfide, we designed an in vitro assay using purified recombinant yeast Grx2p and purified human WT or A4V SOD1. To regenerate reduced GRX, the in vitro reactions also contained GSH. In the experiment of Fig. 6A, recombinant Grx2p at 50 nM was allowed to react with disulfide-oxidized A4V SOD1 that was apo for metals and present at a concentration of 2.0 μM. Within 1 h, the disulfide was reduced (Fig. 6A, lane 3, and also see Fig. 6C, lane 6). Similar results were obtained with apoG93A SOD1 (not shown). GSH alone was not sufficient to reduce the disulfide (Fig. 6A, lane 6), but GSH was required for Grx2p-dependent reduction of the disulfide (supplemental Fig. S3B), as would be expected for GRX reactions (24). Compared with A4V SOD1, the disulfide of apoWT human SOD1 exhibited poor reactivity toward Grx2p and GSH (Fig. 6B) even after 2 h of incubation (Fig. 6C, lane 3). At best, ≈2.0% conversion to the reduced form was seen in one experimental trial out of eight with apoWT SOD1 (see supplemental Fig. S3B).
Figure 4D legend fragment: strain MC120 (identical to MC119 except ccs1Δ::ADE2 rather than ccs1Δ::URA3) also expressing A4V SOD1 under PGK1 was transformed where indicated (GRX2) with the pVC0147 plasmid for expressing S. cerevisiae Grx2p or with empty vector pRS313 (V). Top, lysates were assayed for GRX activity using the standard HED assay. Results represent the averages of two independent assays; error bars represent range. Activity is defined in terms of nanomoles of NADPH consumed per min per mg of lysate protein. Bottom, status of the A4V SOD1 disulfide was monitored as in Fig. 1. Heavy and light arrows indicate disulfide-reduced and disulfide-oxidized SOD1, respectively.
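The GRX activity unit quoted in the legend above (nanomoles of NADPH consumed per minute per milligram of lysate protein) can be illustrated with a short worked calculation. The sketch below is not part of the cited protocol (33): it assumes the standard NADPH extinction coefficient at 340 nm (6,220 M⁻¹ cm⁻¹), a 1 cm path length and made-up example numbers, and the function and variable names are hypothetical.

```python
# Illustrative conversion of an A340 slope into GRX specific activity
# (nmol NADPH consumed per min per mg of lysate protein).
# Assumptions: NADPH extinction coefficient 6220 M^-1 cm^-1 at 340 nm,
# 1 cm cuvette path length; the numbers in the example call are invented.

NADPH_EXT_COEFF = 6220.0  # M^-1 cm^-1
PATH_LENGTH_CM = 1.0

def grx_specific_activity(delta_a340_per_min, assay_volume_ml, lysate_protein_mg):
    """Return nmol NADPH consumed per min per mg of lysate protein."""
    # Beer-Lambert: concentration change (M/min) = (dA/min) / (eps * path)
    conc_change_molar_per_min = delta_a340_per_min / (NADPH_EXT_COEFF * PATH_LENGTH_CM)
    # Convert to nmol consumed per minute in the assay volume.
    nmol_per_min = conc_change_molar_per_min * (assay_volume_ml / 1000.0) * 1e9
    return nmol_per_min / lysate_protein_mg

# Example: 0.05 A340 units/min in a 0.5 ml assay containing 0.02 mg protein.
print(round(grx_specific_activity(0.05, 0.5, 0.02), 1))  # ~201 nmol/min/mg
```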
We also tested Grx2p reactivity toward metallated SOD1. As seen in Fig. 6D, the disulfide of metallated A4V SOD1 containing both copper and zinc was refractory to reduction by Grx2p in vitro. The apo version is the preferred substrate for disulfide reduction by Grx2p.
DISCUSSION
Here we describe how post-translational modification factors for SOD1 can impact the intramolecular disulfide and stability of ALS mutant SOD1. Two classes of intracellular factors are shown to work in opposite ways to control the status of the disulfide. First, the CCS-dependent and -independent pathways for copper activation promote oxidation of the human SOD1 disulfide and enhance stability of ALS mutants A4V, G93A, and G37R expressed in yeast. Without copper activation, intracellular reductants such as GRX promote disulfide reduction, contributing to SOD1 instability.
The ALS mutants seemed particularly vulnerable to loss of copper activation. Although WT SOD1 stably accumulated in yeast cells without the copper co-factor or an oxidized disulfide, the three ALS mutants we examined (G93A, G37R, and A4V) were degraded when both CCS-dependent and -independent pathways were blocked. In the yeast expression system, misfolded SOD1 mutants lacking metals and the disulfide are effectively cleared by protein degradation. There is no evidence of SOD1 aggregation in yeast by either formation of high molecular weight species on SDS gels (as in Fig. 5A and supplemental Fig. S4) or by formation of detergent-insoluble precipitates (data not shown). In mammalian cells, the clearance of misfolded SOD1 may be incomplete, allowing for accumulation of misfolded aggregates. Regardless of whether the end point is degradation or aggregation, the initiating misfolding event in SOD1 can be promoted by the absence of copper and the intramolecular disulfide. Copper loading of SOD1 is incomplete in various cells and tissues (5, 49 -51), and in the case of certain ALS mutants, this pool of immature SOD1 may very well seed formation of misfolded aggregates.
ALS mutants may also be more vulnerable to disulfide reduction by GRX. Yeast Grx2p was seen to promote reduction of the disulfide cysteines of A4V SOD1 both in vivo and in vitro, although WT human SOD1 was less reactive. It is possible that misfolding of certain ALS mutants allows for greater access of the GRX molecule toward the disulfide. This increased reactivity with GRX, together with the high instability of the disulfide-reduced state, makes the SOD1 mutant a prime target for protein misfolding and degradation.
To date, very few in vivo substrates have been documented for eukaryotic dithiol GRXs. Mammalian GRX can act as a dethionylase for actin, Hsp70, and Ras (26, 52, 53), but no substrates have been identified for S. cerevisiae Grx1p and Grx2p. Furthermore, there have been no reports of an intramolecular disulfide target for eukaryotic GRXs, only S-thionylated targets. We favor a model in which GRX acts on the intramolecular disulfide of SOD1 rather than an S-thionylated intermediate. If SOD1 were S-thionylated, such an intermediate would be detected by AMS modification and would hyper-accumulate in grx1Δ grx2Δ yeast mutants lacking dethionylase activity. To our knowledge, SOD1 represents the first reported intramolecular disulfide substrate for a eukaryotic GRX. It is quite possible that other polypeptide disulfides serve as targets, including those noted in the prion (54) and transthyretin redox-sensitive proteins (55) implicated in disease.
FIGURE 5 legend: Loss of glutaredoxins in yeast does not affect SOD1 oligomerization or alter SOD1 activity. A, lysates from the indicated yeast strains expressing A4V SOD1 under PGK1 where indicated (+) were subjected to denaturing gel electrophoresis and immunoblot analysis for steady state levels of SOD1. Prior to electrophoresis, samples containing 30 μg of total extract protein were heated at 95 °C in SDS-buffer that either contained (+DTT) or lacked (−DTT) 10 mM DTT as a reducing agent. Gels were soaked in TCEP to help reduce polypeptide cysteines according to published methods (21) prior to immunoblotting. Vertical numbers indicate size of molecular weight markers run in parallel. B, lysates from the indicated yeast strains expressing WT, A4V, or G93A SOD1 under PGK1 were subjected to either nondenaturing gel electrophoresis and nitro blue tetrazolium staining for SOD1 activity (40, 41) (top), or to SDS-PAGE and immunoblot for human SOD1 protein (bottom). Strains utilized are as follows: GRX1/GRX2+ CCS1+, CY4; GRX1/GRX2+ CCS1Δ, the ccs1Δ strain MC108; GRX1/GRX2Δ CCS1Δ, the grx1Δ grx2Δ ccs1Δ strain MC120. This particular strain background (CY4) shows lower levels of CCS-independent SOD1 activity than other strains (e.g. BY4741 or EG103), perhaps due to lower abundance of intracellular GSH needed for efficient CCS-independent activation (6).
Our studies strongly indicate that GRX preferentially acts on a SOD1 molecule that contains an oxidized disulfide, yet lacks copper. First, the in vivo effects of GRX on the SOD1 disulfide were only observed in yeast strains where copper activation was low (e.g. in ccs1Δ strains). Loss of GRX correlated with disulfide oxidation, but the SOD1 enzyme remained largely inactive, indicative of no copper co-factor. Moreover, recombinant Grx2p could reduce the disulfide of apo but not metallated A4V SOD1. If copper-deficient, disulfide-oxidized SOD1 is indeed the substrate for GRX, this would imply that SOD1 can obtain an oxidized disulfide in vivo without copper insertion. How the SOD1 disulfide is oxidized without copper is still unclear but is the subject of current investigations. In any case, the GRX- and/or GSH-mediated reduction of the disulfide in copper-deficient SOD1 would be beneficial to the cell, as it would provide additional substrate for CCS that is normally inert toward disulfide-oxidized SOD1 (23).
Although these studies on the human SOD1 disulfide were largely conducted in yeast, they are predicted to have important implications for SOD1 folding and stability in mammalian cells as well. First, the factors that control the SOD1 disulfide are well conserved in yeast and mammals, including CCS (2), the CCS-independent pathway (6), and the dithiol GRX molecules of the cytoplasm (38). Moreover, the relative instability observed with ALS mutants A4V and G41D expressed in yeast is remarkably similar to what has been reported in mammalian cells (39,47). Therefore, the effects of copper loading pathways and thiol reductants on the disulfide and the stability of SOD1 are expected to be conserved. In a previous transgenic mouse study, loss of CCS was reported to not affect motor neuron disease associated with expression of ALS mutants G93A, G37R, or G85R (45). Based on our studies in yeast, these mutants are stable without CCS due to compensatory effects of the CCS-independent pathway. It is therefore important to consider the impact of both copper loading pathways, as well as thiol reductants such as GRX on the fate of ALS mutants in motor neuron disease. | 2018-04-03T05:08:20.257Z | 2006-09-29T00:00:00.000 | {
"year": 2006,
"sha1": "61deaf5e902177a18587f8ca455c93487eb044ec",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/281/39/28648.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "13ebf044550f24d8537024ab2e2dcbc37689bb2f",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
10042147 | pes2o/s2orc | v3-fos-license | Design and Characterization of a Fully Differential MEMS Accelerometer Fabricated Using MetalMUMPs Technology
This paper presents a fully differential single-axis accelerometer fabricated using the MetalMUMPs process. The unique structural configuration and common-centroid wiring of the metal electrodes enable a fully differential sensing scheme with robust metal sensing structures. CoventorWare is used in structural and electrical design and simulation of the fully differential accelerometer. The MUMPs foundry fabrication process of the sensor allows for high yield and good process consistency, and provides a 20 μm structural thickness of the sensing element, which makes capacitive sensing feasible. In device characterization, the surface profile of the fabricated device is measured using a Veeco surface profilometer, and the mean and gradient residual stress in the nickel structure are calculated as approximately 94.7 MPa and −5.27 MPa/μm, respectively. Dynamic characterization of the sensor is performed using a vibration shaker with a high-end commercial calibrating accelerometer as reference. The sensitivity of the sensor is measured as 0.52 mV/g prior to off-chip amplification. Temperature dependence of the sensing capacitance is also characterized; a value of −0.021 fF/°C is observed. The findings in the presented work will provide useful information for design of sensors and actuators such as accelerometers, gyroscopes and electrothermal actuators that are to be fabricated using MetalMUMPs technology.
Introduction
One of the current trends in physical sensor technologies is hybrid rather than monolithic integration for an optimal sensing element and dedicated electronics, and overall low cost and better performance [1]. Electroplated metal as sensing structural material has been attempted for robust sensors [2][3][4][5]. For process controllability and overall low cost of MEMS elements, foundry services are also preferable. Among the many MEMS foundry services, MUMPs provides relatively mature technologies for a variety of MEMS materials including metal. Various types of actuators fabricated using MetalMUMPs technology have been reported, of which most take great advantage of the thick nickel structural layer for large actuation force and displacement [6]. Recently, metal sensors such as capacitive gyroscopes have been explored using MetalMUMPs or other adapted processes [2,3,[7][8][9]. Compared to polysilicon sensing structures, the metal sensing elements, which normally have thicker structures, allow for a larger proof mass and greater sensing capacitance, and therefore a smaller overall device size with comparable performance. In the reported metal gyroscopes [2,3,7], the larger proof mass results in higher Coriolis force and increases the sensitivity in microgyroscope design. Although MetalMUMPs technology is widely used in actuators, due to constraints of its design rules, only a few capacitive sensors have been attempted using this process [4,5,10]. In these reported displacement sensing schemes, a half-bridge capacitive sensing circuit is constructed using sidewall capacitance formed by nickel electrodes.
In this paper, a MetalMUMPs capacitive accelerometer employing in-plane sensing mechanism has been designed, fabricated and characterized. The motivation is to exploit some particular merits in MetalMUMPs technology, such as the excellent elastic and electrical properties of nickel, good process controllability, robust metal structure, low wiring parasitics and low cost potential. The uniqueness of this demonstrated device is the implementation of a fully differential sensing scheme with common-centroid wiring of the symmetrically partitioned sensing capacitor groups which also allows offset cancelation for large sensitivity. In device design, special connections are employed by using the doped polysilicon and insulator layers in the stacked structure, to realize mechanical connection and electrical isolation of the separated proof mass. To validate the unique sensing mechanism, a commercially available universal capacitive readout IC MS3110 from Irvine Sensor (Costa Mesa, CA, USA) was used in device characterizations.
Device characterization included derivation of residual stress in the structural nickel layer based on optical measurement of the fabricated devices, and electrical tests of the packaged device. A Veeco surface profilometer (Tucson, AZ, USA) was used in the optical measurement. Due to the robustness of the metal structures and particular design rules, MetalMUMPs sensors and actuators normally have larger dimensions than their polysilicon or single-crystal silicon counterparts. The undesired accumulated deformations of the structures could be significant in many MetalMUMPs devices. The residual stress characterized in this work can be a useful resource for MetalMUMPs users in design and optimization of their devices. The fully differential sensing scheme can also be applied to design of other capacitive devices. Figure 1 shows a 3D model of the prototyped fully differential accelerometer fabricated using MetalMUMPs technology. The sensor has overall dimensions of approximately 1.6 mm × 1.2 mm in footprint, with a structural thickness of approximately 20 μm. The designed accelerometer consists of a 1.3 mm × 1.0 mm proof mass, sixty-four pairs of sensing comb drives, and four symmetrically folded mechanical springs through which the proof mass is anchored to the substrate. The equivalent dimension of each spring beam is 1,200 μm × 8 μm if stretched. In device design, MetalMUMPs design rules were strictly followed for successful structure release. Table 1 summarizes critical technological parameters of MetalMUMPs technology that are related to device design and performance prediction [6]. The dimensions of the sensor structures and associated material properties are given in Table 2.
Device Structural Design
As a typical second-order system, the mechanical performance of an accelerometer is largely determined by the response of its mechanical springs. To reduce the device size, folded mechanical springs have been used in the designed device, as shown in Figure 1. An accurate analytical model is used in mechanical spring design. The model includes the effects of the stiffness of the meanders in the folded springs, as shown in Figure 2 [13]. Based on this configuration, the spring constant of the overall four guided-end springs can be calculated from the meander model, where n is the number of meanders, E is the Young's modulus of nickel (the structural material in this device), a and b are geometrical parameters as shown in Figure 2, and I = wt³/12 is the moment of inertia of each spring beam, which has a rectangular cross section; w and t denote the width and thickness of the springs, respectively. The spring constant is then calculated as 73.6 N/m. With the given dimensions of the proof mass, the mechanical resonant frequency of the structure can be estimated as approximately 2.9 kHz.
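As a rough cross-check of the figures above, the proof-mass weight can be estimated from the quoted footprint and structural thickness, and the resonant frequency recomputed from the stated overall spring constant. The sketch below assumes a nickel density of roughly 8,900 kg/m³ and ignores the thin gold layer, etch holes and comb fingers, so it is only an order-of-magnitude verification, not the authors' model.

```python
import math

# Stated design values (from the text).
k = 73.6            # overall spring constant of the four folded springs, N/m
length = 1.3e-3     # proof-mass length, m
width = 1.0e-3      # proof-mass width, m
thickness = 20e-6   # structural nickel thickness, m

# Assumed material property (not given in the text).
rho_nickel = 8900.0  # kg/m^3

mass = rho_nickel * length * width * thickness   # ~2.3e-7 kg
f0 = math.sqrt(k / mass) / (2.0 * math.pi)       # resonant frequency, Hz

print(f"proof mass ~ {mass * 1e6:.2f} mg, f0 ~ {f0 / 1000:.2f} kHz")
# ~0.23 mg and ~2.8 kHz, consistent with the quoted ~2.9 kHz.
```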
Common-Centroid Wiring and Fully Differential Sensing
As an exploratory study, this paper presents a fully differential capacitive sensing scheme implemented using the MetalMUMPs process. A single-axis accelerometer with a nickel structure is used as the demonstrating device. Intuitively, it is challenging to realize fully differential sensing schemes with metal microstructures because of the limits of the design rules for signal routing. All the reported MetalMUMPs capacitive sensors use a half-bridge sensing scheme [4,5,10]. The unique features of the accelerometer demonstrated in this work include a common-centroid wiring configuration for the fully differential sensing scheme, and a metal proof mass and electrodes for increased sensitivity and robust structures. Figure 3 shows illustrative electrical connections of the capacitive sensing electrodes to construct the fully differential capacitive sensing scheme, with insets showing the mechanical connection of the microstructures in the device. (Figure 3 caption fragment: pieces (1) and (3); the polysilicon underneath the gap is used for mechanical support.)
As shown in Figure 3(a), to realize a fully differential sensing scheme by ample use of design rules and material arrangements in MetalMUMPs, the accelerometer proof mass is separated into three pieces which are mechanically connected by a combined layer underneath the main structural nickel layer. The combined layer consists of a 0.7 µm polysilicon layer sandwiched between two nitride layers, both of which have a thickness of 0.35 µm, as given in Table 1. While being separated from the large piece (1) that serves as an output node of the sensing bridge, the two small diagonal pieces (2) and (3), as shown in Figure 3(a), are connected using a polysilicon layer underneath the nickel layer to form another output node. With this strategy and external swapped wirings, a full sensing bridge can be constructed, as described in the following paragraphs. Figure 3(b) shows the cross-sectional view of the combined layers for the mechanical connection of proof mass pieces (2) and (3). The gap between the two diagonal pieces and the whole large piece is used for mechanical connection and electrical isolation [14].
The inset in Figure 3(a) shows the connection of individual comb finger pairs. Connections of two pairs of the sensing comb drives are illustrated. The two types of stator electrodes that are both anchored to the substrate are electrically isolated from each other. The same type of stator electrodes are electrically connected differently for differential sensing. As shown in the figure, while the two lower stator electrodes in both pairs are directly connected using the nickel metal, the two upper stator electrodes are electrically connected using the doped polysilicon layer underneath the structure material. While the polysilicon layer is isolated from the lower electrodes, it is connected to the upper electrodes in the anchor pads.
A fully differential sensing bridge is formed by wiring the comb fingers in a common-centroid manner as shown in Figure 3(a). C1a, C3a, C1b and C3b are connected together to the positive modulation voltage Vm+, while C2a, C4a, C2b and C4b are connected together to the negative modulation voltage Vm−. The electrical equivalent circuit of the sensing bridge is shown in Figure 4. A motion of the proof mass in the y direction, as shown in Figure 3, has been assumed. Consequently, the equivalent circuit in Figure 4 can be further simplified as shown in Figure 5, where C1 = C1a + C1b, C2 = C2a + C2b, C3 = C3a + C3b and C4 = C4a + C4b.
Referring to Figure 5, the output voltage VS is expressed in terms of the bridge capacitances C1–C4, each with a calculated nominal value of ~110 fF, and CP, the estimated parasitic capacitance formed by the wiring metal beams and polysilicon. The overall sensitivity, VS/ain, can then be derived, where ain is the input acceleration. In this study, due to the suspension of the metal wiring with a large gap to the substrate, only the parasitic capacitance resulting from the polysilicon connections is included. A relative permittivity of 4.0 has been used for the SiO2 underneath the polysilicon connection. The estimated parasitic capacitance for each quarter is ~160 fF; Vm is the modulation voltage; y is the displacement of the proof mass under acceleration; y0 is the original finger gap; and ω = 2πf is the angular frequency of mechanical resonance, which can be obtained from the resonant frequency f. For a modulation voltage with an amplitude of 2.25 V, an overall mechanical sensitivity of 0.71 mV/g can be estimated with the estimated parasitic capacitance.
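Because the original expressions for VS and VS/ain did not survive extraction, the snippet below only illustrates how a bridge of this kind responds to a differential capacitance change. The capacitive-divider transfer function and the ±ΔC perturbation are generic textbook assumptions, not the paper's equations; only C0 ≈ 110 fF, CP ≈ 160 fF and Vm = 2.25 V are taken from the text, and the 0.1 fF perturbation is arbitrary.

```python
# Sketch of the fully differential bridge response (cf. Figure 5).
# Generic capacitive-divider model -- an assumption, not the paper's equation.

C0 = 110e-15   # nominal bridge capacitance C1..C4, F (from the text)
CP = 160e-15   # estimated parasitic capacitance per quarter, F (from the text)
VM = 2.25      # modulation voltage amplitude, V (from the text)

def node_voltage(c_plus, c_minus):
    """Output of one half bridge driven by +Vm/-Vm and loaded by CP."""
    return VM * (c_plus - c_minus) / (c_plus + c_minus + CP)

def bridge_output(delta_c):
    """Fully differential output for a +/- delta_c change on opposite arms."""
    v_a = node_voltage(C0 + delta_c, C0 - delta_c)   # one half bridge
    v_b = node_voltage(C0 - delta_c, C0 + delta_c)   # the swapped half bridge
    return v_a - v_b

# Arbitrary 0.1 fF perturbation, just to exercise the model.
print(f"{bridge_output(0.1e-15) * 1e3:.2f} mV")   # ~2.4 mV
```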
Device Simulation
CoventorWare, a FEA simulator dedicated to MEMS device design, is used in the structural and electrical design verification for the fully differential accelerometer. The technological parameters presented in the embedded MetalMUMPs process in CoventorWare are used in the whole simulation. As a key mechanical parameter, residual stress of 100 MPa [6] in the nickel structural layer is adopted. Linear responses are observed for both the displacement and capacitance change, as shown in Figure 6. A capacitance sensitivity of ~0.16 fF/g has been obtained. Based on these simulation results, with an external modulation voltage of 2.25 V, a mechanical sensitivity of 2.8 mV/g has been deduced. The results show larger sensitivity than the analytical design value because in FEA simulation, fringe capacitance is included. In addition, the simulator extracts no parasitic effect from the simplified 3-D model. From the modal simulation, the sensor structure demonstrates a resonant frequency of approximately 3.157 kHz, which is within 10% of the calculated value.
Device Fabrication
The device presented in this paper is fabricated at MEMSCAP using MetalMUMPs technology. In total, 10 thin-film layers are involved in the MetalMUMPs fabrication process. An illustrative cross-sectional view of the released accelerometer is shown in Figure 7, with color codes showing the thin films involved (see Table 1).
MetalMUMPs technology uses a substrate wafer with high resistivity. A 2 µm layer of silicon oxide (Isolation Oxide) is grown on the entire wafer to provide electrical isolation from the substrate. Layer Nitride 1 in combination with the layer Nitride 2 is used for various purposes in the demonstrated device. Firstly, the combined nitride layers provide a protective encapsulation for the polysilicon that is used for electrical connections of the sensing electrodes and separated proof mass pieces. Secondly, they are patterned to protect other areas in the wet anisotropic Si etching for the trench above which the entire accelerometer structure is suspended. The trench is critical for final device release. Lastly, a patterned nitride area is also used to provide a mechanical connection between the two pieces of proof mass (2) and (3), as shown in Figure 3. This device design makes full use of the insulator layers in MetalMUMPs to achieve a fully differential accelerometer. The 0.7 µm-thick doped polysilicon is mainly used as electrical connection material in the design. It also provides connections at the crossings in the electrical routing. Mechanical structures, including the proof mass and springs, consist of a 20 µm nickel layer and a 0.5 µm gold layer on top of the nickel. Nickel has been chosen in MetalMUMPs due to its considerably good electrical and elastic properties, and its manufacturability using electroplating technology for overall low cost and good process control. The other two layers not shown in Figure 7 are the Oxide 1 and Oxide 2 layers. They are made of phosphosilicate glass (PSG) and act as sacrificial release layers. Layer Oxide 1 in particular is used to define areas where a 25 µm deep trench in the silicon substrate will be formed subsequently. It is removed after the release of the Nitride 1 layer. Oxide 2 is removed by wet chemical etching in the final structural release step to free the entire accelerometer. In the fabrication of the accelerometer in this work, the 25-µm trench underneath the entire device, defined by Oxide 1, is anisotropically etched using KOH after the device is released and the Oxide 1 is removed. Figure 8 shows SEM images of a released device with insets showing some detailed microstructures. As shown in Figure 8(a,b), the proof mass made of nickel is formed by three pieces that are electrically isolated and mechanically connected. Enclosed by the labeled edges, the silicon trench under the proof mass and sensing electrodes can also be seen in the picture. Figure 8(c) shows the anchor structures of the stator electrodes, corresponding to the inset of Figure 3. One group of electrodes is connected directly using structural nickel; the other electrode pads are electrically connected by polysilicon underneath, which is not visible. SEM observations have shown no evidence of in-plane buckling of the comb fingers, although out-of-plane curling is observed, which will be discussed in Section 4.2.
Sensor Tests
In device characterization, a universal capacitive readout circuit IC MS3110 from Irvine Sensors was used. Due to the input configuration of the MS3110 board, only a half sensing bridge was connected to the board in each characterization. Yet the swapped bridge was also tested. In circuit configuration and initiation, a modulation voltage of 2.25 V was designed in the MS3110. An off-board bandpass filter with a gain of 52 dB was employed for further signal conditioning. Preliminary dynamic tests were conducted using a LMT-100 shaker from Ling Electronics (Corona, CA, USA). In the measurements, a Type 8692B50 PiezoBeam accelerometer from Kistler (Novi, MI, USA) was used as a reference device. Figure 9 shows the test setup and the mounting board on which the MetalMUMPs device under test (DUT) was assembled with the reference accelerometer. (Figure 9 setup components: signal generator, Ling's power amplifier, Ling's LMT-100 shaker, MS3110 readout board, spectrum analyzer and oscilloscope.) The DUT was packaged in a 68-pin J-Bend Leaded Chip Carrier (Evergreen Semiconductor Materials, Inc., San Jose, CA, USA). The board was screwed to the threaded pole of the shaker. Figure 10 shows a comparison of the output waveforms between the fabricated sensor and the reference accelerometer under 1 g acceleration. Prior to the test, the reference accelerometer was calibrated using a PCB Piezoelectronics (Depew, NY, USA) hand-held shaker that provides a standard 1 g acceleration at 159.1 Hz. Due to the design of the MS3110 board, each capacitive half-bridge in Figure 5 was tested, respectively. In each test, the excitation was calibrated as a sinusoidal acceleration with an amplitude of 1 g and a frequency of 110 Hz. The half-bridge sensing scheme demonstrated a sensitivity of ~105 mV/g with a 52 dB external gain, which corresponds to a 0.26 mV/g mechanical sensitivity without external gain. Compared to the designed value, the reduced measured sensitivity is attributed to a few factors including the parasitic capacitance in the test system, increased resonant frequency caused by the residual stress in the mechanical spring, etc. The overall sensitivity of the fully differential accelerometer can be demonstrated as ~210 mV/g. The maximum acceleration applied in device characterization is 5 g. Beyond 5 g, the system demonstrates considerable non-linearity that is caused by the mounting of the device and evaluation board on the shaker. Figure 10. Output waveforms from the DUT and reference accelerometer under a 1 g sinusoidal acceleration. Both channels have the same sensitivity of 50 mV/div. (The phase shift between the two accelerometers is due to the different sensing and parasitic capacitance and readout circuit.)
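A quick sanity check ties these numbers together: converting the 52 dB off-board gain to a linear factor and dividing it out of the measured outputs reproduces the quoted mechanical sensitivities. The snippet below simply restates that arithmetic with the values given in the text.

```python
# Measured (amplified) sensitivity referred back to the sensor output.
gain_db = 52.0
gain_linear = 10 ** (gain_db / 20.0)   # ~398x voltage gain

half_bridge_mv_per_g = 105.0           # shaker test, half bridge, after 52 dB gain
full_bridge_mv_per_g = 210.0           # overall fully differential output

print(f"{half_bridge_mv_per_g / gain_linear:.2f} mV/g")  # ~0.26 mV/g
print(f"{full_bridge_mv_per_g / gain_linear:.2f} mV/g")  # ~0.53 mV/g (quoted as 0.52 mV/g)
```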
Characterization of Residual Stresses in MetalMUMPs Nickel Layer
Residual stress is a common issue in surface-micromachined MEMS devices, such as the accelerometer demonstrated in this work. Even though an electroplating process is used for nickel deposition, structural deformations caused by residual stresses in the accelerometer structures have been observed in the fabricated device.
When investigating the effects of residual stresses, two types of intrinsic stresses are normally considered, i.e., mean stress and stress gradient [15,16]. The consequence of the residual mean stress in a thin film is compressive or tensile stress along the axial direction of the film after the film release. In contrast, the consequence of a stress gradient in the film thickness direction is the bending of the film structures, such as cantilever beams, upward (positive stress gradient) or downward (negative stress gradient) [11]. Stress characterization of a particular technology is useful to other device and process designs using the same technology. Although some previous works on axial and gradient stress characterization for MetalMUMPs devices have been done by other researchers using special test structures [6,11], it is still worthwhile performing non-destructive stress characterizations directly using the sensing comb drives in the accelerometer. The results presented in this work will provide useful information for design of sensors and actuators such as accelerometers, gyroscopes and electrothermal actuators that are to be fabricated using MetalMUMPs technology.
In order to characterize the mean stress and the stress gradient in thin films, different methods have been used to diagnose the states of stress in the film [17]. Among them, measuring the deformations of a cantilever beam made of the thin film to be characterized is considered the simplest and most frequently used method [16,[18][19][20]. In our demonstrated accelerometer, no matter whether they are anchored to the substrate or to the proof mass, the capacitive sensing fingers can be considered as cantilever beams made of nickel and gold, as shown in Figure 11. It can be observed in SEM scanning that the sensing comb fingers have a downward bending in the fabricated device, which means that a negative stress gradient has been established in device fabrication. Figure 11. Downward bending can be observed along the sensing comb fingers that are used for stress characterization.
For the continuous electroplating process in MetalMUMPs, it is reasonable to assume a linear intrinsic stress in the thickness direction of the beam; the linear stress distribution at a particular location can then be expressed as σ(z) = σ0 + σ1·(2z/t) [16,21], where t is the beam thickness, z ∈ (−t/2, t/2) is the coordinate normal to the surface of the beam with the origin at the film's mid-plane, σ0 is the residual mean stress responsible for in-plane shrinkage of the beam, and σ1 is the peak value of the gradient stress which causes the deflection or curling. The distributions of σ0 and σ1, and the resultant stress distribution, are illustrated in Figure 12(a). The stress distribution leads to a bending moment in the beam, obtained by integrating σ(z)·z over the beam cross section. Since the metal structural layer in MetalMUMPs technology consists of two layers, i.e., a 20 μm nickel layer with a 0.5 μm gold layer on the surface, as shown in Figure 12(b), the effective bending moment in the beam can then be expressed through the composite bending stiffness ĒNi·INi + ĒAu·IAu [22], where INi and IAu are the moments of inertia of the nickel layer and gold layer, respectively, about the neutral axis. Because the gold layer is much thinner than the nickel layer, the neutral axis along which zero stress exists can be assumed to be at the middle of the thickness. In Equation (6), ρ is the curvature of the bending beam, which is normally measured using optical methods. In this study, a Wyko NT1100 optical profilometer (Tucson, AZ, USA) is used to measure the surface curvature of a deflected sensing comb drive. The stress gradient is then derived based on the measurements. Figure 13 shows a typical measurement data set obtained from NT1100 scanning on some groups of sensing comb drives. Post-scanning data processing allows for extraction of a variety of 3-D geometrical and morphological parameters.
ĒNi and ĒAu are the equivalent Young's moduli of the nickel and gold layers, respectively, which can be expressed as Ē = E/(1 − ν) [15], where E is the Young's modulus and ν is the Poisson ratio. The Young's moduli of nickel and gold are 160 GPa and 79 GPa, while the Poisson ratios of nickel and gold are 0.31 and 0.44, both of which can be obtained from [23].
The two-layered sensing comb drive beams are asymmetrical beams because of the different Young's moduli and thicknesses of the nickel and gold layers. The bending moment caused by the mean residual stress in the nickel layer can be neglected due to the asymmetrical beam [11]. Moreover, the residual stress in the gold layer can be neglected due to its much smaller thickness compared to the nickel layer. As a result of the above approximations, the peak value of the gradient stress σ1 in the nickel layer can be expressed in terms of the measured curvature, and the in-plane residual mean stress can be determined as σ0 = ĒNi·ε (Equation (9)), where ε = (s − l)/l is the strain of the beam after release; l is the original (designed) length of the beam and s is the length of the beam after release, which can be deduced by measuring the curvature of the beam obtained in the profile scanning as shown in Figure 13.
With the measured s of 300.123 μm, the designed l of 300 μm and the effective Young's modulus of 232 GPa, the stress gradient is calculated as approximately −5.27 MPa/μm using σ1/(t/2), where t is the beam thickness, and the residual mean stress is calculated as 94.67 MPa using Equation (9). These values are within 10% of the values reported in [11]. The differences are due to batch-to-batch variation. Figure 13. Screen shot of the Veeco post-scanning data processing interface. The data was obtained from a scanning of a group of sensing comb drives at 25 °C.
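The arithmetic behind the quoted stress values can be reproduced from the released and designed beam lengths. The relations used below (ε = (s − l)/l, σ0 = Ē·ε, and σ1 recovered as the reported gradient times the half thickness) follow the description in the text, but since the original equations were lost in extraction they should be read as a reconstruction, not a verbatim transcription.

```python
# Residual mean stress and gradient-stress arithmetic for the nickel beams.
s = 300.123e-6             # beam length after release, m (measured)
l = 300.0e-6               # designed beam length, m
E_eff = 232e9              # effective Young's modulus used in the text, Pa
t = 20e-6                  # nickel beam thickness, m
gradient = -5.27e6 / 1e-6  # reported stress gradient, Pa per metre of thickness

strain = (s - l) / l                # ~4.1e-4
sigma_0 = E_eff * strain            # mean residual stress, ~95 MPa (quoted 94.67 MPa)
sigma_1 = gradient * (t / 2.0)      # peak gradient stress at the surface, ~-52.7 MPa

print(f"sigma_0 ~ {sigma_0 / 1e6:.1f} MPa, sigma_1 ~ {sigma_1 / 1e6:.1f} MPa")
```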
Capacitance Change Due to the Residual Stresses
Due to the curling of the sensing fingers caused by the residual stress discussed above, the sensing capacitance of each pair is 5.3 fF, which consequently differs from the designed value of 6.6 fF, a value based on flat comb drives.
Without the residual stress, the two electrode plates are parallel to each other. Due to the residual stress, the electrode plates bend downward so that the common area between the two electrode plates decreases, and the capacitance of the sensing system thus decreases.
Assuming the rotor and stator are symmetric, the two electrode plates bent by the residual stress are shown in Figure 14. Because the bending distance is much smaller than the length of the sensing finger, the surface profile of the electrode plates is approximated as a straight line instead of a curve. In this case, the common area of the two electrode plates after release is about 1/6 smaller; thus the total capacitance of the accelerometer decreases by about 1/6 of the design value.
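The overlap-loss argument above amounts to one line of arithmetic: reducing the design capacitance by the estimated one-sixth loss of common area lands close to the measured per-pair value. Only numbers already given in the text are used.

```python
# Effect of finger curling on the per-pair sensing capacitance.
designed_c_fF = 6.6        # per-pair design value for flat comb fingers
overlap_loss = 1.0 / 6.0   # estimated loss of common electrode area after release

print(f"{designed_c_fF * (1.0 - overlap_loss):.2f} fF")  # ~5.5 fF vs measured ~5.3 fF
```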
Temperature Dependence of the Sensing Capacitance
Temperature dependences of the sensing capacitance and resultant parameters have also been characterized. A Kapton micro heater from Omega (Stamford, CT, USA) is attached to the backside of the device package to heat the sensor structure, which is placed underneath the objective lens of the Veeco surface profilometer. Profiles of the comb drive surface are scanned at three temperature points, i.e., 25, 50 and 73 °C. The downward bending with elevated temperature, as revealed by the profiles, results in a temperature dependence of the capacitance (TDC) of −0.021 fF/°C. It corresponds to a 10% and a 19% reduction at 50 °C and 73 °C, respectively, compared to the capacitance of each sensing pair at 25 °C. Based on the above TDC, a temperature coefficient of sensitivity (TCS) of −0.014 mV/°C can be derived. No apparent in-plane buckling is observed.
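The quoted percentage reductions follow directly from the measured TDC and the per-pair capacitance at 25 °C, assuming a linear dependence on temperature. The short check below uses only values stated in the text.

```python
# Percentage capacitance reduction implied by the measured TDC.
tdc_fF_per_degC = -0.021   # temperature dependence of capacitance
c_25c_fF = 5.3             # per-pair sensing capacitance at 25 degC

for temp_c in (50.0, 73.0):
    delta_c = tdc_fF_per_degC * (temp_c - 25.0)
    print(f"{temp_c:.0f} degC: {-100.0 * delta_c / c_25c_fF:.0f}% reduction")
# ~10% at 50 degC and ~19% at 73 degC, as reported.
```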
(Figure 14 schematic labels: rotor, stator, proof mass, substrate.)
Conclusions
A capacitive accelerometer enabled by MetalMUMPs foundry technology has been designed, fabricated and characterized in this project. The device features a fully differential sensing scheme with a unique common-centroid capacitance configuration. In simulation results, linear responses for both displacement and capacitance are observed. With a ~52 dB external amplification gain, the accelerometer achieves a sensitivity of ~210 mV/g. Without amplification, the device demonstrates a mechanical sensitivity of 0.52 mV/g. Residual mean stress and stress gradient are characterized as 94.67 MPa and −5.27 MPa/μm, respectively, by measuring the surface profile using a Wyko NT1100 optical surface profilometer. The temperature dependence of the sensing capacitance is characterized as −0.021 fF/°C, which results in a temperature coefficient of sensitivity of −0.014 mV/°C. No apparent in-plane buckling is observed. The data reported in this paper provides other MetalMUMPs users with an important reference for designing and optimizing suspended MEMS devices. | 2014-10-01T00:00:00.000Z | 2011-06-13T00:00:00.000 | {
"year": 2013,
"sha1": "fe71509a6ba3c8945f17b596f4c68c378b5f6772",
"oa_license": "CCBY",
"oa_url": "http://www.mdpi.com/1424-8220/13/5/5720/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fe71509a6ba3c8945f17b596f4c68c378b5f6772",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Engineering",
"Medicine",
"Computer Science",
"Materials Science"
]
} |
210168555 | pes2o/s2orc | v3-fos-license | From petrographic analysis to stereomicroscopic characterisation: a geoarchaeological approach to identify quartzite artefacts in the Cantabrian Region
Several isolated studies have tried to understand quartzite from an archaeological perspective by applying two different methodological approaches. The first one is based on non-destructive characterisation, aiming to understand human procurement and management of quartzite, without solid geoarchaeological criteria. The second characterised the material from archaeological sites using only petrographic or geochemical perspectives of a limited sample. Currently, both perspectives are unconnected, creating a methodological gap that needs to be solved to study the procurement and management of quartzite in greater depth. The present study, mainly methodological, will explore the gap between petrographic analysis and non-destructive characterisation. In doing so, we can fill this information vacuum and generate a solid geoarchaeological basis to characterise not only a sample but complete assemblages. To this end, we analyse the lithic assemblages at El Arteu and El Habario, two Middle-Palaeolithic sites in the Cantabrian Region, northern Spain. We summarise the main results derived from petrographic analysis, but especially we will focus on non-destructive criteria to characterise the lithic surfaces of archaeological quartzite using stereoscope microscopy. This process allows us to understand the complete assemblage but also, through technological characterisation, understand the management of different quartzite petrogenetic types in both sites.
Quartzite in archaeology
The study of raw materials used for lithic implements is a well-known topic of research in prehistoric archaeology. Since the beginning of this scientific discipline, petrological description and classification of rocks transformed into artefacts were carried out by the first prehistorians, such as Juan Villanova i Piera or Édouard Lartet (Pelayo López and Gonzalo Gutiérrez 2012). The characterisation of these raw materials has become more detailed and frequent since the 1950s through the use of classic petrological methodologies and because of increasing archaeological interest in new perspectives to understand prehistoric societies (Polanyi 1957). Both perspectives converge in the characterisation of obsidian as the key raw material to trace incipient commercial routes in the Mediterranean in Recent Prehistory (Binns and McBryde 1969; Dixon et al. 1968). The interesting results, and also the methodologies generated, led to a significant increase in lithic raw material studies for the next two decades, focussed not only on the trade itself, but also on the areas where abiotic resources were extracted by prehistoric societies (Earle and Ericson 1977; Kowalski et al. 1972; Sieveking et al. 1972; Singer and Ericson 1977). In the 1980s and 1990s, raw material studies increased not only in number, but also in their geographic and chronological frameworks. To do so, different raw materials were analysed and, for Palaeolithic Archaeology in Europe, flint became the best and most widely characterised raw material (e.g. Demars 1980; Floss 1990; Geneste 1985; Luedtke 1979; Morala 1980; Séronie-Vivien and Séronie-Vivien 1987). At the same time, research focussed on economic and social dynamics through the techno-typological characterisation of lithic assemblages has increased our knowledge of the acquisition and management of flint (Andrefsky 1994; Roebroeks 1988; Turq 1996). All these elements create a better understanding of the economy created around raw materials and the complex management and catchment patterns followed by Palaeolithic societies. Electronic supplementary material: the online version of this article (https://doi.org/10.1007/s12520-019-00981-7) contains supplementary material, which is available to authorized users.
Despite being the second most-often-used lithic raw material in the European Palaeolithic, quartzite has not received the same methodological development from geoarchaeological perspectives as flint or obsidian. This situation creates a lack of information and many biases in understanding the catchment and management strategies pursued by prehistoric societies. This is especially important in those areas where flint is not common or in ancient chronologies in areas where non-flint rocks are predominant. However, in recent years, quartzite from archaeological deposits has been studied from geoarchaeological perspectives by combining different destructive methodologies such as petrographic analysis and geochemical procedures (Blomme et al. 2012; Cnudde et al. 2013; Dalpra and Pitblado 2016; Pitblado et al. 2008, 2012; Prieto et al. 2019; Veldeman et al. 2012). These studies reflect the ambiguity of the term quartzite in archaeological literature, where it can refer to different geological origins, from clearly metamorphic to sedimentary. This prompts the need for petrography to differentiate exactly between sedimentary orthoquartzites and metamorphic quartzites. Similar discrepancies between megascopic identification and laboratory analysis are also mentioned by petrologists, especially when quartzite is used as a field name (Howard 2005; Skolnick 1965).
Archaeological quartzites have also been studied by other specialists using non-destructive techniques. In the Cantabrian Region, several studies have attempted to study this material by its macroscopic characterisation without examining these rocks from detailed and curated geoarchaeological perspectives (e.g. Álvarez-Alonso et al. 2013;Castanedo 2001;Manzano et al. 2005;Sarabia 2000). These studies have proposed different procurement and management strategies for quartzite, mainly based on techno-typological analysis, but their conclusions are clearly biased by the lack of accuracy in their descriptions. Currently, both perspectives are unconnected, creating a methodological gap that needs to be solved for more in-depth studies of quartzite acquisition and management strategies, based on a solid geoarchaeological approach.
The goal of this methodological work is to explore the gap between destructive analysis, mainly petrographic study of thin sections, and non-destructive characterisation of quartzite. The information gap might be filled in this way and a solid geoarchaeological basis might be established to characterise not only a sample but complete quartzite assemblages. To this end, we have analysed the lithic collections from El Arteu and El Habario, two Middle-Palaeolithic sites situated in the central part of the Cantabrian Region, northern Spain. We summarise the main results derived from the petrographic and geochemical characterisation of 18 samples recently published by our research group (Prieto et al. 2019), but especially we will focus on non-destructive criteria to characterise the lithic surfaces of quartzites using stereoscope microscopy and descriptive characterisation. The diagenetic and metamorphic changes which modify the texture and structure of former sedimentary rocks, the grain size and its distribution, and the mineral component itself will be analysed in this research using both approaches. Therefore, the correlation between both techniques (destructive and non-destructive) is the main goal of this research. This process allows us to understand the complete assemblage but also, through technological characterisation, to understand the management of quartzite in both sites, taking into account production, usage and discard of this raw material (Roebroeks 1988; Turq et al. 2013). During the last two decades, several studies have contributed to the knowledge about lithic management strategies developed by Middle-Palaeolithic groups in the Cantabrian Region (e.g. Carrión et al. 2008; Gonzalez-Urquijo et al. 2005; Rios-Garaizar 2012). These studies accomplished not only a description of lithic assemblages using typological characterisation of retouched artefacts, but also unveiled catchment mechanisms (Carrión and Baena 1999; Castanedo 2001; Manzano et al. 2005), technological procedures (Carrión 2002; Cuartero et al. 2015; Maillo 2007; Santamaría et al. 2010) and use practices (Rios-Garaizar 2010). Such work has opened new perspectives to discover a complex and varied economy from a geographic, chronological and human standpoint. The research we present here shows not only the management of quartzite at these two Middle-Palaeolithic sites, but also the new wider perspectives this geoarchaeological study can open in archaeological practice (Dibble et al. 2017; Romagnoli et al. 2018).
Materials
The sites of El Habario and El Arteu are located in the central part of the Cantabrian Region, in the northwestern part of Spain (Fig. 1). They are in the western part of the Autonomous Community of Cantabria. This area is a mountainous region and the sites are located in the basin of the River Deva and its tributaries, in the eastern foothills of the Picos de Europa. This area is characterised by a complex geology determined by the presence of two geological domains. The first one is the eastern part of the Cantabrian Zone, mainly composed of Carboniferous materials. Two main provinces can be distinguished: the Pisuerga-Carrión Province to the South, and the Picos de Europa and Ponga Province to the West and North. Both provinces also contain Cambrian, Ordovician, Silurian and Devonian strata (Bastida 2004). The second one is the western part of the Basque-Cantabrian Basin, in particular the Navarro-Cantabrian sulcus, dominated by sedimentary Mesozoic rocks and small parts of Cenozoic material. Neogene and Quaternary deposits are represented too. In this context, quartzite occurs in multiple different environments: as outcrops (quartz arenites), generally related to the older strata; as pebbles and blocks inside Carboniferous conglomerates; and as pebbles in fluvial Quaternary deposits.
The El Habario and El Arteu assemblages have been attributed chrono-culturally to the Mousterian, with discoid reduction methods mostly in quartzite. They appear to form a network of sites together with El Esquilleu rock shelter (Baena et al. 2012; Carrión 2002; Carrión et al. 2008, 2013; Manzano et al. 2005; Carrión and Baena 1999, 2005). The latest research concluded that the dominant lithic reduction model was based on hierarchical centripetal reduction processes aimed at flake production. All these reasons, as well as the proximity to the Remoña Conglomerates, identify this site as a quartzite workshop, probably related to the central layers of the sequence of El Esquilleu and El Arteu. The collection analysed here comes from El Habario B and the number of pieces is 467. All except three lithics are quartzites.
Fig. 1 Location of El Habario, El Arteu and the Deva-Cares Valley. Main chrono-lithological strata are displayed based on a 1:1.000.000 geological map (Álvaro et al. 1994).
The assemblages studied here are deposited in the Museum of Prehistory and Archaeology of Cantabria (MUPAC). Of the total of artefacts made in quartzite from El Habario (HA) and El Arteu (ATS), we sampled a set of representative pieces for thin section analysis (n = 17) and X-ray fluorescence analysis, as described in Prieto et al. (2019). We also select another sample from a Deva river beach. Petrographic analysis includes packing, texture and quartz grain feature description, also grain size characterisation based on image processing and, finally, mineral characterisation. The results point to a high variability of archaeological quartzite according to the genesis of the material. The 18 quartzites were classified into seven main types according to their petrogenesis as described by thin section, especially through the analysis of textures, packing and quartz grain features, and secondly by quartz grain size and morphology. The variability in mineral characterisation and geochemical composition was only used to understand similarities between quartzites at both sites, understanding them as cumulative features to the main types defined. The petrogenetic types used here are the following (Table 1): for sedimentary quartzites, clastic fabric with matrix or non-quartz cement quartz arenite (MA), clastic quartz arenite (CA), syntaxially overgrown orthoquartzite (OO) and sutured grain orthoquartzite (SO); and for truly metamorphic quartzites another three types depending on the textural changes on quartz grains: bulging recrystallised quartzite (BQ), subgrain rotation recrystallised quartzite (RQ) and grain boundary migration recrystallised quartzite (MQ). This classification is based on the geological work of several researchers (Bastida 1982;Folk 1974;Howard 2005;Wilson 1973). In order to readily identify and manage in large databases, a two-letter code is used for every petrogenetic type. When possible, we try to relate the first letter with a relevant microtextural feature of the rock, e.g. presence of matrix (M) or syntaxially quartz overgrowths (O); whereas the second capital letter applies to the specific rock type, precisely sandstones are labelled as arenites (A) or orthoquartzite (O), and metamorphic quartzites with the letter (Q).
Features to characterise, analysis to understand
We first studied 18 selected hand samples that were used for thin-section preparation to describe their textural and grain features. Once they were described using a stereomicroscope and the information was related with that obtained by standard petrographic methods on an optical microscope, we described the other 687 quartzite pieces from the complete assemblage.
We also described these pieces according to the basic techno-typological characterisation based on Analytical Typology (Laplace 1972). We used the proposal made by Maite García-Rojas to understand technological products (García-Rojas 2010, 2014). First, we classified lithic implements according to three categories: (a) cores, characterised by the presence of one or more knapping surfaces and one or more striking platforms; (b) knapping products, characterised by the presence of a ventral and a dorsal surface and the presence of a butt or striking platform; and (c) chunks, material that could not be classified in the two previous categories due to the lack of criteria. Cores are classified as discoid, Levallois, irregular or cores on flakes according to their features and morphology (Fig. 2). Knapping products are classified as blanks, when there is only one plane on the dorsal surface (including flakes, blades and bladelets), or as core preparation/rejuvenation products, knapping products with two different planes on the dorsal surface. One of these planes has a percussion platform while the other has a dorsal surface. This category includes core tables or core-trimming elements. Finally, blanks are classified into four categories according to the quantity of scars on the dorsal surface. In addition, they are classified into another four categories according to the quantity of cortex on the dorsal surface (Fig. 3). Retouch is analysed as an accumulative feature on technological products, and we have only counted the number of primary types on each piece (Fernández-Eraso and García-Rojas 2013; Laplace 1987). Finally, the metrical characterisation of the pieces was done using their weight. The non-destructive petrological characterisation of quartzites in both assemblages was based on four different scales of observation. The first one is based on naked-eye description. The second one uses two × 10 and × 20 hand magnifiers, a stereomicroscope and the Dino-Lite digital microscope at the same magnifications. The third scale employs × 50 magnification provided by the Dino-Lite digital microscope and by stereomicroscopes. The fourth applies × 250 magnification with the first instrument. Most of the pieces were photographed in a flat position at × 50 and × 250 magnification to create a library of reference pictures. We preferentially used the Dino-Lite model AD7013MZT with polarized light to eliminate most of the surface lustre. The microscope was handled with the MS35B vertical stand. We used the software Dino-Capture 2.0. The stereomicroscope used was the Nikon SMZ800, with up to × 120 magnification. Non-destructive rock characterisation is based on (a) qualitative characterisation of texture and quartz grain features, (b) quantitative-qualitative characterisation of quartz grain size and orientation and (c) characterisation of non-quartz elements (Fig. 4).
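The techno-typological scheme just described can be encoded as a small data structure for consistent recording. In the sketch below, the scar-count classes (0, 1, 2, ≥3) and cortex classes (no cortex, <33%, 33–66%, >66%) are our reading of the text and Figure 3, so the exact boundaries should be treated as an assumption rather than the authors' recording system; the names are hypothetical.

```python
# Minimal encoding of the techno-typological categories used in this study.
# Class boundaries inferred from the text and Figure 3 (assumed, not verbatim).

CORE_TYPES = ("discoid", "levallois", "irregular", "core on flake")

def scar_class(n_scars: int) -> str:
    if n_scars <= 0:
        return "no scars"
    if n_scars == 1:
        return "1 scar"
    if n_scars == 2:
        return "2 scars"
    return "3+ scars"

def cortex_class(cortex_percent: float) -> str:
    if cortex_percent <= 0:
        return "no cortex"
    if cortex_percent < 33:
        return "<33% cortex"
    if cortex_percent <= 66:
        return "33-66% cortex"
    return ">66% cortex"

def classify_blank(n_scars: int, cortex_percent: float) -> tuple:
    return scar_class(n_scars), cortex_class(cortex_percent)

print(classify_blank(1, 70))   # ('1 scar', '>66% cortex') -- cf. Figure 3b
```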
Qualitative characterisation of texture, packing and quartz grain features
We qualitatively systematised the lustre of the quartzite surfaces, assessed with the naked eye, into four categories: (a) no lustre, (b) low lustre, (c) medium lustre and (d) high lustre.
The density of surface micro-cracks/chips on the quartzite surface was also assessed with the naked eye. These areas appear as lighter, sparkling, scale-shaped elements on the surface of the quartzites (Fig. 5). We systematised their presence using the following criteria: (a) absence, (b) small, (c) medium or (d) high presence of micro-cracks.
Quartz grain boundary definition/recognisability was rated on a surface section of approximately 1.6 × 1.3 mm at × 250 magnification as (a) very easy, when at least 25 quartz grains were recognisable; (b) easy, when between 15 and 25 quartz grains were observable; (c) difficult, when between 5 and 15 quartz grains could be recognised; and (d) impossible, when fewer than 5 quartz grains were recognisable.
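A minimal sketch of this rating, assuming that the category limits behave as at-least thresholds (25, 15 and 5 grains); the original text leaves the treatment of exact boundary counts open, so the comparisons below are our own reading.

```python
def grain_boundary_recognisability(grains_recognised: int) -> str:
    """Rate grain boundary recognisability in a ~1.6 x 1.3 mm field at x250 magnification."""
    if grains_recognised >= 25:
        return "very easy"
    if grains_recognised >= 15:
        return "easy"
    if grains_recognised >= 5:
        return "difficult"
    return "impossible"

# Example: a field with 18 recognisable grains is rated "easy".
assert grain_boundary_recognisability(18) == "easy"
```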
We distinguished five different types of packing in hand specimens according to the categories used for thin sections and the following criteria (Fig. 6):
1. Floating, if the quartz grains are separated from one another by matrix or cement.
2. Punctual or isolated, when the quartz grains lie close to one another and the contacts between them are restricted to single points. At magnifications between × 50 and × 250, the grains are close to each other, but contacts are very small or absent.
3. Tangential, when the grains are joined together but cement is still present. At × 50-× 250 magnification, the cement is restricted to a thin layer between the limits of the quartz grains.
4. Complete, where there is a very small content of cement or matrix and the grains create an almost complete texture. Matrix or cement is almost absent, limited to a few accumulations in small confined areas between grains. They are delimited by very weak, fine and straight contours.
5. Suturated, when the limits between grains generate a complete and deformed texture. Matrix or cement is restricted to small points on quartz limits, as in the previous packing type.
We distinguished six different types of texture, not only attending to the perception of the surface relief or the feel (touch) of the quartzite surface, but also using criteria observable by stereomicroscope. Texture categories are the following: a. Saccharoid texture is defined by the generalised presence of matrix or a carbonate cement on the surface of the samples. The touch is granular and sandy and it is usually heterogeneously coloured to the naked eye. At low magnification, slopes and rough relief are appreciated. At middle magnification, it is possible to observe the presence of isolated to tangential quartz grains surrounded by large amounts of matrix and secondary diagenetic cement, creating a rough relief. Some quartz grains can sparkle, but the lustre is not homogeneously distributed. At high magnification, quartz grains tend to be isolated and, again surrounded by matrix, generally as the sum of small specks that cover the surface of quartz and other mineral grains (Fig. 7). b. Granular texture is defined by a clear granular touch on the surface of the sample. The quantity of matrix or cement is reduced and its presence is restricted to small areas or the surroundings of the edges of the grains. To the naked eye, it generally shows a heterogeneous colour distribution. At low magnification, softly rough to flat relief is appreciated. At middle magnification, it is possible to recognise quartz grains and some of them can sparkle, but lustre is not homogenous. At high magnification, grains of these rocks are also recognisable and small quantities of cement or matrix (mainly formed by the sum of small specks) fill the small empty areas between quartz grains (Fig. 7).
c. Compact and grainy texture is defined by the presence of a very soft granulated touch on the surface of the sample.
There is no presence of matrix or cement. To the naked eye, the colour is more homogeneous and micro-cracks are recognisable. At low magnification, the relief is gentler than that in the previous texture and some of the surfaces show successive planes of squamous surfaces. At middle magnification, most of the grains are still recognisable, especially the lighter grains which show clear and curved outlines, although some parts of them cannot be appreciated. In a same way, at high magnification, grains are recognisable, especially those with thicker, brighter and curved outlines. Other grains or parts of the outlines are vague or diffused. The previously commented lighter grains are visible (Fig. 7). d. Fine and grainy texture is defined by a smooth touch and a moderate lustre. In general, grains are difficult to observe completely. Although the touch is fine, small rough areas could be observed, mainly generated by the presence of secondary ferruginous, siliceous or carbonate precipitates or by the presence of joints. To the naked eye, colour is relatively homogeneous, except in micro-cracks, which are generally brighter. At low magnification, the relief is soft without roughness, although some surfaces of the quartzite show squamous surfaces. At middle magnifications, grains are hardly recognisable and there is a thin and bright lustre covering the surface of the rock. At high magnification, grains are almost unrecognisable and only some outlines can be appreciated. Small specks are visible on the outlines of the quartz grains and they seem to be on the same level as the rest of the surface, covered by the previously mentioned thin and bright lustre (Fig. 8). e. Fine texture is defined by a really smooth touch and a shiny/brilliant lustre. In general, grains cannot be observed and only some small relicts from their outlines are seen. The touch is very fine and, as in the previous texture, some areas can be rougher due to cement precipitations. To the naked eye, colour is homogenous and shows a high lustre. Micro-cracks are much more limited than those in the previous texture, although they are present. At low magnification, the relief is very soft and no rugosity is observed; neither is there a large quantity of squamous surfaces, which are reduced to very small areas. At middle magnification, no grains are recognisable and only small specks are visible. Thin and bright lustre is observed and it clearly covers the entire surface of the quartzite. At high magnification, no grain is observed and only small specks or associations of them are recognised on the surface, reduced to small areas (Fig. 8). f. Soapy texture is quite similar to the preceding one. The touch is soapy, even smoother than that in the previous texture, and no grains are observed. To the naked eye, colour is really homogeneous, very bright, and microcracks are limited. At low magnification, the relief is plain and soft and there is almost no plain squamous surface. At middle magnifications, no grains are recognisable and neither are the small specks appreciated in the preceding texture. Previous thin and bright lustre is extended onto the entire surface of the quartzite. At high magnifications, only a few grains are recognisable, not by the outlines but by a small, thin and almost unrecognisable bulging that creates a wavy micro-relief. Hardly any small specks are appreciated (Fig. 8).
Quartz grain features were determined by applying morphological features of the border of the quartz grains. We reduced the variability of each quartzite to the two prevailing categories. These were: 1. Quartz grains with plain and angular limits (Fig. 9a). 2. Quartz grains with plain and rounded limits (Fig. 9b).
3. Quartz grains with ruffled and irregular limits generated by the effect of matrix or cement (Fig. 9c). 4. Quartz grains with the appearance of regrowth of syntaxial quartz cement, recognisable by the partial or complete dual grain outline that creates a lighter, glossy and curved space between both lines, generally in concavo-convex contacts between grains (Fig. 9d). 5. Quartz grains with ruffled, irregular and thin limits and surfaces with flat relief (Fig. 9e). 6. Quartz grains with no boundaries detected, where the limits are reduced to small alignments of specks or small saturated lines (Fig. 9f).
Fig. 9 b Quartz grains with plain and rounded limits, from sample HA-5632. c Quartz grains with ruffled and irregular limits generated by the presence of matrix or cement, from sample ATS-339. d Appearance of regrowth of syntaxial quartz cement (arrow on syntaxial quartz cement in the direction of its growth), from sample ATS-308. e Quartz grains with ruffled, irregular and thin limits on a flat surface, from sample HA-5500. f Quartz grains with no apparent boundaries detected, reduced to small alignments of specks, from sample HA-5519
Qualitative characterisation of quartz grain size and orientation
As noted above, we tried to recognise in hand specimens the mineral and textural features observed in thin sections, as a way to link the information from destructive and non-destructive methodologies. Nevertheless, given the lower resolution of stereomicroscopes, the lack of polarising light to properly recognise grain features, the amount of information that would otherwise have to be generated and the limited possibility of applying other techniques, here we measure quartz grain size on a qualitative scale. This allows us to analyse a large number of rock implements using only two criteria that systematise the grain size of quartzites.
On the one hand, we determined mean grain size at × 50 and × 250 magnification, obtaining an approximate measurement of the secondary axis of the particles. The measurement was performed with the measuring mode provided by the software Dino-Capture 2.0. In general, more than 20 grains were measured per sample. We then used three categories, based on the Udden-Wentworth scale (Wentworth 1922) and modified for application to quartz grains following Prieto et al. (2019): (a) coarse quartz grain size, used for quartzites containing quartz grains bigger than fine sand (> 0.25 mm); (b) medium quartz grain size, used for quartz grains between coarse silt and fine sand (0.031 to 0.25 mm); and (c) fine quartz grain size, used for quartzites with grains smaller than coarse silt (< 0.062 mm). For heterogeneous samples, the biggest grain size determined was assigned as the size of the quartzite.
On the other hand, we completed the qualitative analysis of quartz grains by describing their sorting degree using the following categories: homogeneous, bimodal or heterogeneous quartz grain according to the size distribution.
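The following sketch illustrates how the grain-size criterion above could be recorded; note that the class limits given in the text overlap slightly (medium is given as 0.031–0.25 mm while fine is given as < 0.062 mm), so the cut-offs used here (coarse > 0.25 mm, medium 0.062–0.25 mm, fine < 0.062 mm) are one possible reading and should be treated as an assumption.

```python
def grain_size_class(secondary_axis_mm: float) -> str:
    """Classify a quartz grain by the length of its secondary axis (mm)."""
    if secondary_axis_mm > 0.25:
        return "coarse"
    if secondary_axis_mm >= 0.062:
        return "medium"
    return "fine"

def quartzite_size(grain_sizes_mm: list[float]) -> str:
    """For heterogeneous samples, the text assigns the size of the biggest grain measured."""
    return grain_size_class(max(grain_sizes_mm))

# Example: a sample whose largest measured grain is 0.30 mm is recorded as coarse.
print(quartzite_size([0.05, 0.12, 0.30]))  # coarse
```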
We also noted the presence of bedding planes on the surface of quartzites, both to the naked eye and using low magnification, according to the following criteria: (a) absence of bedding, (b) non-clear bedding and (c) clear bedding.
Finally, we tried to recognize foliation features on at least one surface of the sample, although this is not an easy task in hand specimens of highly compact quartzites (Fig. 10).
Qualitative characterisation of non-quartz mineral
In addition to quartz, archaeological quartzites contain other minerals in very small proportions. Up to three of the most representative minor minerals were identified in every sample. The observed minerals are the following: as primary phases (a) white mica, (b) feldspars, (c) pyrite and (d) undetermined black minerals, and as alteration phases (e) Fe and/or Mn oxides (Fig. 11).
From petrographic analysis to binocular characterisation
The groups and types established through petrographic analysis of thin sections were used for the non-destructive characterisation of hand specimens by using similar qualitative criteria for the assessment of the implement surface. The description of all these features during the data acquisition process, the comparison with the original rough samples subjected to thin section analysis and the comparison with the reference collection of microphotographs helped us to assign petrogenetic groups and types to every artefact. Figure 12 schematically relates both approaches. MA type (clastic fabric with matrix or non-quartz cement quartz arenite) is characterised through thin section by the association of clastic and cemented texture and floating or punctual packing, as well as by the presence of clastic quartz grains. This is reflected in hand specimen (non-destructive) characterisation by the association between saccharoidal texture and floating or punctual packing, and by the presence of either flat or irregular outlines on angular or rounded quartz grains owing to matrix or cement. There is no presence of lustre or micro-cracks. Some quartzites show bedding planes on their surfaces. In general, in this type, it is easy to recognise at least 25 different grains, although this task is sometimes complicated by the high presence of matrix or non-quartz cement (Fig. 13).
Fig. 10 Microphotograph of the surface of two quartzite pieces at × 50 and × 250 magnification. On the left side, ATS-205 exhibits clear bedding of the surface, with grains preferentially oriented and silica matrix filling the empty space between them. No grain is deformed. On the right side, sample HA-5847 shows clear foliation. It is easy to distinguish schistosity from bedding owing to the modification of quartz grains in the former, while in the latter no modification of quartz grains is observable (both observations at × 250 magnification)
CA type (clastic quartz arenite) is characterised through thin section by the association of clastic grained texture, tangent or tangent-to-complete packing, and the presence of clastic grains delimited by concave-convex quartz limits. This is reflected through non-destructive techniques in the association of granular texture, tangent or sometimes complete packing, and flat and rounded or angular quartz grain limits. Lustre is present, but it is of low intensity. There is a small increase in the presence of micro-cracks. Bedding appears in some of the studied samples and it is generally easy to recognise > 25 quartz grains in a 1.6 × 1.3-mm surface sector (Fig. 13).
OO type (syntaxially overgrown orthoquartzite) is characterised through thin section by the association of clastic grained texture, complete packing, the presence of evenly distributed quartz with undulatory extinction, concavo-convex quartz limits and chiefly by syntaxial quartz overgrowths over the former quartz grains. This is reflected on the quartzite surface by the association between compact and grainy texture, complete packing and flat and rounded quartz grain limits surrounded by the halo of quartz overgrowth. Most of the samples have medium lustre, while others are more variable. The presence of micro-cracks is also variable. It is easy to recognise grains, although not as easy as in the previous sedimentary quartz arenites (Fig. 14).
Fig. 11 Microphotograph of the surface of six quartzite pieces. a ATS-001 at × 250 magnification shows orange/red irregular iron oxides. b HA-5814 at × 50 magnification shows black and irregular manganese oxides. c HA-5654 at × 250 magnification shows two black, plain and subrounded undetermined heavy minerals. d HA-5715 at × 250 magnification shows very small, black, plain and angular pyrites. There are also iron oxides near the pyrites. e ATS-002 at × 50 magnification shows some small, white and bright exfoliating feldspars. f ATS-308 at × 250 magnification shows some small, white and bright micas with changeable structures
SO type (sutured grain orthoquartzite) is characterised through thin section by the association of clastic grained texture, suturated grained packing, stylolitic boundaries, high presence of undulatory extinction and occasional presence of recrystallised grains or deformation lamellae. In hand specimens, there is no boundary detection, but somehow it is related to fine and grainy texture, suturated packing and flat and ruffled grain boundaries. As in the previous type, a variable lustre is common, as well as the presence of surface microcracks. Grains are less easily recognisable than those in the previous type. Foliation structures are easy to recognise on some of the samples (Fig. 14).
BQ type (bulging recrystallised quartzite) is characterised through thin section by the association of mortar texture, suturated packing and presence of stylolitic quartz grains, deformation lamellae, clear undulatory extinction and significant presence of recrystallised grains. It is associated with characteristics on the quartzite surfaces, such as fine texture, suturated packing, flat and ruffled limits of quartz grains and the non-detection of quartz grains. The quantity of recognisable grains is smaller than five and the presence of surface micro-cracks is again variable. The intensity of lustre is high. Finally, and as in the previous type, foliation is obvious in most of the samples (Fig. 15).
RQ type (subgrain rotation recrystallised quartzite) is characterised through thin section by the high presence of recrystallised quartz grains together with mortar texture, suturated packing and presence of stylolitic quartz grains. On the quartzite surface, this type is associated with a soapy texture and absence of evident quartz grain boundaries and few suturated borders. The intensity of lustre is high. As in previous types, foliation is obvious on some samples. There are almost no micro-cracks on the surface (Fig. 15).
MQ type (grain boundary migration recrystallised quartzite) is characterised through thin section by the association of clastic/foam texture, suturated packing, concavo-convex quartz grain limits, the presence of deformation lamellae and stylolitic quartz grain limits. On the quartzite surface, it is related to soapy texture and the absence of quartz grain boundaries. The intensity of lustre is high, and there is an almost complete absence of micro-cracks on the surface. Foliation is not present (Fig. 15).
Fig. 12 Schematic representation of the features that define the groups and types of quartzites, showing the relationship between the petrographic and non-destructive petrological characterisation. In red, texture and packing characterisation. In blue, the features and morphology of quartz grains. In black, other diagnostic features. The solid lines that link different columns represent the association of similar characteristics. The dashed lines represent unclear or infrequent relationships. Blue triangles represent relationships where the presence of a quartz grain feature conditions texture or packing. In order to avoid duplicated information, features of quartz grains present in previous phases do not appear in the following types. P: packing; T: texture; F: quartz grain feature; M: morphology of quartz grains. In lustre and micro-cracks (M. crack), 0 means absence, 1 low, 2 medium and 3 high lustre intensity or density of micro-cracks/scales
Due to methodological limitations, grain size as observed with stereomicroscopy does not show any correlation with sizes measured in thin sections so we could only approximate a general increase in size for the type OO. Therefore, grain size determination is only used as a cumulative feature to describe variability within the types, as a possible consequence of the internal characteristics of the former sediment from which the quartzite was created.
As discussed by Prieto et al. (2019) for the samples studied in thin sections, neither non-quartz mineralogy nor geochemical XRF data show any clear relationship with petrogenetic types. The influence of the mineralogy of the former sediment and post-depositional weathering could explain the variability of non-quartz minerals. It is important to highlight the limited accuracy of mineral detection just by observing the quartzite surface and also the higher influence of postdepositional weathering on the surface than in inner parts.
Management of quartzite in El Habario
The seven proposed petrogenetic types could be identified by stereomicroscopy in the quarzitic implements (n = 467) from El Habario. Metamorphic quartzite is the best represented group owing to the high quantity of the BQ petrogenetic type, with > 50% of the assemblage. Sedimentary orthoquartzite is the second-best represented group, and both petrogenetic types (OO and SO types) are similarly represented. Finally, the group of quartz arenite is underrepresented ( Table 2). The most frequent grain size category is heterogeneous distribution, in 47% of the pieces, even though homogeneous distribution is also well represented, in 37% of them. Regarding quartzite types and size varieties, nine preferential varieties are associated with OO, SO and BQ types and fine and medium grain sizes.
The most frequent technological category is knapping product, followed by core, and chunk. Cores are restricted to orthoquartzite and quartzite groups (Table 3). Cores made on orthoquartzite and BQ type appear in similar percentages. There is a higher representation of knapping products in the OO, SO and BQ types than in RQ and MQ quartzites. Finally, chunks are only represented in MA, OO, SO and BQ types.
We identified 71 cores. The most frequent is the irregular one, followed by discoid, core on flake, levallois and prismatic-shaped core (Online Resource 1). There is no clear correlation between type of core and petrogenetic quartzite type, even though most levallois cores are made on SO type. In addition, the representation of core on flake is different between petrogenetic types, especially comparing the orthoquartzite and quartzite groups.
We identified 355 knapping products. The most frequent type is blank. Core preparation/rejuvenations products are scarce, forming less than 2% of the assemblage and they are only made on SO, BQ and MQ petrogenetic types. Regarding blanks, there are some differences between petrogenetic types and the quantity of negative scars on them, especially in the comparison between CA and MQ types with all other types. In the first two types, all blanks possess at least two negative scars, while on the other types, each category is represented. There are more blanks with high quantity of negative scars on SO and RQ types than on BQ and OO types. The latter type is the quartzite with greatest frequency of blanks without negative scars (Online Resource 2). The extension of cortex on dorsal surfaces is different on each petrogenetic type, especially because of the absence of cortical surfaces on MQ blanks and the larger extension of them on CA blanks. In addition, blanks with broad cortex of more than 66% are more frequent in the OO type than in SO, BQ and RQ (Online Resource 3).
According to the presence of retouch on different petrogenetic types, the CA and RQ types are, on proportion, more retouched than OO, SO, BQ and MQ types (Online Resource 4). In addition, there is a moderate increase in the frequency of retouched artefacts with the increase in deformation/ metamorphic processes throughout the OO, SO and BQ types.
There are clear differences in weight based on the technological categories and petrogenetic types (Fig. 16). They are clear in the core category because of the absence of OO and SO cores lighter than 40 g. This is especially significant when these cores are compared with those in RQ quartzite, which are all lighter than that weight. In addition, cores heavier than 600 g are limited to those made on the OO and BQ petrogenetic types. Moreover, the mean weight indicates that the heavier cores are made on OO and SO types, followed by BQ. Finally, RQ cores are the lightest. There are also differences in the weight of knapping products. In general, there is a clear decrease in weight from CA quartz arenite to RQ quartzite. In addition, the weight of orthoquartzites and BQ type is more variable than that of RQ and MQ types. Finally, chunk weight also differs between the OO, SO and BQ petrogenetic types due to the decrease in mean weight and variability as deformation increases.
Table note: Columns are petrogenetic types and rows contain the characteristics of grains according to size, classified first by distribution and second by size itself. Values in italics are the categories representing more than 10% of the total cases. Values in bold are the categories representing between 5 and 10% of cases. Finally, values in bold italics are the categories representing between 1 and 5% of cases.
Management of quartzite in El Arteu
Six petrogenetic types have been identified in the quarzitic implements (n = 237) from El Arteu. Sedimentary orthoquartzite (47%) is the best represented group and quartz arenite (21%) is the second most important one. Finally, the group of metamorphic quartzite is the least frequent (19%) ( Table 4). The most frequent grain size distribution category is homogeneous distribution around one mode in 46% of cases, even though general heterogeneous distribution is also well represented. Regarding grain size, the most frequent categories are fine and medium grain sizes. We identified six preferential varieties: three belong to OO type, one with fine size and homogeneous distribution and another two medium-sized varieties with homogeneous and heterogeneous distribution. Another two varieties are BQ and SO types with fine grain size and homogeneous distribution. The last one is the CA type with heterogeneous distribution and medium quartz grain sizes. The most frequent technological category is knapping product, followed by chunk and core. Except for the RQ quartzite, cores are represented in every petrogenetic type (Table 5). Finally, chunks are well represented in the MA, RQ and OO types, while they are underrepresented in the SO and CA petrogenetic types.
We identified 17 cores in the whole collection. The most frequent type is core on flake, with eight, followed by discoid and irregular ones, represented by five and four items, respectively. There is only one prismatic-shaped core. There is no clear correlation between type of core and petrogenetic types of quartzite (Online Resource 5).
One hundred and seventy-four knapping products were identified in the lithic assemblage from El Arteu. The most frequent type is the blank, amounting to over 97% of the items. Core preparation/rejuvenation products are scarce and they are only represented on CA, OO and SO types. There is great variability in the number of negative scars on blanks depending on the quartzite type, because of the smaller presence of negative scars on the quartz arenite group than on orthoquartzites (Online Resource 6). Cortical areas on blanks are more frequent in the CA type, while they are uncommon in the others, especially in the SO type (Online Resource 7). SO orthoquartzites and the quartzite group are, in general, more retouched than the other types of quartzites. In addition, OO orthoquartzite is more retouched than the quartz arenite group (Online Resource 8). The OO type, followed by the SO and BQ types, shows the highest representation of blanks with multiple primary types.
Table note: Columns are the categories of knapping products and the total of items belonging to each petrogenetic type. The columns % are the percentage of each petrogenetic type in relation to each category of knapping product, while the columns rel.% are the percentage of each category of knapping product in relation to each petrogenetic type of quartzite. Values in italics are the categories representing more than 10% of the total cases. Values in bold are the categories representing between 5 and 10% of cases. Finally, values in bold italics are the categories representing between 1 and 5% of cases.
The relationship between petrogenetic types and technology through weight demonstrates clear differences (Fig. 17). Cores made on MA type are heavier than cores made on other petrogenetic types. Knapping products and chunks follow similar patterns, and there are none lighter than 5 g in the MA type.
Discussion
From a sample to the whole: losing precision to gain information
Textural and quartz grain features observed through microscopic non-destructive techniques show an association of features and forces analogous to that appreciated through petrographic characterisation. Nevertheless, the diagenetic or metamorphic changes (the forces) that can be more accurately determined by petrographic analysis are not as easily appreciated on the present surface of the rock as they are in thin section. However, the correlation between destructive petrography and stereomicroscope characterisation allows us to recognise some geologic processes in the rocks and, therefore, the petrogenetic types. In this way, it lets us extrapolate the information from the petrographic characterisation of a representative sample to the complete collection that we study by non-destructive methods.
In quartz arenites, grains are easy to recognise and the borders from the former clastic sediment are well defined. They are only modified by the presence of matrix or cement. In this group, compaction and cementation are the diagenetic processes that consolidated former sediments into sedimentary rocks. In the group of orthoquartzites, grain borders are not so easy to observe in hand specimens, since they are modified by silica overgrowth (more evident in the OO type) and the deformation of quartz grains. For these reasons, and also because of the increase in undulatory extinction on quartz grains, lustre is more intense. The presence of micro-cracks is more frequent, due to the increase of compactness. In SO type, foliation is also a consequence of the deformation. Overgrowth and deformation are the main forces that affect and create these features on rock surfaces. In the group of metamorphic quartzites, individual grains could not be recognised due to high deformation, but also because of the presence of new small recrystallised grains that create a bright and crystalline surface. The progression of the degree of metamorphism generates more crystalline structures, resulting in very smooth and flat surfaces that make it impossible to recognise the original quartz grains in hand specimens and sometimes also in thin sections.
Despite the limitation imposed by grain size observation on quartzite surfaces, we could accomplish grain size measurement of each lithic piece using qualitative categories based on descriptive criteria for medium size and quartz grain distribution. These criteria are based on petrographic characterisation of samples using a large quantity of high-resolution measurements (Prieto et al. 2019). These categories allow us to understand preferential varieties selected by prehistoric societies.
Finally, mineral characterisation was not correlated with the information provided by other criteria. As discussed above, the influence of the mineralogy of the former sediment and especially post-depositional weathering could explain the variability of non-quartz minerals.
The proposed methodology helped us to understand not only a few samples of the quartzites, but the complete assemblage of these two Middle-Palaeolithic sites, and therefore, we were able to propose human activities carried out in both places.
From geoarchaeological characterisation to human knowledge
At the El Habario site, the best represented quartzite type is BQ, followed by the OO and SO types. All technological products are represented in these three types, except for core preparation/rejuvenation products in the OO type, indicating essentially complete chaînes opératoires. The degree of exploitation of the SO type and of the quartzite group seems to be greater than that of the OO type (Fig. 18). The representation of quartz arenites is residual, showing only partial processes. The MQ and RQ types are also only slightly represented, with a partial process of lithic reduction. Nevertheless, these chaînes opératoires are more complex, and they are more related to the last stages of the shaping processes.
This information points to different management strategies depending on each quartzite type and related to its properties. We observe fragmented chaînes opératoires reflecting the creation of stores and toolkits of particular quartzite types, associated with the conglomerate outcrops near the El Habario site, which contain pebbles of the petrogenetic types MA, OO, SO and BQ. The BQ and SO types could be the main products of exploitation, while the OO type could be a secondary product. We also observed the discarding of previously stocked raw material and the partial repair and partial discard of toolkits made on materials unrepresented in the adjacent quartzite conglomerate (the RQ and MQ types).
In El Arteu, there is a higher presence of quartz arenites and of the OO type and a decrease in the BQ and SO types. Complete chaînes opératoires on quartz arenites are clear, although they are weakly exploited. The high representation of the OO type and the presence of all technological products show that this quartzite was knapped on site as the main raw material. Its degree of exploitation shows that it was used intensively, similarly to the SO and BQ types, although these are less represented. The RQ type also seems to be intensively exploited, although not every technological product is represented (Fig. 18).
Fig. 17 Boxplot showing the distribution of weight in grams of all lithic remains, grouped first by technological order and second by raw material in the chart on the left, and by petrogenetic type in the chart on the right. There are another four pieces of the MA petrogenetic quartzite type heavier than 60 g that are not shown in the chart (a 722-g chunk, a 296-g blank and two cores of 192 and 263 g in weight)
This information points to management strategies different from those at the previous site, probably due to the availability of different quartzite types. We observe fragmented chaînes opératoires which reflect the use of stores and toolkits, like those created in El Habario, confirming the fragmented character of Middle Palaeolithic stone tool technology (Meignen et al. 2009; Turq et al. 2013, 2017). We also observed the use of CA quartz arenites and the OO type, probably obtained from the fluvial deposits near El Arteu. Their use is probably related to a versatility phenomenon rather than an expedient one (Vaquero and Romagnoli 2017).
These data underline the complex management of raw material by Middle-Palaeolithic societies to exploit their habitat in the Cantabrian Region, as also suggested by other studies (Carrión et al. 2008, 2013; Gonzalez-Urquijo et al. 2005). Once we could individualise each type of quartzite, we observed different management strategies guided by their suitability for knapping, their availability and probably by human decisions under changeable circumstances. The coexistence of multiple methods of lithic reduction in the same layer could be a consequence of these issues, as also suggested by other research in the Cantabrian Region (Carrión and Baena 1999; Baena et al. 2012; Maillo 2007; Rios-Garaizar 2010). The use of different environments where different types of quartzite (or other raw materials) were obtained or selected also contributes to this variability, as suggested by other authors (Castanedo 2001; Manzano et al. 2005; Rios-Garaizar 2012; Roy et al. 2017). Finally, the lithic assemblages also show complex management as a consequence of the different human decisions taken to exploit the changeable environment in which these societies lived (Carrión et al. 2013; Cuartero et al. 2015; Santamaría et al. 2010). We would like to underline that such human and socio-economic standpoints, based on the complex management of lithic resources in Middle-Palaeolithic contexts, are being proposed not only in the Cantabrian Region but also in other parts of Europe (Richter 2016; Turq et al. 2013, 2017). All these data are transforming our chrono-cultural perspective of lithic variability into a more complex human, social and economic one.
Fig. 18 Systematic management strategies followed by the human populations who inhabited El Habario and El Arteu, grouped by raw material and petrogenetic types. Abbreviations: MA for clastic fabric with matrix or non-quartz cement quartz arenite, CA for clastic quartz arenite, OO for syntaxially overgrown orthoquartzite, SO for sutured grain orthoquartzite, BQ for bulging recrystallised quartzite, RQ for subgrain rotation recrystallised quartzite and MQ for grain boundary migration recrystallised quartzite
Conclusions
Our main goal in this paper was to explore and fill the gap between petrographic and non-destructive characterisation of quartzite. The starting point of this research is the characterisation of 18 samples through thin-section analysis (Prieto et al. 2019). This characterisation was based on qualitative (also quantitative) description of texture, packing and quartz grain features; the quantitative characterisation of size, morphology and orientation of quartz grains; and the determination of nonquartz minerals. This information helped us to understand the variability of the term quartzite but also to adopt seven petrogenetic types that cover the wide spectrum of rocks used under this term, from the pure sedimentary arenites, the more deformed quartz arenite and the truly metamorphic quartzite types, all of them exploited as raw materials in Prehistory. This information is used to understand different features on the quartzite surface, such as texture, packing, quartz grain morphologies, lustre, presence of micro-cracks, bedding and foliation, and also, the grain size and the non-quartz minerals on the rock surfaces. All these characteristics have been precisely described and they have been associated with the information derived by petrographic analysis. This procedure helps us to classify the different types of quartzite, not only for a sample of the assemblage but for the whole collection. In addition, we also establish different grain size varieties. Despite the lower resolution of the non-destructive approach, we have filled the gap between petrographic and nondestructive characterisation using a solid geoarchaeological procedure. An increase in the number of quartzites analysed, the application of data mining and machine learning software on micro-pictures or even better recognition of weathering effects could improve the presented methodology (Bustos-Pérez et al. 2019).
This study also contributes to the knowledge of raw material management strategies during the Middle Palaeolithic in the Cantabrian Region. Management strategies are related with the creation of stores of blanks or cores of a determined type, their discard in a situation of abundance of raw material, their exploitation depending on the quartzite qualities or toolkit production and maintenance of determined types. All these behaviours are inserted in a fragmented management strategy taking into account the geographic and temporal variables. These data point at the variability and especially the versatility of management of quartzites by Middle Palaeolithic people in a complex economy and society (Carrión et al. 2013;Turq et al. 2013). The data suggest hidden strategies that could only be analysed by the geoarchaeological characterisation of quartzites. The application of the criteria to characterize the materials in potential source areas would enable the determination of the mechanisms used for the selection of raw materials and/or the short-and long-distance mobility circuits and territories of the societies that inhabited this area.
Effects of Different Non-Cage Housing Systems on the Production Performance, Serum Parameters and Intestinal Morphology of Laying Hens
Simple Summary: With the increased interest in animal welfare, poultry housing systems have been a concern for the last decade, and conventional cages have been replaced by non-cage systems or enriched cages. However, the environmental conditions in non-cage housing systems and their association with hens' production performance and health have not been studied extensively. Therefore, the present study compared the indoor environmental microbial content in two different non-cage systems, namely, a plastic-net housing system (NRS) and a floor-litter housing system (LRS), and investigated its effects on the production performance, serum parameters and intestinal morphology of hens during the peak laying period. The results indicated that the NRS resulted in better indoor environmental air quality and ground hygiene than the LRS and enhanced the production performance, antioxidant capacity and intestinal health of hens, represented by positive changes in the laying rate, serum parameters and intestinal morphology.
Abstract: This study investigated the effects of a plastic-net housing system (NRS) and a floor-litter housing system (LRS) on the production performance, serum parameters and intestinal morphology of Shendan laying hens. A total of 1200 30-week-old hens were randomly allocated to the NRS and LRS groups, each of which included five replicates with 120 chickens in each replicate. The experiment was conducted from 32 to 40 weeks of age. Indoor airborne parameters were measured every 2 weeks, and indoor ground contamination was measured monthly. The laying rate and mortality of hens were recorded daily, and egg quality traits and serum parameters were measured every 2 weeks. At 40 weeks of age, four birds per replicate from each experimental group were selected for intestinal morphological observation. The results showed that the airborne bacteria number in the LRS was significantly higher than that in the NRS (p < 0.05) for most of the experimental period (except at 32 and 38 weeks of age), and the bacterial numbers on the surfaces of the floor and floor eggs in the LRS were approximately 10 times higher than those in the NRS (p < 0.05). Compared with the LRS, the NRS improved the laying rate (p < 0.05), reduced serum malondialdehyde (MDA) (p < 0.05) and corticosterone (CORT) concentrations and increased serum glutathione peroxidase (GSH-Px) and superoxide dismutase (SOD) activities, indicating favourable effects on antioxidative status. The NRS was significantly associated with an increased villus height (VH) and villus height to crypt depth ratio (VCR) in the small intestine (p < 0.05) and an increased VCR in the caecum (p < 0.05). Overall, the lower rate of bacterial contamination in the NRS than in the LRS indicated better environmental hygiene. The NRS enhanced the laying performance and antioxidant capacity of hens and was superior to the LRS in improving intestinal health. The current findings support the advantages of the NRS for the health and welfare of Shendan chickens during the peak laying period.
Introduction
The housing system is one of the most important non-genetic factors for hens, affecting both production performance and health status [1,2]. There are several different housing systems in poultry production, mainly including the cage housing system (CRS), the floor-litter housing system (LRS) and the plastic-net housing system (NRS). In some countries, especially in North-West Europe, laying hens have been kept in non-cage systems for many years, as these are perceived as more respectful of animal welfare than cage housing systems because they allow behavioural freedom and promote eco-friendliness [3,4]. The ban on housing hens in conventional cages has led to a search for more suitable non-cage housing systems.
Several studies have been conducted to evaluate the effects of the NRS and LRS on poultry production; however, the results are not consistent. Almeida et al. (2017) [5] found that birds reared in the NRS had a higher laying rate than birds reared in the LRS. Zhang et al. (2018) [6] reported that the NRS decreased the high-density lipoprotein cholesterol content and enhanced the total protein and triacylglycerol contents of birds compared to the LRS. In contrast, Li et al. (2016) [7] favoured the LRS because it was associated with higher body weight gain and a greater number of Bifidobacteria in the caeca of 28-day-old broilers compared to the NRS. Wang et al. (2015) [8] concluded that broilers raised in the LRS had increased gizzard weights at days 21 and 42 compared to those raised in the NRS. However, the environmental conditions in non-cage housing systems and their association with laying hens' hygiene and health have not been studied extensively. Aerosol microbial contamination was found to be higher in the LRS than in the NRS [9]. Birds raised in the NRS showed better hygiene and had a lower incidence of hock injury and footpad dermatitis than those reared in the LRS [5]. Birds in aviaries with wire mesh flooring had fewer wounds on their body surface and lower mortality than hens in aviaries with plastic-slatted flooring [10]. Although non-cage systems provide more opportunities to perform natural behaviours than cage systems, the monitoring of their environment needs further attention.
In the present study, we hypothesised that different non-cage systems might affect hens differently because of their different environments and rearing methods. Therefore, two different non-cage housing systems, i.e., the NRS and the LRS, were used under the same management conditions to compare their indoor environmental microbial content and to investigate their effects on the production performance, serum parameters and intestinal morphology of hens during the peak laying period.
Materials and Methods
The experimental protocol of the current study was approved by the Committee for the Care and Use of Experimental Animals at Anhui Academy of Agricultural Science under permit No. A11-CS06.
Animals and Management
The Shendan chicken, which originated in North China, is used as a dual-purpose breed and is one of the most well-known and popular local chicken breeds in Hubei Province. A total of 1200 healthy, 30-week-old commercial Shendan laying hens with similar body weights (1295.20 ± 106.54 g) that had been raised in cages were obtained from Hubei Shendan Health Food Co., Ltd., Anlu, China, and were randomly divided into the NRS and LRS groups. Each group included 5 replicate pens with 120 birds per replicate. Birds in the NRS treatment group were raised indoors on a perforated plastic floor; the faeces dropped onto a belt under the plastic floor and were removed every day. Birds in the LRS treatment group were raised indoors on a floor covered with wood shavings that was cleaned every 2 weeks. Each replicate pen in both groups had the same indoor stocking density (4.4 birds/m²) and a free-range area measuring 8 × 6 m (2.5 birds/m²). Several nest boxes were provided in the indoor houses for the hens to lay eggs. The free-range area, which was used as an activity field, was separated from the surrounding areas by wire fences. Feeders and bell drinkers were located in both the indoor and free-range areas. Some yellow wooden perches were also available for the chickens to rest upon. A preliminary trial was conducted for 2 weeks, and the formal experiment was performed from week 32 to week 40. The poultry houses of the two housing systems were close to one another.
Measurement of Indoor Airborne Parameters
The airborne bacteria at different sites in each replicate pen (one plate each for the front, middle and back sides) were evaluated by the sedimentation plate method every 2 weeks. An uncovered culture plate (9 cm in diameter) containing 20 mL of culture medium (NA, Nutrient Agar) was placed and exposed to the air at 0.5 m above the ground for 5 min. Thereafter, the plates were collected and incubated in the dark at 37 °C for 48 h using a thermostatic incubator (DRP-9052, Nade Scientific Instrument Co., Ltd., Zhejiang, China), and the airborne bacteria number was calculated from the colony count, plate area and exposure time (see the sketch below), where N is the number of colony-forming units (CFUs) per plate, A is the base area of the plate in cm², T is the exposure time in min and C is the resulting airborne bacteria number in CFU/m³. The airborne bacteria number for each replicate was the mean value of the front, middle and back sides. Simultaneously, the indoor temperature, relative humidity and CO2 concentration were recorded. The temperature and relative humidity were measured with a Portable Temperature Measuring Instrument (Fluke 971, Tianchuang Instrument Co., Ltd., Zhuhai, China). The CO2 concentration was detected with a Carbon Dioxide Gas Detector (GT-903-CO2, Korno Electronic Technology Co., Ltd., Shenzhen, China).
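The paper does not reproduce the settle-plate equation itself; the sketch below assumes the commonly used Omeliansky formula, C = 50,000 × N / (A × T), which matches the variables and units defined above, so both the constant and the formula choice should be read as our assumption rather than the authors' stated method.

```python
import math

def airborne_bacteria_cfu_per_m3(cfu_per_plate: float,
                                 plate_diameter_cm: float = 9.0,
                                 exposure_min: float = 5.0) -> float:
    """Estimate airborne bacteria (CFU/m^3) from a settle plate, assuming the Omeliansky formula."""
    plate_area_cm2 = math.pi * (plate_diameter_cm / 2.0) ** 2   # A, base area of the plate
    return 50_000.0 * cfu_per_plate / (plate_area_cm2 * exposure_min)

# Example: 30 colonies on a 9-cm plate exposed for 5 min -> roughly 4.7 x 10^3 CFU/m^3.
print(round(airborne_bacteria_cfu_per_m3(30)))
```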
Sample Collection
To determine indoor ground bacterial contamination, three samples (one each from the front, middle and back sides) were collected from the surfaces of the indoor floor, nest eggs and floor eggs. The surfaces were sampled with sterile swabs, which were then transferred into screw-cap tubes with 50 mL of DEPC (diethylpyrocarbonate) water. Samples were stored at −4 °C until analysis. All samples were collected every 4 weeks.
DNA Extraction and PCR Amplification of 16S rRNA Sequences
The collected sample was filtered through a 0.22-µm membrane and eluted with 1 mL of DEPC water. Bacterial genomic DNA was extracted using a PowerMag Microbiome DNA isolation kit (OMEGA Bio-Tek, Norcross, GA, USA) according to the manufacturer's instructions. The V4-V5 hypervariable regions of the 16S rRNA genes were PCR-amplified from microbial genomic DNA using the universal primers V515F (5′-GTGCCAGCMGCCGCGGTAA-3′) and V907R (5′-CCGTCAATTCMTTTRAGTTT-3′). PCR was performed in a 20-µL reaction system containing 0.8 µL of each primer, 10 ng of template DNA, 4 µL of 5× FastPfu buffer, 2 µL of 2.5 mM dNTPs and 0.4 µL of FastPfu polymerase. The thermocycling parameters were as follows: a 2 min initial denaturation at 95 °C; 30 cycles of denaturation at 95 °C for 30 s, annealing at 50 °C for 30 s and elongation at 72 °C for 45 s; and a final extension at 72 °C for 10 min. The amplicons were pooled, purified and then quantified using a NanoDrop 2000 UV-vis instrument (Thermo Scientific, Wilmington, DE, USA).
Preparation of the Plasmid Standard
The plasmid standard for quantitative real-time PCR (qRT-PCR) was constructed as follows. According to the amplification positions of the bacterial 16S rDNA universal primers (515-F/907-R), a 412-bp sequence, corresponding to Escherichia coli 16S rDNA positions 515-926, was inserted into the cloning vector pUC57 (the plasmid standard in this experiment was obtained by gene synthesis), and the resulting recombinant standard plasmid was named pUC57-16S rDNA. The total length of pUC57-16S rDNA was 3122 bp, and the extracted plasmid concentration was 142.93 ng/µL. The copy number of pUC57-16S rDNA was 4.17 × 10¹⁰ copies/µL. The plasmid standard was diluted to 4.17 × 10⁹ to 4.17 × 10³ copies/µL with double-distilled water, and 7 concentrations were used to establish the standard curve for qRT-PCR.
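The reported copy number follows from the plasmid concentration and length under the usual assumption of an average molar mass of about 660 g/mol per base pair of double-stranded DNA; a minimal check of that arithmetic is sketched below.

```python
AVOGADRO = 6.022e23          # molecules per mole
BP_MOLAR_MASS = 660.0        # g/mol per base pair of dsDNA (approximation)

def plasmid_copies_per_ul(conc_ng_per_ul: float, length_bp: int) -> float:
    """Convert a dsDNA plasmid concentration (ng/uL) into copies/uL."""
    grams_per_ul = conc_ng_per_ul * 1e-9
    grams_per_copy = length_bp * BP_MOLAR_MASS / AVOGADRO
    return grams_per_ul / grams_per_copy

# 142.93 ng/uL of the 3122-bp pUC57-16S rDNA plasmid -> about 4.17 x 10^10 copies/uL,
# matching the value reported in the text.
print(f"{plasmid_copies_per_ul(142.93, 3122):.2e}")
```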
qRT-PCR
qRT-PCR (absolute quantification) was used to determine the bacterial 16S rDNA copy numbers in the samples. qRT-PCR was performed in a final reaction volume of 20 µL containing 0.5 µL of each primer, 1 µL of template DNA, 10 µL of 2× TB Green Premix Ex Taq II and 8 µL of double-distilled water, using the SYBR® Green PCR Master Mix Kit (TaKaRa, Osaka, Japan) with a CFX96 Real-Time PCR Detection System (Bio-Rad, Hercules, CA, USA) and the following program: a 30 s initial denaturation at 94 °C followed by 40 cycles of denaturation at 94 °C for 5 s and annealing/extension at 61 °C for 50 s. All reactions were performed in triplicate for each sample. Bacterial 16S rDNA copy numbers are presented as the logarithm (base 10).
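Absolute quantification of this kind is normally done by regressing the quantification cycle (Cq) of the plasmid dilution series against the log10 of its known copy number and then reading unknown samples off that line; the sketch below illustrates that standard-curve step with invented Cq values, so the numbers (and the use of NumPy's polyfit) are purely illustrative and not taken from the paper.

```python
import numpy as np

# Seven-point plasmid standard: 4.17e9 down to 4.17e3 copies/uL (ten-fold dilutions),
# with hypothetical Cq values for illustration only.
standard_copies = np.array([4.17e9, 4.17e8, 4.17e7, 4.17e6, 4.17e5, 4.17e4, 4.17e3])
standard_cq = np.array([8.1, 11.5, 14.9, 18.3, 21.7, 25.1, 28.5])

# Linear fit: Cq = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(standard_copies), standard_cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0   # amplification efficiency implied by the slope

def copies_from_cq(cq: float) -> float:
    """Estimate 16S rDNA copies/uL of an unknown sample from its Cq via the standard curve."""
    return 10 ** ((cq - intercept) / slope)

print(f"efficiency ~ {efficiency:.2%}, sample at Cq 20 ~ {copies_from_cq(20.0):.2e} copies/uL")
```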
Production Performance
Laying rate and mortality were recorded daily. The body weight of the birds was recorded twice a week. Twenty eggs were randomly collected from each replicate for egg-quality measurement twice a week. All eggs were stored indoors at 18-20 °C and were measured within 4 h after laying. Egg weight was measured using an electronic scale with an accuracy of 0.01 g. Shell strength was measured with an eggshell force gauge (RH-DQ200, Runhu Instrument Co., Ltd., Guangzhou, China). Haugh units (HUs) were measured using an electronic egg tester (EMT-7300, Sanly Chemical Food Co., Ltd., Shenzhen, China).
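For reference, electronic egg testers of this kind conventionally derive the Haugh unit from albumen height and egg weight using the standard Haugh formula; the sketch below shows that calculation, which is a general convention rather than something specified by the authors for this particular instrument.

```python
import math

def haugh_unit(albumen_height_mm: float, egg_weight_g: float) -> float:
    """Standard Haugh unit: HU = 100 * log10(h - 1.7 * w**0.37 + 7.6)."""
    return 100.0 * math.log10(albumen_height_mm - 1.7 * egg_weight_g ** 0.37 + 7.6)

# Example: a 60-g egg with 7 mm of thick albumen scores roughly 84 HU.
print(round(haugh_unit(7.0, 60.0), 1))
```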
Measurements of Serum Biochemical Parameters
Sixty birds from each group (12 for each replicate) were randomly selected for blood sampling twice a week. A 4-mL blood sample was collected from the wing vein of the chickens into 2 heparinised tubes (2 mL in each tube). The time between securing the bird and obtaining the blood sample did not exceed 90 s. Samples were placed in an ice bath immediately after collection and then transported to the laboratory for processing. Blood serum was separated by centrifugation for 10 min (3000× g) at 4 °C and stored at −20 °C until analysis. The collected serum was assayed to detect the levels of total cholesterol (T-CH), triglyceride (TG), malondialdehyde (MDA) and corticosterone (CORT) and the activities of glutathione peroxidase (GSH-Px), superoxide dismutase (SOD) and creatine kinase (CK). The concentrations of these parameters were determined by commercial analytical kits (Sigma, Thermo Fisher Scientific, Shanghai, China) with an autoanalyser (Hitachi Ltd., Tokyo, Japan).
Measurements of Intestinal Morphology
At 40 weeks of age, four birds per replicate in each experimental group were randomly selected for intestinal morphological observation. One-centimetre sections from the duodenum, jejunum, ileum and caecum were excised and preserved in 10% neutral buffered formalin solution. Segments were then embedded in paraffin wax, mounted onto slides and stained with hematoxylin and eosin. The stained slides were examined with a Motic BA210 microscope, and the villus height (VH) and crypt depth (CD) were measured using imaging software (Motic Image Plus 2.0 ML Soft, Motic China Group Co., Ltd., Xiamen, China). The villus height to crypt depth ratio (VCR) was calculated as VH divided by CD.
Statistical Analysis
Performance data were subjected to repeated-measures analysis, with each replicate representing an experimental unit. The parameters were averaged for each replicate. Prior to analysis, the normality of the data was verified using the Kolmogorov-Smirnov test, and the homogeneity of variance was examined by Levene's test. Data were subjected to analysis of variance (ANOVA) using the general linear model (GLM) command in SAS version 9.3 statistical software (SAS Institute Inc., Cary, NC, USA). Statistical analyses were performed by Student's t-test. All data are expressed as means ± standard deviations (SDs). Differences were considered statistically significant at p < 0.05.
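As a rough illustration of the analysis pipeline described above (normality check, homogeneity of variance, then a two-group comparison), the following SciPy-based sketch mirrors those steps in Python rather than SAS; the data are invented, and the exact tests and options used by the authors in SAS 9.3 may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nrs = rng.normal(90.0, 2.0, size=5)   # e.g. laying rate (%) per replicate, NRS (hypothetical)
lrs = rng.normal(85.0, 2.5, size=5)   # same trait for the LRS replicates (hypothetical)

# Normality (Kolmogorov-Smirnov against a fitted normal) and homogeneity of variance (Levene).
for label, x in (("NRS", nrs), ("LRS", lrs)):
    ks_stat, ks_p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print(f"{label}: KS p = {ks_p:.3f}")
levene_stat, levene_p = stats.levene(nrs, lrs)

# Two-sample Student's t-test on the replicate means; significance declared at p < 0.05.
t_stat, p_value = stats.ttest_ind(nrs, lrs, equal_var=(levene_p > 0.05))
print(f"Levene p = {levene_p:.3f}, t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```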
Indoor Airborne Parameters
The temperature, relative humidity and CO2 concentration are shown in Figure 1. There were no significant differences in temperature or relative humidity between the two housing systems during the whole period (p > 0.05), and the CO2 concentration was slightly higher in the NRS than in the LRS. The indoor airborne bacteria number is shown in Figure 2. The aerosol bacterial counts in the LRS were significantly higher than those in the NRS (p < 0.05) during most of the experimental period (except at 32 and 38 weeks).
Indoor Ground Contamination
The results of the bacterial counts (i.e., bacterial 16S rDNA copy numbers) for the surfaces of the floor and eggs are shown in Figure 3. The mean values of the bacterial counts for the floor, nest eggs and floor eggs were 8.11 ± 1.08, 6.86 ± 0.51 and 7.77 ± 0.87 units (lg copies/mL), respectively, in the NRS and 9.12 ± 0.85, 7.42 ± 0.68 and 8.91 ± 0.65 units, respectively, in the LRS. By comparing the bacterial counts from the floor and the surfaces of floor eggs, it was found that the mean values in the LRS were nearly 10 times higher than those in the NRS (p < 0.05).
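To make the unit explicit, the sketch below shows, with hypothetical copy numbers, how raw 16S rDNA copy numbers are converted into lg (log10) units and why a difference of roughly one lg unit corresponds to the reported near 10-fold difference between the two systems.

```python
# Minimal sketch of the lg (log10) transformation used for the 16S rDNA counts;
# the copy numbers below are hypothetical, chosen only to approximate the reported means.
import math

floor_nrs_copies = 1.3e8          # hypothetical copies/mL on the NRS floor
floor_lrs_copies = 1.3e9          # hypothetical copies/mL on the LRS floor

lg_nrs = math.log10(floor_nrs_copies)   # ~8.1 lg copies/mL
lg_lrs = math.log10(floor_lrs_copies)   # ~9.1 lg copies/mL

# A difference of ~1 lg unit corresponds to a ~10-fold difference in absolute counts
fold_change = 10 ** (lg_lrs - lg_nrs)
print(round(lg_nrs, 2), round(lg_lrs, 2), round(fold_change, 1))   # 8.11 9.11 10.0
```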
Production Performance
Production performance was measured by body weight (Figure 4A), laying rate (Figure 4B), mortality and some egg-quality traits (Table 1). No significant differences in body weight or mortality rate were found between the two groups (p > 0.05). However, the laying rate in the NRS group decreased gradually with age (by 4.14%), while there was a sharp decline (from 32 to 38 weeks of age) in the LRS group (by 11.39%); the laying rate of birds in the NRS was significantly higher than that of birds in the LRS from 34 to 40 weeks of age (p < 0.05). The egg weight and shell strength in the NRS group were slightly higher than those in the LRS group, while the HUs were slightly lower.
Serum Parameters
The serum parameters of hens are shown in Table 2. Serum T-CH, TG and CORT levels were slightly lower in the NRS group than in the LRS group (p > 0.05), while the level of MDA in the NRS group was significantly lower than that in the LRS group (p < 0.05). Birds raised in the NRS had significantly higher serum concentrations of GSH-Px and SOD than those raised in the LRS (p < 0.05). No significant difference in serum CK concentration was found (p > 0.05).
Intestinal Morphology
The effects of housing type on the morphological parameters of the intestine are shown in Table 3. Compared to those in the LRS group, the VH and VCR in the jejunum and ileum in the NRS group were significantly increased (p < 0.05). Similarly, a higher VH in the duodenum and a higher VCR in the caecum were observed in the NRS group than in the LRS group (p < 0.05). No significant difference in the VH in the caecum or the intestinal CD was observed between the two groups (p > 0.05).
Discussion
Environmental conditions in poultry houses are very important to bird health. Almeida et al. (2017) [5] observed a higher CO2 concentration in an environment containing wood shavings than in an environment with plastic floors, which was likely caused by the microbial degradation of organic matter accumulated in the wood shavings. In contrast, in the present study the CO2 concentration was slightly higher in the NRS than in the LRS during the whole period, which may be due to aerobic fermentation of the excreta from hens and the moisture contained in the excreta [11]. The similar temperatures and relative humidity values in the two houses were related to their identical house structure and close location.
Bioaerosols in poultry houses can cause respiratory problems due to infection as well as general respiratory stress due to constant contact with non-pathogenic bacteria [12]. The airborne bacterial content was found to be higher in the LRS than in the NRS, which is similar to the results obtained by Madelin et al. (1989) [13], who found that the respirable dust concentrations and numbers of airborne microorganisms were significantly higher in deep-litter systems than in net-floor systems. Litter, e.g., wood shavings, straw and rice husk, contributes directly to the airborne dust in the LRS house and is assumed to have served as a reservoir for microorganisms [14]. Birds scratching and moving the litter release particles contaminated with microorganisms into the air, where they can disperse and redeposit on the floor and eggs. This is probably an important cause of the higher bacterial counts on the surfaces of the floor and floor eggs in the LRS compared with the NRS. Similarly, Almeida et al. (2017) [5] concluded that the presence of a plastic-net floor improved plumage hygiene, as the birds had less contact with faeces. Akpobome and Fanguy (1992) [15] also observed better cleanliness of broiler feathers for poultry reared on plastic floors than for those reared on wood shavings. The present study found higher environmental microbial pollution in LRS housing than in NRS housing; this environmental pollution caused by poultry litter (wood shavings) may contaminate the birds' bodies, threaten their health and consequently compromise animal welfare.
The housing system influenced the performance characteristics of the hens. No significant effect of the housing system on the body weight of laying hens was observed in the present study; this was inconsistent with the results of Almeida et al., (2017) [5], who found that broilers reared on plastic nets gained more body weight than those reared on wood shavings. Moreover, Wang et al., (2015) [8] found that the growth performance of broiler chicks reared in the NRS was better than that of broiler chicks reared in the LRS.
However, a novel housing-system effect identified in this study was that hens reared in the NRS had a significantly higher laying rate than those reared in the LRS from 34 to 40 weeks of age, resulting in better production performance. The lower egg production in the LRS may have been caused by the hens, which had been raised in cages, not adapting well to the litter system, leading to a drop in egg production. In addition, some eggs were laid in the litter and were not easily counted [16]. The different findings among studies could be attributed to differences in chicken breeds and environmental conditions. There was no difference in egg weight, shell strength or HUs between the two housing systems, similar to the results of Shimmura et al. (2010) [17], who found no significant effects of various housing systems on egg weight, egg mass or HUs in hens. In contrast, Englmaierová et al. (2014) [16] found that all of the internal and external egg quality characteristics were influenced by the housing system (litter system versus aviary).
Serum biochemical parameters are considered to be important indicators of the physiological and metabolic status of birds and are influenced by numerous factors, among which the housing system is one of the most important [8]. A previous study reported that birds raised in a conventional cage housing system had higher levels of TG and T-CH than those raised in a free-range system [18]. In the present study, the serum T-CH and TG levels in hens showed no significant differences between housing systems but were slightly higher in the LRS group, which partially aligns with the results of Sun et al. (2015) [19], who found that birds raised in the LRS exhibited higher serum TG levels than those raised in the NRS. This may also be associated with the higher serum CK levels in the NRS group than in the LRS group, as the release of CK is thought to be proportional to the intensity and duration of exercise [20], while exercise can briefly lower serum TG and cholesterol [21]. GSH-Px and SOD are usually considered antioxidant indices that reflect the antioxidant status of animals, while MDA is the main final product of lipid peroxidation and has often been used to determine oxidative damage [22,23]. The NRS group had lower serum concentrations of MDA and higher activities of GSH-Px and SOD than the LRS group, which demonstrates the superiority of the NRS in oxidation resistance and indicates a better physiological welfare state of the hens. The enhanced antioxidant status induced by the NRS is likely due to lower rates of bacterial contamination of the air and ground as well as better ground hygiene and plumage conditions [10]. Similarly, the lower serum CORT concentration in the NRS group also indicates less physiological stress, as CORT has been suggested to be a sensitive indicator of environmental stress [24].
The health and morphology of the intestinal tract of birds are easily affected by housing and environmental conditions [25]. The present study measured intestinal morphological parameters such as VH, CD and VCR, as they are frequently used as indicators of nutrient absorption and growth performance in hens [26]. A larger VH and VCR in the small intestine as well as a larger VCR in the caecum were observed in the NRS group. These histomorphological findings were similar to the results of Li et al. (2016) [7], who found that the VCR in the jejunum decreased at 28 days and that the VCR in the ileum decreased at 42 days in the LRS group compared with the NRS group. The poorer intestinal morphology in the LRS group might accordingly be related to the higher rate of environmental bacterial contamination and microbial fermentation in faeces and litter, which could be detrimental to the growth and repair of the intestinal mucosa and indirectly decrease the villus height.
Conclusions
In conclusion, compared to the LRS, the NRS produced a higher-quality environment, i.e., it reduced bacterial contamination in the air and on the floor and was associated with superior laying performance and antioxidative status in hens. This system may have some positive effects on intestinal health, as indicated by a higher VH and VCR in the small intestine and higher VCR in the caecum. The current findings support the advantages of the NRS for the health and welfare of Shendan chickens between 32 and 40 weeks of age.
Further studies are needed to investigate the effects of non-cage housing systems on more production traits in hens during different laying periods.
DEVELOPMENT OF REGIONAL LABOUR MARKETS IN CONDITIONS OF DECENTRALIZATION
The decentralization reform in Ukraine began in 2014 and is aimed at implementing the provisions of the European Charter of Local Self-Government, which provides for the redistribution of powers, resources and responsibilities on the basis of subsidiarity. Currently, local governments are endowed with powers and instruments of influence on enterprises located on their territory, but employment and labour market policies remain the prerogative of the central government. For a comparative analysis of the development of regional labour markets in the context of decentralization, labour force survey indicators are more appropriate, since they reflect the actual policy results from the perspective of households. For the 2015–2019 period in most regions of Ukraine, there was an increase in the level of employment and a decrease in the level of unemployment; at the same time, the gender gap in employment has increased in 16 of 25 regions, and the sectoral structure of employment continues to stagnate. Some regions have significantly increased the coverage of public works, but this may reflect not the activity of territorial communities but the lack of stable employment opportunities. Local self-government bodies should play a key role in the formation and implementation of local employment and labour market policies, and their interaction with the state employment service requires a corresponding transformation.
INTRODUCTION
Decentralization is one of the key European integration reforms, launched in 2014 after the Association Agreement between Ukraine and the EU was signed. The reform is aimed at implementing the provisions of the European Charter of Local Self-Government and involves the transfer of a significant part of powers, resources and responsibility from central executive authorities to local self-government bodies that are in close proximity to the inhabitants of a territory. The redistribution of functions based on the principle of subsidiarity should provide residents with mechanisms and tools to influence local authorities and participate in decision-making. At the basic level, the institution of local self-government is that of united territorial communities ("hromady"), which are then combined into larger ones (hereinafter referred to as "hromady"). In 2014-2019, the formation and unification of hromady proceeded on a voluntary basis; beginning in 2020, it has followed long-term plans approved by the government.
One way or the other, decentralization encompasses all spheres of activity and politics, changing the balance of power and priorities in markets and in society. The labour market is at the intersection of economic, demographic and social factors; therefore, the impact of decentralization requires analysis and improvement of regional employment and labour market policies.
LITERATURE REVIEW
The experience of hromada formation and activity is the focus of research by Ukrainian scientists and analysts. The attention of specialists in the field of labour economics and social economics is focused, first of all, on identifying new risks for the development of regional labour markets associated with decentralization. The most frequently cited risk factors are the following (Benovska, 2019; Libanova, 2017; Rоmanyuk, 2019):
❖ the formation and unification of hromady is often formal in nature, without initiative and without a vision of new opportunities for local economic and social development;
❖ the behaviour of hromady is dominated by a consumerist approach: they are much more focused on receiving subsidies and subventions from higher-level budgets than on increasing economic activity on their territory;
❖ socio-economic disproportions are aggravated not only between regions but also within regions and districts; in particular, the concentration of economic activity in large cities is accelerating, while small towns and villages are declining and emptying;
❖ the structural imbalance between supply and demand in local labour markets is growing, as a result of which unemployment increasingly exists in parallel with a shortage of personnel;
❖ the intensity of all types of labour migration is increasing, from daily commuting to long-term movements abroad; migrant work is becoming a routine employment model;
❖ enterprises and individual entrepreneurs are increasingly choosing a place of registration based on favourable local taxation conditions, as a result of which the asymmetry between the fiscal and social aspects of policy is growing;
❖ in the formation of policy, a clear "top-down" administrative vertical is preserved, which leads to a formal, declarative approach to the development of local development programs, measures for their implementation and monitoring of performance.
A survey of territorial hromady of four regions of the Carpathian area (Kravciv & Instytut Rehional'nych Doslidžen', 2013), conducted in November 2018, showed that only 11% of hromady had developed an employment program or action plan. Some of them used old employment programs, which had been adopted even before the formation of the hromada, without considering the new conditions. Even those hromady that developed their own employment program or plan did not consider real data on the available labour potential. Among surveyed hromady, 2/3 did not have data on the age composition of the population, 1/3 did not have information on the number of officially employed persons, and 1/5 did not know the number of locally registered individual entrepreneurs. Most often, the activity of hromady on labour market issues was limited to a rather formal interaction with the local state employment centre (72% of the surveyed communities), with 56% of communities taking part in the organization of public works. At the same time, almost all communities noted that limited opportunities for official employment on their territory constitute a problem for community development.
An important area of research is also the improvement of tools for measuring and assessing the dynamics of regional labour markets. Most often the following indicators are taken for comparative analysis: the employment rate, the unemployment rate, the rates of accepting and quitting of regular staff at enterprises, the scale of underemployment and informal employment, employment in harmful conditions and industrial injuries, the size of the average monthly wage of regular staff of enterprises and the declared salary in vacancies, performance indicators of the state employment service, including the number and structure of vacancies, the number of registered unemployed per vacancy (Bilyk & Filipchuk, 2019; Шаульська & Кримова, f.a.). The quarterly monitoring of regional socio-economic development (Ministry of communities and territories development of Ukraine 2020) contains 60 indicators in 12 areas, including the "labour market efficiency" block of four indicators: the unemployment rate according to the ILO methodology, the level of employment, the real wage index and the amount of wage arrears as a percentage of the wages fund.
A review of research shows that in Ukraine, in the context of decentralization, employment is viewed primarily as a component of the financial solvency of the local budget and the investment attractiveness of the territory, whereas issues of employment and unemployment are viewed as part of the state employment service activity. The social role of employment and the role of communities in employment policy are greatly underestimated.
The EU's regional policy aims at cohesion and harmonised development, and it is on this basis that it combines job creation, competitive business, economic growth, sustainable development and improvement of the quality of life for people in all regions, cities and rural areas. Cohesion policy involves supporting local development initiatives and empowering local governments to manage funds (EU Budget, 2018). Thanks to this approach, employment and labour market policy is strongly represented at all levels of governance.
In countries with a high level of local decentralization (Poland, Belgium, Denmark, the Netherlands, Spain), the public employment service is part of regional self-government bodies and does not belong to the governmental executive vertical. This makes it possible to bring labour market policy as close as possible to local conditions and needs, and to establish specific local tasks and target groups. The disadvantage of fully decentralizing public employment services is the difficulty of obtaining integrated information on the provision of consulting services and active labour market policy measures across the country as a whole (European Commission, Directorate General for Employment, Social Affairs and Inclusion & ICON INSTITUT, 2016).
In countries where the public employment service is an autonomous public institution (France, Austria, Germany, Greece, Finland), there is also a tendency towards the decentralization of activities at local level. For example, in Germany, the 2003 reform at the local level created public employment agencies (that serve applicants with an unemployment duration of less than one year) and public job centres (that serve long-term unemployed), which closely interact with municipalities. This decentralization of functions and management allowed for a personalized, customer-centric approach, which reduced the average duration of unemployment and increased customer satisfaction for both job-seekers and employers (Finn et al., f.a.).
In countries where the public employment service is a centralized structure subordinate to the ministry (Ireland, UK) or government (Hungary, Sweden), there is also a tendency to deepen cooperation with local authorities. For example, the UK initiated the expansion of the standard service package with additional services (jobcentre plus), which facilitate a quick return to work thanks to the established local funds and tools. Cooperation also takes the form of joining the efforts of the public employment service with local social services to develop joint programs and deliver services tailored to the local situation and needs.
DATA AND METHODS
The situation on the labour market depends on the intersection of many factors, therefore, it is not possible to separately measure exclusively the impact of decentralization. Based on the objectives of the analysis, preference is given to indicators that characterize the social role of employment and, at least at the legislative level, provide for the significant participation and responsibility of local governments for employment and labour market policies.
The current national legislation refers to the powers of local self-government bodies as general administrative measures (preparation of programs for socio-economic development, etc.) (Verkhovna Rada of Ukraine 1997). Although local governments are endowed with powers and instruments of influence on enterprises located on their territory, comprehensive regional policy, employment and labour market policies remain the prerogative of the state and central executive authorities. Regional development strategies, territorial and local employment programs are fully subordinated to the main directions and principles of state policy, local state administrations are responsible for their development and implementation (Verkhovna Rada of Ukraine 2012 and 2015). Local self-government bodies in these processes have rather a coordinating and supportive role, being mentioned last in the list of entities responsible for implementation.
The only labour market policy measure where local governments play a significant role is public works, which by definition are a type of socially useful paid work in the interests of a territorial community. The practical involvement of local governments in organizing public works was facilitated by the establishment, in 2013, of the principle of parity of their funding from local budgets and funds from the Fund of Compulsory State Social Insurance against unemployment (Cabinet of Ministers of Ukraine 2013). This prompted the territorial employment centres, which are strictly subordinate to the central vertical of power, to establish a more effective interaction with local authorities. In the crisis conditions of 2015, the requirement for parity funding was cancelled, since the lack of funds from local budgets called into question the conduct of public works, but at least the tradition of joint participation had been established.
Taking into account the existing distribution of powers, the data of the state employment service and other administrative data rather characterize the activities of the central executive authorities in the implementation of national policy. An exception here is the information on public works, the organization and financing of which are carried out with the active participation of local governments.
For a comparative analysis of the development of regional labour markets, data from labour force surveys are much more informative, since they reflect not indicators of activity, but the actual results of socio-economic policy in a given region, and this information is obtained from households, that is, directly from participants on the labour market. The capabilities of this source also have limitations. First, the survey methodology is clearly regulated and assumes a fairly standard set of statistical indicators. Secondly, when analysing in a regional context, the problem of data reliability arises. In particular, data on the unemployment rate in almost all regions have an error estimate of more than 10%, which limits their use for quantitative analysis (State Statistics Service of Ukraine 2020).
Some researchers offer integral indices that allow synthesizing the content of a set of indicators and ranking regions by the value of the index and sub-indices (Bobukh & Shchehel, 2019). However, for operational monitoring and management decisions, one still needs to focus on the initial values of the indicators, and there should not be many of these indicators. Considering existing sources of regular and reliable data, the developments of national researchers and foreign experience of socio-economic analysis of regional development (Eurostat, 2019), this work uses the indicators listed in Table 1 (source: own compilation).
The base year for comparison is 2015, when the implementation of decentralization began. The last reporting year for which a complete set of statistical indicators is available is 2019. In addition, in 2019, the first stage of decentralization was completed, in which territorial communities were formed and united on a voluntary basis. The dynamics of regional indicators and interregional differentiation for 2015-2019 make it possible to assess the scale and consistency of changes at the stage of hromada formation.
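As an illustration of how the selected indicators can be computed, the following sketch derives an employment rate, an unemployment rate, their change in percentage points and the employment gender gap from labour force survey aggregates; all field names and figures are hypothetical and serve only to show the arithmetic.

```python
# Hypothetical regional labour force survey aggregates (thousands of persons, ages 15-70).
# Field names and numbers are illustrative only.
region = {
    "employed_2015": 520.0, "labour_force_2015": 580.0,
    "employed_2019": 548.0, "labour_force_2019": 595.0,
    "population_15_70_2015": 900.0, "population_15_70_2019": 880.0,
    "employment_rate_men_2019": 62.0, "employment_rate_women_2019": 52.5,
}

def pct(x):
    # one decimal place, as in official statistical releases
    return round(x, 1)

emp_rate_2015 = pct(100 * region["employed_2015"] / region["population_15_70_2015"])
emp_rate_2019 = pct(100 * region["employed_2019"] / region["population_15_70_2019"])
unemp_rate_2019 = pct(100 * (1 - region["employed_2019"] / region["labour_force_2019"]))

# Changes are expressed in percentage points (p.p.), not in percent
emp_change_pp = pct(emp_rate_2019 - emp_rate_2015)

# Gender gap: difference between male and female employment rates, in p.p.
gender_gap_pp = pct(region["employment_rate_men_2019"] - region["employment_rate_women_2019"])

print(emp_rate_2015, emp_rate_2019, emp_change_pp, unemp_rate_2019, gender_gap_pp)
```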
RESEARCH RESULTS AND DISCUSSIONS
Employment rate and unemployment rate characterize the implementation of the labour supply, reflecting the regional potential for economic growth and the availability of the labour market. Employment opportunities and a choice of jobs are the key to the social well-being of the region's residents. On the contrary, a high unemployment rate can be an indicator of depressed areas.
In 2015-2019, the dynamics of employment and unemployment levels in the regions of Ukraine were heterogeneous, which may be, among other things, a consequence of the decentralization process. In most regions, there was an increase in employment, most significantly in the Khmelnytskiy, Luhansk, Sumy, Chernivtsi (more than +4 percentage points in each region) and Zhytomyr (+3 percentage points) regions. At the same time, employment decreased in the Volyn (-2.2 percentage points), Dnipropetrovsk (-1.4) and Zakarpattya (-0.8) regions (Fig. 1). The unemployment rate grew, although insignificantly, in four regions: Volyn, Vinnytsya, Dnipropetrovsk and Mykolayiv. Moreover, in the Volyn and Dnipropetrovsk regions, the increase in unemployment occurred against the background of a decrease in employment. The greatest success in reducing the unemployment rate was achieved by the Sumy, Chernivtsi, Khmelnytskiy and Kharkiv regions (a decrease of more than 2 percentage points). It should be noted that the Zhytomyr, Khmelnytskiy, Volyn and Dnipropetrovsk regions were leaders in the rating of regions in terms of the hromada formation rate (Моніторинг процесу децентралізації, f.a.), but this was reflected in the levels of employment and unemployment in very different ways.
In general, there is a logical pattern that the most significant decrease in the unemployment rate occurred in regions where there was a significant increase in the employment rate, while in regions with a higher level of employment, there is a lower unemployment rate.
Employment gender gap characterizes the situation with the equality of opportunities for women and men in the labour market. In virtually all countries of the world, the employment rate for men is higher than the one for women, as women bear a greater burden of maternal and family responsibilities. Therefore, when analysing this indicator, the main attention is paid to the direction of the trend (with the right policy, the gender gap should be reduced), as well as interregional comparison.
In 2015-2019, the gender gap in employment increased in 16 out of 25 regions of Ukraine (Fig. 2). The largest growth occurred in the Zakarpattya region (a twofold increase), the Mykolayiv and Luhansk regions (by half in each), and the Ivano-Frankivsk and Chernivtsi regions (by a quarter in each). As a result, in 2019, the Zakarpattya, Ivano-Frankivsk and Chernivtsi regions became "leaders" in terms of the size of the gender gap (almost 20 percentage points). The largest reduction in the gender gap in employment took place in the Rivne, Lviv, Vinnytsya and Kharkiv regions. Interregional differentiation of the gender gap in employment may be associated not only with the lack of jobs for women or the specifics of the sectoral structure of employment (spread of informal employment, labour migration, etc.), but also with the (in)accessibility of early childhood development infrastructure.

Sectoral employment structure characterizes the level of productivity and the conditions of employment, for example, the opportunity to have a stable job with official registration, decent wages and a social package.
The industry sector provides predominantly stable, qualified, formal wage employment with wages significantly higher than the national average. Disadvantages of employment in this sector usually include tight working hours and risks of harmful working conditions. In Ukraine, industrial centres located mainly in the eastern regions -Dnipropetrovsk, Donetsk, Zaporizhzhya, Kharkiv, Luhansk, as well as Kiev and Poltava regions have formed historically. The share of industrial employment in these regions is over 15%. The smallest specific weight (less than 10%) is occupied by the industrial sector in the structure of employment in the Odesa, Kherson, Ternopil regions and the city of Kiev (table 2).
The business services sector includes three types of economic activities: information and telecommunications (section J), professional, scientific and technical activities (section M), and activities in the field of administrative and support services (section N according to NACE-2010). This sector is highly innovative, capable of providing high-quality employment with motivation to learn (High-Level Group on Business Services 2014). However, in Ukraine, hired labour in this sector is often disguised as self-employment in the status of individual entrepreneurs, or is carried out in atypical conditions (freelancing, etc.). As a consequence, employment flexibility translates into weak social protection. In terms of the structural share of employment in the business services sector, the city of Kyiv is by far the leader (over 20%), which is explained by the status of the capital. Among the regions, a significant percentage of employment in the business services sector is found in the Kyiv, Kharkiv and Dnipropetrovsk regions (7-9%). This sector is least represented in the Zakarpattya and Chernivtsi regions (less than 3%). regions by 1-3 percentage points, which is a consequence of government reforms to "optimize" the infrastructure network.

Coverage of unemployed with public works characterizes the level of cooperation between local governments and the state employment service. It should be remembered that public works are a temporary form of employment and do not solve the problem of finding a suitable job and stable employment. This is why most EU countries do not practice public works at all, with the exception of Greece, Ireland, Italy and new member states (Anna Manoudi 2014).
For the period 2015-2019 the level of coverage of registered unemployed with public works decreased in five regions (Sumy, Cherkasy, Lviv, Poltava, Ternopil), in three it remained unchanged (Vinnytsya, Rivne and Khmelnytskiy), in other regions, there was an increase (Fig. 3). The most rapid growth was observed in Donetsk (from 13% in 2015 to 54% in 2019) and Luhansk (from 14% to 31%) regions, which have a high unemployment rate. This means that the interpretation of this indicator is rather contradictory. On the one hand, the high coverage of public works testifies to the active position of local self-government bodies, on the other hand, to the unfavourable situation in the labour market and the lack of reliable employment opportunities.
Summarizing the analysis of the indicators, it should be noted that not enough time has passed since the start of the decentralization reform for its advantages to become obvious in such a delicate and multifactorial domain as the labour market. Nevertheless, it is safe to say that several regions have been able to significantly improve their performance. And these are not only regions that previously demonstrated success (Kharkiv, Kyiv regions), but also regions that were long considered economically depressed, such as the Sumy, Cherkasy and Zhytomyr regions.
CONCLUSIONS
Study results make it possible to substantiate the following new directions of regional development policy:
❖ to expand the powers of local self-government bodies in the formation and implementation of employment and labour market policies, in particular, to transfer local employment policy to the list of their own powers and to delegate to them powers to participate in the implementation of the national employment and labour market policy;
❖ to adapt the organizational and managerial model of the state employment service to the conditions of decentralization, providing for more flexibility and freedom in the formation of local target programs and instruments of active labour market policy, for example, for vocational training
"year": 2021,
"sha1": "d48aa960a7ffd913dbdd7aa889a298342daf5d75",
"oa_license": "CCBY",
"oa_url": "https://economy-sociology.ince.md/?edmc=3489",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "08fc22071181b2546d6bbcb51d7935f81dc48330",
"s2fieldsofstudy": [
"Economics",
"Political Science"
],
"extfieldsofstudy": [
"Business"
]
} |
Personalised Advanced Therapies in Parkinson's Disease: The Role of Non-Motor Symptoms Profile
Device-aided therapies, including levodopa-carbidopa intestinal gel infusion, apomorphine subcutaneous infusion, and deep brain stimulation, are available in many countries for the management of the advanced stage of Parkinson’s disease (PD). Currently, selection of device-aided therapies is mainly focused on patients’ motor profile while non-motor symptoms play a role limited to being regarded as possible exclusion criteria in the decision-making process for the delivery and sustenance of a successful treatment. Differential beneficial effects on specific non-motor symptoms of the currently available device-aided therapies for PD are emerging and these could hold relevant clinical implications. In this viewpoint, we suggest that specific non-motor symptoms could be used as an additional anchor to motor symptoms and not merely as exclusion criteria to deliver bespoke and patient-specific personalised therapy for advanced PD.
While the early stages of PD can be effectively managed by oral and transdermal dopamine replacement therapies, treatment of the more advanced phases remains a challenge, partly complicated by the requirement to choose which device-aided therapies (DAT) to offer to which patients, including levodopa-carbidopa intestinal gel infusion (LCIG) with or without entacapone, subcutaneous apomorphine infusion (APO), and deep brain stimulation (DBS). An optimal therapeutic choice is important as advanced PD is associated with motor and non-motor complications which may be refractory to standard oral/transdermal therapy, negatively affecting quality of life [3][4][5][6]. International consensus and standard guidelines have attempted to address ideal DAT selection, but the latter still remains an unmet need [7][8][9]. A recent initiative based on an international Delphi-panel approach identified key motor, non-motor, and functional indicators of advanced PD [10], externally validated in the OBSERVE-PD study [11]. This has led to the development of the '5-2-1' paradigm (≥5 oral levodopa doses/day, ≥2 h of 'off' symptoms/day, ≥1 h of troublesome dyskinesia/day) to identify motor aspects of advanced PD and ensure timely referral for DAT initiation [10]. The interim analysis of DUOGLOBE, an observational study evaluating the long-term effectiveness of LCIG in patients with advanced PD, showed that only 20% of patients met all of the 5-2-1 criteria, but 98% met at least one criterion, highlighting the need for further refinement and personalisation of DAT selection [12].
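As an illustration of how the '5-2-1' screen can be operationalised, the sketch below encodes the three criteria in Python; the data structure and field names are hypothetical, and the rule is only a prompt for specialist referral, not a treatment decision.

```python
# Hypothetical patient record used only to illustrate the '5-2-1' screening rule.
from dataclasses import dataclass

@dataclass
class MotorDiary:
    levodopa_doses_per_day: int          # number of oral levodopa intakes per day
    off_hours_per_day: float             # daily hours spent in the 'off' state
    troublesome_dyskinesia_hours: float  # daily hours of troublesome dyskinesia

def meets_521(d: MotorDiary) -> dict:
    """Return which of the 5-2-1 criteria are met, and whether any/all are fulfilled."""
    criteria = {
        "five_or_more_levodopa_doses": d.levodopa_doses_per_day >= 5,
        "two_or_more_off_hours": d.off_hours_per_day >= 2.0,
        "one_or_more_dyskinesia_hours": d.troublesome_dyskinesia_hours >= 1.0,
    }
    return {**criteria, "any": any(criteria.values()), "all": all(criteria.values())}

# Example: a patient meeting two of the three criteria would still prompt consideration
# of referral, in line with the DUOGLOBE observation that most patients meet at least one.
print(meets_521(MotorDiary(levodopa_doses_per_day=6, off_hours_per_day=2.5,
                           troublesome_dyskinesia_hours=0.5)))
```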
A clinically relevant issue is the debate on whether earlier (than currently adopted in clinical practice) initiation of DAT may be beneficial for patients with PD. The EARLYSTIM study as well as the post-hoc analysis of the GLORIA registry have explored an earlier introduction of DBS and LCIG, respectively, but appropriate timing of DAT initiation largely remains an area of debate [13][14][15]. Moreover, older patients (≥75 years), for whom DBS is often not considered because of risk-benefit uncertainty, may nonetheless benefit from a modified approach involving DBS of several nuclei [16]. Another emergent debate is focused on how non-motor symptoms (NMS) may guide DAT selection for patients with PD as a positive inclusion criterion, rather than being used purely as an exclusion criterion, e.g., severe depression as a contraindication for DBS and severe hallucinations for APO.
Finally, also in relation to initiatives aiming at earlier initiation of DAT in patients with PD, the relatively high costs of DAT need to be taken into account. Here, the societal impact of advanced PD is considerable as the 20% most affected patients are responsible for around 70% of secondary care costs [17]. The costs of DAT can be considerable, but NMS have not been taken into account in cost-effectiveness analyses [18]. This is a relevant observation as NMS contribute at least as much as, if not more than, motor symptoms to quality of life [19,20]. Additionally, motor fluctuations, the most common indication for DAT, are often accompanied by non-motor fluctuations, further affecting perceived quality of life [21,22]. Thus, it seems reasonable to include NMS in the decision to initiate DAT in patients with PD, especially for those with only moderate motor symptoms but severe non-motor burden [23].
Therefore, in this viewpoint, we will focus on the emerging role of the non-motor profile integral to the choice and outcomes of personalised medicine [1] when delivering DAT in PD. We aim to delineate the emerging field of non-motor indications for DAT and discuss possible implications for clinical practice.
Current Use of Non-Motor Symptoms in Device-Aided Therapies Selection
NMS have been proposed as criteria to consider for use of DAT; however, they are not considered in most country-based guidelines by licensing authorities or are merely used as exclusion criteria. The latter has been reviewed as part of the NAVIGATE PD initiative [7], for instance, and NMS constitute both relative and absolute contraindications for certain DAT while data suggests NMS could be improved by DAT. An absolute contraindication (in most countries) for all DAT is severe dementia, whereas non-motor aspects representing relative contraindications are more diverse. For APO and LCIG these include impulse control disorder and dopamine dysregulation syndrome, along with mild to moderate cognitive dysfunction; for DBS the main non-motor contraindications are severe depression and clinically relevant cognitive impairments [7,24]. Moreover, presence of symptomatic orthostatic hypotension, excessive daytime sleepiness, and severe hallucinations could be considered exclusion criteria for APO [25].
Device-Aided Therapies and Differential Effect on Non-Motor Symptoms
While therapeutic decisions and research on DAT have largely focused on the influence and effect on motor symptoms, NMS are an integral feature of PD and, therefore, should play an active part in the decision-making process to select the ideal DAT for patients with PD [7,10]. Although APO, LCIG and bilateral subthalamic nucleus (STN) DBS have been available for many years for the treatment of PD in many countries, head-to-head comparative studies are limited. Following on from the original EuroInf study [26], the EuroInf 2 study is the first and only study concurrently comparing all three DAT [27]. Although open-label in its design, it offers Class IIb evidence on the differential effects of these DAT on NMS measured by the NMS scale (NMSS) total burden and its domain scores. In agreement with other studies, all three therapeutic options confirmed an improvement in motor complications, Hoehn and Yahr stage and quality of life [26][27][28][29][30][31][32]. Although all three DAT decreased total NMS burden, interestingly, each treatment appeared to have a bias towards specific NMS, thus providing some early indications of varied responsiveness to each therapy. For instance, in this cohort of 173 patients, APO decreased the attention/memory domain scores, while bilateral STN-DBS and LCIG did so to a lesser extent that was not statistically significant. Nonetheless, it needs to be acknowledged that patients with cognitive problems would be excluded a priori from receiving DBS. Similarly, in this study patients receiving APO had higher NMSS attention/memory baseline scores compared to the other groups, leaving more room for improvement. Data on patients with severe attention/memory problems are not available. On the other hand, DBS and LCIG appeared to reduce the urinary and gastrointestinal domain scores, respectively. All three treatment options decreased the mood/apathy and miscellaneous domain scores, the latter including weight changes, altered thermoregulation and olfaction as well as unexplained pain. Improvements here were heterogeneous, and while APO reduced weight change-related scores, LCIG and DBS improved most of the symptoms contained within the miscellaneous domain. Aspects of sleep dysfunction and fatigue as measured by the NMSS also improved with both LCIG and bilateral STN-DBS, but not after APO initiation. Finally, there is evidence to suggest that APO and bilateral STN-DBS decrease the perceptual problems and hallucinations domain scores, although typically these are considered contra-indications [10]. The mechanisms behind these associations need to be further elucidated; however, it is possible to argue that, for instance, patients with a history of visual hallucinations that are mainly drug-induced, and that might subside after drug withdrawal at the expense of a troublesome motor worsening, might benefit from DAT initiation. Finally, combined DAT-related data are also emerging and may help us to overcome specific issues [33][34][35][36][37][38].
Non-Motor Effects of Deep Brain Stimulation
Important conceptual advances may hold promise in relation to the delivery of personalised medicine and DAT in PD [2]. In addition to the abovementioned EuroInf studies, this is exemplified by several studies that have been conducted on the non-motor effects of DBS, showing improvements in several non-motor areas that have been reviewed elsewhere [39][40][41][42][43][44]. In brief, a recent meta-analysis, including 48 studies with mainly 12-month follow-up data, suggested post-STN-DBS improvements of depression and anxiety-related symptoms but increased apathy [41]. Another meta-analysis of seven studies with follow-up data ranging from three to 24 months showed post-STN-DBS improvements in sleep quality and restless leg syndrome; however, a high degree of heterogeneity among studies was reported [39,44,45], and few studies have investigated the effect of STN-DBS on REM sleep behaviour disorder [42,46]. Another recently published review summarised post-DBS positive outcomes related to urinary dysfunction (mean bladder volumes at desire and urge point to void), while controversial and limited data are available in relation to sexual, cardiovascular, thermoregulatory and gastrointestinal dysfunction [40]. Finally, even though presence of dementia is a contraindication for DBS, a systematic review of 13 studies showed that although there was a decline in the verbal fluency and attention domains of cognition, other cognitive functions remained unchanged over a follow-up period ranging from six months to eight years [43]. It needs to be acknowledged that most included studies had small cohort sizes and heterogeneous outcome measures.
Further advancements in relation to personalised medicine with DBS might be achieved by directing neurostimulation to specific parts of the basal ganglia and leveraging their specific connectivity profiles [47][48][49].
More theoretical approaches, such as adaptive DBS, have been developed as a method where DBS is turned on and off according to a closed-loop feedback signal recorded from the tissue surrounding the stimulating electrode. This may develop into a personalised approach if it can be shown to activate DBS at times of necessity and reduce it at times of quiescence, for example in sleep, with the aim of a more physiological treatment and potentially reducing the frequency of battery replacements in non-rechargeable systems. Presently, limitations to the clinical application of adaptive DBS are: (1) tremor frequency, beta-band and other oscillations required for the closed-loop feedback arc of adaptive DBS are not recordable in all patients with PD [50]; (2) beta-band activity represents not only pathological alterations, but is also modulated by physiological functions [51,52]; (3) pathological tremor frequency and beta-band oscillations may, in some patients, reflect tremor, bradykinesia and rigidity, but not NMS [53]; (4) motor symptoms can fluctuate at different times of the day than non-motor symptoms [21,22]. As such, situations may arise in which the neurostimulation is not active because tremor frequency and beta-band oscillations cannot be detected, but the patient nonetheless presents with NMS such as pain or depressed mood. Therefore, studies are needed to investigate the effect of adaptive DBS on quality of life and NMS, not only motor symptoms [54].
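The closed-loop principle can be pictured as a simple threshold controller: stimulation is switched on when the feedback signal (for example, beta-band amplitude) exceeds a threshold and switched off when it falls below a lower one. The sketch below is a deliberately simplified illustration rather than a clinical algorithm; the signal values, thresholds and hysteresis margin are hypothetical.

```python
# Toy illustration of a threshold-based closed-loop (adaptive) controller.
# Beta-band amplitudes and thresholds are arbitrary, for illustration only.
def adaptive_dbs(beta_amplitude_series, on_threshold=1.2, off_threshold=0.8):
    """Switch stimulation on above on_threshold and off below off_threshold (hysteresis)."""
    stimulating = False
    schedule = []
    for beta in beta_amplitude_series:
        if not stimulating and beta > on_threshold:
            stimulating = True
        elif stimulating and beta < off_threshold:
            stimulating = False
        schedule.append(stimulating)
    return schedule

# The controller is blind to anything not reflected in the feedback signal: a sample
# with low beta amplitude leaves stimulation off even if the patient has non-motor
# symptoms at that moment, which is the limitation discussed above.
print(adaptive_dbs([0.5, 1.3, 1.4, 0.9, 0.7, 1.5]))
# -> [False, True, True, True, False, True]
```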
Non-Motor Effects of Levodopa-Carbidopa Intestinal Gel Infusion
There is robust evidence on the effect of LCIG on NMS. In 2015, a systematic review identified eight open-label studies confirming that LCIG improved total NMS burden after a follow-up period ranging from six to 25 months, with specific positive effects on sleep and autonomic dysfunction, particularly gastrointestinal issues measured by the NMSS [55]. Additionally, more recent reviews have highlighted the non-motor effect of LCIG, noting a general improvement in the non-motor burden [56,57]. Studies included in these reviews were, among others, the GLORIA registry, whose 24-month follow-up data showed a remarkable beneficial effect of LCIG on sleep disturbances, apathy, and gastrointestinal dysfunction as measured by the NMSS [29], and the interim analysis of the DUOGLOBE study, where an overall improvement in the NMS total burden was also shown after only six months [12]. Additional open-label studies with 6-month follow-up data showed a post-LCIG improvement in NMS total burden, including reduction of the cardiovascular, attention/memory, urinary and miscellaneous domain scores of the NMSS [26,58]. Interestingly, the baseline total burden of NMS in PD can predict a robust total non-motor response to LCIG therapy at two-year follow-up. This observation can underpin DAT selection with an NMS focus, specifically when considering personalised LCIG therapy, for instance [59].
Non-Motor Effects of Apomorphine Subcutaneous Infusion
Although APO has been in use longest compared with DBS and LCIG (APO became available on the European market in the early 1990s), data regarding selection of this device-aided therapy based on patients' non-motor profile are less clear-cut, and the results from the double-blind TOLEDO study are awaited with interest [60]. However, several open-label and case report-based studies show that this treatment can have a beneficial effect on the NMS total burden as well as on specific non-motor areas, and these have been reviewed elsewhere [56,61,62]. In brief, there is evidence suggesting post-APO improvements in depression, anxiety, apathy, perceptual problems, cognitive impairment, sleep dysfunction (insomnia and restless leg syndrome), fatigue, urinary dysfunction (urinary frequency, urgency and nocturia), and gastrointestinal dysfunction (dribbling of saliva) as measured by the NMSS at both 6- and 12-month follow-up [26,63]. The reported beneficial effect or tolerability of APO on mild visual hallucinations is of interest given that it is a dopamine D1 and D2 receptor agonist, and suggested underlying mechanisms include the associated reduction in oral medication and/or a psychotropic action of APO, possibly due to the piperidine moiety in its structure [64,65]. In addition, the potential beneficial role of APO on cerebral amyloid deposition is worth considering in relation to its positive modulatory effect on cognition [26,63,66,67].
Need for Personalised Treatment in Advanced Parkinson's: Clinical Cases
Taking into account the distinct NMS effects of these three DAT, it can be postulated that the specific non-motor profile of patients with advanced PD may serve as an additional anchor to motor symptoms to deliver personalised medicine. Two illustrative clinical cases are presented in Figure 1 showing the different non-motor profile of two patients with advanced PD evaluated for DAT initiation.
The clinical assessment revealed that both patients suffered from motor complications including troublesome dyskinesia and motor fluctuations refractory to conventional therapies; in addition, the non-motor profile of patient 1 was dominated by mild cognitive decline and non-intrusive perceptual issues, whereas for patient 2, cardiovascular, urinary, and gastrointestinal dysfunction were particularly pronounced. On the basis of these two different non-motor profiles and according to the EuroInf 2 data, it can be argued that APO may represent the best therapeutic option for patient 1, while, for patient 2, APO may not be suitable as it may exacerbate pre-existing cardiovascular problems, including orthostatic hypotension. On the other hand, while LCIG may be useful to improve gastrointestinal symptoms, STN-DBS may be the best option to improve urinary dysfunction for patient 2. As such, it would be important to inquire which one of the two is the most troublesome/severe NMS to better tailor the decision-making process.
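Purely as an illustration of how such a profile-based decision aid might be structured, and not as a validated clinical tool, the sketch below encodes some of the tendencies discussed above as simple rules; the symptom labels and the mapping are a hypothetical simplification of the EuroInf 2 observations and of the exclusion criteria listed earlier, and any output would only flag options for multidisciplinary discussion.

```python
# Hypothetical, highly simplified decision-support sketch: it only flags candidate
# device-aided therapies for discussion by a multidisciplinary team.
def candidate_dats(nms_profile: set) -> dict:
    """nms_profile: set of dominant non-motor problems, e.g. {"mild_cognitive_decline"}."""
    candidates = {"APO", "LCIG", "DBS"}

    # Exclusion-type criteria (simplified from the contraindications discussed above)
    if "severe_dementia" in nms_profile:
        candidates.clear()
    if "orthostatic_hypotension" in nms_profile or "severe_hallucinations" in nms_profile:
        candidates.discard("APO")
    if "severe_depression" in nms_profile or "relevant_cognitive_impairment" in nms_profile:
        candidates.discard("DBS")

    # Possible positive anchors (simplified from the differential NMS effects reported)
    preferences = []
    if "mild_cognitive_decline" in nms_profile and "APO" in candidates:
        preferences.append(("APO", "attention/memory domain improvement reported"))
    if "gastrointestinal_dysfunction" in nms_profile and "LCIG" in candidates:
        preferences.append(("LCIG", "gastrointestinal domain improvement reported"))
    if "urinary_dysfunction" in nms_profile and "DBS" in candidates:
        preferences.append(("DBS", "urinary domain improvement reported"))

    return {"candidates": sorted(candidates), "preferences": preferences}

# Patient 2 from the clinical cases above (cardiovascular, urinary and GI problems):
print(candidate_dats({"orthostatic_hypotension", "urinary_dysfunction",
                      "gastrointestinal_dysfunction"}))
```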
Other factors are also implicated in the delivery of personalised DAT in PD [2]. Evaluation of patient age, for instance, represents a key aspect in the assessment for DBS suitability; indeed, age >70 or 75 years is an exclusion criterion for DBS in some centres given the associated higher risk of complications [68]; nevertheless, biological age is more often taken into consideration than chronological age, in addition to the fact that the impact of "healthy ageing" is growing [69]. Another relevant aspect of this decision-making process is the evaluation of comorbidities. For instance, poorly controlled diabetic patients with PD have a higher risk of developing skin infections and this should be considered in the evaluation for any DAT [70]. Other comorbidities also need consideration: pre-existing significant and symptomatic peripheral neuropathy for LCIG, impulse control disorder and intrusive psychosis (as opposed to mild non-intrusive psychosis) for APO, and severe depression or suicidal tendencies for DBS [7]. Last but not least, patient personality and preferences need to be taken into account: some active young patients may prefer more invasive brain surgery to a percutaneous endoscopic gastrostomy in order to avoid carrying a visible infusion pump every day, and for a "quick fix" of dyskinesias and tremor [71]. Body weight has also emerged as an important aspect of the decision-making process [72]. Low body weight patients with advanced PD may develop pain, discomfort and worsening of postural problems with subsequent risk of falls when carrying a heavy infusion pump [73]. The advent of a smaller infusion pump with the new levodopa-carbidopa-entacapone intestinal gel product, now licensed for use in Sweden and Germany, may represent a significant advance in this respect [74,75]. Whether this new product will have an impact on NMS similar to LCIG remains unexplored. Evaluating the ability of the patient and/or caregiver to handle the medication and the device, as well as daily skin hygiene, is also critical [73].
Conclusions
Device-aided therapies are now established worldwide for the management of advanced Parkinson's disease. While the emphasis of device-aided therapy selection remains based on the motor profile of patients with PD, non-motor symptoms have also been shown to play a part in the prognostic aspects of the successful delivery of these therapeutic options and are now included in the diagnostic algorithm of advanced PD. Considering the differential effect on non-motor symptoms of the currently available device-aided therapies, non-motor symptoms are relevant to delivering personalised medicine in Parkinson's disease. We envisage that the identification of different motor and non-motor phenotypes of Parkinson's may guide the delivery of personalised medicine in the advanced stage of the condition, perhaps guided by technology able to predict motor and non-motor responses to device-aided therapies on the basis of the patient-specific pre-intervention symptom profile. We suggest that non-motor symptoms are an important enabler of the constituents of the "circle of personalised medicine" and offer a chance to deliver bespoke personalised therapy for advanced PD (Figure 2).
Figure 2. Non-motor enablers for a successful selection of device-aided therapy for patients with advanced Parkinson's disease. The figure shows non-motor exclusion and possible inclusion criteria for a successful patient initiation on device-aided therapies. We emphasise that these conditions should not be considered an absolute contraindication or indication for the device-aided therapies and expert opinion based on multi-disciplinary assessments should have the final say. ~ Multi-domain MCI with a predominant cortical pattern (e.g., memory, language, visuospatial); * Further studies are needed to better clarify which aspect of sleep, gastrointestinal and cardiovascular dysfunction can improve after device-aided therapy initiation. Abbreviations: APO, apomorphine subcutaneous infusion; DAT, device-aided therapies; DBS, deep brain stimulation; EDS, excessive daytime sleepiness; ICD, impulse control disorder; LCIG, levodopa-carbidopa intestinal gel infusion; MCI, mild cognitive impairment; NMS, non-motor symptoms; OH, orthostatic hypotension; RLS, restless legs syndrome.
"year": 2021,
"sha1": "f033aa31fb7099ac4fa508aaf78fe6c649a25f3d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/jpm11080773",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6c3e62354831b72de2ec201e6f7233f97621854d",
"s2fieldsofstudy": [
"Psychology",
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
21478661 | pes2o/s2orc | v3-fos-license | Knowledge, Perceptions and Acceptability of HPV Vaccination among Medical Students in Chongqing, China
Introduction
Cervical cancer is one of the most common types of cancer worldwide in women, and in 2012, approximately 528,000 new cases were diagnosed with 266,000 women dying from the malignancy (WHO et al., 2012). In China, the latest annual age-standardized morbidity and mortality estimates are 9.6 and 4.3 per 100,000 women, respectively (Markowitz et al., 2007). Despite facing a huge national burden, China currently has no nationwide screening program for cervical cancer in place and no prophylactic human papillomavirus (HPV) vaccine on the market (Zhang et al., 2010).
Prophylactic HPV vaccines are important and effective tools in preventing cervical cancer. Studies have shown that HPV is a causative agent contributing to the development of cervical cancer (Peckham et al., 1995). It is the most common sexually transmitted disease occurring primarily in adolescents and young adults and, in sexually active youth, a cumulative prevalence rate of up to 82% has been observed (Brown et al., 2005). Genital HPV is classified into high-risk (mainly HPV16 and 18) and low-risk (mainly HPV6 and 11) types, according to its potential for causing invasive cervical cancer. HPV16 accounts for 50% of cases of cervical cancer and HPV18 accounts for 20% of cases (Serrano et al., 2012). Though HPV6 and 11 show a low risk for inducing cervical cancer, they cause 90% of cases of genital warts (Munoz et al., 2003). To date, two prophylactic HPV vaccines have been developed and approved in a number of countries; one is a quadrivalent vaccine, Gardasil, targeted at HPV6, 11, 16, and 18, while the other is a bivalent vaccine, Cervarix, targeted against HPV16 and 18. The efficacy and safety of these vaccines have been validated in clinical trials (Schiller et al., 2012). Phase III clinical trials of both the quadrivalent and bivalent vaccines are still in progress in China (Zhang et al., 2013). A novel HPV vaccine against HPV16 and 18, developed by Xiamen Innovax Biotech (Xiamen, China), was recently tested in a phase III clinical trial (Wei et al., 2009). It is expected that prophylactic HPV vaccines will play an essential role in mediating cervical cancer prevention in China.
It is crucial to assess HPV-related knowledge among the target population before the initiation of a vaccination program. Existing studies have shown that knowledge about HPV may strongly influence vaccination acceptance (Gerend et al., 2007;Gottvall et al., 2009). In China, there have been some studies examining this topic, both in Hong Kong (Lee et al., 2007;Kwan et al., 2008;Kwan et al., 2009) and in mainland China (Li et al., 2009;Zhao et al., 2012;Zhang et al., 2013). The study populations assessed in these studies have included women, adolescent girls, parents, government officials, and medical personnel. To date, there have not been any studies examining HPV knowledge and HPV vaccination acceptability among medical students.
As future health care providers, medical students constitute important sources of information regarding the target vaccination population. Medical students play an important role in promoting public knowledge and awareness of cervical cancer (Pandey et al., 2012). To address this gap in the literature, as part of a multicenter survey in southwest China, we piloted a survey in a medical school to evaluate the level of HPV-related knowledge and the acceptability of HPV vaccination among medical undergraduates in China. Medical students' knowledge of HPV, cervical cancer and HPV vaccination may reflect the current state of HPV education in medical schools. An understanding of this status quo can aid the development of comprehensive vaccination policies and campaigns. Hence, the aims of this study were to evaluate medical students' knowledge of HPV and HPV-related diseases and assess their attitudes towards HPV vaccination.
Study design and population
This cross-sectional survey was conducted at Chongqing Medical University in May 2013. Considering the distribution of different majors, we sampled 650 medical undergraduates from a sampling frame comprising all enrolled students (about 7%); 605 responded to our survey when approached before class (response rate: 93%). Data was collected from the respondents through a self-administered anonymous questionnaire adapted from the existing research literature (Pandey et al., 2012;Rashwan et al., 2012;Zhang et al., 2013). It consisted of sections soliciting socio-demographic information such as age, gender, ethnicity, and grade; knowledge of HPV, cervical cancer and/or genital warts, and HPV vaccines; and perceptions of HPV vaccination. The study was approved by the Ethics Committee of Sichuan University Fourth Hospital/West China School of Public Health.
Data collection and quality control
The survey instrument was delivered to participating students in the classroom before lecture. Prior to the investigation, researchers gave a brief introduction on the purpose of the study. This introductory information was also included in the informed consent form. Since the questionnaire was anonymous, no written consent was requested from participating students. They were informed that participation was voluntary, and those unwilling to participate were asked to return a blank questionnaire. No consent was requested from parents or guardians of any participants. In order to prevent inter-communication, participants were required to maintain some distance from one another while completing the questionnaire.
Statistical analysis
Survey data was analyzed using SAS version 9.2. Demographic information was presented as frequencies and percentages. The total HPV-related knowledge score was calculated by awarding 1 point for each correct answer, and students' knowledge level was categorized based on the resulting score. The acceptability of HPV vaccination was assessed by the question: "Would you like to receive/advise HPV vaccination?" Participants who answered "Yes" were regarded as accepting HPV vaccination. Univariable logistic regression analysis was conducted to explore factors influencing the acceptability of HPV vaccination, and the overall model was developed using multivariable logistic regression analysis. Two-tailed tests were employed, and α=0.05 was set as the threshold for statistical significance. Regarding premarital sex behavior, most students (58.1%) expressed a neutral attitude.
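The scoring and modelling steps above were carried out in SAS; as a hedged illustration only, the same logic can be sketched in Python with pandas and statsmodels. The column names below (k1–k14 for the 14 knowledge items, accept_answer, gender, sexual_maturity, expert_education) are hypothetical stand-ins for the questionnaire variables, not the study's actual dataset.

```python
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf

# Load the (hypothetical) survey file; one row per respondent.
df = pd.read_csv("hpv_survey.csv")

# Total knowledge score: 1 point per correct answer across the 14 items
# (k1..k14 are assumed to be coded 1 = correct, 0 = incorrect).
knowledge_items = [f"k{i}" for i in range(1, 15)]
df["knowledge_score"] = df[knowledge_items].sum(axis=1)

# Binary outcome: 1 if the respondent would receive/advise HPV vaccination.
df["accept"] = (df["accept_answer"] == "Yes").astype(int)

# Univariable logistic regressions, one candidate predictor at a time.
for predictor in ["C(gender)", "C(sexual_maturity)", "C(expert_education)", "knowledge_score"]:
    uni = smf.logit(f"accept ~ {predictor}", data=df).fit(disp=False)
    print(predictor, np.exp(uni.params).round(2).to_dict())  # odds ratios

# Multivariable model with the factors retained from the univariable step.
multi = smf.logit(
    "accept ~ C(gender) + C(sexual_maturity) + C(expert_education) + knowledge_score",
    data=df,
).fit(disp=False)
print(np.exp(multi.params))       # adjusted odds ratios
print(np.exp(multi.conf_int()))   # 95% confidence intervals
```

The exponentiated coefficients from the multivariable fit correspond to adjusted odds ratios of the kind reported in Table 5.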
Knowledge of HPV, cervical cancer and HPV vaccination
Respondents' knowledge of HPV, cervical cancer, and HPV vaccination is summarized in Table 2. Most students (67.8%) were aware of the mode of transmission. Only 43.6% of respondents recognized that HPV infection is common in women, and a similar percentage of participants believed that HPV could be cleared by the immune system. Half of respondents were aware of the asymptomatic nature of HPV infection, and 67.2% thought that HPV could infect men as well. 86.1% of respondents knew that HPV could cause cervical cancer and 68.7% knew of the association between HPV and genital warts. 13.6% believed that persistent HPV infection was necessary for cervical cancer. 383 students (63.3%) expressed the belief that cervical cancer could be prevented by HPV vaccination. 18.4% and 76.4%, respectively, thought that HPV vaccines could be administered to women with existing HPV infection and to sexually active women. 86.9% believed that it is necessary for girls/women to be screened for HPV before vaccination, while 88.5% thought that cervical cancer screening is still necessary after vaccination. In addition, 57.4% did not think that it is safe to have multiple sexual partners even after a full course of HPV vaccination. Only 10.6% (data not shown) of respondents achieved a total knowledge score of 11 or higher, meaning that they correctly answered more than 11 out of 14 knowledge questions on HPV, cervical cancer and HPV vaccination.
Perceptions and concerns about HPV vaccination
Table 3 summarizes perceptions and concerns about HPV vaccination among the surveyed medical students. 36.9% of male students preferred 13-18 years as the age group for vaccination, while 40.1% of female students preferred 19-25 years. A majority of both male and female students agreed that it was best to receive HPV vaccination before sexual debut. The local center for disease prevention and control (CDC) was selected as the most appropriate venue for vaccination by most males (67.9%) and females (73.6%), while only 19.3% of males and 22.5% of females preferred a school setting. 60.0% of male students thought that HPV vaccines could be given to boys, while a slightly higher percentage (68.4%) of females held the same view. With respect to sources of knowledge about HPV vaccination, male and female students responded similarly, with classroom courses cited most frequently. A majority of students (89.6% vs. 93.7%) had not been consulted by friends or relatives regarding HPV vaccination, and most (88.7% vs. 96.4%) would like to be educated by experts. Over half of males (57.2%) indicated that they would like to receive/advise HPV vaccination, as compared to 78.5% of females. Among those who did not accept HPV vaccination, inadequate information and fear of complications were cited as the most important obstacles, while worries about the efficacy of vaccination and high cost were additional reasons for the rejection of HPV vaccination. Over half of male and female students preferred imported vaccines, while one third would base their choice on the pricing of the two types of vaccine. 81.8% and 87.4% held positive views of the prospects for HPV vaccination in China; 22.5% of males and 17.9% of females held neutral views and were influenced by considerations relating to cost, unclear long-term efficacy and possible side effects. Over half of those with a positive view (54.0%) called for pricing regulations and government subsidy for HPV vaccination.
Factors associated with acceptability of HPV vaccination
In univariable logistic regression analysis (Table 4), gender, sexual maturity, interest in being educated by experts about HPV vaccination, and total knowledge score were significantly associated with acceptability of HPV vaccination among surveyed medical students. Table 5 presents results from a multivariable logistic regression analysis, which included the above four factors. Students obtaining a higher total score of HPV-related knowledge were more willing to accept HPV vaccination. Compared with males, female students were 2.69 (95% CI: 1.53-4.72) times more likely to receive or advise the vaccines. In addition, students who expressed an interest in expert-directed education about HPV vaccination showed higher acceptability of vaccination.
Discussion
To our knowledge, this is the first study to investigate knowledge and perceptions of HPV vaccination among medical students in China. We found that the acceptability of HPV vaccination was 57.2% among male and 78.5% among female medical students, corroborating the findings of one Indian study (53.0% vs. 79.4%) (Pandey et al., 2012) but lower than among medical students surveyed in another study (Mehta et al., 2013). Other studies conducted among general college students reported varying levels of acceptability of vaccination, ranging between 10.1% and 75.6% among males and between 11.6% and 93.8% among females (Durusoy et al., 2010;Medeiros et al., 2010;Wong et al., 2010;Bourke, 2012). This may be due to culture-specific differences and varied emphases on preventive strategies for cervical cancer in different countries.
Our study found limited knowledge of HPV, cervical cancer and HPV vaccination among medical students, with only 10.6% correctly answering over 11 of 14 knowledge questions. In particular, we noted a lack of knowledge about HPV vaccination. A similar study conducted in Malaysia found that 33% of students majoring in medicine, dentistry and pharmacy obtained a high score of 6-8 on an 8-item questionnaire (Rashwan et al., 2012). The fact that the Malaysian government has introduced free HPV vaccine for all girls 13 and older (The Star Online, 2009) might contribute to this high level of knowledge about HPV vaccine. In our study, though participants were largely aware of the HPV transmission mode, many did not realize that HPV infection was common in women and that infected women may show no symptoms. Moreover, the role of the immune system in infection clearance was not identified by over half of students. This reflects a gap between education and practice in this regard. Most students knew that HPV is a causal agent for cervical cancer, but few noted that the precondition for development of cervical cancer is persistent HPV infection, suggesting that depth of knowledge should be stressed in future medical school curricula. The acceptability of HPV vaccination was relatively low among the medical undergraduates surveyed in this study. It was higher than among parents of young adolescents (<40.0%) (Chan et al., 2007;Zhang et al., 2013), but relatively lower than among surveyed Chinese women (>80.0%) (Kwan et al., 2009;Li et al., 2009). The vast majority of students consented to HPV vaccination after 13 years, and 38.0% of males and 50.7% of females suggested an age over 19 years. This was inconsistent with World Health Organization (WHO) guidelines, calling for vaccination between the ages of 9-10 through 13 (WHO, 2009). Previous studies have demonstrated that it is best to receive HPV vaccination before sexual debut (Villa et al., 2005; Centers for Disease Control and Prevention, 2010), as HPV infection most commonly occurs in young and sexually active individuals (Weaver, 2006). Though most students preferred vaccination before sexual debut, they might be unaware of current statistics regarding sexual activity among young Chinese. While an average age at sexual debut ranging from 17.4 to 19.8 years has been reported for college-educated women in other countries (Bendik et al., 2011;Juntasopeepun et al., 2012;Makwe et al., 2012), Chinese researchers have found a trend toward the lowering of age at sexual debut (Zhao et al., 2012). Moreover, researchers have noted a trend towards decreasing age at menarche. Research has found a strong association between earlier age at menarche and a lowering of age at sexual debut (Glynn et al., 2010). Together, these findings suggest the possibility of escalating HPV risk among young Chinese women. Students may not understand the potential link between sexual debut and HPV infection; for example, about 50% of women may develop genital HPV infection within 2 years after sexual debut (Lenselink et al., 2008). It is necessary to educate medical students about current trends in sexual behavior among young Chinese in order to increase the acceptability of HPV vaccination.
In terms of the most appropriate venue for HPV vaccination, health care institutions, especially local CDCs and women and children's hospitals, were selected, paralleling the findings of another study conducted among Chinese parents (Zhang et al., 2013) and indicating that these health care providers are broadly trusted by the public. In some countries, governments utilize HPV vaccination programs in schools to expand coverage (Australian Government Department of Health and Ageing, 2006;Ministry of Health in New Zealand, 2008). This approach is not applicable in China since the CDC network represents a reliable provider of vaccination services (Zhang et al., 2013). As an important information source, health care providers can communicate the necessity of HPV vaccination to parents of young children in clinical practice and refer them to a local CDC for vaccination. Medical students selected school course work as the most important source of HPV information, which was consistent with a finding from an Indian study (Pandey et al., 2012), followed by mass media, hospital, and family/friends. Interestingly, hospitals were not cited as a common information source; one reason may be that most students in our survey had no practical experience in hospital settings. Overall, our findings suggest that it might be important to implement tailored HPV vaccination education programs for medical students.
We identified inadequate information as the most important obstacle preventing students from accepting HPV vaccination. Consistently, lack of information about HPV infection and vaccines has been identified as a common barrier to the uptake of HPV vaccines in earlier studies (Kahn et al., 2003;Lee et al., 2007;Iliyasu et al., 2010). Most medical students surveyed in our study were willing to be educated by experts about HPV vaccination, and this highlights the importance of medical education on this topic. Consistent with our findings, many studies have shown that concerns about complications and the efficacy of HPV vaccination were also important obstacles (Wong et al., 2010;Bourke, 2012). However, cost seemed less likely to influence students' attitudes towards HPV vaccination. Anticipating high prices, most respondents requested government subsidy; 157 of 195 respondents desired a subsidy covering over 50% of the cost (data not shown). In an earlier study, vaccination pricing had been identified as an important factor in parents' decision-making regarding HPV vaccination, since it was expected to be the most expensive vaccine available in China (Zhang et al., 2013). Government officials also expressed concern about the high cost, and most agreed that the government should help finance vaccination (Zhao et al., 2012).
Three factors were associated with medical students' acceptance of HPV vaccination in the current study. First, females were more willing than males to receive or advise the vaccination, contradicting findings of studies conducted in the United States (Boehner et al., 2003;Blumenthal et al., 2012). This might be because students generally think that HPV vaccines prevent cervical cancer alone and thus are primarily targeted at women. This misconception is dangerous because HPV is transmitted between partners and some genotypes also cause genital warts among men. Secondly, students willing to be educated by experts about HPV vaccination were more open to receiving vaccination. This openness may be mediated by individual perceptions of the risks associated with HPV infection. Moreover, we observed a positive association between total score of HPV related knowledge and willingness to accept HPV vaccination, corroborating findings in other populations (Woodhall et al., 2007;Jones et al., 2008). Educational interventions such as distributing information pamphlets or leaflets have been recommended as a means of increasing the acceptability of HPV vaccination in other countries (Davis et al., 2004;Chan et al., 2007). The Chinese government may sponsor such education campaigns for medical students and health care providers.
In conclusion, this study found low levels of HPV-related knowledge and acceptability of HPV vaccination among medical students. Gender, willingness to be educated by experts, and total knowledge score are factors associated with vaccination acceptability. HPV education should be systematically incorporated into school courses to increase awareness of HPV vaccination among medical students. As future health care providers, they will significantly influence public perceptions of HPV vaccination. Considering the small sample size and convenience sampling strategy used in this study, we look forward to undertaking a larger survey to scale up this pilot study and thereby help inform policy making with regard to upcoming HPV vaccination programs.
Table 2. HPV Related Knowledge among Medical Students
*They corresponded to correct answers to the knowledge questions; # They did not add up to 100% because the numbers were rounded up
Table 5. Factors Associated with Acceptability of HPV Vaccination in Multivariable Analysis
*adjusted for other variables in the table
Table 3. Perceptions and Concerns of HPV Vaccination among Medical Students
# They did not add up to 100% because the numbers were rounded up. | 2017-06-17T23:37:21.444Z | 2014-08-15T00:00:00.000 | {
"year": 2014,
"sha1": "90586d19e54b6b0fab5fb338b3dddc61d9caf177",
"oa_license": "CCBY",
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201429765167190&method=download",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "90586d19e54b6b0fab5fb338b3dddc61d9caf177",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258056927 | pes2o/s2orc | v3-fos-license | Identifying optimal locations for community electric vehicle charging
This research seeks to identify suitable locations for deploying community Electric Vehicle charging points using a Geographic Information System (GIS)-based approach. The charging infrastructure is classified into shared-residential, en-route, and destination charging types, and each type's selection criteria are chosen according to the characteristics of targeted end-users. The investigation identified 770 ideal locations in Dublin that may be given priority for the initial installation of charging infrastructure. Further, 3080 suitable sites were identified for later implementation to satisfy the charging requirements forecasted by the Dublin Local Authority for 2030. The population served by the proposed residential charging points is determined while considering accessibility by five-minute walking or five-minute cycling. Results from the study can be helpful for practitioners while deploying charging stations in the region. The proposed methodology utilises an open-source GIS-supported approach that can be adapted to similar cities worldwide.
Introduction
Increased greenhouse gas (GHG) concentration from urban transport significantly contributes to climate change (United Nations, 2021). In Ireland, 17.7% of GHG emissions were attributable to the transportation sector, with internal combustion engine vehicles (ICEVs) accounting for 94% of all transport emissions (Caulfield & Charly, 2022;Environmental Protection Agency, 2021). Electrification of the vehicle fleet is considered an effective approach to reducing emissions since they are energy efficient, do not generate tailpipe emissions, have fewer maintenance requirements and reduced engine noise (Albatayneh et al., 2020;Ghandi & Paltsev, 2020;Kinsella et al., 2023;Sanguesa et al., 2021;Windsor, 2021). Recent battery manufacturing also aims to use more eco-friendly techniques (Krajinska, 2021).
Despite these benefits, electric vehicles (EVs) have limitations, including limited range and shortage of charging infrastructure which affects their widespread implementation (Bonges & Lusk, 2016;Metais et al., 2022;Morton et al., 2018). Facilitating a smooth and swift transition to EVs requires adequate charging infrastructure to be developed based on the needs and characteristics of the population (Pardo-Bosch et al., 2021;Selena et al., 2022). A lack of sufficient charging infrastructure causes range anxiety in users (Neubauer & Wood, 2014). Range anxiety occurs when an EV driver worries that they will not have enough battery charge to reach their destination or the next charging station, preventing EVs from being widely deployed (Forrest et al., 2016;Melliger et al., 2018).
The locations of electric vehicle charging stations (EVCS) are of great significance in accelerating EV use (Frade et al., 2011;Janjić et al., 2021). For instance, a driver's motivation to use an EV for a long-distance journey, such as an inter-city trip, would depend on the availability of EVCS along the major roadways. However, for short-distance trips, the driver would require a charging-enabled parking space at the journey's end. Accordingly, charging infrastructure is needed to cater to different types of charging demand, including residential, workplace, en-route, and destination charging (Cluzel et al., 2022). These EVCS should also be widespread and available for everyone to ensure equity (Iravani, 2022). Poor location of charging stations can lead to waste of resources and negatively impact decarbonisation efforts (Ademulegun et al., 2022).
This study seeks to identify the ideal locations to install EVCS for shared-residential, en-route, and destination charging purposes through a Geographic Information System (GIS)-based analysis using Open Street Maps (OSM). Site-selection criteria are chosen, including sociodemographic factors, travel patterns, and availability of appropriate street infrastructure such as lamp posts and parking spaces. A novel methodology using an open-source GIS-supported approach is proposed to identify appropriate EVCS locations. This approach can be adapted to other cities with similar characteristics.
The main motivation for this paper is to demonstrate how cities worldwide will have to examine the methods by which charging infrastructure is introduced to the urban realm. This changing use of public spaces will need to be achieved if the goals for fully electrifying private and shared vehicles are to be completed. This is shown empirically in the research presented. However, all cities have different characteristics, and the methods presented may need to be tailored to each region. The other motivation for this study is to recognise that not all households have access to private charging facilities in densely populated urban areas. Therefore, more research is needed to determine how cities can facilitate this charging.
Previous studies have considered techniques including multi-criteria analysis to identify optimal EVCS locations (Janjić et al., 2021;Kaya et al., 2020). However, few have categorized the site-selection criteria based on different charging categories and intended purposes (Csiszár et al., 2020;Frade et al., 2011;Schmidt et al., 2020). This research attempts to locate EVCS for various types, including shared-residential, en-route, and destination charging purposes. Moreover, this research considers the potential of utilizing existing lampposts for EV charging, which has not been considered in other EVCS location studies. Considering the urgency and the demand for a high number of charging stations, conversion of lampposts to slow residential charging points could provide a fast and effective solution for local councils to increase the coverage of EVCS with less infrastructure investment while ensuring practical usage (Bruno et al., 2019;Mahdy et al., 2022;Zhang et al., 2019).
Further, this work attempts to satisfy the requirements of EVCS as estimated by the Dublin Local Authority for the period up to 2030 (Cluzel et al., 2022). Hence, the study would benefit policymakers and practitioners while deploying charging infrastructure.
The rest of the paper is organized as follows: Section 2 reviews the literature focusing on methods to identify EVCS locations and the objectives of the current study. Section 3 provides a brief description of the study area and the data used. Section 4 discusses the proposed methodology and the site-selection criteria considered in the study. Sections 5, 6, and 7 present the results and discuss the policy implications and conclusions, respectively.
Literature review
This section discusses the site selection criteria and the modelling approaches used in previous studies to identify the optimal location of charging stations.
Site selection criteria for installation of EVCS
An EV owner's preference towards a particular charging location is primarily influenced by the accessibility, speed, and cost of charging (Fotouhi et al., 2019;Philipsen et al., 2016;Skjølsvold et al., 2022). Studies also suggest that workforce population, mobility aspects within the region, road characteristics and parking spaces seem to influence the optimal location of EVCS (Bitencourt et al., 2021;Gupta et al., 2018;Lee et al., 2021). However, these factors depend on the charging station type and intended usage. Though studies have attempted to identify ideal locations for EVCS based on several criteria, only a few have considered the requirements based on the type of charging station required for the region (Frade et al., 2011).
The major types of EVCS within local councils' administration are shared-residential charging, en-route charging, and destination charging (Cluzel et al., 2022). Shared-residential charging, referred to in studies as nighttime home charging, is expected to cater to residences without off-street parking and is mainly intended for overnight slow charging near houses and apartments (Frade et al., 2011). Appropriate criteria for selecting these shared-residential charging stations would be population density, availability of overnight parking spaces and desirable walking distances (Iravani, 2022).
En-route charging typically caters to long-distance EV travellers, can be located along major roadways near service stations, and would require fast charging (Ademulegun et al., 2022). Nearness to motorways and service stations, location safety and parking potential are relevant for locating ideal en-route charging stations (Janjić et al., 2021;Philipsen et al., 2016).
Destination charging, also called daytime charging, aims to provide top-up charging, especially near facilities like supermarkets and shopping centres, and can have slow to rapid charging options (Frade et al., 2011). Proximity to social areas and activity centres, availability of parking lots, accessibility by car, and time spent charging or parking are factors considered for locating ideal destination charging stations (Carra et al., 2022;Kaya et al., 2020).
Criteria that are critical for almost all types of charging stations include parking potential (either for a short time or for extended periods), safety (of car, driver and passengers) and distance between stations (Banegas & Mamkhezri, 2022;Carra et al., 2022;Giménez-Gaydou et al., 2016;Kaya et al., 2020). These factors are also highlighted in user perception studies but not considered together while identifying locations previously (Morrissey et al., 2016;Philipsen et al., 2016).
Utilising existing lamp posts for EV charging is also considered an effective method for rapidly expanding public charging points owing to their ease of installation, widespread availability, and lower cost than other technologies (Bender, 2021;Daniel & Lex, 2020). A single-phase lamppost charging point has a power range of 1 kW to 7 kW, making it suitable for topping up and overnight charging (Mahdy et al., 2022;Manning, 2021). Furthermore, the placement of charging points near street lights is expected to improve safety and reduce the inhibitions of drivers while charging EVs, especially in dim-lit and secluded areas (Cuff, 2022).
Studies have also considered the demand for EVs, indicated by the number of EVs in the region, while identifying ideal locations for EVCS (Carra et al., 2022;Kaya et al., 2020). Though this approach can provide better accessibility of EVCS to current or early EV adopters, it could also become unjust and biased towards the privileged segment of the population (Iravani, 2022;Roy & Law, 2022). Moreover, early EV adopters will likely have an independent driveway or off-street parking space for home charger installation. They would not rely on public charging stations as their primary source (Collett et al., 2022).
Modelling approaches to identify the optimal location of EVCS
Different modelling techniques have been adopted to determine the optimal location of EVCS, including hierarchical clustering approaches and hybrid methods such as an analytical hierarchy process (AHP) combining systematic review with stakeholder interviews (Bitencourt et al., 2021;Karolemeas et al., 2021).
Analytic Hierarchy Process (AHP) has been used in several studies for weighting the EVCS site selection criteria (Carra et al., 2022;Janjić et al., 2021;Karolemeas et al., 2021;Kaya et al., 2020). Carra et al. (2022) used AHP to weight the site-selection criteria but did not apply the approach to any specific case study. Janjić et al. (2021) used AHP and a multi-criteria p-median methodology to optimize charging station numbers and locations in Serbia. This study also considers a Greedy heuristic approach for minimizing walking distances. Ademulegun et al. (2022) adopted a multi-stage decision analysis methodology considering critical and techno-physio-socio-economic and site-specific optimality factors to identify ideal locations for rapid EV charging stations within and across the border regions of Northern Ireland. Liu (2020) used a multi-agent Stackelberg game model combined with a road segment transmission model, path selection, station selection, and station EV interaction strategy to improve the effectiveness of the charging stations. Multi-criteria analysis using a fuzzy axiomatic design and linguistic weight approach based on literature review, expert comments, and on-site inspection was also used to determine the optimal location of EVCS (Feng et al., 2021). The results demonstrated that the suggested assessment criteria are reliable and consistent and may be applied to various economies. Pan et al. (2020) proposed a coverage placement model to locate the optimal public charging stations based on the driver's previous charging pattern using a Genetic Algorithm approach. Fotouhi et al. (2019) proposed a stochastic model considering the range anxiety of EVs. Ideal site selection has also been performed using a picture-fuzzy environment and Pythagorean Fuzzy VIKOR techniques (Cui et al., 2018;Ju et al., 2019). Data mining techniques such as spectral clustering and a Gaussian Mixture Model were used to determine the ideal location of charging stations in Turkey (Catalbas et al., 2017).
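For readers unfamiliar with AHP, the weighting step used in several of the studies above reduces to deriving criterion weights from a pairwise comparison matrix, typically via its principal eigenvector, together with a consistency check. The sketch below is a generic illustration with made-up comparison values; it is not taken from any of the cited studies.

```python
import numpy as np

# Pairwise comparison matrix for three illustrative criteria
# (e.g., proximity to residences, distance to existing EVCS, parking availability).
# Entry A[i, j] states how much more important criterion i is than criterion j
# on Saaty's 1-9 scale; the values below are made up for illustration.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Criterion weights are the normalised principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio check (random index RI = 0.58 for a 3x3 matrix).
lambda_max = eigvals.real[principal]
ci = (lambda_max - len(A)) / (len(A) - 1)
cr = ci / 0.58
print("weights:", weights.round(3), "consistency ratio:", round(cr, 3))
```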
GIS-based multi-criteria decision-making methods, including the Preference Ranking Organization Method for Enrichment of Evaluations (PROMETHEE) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), were used for selecting the most suitable EV charging station sites in Istanbul (Kaya et al., 2020). These methods considered five main criteria and nineteen sub-criteria ranked based on AHP. Another study used a two-level multi-criteria approach to identify 300 EVCS locations to support local short-distance trips. This study used a weighted sum model to determine areas based on willingness to use EV and parking behaviour at the macro level and a hexagon-based approach with a greedy algorithm at the micro level. Further, the study considers the factors influencing daytime and nighttime charging separately, which has not been done in many studies. Frade et al. (2011) adopted a mixed integer optimization model to maximize EV demand coverage considering both nighttime and daytime demand. Coverage of EVCS is considered through maximum desirable and maximum acceptable walking distances.
Recent studies also highlight that the previous approach to deploying EVCS led to spatial disparity and inequity (Roy & Law, 2022). Roy and Law (2022) uses a machine learning framework to examine this spatial disparity in the placement of EVCS. They use Kernel density estimation, machine learning models using random forests, multinomial logistic regression and support vector machines to examine the spatial disparity and develop an EV charging inequity index. Iravani (2022) attempts to determine the location of charging stations considering equity and efficiency to maximise accessibility and usage. The study attempts to solve two problems: a set covering location problem to ensure ubiquitous and equitable charging stations and a maximum covering location problem to satisfy the demand of early adopters, thus attempting to strike a balance between the two approaches.
Insights from literature, problem statement and study objectives
In the current state of EV development, the location of public EVCS is of utmost importance for it to be used efficiently. Previous studies demonstrate that a GIS-based approach is suitable for determining the ideal sites for charging stations. Optimal site selection for an EVCS is also well-recognised as a multi-criteria assessment problem due to diverse influencing factors. Acceptable and desirable walking distances are also considered in studies to estimate the coverage of proposed charging stations.
Several factors, including population density, proximity to amenities or residences, road characteristics, proximity to parking spaces, available charging infrastructure and location safety, influence preference for an EVCS. However, the requirement for a charging station also depends on the type of facility and its intended purpose. Though previous studies have considered some of the factors mentioned above in site-selection criteria, only a few have considered the distinction in the charging category while locating charging points. Some of these criteria are relevant for all charging types according to user perception surveys but have not been considered together in previous site-selection studies. Furthermore, converting lampposts to EVCS has the potential to assist in the rapid expansion of charging infrastructure. None of the existing studies has used lamp posts as a site-selection criterion for installing slow residential charging points.
This research aims to identify the best locations for EVCS in the Dublin region, using a GIS-based analysis considering the three major categories of charging: shared-residential, en-route, and destination charging. The study attempts to cater to the EV charging requirement outlined by the Dublin Local Authority up to 2030. It suggests the ideal location for installing these chargers while requiring minimal additional infrastructure cost by utilising designated parking areas and proximity to street lamp posts. The analysis provides meaningful insights that other cities with similar characteristics may adopt to improve their public charging infrastructure.
Data
This section gives a brief description of the study area and data used.
Study area
Approximately 28% of the population of Ireland resides in Dublin, the capital city of Ireland, making it the most populous county. County Dublin is divided into four council areas: Dublin City Council, Fingal County Council, South Dublin County Council, and Dún Laoghaire-Rathdown County Council (Dún Laoghaire-Rathdown County Council, 2013), as shown in Fig. 1. Counties are further divided into Electoral Districts (EDs). Dublin has an extensive network of motorways and primary, secondary, and tertiary roads. The road network in the county totals 3949 km across the four local authorities (Transport Infrastructure Ireland, 2016). The EDs were further divided into grids of size 1 km² for the present study.
EV charging requirement in Dublin
Dublin accommodates 25% of Ireland's total car fleet and plays a significant role in decarbonising the country's transport system (Cluzel et al., 2022). The adoption of policies and incentives to encourage EV production has increased its sales over the years (Department of Transport, 2021b). The range of actions advocated toward reducing emissions by half to meet the 2030 EU target and achieving carbon neutrality by 2050 suggests a target of 936,000 EVs, including 845,000 passenger EVs, on Irish roads by 2030 (Government of Ireland, 2021). Dublin has over 24,000 registered EVs as of 2022 and is expected to have about 138,000 registered EVs by 2030, of which around 34,000 will solely rely on public EV Charging Stations (EVCS) (Cluzel et al., 2022; Society of the Irish Motor Industry, 2022).
The Dublin Local Authority Electric Vehicle Charging Strategy outlines the EVCS requirement for Dublin based on this projected vehicle stock, EV uptake scenarios and charging behaviour, split by charging type, as shown in Table 1 (Cluzel et al., 2022).
Data collection
Multiple datasets were used to extract the required information on the criteria used in this study. The data required for this study include the population density of EDs, the layout of main roads, the number of trips to places of work or study, and the locations of car parking spaces, lamp posts, residences and amenities. The description of the data and the sources used to gather them for each charging type is shown in Table 2.
The location of lamp posts and the number of trips to work or study were obtained from the Central Statistics Office (CSO) (Central Statistics Office, 2022). The location of lamp posts is shown in Fig. 2(a). The EDs within the study area were categorised based on the trips attracted, as shown in Fig. 2. The population data at the ED level was also obtained from the CSO (Central Statistics Office, 2022) (Fig. 3(a)). Existing charging point locations were extracted from Open Street Map (OSM) (Fig. 3(b)). The locations of residences and amenities were also obtained from OSM (Fig. 4(a) and (b)).
The location of major roads within the study area was extracted from OSM, as shown in Fig. 5(a). Data on designated car parking spaces were obtained from Geofabrik, a part of OSM, and had high spatial accuracy compared to other data sources (Mooney & Minghini, 2017) (Fig. 5(b)).
Information from OSM is considered relatively accurate, with an accuracy of 6 m and an 80% overlap between OSM and Ordnance Survey digitised motorway objects (Haklay, 2010). The point datasets obtained were cross-checked against Google satellite imagery in QGIS to confirm the accuracy of the data.
Methodology
The study aims to identify ideal locations for deploying EVCS for shared-residential, en-route and destination charging categories based on suitable parameters appropriate for each type through a GIS-based analysis. This section discusses the workflow, site selection criteria, and methodology to identify EVCS locations. The overall workflow to determine the optimal EVCS locations is presented in Fig. 6. Initially, the criteria for site selection are defined based on literature review, logical reasoning and expected end-user characteristics. The corresponding data is extracted from OSM and obtained from the CSO. This data in vector form is converted into raster form, and proximities are calculated for different data layers. The layers are further classified into high- and medium-priority areas based on their installation priority. Different data layers are combined in QGIS, and priority zones are extracted, after which suitable sites are identified based on the availability of designated car parks. The final results provide the optimal EVCS locations for three charging categories, further differentiated based on their priority. The methodology for identifying optimal EVCS for all types (shared-residential, en-route and destination charging) follows the same GIS-based analysis, differing only in the selection criteria. Each step of the methodology is explained in detail in the following subsections.
Table 1. Projected EV charging requirement for Dublin in 2025 and 2030 according to the medium EV uptake scenario (Cluzel et al., 2022).
County | 2025 Residential | 2025 En-route | 2025 Destination | 2030 Residential | 2030 En-route | 2030 Destination
Dublin City | 65 | 19 | 280 | 247 | 55 | 1065
South Dublin | 23 | 13 | 149 | 85 | 38 | 560
Fingal | 23 | 14 | 157 | 86 | 41 | 589
Dun Laoghaire-Rathdown | 23 | 11 | 135 | 85 | 33 | 509
Total | 133 | 57 | 722 | 504 | 166 | 2723
Table 2. Datasets used in the study and corresponding sources.
Categories of EVCS
Three EVCS categories are considered in the present study, including shared-residential, en-route and destination charging. Shared-residential charging aims to provide charging for residents without access to home charging. En-route charging provides charging access along the motorway, primary, secondary, and tertiary roads to facilitate longdistance travel. Destination charging aims to provide charging close to supermarkets, shopping centres, theatres, public parks, and universities. They can serve as a valuable source of top-up charging.
Site-selection criteria
The criteria used for site selection are based on the literature review and characteristics of key user groups expected to be served by each EVCS. These parameters include the population, proximity to major roads, places of work or study, location of car parks, residences, lamp posts, and existing EVCS. Description of each of these criteria, their reason for inclusion and references from the literature are shown in Table 3.
This study prioritises placing EVCS in areas with high population density. Hence, the study area was divided into smaller grids based on population density. Regions with a population of up to 100 inhabitants per km² were considered low-density, 100 to 4000 were considered moderate-density, and over 4000 were considered high-density. The areas of work or study are also classified into regions of high, moderate, and low activity based on the number of trips attracted per km². The study proposes charging stations at a minimum distance from the existing stations. Thresholds were designed to maintain an adequate distance between proposed and existing charging points and thereby ensure an even distribution of charging stations.
Level of priority
Regions within the study area are categorised based on their priority level of EV infrastructure installation. Areas with high priority are identified for installing charging stations in the first stage of EVCS deployment. The study assumes these high-priority charging stations are to be installed by 2025 as per the EV charging strategy for Dublin Local Authority (Cluzel et al., 2022). Further, charging stations that are required in regions of medium priority are identified for installation at a later stage. The present study assumes that medium-priority charging stations will be installed by 2030 based on the EV charging strategy. The present study does not consider charging stations required in regions of low priority.
The priority of EVCS locations is decided based on applying different thresholds for each selection criteria discussed above. Walking distances from an EV owner's residence to nearby charging stations influence charging stations' usage (Janjić et al., 2021). Therefore, charging points are proposed to be placed within 5 min of walking distance from the user's residence while assuming an average walking speed of 4.8 kmph (Caselli et al., 2021;Donoghue & Kenny, 2015;Doorley et al., 2015). Also, high-priority charging points will be placed in regions with high population density and more considerable distances (>1000 m) from existing charging points to ensure fair distribution. All charging points will be located within designated car parks and close to lamp posts (0 to 20 m). Hence their values do not change with the level of priority. Also, all en-route charging points are proposed to be placed within 150 m of the road for both priority levels to ensure safety, comfort, and minimal detour. Details of site selection criteria, based on the charging category and priority, are provided in Table 4. The proximities to each of the amenities selected (in Table 4) were selected based on the authors' judgement. Changing these thresholds would change the results; others looking to apply the methods produced should keep this in mind. Given the number of dynamic variables in the research, it was not possible to estimate certainty values around the 2025 and 2030 values. This caveat should be considered when interpreting the results. It also should be noted that some spatial correlation may be present between the spatial selection criteria.
The selection criteria for shared-residential charging are car park availability, proximity to lampposts, population density, proximity to houses and apartments and proximity to existing charging stations. As these shared-residential charging points are to be placed within car parks, only such car parks located within the area that satisfy all the other parameters are considered. The selection criteria for en-route charging are car park availability, proximity to lampposts, proximity to motorways, primary, secondary, and tertiary roads, and proximity to existing charging stations. The selection criteria for destination charging are car park availability, proximity to lampposts, amenities and place of work or study, and proximity to existing charging stations.
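One convenient way to keep these per-category rules explicit and reproducible is to collect them in a single configuration object. The sketch below encodes the thresholds described in this section (and summarised in Table 4) for the shared-residential and en-route categories; the field names are our own, and the destination category would follow the same pattern with proximity to amenities and places of work or study.

```python
# Distance thresholds in metres, population densities in inhabitants per km^2.
# Field names are illustrative; the authoritative values are those in Table 4.
SELECTION_CRITERIA = {
    "shared_residential": {
        "high_priority": {    # first-stage sites, targeted for installation by 2025
            "max_dist_to_residence_m": 500,
            "min_dist_to_existing_evcs_m": 1000,
            "min_population_per_km2": 4000,
            "max_dist_to_lamp_post_m": 20,
            "within_designated_car_park": True,
        },
        "medium_priority": {  # second-stage sites, targeted for installation by 2030
            "max_dist_to_residence_m": 1000,
            "min_dist_to_existing_evcs_m": 500,
            "max_population_per_km2": 4000,
            "max_dist_to_lamp_post_m": 20,
            "within_designated_car_park": True,
        },
    },
    "en_route": {
        "both_priorities": {  # same road-proximity rule for both priority levels
            "max_dist_to_major_road_m": 150,
            "min_dist_to_existing_evcs_m": 500,
            "max_dist_to_lamp_post_m": 20,
            "within_designated_car_park": True,
        },
    },
}

# Example lookup for the first-stage shared-residential rules.
rule = SELECTION_CRITERIA["shared_residential"]["high_priority"]
print(rule["max_dist_to_residence_m"])
```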
While identifying charging point locations for the second stage (medium-priority), those proposed in the high-priority stage were included in the existing stations to prevent overlapping locations. While selecting sites for the en-route category, the proposed locations of residential charging points for both high and medium-priority areas were included in existing charging stations to avoid overlap in charging point locations. Similarly, while selecting sites for the destination category, the proposed locations of residential and en-route charging points for both high and medium-priority areas were included in existing charging stations.
Fig. 6. Overall workflow to identify optimal site locations for EVCS.
QGIS analysis and plugins
The steps followed to perform the analysis in QGIS are discussed in detail in this sub-section. A GIS-based approach has been utilised in this study to analyse spatially continuous data to obtain optimal solutions. Vector, raster, and other processing tools, including plugins, were used in QGIS to assemble and evaluate the spatial information and create graphical maps by composing the data. The plugins used in the study are autoSaver (to save the project automatically every five minutes), Qpackage (used to export the vector layers from one coordinate reference system to the other) and QuickOSM (to download OSM data as shape files).
After the site selection criteria were finalised, the data corresponding to relevant criteria were collected from multiple data sources and collated for the study area. The collated vector data was then converted to a raster format, and a proximity tool was utilised to derive the distances between the raster cells. This tool in the QGIS environment measures the Euclidean distance between two points. The proximity layers were then reclassified into two intervals based on their priority. The reclassified datasets were combined using the raster calculator tool for each category and priority level. The priority regions obtained from the raster calculator were then extracted and polygonised to get priority regions in the vector format. The reclassification analysis and raster calculation are conducted separately for high and medium-priority cases. Finally, all the car parks within the priority regions are highlighted, providing the details of locations suitable for the placement of EVCS. These steps are conducted separately for all three charging types (shared-residential, en-route, and destination) for two priority levels (charging points to be installed by 2025 and 2030).
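The same selection logic can also be expressed directly in vector form. The snippet below is a simplified geopandas approximation of the high-priority shared-residential case, using Euclidean distances in the Irish Transverse Mercator projection; the file names, column names and the purely vector-based formulation are assumptions for illustration, whereas the study itself used the raster proximity and reclassification tools in QGIS described above.

```python
import geopandas as gpd

CRS = "EPSG:2157"  # Irish Transverse Mercator, so distances are in metres

# Hypothetical input layers exported from OSM / CSO.
car_parks = gpd.read_file("car_parks.gpkg").to_crs(CRS)
residences = gpd.read_file("residences.gpkg").to_crs(CRS)
lamp_posts = gpd.read_file("lamp_posts.gpkg").to_crs(CRS)
existing_evcs = gpd.read_file("existing_evcs.gpkg").to_crs(CRS)
eds = gpd.read_file("electoral_districts.gpkg").to_crs(CRS)  # assumed 'pop_per_km2' column

# High-priority shared-residential thresholds from the text.
MAX_RESIDENCE_DIST = 500       # m
MIN_EXISTING_EVCS_DIST = 1000  # m
MAX_LAMP_POST_DIST = 20        # m
MIN_DENSITY = 4000             # inhabitants per km^2

# Union geometries once so distance checks are cheap per car park.
residence_union = residences.unary_union
lamp_union = lamp_posts.unary_union
evcs_union = existing_evcs.unary_union
dense_eds = eds[eds["pop_per_km2"] > MIN_DENSITY].unary_union

candidates = car_parks[
    (car_parks.distance(residence_union) <= MAX_RESIDENCE_DIST)
    & (car_parks.distance(lamp_union) <= MAX_LAMP_POST_DIST)
    & (car_parks.distance(evcs_union) >= MIN_EXISTING_EVCS_DIST)
    & car_parks.intersects(dense_eds)
]
candidates.to_file("proposed_shared_residential_2025.gpkg", driver="GPKG")
```

Either route (raster proximity surfaces or vector distance filters) should highlight the same set of car parks, provided the same thresholds and distance metric are used.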
Evaluation of the proposed methodology
To evaluate the proposed methodology and ensure that the results follow the proposed selection criteria, buffer zones are constructed around each parameter using vector analysis. A buffer is a vector technique used in GIS to define zones at specific distances from an item or collection of objects (QGIS Project, 2022). Buffers are ideal for analyses with distance restrictions. If the final charging points are inside the designated buffer zones (using Euclidean distances), the site meets all requirements and is thus considered optimal.
For high-priority shared-residential charging points, buffer zones with radii of 500 m, 1000 m, and 20 m were built around the respective selection criteria, including houses and apartments, existing stations, and lighting. The EDs with a population beyond 4000 inhabitants per km² were highlighted. For medium-priority shared-residential charging points, buffer zones with radii of 1000 m, 500 m, and 20 m were built around the respective selection criteria, including houses and apartments, existing stations, and lighting. The EDs with a population of up to 4000 per km² were highlighted. Similar evaluation and verification are done for the en-route and destination charging categories based on their respective site selection criteria to confirm the correctness of the results.
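In code, this verification amounts to buffering each criterion layer and asserting containment (or exclusion) of the proposed points. A minimal hedged sketch for the high-priority shared-residential case, again with hypothetical layer names, could read:

```python
import geopandas as gpd

CRS = "EPSG:2157"
proposed = gpd.read_file("proposed_shared_residential_2025.gpkg").to_crs(CRS)
residences = gpd.read_file("residences.gpkg").to_crs(CRS)
lamp_posts = gpd.read_file("lamp_posts.gpkg").to_crs(CRS)
existing_evcs = gpd.read_file("existing_evcs.gpkg").to_crs(CRS)

# Buffer zones around the selection criteria (radii as stated in the text).
residence_zone = residences.buffer(500).unary_union
lamp_zone = lamp_posts.buffer(20).unary_union
exclusion_zone = existing_evcs.buffer(1000).unary_union

# Treat each proposed car park as a single charging point for the check.
pts = proposed.geometry.centroid
ok = (
    pts.within(residence_zone)
    & pts.within(lamp_zone)
    & ~pts.within(exclusion_zone)
)
print(f"{ok.sum()} of {len(proposed)} proposed sites satisfy all buffer checks")
```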
Determining the coverage of proposed shared-residential charging stations
The success of any charging station depends on the number of people who benefit from it. Hence, an attempt was made to determine the population served by each proposed EVCS. Such an estimate also enables the prediction of usage rates, which are crucial for assessing the possible impact of the charging point. Studies show that people are willing to walk for five minutes, a distance also known as a 'pedestrian shed', before driving (Morphocode, 2018). A five-minute walk is around 400 m based on the average walking speed, and five minutes by bicycle covers about 1000 m considering an average cyclist speed of 12 kmph in Dublin (Doorley et al., 2015; Irish Cycle, 2009).
The population served by each proposed charging point is calculated using vector analysis in QGIS. The buffer tool in the vector analysis defines zones at specific distances from each charging point. The ED population within each buffer zone is extracted, and this value is taken as the population served by the corresponding charging station. The distances for the buffer are determined by two active modes of travel, including walking and cycling. This study assumes that a charging station is most beneficial to those who can access the same by walking or cycling for less than five minutes.
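A simple way to reproduce this estimate is to buffer each proposed point by the walking or cycling distance and intersect the buffer with the ED layer. The sketch below is an illustrative geopandas version with hypothetical file and column names; it apportions each ED's population by the overlapping area, which is one reasonable reading of extracting the ED population within each buffer zone, rather than the paper's exact procedure.

```python
import geopandas as gpd

CRS = "EPSG:2157"
proposed = gpd.read_file("proposed_shared_residential_2025.gpkg").to_crs(CRS)
eds = gpd.read_file("electoral_districts.gpkg").to_crs(CRS)  # assumed 'population' column

WALK_RADIUS_M = 400    # ~5 minutes at 4.8 km/h
CYCLE_RADIUS_M = 1000  # ~5 minutes at 12 km/h

def population_served(points, eds, radius_m):
    """Area-weighted population within radius_m of each proposed charging point."""
    eds = eds.copy()
    eds["ed_area"] = eds.geometry.area
    buffers = gpd.GeoDataFrame(
        {"site_id": points.index},
        geometry=points.geometry.centroid.buffer(radius_m),
        crs=points.crs,
    )
    pieces = gpd.overlay(buffers, eds[["population", "ed_area", "geometry"]], how="intersection")
    # Apportion each ED's population by the fraction of its area inside the buffer.
    pieces["pop_share"] = pieces["population"] * pieces.geometry.area / pieces["ed_area"]
    return pieces.groupby("site_id")["pop_share"].sum()

walk_pop = population_served(proposed, eds, WALK_RADIUS_M)
cycle_pop = population_served(proposed, eds, CYCLE_RADIUS_M)
print(walk_pop.describe())
print(cycle_pop.describe())
```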
Results
This section presents the results obtained from the GIS-based analysis, including the selected sites for shared-residential, en-route, and destination charging points. Further, the results are verified, and the population to be served by the proposed charging points is estimated. It should be noted that the results presented in this paper assume no changes to the urban dynamics of the city, nor do they postulate future modifications to these dynamics. In the longer term, cities like Dublin may have changing dynamics that could result in different findings than those presented in this paper.
Suitable locations for shared-residential charging stations
The locations suitable for shared-residential charging stations were identified based on proximity to residential apartments, population, distance from existing charging stations, proximity to lamp posts and availability of car parking. These regions are further extracted, and the car parks are highlighted as optimal locations for EVCS. The proposed and existing EVCS locations are shown in Fig. 7. This study identified 117 EVCS locations for the shared-residential category to be installed by 2025. These sites are within 500 m of residential zones and 1000 m from existing charging stations. The proposed locations are dispersed throughout the study area, mainly on the outskirts with high population density, due to a concentration of existing charging infrastructure in the city centre. Further, 453 charging point locations in the shared-residential category are proposed to be installed by 2030. These charging points are located in regions with medium to low population density, within 1000 m of residential areas and at a distance of 500 m from existing EV charging stations. The proposed numbers of shared-residential charging point locations within the study area, differentiated by county and level of priority, are listed in Table 5.
Table 4. Criteria for selection of the location of EVCS based on category and priority.
Suitable locations for en-route charging stations
The suitable locations for en-route charging points were identified based on proximity to major roads, distance from existing stations, proximity to lamp posts, and availability of car parking. The car parking spaces within these regions are highlighted as the proposed en-route charging locations and plotted in Fig. 8.
This study identified 68 potential en-route charging stations along the motorway, primary, secondary, and tertiary roads for implementation by 2025. These locations are situated within 150 m of the road and at least 500 m away from the existing charging infrastructure, and are mainly located along the borders of each county council region. For the next stage (implementation by 2030), 165 potential charging point locations are identified within 150 m of roads and 500 m away from the existing charging infrastructure. The proposed numbers of en-route charging station locations within the study area, differentiated by county and level of priority, are listed in Table 6.
Suitable locations for destination charging stations
The suitable locations for destination charging stations were identified based on proximity to amenities, place of work or study, distance from existing charging stations, proximity to lamp posts and availability of car parks. The proposed locations for destination charging stations for high and medium-priority scenarios are shown in Fig. 9, along with existing charging station locations.
In the destination charging category, 585 and 2462 optimal EVCS locations are proposed for installation by 2025 and 2030, respectively. These locations are close to trip attractors, such as supermarkets, hotels, and hospitals. The proposed numbers of destination charging station locations within the study area, differentiated by county and level of priority, are listed in Table 7.
Evaluation of the proposed site-selection methodology
The buffer zones and the highlighted ED population for different levels of priority are displayed in Fig. 10.
The selected car parks were cross-verified to confirm that all the optimal car parks obtained from the analysis fall into buffer zones containing houses, apartments, and lighting and are within the highlighted EDs. It is also ensured that these car parks are located away from the buffer zone of the existing charging stations. Results show that all the proposed charging stations are well within the required buffer zones.
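This consistency check is straightforward to automate. The function below is a minimal sketch of such a verification, reusing the (assumed) buffer geometries and candidate car parks from the earlier screening sketch; it is not the authors' code.

```python
# Minimal sketch of the cross-verification step. 'candidates' is assumed to be a
# GeoDataFrame of proposed car parks; the other arguments are shapely geometries
# for the criteria buffers built earlier.
def verify_sites(candidates, near_homes, near_lamps, dense_eds, far_existing):
    checks = {
        "overlap residential buffer": candidates.intersects(near_homes).all(),
        "overlap lamp-post buffer": candidates.intersects(near_lamps).all(),
        "within highlighted EDs": candidates.intersects(dense_eds).all(),
        "outside existing-EVCS buffer": (~candidates.intersects(far_existing)).all(),
    }
    for name, passed in checks.items():
        print(f"{name}: {'OK' if passed else 'FAILED'}")
    return all(checks.values())
```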
Population served by the proposed shared-residential charging stations
The overall population served by the proposed charging points considering five-minute walking and five-minute cycling for 2025 and 2030 are shown in Table 8.
The buffer zones around each proposed charging station and the corresponding population served are shown in Fig. 11.
The results show that the deployment of the proposed charging stations will significantly increase the population served between 2025 and 2030, and that the accessibility of the charging stations will expand over time.
Discussion
This research used a GIS-based analysis to identify ideal locations of EVCS in urban areas based on spatial overlaps between critically relevant criteria specified on the basis of expert opinion and literature. This approach aligns with the current requirements of the Dublin Local Authority and with a few previous studies that adopted a practical GIS-based analysis to identify EVCS locations, an approach that can be implemented with limited resources (Cluzel et al., 2022; Department of Transport, 2021a; Grote et al., 2019). However, in addition to the criteria used previously, this research identifies lamp posts for conversion into slow shared-residential charging stations, which is a significant contribution of this work and aligns with current advancements in the Internet of Things (Griffiths, 2018). Moreover, the approach also considers the categorisation of EVCS based on targeted end-users and installation priority. The lamp posts at the specified locations could be retrofitted with EV charging equipment to provide improved coverage of slow charging stations at a lower infrastructure cost, thereby increasing EV penetration (Bruno et al., 2019; Zhang et al., 2019).
Compared to the methodologies adopted in the literature (Chen et al., 2013), the GIS-based approach adopted in this study is practical for implementation by local councils as it requires minimal resources and uses open-source data, which is readily available. The study findings demonstrate the need for more charging points further away from the city centre. As suggested in previous studies, the population served within a five-minute walk or cycle of an EVCS is critical to enhanced EV uptake, especially in dense areas with limited access to off-street parking (Janjić et al., 2021; Mahdy et al., 2022). The EVCS locations proposed in this work can provide access to a total of 140,274 people by a five-minute walk in the first level of implementation, and the population served will significantly increase between 2025 and 2030. Previous studies have shown that habit compatibility is vital in ensuring the practical usage of EVCS (Philipsen et al., 2016). The EVCS locations proposed in this study are consistent with parking habits, safety perceptions and refuelling demands.
The results presented in this paper have significant implications for policymakers in comprehending potential EV users' charging patterns, mainly where they prefer to charge. With this information, decision-makers may develop intelligent charging policies and offer financial or other incentives that may be helpful for grid load management. The locations proposed in the study can be utilised as a guide when distributing charging stations, resources, and funding. The provision of EVCS at these ideal locations can improve the uptake of EVs amongst those who are reliant on cars but do not have access to off-street parking, thus improving EV access equitably as envisioned by the Department of Transport, Ireland (Department of Transport, 2021a; Iravani, 2022). Based on anticipated future development scenarios, the results also contribute to the information on the potential of renewable energy sources to satisfy EV demand. It may be noted that the present study does not consider judgement criteria or stakeholder preferences in decision-making, as followed in other multi-criteria methodologies (Carra et al., 2022; Kaya et al., 2020). Instead, a practical GIS-based spatial overlap analysis is adopted to identify the ideal locations of EVCS based on expert opinion, literature and the open-source database.
Conclusions
This study identified the optimal locations for EVCS for the shared-residential, en-route, and destination categories in the Dublin region through a GIS-based analysis using OSM. Results from this study can serve as a helpful guide for practitioners and policymakers in deciding the best locations for the successful deployment of charging infrastructure. A significant highlight of this study is the transferability of the proposed approach. As the suggested methodology is based on QGIS and OSM, this research can serve as a model for employing open-source GIS-based analysis to resolve complicated spatial issues. In such cases, the site selection criteria may be chosen based on the targeted end-user characteristics for that region.
This study segregates the proposed charging points into high-priority sites, which may be installed by 2025, and medium-priority sites, which may be established by 2030. The sites were selected based on population density, availability of parking spaces, distance from existing charging stations, and proximity to residences, lamp posts, roads, amenities, and places of work or study. The study identifies 770 and 3080 optimal charging station locations to be installed by 2025 and 2030, respectively. One hundred and seventeen shared-residential charging points are proposed for 2025, expected to serve a population of 140,274 and 553,306, considering accessibility by five-minute walking and five-minute cycling, respectively. Four hundred and fifty-three shared-residential charging points are proposed for 2030, which could serve a population of 525,630 and 2,041,977, considering accessibility by five-minute walking or cycling, respectively. The study has certain limitations that might have influenced the interpretation of the research findings. Most of the datasets used in this study are obtained from OSM, which is considered sufficiently accurate (Mooney & Minghini, 2017). However, as OSM data are contributed and edited openly, they remain vulnerable to errors. Additional selection criteria, such as the number of parking spaces within the car parks, the number of EV users per household, traffic volume, distance from 10 kV grid lines, and off-street parking data, were included in the preliminary analysis. However, these criteria were later excluded due to the unavailability of data. The assumed proximity distances and the structure of the GIS data are further constraints of the study. Also, for lamp posts to be considered suitable for charging, they should be positioned near the road pavement and have a minimum internal diameter of 140 mm (Cluzel et al., 2022). Another limitation of this study is that it does not consider the electric distribution network in the region.
The results obtained from the GIS analysis can be further enhanced by incorporating additional criteria such as the number of parking spaces within the car parks, number of EV users per household, traffic volume, distance from 10 kV grid lines, building type, availability of driveways in the area and on-street parking behaviour. Considering user charging behaviour, perspectives of municipalities and grid companies, and the impact of pricing on demand fluctuation could further improve the identification of EVCS locations. Also, further research should be completed on the investment priorities of en-route and destination charging, especially as battery technology improves and driving range increases.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
The authors do not have permission to share data. | 2023-04-11T15:02:59.912Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "99af1b296b24bac888569fced0dda458cc435383",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.scs.2023.104573",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "eb0937f44d9b681a55217a56b728498ea3b0cb27",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Geography"
],
"extfieldsofstudy": []
} |
252891887 | pes2o/s2orc | v3-fos-license | Feline Oncogenomics: What Do We Know about the Genetics of Cancer in Domestic Cats?
Simple Summary Cancer is a significant cause of suffering and death in domestic cats. In humans, an understanding of the genetics of different types of cancers has become clinically important for all aspects of patient care and forms the basis for most emerging diagnostics and therapies. The field of ‘oncogenomics’ characterises the alterations of cancer-associated genes that are found in tumours. Such a thorough understanding of the oncogenome of human tumours has only been possible due to a high-quality reference genome and an understanding of the genetic variation that can exist between people. Although a high-quality reference genome for cats has only recently been generated, investigations into understanding the genetics of feline cancers have been underway for many years, using a range of different technologies. This review summarises what is currently known of the genetics of both common and rare types of cancer in domestic cats. Drawing attention to our current understanding of the feline oncogenome will hopefully bring this topic into focus and serve as a springboard for more much-needed research into the genetics of cancer in domestic cats. Abstract Cancer is a significant cause of morbidity and mortality in domestic cats. In humans, an understanding of the oncogenome of different cancer types has proven critical and is deeply interwoven into all aspects of patient care, including diagnostics, prognostics and treatments through the application of targeted therapies. Investigations into understanding the genetics of feline cancers started with cytogenetics and was then expanded to studies at a gene-specific level, looking for mutations and expression level changes of genes that are commonly mutated in human cancers. Methylation studies have also been performed and together with a recently generated high-quality reference genome for cats, next-generation sequencing studies are starting to deliver results. This review summarises what is currently known of the genetics of both common and rare cancer types in cats, including lymphomas, mammary tumours, squamous cell carcinomas, soft tissue tumours, mast cell tumours, haemangiosarcomas, pulmonary carcinomas, pancreatic carcinomas and osteosarcomas. Shining a spotlight on our current understanding of the feline oncogenome will hopefully serve as a springboard for more much-needed research into the genetics of cancer in domestic cats.
Introduction
Between 25% and 56% of households in the UK, USA and Canada report having a pet cat [1][2][3]. Cancer is a significant cause of morbidity and mortality in domestic cats, with many studies showing that the majority of tumours arising in these animals are malignant (Table 1). Sadly, however, feline cancers have not been studied as extensively as human or canine cancers. In particular, there have been comparatively fewer investigations into the genetics of these tumours and as such there are missed opportunities; once the oncogenome of feline cancers has been more fully characterised, there is much potential for the development of novel diagnostic tools, prognostic markers and/or more targeted therapies (Figure 1). In addition, there are benefits in understanding the aetiology of these tumours, such as detecting the presence of viruses or a mutational signature demonstrating exposure to carcinogens such as ultraviolet (UV) radiation or tobacco smoke. Finally, analysis of the germline of tumour-bearing cats will potentially allow identification of possible tumour-predisposing alleles, which would make it possible to breed such alleles out of the population. Thus, a thorough understanding of the oncogenome of feline cancers is critical.
Table 1. A summary of studies looking at the types of tumours most commonly found in domestic feline populations and the relative severity of those tumours.
As detailed in the sections below, many early studies investigating the genetics of feline cancers have used cytogenetics, which looks at the structure of chromosomes for changes such as broken, missing, rearranged or extra chromosomes. Other studies have looked at DNA methylation patterns, as methylation is an epigenetic mechanism of controlling gene expression without altering the DNA sequence. Those that have performed genetic studies to investigate the mutation or expression status of genes have mostly taken a single-gene approach, examining the mutational status of specific exons of a particular gene known to be involved in human cancer, with the most commonly investigated gene being the tumour suppressor gene, TP53. TP53 encodes p53, which governs a complex anti-proliferative transcriptional program critical for counteracting transformation and tumour growth, and is the most mutated gene in human cancers [10]. For a more genome-wide approach, a well-annotated genome is required.
In 2006, the International Cat Genome Sequencing Consortium (ICGSC) released 'ASM18133v3' as the first assembly of the domestic cat (Felis catus) genome reference. It was generated from the DNA of a female Abyssinian breed cat (named 'Cinnamon') kept by Dr. Kristina Narfstrom at the University of Missouri. This 2× coverage draft assembly was highly fragmented, requiring 174,000 contigs to cover half the cat genome, and both the genome assembly and annotation were heavily dependent upon comparative mapping to human and dog genomes [11]. The ICGSC has gradually improved this reference genome over the years with many version releases benefitting from advances in sequencing, assembly and mapping technologies. In 2014, an ~14× coverage whole-genome assembly for 'Cinnamon' was released, closing gaps of sequence that were unavoidable with previous low-coverage shotgun genome sequencing; it spanned 2.35 Gb and included annotation of 21,865 protein-coding genes [12,13]. The current assembly version, 'Felis_catus_9.0', was released in 2017 and has exceptionally long gap-free segments; improvements in genomic features include the definition of pseudogenes, small non-coding RNAs, lincRNAs and novel genes that were absent in previous versions of Felis_catus [14]. This is now a high-quality reference genome for the domestic cat.
The present paper summarises the genetic investigations of feline cancers that have been carried out to-date, utilising the range of technologies that have been available over the years.
Lymphoma
Lymphoma is a malignant cancer of lymphocytes, and thus many different tissues can be affected, with common sites including the gastrointestinal tract (GI; Figure 2), mediastinum (organs in the chest, such as lymph nodes and thymus) and kidneys. GI lymphoma is the most common type, accounting for 50-70% of the cases [15]. Mediastinal and renal lymphoma are frequently associated with feline leukaemia virus (FeLV) and as cats have become more routinely vaccinated for this virus, these two sites of lymphoma are now less common [16], although this can differ between countries [17]. Despite this decline in viral-associated cases in some countries, studies have shown that GI lymphoma is becoming more prevalent [18]. Lymphoma is usually treated with chemotherapy, and/or surgery depending on the location, and the prognosis typically depends on the location and histologic grade of the tumour. Most cases of gastrointestinal lymphoma are low-grade and with standard treatment of chlorambucil and glucocorticoids up to 96% of cats may see a clinical response (complete or partial remission) and this may last for 2-3 years if complete remission is achieved [19,20], whereas only 30% of cats with high-grade GI lymphoma have achieved complete remission using a standard multi-agent chemotherapy protocol, with a median progression-free interval of only 50 days [21]. The reported median survival time (MST) for mediastinal lymphoma is variable and may be dependent on FeLV status; however, recent literature generally only includes a small number of FeLV-positive cases [22][23][24]. Feline renal lymphomas have a worse prognosis (MST of up to 203 days) compared to other locations, regardless of treatment with either glucocorticoids alone or in combination with multiagent chemotherapy [25].
There have only been limited investigations into understanding the genetics of feline lymphoma. Several of these have involved the use of feline lymphoma cell lines ( Table 2) and there are only a handful that have involved the use of lymphoma tissues from cats. Table 2. Examples of feline lymphoma cell lines and how they have been used to understand the genetics of feline lymphoma.
Lymphoma Cell Lines: Genetic Investigations Performed and Results Obtained
3201 cell line [26]: Examined for mutations in TP53; a non-synonymous mutation was found at codon 235.
3201, FT-1, FL-74, KO-1, R96 cell lines [27]: Cytogenetic analysis and examination for mutations in TP53 and mRNA expression levels of MDM2; 3201, FL-74 and R96 showed centrosomal amplification and chromosomal instability, mutations in TP53 were found in all the cell lines (although only 3201 had a non-synonymous mutation at codon 235), and none of the cell lines showed elevated MDM2 mRNA levels.
FL-7, FT-1, 2301, KO-1, R96 cell lines [28]: Examined for mutations in GADD45; no mutations were found.
S87 cell line [29]: Cytogenetic analysis; showed centrosomal amplification and chromosomal instability.
3281, FT-1, MS4 cell lines [30]: Treated with a DNA methylation inhibitor; aberrant gene expression patterns observed.
FT-1, MS4, KO-1 cell lines [31]: Genome-wide methylation profiling; showed thousands of CpG sites with gain of methylation at normally unmethylated CpG islands and loss of methylation at normally methylated non-CpG islands.
Investigation of lymphoma tissue samples found that TP53 was mutated at codon 282 (exon 8) in a case of feline lymphoma [32] and at codon 199 in 1/8 cases of feline lymphoma [26]. In addition, Southern blot analyses confirmed there was no deletion or rearrangement of either the p21WAF1 or p27Kip1 genes in 19 cases of feline leukaemia and lymphoma, with the analysis of p27Kip1 cDNA in 10 of these cases indicating there were no non-synonymous mutations present [33]. Investigations into N-RAS found a somatic non-synonymous mutation giving rise to a Q61K amino acid change in 1/15 feline lymphomas (Q61K is a hot-spot mutation in human cancers) [34]. Investigations into Fas, which belongs to the tumour necrosis factor receptor family and transduces the death signal after binding to the Fas ligand, led to one group detecting FAS mRNA in 7/11 feline lymphoma samples, with some samples showing the presence of alternatively spliced transcripts (similar to what they had found using feline lymphoma cell lines) [35]. More recently, analysis of 42 feline T-cell GI lymphoma samples by immunohistochemistry demonstrated broad activation of STAT3 and STAT5B, and screening for known activating mutations in the same samples identified the presence of the STAT5B N642H driver mutation (found in humans with type II enteropathy-associated T-cell lymphoma [36]) in 7/42 (17%) samples, with the majority being found in the low-grade subtype (known as enteropathy-associated T-cell lymphoma type II or small cell lymphoma), which is the most common subtype [37].
Mammary Tumours
Mammary tumours are the third most frequently occurring tumour in cats. The vast majority of feline mammary tumours are aggressive and malignant (~85%), with multiple tumours and metastasis frequently seen at time of diagnosis, and relapse with rapid progression generally resulting in a poor prognosis (Figure 3) [38]. Most of the malignant feline mammary tumours are classified as carcinomas, which are divided into different histological subtypes [39]. Surgery is the most widely used treatment for mammary neoplasms in the cat (complete unilateral or bilateral mastectomy to reduce chance of local recurrence), either alone or in combination with chemotherapy (although the overall benefit of adjuvant chemotherapy remains unproven [40]). As the survival times (disease-free survival (DFS) and cancer-specific overall survival (OS)) of cats with mammary carcinoma are short, prognostication is essential. Molecular classification of feline mammary carcinomas using five markers (ER, PR, HER2, Ki-67 and CK5/6) has been shown to be prognostic, similar to that seen in human breast cancer patients [41]. For example, several studies have found that the Luminal A subtype (ER+ and/or PR+, HER2−, low Ki-67 index) is associated with the highest OS whereas the triple negative basal-like subtype (ER−, PR−, HER2−, CK5/6+) is associated with the lowest OS [41][42][43]. Analysis of copy number variation (CNV) in 33 female cats with tubulopapillary and solid mammary carcinoma followed for 2 years post-surgery revealed important survival differences between molecular subtypes (but not histopathological subtypes); luminal A tumours exhibited the highest DFS/OS and were associated with the lowest amount of CNVs, whereas the basal-like triple negative mammary carcinomas had the worst DFS/OS and were the most aberrant [43]. There was an association between specific CNVs and poor prognosis; copy-number losses in chromosome B1 (1-23 Mb) and copy-number gains in B4 (1-29 Mb) and F2 (64-82.3 Mb) were associated with poor outcomes [43]. Interestingly, several potential prognostic markers identified in these feline mammary carcinomas were recently found to be relevant prognostic markers in human triple negative basal-like breast cancer [44].
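To make the subtype definitions above concrete, the sketch below encodes just the two immunohistochemical profiles described in this paragraph as a simple decision rule. It is illustrative only: the published five-marker classification includes additional subtypes and threshold definitions (for example for the Ki-67 index) that are not reproduced here.

```python
# Illustrative sketch of the two molecular subtypes defined above; not the full
# published classification scheme. Marker arguments are booleans for positive
# immunostaining, plus a flag indicating a low Ki-67 proliferation index.
def molecular_subtype(er: bool, pr: bool, her2: bool, ck56: bool, ki67_low: bool) -> str:
    if (er or pr) and not her2 and ki67_low:
        return "Luminal A"                       # associated with the highest overall survival
    if not (er or pr or her2) and ck56:
        return "Triple-negative basal-like"      # associated with the lowest overall survival
    return "Other / not covered by this sketch"

# Example: an ER+/PR+, HER2-negative carcinoma with a low Ki-67 index
print(molecular_subtype(er=True, pr=True, her2=False, ck56=False, ki67_low=True))
```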
Several studies have looked at the expression of genes known to be important in breast cancer in humans, such the tyrosine kinase receptor proto-oncogene RON, and the ERBB2 proto-oncogene (also known as HER2, neu), to determine if they also show altered expression in feline mammary tumours. The short form isoform of RON (sf-RON) and ERBB2 are overexpressed in human breast cancer and correlate with poor prognosis [45,46]. In feline mammary carcinomas, sf-RON mRNA expression was found in 18/47 (38%) cases, and was associated with poorly differentiated tumours, a shorter disease-free interval and a shorter overall survival [47]. ERBB2 mRNA levels have been found to be elevated in 44-54% of mammary carcinomas in cats (6/11 cases [48] and 12/44 cases [49]), although numerous studies have shown that the majority of these tumours do not carry ERBB2 gene amplifications [49][50][51]. One study found a significant association between ERBB2 RNA levels and tumour malignancy as well as molecular subtypes; higher ERBB2 expression correlated with lower malignancy grade and with the luminal A and HER2 subtypes, whereas the lowest ERBB2 levels correlated with the triple negative tumours [49]. Another study found that relative to disease-free tissue from the same cat, mammary carcinomas showed overexpression of several other cancer-associated genes, including CCND1, PKM2, PTBP1 and TOP2α, whilst levels of TP53, c-MYC, YBX1 and FUS remained unchanged [52]. Interestingly, positive correlations were found between tumour size and YBX1 RNA levels, lymph node metastasis and c-MYC RNA levels [52]. A study of TWIST1 mRNA expression in 25 feline mammary gland tissues (7 normal glands, 3 hyperplastic lesions, 1 benign tumour and 14 carcinomas) found that the carcinomas had significantly lower TWIST1 mRNA levels than benign lesions and disease-free mammary glands [53].
Genomic sequence variants in feline mammary tumours have not been widely investigated and to-date only single genes have been looked at, specifically genes known to be mutated in human breast cancer. Somatic alterations in TP53 in feline mammary tumours have been reported in individual cats, including a missense mutation in exon 8 of a solid mammary carcinoma [54], a deletion of a tandem repeat (involving codons 251-256) of exon 7 in a tubulopapillary mammary carcinoma [55] and a missense mutation in exon 5 of a mammary adenocarcinoma [56]. Similarly, there was report of a cat with mammary carcinoma carrying a pathogenic somatic mutation in exon 11 of BRCA2 [57]. One study looked at the coding region of the TWIST1 gene in 34 feline mammary carcinomas, but found no somatic variants present [53].
There have been very few genetic studies on pre-disposition to mammary tumours in cats, with the focus having been to look for the presence of germline mutations in BRCA1 and BRCA2, as the cumulative breast cancer risk in humans is 72% for BRCA1 and 69% for BRCA2 carriers [58]. One study found that none of the 24 cats with mammary carcinoma they analysed showed any variants in these genes [59], though a more recent study found that 3/9 cats with mammary carcinoma carried germline variants in exon 9 of BRCA1 and they postulated that these might be associated with a higher risk of hereditary mammary carcinogenesis [57]. In humans, germline mutations in the TWIST1 oncogene have been suggested to predispose to breast cancer [60], however, a study of 34 feline mammary carcinomas only identified two intronic germline variants (535delG and 460C>T, in 3 and 4 cases, respectively) and noted there was no association between these alterations and TWIST1 mRNA levels [55].
Squamous Cell Carcinoma
Squamous cell carcinoma (SCC) is a malignant neoplasm of epidermal cells in the squamous epithelium. The most commonly occurring sites of SCC in cats are the skin and oral cavity. Cutaneous SCC (cSCC) accounts for 15% of skin tumours in cats, with most occurring on the head (Figure 4), often involving the pinna, eyelid and nasal planum (cSCC at the tips of the ears in white cats has a characteristic presentation) [61]. SCC is the most common malignant tumour of the oral cavity in cats (60-70% of all oral malignancies), with most involving the lingual region and dentate jaws [62,63]. Most SCCs are locally invasive and, in particular, oral SCC (oSCC) can exhibit bone invasion and osteolysis. Tumour spread to regional lymph nodes may occur, but distant metastases are rare and usually do not occur until late in the disease process. Surgery is the most widely used treatment option, with the ability for complete excision dependent upon the location of the tumour and its size. For example, a retrospective study of 61 cats with cSCC (of the nasal planum or pinna) found that surgery provided a median disease-free time of 594 days [64]. In contrast, due to its location, rapid tumour progression and diagnosis often at a late stage, the median survival time of oSCC rarely exceeds 1 year [62,63,65].
Prolonged exposure to UV light, lack of skin pigment ( Figure 4) and a sparse haircoat all contribute to the development of cSCC [66]. The mechanism frequently proposed for cutaneous SCC and its association with UV light involves mutation of the tumour suppressor gene p53. However, whilst a mutant form of p53 has been detected by immunohistochemistry in 13/19 (68%) feline cSCC cases (and 3/5 (60%) of feline oSCC cases) [67], genetic analysis of TP53 has only been performed in feline oSCC cases to-date. A 1 base pair deletion/frameshift somatic mutation in exon 4 of TP53 has been reported in one case of feline oSCC [58]. More recently, a group reported that 18/26 (69%) [68] and 21/31 (68%) [69] cases of feline oSCC carried non-synonymous mutations in exons 5-8. However, it is not clear whether these represent somatic or germline mutations as although these variants were only found at low frequency, or not at all, in normal mucosa, the normal samples were not from the same cat as the oSCC tumours. The same group also analysed the methylation status of 10 genes (TERT, ZAP70, GP1BB, LRRTM1, FLI1, MiR124-1, MiR296, KIF1A, PARP15 and MAGEC2) in oSCC of cats, with statistically significant differences being found relative to normal and non-neoplastic mucosa from unaffected cats; ZAP70, FLI1 and MiR124-1 were hyper-methylated, whilst LRRTM1, KIF1A and MAGEC2 were hypo-methylated [69]. Recently, the same group demonstrated that DNA obtained from oral brushing could be analysed for TP53 mutation and DNA methylation (6-gene panel : ZAP70, FLI1, MiR124-1, KIF1A, MAGEC2 and MiR363) status to differentiate feline oSCC from controls (with a 69% sensitivity and an 86% accuracy) [70]. This is a promising example of how understanding the genetics of feline cancers can aid with diagnostics.
The ability of papillomaviruses to cause disease in felines has long been recognized and there have been suggestions they may be a significant cause of feline SCC [71]. However, the causal relationship between papillomavirus and feline SCC is not well established and there are mixed reports of the relative proportion of SCC samples carrying papillomavirus. For example, a study using PCR primers to specifically amplify Felis domesticus papillomavirus-2 (FdPV-2) found papillomaviral DNA in 76% of UV-protected SCCs, and 42% of UV-exposed SCCs [72]. However, more recently a metagenomic approach using 'ViroCap' (a targeted capture and next generation sequencing tool to identify all known vertebrate DNA viruses) on 20 feline oSCC cases found that papillomavirus was not commonly associated with feline oSCC [73].
Soft Tissue Tumours
'Soft tissue tumour' (STT) is a generalised, and likely oversimplified, term for a diverse group of tumours that arise out of the extraskeletal connective tissues, such as fibrous connective tissue of the dermis or subcutis (fibrosarcoma), fat (liposarcoma), nerve sheaths (malignant nerve sheath tumours) and small blood vessels below the skin (perivascular wall tumours) [74]. In veterinary pathology, there is a drive towards more precise classification of these tumours, rather than lumping them together by general histologic characteristics, since in the dog these subtypes may have prognostic differences [75,76]. Malignant soft tissue tumours in the cat have historically been, and predominantly continue to be, grouped as soft tissue sarcomas (STS) in the literature and account for approximately 7% of skin and subcutaneous tumours in cats, within which fibrosarcomas are common [6,15,77]. These tumours are often grouped together as they behave similarly; often presenting as pseudoencapsulated tumours that are locally invasive, with up to 20% chance of metastasis (if high-grade) and a high risk of recurrence after surgical excision due to the tumours being poorly demarcated [6,15,77,78]. Some other types of sarcomas, including histiocytic sarcoma, rhabdomyosarcoma and haemangiosarcoma, are typically much more aggressive with distinct histologic characteristics and thus are generally considered separately.
There is a subset of malignant soft tissue tumours that develop at injection sites, after administration of vaccines and drugs, or sometimes due to other causes of chronic localised inflammation (for example microchip implantation), which are termed feline injection-site sarcomas (FISS). Fibrosarcomas are a common histologic type of these tumours; however, osteosarcoma, chondrosarcoma and others have been reported [79]. These are typically aggressive, and despite surgical removal with wide margins and additional postoperative treatment, local recurrence is common and a main cause of euthanasia. It has been suggested that the chronic inflammation that develops at the injection site plays an important role in the tumourigenic processes [80]. Indeed, the identification of the altered expression of matrix metalloprotease (MMP) family members in FISS compared to feline non-injection-site sarcomas and carcinomas (specifically levels of MMP2 and MT-MMP16, which were also significantly correlated with survival time) supports an underlying inflammatory pathogenesis for this tumour type [81]. In addition, it is well-established that inflammation can play key roles in driving tumour initiation, growth, progression and metastasis in human cancers (reviewed in [82]).
There have been very few studies looking at the underlying genetics of feline STS, with those to-date mostly focusing on the tumour suppressor gene TP53. A study of 150 domestic short-hair cats in the USA (50 with FISS and 100 'controls' that were disease free at the time of the study) found a strong association with FISS and two specific germ line single nucleotide polymorphisms in TP53 [83]. However, these findings were not replicated in a study using similar-sized cat population from Germany [84]. Somatic mutations in TP53 have been reported in several studies of FISS, ranging in frequency from 24-60% of cases [85,86] and were found to be significantly associated with rapid tumour recurrence and reduced overall survival [86]. Somatic mutations in TP53 have also been reported in other feline sarcomas, including spindle cell sarcoma, pleomorphic sarcoma and fibrosarcoma [56,87,88]. In contrast, no mutations were found in the coding regions of the tumour suppressor and cell cycle regulatory genes p21 waf1/cip1 and p27 kip1 in 45 feline fibrosarcomas [56].
Interestingly, microarray-based profiling of 46 feline sarcomas (19 FISS and 27 non-FISS) found numerous regions of highly recurrent copy number alterations, however, deletions of two specific regions were significantly associated with non-FISS cases, possibly suggesting a fundamental difference in the underlying genetics of FISS tumours from other STTs, or at the very least a discrete set of discriminatory markers [89]. The same group later used a higher resolution microarray platform to profile one of these FISS tumours and found DNA copy number imbalances involving several key cancer-associated genes including TP53, KIT, PTEN, RB1 and FAS [90]. This study also found major alterations in chromosome structure, with complex intrachromosomal rearrangements typical of those seen in aggressive soft-tissue sarcomas of other species [90]. Similarly, a study using RNAseq to profile the transcriptomes of three FISS tumours with patient-matched normal tissue (as well as cultures of FISS-derived cell lines and feline primary skin fibroblasts) found many similarities at the gene expression level with that seen in STT of dogs and humans [91].
Mast Cell Tumours
Mast cell tumours (MCTs) arise from mast cells, which are granulocytes that play a critical role in some immunological responses and are found in a wide variety of tissues throughout the body. MCTs can occur in cats both as cutaneous (including subcutis) lesions and as visceral lesions (splenic and intestinal MCTs). The cutaneous manifestation is more common, with cutaneous MCTs (Figure 5) being the second most common type of skin cancer in cats, accounting for 15-21% of all feline cutaneous neoplasms [61]. MCTs are the most common cause of splenic disease in cats and frequently involve multiple other viscera and bone marrow, whereas gastrointestinal MCTs are rare (accounting for just 4% of all intestinal neoplasms in cats [92]). Although the clinical and histological features of MCTs are well described, the biological behaviour of these tumours remains poorly understood (Figure 5), especially compared to dogs, which have well-established grading schemes with correlations to clinical outcomes available for cutaneous and subcutaneous MCTs [93][94][95][96]. Whilst many feline cutaneous MCTs may be benign, some can recur and/or spread to distant sites within months after excision [97].
Figure 5. (a) HE-stained, ×20 magnification (inset is low-power view). The long-term clinical outcome was reported as good, with a follow-up period of 1838 days. (b) Microscopic images of a well-differentiated mastocytic subtype of feline cutaneous mast cell tumour in an 8-year-old, male Domestic short hair cat. The mitotic count was 2 per 10 HPFs (400×; 2.37 mm²). HE-stained, ×20 magnification (inset is low-power view). The overall survival time was 598 days, with the clinical outcome reported as suspected metastatic spread (not histologically confirmed) involving the skin and the spleen. Critically, these cases show great similarity in histological appearance, but their outcomes were vastly different.
MCTs of the gastrointestinal tract are generally viewed as an aggressive form of feline MCT with metastasis to mesenteric lymph nodes being common [98]. One study reported that cats with poorly differentiated intestinal MCTs survive 2-30 days compared to 28-538 days for those with well-or moderately differentiated tumours [99]. Thus, whilst surgical removal of MCTs may be curative for many low grade cutaneous MCTs, the prognosis is much more guarded when there is metastatic disease or involvement of the spleen or intestine.
There has been very little investigation of the genetics that underlie MCTs in felines. Indeed, to-date only the mutation status of c-KIT has been investigated, as it encodes the receptor tyrosine kinase Kit, which plays a key role in mast cell proliferation, differentiation and migration [100]. Activating mutations have been identified in exons 6, 8, 9 and 11 (Table 3) in feline MCTs, in both domestic and wild felines. c-KIT point mutations in exons 8 and 9 can cause ligand-independent homodimerisation and subsequent Kit autophosphorylation in MCTs from humans [101], canines [102] and felines [103]. In canine MCTs, c-KIT exon 11 mutations comprise 64-83% of all reported c-KIT mutations [102,104,105] and c-KIT mutations are associated with both reduced progression-free and overall survival rates [106]. Tyrosine kinase inhibitors (TKIs), such as imatinib mesylate, are routinely used in canine MCT and human mastocytosis patients to target mutant Kit [107]. However, whilst an in vitro study observed growth inhibitory effects for four different TKIs in three feline MCTs carrying exon 8 mutations in c-KIT [108] and in vivo studies have found a beneficial response to a variety of TKIs in ~70% of cats with MCTs at different sites [103,109,110], the prognostic relevance of c-KIT mutations in feline MCTs is questionable. Indeed, one study reported that the mutation status of c-KIT is not significantly related to Kit protein expression, is not strictly correlated with biological behaviour of the tumour and has no influence on prognosis [111].
Haemangiosarcoma
Haemangiosarcoma (HSA) is a malignant tumour, for which, at least in dogs, current evidence suggests it originates from a hematopoietic precursor cell rather than from the blood vessel lining (endothelial) cells [114,115], and as such it can occur anywhere in the body. However, evidence in the mouse suggests an endothelial precursor [116] and the authors could not find similar studies performed in the cat. The main types of HSA in cats are dermal/cutaneous, subcutaneous/intramuscular and visceral (involving internal organs) [117]. The most common forms of HSA in cats are dermal/cutaneous and subcutaneous, which tend to form on the head [117], suggesting that sun exposure may be a risk factor, as it is for angiosarcoma (AS) of the head/face/neck/scalp (HFNS) in humans [118]. This is in contrast to dogs, in which visceral forms, particularly the splenic and right atrial/auricular, are more common [15]. Surgical removal of dermal/cutaneous HSAs in cats tends to be curative, with the subcutaneous forms tending to recur after surgery due to the difficulty for complete excision [117]. Surgical resection of visceral tumours may be possible for localized disease, however, as in dogs, metastatic disease is commonly already present at the time of diagnosis in cats [15,119].
There is only one study that has looked at the genetics of HSA in cats. A targeted sequencing approach was taken to look at ~1000 cancer-associated genes in paired tumour-normal samples of 13 feline cutaneous HSA cases [120]. The most recurrently mutated genes in the HSA samples were TP53 (6/13, 46% cases) and NOTCH1 (2/13, 15% cases) [120]. It is interesting to note that in human HFNS AS, the most commonly mutated gene is TP53 (9/19 patients, 47% cases), with NOTCH1 also being frequently mutated (5/19 samples, 26% cases) [120]. In addition, there were mutations in other genes in the feline HSA samples that have been shown to be recurrently mutated in human AS samples, including ATRX, GLI1, MTOR, PCLO, PGR, PIK3CA, RELN, SETD2 and TERT [120]. Copy number analysis of the feline cutaneous HSA samples showed relatively few somatic copy number alterations compared to human AS, with the most penetrant SCNAs being in smaller regions of chromosomes A2 and D2 [120]. In terms of these putative pathogenic germline variants in the feline orthologs of established human AS susceptibility genes, the same study found missense mutations in ERCC2, RB1, IDH1, IDH2, POT1, TP53 and XPC [120]. However, further analysis of larger cohorts will be needed to determine whether these genes/alleles do indeed play a role in germline predisposition to cutaneous HSA in cats.
Pulmonary Carcinoma
Primary pulmonary carcinoma is rare in cats, with one study finding they represented only 0.69% of all feline cases admitted to a veterinary teaching hospital over a 4.5 year period [121]. It is an aggressive neoplasm with one study finding that regardless of the histological subtype, ~80% of the feline pulmonary carcinomas had metastasized at the time of diagnosis (Figure 6a), with the most frequent type of metastasis being intrapulmonary metastasis (66.7%) [121]. Treatment frequently involves lung lobectomy and adjuvant chemotherapy, and depending on the degree of differentiation of the tumour, the median survival time can be as much as 698 days or as low as 75 days [122]. However, a significant number of feline lung tumours are classified as inoperable at diagnosis, due to extensive disease, metastasis or concurrent decompensated cardiomyopathy [123]. Additionally, some cats may present clinically for lameness due to digital metastases and be subclinical for the primary pulmonary carcinoma [124], so called "lung-digit syndrome" (Figure 6b).
There have only been three studies that have investigated the genetics of lung cancer in cats, and all of them have focused on the genes known to be important in human lung cancer, specifically TP53 and K-RAS [125] and HER2 (in a subset of human lung cancers) [126]. Specifically, a feline bronchioloalveolar carcinoma cell line (SPARKY) was demonstrated to have a mutation in TP53 at codon 167, but no alterations were seen in K-RAS (or H-RAS), and the karyotype was aneuploid (with evidence of genomic instability) [127]. However, sequencing of key regions of K-RAS and TP53 in three feline lung carcinomas (and matched normal lung) did not reveal any alterations [121]. More recently, fluorescence in situ hybridisation (FISH) in a dual-core tissue microarray was used to demonstrate HER2 amplification in 3/13 cases of feline pulmonary carcinoma, which was significantly associated with HER2 overexpression as analysed by immunohistochemistry [128].
Pancreatic Carcinoma
Pancreatic carcinoma in cats most commonly involves the exocrine acinar cells of the pancreas and it is an aggressive disease that has frequently undergone distant metastasis at the time of diagnosis [129]. As such, pancreatic carcinoma in cats is associated with a poor prognosis; the survival time is 2-4 months, with diagnosis frequently occurring at necropsy [15,129]. Longer survival times have been reported in cats who have their masses surgically removed (median survival time >300 days) [130] or cats treated with chemotherapy (toceranib phosphate treatment resulted in a survival time of 792 days post-diagnosis) [131].
As KRAS mutations occur at a high frequency in human ductal pancreatic carcinomas (although rarely in pancreatic acinar cell carcinomas), all studies on pancreatic carcinoma in felines to date have focused on the mutational status of KRAS (in particular codon 12, which is a hotspot in human cancers) [132]. One study found codon 12 KRAS mutations in 2/3 cats [133], although it is not clear whether these cases were acinar or ductal origin. A subsequent study of 18 cats with pancreatic acinar cell carcinoma found no KRAS mutations in either codon 12 or 13 [134], similar to that seen in pancreatic acinar cell carcinoma in humans.
Osteosarcoma
Osteosarcoma (OSA) is a malignant neoplasm arising from bone and is histologically characterised by the presence of tumour-derived osteoid. Primary bone tumours in cats are rare (an incidence of 4.9 per 100,000 individuals), with the majority of these being osteosarcoma (70-80%) [135,136]. They are characterised based on their location in and around the bone, with central (medullary) OSAs being the most common in dogs and cats. OSAs can arise from any bone, but are slightly more common at appendicular skeletal sites than axial ones [135]. Prognosis is dependent upon the location of the tumour, with the prognosis for axial OSA generally being poor (an average survival time of 6 months), in contrast to appendicular OSA, which is associated with a more favourable outcome, particularly when treated with complete surgical excision or amputation (an average survival time of 26-49 months) [137]. Although historically considered to have a better prognosis compared to dogs, a more recent report suggests a shorter median survival time and a higher metastatic rate than previously reported, with a median survival time of 527 days after limb amputation and distant metastases in 46.3% of all cases [138].
To-date there has only been one investigation into the genetics of OSA in cats, and that was the identification of a somatic missense mutation in codon 273 of exon 8 of TP53 in an OSA from the shoulder of an 8-year-old female cat [55]. This is interesting as this codon is a hotspot in human cancers. Unfortunately, feline OSA overall has not garnered the same research attention as canine OSA.
Looking towards a Future of Greater Understanding of the Feline Oncogenome
The recent release of a high-quality reference genome for the domestic cat [14] heralds the start of new and exciting times for our ability to understand the feline oncogenome. For example, alignment of whole-genome sequencing data from 54 domestic cats to the newly released reference genome identified genome-wide sequence variant information for this species, specifically single nucleotide variants (SNVs) and structural variants (SVs) [14]. Of the 16 loss-of-function SNVs that were identified, the most notable was a potential cause of early-onset feline mediastinal lymphoma, with a stop gain in the tumour suppressor gene FBXW7 (found in one cat and its offspring) [14]. Screening of additional cats will be needed to validate this FBXW7 stop gain as a causative mutation for mediastinal lymphoma susceptibility; however, it is an exciting prospect that we may be close to identifying a candidate gene/mutation for inherited cancer susceptibility in cats, and it serves to show the benefits that are to be gained from now having a high-quality reference genome for the domestic cat. These are truly exciting times for feline oncogenomics and it is hoped that this will rapidly translate into significant advances in diagnosis, prognosis and treatment of cancer in cats.
"year": 2022,
"sha1": "d964612c93e774a235301cb5185fc5c36e6db591",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2306-7381/9/10/547/pdf?version=1664885610",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c8cac079f81215a7852a83d2a255e1fb86d9e662",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
27807480 | pes2o/s2orc | v3-fos-license | The Structure of Mycobacterium tuberculosis CYP125
We report characterization and the crystal structure of the Mycobacterium tuberculosis cytochrome P450 CYP125, a P450 implicated in metabolism of host cholesterol and essential for establishing infection in mice. CYP125 is purified in a high spin form and undergoes both type I and II spectral shifts with various azole drugs. The 1.4-Å structure of ligand-free CYP125 reveals a “letterbox” active site cavity of dimensions appropriate for entry of a polycyclic sterol. A mixture of hexa-coordinate and penta-coordinate states could be discerned, with water binding as the 6th heme-ligand linked to conformation of the I-helix Val267 residue. Structures in complex with androstenedione and the antitubercular drug econazole reveal that binding of hydrophobic ligands occurs within the active site cavity. Due to the funnel shape of the active site near the heme, neither approaches the heme iron. A model of the cholesterol CYP125 complex shows that the alkyl side chain extends toward the heme iron, predicting hydroxylation of cholesterol C27. The alkyl chain is in close contact to Val267, suggesting a substrate binding-induced low- to high-spin transition coupled to reorientation of the latter residue. Reconstitution of CYP125 activity with a redox partner system revealed exclusively cholesterol 27-hydroxylation, consistent with structure and modeling. This activity may enable catabolism of host cholesterol or generation of immunomodulatory compounds that enable persistence in the host. This study reveals structural and catalytic properties of a potential M. tuberculosis drug target enzyme, and the likely mode by which the host-derived substrate is bound and hydroxylated.
The global threat to human health posed by the bacterium Mycobacterium tuberculosis (Mtb) was recognized by the World Health Organization some years ago (World Health Organization fact sheet on "Tuberculosis" located online at: www.who.int/mediacentre/factsheets/fs104/en), and it is estimated that one-third of the world's population is infected with the Mtb bacillus. Synergy with the HIV virus, failures in drug administration to patients, and the consequences of the development of drug- and multidrug-resistant strains of Mtb have made the situation ever more perilous and it is widely acknowledged that novel intervention strategies are needed (1).
The determination of genome sequences of Mtb strains led to revelations relating to the protein repertoire of the pathogen, and highlighted the large number of enzymes involved in lipid metabolism (2,3). Mtb has an extraordinary array of complex lipids, including unusual long chain, extensively substituted lipids (mycolipids) that form a waxy coat around the bacterium and are likely important in preventing antibiotic entry (4). Another interesting observation relating to lipid metabolizing enzymes is the large number (20) of Mtb cytochrome P450 (P450 or CYP) enzymes. P450s are heme-containing monooxygenases, well known for their roles in metabolism of fatty acids, steroids, and other lipophilic molecules (5). This suggests there may be critical roles for a number of these enzymes in Mtb lipid metabolism (6). Consistent with this theory, gene disruption and gene deletion studies have, to date, shown that Mtb CYP121 and CYP128 are essential genes for cell growth and viability (7,8). These P450s have recently been proposed to have roles in C-C bond formation in a cyclic dipeptide and in hydroxylation of respiratory menaquinone, respectively (9,10). Although physiological roles for many Mtb P450s remain unclear, Mtb CYP51B1 has been structurally and biophysically characterized, and catalyzes demethylation of various sterols (11,12). This activity is consistent with that of eukaryotic CYP51 enzymes, suggesting that CYP51B1 has roles in host sterol metabolism. Importantly, it was demonstrated that various azole drugs (that inhibit fungal CYP51 by coordinating the heme iron) are also potent inhibitors of mycobacterial growth, thus suggesting that one or more Mtb P450s may be azole targets (13)(14)(15). Econazole and other azoles bind tightly to various Mtb P450s, including CYP121, CYP51B1, and CYP130 (10,11,13,16,17). Econazole is effective in clearing Mtb infection in a mouse model, and recent studies on Mtb CYP130 (a P450 whose gene is deleted in the vaccine strain Mycobacterium bovis BCG) revealed the binding mode of the drug to this P450 (16,18).
Recently, a gene cluster in Rhodococcus sp. strain RHA1 was identified as being involved in catabolism of cholesterol (19). Several of these genes are conserved in Mtb, including the P450s CYP125 and CYP142 (20), suggesting that these have roles in cholesterol (or possibly other sterol) metabolism. Early studies of the protein interactions of the Mtb CYP125 with nitric oxide indicated that its ferrous-nitric oxide complex was relatively labile, and thus that CYP125 may be relatively resistant to macrophage-generated nitric oxide (21). Transcriptomic studies showed that Mtb H37Rv CYP125 is induced in macrophages, and it is reported to be essential for infection of mice; one of only 26 genes present in both categories (22). Furthermore, cholesterol, along with the phagosomal tryptophan-aspartate-containing coat protein, is crucial for Mtb entry into the macrophage and for establishment of intracellular infection by Mtb (23). In other work, genetic inactivation of the Mtb cholesterol oxidase (ChoD) resulted in attenuation of the choD mutant strain, implicating ChoD in Mtb pathogenesis (24). Also, recent studies implicated the actinobacterial mce4 gene locus (conserved in Mtb) with cholesterol/steroid uptake (25). Finally, it was shown that Mtb uses cholesterol as a source of carbon and energy for growth, suggesting that exploitation of host cholesterol may underlie persistence and survival in humans (26).
To investigate properties of the CYP125 P450 from the putative Mtb "cholesterol cluster," we have purified Mtb CYP125 heterologously expressed in Escherichia coli and explored its thermodynamic and spectroscopic features, including its ligand-binding properties. We have determined the CYP125 crystal structure in a ligand-free state and in complex with econazole and androstenedione. Generation of a molecular model of the cholesterol complex indicated that cholesterol C25 and the terminal methyl (C26/27) carbons are exposed to the heme iron. Turnover studies demonstrated conclusively that CYP125 is a cholesterol 27-hydroxylase. Our data suggest a key role for CYP125 in Mtb cholesterol metabolism as a C27 hydroxylase, and thus its importance in infectivity and in persistence of Mtb in the human host.
EXPERIMENTAL PROCEDURES
CYP125 Cloning, Expression, and Purification-CYP125 was cloned by PCR from a Mtb H37Rv cosmid library (from Institut Pasteur, Paris). The BAC clone containing CYP125 (Rv3545c) was prepared by standard protocols, and used as template DNA for the PCR using Pfu Turbo DNA Polymerase (Stratagene) and the oligonucleotide primers designed from the Mtb genomic sequence: upstream 5′-GGACAGCATATGTCGTGGAATCACCAGTCA-3′ and downstream 5′-CAGTGGGATAGATCTCCATTAGTGAGCAAC-3′. The bold letters in the upstream primer indicate an engineered NdeI restriction cloning site, including the initiation codon ATG. The underlined letters in the downstream primer indicate a BglII restriction cloning site. Amplification conditions were 95°C for 2 min, 30 cycles of 95°C for 50 s, 63°C for 30 s, and 72°C for 2 min, followed by a final polymerization step of 72°C for 8 min. CYP125 was cloned into pET15b (Merck) using the NdeI and BamHI restriction sites and using the compatible cohesive ends between BglII on CYP125 and BamHI on the vector, allowing expression of the CYP125 gene from a T7lac promoter under isopropyl 1-thio-β-D-galactopyranoside induction, and producing a recombinant P450 protein with an N-terminal His6 tag.
Protein was produced in E. coli HMS174 (DE3) (typically 15-20 liters, grown in 2×YT medium) by isopropyl 1-thio-β-D-galactopyranoside (0.15 mM) induction in the presence of the heme precursor δ-aminolevulinic acid (0.1 mM) at OD600 = 0.6, with temperature then reduced from 37 to 23°C and culture continued for 24 h. Thereafter, cells were harvested by centrifugation (9,000 × g, 4°C, 20 min), resuspended in 50 mM potassium phosphate, 250 mM KCl, 10% glycerol, pH 8.0 (buffer A), containing protease inhibitors (Complete EDTA-free protease inhibitor tablets, Roche) at 4°C, and re-centrifuged as before. The pellet was then resuspended in a minimal volume of buffer A (all buffers contained standard protease inhibitors), and the cells were broken by a combination of sonication and French pressure treatment, as described previously (17,27). The disrupted cell extract was centrifuged (40,000 × g) for 30 min to remove particulate material and then loaded onto a nickel-nitrilotriacetic acid resin column (Qiagen). The column was washed twice in buffer A, containing 30 mM then 75 mM imidazole, and eluted using 200 mM imidazole in the same buffer. The CYP125-containing fractions were pooled and dialyzed versus 50 mM Tris, 1 mM EDTA, pH 7.2 (buffer B), prior to further fractionation using a Resource-Q column on an AKTA purifier (GE Healthcare). CYP125 was bound to the column in buffer B and eluted in a gradient of 0-500 mM KCl in buffer B. The most intensely red CYP125-containing fractions were retained, pooled, and concentrated to a final volume of <1 ml (using a Vivaspin 30 concentrator, Generon) prior to a final gel filtration step using a Sephacryl S-200 column (1.6 × 70 cm) with 10 mM Tris, pH 7.5. CYP125 purity was determined by SDS-PAGE and UV-visible spectroscopy. The most pure fractions were retained, concentrated as previously (to ~500 μM), and used directly for crystallogenesis, or dialyzed into 50 mM potassium phosphate, pH 7.5 (buffer C), containing 50% glycerol and stored at −80°C.
Ligand Binding and Thermodynamic Studies-Optical titrations for determination of azole ligand binding constants (Kd values) were done as previously described (11). Pure CYP125 (typically 2-5 μM) was suspended in buffer C in a 1-cm path length quartz cuvette and a spectrum for the ligand-free form recorded (250-800 nm) at 25°C on a Cary UV-50 Bio scanning spectrophotometer (Varian, UK). Azole ligands (clotrimazole, econazole, fluconazole, miconazole, ketoconazole, voriconazole, 2-phenylimidazole, and 4-phenylimidazole) were titrated from concentrated stocks in dimethyl sulfoxide solvent (apart from the phenylimidazoles, which were prepared in 60% ethanol) until apparent saturation of the optical change was observed. Induced optical change versus ligand concentration data were fitted using Equation 1, which provides the most accurate estimation of Kd values for the tight binding azole drugs, as we have described in previous studies of the Mtb CYP121 and CYP51B1 P450s (8,17). Data were fitted using Origin software (OriginLab, Northampton, MA).
In Equation 1, Aobs is the observed absorbance change at ligand concentration S, Amax is the absorbance change at ligand saturation, Et is the CYP125 concentration, and Kd is the dissociation constant for the CYP125-ligand complex.
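A standard quadratic (Morrison-type) tight-binding expression consistent with these variable definitions, presented here as a plausible form of Equation 1 rather than as the authors' exact formula, is:

$$A_{\mathrm{obs}} = \frac{A_{\mathrm{max}}}{2E_t}\left[\left(S + E_t + K_d\right) - \sqrt{\left(S + E_t + K_d\right)^{2} - 4\,S\,E_t}\right] \qquad (1)$$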
Binding of the sterols cholesterol, testosterone, progesterone, and epiandrosterone was done by addition of small volumes of stock solutions of the sterols (suspended in EtOH) to CYP125 in buffer C, with spectral measurements taken before and after sterol addition. Other spectral measurements reporting on the sodium dithionite-dependent reduction, binding of CO to the ferrous enzyme form, and nitric oxide to the ferric form (for enzyme quantification and establishment of typical P450-type features of CYP125) were done using a Cary 50 UVvisible spectrophotometer, either aerobically or under anaerobic conditions in a glove box (Belle Technology, Portesham, UK) for ferrous enzymes (8,28).
CYP125 redox titrations were performed in a Belle Technology glove box under nitrogen atmosphere, as described previously (29). Protein solution (approximately 9 μM in 5 ml of 100 mM potassium phosphate, 10% glycerol, pH 7.0) was titrated electrochemically by the method of Dutton (30) using sodium dithionite as reductant and ferricyanide as oxidant. Mediators were added to facilitate electrical communication between enzyme and electrode (2 μM phenazine methosulfate, 7 μM 2-hydroxy-1,4-naphthoquinone, 0.3 μM methyl viologen, and 1 μM benzyl viologen, to mediate in the range from +100 to −480 mV) (31). Spectra (250-800 nm) were recorded using a Cary UV-50 Bio UV-visible scanning spectrophotometer. The electrochemical potential of the solution was measured using a Mettler Toledo SevenEasy meter coupled to a Pt/Calomel electrode (ThermoRussell Ltd.) at 25°C. The electrode was calibrated using the Fe3+/Fe2+ EDTA couple as a standard (+108 mV). A factor of +244 mV was used to correct relative to the standard hydrogen electrode. Redox titrations were performed in both reductive and oxidative directions to ensure that the redox processes were fully reversible and hysteretic effects were not observed. Absorption change versus applied potential data were fitted to the Nernst function (using Origin software) to derive the midpoint potential for the CYP125 heme iron Fe3+/Fe2+ couple (29).
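A single-electron Nernst function of the kind commonly used for such absorbance-versus-potential fits, written as a sketch (assuming n = 1 at 25°C, with Aox and Ared the absorbances of the fully oxidized and fully reduced enzyme) rather than as the authors' exact fitting equation, is:

$$A(E) = \frac{A_{\mathrm{ox}} + A_{\mathrm{red}}\,10^{(E_m - E)/59\ \mathrm{mV}}}{1 + 10^{(E_m - E)/59\ \mathrm{mV}}}$$

where Em is the fitted midpoint potential.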
Spectroscopic Studies-Electron paramagnetic resonance (EPR) was done on ligand-free and imidazole (10 mM)-bound ferric CYP125 (220 μM) in buffer C. EPR spectra were recorded on a Bruker ER-300D series electromagnet and microwave source interfaced with a Bruker EMX control unit and fitted with an ESR-9 liquid helium flow cryostat (Oxford Instruments), and a dual-mode microwave cavity from Bruker (ER-4116DM). Spectra were recorded at 10 K with a microwave power of 2.08 milliwatts and a modulation amplitude of 10 G. Resonance Raman was done using 15-milliwatt, 406.7 nm radiation from a Coherent Innova 300 krypton ion laser, and acquired using a Renishaw micro-Raman system 1000 spectrophotometer.
CYP125 Crystallization, Structure Elucidation, and Molecular Modeling-CYP125 was concentrated to 13 mg/ml. Sitting drops were prepared by mixing 0.1 μl of CYP125 with 0.1 μl of mother liquor and incubating at 4°C. Crystallization conditions were refined to two different conditions, both consisting of MgCl2 with 0.1 M HEPES, pH 7.0 or 7.5, and PEG 6000 (20%) or PEG 3350 (25%), respectively. The PEG 6000 conditions mainly generated crystals belonging to the C2221 space group, whereas crystals generated using PEG 3350 belonged to the P212121 space group. Ligands 4-androstene-3,17-dione (52 mM) and econazole (33 mM) were prepared in ethanol and diluted 1/10 in mother liquor prior to soaking single crystals for 15 min. Single crystals were cooled to 100 K after addition of 10% PEG 200 as cryoprotectant, and data were collected at ESRF and Diamond beamlines. The CYP125 structure was solved by molecular replacement using the P450terp structure as the search model. Full details are in the supplemental data section. Data and final refinement statistics for the CYP125 crystal structures are in supplemental Table S1.
Molecular modeling of the interaction of cholesterol with CYP125 was based on a soft-restrained molecular dynamics (MD) approach previously described for P450s (32). Briefly, cholesterol was positioned in the ligand-free structure of CYP125, close to the positioning of androstenedione in the androstenedione-bound CYP125 structure, in 4 different orientations, so that no steric clashes with CYP125 residues could be observed and such that either the cholesterol tetracyclic moiety or its alkyl chain was pointing to the heme. All 4 positions were chosen so that the cholesterol molecule main axis was aligned with the entrance channel, to minimize the large conformational changes that would occur during the substrate motion in the channel. Up to 5 different dockings were performed from each starting orientation, using small adjustments of the conformation and coordinates. In the following described protocols, the side chains of residues located in a 10-Å sphere centered on cholesterol, as well as water molecules, were defined as the only mobile atoms, to preserve the tertiary structure of CYP125 as observed in the crystal structure. All MD simulations and energy minimization experiments were performed using the NAMD program (33) with Amber force field parameters (34). Topology and parameter files for cholesterol were obtained using the Antechamber program (35) with AM1-BCC charges (36). The cut off parameter for the computation of non-bonded interactions was set to 12 Å, and the electrostatic forces were "softened" by defining a relative dielectric constant of 2 for the system. Energy minimization (1000 steps, conjugate gradient) and MD simulations (200 ps) were initially performed in vacuo at 100 K to thermally equilibrate CYP125-cholesterol complexes. Then, a distance-dependent constraint whose force constant values ranged from 1.5 to 2 (kcal/mol)/Å 2 was applied between the heme iron and the closest cholesterol carbons (3 to 4 atoms), and MD simulations were performed at 100 K for 1 ns. Equilibration of the docked ligand in the active site was done by releasing the constraint in a final MD run of 1 ns at 100 K. Final minimization (1000 steps, conjugate gradient) was performed to obtain the CYP125-cholesterol complexes. Comparison and selection of the docked cholesterol models was done by comparing the stabilization energy due to the CYP125-cholesterol interactions (supplemental Table S4) and the minimal distances between cholesterol heavy atoms and the iron atom of the heme. Minimal distances greater than 7 Å led to the dismissal of the docked model. The model considered for the "Results" and "Discussion" was obtained from a starting position corresponding to orientation C (as represented in Fig. 7).
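The pose-selection step described above, in which docked models are dismissed when the closest cholesterol heavy atom lies more than 7 Å from the heme iron and the remaining models are compared by stabilization energy, can be sketched as follows. The code is only an illustration of that filtering and ranking logic; the data structure and field names are hypothetical and are not part of the authors' workflow.

```python
import math

def screen_poses(poses, max_distance=7.0):
    """Filter and rank docked cholesterol poses (illustrative sketch).

    Each pose is assumed to be a dict with hypothetical fields:
      'name'        - label of the docked model (e.g. its starting orientation)
      'energy'      - CYP125-cholesterol stabilization energy (more negative = more stable)
      'heavy_atoms' - list of (x, y, z) coordinates of cholesterol heavy atoms
      'heme_iron'   - (x, y, z) coordinate of the heme iron
    """
    kept = []
    for pose in poses:
        fe = pose['heme_iron']
        # minimal cholesterol heavy-atom to heme-iron distance
        d_min = min(math.dist(atom, fe) for atom in pose['heavy_atoms'])
        if d_min <= max_distance:  # models with d_min > 7 A are dismissed
            kept.append({'name': pose['name'], 'energy': pose['energy'], 'd_min': d_min})
    # rank the surviving models by stabilization energy (most favorable first)
    return sorted(kept, key=lambda p: p['energy'])
```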
Reconstitution of Cholesterol Hydroxylase Activity of CYP125-Incubations with CYP125 and cholesterol were carried out in 1 ml of 50 mM potassium phosphate, pH 7.2, using 0.5 μM CYP125, 10 μM E. coli flavodoxin, 2.5 μM E. coli flavodoxin reductase, 2 nM [3H]cholesterol, and 1 mM NADPH with an NADPH-regenerating system (glucose 6-phosphate and glucose-6-phosphate dehydrogenase) (37). The enzymatic reaction was initiated by the addition of NADPH and terminated by vortexing with 2 ml of CH2Cl2. The organic phase was isolated, evaporated, dissolved in acetonitrile, and subjected to HPLC as previously described (37).
To characterize the product of CYP125 activity by gas chromatography-mass spectrometry, the concentration of cholesterol in the enzyme assay was increased to 1 μM. After termination of the enzyme reaction, the substrate and product were extracted, converted into trimethylsilyl ethers, and injected into a VF-35MS capillary column (60 m × 0.32 mm × 0.25 μm) in a splitless mode at an injection temperature of 270°C with a helium flow of 1.1 ml/min. The initial oven temperature was kept at 200°C for 1 min, then increased to 280°C (20°C/min), ramped up to 310°C (3°C/min), and held for 14 min isothermally. The mass spectrometer (Agilent 5973N-MSD combined with an Agilent 6890 GC system) was operated in electron impact ionization (70 eV) at 230°C. The retention time and mass spectrum of the trimethylsilyl CYP125 product was essentially identical to that of authentic 27-hydroxycholesterol (purchased from Steraloids, Newport, RI), with the base peak at m/z 129 and prominent peaks at m/z 417, 456, and 546.
Materials-Bacterial growth medium (Tryptone, yeast extract) was from Melford Laboratories (Ipswich, Suffolk, UK). A 1-kb DNA ladder was from Promega. Azole drugs were from MP Biomedicals Inc. All other reagents were from Sigma and were of the highest grade available.
RESULTS
Genetic Context, Expression, and Production of M. tuberculosis CYP125-To define the biochemical and structural characteristics of CYP125, we expressed and purified the P450 from E. coli. Purified CYP125 was dark brown (not red) in color, and optical spectroscopy revealed an extensively high spin (HS, >80%) enzyme with heme Soret features at 393 (HS, major) and 416 nm (low spin, LS, shoulder) (Fig. 1A). The HS/LS ratio was affected by temperature, ionic strength, and pH, although the protein was predominantly HS under all conditions. In contrast, and despite apparent homogeneity by SDS-PAGE, certain fractions obtained during gel filtration purification had predominantly LS heme iron with Amax at 415 nm (Fig. 1A). Solvent treatments of HS CYP125 fractions did not result in extraction of potential substrates bound to the enzyme, but did demonstrate that the heme spin state could be readily modulated by organic solvents (e.g. methanol, see below).
Ligand Binding Characteristics of CYP125-Addition of heme coordinating ligands resulted in occupancy of the 6th (distal) position on the heme iron, with Soret optical shifts seen for imidazole (maximum at 426 nm), cyanide (439 nm), and nitric oxide (433 nm) (Fig. 1B). A fundamental property of P450s is their binding of carbon monoxide (CO) to ferrous heme iron to give a spectral species with maximum near 450 nm. For CYP125 the Fe(II)·CO complex spectrum has two maxima at 450 (P450) and 422 nm (P420), suggesting protonation of the proximal cysteinate ligand (Cys377) to a thiol in the P420 form, as seen previously (11) (Fig. 1C). Consistent with this conclusion, higher buffer pH increased the P450:P420 ratio, with optimal P450 content achieved in 100 mM potassium phosphate, pH 9.0. The LS form of CYP125 showed lower stability of heme thiolate ligation in the Fe(II)·CO complex than did the major HS fraction, with a higher P420:P450 ratio observed (Fig. 1C).
Preceding studies have revealed high affinity and type II binding characteristics for the interactions of various azole drugs with other Mtb P450s (e.g. CYP121, CYP51B1, and CYP130) (11,17,21). Azoles typically directly coordinate to P450 heme iron to produce type II (red) shifts of the Soret band. For CYP125, unusual binding properties of various azoles were seen. Voriconazole did not induce a spectral shift, whereas fluconazole and ketoconazole produced small type II shifts, suggesting ~20 and 35% heme iron coordination, respectively. In the case of econazole, previous work showed its binding induced a near complete HS conversion (21). Although we found this reproducible at ambient temperature, treatment of the HS CYP125 at low temperature (10°C) with methanol or an ethanol/methanol mixture (10%) produced a form of CYP125 that displayed type II binding for econazole (Fig. 2A). For miconazole and clotrimazole, these azoles also bound to the HS form of CYP125 to produce type I shifts at low concentration (up to ~0.5 μM), but type II shifts (to ~422 nm) at higher drug concentrations (Fig. 2B). Kd values for azole binding were determined as described under "Experimental Procedures," and were in the range ~4-45 μM (supplemental Table S2). In addition, the LS CYP125 fractions obtained from gel filtration studies (see above) also displayed type II binding of these azoles.
In view of the likelihood that CYP125 binds sterols, optical binding studies of the interactions with various sterol-type molecules were done. The predominant HS state of the purified CYP125 precluded accurate attempts to establish further type I binding of most molecules. However, type I optical changes were induced by addition of androstenedione and cholesterol to the solvent-treated form (which exhibited increased LS heme content), whereas negligible spectral changes were induced by the addition of other steroids (e.g. testosterone, pregnenolone) (Fig. 2C). In parallel studies, no significant CYP125 optical perturbation was induced by addition of various fatty acids and terpenes, including palmitic acid and α-terpineol.
Spectroscopic and Thermodynamic Analysis of CYP125-To further probe the properties of CYP125, we undertook EPR, resonance Raman, and redox potentiometry studies, as described under "Experimental Procedures," and previously (8). EPR of ligand-free CYP125 at 10 K was typical for a thiolate-coordinated, LS P450, with the major set of g values at gx = 2.40, gy = 2.25, and gz = 1.94 (supplemental Fig. S1). A very small signal from a HS species was detected at 10 K. Room temperature resonance Raman confirmed the ferric state of the CYP125 heme iron, with the main oxidation state marker band (ν4) at 1372 cm−1. The spin state marker band (ν3) showed features at 1487 (major) and 1500 cm−1, reflecting a dominant population of HS heme iron over the LS form. Binding of imidazole (10 mM) to CYP125 resulted in a LS form (see Fig. 1B) with ν3 at 1501 cm−1 predominant (supplemental Fig. S2). The redox potential for the Fe3+/Fe2+ transition of the CYP125 heme iron was −303 ± 5 mV (versus NHE), consistent with the mainly HS nature of the P450 (supplemental Fig. S3) (21). Full analyses of EPR, resonance Raman (supplemental Tables S2 and S3), and thermodynamic data are presented in the supplemental data.
Crystallization and Structural Determination of Ligand-free CYP125-In view of the importance of CYP125 to Mtb viability in its host, we determined the crystal structure in both the presence and absence of ligands. The structure was solved to 1.4 Å by molecular replacement using the structure of the Pseudomonas sp. P450terp (CYP108A1) as the search model (38). CYP125 has a typical P450 fold with the heme cofactor sandwiched between a major α-helical domain and a smaller domain with substantial β-sheet content (Fig. 3A). An entrance to the active site is clearly defined by the B′ and F α-helices and their preceding loop regions (Val96-Leu117 and Met200-Ile221, respectively) in addition to contributions by the I-helix (Phe260-Thr272) and Trp414-Leu415 from the C-terminal loop region. The entire cavity is lined by hydrophobic residues and resembles a "letterbox" shape with the B′ and F helices defining the opposite sides (Fig. 3B). This putative substrate binding pocket becomes a funnel-like shape, with a progressive narrowing of the active site cavity on approach to the heme. The position and nature of the active site residues in the immediate vicinity of the heme group bear remarkable resemblance to the P450terp structure, despite the apparent lack of α-terpineol binding to CYP125.
A distinct crystal form (form 2) could be obtained that gave data until 1.7 Å and also contained one CYP125 monomer in the asymmetric unit. No significant changes were observed when comparing both crystal structures (Fig. 3A) with the notable exception of the environment and position of the I-helix residue Val 267 that is located in the immediate vicinity of the heme distal pocket. In both crystal structures, the Val 267 side chain is clearly defined as occupying two positions, but the relative occupancy of these positions is markedly different in both crystal structures (Fig. 3C). In one orientation (A), the Val 267 carbonyl backbone oxygen is involved in I-helix H-bonding interactions, whereas the second orientation (B) positions this atom within the heme distal pocket. In conformation B, a water molecule occupies a position similar to that observed for the Val 267 carbonyl backbone oxygen in conformation A. The rel-ative occupancy of states A and B appears directly linked to the coordination state of the heme iron, with the Val 267 A orientation linked to a hexa-coordinate LS state, whereas the B conformation gives rise to a penta-coordinate HS state. In state B, an indirect H-bonding interaction between the Val 267 carbonyl backbone oxygen and the water molecule closest to the heme iron is observed. This could account for the observed link between heme iron coordination state and Val 267 conformation, as reorientation of this residue affects the heme distal pocket H-bonding network and hence the extent to which water will ligate the heme. Thus, it is possible that upon substrate binding there is a reconfiguration of active site organization and that the structural rearrangement of Val 267 is a trigger for aqua ligand displacement and concomitant P450 heme LS to HS conversion. This would link the conserved Thr 272 (implicated in proton delivery) via the newly introduced water molecule (only observed in conformation B) to a network of hydrophilic residues (Thr 201 and Glu 271 ) and water molecules that could easily serve as a proton relay. It is also likely that CYP125 reduction itself is gated by a LS to HS transition, as seen for other P450s (39,40).
Crystal Structures of CYP125 Androstenedione and Econazole Complexes-Soaking CYP125 crystals with both the steroid androstenedione and the azole econazole produced complexes that were solved to resolutions of 2.0 and 2.2 Å, respectively. In both cases, these molecules are bound within the observed letterbox cavity, with neither ligand able to penetrate the funnel-shaped access tunnel to the heme group (the closest atoms to the heme iron are at 12.9 and 9.3 Å for androstenedione and econazole, respectively). The binding mode for androstenedione (which lacks the alkyl side chain found in cholesterol) is not compatible with P450 oxidation, and the funnel-like nature of the active site clearly prevents the steroid moiety from reaching the direct vicinity of the heme iron (Fig. 4A). Binding of this ligand appears to introduce little change in the protein structure, with ligand-protein interactions predominantly through hydrophobic packing of the steroid moiety between residues from the B′-helix and F-helix regions. In addition, a limited set of polar contacts are made between both hydrophilic substituents on the steroid moiety and residues Gly202, Lys214, and Ser217. Econazole binds in a similar hydrophobic region, and is again prevented from further migration into the active site by steric constraints (Fig. 4B). In contrast to androstenedione, econazole binding introduces a minor change in the position and conformation of Val267 due to the close contact made with the econazole chloride substituent that is closest to the heme. In similar fashion to the androstenedione-CYP125 structure, protein-ligand contacts are dominated by a series of hydrophobic interactions with the B′- and F-helix residues, in addition to a single polar contact between the azole moiety and Asp108. Ligand binding studies revealed the ability of econazole to coordinate heme iron only in an enzyme form obtained by solvent treatment at low temperature, and these data are consistent with conformational rearrangements of the enzyme induced by alteration of the chemical environment and ambient temperature, and that enable the ingress of econazole toward the heme in a proportion of the enzyme molecules.
In addition to androstenedione and econazole, we sought to establish the binding mode of cholesterol to CYP125. However, crystal soaks with cholesterol persistently failed to reveal interpretable density for the cholesterol ligand, whereas co-crystallization attempts failed to generate crystals of suitable quality for diffraction studies. For this reason, we investigated the cholesterol docking mode using molecular modeling methods.
Molecular Modeling of Cholesterol Binding to CYP125A1-Cholesterol was docked using soft restrained dynamics docking (32) into the CYP125 active site, using the androstenedione binding pocket as the access channel. Several orientations were used as a starting point for docking (Fig. 5), with either the alcohol function on the tetracyclic moiety or the alkyl chain pointing to the heme. During molecular dynamics the backbone CYP125 coordinates were restrained to the conformation observed in the crystal structure. As described in the supplemental data, the final model was chosen considering the highest energy stabilization of the CYP125-cholesterol complex as well as the cholesterol-iron distances. The final model (Fig. 6A) exhibited the greatest stabilization energy among all the models obtained (more than 6 times higher than any others, see supplemental Table S4). The cholesterol is deeply buried in the CYP125 active site, with a calculated buried surface of 312 Å2, which corresponds to 86% of the total substrate surface. The tetracyclic portion of the cholesterol occupies the same region of the active site as seen in the androstenedione complex, but the molecule is "flipped" through 180° such that the hydroxyl group on ring A (a carbonyl in androstenedione) is orientated toward the mouth of the active site rather than being internalized. This binding mode is nevertheless consistent with the androstenedione complex structure (Fig. 6B), as the tetracyclic portions of cholesterol and androstenedione can be readily superimposed, with methyl groups on the rings oriented in the same direction. The apparent rotation of the tetracyclic moiety between the androstenedione complex and the cholesterol model structures can be explained by the additional favorable binding energy associated with the burial of the cholesterol alkyl chain in the hydrophobic region leading to the heme (as opposed to burial and desolvation of the cholesterol alcohol when considering an androstenedione-like orientation). It is interesting to note that the terminal portion of the cholesterol side chain is in close contact with Val267, an interaction that may be important to promote conformational readjustment of the side chain to displace the distal water and trigger catalysis.
Experimental Validation of Cholesterol C27 Oxidation by CYP125-To establish that Mtb CYP125 actually catalyzed oxidation of cholesterol and determine the position(s) of oxidation, we reconstituted the P450 with a bacterial redox partner system (E. coli flavodoxin reductase and flavodoxin proteins and NADPH reductant) that has been well characterized and used widely to drive both prokaryotic and eukaryotic P450 catalysis (41,42). Experiments were done using gas chromatography-mass spectrometry as performed previously for human CYP46A1 and as detailed under "Experimental Procedures" (37). A single product was formed using the E. coli redox system with CYP125. By comparison with authentic standards, this was shown to be 27-hydroxycholesterol, consistent with our predictions based on structural modeling of the mode of cholesterol association with CYP125 (Fig. 7).
DISCUSSION
The location of CYP125 in a gene cluster conserved from Rhodococcus to Mtb suggests a likely role in cholesterol metabolism (19). Cholesterol may be important for Mtb entry into macrophages, and for establishing infection. The fact that CYP125 is both induced in macrophages and reported as essential for establishing mouse infection is also indicative of a crucial role for this P450 (22,23). CYP125 is retained in all Mtb strains and in some related actinobacteria, e.g. Nocardia and Streptomyces spp. The genetic context of CYP125 is conserved within these bacteria, and the surrounding acyl-CoA dehydrogenase genes (fadE28, fadE29, and fadA5, likely involved in lipid degradation) form an operon with CYP125. Gene knock-out studies on the CYP125 and associated FAD-containing intergenic region (igr) implicated this cluster of genes as having an important role in early mycobacterial infection (43). Despite genetic conservation in non-pathogens, many of the genes within the cholesterol operon are critical for Mtb pathogenesis. The Mtb cholesterol catabolic gene cluster is under the control of a TetR transcriptional repressor, kstR (Rv3574), likely to have an essential role in pathogenesis and lipid degradation. Genes in this cluster may metabolize diverse lipids, using the mce4 system involved in cholesterol/steroid uptake (44). Collectively these genetic studies and the presence of CYP125 in the cholesterol operon suggest a critical role in bacterial cholesterol metabolism, and in mycobacterial infection and pathogenesis. Our determination of the structure of CYP125 represents the first insight into active site architecture of this important P450, and explains unusual spectroscopic phenomena previously described (21).
Although type II azole binding has been demonstrated clearly for Mtb CYP51B1, CYP121, and CYP130 (11,12,16,17), peculiar type I binding of econazole was reported for CYP125 (21). For the purified, HS form of CYP125 characterized here, this was shown to be the case for econazole. Moreover, clotrimazole and miconazole gave type I binding at low ligand concentrations, but type II binding (heme coordination) at higher concentrations. The phenomena observed for clotrimazole and miconazole suggest alternative binding modes and/or distinct conformers of the P450. On treatment of CYP125 with alcohol (10%) at 10°C, we were able to produce a mixed spin species that gave type II binding with econazole. Higher concentrations of alcohol destabilized the protein, but also resulted in a further shift toward LS for the ligand-free enzyme. The crystal structure of the econazole-bound (Fig. 4B) CYP125 reveals narrowing of the active site "funnel" precluding further entry of econazole to coordinate the heme iron. The spectral studies are thus suggestive of different conformational states of the enzyme that are favored under different environmental conditions. EPR studies also suggest some heterogeneity in the thiolate-coordinated CYP125 species, which again may suggest the presence of different conformers in the enzyme population studied.
Both crystal structures of the ligand-free CYP125 reveal a clear active site crevice that is roughly rectangular in form and of dimensions well suited to the binding of cholesterol. The majority of this binding pocket is defined by the B′ and F helices, which, together with a section of the C-terminal loop and I-helix residues, also contribute to formation of the heme distal pocket. There are some important parallels in relation to the recently determined crystal structure of human CYP46A1, a cholesterol 24-hydroxylase (45; Protein Data Bank code 2Q9F), and of the vitamin D3-bound CYP2R1 (46) (PDB code 3C6G). An overlay of CYP125, CYP46A1, and CYP2R1 reveals that CYP125 and CYP2R1 share a common substrate binding pocket, whereas the sterol moiety of cholesterol in CYP46A1 is bound by a distinct region of the protein (Fig. 6C). In the cholesterol sulfate-CYP46A1 complex, the ligand C24 and C25 carbons are placed closest to the heme iron (both at distances of ~5.7 Å), consistent with the preferred position of oxidation at C24, with the terminal methyl groups more distant. Similarly, the vitamin D3-CYP2R1 complex reveals the C25 and C26/27 carbons located at distances of 5.5 and 6.5 Å, respectively, from the heme iron, which again is in agreement with the observed oxidation at C25 (46). The cholesterol-CYP125 model predicts the C26/C27 cholesterol carbons to be close to the iron center, at distances of ~5.3 and ~6.3 Å, and we therefore predicted that CYP125 would catalyze oxidation of cholesterol on one or both of the terminal methyl groups. This was proven to be the case in turnover studies, with CYP125 shown to form exclusively 27-hydroxycholesterol.
CONCLUSIONS
The CYP125 cytochrome P450 from M. tuberculosis was expressed, isolated, and structurally resolved. The P450 exhib-its an obvious letterbox substrate access channel of dimensions appropriate for entry of the prospective substrate cholesterol. Complexes with androstenedione and econazole revealed ligand binding near the top of the active site cavity and exclusion for further ingress due to the narrowing of the active site funnel. Although solution state studies reveal econazole (and other azole drugs) are able to coordinate the heme iron under certain conditions, CYP125 clearly demonstrates lower type II binding affinity for a number of azole drugs compared with other Mtb P450s, e.g. CYP121 (17), consistent with the constricted nature of its heme access channel. Our model for the cholesterol-CYP125 interaction, and hence the catalytic activity, was obtained a priori and used to guide further experiments. This model indicates that the alkyl chain of this substrate can extend down the narrow binding funnel with the terminal methyl carbons of the chain presented to the heme iron to facilitate C27 oxidation, as confirmed by turnover studies. Given the likely role of CYP125 in catabolism of host cholesterol, this reaction is likely a primary event that enables the breakdown of the cholesterol side chain. However, the hydroxylation of cholesterol at the terminal position also has the potential to generate a product capable of modulating host cholesterol synthesis, competitively antagonizing estrogen receptor action, and inhibiting expression of nitric-oxide synthase (1). In this respect, it is tempting to speculate that CYP125 participates in cholesterol oxidation to generate a product that is further broken down to generate metabolic fuel for Mtb and/or is used directly to modulate host responses and thus facilitate persistence of the pathogen. | 2018-04-03T01:05:05.460Z | 2009-10-21T00:00:00.000 | {
"year": 2009,
"sha1": "3b3c2a6a7196a7635b7e7f821694fd3ee402c646",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/284/51/35524.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "368407a0cbcb0e1ad20b6fc835320539eb44836d",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
126210542 | pes2o/s2orc | v3-fos-license | Characterization of density oscillations in confined and degenerate Fermi gases
Friedel oscillations appear in the density of Fermi gases due to the Pauli exclusion principle and translational symmetry breaking near a defect or impurity. In confined Fermi gases, this symmetry breaking also occurs near the boundaries. Here, density oscillations of a degenerate and confined Fermi gas are considered and characterized. The true nature of the density oscillations is represented by analytical formulas for degenerate conditions. Analytical characterization is first done for the completely degenerate case; temperature effects are then incorporated through a finer approximation. Envelope functions defining the upper and lower bounds of these oscillations are determined. It is shown that the errors of the obtained expressions are negligible as long as the system is degenerate. The numbers, amplitudes, averages, and spatial coordinates of the oscillations are also given by analytical expressions. The results may be helpful to efficiently predict and easily calculate the oscillations in the density and density-dependent properties of confined electrons at the nanoscale.
Introduction
Global thermodynamic properties of classical and quantum gases confined in nanoscale domains have been widely considered in the literature [1-19]. It was shown that global thermodynamic properties of gases confined in nanodomains are strongly affected by the sizes of the domain due to the wave nature of particles. These effects are called quantum size effects (QSE). In order to understand the physical mechanisms of the size dependence of global thermodynamic properties, as well as to be able to use models based on the local equilibrium assumption, it is important to examine local properties, like the density distribution, of confined gases.
Local density distribution of an ideal Maxwell-Boltzmann (MB) gas confined in a rectangular domain has been shown to be not homogeneous even at thermodynamic equilibrium [9]. Density goes to zero near to domain boundaries and a quantum boundary layer (QBL) occurs where the homogeneity in density distribution is disrupted. The thickness of this layer is on the order of thermal de Broglie wavelength of particles, and QBL vanishes when Planck's constant goes to zero. It has been seen that existence of QBL is crucial to understand and even predict QSE on thermodynamic behaviors of the particles confined in nanodomains where the particle density is not uniform. QSE terms in expressions of thermodynamic properties can directly be recovered by using QBL concept without even solving Schrödinger equation [9,11,12].
QSE become significant at nanoscale and bring in some unique features for nanomaterials. Semiconductor or metal nanostructures are some of the most common and convenient materials to examine QSE. Although electrons confined in non-degenerate semiconductors can be modeled by MB statistics, it is necessary to consider Fermi-Dirac (FD) statistics when dealing with confined and degenerate electron gas (e.g. degenerate semiconductors or metals).
Unlike unbounded Fermi gases, density oscillations appear in bounded Fermi gases because of translational symmetry breaking near to boundaries in confined systems. In this study, oscillations in local density distribution of a confined and degenerate Fermi gas are examined. These density oscillations actually correspond to Friedel oscillations in metals or semiconductors due to defects or impurities. By invoking some mathematical tools such as Poisson summation formula, we first obtain an analytical expression representing the true nature of oscillations for completely degenerate case (e.g. T = 0K) which is called 0 th order approximation. Upper and lower envelope functions giving the ultimate bounds of the oscillations as well as amplitudes and averages of oscillations are derived. Furthermore, by making a more precise approximation, which is called 1 st order approximation, we consider the effect of temperature and derive analytical expressions also for finite temperatures. We compare exact and analytical expressions based on two different approximations and show that the errors of analytical expressions are quite low, as long as degeneracy is sufficiently high where the oscillations are considerable. Averages, amplitudes, spatial coordinates and the numbers of these oscillations are also analytically given.
Density distribution of confined and degenerate Fermi gases
For an ideal Fermi gas in thermodynamic equilibrium, the number of particles in a differential local volume dV centered at position r for a quantum state s is written as dN_s = N (f_s / Σ_s f_s) |ψ_s(r)|² dV, where N is the total number of particles, ψ_s is the eigenfunction of quantum state s, and f_s = 1/[exp(−Λ + ε̃_s) + 1] is the FD distribution function. Here, Λ = μ/(k_B T) is the degeneracy parameter, ε̃_s = ε_s/(k_B T) is the dimensionless energy eigenvalue of quantum state s, μ is the chemical potential, k_B is the Boltzmann constant, and T is temperature. We do not take the spin degree of freedom into account, since it cancels in the density expression in the absence of an external magnetic field. Note that (f_s / Σ_s f_s) denotes the probability of finding a particle in quantum state s (thermodynamic probability), and |ψ_s(r)|² dV gives the probability of a particle being in a volume dV centered at position r (quantum probability). By summing dN_s over all quantum states, the local particle density reads n(r) = N Σ_s f_s |ψ_s(r)|² / Σ_s f_s (Eq. (1)), which is actually the ensemble average of the quantum probability density times the number of particles. In dimensionless form, the local density distribution of a Fermi gas confined in a 1D domain is represented as ñ(x̃) in Eq. (2),
where n_cl = N/L is the classical density and x̃ = x/L is the dimensionless position. By using Eq. (2), we can examine the exact density distribution of an ideal Fermi gas confined in a 1D rectangular domain.
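A minimal reconstruction of Eq. (2), assuming the standard particle-in-a-box eigenfunctions ψ_i(x) = √(2/L) sin(iπx/L) (this explicit form is an assumption rather than a verbatim reproduction of the authors' equation), reads:

$$\tilde{n}(\tilde{x}) = \frac{n(\tilde{x})}{n_{cl}} = \frac{2\sum_{i=1}^{\infty} f_i\,\sin^{2}(i\pi\tilde{x})}{\sum_{i=1}^{\infty} f_i}, \qquad f_i = \frac{1}{\exp(-\Lambda + \tilde{\varepsilon}_i) + 1}. \tag{2}$$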
Dimensionless density distributions of a confined and degenerate 1D Fermi gas are shown in Fig. 1 for four different confinement and degeneracy conditions. It is seen that an increase in degeneracy leads to denser oscillations (higher wavenumber) but weakens their amplitudes. Increasing confinement, on the other hand, decreases the wavenumbers of the oscillations but strengthens the oscillation amplitudes. In this sense, degeneracy and confinement have opposite effects on density oscillations.
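A short numerical sketch of such a density profile, based on the reconstructed form of Eq. (2) above; the relation ε̃_i = (αi)² between the dimensionless energies and the confinement parameter α is an assumption used only for illustration:

```python
import numpy as np

def fermi_density(x, Lambda, alpha, i_max=2000):
    """Dimensionless density n~(x~) of an ideal Fermi gas confined in a 1D box.

    x      : array of dimensionless positions x~ = x/L in [0, 1]
    Lambda : degeneracy parameter Lambda = mu / (k_B T)
    alpha  : confinement parameter; dimensionless energies assumed to be eps~_i = (alpha * i)**2
    i_max  : truncation of the sum over states (must exceed the Fermi level sqrt(Lambda)/alpha)
    """
    i = np.arange(1, i_max + 1)
    expo = np.clip(-Lambda + (alpha * i) ** 2, -700.0, 700.0)  # clip to avoid overflow in exp
    f = 1.0 / (np.exp(expo) + 1.0)                             # Fermi-Dirac occupations
    sin2 = np.sin(np.outer(i, np.pi * x)) ** 2                 # sin^2(i pi x~) from the box eigenfunctions
    return 2.0 * (f @ sin2) / f.sum()

# example: strongly degenerate, moderately confined gas (Fermi level near i_F = 200)
x = np.linspace(0.0, 1.0, 2001)
profile = fermi_density(x, Lambda=100.0, alpha=0.05)
```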
As is seen from Fig. 1, density goes to zero near to the boundaries which shows the existence of QBL in Fermi gases. Decreasing degeneracy or increasing confinement enlarges the thickness of QBL, which means particles are affected more by the presence of boundaries in those cases. On the contrary, weak confinement and high degeneracy leads to sharper peaks that are nearest to boundaries. Although oscillations practically diminish to the middle of the domain for weak confinements, oscillations in fact become persistent even at the middle regions of the domain when confinement is relatively high.
These oscillations occurring in degenerate Fermi gases are called Friedel oscillations. In general, Friedel oscillations arise due to translational symmetry breaking near a defect or impurity in the system, and they have an exponential decay characteristic away from the symmetry-breaking point [20-24]. Coulomb interactions between Fermions in the system also cause Friedel oscillations [25-28]. Here the same oscillations occur due to symmetry breaking caused by the domain boundaries.
Characterization of density oscillations by 0th order approximation
For the exact density distribution given by Eq. (2), it is not possible to obtain an analytical expression without making any approximation. To obtain an analytical characterization of the density oscillations, we need to make approximations to the FD distribution function. The 0th and 1st order approximations to the FD distribution function can be made by representing the distribution function as a Heaviside step function (Θ) and as a piecewise ramp function, respectively, in Eqs. (3a) and (3b), where Λ_F corresponds to the Fermi energy and i_min (−) and i_max (+) are equal to √Λ/α ∓ 1/(α√Λ), respectively. A comparison of the two approximations can be seen in Fig. 2.
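One consistent explicit form of these two approximations, written as a sketch in terms of the quantum number i under the assumption ε̃_i = (αi)² (so that the Fermi level lies at i_F = √Λ_F/α), rather than as a verbatim copy of the authors' Eqs. (3a) and (3b), is:

$$f_i^{(0)} \approx \Theta\!\left(\frac{\sqrt{\Lambda_F}}{\alpha} - i\right), \qquad
f_i^{(1)} \approx \begin{cases} 1, & i \le i_{min} \\[4pt] \dfrac{i_{max} - i}{i_{max} - i_{min}}, & i_{min} < i < i_{max} \\[4pt] 0, & i \ge i_{max} \end{cases}
\qquad i_{min,max} = \frac{\sqrt{\Lambda}}{\alpha} \mp \frac{1}{\alpha\sqrt{\Lambda}}.$$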
We first approach the problem for the completely degenerate case, Λ → ∞ (T → 0 or n → ∞), which corresponds to the 0th order approximation to the FD distribution function. Under this condition (Λ → Λ_F), the numerator of Eq. (2) can be approximated using Eq. (3a), giving Eq. (4), and the denominator can be calculated by considering the first two terms of the Poisson summation formula, giving Eq. (5). Considering Eqs. (4) and (5) and defining α_F = α/√Λ_F for brevity, a fully analytical expression for Eq. (2) in the completely degenerate case is obtained as Eq. (6). It can easily be seen from Eq. (6) that α_F is actually equal to the oscillation wavelength in position space. As long as α_F << 1 (which is the condition for the highly oscillatory regime), the percentage errors of the analytical expression are quite low. Even for α_F = 0.1, the error is under 3%, and for α_F = 0.01 it drops to 0.15%.
From Eq. (6), it is also possible to find the envelope functions of the oscillations. The oscillations are a direct result of the sin terms in Eq. (6). Therefore, by taking the minimum and maximum values of the sin function (∓1), the upper (+) and lower (−) envelope functions are analytically obtained as Eq. (7). The first and second terms with brackets represent the left and right parts of the envelopes, respectively. The term with square brackets represents the contribution of the counterpart when the left or right part is at its maximum.
Generating the envelope functions for the oscillations is important, as the difference of the upper and lower envelopes gives the ultimate bounds of the oscillation amplitude in a confined system. Thus, for the completely degenerate case, the bound of the oscillation amplitude as a function of position is analytically expressed by Eq. (8). From the first peak to the last one, the envelope functions characterize the oscillations. Before the first peak or after the last peak, the envelope functions have no use. Therefore, the integral average of the oscillations can be found by considering the envelope functions, as in Eq. (9), where the osc subscript indicates that the integral average is taken over the oscillation range. The oscillation average ñ_osc is actually equal to the average of the envelope functions at x̃ = 0.5 (the middle point of the domain) for the completely degenerate case.
For a given set of Λ and α, it is also possible to define the numbers of maxima and minima (numbers of peaks and dips) of the oscillations, which are given by Eq. (10), where the ⌊···⌋ bracket denotes the floor function. The sum of N_peak and N_dip gives the number of extremum points of the oscillations. Then, the positions of the maxima and minima of the oscillations can also be found as x̃_j^± = [(4j ∓ 1)/4] α_F, with j = 1, 2, 3, ..., j_max (11), where j_max = N_peak/2 for maxima (+) and j_max = N_dip/2 for minima (−). Although Eq. (11) is valid only for the left part (0 ≤ x̃ ≤ 0.5), the right part can easily be found from 1 − x̃_j^±, which is valid in the interval 0.5 ≤ x̃ ≤ 1. The density values corresponding to the extrema of the oscillations can easily be found by replacing x̃ in Eq. (6) with Eq. (11). The relative percentage error of Eq. (11) with respect to Eq. (6) is under 5% for the first peak and decreases sharply to below 1% right after.
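As a small illustration of Eq. (11), the following sketch lists the left-half peak and dip coordinates; the counts N_peak and N_dip are taken as inputs here, since the floor-function expression of Eq. (10) is not restated above:

```python
def extremum_positions(alpha_F, n_peak, n_dip):
    """Left-half (0 <= x~ <= 0.5) extremum coordinates from Eq. (11).

    alpha_F : oscillation wavelength, alpha_F = alpha / sqrt(Lambda_F)
    n_peak, n_dip : total numbers of maxima and minima from Eq. (10)
    Mirror positions on the right half follow from 1 - x~.
    """
    peaks = [(4 * j - 1) / 4 * alpha_F for j in range(1, n_peak // 2 + 1)]  # maxima, j_max = N_peak / 2
    dips = [(4 * j + 1) / 4 * alpha_F for j in range(1, n_dip // 2 + 1)]    # minima, j_max = N_dip / 2
    return peaks, dips
```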
Characterization of density oscillations by 1st order approximation
Even though 0 th order analysis gives some analytical expressions with reasonable errors, these results are obtained just for zero temperature. In other words, α F does not depend on temperature and the results of 0 th order approximation does not represent temperature dependence. In order to incorporate temperature into results and to get higher precision, we can approximate FD in a finer way using the piecewise ramp function defined in Eq. (3b).
Using a methodology similar to that followed in the derivation of the 0th order expressions, we can use Eq. (3b) this time to calculate the density distribution. After mathematical operations, the expression simplifies to Eq. (12). Considering the minimum and maximum values of the trigonometric products in Eq. (12), the envelope functions for the 1st order approximation are found as Eq. (13). From the difference of the upper and lower envelopes given by Eq. (13), the oscillation amplitude in the 1st order approximation is found as Eq. (14), and the oscillation average is given by Eq. (15). The analytical density distribution, envelope functions, amplitude, and average of the oscillations each give better results than their 0th order counterparts, as expected. Note that the number of oscillations stays the same in the 1st order approximation as well, so Eqs. (10) and (11) remain valid. This can directly be seen in Fig. 4, where the relative differences of both approaches are plotted for the four different degeneracy-confinement values used in Fig. 3. The 1st order approximation always has lower errors than the 0th order one, and it gives particularly good results when confinement and/or degeneracy is weak. For strongly degenerate cases, both approximations start to approach each other. Since the 1st order approximation accurately captures the slope of the FD distribution around the Fermi level, temperature appears in the equations, particularly through the α√Λ term.
Conclusion
In this study, we considered a degenerate Fermi gas confined in a 1D domain and examined the characteristics of density oscillations by making 0th order and 1st order approximations to the FD distribution function. By using mathematical tools like the Poisson summation formula, we obtained accurate analytical expressions for both the density profile and its characteristic parameters such as oscillation numbers, amplitudes, averages, and their spatial coordinates. Although the 0th order approximation does not take the effects of temperature into account, with the 1st order approximation, temperature is also incorporated into the results. It should be noted that although the characterization of density oscillations is done by considering a 1D Fermi gas, it is trivial to consider the 2D and 3D cases. Due to the orthogonality of the eigenstates, the analytical expressions obtained in this article can be directly extended to higher dimensions in a rectangular confinement domain, which is actually a common geometry in semiconductor structures. On the other hand, the methods used in this article take advantage of the characteristic nature of the Fermi-Dirac distribution function, so they can only be used to characterize degenerate and confined Fermi gases.
With the help of the expressions given here, it may be possible to efficiently and easily predict and calculate the density oscillations as well as the oscillations in density-dependent properties of confined Fermions. | 2019-04-22T13:12:55.972Z | 2018-11-12T00:00:00.000 | {
"year": 2019,
"sha1": "324261bde5cd35fa6b4f677b3136a0ba1875d6f0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1911.04258",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "728ac5899a2716092c7a0e44d5253b8aeef00f8f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
9742939 | pes2o/s2orc | v3-fos-license | Proton-Pump Inhibitor Utilization Associated With the Change to Nonpreferred Formulary Status for Esomeprazole in the TRICARE Formulary
BACKGROUND: The Department of Defense (DoD) placed the proton-pump inhibitor (PPI) esomeprazole in the third copayment tier on the TRICARE formulary on July 17, 2005. The change to nonpreferred formulary status for esomeprazole included a $13 copayment increase (from $9.00 to $22.00) for either a 30-day supply purchased from a community pharmacy or a 90-day supply purchased from the mail-order pharmacy and a $0 copayment if obtained from a military pharmacy but with a prior authorization (PA) requirement. The change to nonpreferred formulary status was designed to encourage the use of PPIs other than esomeprazole and to increase the use of the mail-order pharmacy for esomeprazole purchases. OBJECTIVES: To quantify changes in (a) the TRICARE beneficiary utilization of esomeprazole relative to other PPIs and (b) the pharmacy settings used for filling esomeprazole prescriptions following implementation of a copayment increase and nonpreferred formulary status for esomeprazole. METHODS: A census of outpatient pharmacy fill records for prescription acid-reducing medications (PPIs, histamine-2 blockers, misoprostol, and sucralfate) obtained by beneficiaries aged 18 years or older from January 1, 2005, through December 31, 2006, was examined. Interrupted time series regression analyses without a control group were used to compare the utilization of esomeprazole relative to other PPIs, as well as the pharmacy setting used to obtain esomeprazole, in the months before and after the formulary change. The rates of continued esomeprazole use, switching to other prescription PPIs (lansoprazole, omeprazole, pantoprazole, and rabeprazole), switching to non-PPI prescription acid-reducing drugs, and discontinued prescription acid-reducing medication use among existing esomeprazole users (i.e., beneficiaries who obtained esomeprazole as the last PPI fill before the formulary change) were calculated overall and for each pharmacy setting used prior to the formulary change. RESULTS: Over the 24-month study period from January 1, 2005, through December 31, 2006, the total numbers of PPI fills and PPI users increased by 8.5% and 9.0%, respectively, and the number of esomeprazole users decreased by 4.6%. Of esomeprazole users, the percentages of individuals obtaining esomeprazole from military pharmacies and community pharmacies, respectively, decreased from 1.7% to 1.1% and from 89.7% to 81.7%, while the percentage of individuals obtaining esomeprazole from the mail-order pharmacy increased from 8.8% to 17.6%. Time series analysis yielded a positive, statistically significant growth in esomeprazole fills (β1=0.114; P=0.012) during the 6-month pre-intervention period (January through June 2005) and a significant reduction in August 2005 (β2=5.0%; P < 0.001), the month immediately following the formulary change. During the 17-month post-intervention period (August 2005 through December 2006), no statistically significant change in trend for esomeprazole fills (β3=0.0265; P=0.534) was observed, although a small increase in the raw number of esomeprazole fills was noted. Among the 117,801 existing esomeprazole users, 86,386 (73.3%) continued using esomeprazole, 17,676 (15.0%) switched to other prescription PPIs, 679 (0.6%) used only non-PPI prescription therapy, and 13,060 (11.1%) discontinued all prescription acid-reducing pharmacotherapy after the formulary change.
Significantly higher PPI switching and acid-reducing therapy discontinuation rates were observed among men, beneficiaries aged 18-44 years, and TRICARE enrollees relative to women, those over 45 years of age, and those who used other health insurance (P < 0.001). Individuals who used military pharmacies, where a PA requirement was implemented, were more likely to change pharmacy settings to obtain esomeprazole (43.8%) than were users of community pharmacies (11.9%) or the mail-order pharmacy (22.8%). Mail-order pharmacy users were less likely to discontinue acid-reducing pharmacotherapy (4.9%) than were users of other pharmacy settings.
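A minimal sketch of the interrupted time series (segmented) regression described in the abstract, using Python and statsmodels; the monthly series, column names, and illustrative values are assumptions and do not reproduce the study's data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical monthly series: esomeprazole's share of PPI fills, Jan 2005 - Dec 2006
df = pd.DataFrame({
    "month": list(range(1, 25)),                  # 1 = January 2005 ... 24 = December 2006
    "esom_share": [30.0] * 7 + [25.0] * 17,       # illustrative numbers only, not study results
})
intervention_month = 8                            # August 2005, first full month after the change

df["post"] = (df["month"] >= intervention_month).astype(int)             # immediate level shift (beta2)
df["time_after"] = (df["month"] - intervention_month + 1).clip(lower=0)  # post-change trend (beta3)

# 'month' carries the pre-intervention trend (beta1)
model = smf.ols("esom_share ~ month + post + time_after", data=df).fit()
print(model.summary())
```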
• Current research suggests that there are no significant differences with respect to safety and efficacy among the currently available PPI formulations. While interventions to motivate the use of preferred PPIs, such as 3-tier copayment plans, therapeutic maximum allowable cost programs, and coverage for over-the-counter PPIs, have generally resulted in cost-savings for health plans, variable rates of formulary compliance and therapy discontinuation have been reported. • In a previous study of 3-tier copayment implementations, users of nonpreferred PPIs were more likely to switch to a preferred PPI if their copayment changed from $12 to $24 than if their copayment remained at $12 (17.6% vs. 2.1%, respectively), but were not more likely to discontinue therapy. However, nonpreferred PPI users experiencing a $23 copayment change from $7 to $30 were more likely to switch (35.1% vs. 1.5%) and to discontinue PPI therapy (32.0% vs. 18.9%) than were nonpreferred PPI users whose copayment remained at $15. • A 36% PPI switch rate and a 16% acid-reducing therapy discontinuation rate were reported among publicly insured enrollees aged 66 years or older in the 12 months following a formulary change that reduced the number of covered PPIs from 4 to 1 and imposed a requirement for treatment failure with a histamine-2 blocker. • Greater use of preferred maintenance medications in other therapeutic classes has been observed among insured populations transitioned from 2-tier to 3-tier formularies, but patients with gastroesophageal reflux disease have been reported to be less responsive to formulary changes than those with other chronic diseases.
T he increasing reliance on pharmacotherapy for the management of chronic disease has significantly increased the pharmaceutical component of overall health care spending in the United States each year for more than a decade. 1 In an effort to contain these rising costs, strategies to increase patient costsharing have evolved to promote the use of medications believed to be more cost-effective for the health plan. Foremost among them are multi-tiered formularies, which offer lower patient cost-shares for first-tier (generic) or second-tier (preferred brand) medications relative to nonpreferred medications placed in higher tiers. Patient response to increased cost-sharing has been the focus of numerous studies that often present inconsistent conclusions. Quasi-experimental studies of insured populations transitioned to a multi-tier formulary generally reported modest cost savings for the health plan and no adverse effect on patient compliance with critical medications, but cross-sectional studies have reported significantly greater utilization changes with increased patient cost-shares and the suggestion of an increased risk of negative patient outcomes. 2 Since the first proton-pump inhibitor (PPI), omeprazole, was launched in 1989, PPIs have demonstrated superior acid suppression relative to histamine-2 blockers and have been incorporated into the treatment guidelines for acid-related disorders including gastroesophageal reflux disease (GERD), Helicobacter pylori -negative peptic ulcer disease, and nonsteroidal anti-inflammatory drug-induced gastropathy. 3,4 As new PPIs were introduced into the market, this class has grown into one of the top-selling medication classes in both total sales and market share. 5 As a result, the PPI class has been a frequent target for patient cost-share increases as health plans have imposed incentive-based formularies, prior authorization (PA), and other utilization management techniques to encourage use of (a) preferred medications within the PPI class or (b) non-PPI acid-reducing medications to manage acid-related conditions. 6,7 While implementation of these measures has generally resulted in cost savings for health plans, variable rates of formulary compliance and therapy discontinuation have been reported. Huskamp et al. compared PPI utilization changes following the implementation of 2 different 3-tier copayment structures. 8 In a plan that switched from a 2-tier ($6/$12 [generic/brand]) to a 3-tier ($6/$12/$24 [generic/preferred brand/nonpreferred brand]) formulary, users of nonpreferred PPIs, who experienced a copayment change from $12 to $24, were more likely to switch to a preferred product than were those whose copayment remained at $12 throughout the study (17.6% vs. 2.1%, respectively; P < 0.001), but were no more likely to discontinue therapy (18%-19% in both groups; P = 0.79). However, nonpreferred PPI users who experienced a $23 copayment change from $7 (single tier) to $30 (3-tier: $8/$15/$30) were more likely both to discontinue PPI treatment (32.0% vs. 18.9%, respectively; P < 0.001) and to switch to a preferred PPI (35.1% vs. 1.5%, respectively; P < 0.001). Huskamp et al. observed similar findings for users of angiotensin-converting enzyme (ACE) inhibitors and statins. 8 Schneeweiss et al. 
reported a 36% switch rate and a 16% acid-reducing therapy discontinuation rate among publicly insured enrollees aged 66 years or older in the 12 months following a more restrictive formulary change from 4 to only 1 covered PPI and a histamine-2 receptor antagonist treatment failure requirement. 6 Delate et al. reported a 92% decrease in the rate of PPI claims among Medicaid recipients in the month directly following the implementation of a PA requirement and an overall acid-reducing therapy discontinuation rate of approximately 22%. 7 Although the generalizability of these findings is limited by broad differences in study populations and methodologies (only one of these studies used comparison groups, while the others used time series analyses with no comparison groups), 9 the addition of a tier or restriction to a health plan's formulary has generally been associated with increased utilization of preferred medications in many maintenance medication classes. PPI users and patients with GERD, however, have been reported to be less responsive to formulary changes than patients diagnosed with other common chronic conditions. 10

• In the first calendar month following an increase in the copayment for esomeprazole from $9 to $22 for a 30-day supply purchased from a community pharmacy or a 90-day supply purchased from the mail-order pharmacy, the percentage of PPI fills attributable to esomeprazole decreased approximately 25% and, in the 17 months following the change, use of the mail-order pharmacy doubled among esomeprazole users.
• Among the 117,801 users of esomeprazole when the formulary change was implemented, 73.3% continued using esomeprazole, 15.0% switched to a preferred PPI, 0.6% switched to non-PPI prescription medication, and 11.1% discontinued all prescription acid-reducing pharmacotherapy.
• Esomeprazole users who were enrolled in a TRICARE managed care health plan were more likely than those who used TRICARE only to obtain prescription drugs to switch to preferred PPIs (19.1% vs. 13.7%, respectively), to discontinue acid-reducing pharmacotherapy (14.7% vs. 9.9%, respectively), and to switch to a different pharmacy setting for esomeprazole fills (15.8% vs. 11.8%, respectively; all comparisons P < 0.05).
• The relatively low rates of PPI switching (15.0%) and discontinuation of prescription acid-reducing medication (11.1%) suggest that the copayment differential ($13 vs. other brand PPIs, $19 vs. generic omeprazole) was below the threshold amount needed to promote switching to preferred medications.

Like many health plans, the Department of Defense (DoD) health plan, TRICARE, implemented formulary changes within the class of acid-reducing medications as part of an effort to
contain the rising cost of its prescription drug benefit. Initially, the TRICARE formulary used a 2-tier copayment structure to encourage use of generic over brand medications. The formulary also included financial incentives, such as reduced or $0 copayments, to promote the use of military pharmacies and the TRICARE mail-order pharmacy over community pharmacies. The DoD purchases medications under a federally mandated pricing structure that allows it to stock medications at its military pharmacies and at the mail-order pharmacy at a lower cost than the reimbursements paid to community pharmacies used by TRICARE beneficiaries.
In 2005, a third copayment tier was established, and esomeprazole (Nexium) became one of the first medications to be placed in the third (nonformulary) copayment tier. The formulary change was made by consensus vote of the DoD Pharmacy and Therapeutics Committee, following review of meta-analyses that identified no significant differences with respect to safety and efficacy among the available PPI formulations. 11,12 A cost-minimization analysis undertaken to rank PPIs from most to least cost-effective (based on weighted average cost per day of treatment) found esomeprazole to be the least cost-effective PPI on the formulary. The formulary change was announced on May 31, 2005, and implemented on July 17, 2005. At the time of the formulary change, the TRICARE Web site was the main method by which drug benefit changes were communicated to beneficiaries outside of the dispensing environment. The primary objective of the change was to promote the use of generic omeprazole or other brand PPIs over esomeprazole. The secondary objective was to promote the use of the mail-order pharmacy over community pharmacies for esomeprazole purchases.
The copayment structure for acid-reducing medications before and after the formulary change is presented in Table 1. Prior to July 17, 2005, generic medications required a $3 copayment and formulary brand medications required a $9 copayment for a 30-day supply from a community pharmacy or a 90-day supply from the mail-order pharmacy. Medication dispensed at military pharmacies had a $0 copayment. Effective July 17, 2005, the copayments for esomeprazole (nonformulary) were raised from $9 to $22 for a 30 day-supply in community pharmacies or a 90-day supply at the mail-order pharmacy, and PA was required to obtain esomeprazole from a military pharmacy at a $0 copayment. To obtain PA, the esomeprazole prescription had to be written by a military provider or civilian provider to whom the patient was referred by a military provider, and medical necessity had to be demonstrated by (a) evidence of contraindication to the formulary agent, (b) adverse effects or therapeutic failure with the formulary agent, or (c) previous response to esomeprazole in a patient for whom changing to the preferred medication presented unacceptable risk. The PA form, containing justification for esomeprazole use over each formulary alternative, had to be signed by the prescriber and faxed or mailed to the dispensing location.
Copayment amounts for the remaining PPIs (lansoprazole, omeprazole, pantoprazole, and rabeprazole) and other acidreducing medications (cimetidine, famotidine, nizatidine, ranitidine, misoprostol, and sucralfate) were not affected by the formulary change. No other cost-sharing or coverage changes for acid-related disorders under the TRICARE benefit were made, nor were any brand, generic, or over-the-counter (OTC) PPIs introduced or removed from the market in the 24-month period during which we assessed utilization changes among the PPI class of drugs.
■■ Methods
The DoD maintains an enterprise-wide information system that captures patient demographic and prescription information for all prescriptions filled by beneficiaries using their TRICARE pharmacy benefit. A fill record is created in real time when the prescription is filled regardless of whether a military, community, or mail-order pharmacy is used. The fill records are forwarded to a central data repository for processing to remove transactions that have been reversed (e.g., prescriptions that were filled but never picked up) and are coded with an auto-generated, pseudo-patient identifier that enables researchers to link pharmacy and health care service records for the same person without the inclusion of any protected health information in the study datasets. This data repository was the source of the data used in this study. A census of outpatient pharmacy fill records for prescription acid-reducing medications (identified by First DataBank generic code number) obtained by beneficiaries aged 18 years or older during a 24-month period (January 1, 2005, through December 31, 2006) was extracted from this central repository. Approximately 0.05% of the fill records were then excluded from the dataset because they contained missing data or represented medications administered by a clinician during a clinic visit. The resulting 9,395,357 fill records were aggregated into 1,478,815 unique beneficiary records that included all acid-reducing medication prescriptions filled during the study period. In all analyses, we defined a "fill" as any prescription for a PPI filled by a beneficiary regardless of the pharmacy type used or the days supply obtained (i.e., a 30-day fill at a community pharmacy and a 90-day fill at a mail-order pharmacy each counted as 1 fill).
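As a rough illustration of this aggregation step, a minimal pandas sketch is shown below. The file name and column names (pseudo_id, fill_date, age_at_fill) are hypothetical placeholders rather than the repository's actual schema.

```python
import pandas as pd

# Load acid-reducing fill records; column names are illustrative placeholders.
fills = pd.read_csv("acid_reducer_fills.csv", parse_dates=["fill_date"])

# Keep adult fills within the 24-month study window.
fills = fills[(fills["fill_date"] >= "2005-01-01") &
              (fills["fill_date"] <= "2006-12-31") &
              (fills["age_at_fill"] >= 18)]

# A "fill" counts once regardless of pharmacy setting or days supply, so a 30-day
# community-pharmacy fill and a 90-day mail-order fill each count as 1.
fills["fill_count"] = 1

# Collapse the fill records into one record per beneficiary.
beneficiaries = (fills.groupby("pseudo_id")
                      .agg(n_fills=("fill_count", "sum"),
                           first_fill=("fill_date", "min"),
                           last_fill=("fill_date", "max")))
print(beneficiaries.head())
```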
The raw numbers of PPI fills and beneficiaries filling prescriptions for PPIs were calculated for each study month by PPI drug and type of pharmacy used, and the percentage of change over the 24-month study period was calculated. Interrupted time series regression analyses as described by Wagner et al. were used to compare the utilization of esomeprazole relative to other PPIs as well as the types of pharmacies used to obtain esomeprazole before and after the formulary change. 13 The 24-month study period was subdivided into pre-intervention months (January 1, 2005, through June 30, 2005) and post-intervention months (August 1, 2005, through December 31, 2006). Because the formulary change occurred on July 17th, July could not appropriately be classified as either a pre-intervention or a postintervention month and was thus excluded from the time series analysis. Claims from July 2005 were included in calculations of medication discontinuation and switch rates.
To assess the change in utilization of esomeprazole relative to other PPIs, the numbers of esomeprazole fills, generic omeprazole fills, and other branded PPI fills (lansoprazole, brand omeprazole, pantoprazole, and rabeprazole) were plotted as a percentage of total PPI fills by month. To assess the changes in the pharmacy setting used, the percentages of users who obtained esomeprazole from each pharmacy setting were plotted by month. Regression models were used to estimate the level and slope of each pre-intervention and post-intervention period for each fill type and pharmacy setting. An initial plot of the error terms, e_t, over time indicated the presence of positive autocorrelation between adjacent months. Autocorrelation violates the assumptions of ordinary least-squares regression analysis and has been shown to cause underestimation of error terms and overestimation of significance of effects. Thus, we used a maximum-likelihood autoregression analysis, a technique commonly applied in time series studies. 13 Maximum-likelihood autoregression analysis produces estimates that are more likely than those derived from ordinary least-squares regression analysis to represent the true relationship between variables because correlated errors between adjacent months are considered in the calculation. Significance testing was 2-sided at the 95% confidence level.
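To illustrate this segmented time series design, the sketch below fits a regression with AR(1) errors by maximum likelihood on a toy monthly series, using a month counter, a post-change indicator, and a post-change slope term (the usual Wagner-style parameterization). The toy numbers, the variable names, and the use of statsmodels' ARIMA to handle the autoregressive error structure are illustrative assumptions, not the study's actual code.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

months = pd.period_range("2005-01", "2006-12", freq="M")
# Toy series standing in for the monthly percentage of PPI fills for esomeprazole.
y = np.r_[np.linspace(19.5, 20.0, 6), np.nan, np.linspace(15.7, 17.0, 17)]
df = pd.DataFrame({"month": months, "pct_esomeprazole": y})

df["time"] = np.arange(1, len(df) + 1)                                # month counter
df["post"] = (df["month"] >= pd.Period("2005-08", "M")).astype(int)   # level change
df["time_after"] = np.maximum(0, df["time"] - 7)                      # post-change slope

d = df.dropna().reset_index(drop=True)   # July 2005, the change month, is excluded
# Regression with AR(1) errors, estimated by maximum likelihood.
fit = ARIMA(d["pct_esomeprazole"], exog=d[["time", "post", "time_after"]],
            order=(1, 0, 0), trend="c").fit()
print(fit.summary())
```

Replacing the toy outcome with the share of esomeprazole users in each pharmacy setting would give the second set of models described above.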
For beneficiaries who obtained esomeprazole as the last PPI fill before the effective date of the formulary change, the rates of 4 outcomes were calculated: (a) continued esomeprazole use (a prescription fill for esomeprazole on or after July 17, 2005); (b) switch to another prescription PPI, including generic omeprazole; (c) switch to a non-PPI prescription acid-reducing therapy; and (d) discontinued prescription acid-reducing medication use (no prescription fills for acid-reducing medications on or after July 17, 2005). Switching and discontinuation rates were calculated by comparing esomeprazole and other prescription acid-reducing medication utilization before and after the formulary date. Outcomes were calculated by beneficiary characteristics and for each pharmacy setting used prior to the formulary change, and between-group differences were assessed for statistical significance using the Pearson chi-square test. Because long-term PPI therapy may not be appropriate for many patients, it is likely that some patients completed their esomeprazole treatment near the time of the formulary change. Therefore, we calculated rates of prescription PPI switching and therapy discontinuation following the formulary change among users of other (preferred) PPIs for comparison purposes.
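The four-outcome classification can be sketched as follows. The column names are hypothetical, and the precedence rule (continued esomeprazole use takes precedence over switching when a beneficiary filled both) is an assumption that the text does not spell out.

```python
import pandas as pd

CHANGE_DATE = pd.Timestamp("2005-07-17")

def classify_outcome(post_fills: pd.DataFrame) -> str:
    """Classify one beneficiary from their acid-reducing fills on/after the change date.

    Expects hypothetical columns `drug` (product name) and `is_ppi` (bool).
    """
    if post_fills.empty:
        return "discontinued prescription acid-reducing therapy"
    if (post_fills["drug"] == "esomeprazole").any():
        return "continued esomeprazole"
    if post_fills["is_ppi"].any():
        return "switched to another prescription PPI"
    return "switched to non-PPI prescription acid-reducing therapy"

# Usage sketch: group post-change fills by beneficiary and apply the classifier.
# Existing esomeprazole users with no post-change fills will not appear in the
# grouped data and must be added back to the "discontinued" category.
# outcomes = (fills[fills["fill_date"] >= CHANGE_DATE]
#             .groupby("pseudo_id").apply(classify_outcome))
```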
All analyses focused on changes in beneficiary utilization rather than health plan expenditures both before and after the formulary change. Analysis and reporting of TRICARE drug expenditures are complicated by variability among the 3 pharmacy settings in the net acquisition cost of individual medications and by regionally or locally negotiated contracts with pharmaceutical manufacturers that prohibit disclosure of price information. All data manipulations and analyses were performed using SPSS (SPSS Inc., Chicago, IL), Base 10.0. This study was reviewed by the TRICARE Management Activity Exempt Determination Officer on February 28, 2007, and was found to be exempt under 32 CFR 219.101(b)(4).
■■ Results
The number and percentage of TRICARE beneficiaries who filled 1 or more prescriptions for an acid-reducing medication, PPI, or esomeprazole at any time during the study period are presented in Table 2 by gender, age group (at first fill), and enrollment status. Beneficiaries were categorized as "enrolled" if they were enrolled in a TRICARE managed care option similar to a health maintenance organization in the civilian health care sector. Enrollees are required to obtain all of their health care services within the TRICARE network of military and civilian providers. Enrollees consist primarily of active-duty service members and their families, but retired service members and their dependents may also enroll. Beneficiaries categorized as users of other health insurance programs consist of retired military service members, their dependents, and those who use their TRICARE pharmacy benefit to obtain prescription medications but use private insurance or Medicare as the primary payer for their health care services.
Compared with acid-reducing medication users overall, esomeprazole users were disproportionately women (64.5% vs. 57.1%), aged 65 years or older (50.7% vs. 40.2%), and more likely to use other health insurance to obtain health care (74.6% vs. 55.1%). The mean [SD] age of esomeprazole users enrolled in other health insurance programs was significantly higher (67.0 [12.8] years) than that of TRICARE-enrolled beneficiaries (49.7 [13.5] years; P < 0.001).
The raw number and distribution of PPI fills by PPI type and pharmacy type, the number of beneficiaries filling prescriptions for PPIs and esomeprazole, and the distribution of beneficiaries by pharmacy type for esomeprazole fills for key months throughout the study period are presented in Table 3. Over the entire study period from January 2005 through December 2006, the raw number of PPI fills increased by 8.5%. The percentage of esomeprazole fills as a proportion of PPI fills decreased from 19.4% to 17.0%. As a proportion of all PPI fills, other brand PPI fills decreased from 71.4% to 66.2% and generic omeprazole fills increased from 9.3% to 16.8%. The proportion of PPI prescriptions filled at military pharmacies decreased from 37.4% to 31.4%, while the proportion of PPI fills at community pharmacies and the mail-order pharmacy increased from 52.5% to 56.0% and from 10.1% to 12.6%, respectively.
Similar to the trend observed for the number of PPI fills, the number of beneficiaries filling prescriptions for PPIs increased by 9.0% over the study period; however, the number of esomeprazole users decreased by 4.6%. The percentage of esomeprazole users obtaining esomeprazole from community pharmacies and military pharmacies decreased from 89.7% to 81.7% and from 1.7% to 1.1% respectively, while use of the mail-order pharmacy to obtain esomeprazole increased from 8.8% to 17.6%. Comparing the calendar months directly before (June 2005) and after (August 2005) the formulary change, the percentage of esomeprazole fills as a proportion of PPI fills decreased from 20.0% to 15.7%, and the percentage of esomeprazole users who obtained esomeprazole from the mail-order pharmacy increased from 9.4% to 10.9%.
The percentages of total PPI fills for esomeprazole, generic omeprazole, and other brand PPIs by month are plotted in Figure 1. During the pre-intervention period, positive and statistically significant slopes were observed for generic omeprazole (β 1 = 0.140; P = 0.030) and esomeprazole (β 1 = 0.114; P = 0.012). A negative and statistically significant slope was observed for other brand PPIs (β 1 = -0.244; P < 0.001). In August 2005, directly following the formulary change, statistically significant increases in the percentages of fills for generic omeprazole (β 2 = 1.5%; P < 0.001) and other branded PPIs (β 2 = 3.3%; P < 0.001) were observed, corresponding to approximately 5,000 and 11,000 fills, respectively. A statistically significant decrease in the percentage of fills for esomeprazole was observed (β 2 = -5.0%; P < 0.001), corresponding to approximately 16,700 or 25% of fills relative to June 2005. During the post-intervention period, the magnitude of the positive slope for generic omeprazole (β 3 = 0.184; P = 0.013) increased significantly, and the magnitude of the negative slope for other brand PPIs (β 3 = -0.169; P = 0.003) decreased significantly. A negative but nonsignificant slope for esomeprazole (β 3 = -0.0265; P = 0.534) was observed.
The percentages of esomeprazole users who used a military pharmacy, community pharmacy, or the mail-order pharmacy to fill a prescription for esomeprazole by month are shown in Figure 2. During the pre-intervention period, nonsignificant slopes for the mail-order pharmacy (β 1 = 0.126; P = 0.152) and community pharmacies (β 1 = -0.0673; P = 0.457) were observed, and a small but statistically significant negative slope was observed for military pharmacies (β 1 = -0.0870; P < 0.001). In August 2005, in the month following the formulary change, statistically significant increases in the percentage of esomeprazole users who obtained esomeprazole from a military pharmacy or the mail-order pharmacy were observed (β 2 = 0.2%, P = 0.006; and β 2 = 1.9%, P < 0.001, respectively), corresponding to approximately 100 and 950 users, respectively. A statistically significant decrease in the percentage of beneficiaries who obtained esomeprazole from a community pharmacy was observed (β 2 = -1.8%; P < 0.001), corresponding to approximately 900 users.
During the post-intervention period, use of the mail-order pharmacy (β 3 = 0.252; P = 0.009) accelerated, use of community pharmacies (β 3 = -0.292; P = 0.005) continued to decline, and a small but significant reversal of the pre-intervention trend in the use of military pharmacies (β 3 = 0.068; P < 0.001) was observed, indicating a slowdown of the rate of decline of military pharmacy use that was observed in the pre-intervention period. Across the entire class of prescription PPIs, use of military pharmacies (military pharmacy users as a proportion of all PPI users) decreased (β = -0.189; P < 0.001) and use of community pharmacies and the mail-order pharmacy increased (β = 0.082, P < 0.001; and β = 0.107, P < 0.001, respectively) across the 24-month period (graphical data not displayed). A summary of utilization changes among existing esomeprazole users, that is, study subjects whose last PPI fill prior to the formulary change was for esomeprazole, is presented in Table 4. Among the total of 117,801 existing esomeprazole users, 86,386 (73.3%) continued to obtain esomeprazole, 17,676 (15.0%) switched to other prescription PPIs, 679 (0.6%) switched to non-PPI prescription acid-reducing medications, and 13,060 (11.1%) discontinued all prescription acid-reducing pharmacotherapy (i.e., did not fill any prescriptions for acid-reducing medications) after July 17, 2005. Among those who continued esomeprazole use, 10,942 (12.7%) changed the pharmacy setting(s) through which they obtained esomeprazole.
Men were significantly more likely than women to switch to a preferred PPI (18.0% and 13.3%, respectively; P < 0.001), and individuals aged 18-44 years were more likely to discontinue all acid-reducing pharmacotherapy (25.2%) than were those aged 45-64 years (9.5%) or those aged 65 years or older (9.0%; P < 0.001). Among age groups, the rates of switching to a preferred PPI revealed a bimodal pattern. Switch rates were higher for the youngest (18-44 years) and oldest (65 years and older) age groups (18.4% and 15.2%, respectively) than for the middle age group (45-64 years, 13.6%; P < 0.001). TRICARE enrollees were significantly more likely than those using other health insurance plans to switch to a preferred PPI (19.1% and 13.7%, respectively; P < 0.001), to switch to non-PPI prescription medications (0.8% and 0.5%, respectively; P < 0.001), to discontinue all prescription acid-reducing therapy (14.7% and 9.9%, respectively; P < 0.001), or to change pharmacy settings for obtaining esomeprazole (15.8% and 11.8%, respectively; P < 0.001). Other smaller but statistically significant utilization changes were observed among all gender and age subgroups (P < 0.05).
Because community pharmacies were used by 101,166 (85.9%) of the 117,801 existing users to obtain their esomeprazole fills, utilization patterns were determined primarily by these users. The formulary change may have motivated some new users (i.e., those who did not fill any prescriptions for esomeprazole prior to the formulary change) to choose the mail-order pharmacy over community pharmacies for their esomeprazole fills, but the degree to which the formulary change impacted the purchase decision cannot be validated using administrative data alone. Although a positive trend toward increased mail-order pharmacy use was observed for esomeprazole and PPIs in general, use prevalence was substantially lower for the mail-order pharmacy (less than 18% for esomeprazole and less than 13% for all PPIs) than for community pharmacies during all months in the study period. Other health plans have used similar financial incentives (i.e., offering a 90-day supply of medication for the same price as a 30-day supply purchased from a community pharmacy) to promote use of mail-order pharmacies, 14,15 but limited research has been done to assess the extent to which the lower out-of-pocket cost motivates a patient to voluntarily use a mail-order option over a community pharmacy. 16 Fewer than 2% of esomeprazole users obtained the medication from a military pharmacy during any study month, but significant utilization changes were observed among users of this setting. Following the intervention, we observed a nearly constant use of military pharmacies for esomeprazole fills, which is likely a reflection of the baseline of beneficiaries who pursued and obtained PA to fill their esomeprazole prescriptions with a $0 copayment. While the military pharmacy setting is unique to the TRICARE pharmacy benefit, utilization changes associated with the use of military pharmacies may arguably be compared with those from other low or $0 copayment plans in which a PA requirement was imposed. Delate et al.
reported that approximately 50% of Medicaid enrollees who received a prescription for a PPI did not pursue PA following implementation of a PA requirement for all PPI medications. 7 In the present study, approximately 44% of existing esomeprazole users who obtained the drug from a military pharmacy before the formulary change and who chose to continue esomeprazole use elected to obtain esomeprazole elsewhere at greater expense to themselves, presumably because they did not pursue or meet the esomeprazole PA requirement. Clearly, a PA requirement that applies to all PPIs is more restrictive than one that applies to esomeprazole exclusively, and the degree to which DoD enrollees can be compared with Medicaid recipients is questionable, but the magnitude and direction of the beneficiary response to the 2 PA requirements appears comparable.
Consistent with other studies, we found that esomeprazole users were disproportionately female and older compared with users of PPIs overall 17,18 and that these subgroups were generally less responsive to the formulary change. We also found that those who were TRICARE enrollees displayed a greater sensitivity to the formulary change in terms of significantly higher PPI switching and therapy discontinuation rates. These beneficiaries receive their care from TRICARE providers who are expected to assimilate formulary changes into their prescribing practice, whereas non-TRICARE providers are unlikely to be aware of TRICARE formulary changes. 19 Statistically significant differences across pharmacy settings were observed for all rates examined (P < 0.05). The esomeprazole continuation rate was highest for the mail-order pharmacy users (80.9%) relative to military or community pharmacy users (70.4% and 72.4%, respectively; P < 0.001). Among those beneficiaries who continued to obtain esomeprazole after the formulary change, military pharmacy users were most likely to switch pharmacy settings (43.8%), followed by the mail-order pharmacy users (22.8%) and community pharmacy users (11.9%; P < 0.001). The prescription PPI switch rate and acid-reducing medication discontinuation rate were highest among military pharmacy users (15.7% and 12.9%, respectively) and lowest among the mail-order pharmacy users (13.9% and 4.9%, respectively). Among those using other prescription PPIs prior to July 17, 2005, switching and acid-reducing pharmacotherapy discontinuation rates during the 17-month follow-up period were comparable with those observed for esomeprazole: generic omeprazole (5.1% and 12.7% for switching and discontinuation, respectively), brand omeprazole (14.5% and 11.0%, respectively), lansoprazole (7.8% and 16.2%, respectively), pantoprazole (5.5% and 16.5%, respectively), and rabeprazole (4.0% and 14.2%, respectively; data not shown).
■■ Discussion
This study examined changes in PPI utilization associated with the placement of esomeprazole in the third tier of the TRICARE formulary. The DoD's primary objective was to promote the use of generic omeprazole or the other 4 brand PPIs over esomeprazole. Our time series analyses indicated that, in the 6 months prior to the formulary change, esomeprazole fills represented approximately 20% of the PPI fills with a trend of gradual, positive growth. Esomeprazole fills dropped to less than 16% of total PPI fills in the calendar month following its removal from the formulary. A roughly commensurate 1.5% increase in fills for generic omeprazole and 3.3% increase for other brand PPIs from June 2005 to August 2005 suggests that the DoD successfully migrated approximately 5% of the PPI fills, 25% of the esomeprazole fills, and 15% of existing esomeprazole users to the preferred PPIs and slowed the trend toward increased esomeprazole market share over the 17-month post-intervention period.
The DoD's secondary objective was to promote greater use of the mail-order pharmacy for esomeprazole fills. A small but significant jump in mail-order pharmacy use was observed in the month immediately following the formulary change, and a significant trend toward increased mail-order pharmacy use to obtain esomeprazole was observed over the study period. The roughly 2% increase in mail-order pharmacy usage relative to community pharmacies during the first post-intervention month was likely a combination of existing esomeprazole users switching to the mail-order pharmacy, as well as higher rates of PPI switching or therapy discontinuation among community pharmacy users relative to mail-order pharmacy users.

Like other managed care payers, the DoD faces the challenge of communicating benefit changes to prescribers and beneficiaries to effectively promote formulary compliance. During the study period, formulary change notices published on the TRICARE Web site were the primary means of communicating benefit changes, and it is not clear how frequently the Web site was used for obtaining formulary information. It is unlikely that the formulary change affected prescribing practices beyond the military and DoD-contracted hospitals and clinics where providers are expected to assimilate TRICARE formulary changes into their prescribing practice. Non-TRICARE providers, who treated nearly 75% of the esomeprazole users in our study, were probably unfamiliar with the TRICARE drug benefit or the formulary status of the medications they prescribed. 19 Many beneficiaries may not have even realized that they were using a third-tier medication unless they queried their pharmacist for other options. 21 Since the time of the present study, the DoD has recognized the importance of advertising benefit changes and has initiated direct mailings to notify beneficiaries when the formulary status of their medications is changing.
A study that examined the impact of a letter-based notification program in a commercially insured group found that beneficiary mailings improved formulary compliance for many medications. One of the highest switch rates to a formulary alternative was associated with generic omeprazole, a finding that the authors potentially attributed to direct-to-consumer (DTC) advertising (for Prilosec) and consumer loyalty for the OTC product with the same name. 17 Our study was conducted concurrently with an extensive marketing campaign, in which more spending was dedicated to DTC advertising for esomeprazole than was spent for any other prescription medication in 2005. 22 Exposure to DTC advertising has been associated with increased prescribing and utilization of the advertised medication, as well as higher rates of switching to the advertised product. 20,23 The $22 copayment for a 30-day supply of esomeprazole may be inadequate to motivate beneficiaries to investigate less-familiar alternatives, even if they can save up to $19 by doing so. These findings suggest that formulary changes involving less well-advertised brands may achieve a higher rate of conversion to preferred medication than was observed in this study.
The bimodal PPI switching pattern we observed among different age groups was reported by Nair et al., who found formulary compliance rates to be higher in the 18-to-25-year and 65-year-and-older age groups than in the 26-to-64-year age group. 10 Our observation may be attributable to the combined influence of TRICARE enrollment, which is more common among those aged 18-44 years, and the prevalence of multiple comorbidities, which are likely to be highest among those aged 65 years or older. A prior study that examined the association between PPI switching and variables hypothesized to influence switching for well-advertised products reported lower PPI switch rates among subjects without significant comorbidities than among those with multiple comorbid conditions. 20 Underlying all formulary and drug policy changes is the potential risk of motivating a premature discontinuation of therapy. We found no evidence that the TRICARE formulary change was associated with an increased prescription acid-reducing medication discontinuation rate among esomeprazole users (11.1%) relative to users of the preferred PPIs (ranging from 11.0% to 16.5%) on the TRICARE formulary. For esomeprazole users, our observed therapy discontinuation rate was also lower than the stable 16% background PPI discontinuation rate among PPI users reported among publicly insured seniors by Schneeweiss et al. 6 Huskamp et al. evaluated utilization changes following the introduction of a 3-tier plan restructuring strategy similar to that imposed on TRICARE beneficiaries. 8 Among nonpreferred PPI users who experienced a $12 copayment increase (from $12 to $24), the change was associated with a higher PPI switch rate (17.6%) but was not significantly associated with the therapy discontinuation rate, which was 18%-19% in both the intervention and comparison groups. Our 15.0% switch rate from esomeprazole to preferred PPIs was lower than the 36.4% switch rate reported by Schneeweiss 6 or the 23%-24% rate among commercially insured populations by Hall et al. 18 under the normal course of treatment, but it was higher than the 2% PPI switch rates observed among Huskamp's comparison groups whose 2-tier plans ($6-$7 copayment differentials) underwent no formulary change. Our switch rates among existing users of other preferred PPIs on the TRICARE formulary ranged from 4.0% to 7.8% with the exception of brand omeprazole, with a switch rate of 14.5%, similar to that observed for esomeprazole.
Although the formulary changes, target populations, and study methodologies reported in the literature varied considerably, their findings when combined with our results reinforce the notion that copayment increases in the $12-$15 range can promote switching to a preferred medication without significant increases in therapy discontinuation. 8 Our relatively modest switch rates to preferred PPIs suggest that financial incentives greater than the $13 copayment difference between second- and third-tier medications are needed to achieve the formulary compliance observed among other populations. However, a greater switch rate to preferred medications may be achieved when applying a similar copayment restructuring strategy to other therapeutic classes. Huskamp et al. reported substantially higher switch rates and lower therapy discontinuation rates among ACE inhibitor and statin users relative to PPI users, 8 and other studies have found variable subject responses to the same copayment restructuring for medications in different therapeutic classes. 2
■■ Conclusions
Moving esomeprazole to the third tier of the TRICARE formulary and changing the copayment from $9 to $22 for a 30-day supply obtained at a community pharmacy were associated with a 25% reduction in the number of esomeprazole fills in the calendar month following the change, slowdown of the trend toward increased esomeprazole use, and acceleration in the use of the mail-order pharmacy for esomeprazole fills in the post-intervention period. The significantly lower sensitivity to the formulary change that was seen among individuals who used TRICARE only for their prescription fills but obtained their health care through other plans highlights the challenge of improving formulary compliance without prescriber involvement, particularly when the nonpreferred medication is highly advertised. In the case of the PPI class or other medication classes in which multiple therapeutic equivalents are available at different costs, health plans attempting to transition users to preferred medications should consider larger third-tier copayments; more robust interventions, such as a PA requirement for third-tier medications; step therapy (if applicable); mandatory use of a mail-order pharmacy for third-tier medications; or complete removal of the nonpreferred medication from coverage.
Limitations
Foremost among the study limitations is the absence of a comparison group. Although a comparison group would have strengthened the validity of our findings, the unique nature of the TRICARE benefit complicated the identification of a suitable comparison group. Ideally, esomeprazole utilization among military pharmacy users should be compared against that of a managed care population with low or $0 copayment transitioning to a PA requirement, while utilization among users of community pharmacies and the mail-order pharmacy is probably best compared with that of other large insured populations transitioning from a 2-tier to a 3-tier formulary. A suitable comparison group would have assisted in controlling for other potentially confounding effects of intensive DTC advertising, direct-to-physician promotions, changes in Medicare or other health plans used by numerous study subjects, and patient-specific or other factors beyond the control of the TRICARE planners. However, the strengths of this study include the uniform prescription drug benefit and the absence of other changes-such as modifications to the TRICARE benefit design or introduction of new PPI drugs-that could have affected the treatment of acid-related disorders among the study population during the study period. Despite the absence of a suitable comparison group, the direction and magnitude of our utilization changes suggest that TRICARE beneficiary response is comparable with that of other populations when exposed to similar formulary changes.
Second, although we found no evidence of increased pharmacotherapy discontinuation rates, we did not examine clinical data to assess whether patients who discontinued or switched PPI therapy had any related increase in other health care service utilization. However, previous research reported no impact on the utilization of medical services in a 12-or 30-month period following implementation of a 3-tier formulary and copayment increase. 24,25 Additionally, the extent to which the formulary change was associated with changes in patients' adherence to pre-intervention dosing levels, symptoms, or overall quality of life cannot be assessed using our methodology.
Third, costs for dataset extraction and analyses limited our study period to 24 months. The use of a 6-month pre-intervention period likely limited our statistical power to detect significant differences in pre-intervention and post-intervention trends but had little effect on our calculated rates of PPI switching and medication discontinuation. These rates could have been underestimated, however, if beneficiaries elected to switch medications or pharmacy settings in June 2005 in anticipation of the upcoming formulary change. Anticipatory stockpiling during June 2005 may also have resulted in a biased estimation of the immediate effects of the formulary change if beneficiaries obtained esomeprazole fills early to avoid the higher copayment later. Post-intervention trends and the nature of acid-related conditions do not suggest the presence of seasonal effects.
Finally, the absence of data about nonprescription medication usage may have inflated our acid-reducing therapy discontinuation rates if some users elected to switch to nonprescription medications only. This behavior would more likely be seen among users of military pharmacies where many OTC medications, including omeprazole OTC, are available to DoD beneficiaries at no cost. The TRICARE benefit, however, provides no coverage for OTC medications at community pharmacies, where the beneficiaries' cost to purchase OTC medications would exceed a PPI prescription copayment if the OTC medication was used daily. Prior studies reported significant switch rates from prescription PPIs to omeprazole OTC when the OTC medication was available at a lower member cost-share. 26,27 No such financial incentive is available under TRICARE; thus, it is difficult to assess the extent to which omeprazole OTC utilization may have impacted our findings. | 2017-06-16T21:16:14.483Z | 2009-01-01T00:00:00.000 | {
"year": 2009,
"sha1": "b0f20c7999b1d647900de57dff3819c9335bcc70",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "226d98c323e5a798cb80fd7be91ccda70b63d4a9",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3320775 | pes2o/s2orc | v3-fos-license | Traditional agricultural practices and the sex ratio today
We study the historical origins of cross-country differences in the male-to-female sex ratio. Our analysis focuses on the use of the plough in traditional agriculture. In societies that did not use the plough, women tended to participate in agriculture as actively as men. By contrast, in societies that used the plough, men specialized in agricultural work, due to the physical strength needed to pull the plough or control the animal that pulls it. We hypothesize that this difference caused plough-using societies to value boys more than girls. Today, this belief is reflected in male-biased sex ratios, which arise due to sex-selective abortion or infanticide, or gender-differences in access to family resources, which results in higher mortality rates for girls. Testing this hypothesis, we show that descendants of societies that traditionally practiced plough agriculture today have higher average male-to-female sex ratios. We find that this effect systematically increases in magnitude and statistical significance as one looks at older cohorts. Estimates using instrumental variables confirm our findings from multivariate OLS analysis.
Introduction
The sex ratio, the proportion of males relative to females in a population, exhibits remarkable heterogeneity across countries. In particular, in many countries around the world, there is a disproportionate number of males relative to females. This well-recognized fact led Amartya Sen to declare in 1990 that more than 100 million women were "missing" [1,2]. Examining data on the global cross-country distribution of the average male-to-female sex ratio between 2000 and 2009, one finds that the sex ratio varies widely. For example, for children under age 5, it ranges from just under 1 for a number of countries (e.g., Rwanda, Angola, Togo, etc.) to 1.10 in China. S1 Fig shows the distribution of sex ratios for different age groups. As is apparent from the figure, most countries in our sample have a sex ratio around 105. A nontrivial fraction of our sample (53 countries out of 153) has a sex ratio higher than 105. These countries belong to all continents (47% from Europe, 36% from Asia, 10% from Africa and 7% from North and South America). The sex ratio goes down with age but still shows a large amount of variation for all age groups. Although China is particularly well-known for having a male-biased sex ratio, there are many other countries that also have sex ratios that are strongly male-biased, e.g., South Korea (1.08), India (1.08), Albania (1.07), Georgia (1.06), Singapore (1.06), Jordan (1.06), etc.
A range of factors underlying the noticeably larger proportion of males in many countries has been well studied. The correlates of the sex-ratio at birth include parental well-being [3], birth order [4], and fertility [5]. The correlates of the sex ratio after birth have also been widely studied and include economic opportunities for women [6,7], patriarchy and kinship structures [8,9], and disease burden [10]. For a review of the literature see [11,12].
In this paper, we test the hypothesis that differences in the sex ratio are determined, in part, by differences in the agricultural technologies traditionally used by a society's pre-industrial ancestors. Earlier research by Ester Boserup [13] has suggested that the use of plough agriculture generated a division of labor where men worked in the fields and women specialized in work within the home. This is because the use of the plough in agriculture requires significant upper body strength, grip strength, and bursts of power to pull the plough itself or control the animal that pulls it. This can be contrasted to shifting cultivation, where women tended to participate as actively in agriculture as men. This form of agriculture tended to use the hoe and digging stick, and, although it required very hard work, did not require the same level of physical strength as plough agriculture. In addition, shifting cultivation was compatible with child care, in contrast to plough agriculture, where the presence of large animals made child care dangerous.
According to Boserup, differences in agricultural practices led to the appearance of different norms about the role of women in society. Societies featuring plough agriculture exhibited weaker beliefs about the equality of men and women in society, which have persisted until today, affecting differences in female labor force participation [14]. An additional consequence of traditional plough agriculture is that girls came to be valued less than boys. In turn, this resulted in the emergence of the custom of the dowry, where girls' parents are required to make a large payment to the newly formed family and/or the groom's family. In societies without the plough, a bride price was more likely to arise, where a large payment is made from the groom's family to the bride's. These differences in marriage customs further reinforced the preference for boys over girls in societies with plough agriculture. This male preference is likely to result in male-biased sex ratios of children. Although this could occur through selective abortion or infanticide in more extreme cases, the most likely mechanism is through differential access to nutrition, health care, and similar resources, which results in differential mortality rates for boys and girls.
In Boserup's own words: "In communities where girls live in seclusion, and a large dowry must be paid when they marry, parents naturally come to dread the burden of having daughters. In some of the farming communities in Northern India, where women do little work in agriculture and the parents know that a daughter will, in due course, cost them the payment of a dowry, it was customary in earlier times to limit the number of surviving daughters by infanticide." (p. 37) Boserup continues to explain that "This practice has disappeared, in its outward forms, but nevertheless the ratio of female to male population in these districts continues to be abnormal compared to other regions of India. [. . .] The only plausible hypothesis is that mortality among girls was higher than among boys. [. . .] the persistence of socio-cultural factors are believed to be largely responsible for the excess of female mortality over the male. For example, one of these socio-cultural factors seems to be a widespread supposition that milk is not good for girls, but is good for boys. [. . .] There is also a tendency to care more for sick boys than for sick girls." (p. 37)

Recent research has shown that parents often treat boys and girls differently. For example, differential allocation of food to sons relative to daughters can explain the global distribution of stunting [15], and it is plausible that it also has effects on mortality rates, thus affecting the sex ratio. Evidence from India shows that boys are breastfed longer [16]. In the same country, girls with congenital heart disease are less likely to have surgery than boys [17]. More generally, parents may be more willing to spend money on healthcare for their son than for their daughter. A son preference will also cause parents to have a sex-biased stopping rule [16]. For example, if a family that could only financially support four children had four daughters, a male gender bias would make this family more likely to have another child in an attempt to have at least one son. Thus, families that consist of more daughters tend to be larger and to live beyond their means.
In this paper, we test whether a tradition of plough agriculture is associated with differences in the male-to-female sex ratio. Our focus on a specific historical determinant of the sex-ratio is not meant to imply that other factors are irrelevant in determining the sex ratio. The sex ratio can be affected by a host of factors at conception (primary sex ratio), during pregnancy (secondary sex ratio), and after birth (tertiary sex ratio) [18,19]. A large literature in biology and medicine has studied the determinants of primary and secondary sex ratios [11]. Sociologists and economists have examined a wide range of explanations for differences in tertiary sex ratios, including income [20][21][22][23][24], intra-household allocation of resources across children [25,6], different preferences about the gender of the children [7] and social norms ( [26,27,28,29] for the case of China). Our analysis accounts for those alternative determinants of the sex ratio through the use of an instrumental variables estimation strategy.
The hypothesis we test relies on the assumption that the historical adoption of the plough continues to affect the sex ratio today, decades after most societies have moved out of agriculture. Recent evidence in economics shows that historical events can have highly persistent effects on beliefs and values. For example, [30] show that areas that were more anti-semitic during the Black Death were also more anti-Semitic during the early 20 th century. It has been shown that the slave trade in Africa, which occurred roughly from 1500 until 1850, not only continues to affect economic outcomes like per capita income [31], but also a range of cultural characteristics, such as ethnic diversity [32], trust [33], polygamy [34], and gender roles [35]. Differences in geographical characteristics related to the suitability of crops with different caloric yields appear to be responsible for the evolution of more patient time preferences [36]. Evidence of such long-term persistence is also not confined to the economics literature. The classic work of [37] on the culture of honor traces the effects of ancestral herding on the culture of honor and violence that one observes in the U.S. South today, while, more recently, [38] document how historical specialization in wheat, rather than wet rice agriculture in China, led to a greater prevalence of individualism today.
Variable construction and definition
Ancestral plough use. Our analysis examines the cross-country relationship between ancestral plough-use and the male-to-female sex ratio in the period after World War II. The measure of ancestral plough agriculture that we use is constructed from the variable v39 of the Ethnographic Atlas [39], which is a dataset that contains information on the traditional characteristics of 1,265 ethnic groups. The validity of the data from the Ethnographic Atlas has recently been verified by [40] in a study that links Ethnographic Atlas data to the same measures from contemporary individual-level survey data. The study finds a strong correlation between the coding of cultural practices in the Ethnographic Atlas and the prevalence of the same practices today. Similarly, [14] also documented that the measure of female participation in agriculture from the Ethnographic Atlas correlates very strongly with female labor force participation today.
The variable v39 classifies each ethnic group as being in one of the following three categories: (1) the plough was absent, (2) the plough existed at the time the group was observed, but it was not aboriginal, and (3) the plough was aboriginal, having existed prior to contact. Using this information, we construct an indicator variable that equals one if the plough was ever adopted during the pre-industrial period (whether aboriginal or not) and zero otherwise. We collapse the three categories of traditional plough use into one indicator variable because of the small number of ethnicities (only 18) that fall into the second category.
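A minimal sketch of this coding step is shown below. The file name and the numeric codes assigned to the three v39 categories are assumptions based on the description above, not the Atlas's published coding scheme.

```python
import pandas as pd

# One row per ethnic group; placeholder for an Ethnographic Atlas extract.
ea = pd.read_csv("ethnographic_atlas.csv")

# Assumed codes: 1 = plough absent, 2 = plough present but not aboriginal,
# 3 = plough aboriginal. The indicator equals one if the plough was ever adopted.
ea["plough"] = ea["v39"].isin([2, 3]).astype(int)
```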
To construct measures of historical plough use at the country level, we use the same procedure as in [14]. The procedure, which is described in more detail in [16], uses the geographic distribution of 7,612 languages and dialects across the globe today, obtained from the 16th edition of the Ethnologue: Languages of the World (Lewis, 2009), together with the Landscan 2000 database, which reports estimates of the world's population in 2000 at the 30 arc-second (roughly 1 km) grid cell level globally. We combine the Ethnologue data, which are in the format of a shape file, with the Landscan data, which are in the format of a raster file, to obtain an estimate of the location and number of people speaking all languages and dialects today.
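The GIS overlay itself (intersecting the Ethnologue language polygons with the Landscan population raster) is beyond a short example, but the final aggregation, a population-weighted mean of the plough indicator matched to each language group, can be sketched as follows. The input table and column names are illustrative assumptions.

```python
import pandas as pd

# One row per (country, language) pair after the Ethnologue-Landscan overlay and the
# match of each language to an Ethnographic Atlas ethnicity.
speakers = pd.read_csv("country_language_population.csv")
# assumed columns: country, language, population, plough  (plough in {0, 1})

g = (speakers.assign(weighted=speakers["population"] * speakers["plough"])
             .groupby("country")[["weighted", "population"]].sum())
country_plough = (g["weighted"] / g["population"]).rename("ancestral_plough_share")
print(country_plough.head())
```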
To illustrate the procedure, we follow [14] and use the country of Ethiopia as an example.

Outcome variables. The outcome of interest is the male-to-female sex ratio, measured as the number of males per 100 females. We construct our measure of male-to-female sex ratio by taking the average of quinquennial data from the Demographic Yearbook of the United Nations from 1960-2010. The Demographic Yearbook contains information on the sex ratio of different age groups. We examine the sex ratio of different groups of children, including the sex ratio: at birth, for ages 0-1, for ages 0-4, and for ages 5-14.
Control variables. All regressions include the following control variables from the contemporary period: the level of economic development, measured by the natural log of a country's real per capita GDP, and expressed non-linearly with a second-order polynomial; fertility; and infant mortality. All controls are constructed by taking the average from 1960-2010. We also include continent fixed effects, which control for broad differences in the sex ratio that vary across large geographical regions.
We also control for a rich set of ethnographic controls that measure pre-industrial characteristics of a country's ancestors. These are constructed using data from the Ethnographic Atlas and the same procedure that was used to construct the ancestral plough measure. The controls include the presence of large domesticated animals; the presence of a tropical or subtropical climate; a measure of overall agricultural suitability; the number of levels of political hierarchy beyond the local community (political complexity); and a measure of the complexity of settlement patterns (economic complexity).
The presence of large domesticated animals is measured using variable v40. In the original classification, the nature of animal husbandry is classified into seven categories: absence of large domesticated animals; pigs are the only large animals; presence of sheep and/or goat without any larger domesticated animals; presence of equine animals; presence of deer; presence of camels, alpacas or llamas; and presence of bovine animals. We create an indicator variable that equals one if any type of large domesticated animals were present in the society. Economic development is measured using variable v30. This variable classifies groups into one of eight different types of settlements: (1) nomadic or fully migratory, (2) semi-nomadic, (3) semi-sedentary, (4) compact but not permanent settlements, (5) neighborhoods of disperse family homesteads, (6) separate hamlets forming a single community, (7) compact and relatively permanent settlements and (8) complex settlements. We construct a variable that takes on integer values, ranging from 1 to 8, that is increasing with settlement complexity. Political complexity is measured using variable v33, which classifies the number of jurisdictional hierarchies that exist beyond the local community. The variable takes on values from 1 to 5, with 1 indicating no levels of hierarchy beyond the local community and 5 indicating four levels.
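A sketch of how these additional controls could be coded from the Atlas variables is shown below. The numeric codes are assumptions inferred from the category lists above rather than the Atlas's official coding.

```python
import pandas as pd

ea = pd.read_csv("ethnographic_atlas.csv")   # ethnicity-level table, as in the earlier sketch

# Assumed code 1 = "absence of large domesticated animals"; any other category counts as present.
ea["large_animals"] = (ea["v40"] != 1).astype(int)

# Settlement complexity, ordered 1 (nomadic) .. 8 (complex settlements).
ea["settlement_complexity"] = ea["v30"].astype(int)

# Political complexity, 1 (no hierarchy beyond the local community) .. 5 (four levels).
ea["political_complexity"] = ea["v33"].astype(int)
```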
Data on fertility (number of children born per woman) and infant mortality (number of infant deaths per 1,000 live births) are taken from the same source as the sex ratio data, which is the Demographic Yearbook of the United Nations. Real per capita GDP measures are taken from the Maddison project and are expressed in 1990 international dollars.
We construct our geographical characteristics variables using information coming from the FAO's Global Agro-Ecological Zones (GAEZ) v3.0 database [41]. The database reports the suitability for the cultivation of different crops for grid-cells 5 arc-minutes by 5 arc-minutes (approximately 56 km by 56 km) for the world. Using this, we calculate the proportion of land within a 200-kilometer radius of an ancestor's location that is defined as being either tropical or subtropical. Information on the location of ancestral groups is taken from the Ethnographic Atlas.
Instrumental variables. Our baseline analysis uses instrumental variables to estimate the causal effect of ancestral plough use. For instruments, we use information on the suitability of growing different crops, some of which benefitted from the introduction of the plough more than others. Pryor [42] classifies crops into those that benefitted more from the plough (called plough-positive crops) and those that benefitted less from the plough (called plough-negative crops). Plough-positive crops, which typically require land preparation over a large surface area and during a very short period of time, include barley, wheat, rye, teff, and wet rice. Plough-negative crops, which tend to yield more calories per acre, have longer growing seasons, and can be cultivated on more marginal, rocky, and/or sloped land, include maize, sorghum, millet, tree crops, and root crops.
To construct instruments for the IV regressions, we first restrict attention to Old World cereals crops: wheat, barley and rye (plough positive), and foxtail millet, pearl millet and sorghum (plough negative). These crops are comparable on a number of dimensions: they require similar preparations for consumption, produce similar yields, and thus can support similar population densities. From the FAO's GAEZ database, we extract the raster files for the relevant crops and, using information from the Ethnographic Atlas on the historical locations of ethnic groups, construct a measure of the fraction of land within 200 kilometers of the centroid of ancestral ethnic groups that can grow each plough positive crop and each plough negative crop. We then create an average measure of three plough-positive suitability measures and normalize this by overall suitability for the cultivation of crops. This is the first instrument. The second instrument is the same measure but created as an average of the three plough-negative suitability measures also normalized by overall agricultural suitability.
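The normalization step can be sketched as follows, with illustrative column names standing in for the GAEZ-derived suitability shares computed within 200 kilometers of each ancestral group's centroid.

```python
import pandas as pd

# Fraction of land within 200 km of each ancestral group's centroid suitable for each
# crop (from GAEZ), plus overall crop suitability; file and column names are placeholders.
suit = pd.read_csv("ancestral_crop_suitability.csv")

plough_positive = ["wheat", "barley", "rye"]
plough_negative = ["foxtail_millet", "pearl_millet", "sorghum"]

suit["iv_plough_positive"] = suit[plough_positive].mean(axis=1) / suit["overall_suitability"]
suit["iv_plough_negative"] = suit[plough_negative].mean(axis=1) / suit["overall_suitability"]
```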
Descriptive analysis
We begin our analysis by first examining the raw data. This is done visually in Fig 1, which reports the average sex ratio from 1960-2010 for two groups of countries. One is 79 countries for which more than 95% of the population has ancestors that used the plough traditionally and the other is 67 countries for which less than 5% of the population has ancestors that used the plough. From the figure, we see that for all age cohorts, the male-to-female sex ratio is greater among societies that traditionally engaged in plough agriculture. While this is true for the sex ratio at birth, it becomes stronger for older children. This is potentially explained by higher mortality rates among girls than boys.
The differences that we observe in the raw data are potentially explained by a host of determinants other than traditional plough agriculture. Thus, we now turn to more formal multivariate OLS and IV estimates of the determinants of the sex ratio. In particular, the IV analysis addresses the possibility that societies with different gender norms historically may have been more likely to adopt technologies, like the plough, that reinforced the original biases. Before presenting the IV estimates, we first report, for comparison, the multivariate OLS estimates of the relationship between ancestral plough agriculture and the sex ratio. These estimates account for a number of factors that may have their own effect on the sex ratio. The list of control variables included in the regression was described above.
Fig 1. Countries with ancestors who used the plough show a higher male-to-female sex ratio today. The figure shows the average sex ratio (boys per 100 girls) in countries with ancestors that engaged in plough agriculture and in countries with ancestors that did not engage in plough agriculture, for different age ranges. The sample includes 146 countries, 79 of which are categorized as traditionally engaging in plough agriculture and 67 as not traditionally engaging in plough agriculture. "Countries with plough agriculture" are countries for which 95% or more of the population has ancestors that belong to ethnic groups that traditionally engaged in plough agriculture. "Countries without plough agriculture" are countries for which 95% or more of the population has ancestors that belong to ethnic groups that traditionally did not engage in plough agriculture. The reported sex ratios are quinquennial averages from 1960-2000. The difference in the sex ratios between the two groups is significant at the 1% level for all age groups.
The multivariate OLS estimates are reported in Panel A of Table 1. Each column of the table reports estimates for a different age cohort. The table reports coefficient estimates and robust standard errors, as well as Conley standard errors that adjust for non-independence of the observations [43,44]. For our baseline estimates, we use the average geographical distance between countries' ancestors to adjust for spatial dependence. As we discuss below, the standard errors are similar under a range of alternative methods of correcting for non-independence with the Conley approach.
The estimates show that countries with more ancestral plough use tend to have a higher male-to-female sex ratio. In addition, the estimated effect is systematically larger for children of older age cohorts. Note that the finer age categories are not mutually exclusive. This should be taken into account when comparing the estimates across different dependent variables that measure the sex ratio of different age cohorts. The magnitudes of the estimated coefficients are sizeable. Based upon the estimates from column 1, a one-standard-deviation increase in ancestral plough use (0.44) is associated with an increase of 0.20 standard deviations of the sex ratio measure. For children aged 5-14, the estimated effect of a one-standard-deviation increase in ancestral plough use is 0.33 standard deviations of the sex ratio measure. Columns 5 and 6 of Table 1 report the principal component across the four different sex ratio measures (column 5) and across the sex ratio measures for the two non-overlapping cohorts, 0-4 and 5-14 (column 6). In Fig 2, we report the partial correlation plot for the specification in column 4, where the dependent variable is the sex ratio of children aged 5-14. As shown, the correlation is quite general and not driven by a small number of influential outliers. As part of the Supporting Information, we also report the partial correlation plots for other age ranges. The coefficient increases with age (see S6 and S7 Figs).
Notes to Table 1: The instruments comprise two variables: one measuring the ancestral suitability of the environment for plough-positive crops (the average fraction of ancestral land that was suitable for growing barley, rye and wheat divided by the fraction that was suitable for any crops) and one measuring the ancestral suitability of the environment for plough-negative crops (the average fraction of ancestral land that was suitable for growing foxtail millet, pearl millet and sorghum divided by the fraction that was suitable for any crops). In square brackets we report Conley standard errors adjusted for spatial correlation (window = 10 degrees). ***, **, and * indicate significance at the 1%, 5%, and 10% levels.
As an additional measure of the magnitude of the effect of historical plough use, we calculate the change in the R-squared when the historical plough use variable is added to the regression equation. For the specification where the dependent variable is the principal component of the sex ratio measures from all cohorts (column 5), the inclusion of the historical plough use variable increases the R-squared by 0.0226 (from 0.5733 to 0.5959). Therefore, a history of plough agriculture accounts for 2.3% of the total variation in the sex ratio's principal component and 5. For comparison, these magnitudes are similar to those from the inclusion of continent fixed effects.
Fig 2. Partial correlation between ancestral plough use and the sex ratio of children aged 5-14. The specification includes continent fixed effects, historical covariates (economic complexity, political hierarchies, the presence of large animals, agricultural suitability and a measure of tropical climate), and contemporaneous covariates (per capita GDP and its square, fertility, and infant mortality). Each country is labeled with its 3-digit iso code. https://doi.org/10.1371/journal.pone.0190510.g002
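The incremental R-squared calculation described above can be reproduced with a short script of the following form; the file name, variable names and the control list are hypothetical stand-ins for the full set of covariates described earlier, not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical country-level data set; file and column names are assumptions.
df = pd.read_csv("country_data.csv")
control_columns = ["continent_dummy_1", "ln_gdp", "fertility"]  # stand-in list

y = df["sex_ratio_pc"]                      # principal component of sex ratios
X_restricted = sm.add_constant(df[control_columns])
X_full = sm.add_constant(df[control_columns + ["plough"]])

r2_restricted = sm.OLS(y, X_restricted, missing="drop").fit().rsquared
r2_full = sm.OLS(y, X_full, missing="drop").fit().rsquared

# Share of total variation attributable to adding historical plough use.
print(f"Incremental R-squared: {r2_full - r2_restricted:.4f}")
```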
Instrumental variable regressions
Although our OLS estimates include a wide range of covariates, there remains the concern that omitted factors, correlated with both traditional plough use and the sex ratio today, could be driving the results. To address this, we follow [16] and use instrumental variables to exploit variation in historical plough use that is due to differences in climatic conditions that affected the types of crops that could be grown in a location, and whether they were crops that significantly benefitted from the introduction of the plough or not.
With the assumption that the difference between ancestral suitability for plough-positive and plough-negative crops is only correlated with gender role attitudes today through their impacts on the plough (conditional on covariates), we can use the two measures as instruments in an IV strategy that provides causal estimates of the effect of ancestral plough use on the sex ratio today. We believe that the exclusion restriction is likely satisfied, especially given that in all specifications, we control for the proportion of land historically inhabited by an ethnic group that was tropical or subtropical, as well as its overall agricultural suitability. We also include the same large set of ancestral and contemporaneous controls as is included in the multivariate OLS regressions.
Results and discussion
The IV estimates are reported in Panels B and C of Table 1, where we report the same set of specifications as for the OLS estimates. Panel C reports the first-stage estimates, where ancestral plough use is the dependent variable and plough-positive crop suitability and plough-negative crop suitability serve as instruments. The estimates show that while plough-positive suitability is positively associated with the adoption of the plough, this is not true for plough-negative crop suitability. Thus, the estimates provide confirmation for Pryor's hypothesis that crop type affected the adoption of the plough, and they confirm the relevance of our instruments for plough adoption. In Tables D and E in S1 Supplementary Material, we examine the stability of these findings over the period of our analysis, 1960-2010. For simplicity, we report the results for the two non-overlapping sex ratios (ages 0-4 and 5-14). The importance of the plough appears to be very stable and shows only a very small decline over time.
The second-stage IV estimates are reported in Panel B of Table 1. They show that ancestral plough use is associated with higher male-to-female sex ratios. This provides evidence that the positive relationship between traditional plough use and the sex ratio, that was estimated using OLS, is not due to omitted variables bias or reverse causality and, thus, is likely causal. As with OLS, the IV estimates are sizeable in magnitude and show that the magnitude of the estimated effect is larger for older age cohorts. For example, based upon the estimates from column 4 and 5, a one-standard-deviation increase in traditional plough use (0.45) is associated with a 0.44 standard-deviation increase in the sex ratio of children aged 0-4 and a 0.58 standard-deviation increase in the sex ratio of children aged 5-14. Columns 5 and 6 of Table 1 report the principal component for the four different sex ratios (column 5) and for the 0-4 and 5-14 sex ratios (column 6). The results are qualitatively similar when we examine these aggregated measures.
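For readers who wish to replicate the IV estimates, the two-stage least squares estimator can be written directly in matrix form. The sketch below uses hypothetical arrays, ignores the Conley correction, and returns only point estimates; it is an illustration of the estimator, not the authors' code.

```python
import numpy as np

def two_stage_least_squares(y, X_endog, X_exog, Z_excl):
    """2SLS point estimates.
    y       : (n,)   outcome (e.g. sex ratio of children aged 5-14)
    X_endog : (n, 1) endogenous regressor (ancestral plough use)
    X_exog  : (n, k) included exogenous controls (with a constant column)
    Z_excl  : (n, 2) excluded instruments (plough-positive and
              plough-negative crop suitability)
    """
    X = np.column_stack([X_endog, X_exog])     # second-stage regressors
    Z = np.column_stack([Z_excl, X_exog])      # full instrument set
    # Project X onto the instrument space: X_hat = Z (Z'Z)^{-1} Z'X
    ZtZ_inv = np.linalg.inv(Z.T @ Z)
    X_hat = Z @ (ZtZ_inv @ (Z.T @ X))
    # beta = (X_hat'X)^{-1} X_hat'y ; the first coefficient is the plough effect
    beta = np.linalg.solve(X_hat.T @ X, X_hat.T @ y)
    return beta
```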
Comparing Panels A and B of Table 1, we see that the magnitudes of the IV estimates are consistently larger than the OLS estimates. This is most likely due to a bias towards zero in the OLS estimates, which arises due to the fact that plough societies tend to be more economically advanced both in the past and today. If there is a positive relationship between economic development and equality of gender norms, and if our controls for historical and contemporary income are imperfect, then our OLS estimates will be biased towards zero, while the IV estimates will not suffer from this bias.
There are a number of potential limitations that should be kept in mind in interpreting our results. The primary is that although our procedure, which links historical ethnic groups to contemporary populations using languages, works well for many countries of the world, it works less well for countries where languages have been adopted by other groups within the country. An example is the United States, where all populations speaking English today are assumed to have ancestors that are English, even though we know that many ancestors of English-speakers within the U.S. were from non-English-speaking backgrounds, e.g., Italian, German, etc. To check the importance of this issue for our estimates, we have re-estimated our baseline regressions after omitting all countries from North and South America, as well as Australia, New Zealand and South Africa. The estimates, which are reported in Table F in S1 Supplementary Material, show that our estimates are very similar when we remove these countries from our sample.
A second potential concern relates to the non-independence of ethnicities in the Ethnographic Atlas, which has the potential to result in non-independence of observations in our country-level sample (i.e., Galton's problem) [44]. We have addressed this issue by also reporting Conley standard errors, which correct for non-independence of observations in our sample. The correction requires information on the distance between observations. For our baseline estimates, we have used spatial distance, measured as the average distance between the traditional locations of countries' ancestors, calculated using the latitude and longitude of ethnic groups (as reported in the Ethnographic Atlas). An alternative strategy is to use genetic distance between countries [45]. Standard errors using this method are very similar to those calculated using geographical distance. These are reported in Table A in S1 Supplementary Material. The Conley correction also requires an assumption about the range of distances for which the correlation between observations is non-zero. Table A in S1 Supplementary Material reports standard errors calculated under different assumptions about this range. Overall, we find that the Conley standard errors, which correct for non-independence of observations, tend to be slightly smaller than conventional standard errors when we use geographical distance and slightly larger when we use genetic distance. In either case, adjusting for non-independence does not alter our statistical inference or our conclusions.
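A minimal sketch of a Conley-type correction with a uniform spatial kernel is shown below. It takes the pairwise distances between countries' ancestral locations (or genetic distances) as given; the implementation details are assumptions for illustration and do not reproduce the authors' code.

```python
import numpy as np

def conley_se(X, residuals, dist, cutoff):
    """Conley standard errors with a uniform kernel.
    X         : (n, k) regressor matrix used in the OLS fit
    residuals : (n,)   OLS residuals
    dist      : (n, n) pairwise distances between observations
    cutoff    : correlation window (e.g. 10 degrees, or a genetic-distance
                threshold); pairs farther apart are treated as independent
    """
    bread = np.linalg.inv(X.T @ X)
    kernel = (dist <= cutoff).astype(float)    # uniform kernel within cutoff
    # Meat: sum over pairs of K_ij * e_i e_j * x_i x_j'
    Xe = X * residuals[:, None]                # row i is x_i' scaled by e_i
    meat = Xe.T @ kernel @ Xe
    cov = bread @ meat @ bread                 # sandwich covariance matrix
    return np.sqrt(np.diag(cov))
```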
The third concern relates to the geographical variation in plough use. There is little variation within Europe and within sub-Saharan Africa. Therefore, our estimates could simply be driven by broad differences between continents. All of our estimates include continent fixed effects. To rule out the possibility that we are merely capturing differences between sub-Saharan Africa and the rest of the world, we also disaggregate the African continent indicator into an indicator for sub-Saharan Africa and one for North Africa. The estimates are robust to this additional specification (Table G in S1 Supplementary Material).
The fourth potential concern is that for our instrumental variable strategy, the difference between plough-positive and plough-negative environments may be correlated with geographic features that affect gender attitudes today through channels other than the plough. Although we have controlled for an extensive set of geographical and historical variables, which mitigates this concern, the violation of the exclusion restriction remains a possibility.
The final caveat is that our estimates only reflect the effects of having a tradition of plough agriculture versus not. Ploughs that were adopted varied significantly in size and complexity, and these may have had very different effects on gender roles and, thus, on the sex ratio today.
Our analysis is only able to report the average effect of plough adoption across all of the types of ploughs that were adopted.
Conclusions
Our findings show that the variation in sex ratios that we observe today has deep historical roots. We find that descendants of societies that traditionally practiced plough agriculture have higher average male-to-female sex ratios today. The evidence is consistent with traditional plough use shaping the value placed on boys relative to girls, which continues to persist until today, affecting observed sex ratios.
Supporting information figure captions. The sample includes 153 countries. The sex ratio is a quinquennial average from 1960-2000. The specification includes continent fixed effects, historical covariates (economic complexity, political hierarchies, the presence of large animals, agricultural suitability and a measure of tropical climate), and contemporaneous covariates (per capita GDP and its square, fertility, and infant mortality). Each country is labelled with its 3-digit iso code. (TIF)
S6 Fig. The partial correlation between ancestral plough use and the sex ratio between ages 0 and 1, after the inclusion of continent fixed effects, and historical and contemporaneous controls. The graph shows the correlation between historical plough use and the sex ratio between ages 0 and 1 (boys per 100 girls) for a sample of 153 countries during the period 1960-2000, after controlling for average differences in the sex ratio between continents, historical country differences (including economic complexity, political hierarchies, the presence of large animals, agricultural suitability and a measure of tropical climate) and contemporaneous measures of per capita GDP, fertility and infant mortality. Each country is labelled with its 3-digit iso code. (TIF)
S7 Fig. The partial correlation between ancestral plough use and the sex ratio between ages 0 and 4, after the inclusion of continent fixed effects, and historical and contemporaneous controls. The graph shows the correlation between historical plough use and the sex ratio between ages 0 and 4 (boys per 100 girls) for a sample of 153 countries during the period 1960-2000, after controlling for average differences in the sex ratio between continents, historical country differences (including economic complexity, political hierarchies, the presence of large animals, agricultural suitability and a measure of tropical climate) and contemporaneous measures of per capita GDP, fertility and infant mortality. Each country is labelled with its 3-digit iso code. | 2018-03-04T05:50:44.930Z | 2018-01-16T00:00:00.000 | {
"year": 2018,
"sha1": "d0ef7fd7e7b9c3bbe7641a6c043bcc0e2d094456",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0190510&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "365b120bda1c608a1cf5cbf13a35399bd409580c",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Sociology"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
139818039 | pes2o/s2orc | v3-fos-license | Experimental study on aqueous phase liquids migration in double-porosity soil under non-isothermal effect using digital image analysis
Aqueous phase liquid (APL) leakage and spillage into the subsurface system, which leads to groundwater contamination, is an issue that needs to be addressed. This paper aims to investigate the migration characteristics of APLs in fractured, non-isothermal double-porosity soil. A laboratory experiment was conducted to observe and monitor the characteristics of the soil structure and the APL migration in heated deformable double-porosity soil using a digital image processing technique. The results show rapid liquid migration for the fractured soil samples. The time taken for the liquid to migrate under the application of heat is shorter for the sample with lower moisture content because of faster dry-off and rapid evaporation. It can be concluded that APL migration under vibration and non-isothermal effects is highly influenced by the soil sample structure, the soil fracture pattern, the soil water content, and the heat applied to the soil.
Introduction
With the increase in global population, groundwater has become one of the world's most extracted natural resources. The current groundwater withdrawal rate is estimated at 982 km3 per year [1]. Although groundwater is a naturally occurring part of freshwater resources, excessive withdrawal has pushed this finite resource below sustainable levels: water on the earth's surface is renewable, but groundwater stored in aquifers is finite and is becoming increasingly limited as usage around the world exceeds sustainable levels [2]. Furthermore, due to climate change and the increased occurrence of natural disasters that lead to groundwater pollution, negative impacts on human health and the geo-environment are unavoidable [3,4]. These occurrences have drawn attention to the significance of vibration, groundwater contamination and liquid migration effects on non-isothermal double-porosity subsurface systems. These problems need to be addressed to ensure the sustainability of the geo-environment and of groundwater utilization. The problem becomes more complex when the surface or subsurface is subjected to earthquake vibration, leading to leakage from underground storage tanks and migration of harmful liquids into the groundwater network [5]. Earthquakes lead to fractured and unstable soil structure [6]. Changes in soil structure and rock micro-cracks have a significant influence on the mechanical behaviour of the soil [7]. They also change the hydraulic conductivity of the soil, which has a substantial effect on groundwater production [8], as fractured soil has low shear strength but high hydraulic conductivity [9]. The behaviour of liquid migrating into the subsurface system becomes increasingly complex if the fractured soil has a double-porosity feature and is affected by temperature differences. Soil that displays two distinct scales of porosity, macro- and micro-pores, is called 'double-porosity soil' and exhibits different characteristic pore sizes [10,11]. Masciopinto et al. [12] demonstrated that fractured porous formations are characterized as water-bearing formations in which groundwater flows along the fissures and fractures in the solid rock. The fractures are usually caused by rock mass breakage due to tectonic forces. The heat (non-isothermal condition) imposed on the soil may influence the liquid migration because liquid properties, such as viscosity and density, change with temperature [13].
Therefore, this study applied the model concept of fractured double-porosity under nonisothermal in laboratory with the soil that overlaps the three continuums, which consist of fracture porosity, primary porosity, and secondary porosity features as developed by Loke et al. [5]. Image analysis, a simple but powerful method has been used to investigate complicated contaminants migration and liquid saturation rate [14]. Maas and Hampel [15] reported that, image analysis technique in the field of civil engineering has been applied to the study of flow movement in boundary layers and liquid migration of small properties, which specific to the propagation of structure crack extension. Digital image analysis method has been used to capture and analyse liquid migration in soil under laboratory condition by Alazaiza et al. [16]. As the porosity and fractured soils are difficult to observe physically, digital image processing techniques provide an attractive for the study of liquid migration in fractured double-porosity soil. This technique is used in this study to investigate the aqueous phase liquid (APL) migration in fractured double-porosity soil under non-isothermal condition with the aims to provide better insight to the problem.
Experimental materials and methods
This section discusses the fractured double-porosity soil preparation, non-isothermal experiment setup procedure, and digital image processing setup used in this study.
Fracture double-porosity soil preparation
The commercially available kaolin soil S300 was used as the soil sample to produce double-porosity soil in this experiment. Based on BS 1377-2: 1990 and BS 1377-5: 1990 tests, the kaolin soil has the following properties: liquid limit = 41%, plastic limit = 27.5%, plasticity index = 13.5%, particle density = 2.65 Mg/m3, particle size distribution = (2% sand and 97% fines, less than 0.06 mm), and falling head permeability (K average) = 5.42 x 10-9 m/s. Based on the values of the Atterberg limits and the particle size distribution, the kaolin soil was classified under the Unified Soil Classification System as clay with low plasticity (CL). The aggregated soil was prepared based on the method proposed by Bagherieh et al. [17]. The dried kaolin powder was first mixed with water to obtain samples with moisture contents of 25% and 30% for experiments 1 and 2, respectively. Two different moisture contents were considered in this study to investigate their role in the liquid migration processes [18]. These moisture contents were selected based on the moulding ability of the kaolin soil mixture and on the liquid limit. The kaolin granules start to disintegrate and fail to form kaolin aggregates when the moisture content exceeds 35%; on the other hand, if the moisture content is less than 25%, the kaolin granules become too dry and crumbly [19,20].
Water was poured at a constant rate while the dried kaolin powder was mixed, in order to control the moisture content of the mixture. Afterward, the mixtures were placed in a plastic container and kept at room temperature for a minimum of 24 hours to allow the water content to reach equilibrium. The mixtures were then sieved through a 2.36 mm sieve to create the double-porosity structure. Next, the kaolin granules were placed in an acrylic circular soil column and compressed to a sample height of 100 mm with 1 bar of pressure. The compression height was selected to ensure uniformity throughout the soil sample depth [21]. The preparation of the double-porosity kaolin soil samples is shown in Figure 1. The experiments were undertaken in an acrylic circular soil column, shown in Figure 2, with a sealed base and a height and diameter of 150 mm and 100 mm, respectively. The acrylic soil column with a triangular base plate was firmly fixed and bolted onto the vibration table to prevent backlash during the vibration process. The vibration table frequencies were calibrated using a high-sensitivity accelerometer and a Dewesoft Sirius System data-logger to ensure that the table was functioning correctly. The vibration table vibrates in the vertical direction. To apply heat (non-isothermal condition) to the double-porosity soil, two heater rods and one thermocouple rod were installed at the centre of the soil, as shown in Figure 2. The full setup of the vibration table for the experiment, first proposed by Loke et al. [6], is shown in Figure 3. A vibration frequency of 0.98 Hz was applied for a period of 60 seconds, as reported elsewhere by Loke et al. [6]. The effect on the fractured double-porosity soil sample is shown in Figure 4. The fracturing of the top soil surface is clearly evident as a consequence of the vibration applied to the double-porosity soil.
Non-isothermal experiment setup procedure
After the soil fracturing process, the non-isothermal experiment was set up as shown in Figure 5 for the APL migration image acquisition. An adjustable heater machine was used to generate heat at a specific temperature. The adjustable heater machine was connected to two heater rods to apply heat throughout the soil column. A thermocouple rod was also installed in the soil to check the soil temperature directly. After the heater and thermocouple cables had been connected to the heater machine, the heater was switched on to impose heat on the soil sample and thereby simulate the non-isothermal condition.
A digital camera (NIKON D90 DSLR) was used as the image acquisition device to record the APL migration. The camera has a medium-size image format of 3216 x 2136 pixels with a sensor size of 23.6 mm x 15.8 mm. During the experiments, the digital camera was operated at a minimum shutter speed of 1/640 second, an ISO sensitivity of ISO-2500 and an aperture fixed at f/5.6. A V-shaped mirror arrangement was used to allow the whole soil column surface to be captured in a single shot, as shown in Figure 5. A snapshot of the APL migration over the whole soil column circumference can thus be captured at prescribed time instances. A 2600 lumens/watt light was installed above the soil column to provide adequate light during image acquisition.
Fig. 5. Digital image acquisition experiment setup with V shape mirror and non-isothermal machine
A white paper with a pre-drawn gridline (20 mm x 20 mm) was first sheathed onto the soil column to act as control points on the reference image. Once the reference image was taken, the pre-drawn gridline paper was removed from the soil column. The adjustable heater machine then heated the soil to 30 degrees Celsius; the values were verified with the thermocouple. The experiment started with the instantaneous pouring of the APL onto the top centre of the fractured soil sample in the circular acrylic soil column. The APL used in both samples 1 and 2 was 70 ml of dyed water (inorganic compounds) in a miscible phase. After the APL had covered the whole top surface area of the fractured double-porosity soil sample, the first snapshot of the APL migration was taken. Subsequent snapshots were taken at specific time intervals to capture the APL migration for both experiments, based on the total liquid migration times. Sample 1 was recorded for a total of 62 minutes, while sample 2 was recorded for 90 minutes to ensure the complete migration of the APL. Using the image acquisition frequencies given in Table 1, a total of 119 images and 117 images were captured for samples 1 and 2, respectively.
Digital image processing setup procedure
A digital image processing technique (DIPT), i.e. a technique that uses computer algorithms to process images that have been captured digitally, was used. The images were captured in colour and saved in the JPEG format. The recorded images were then transferred from the digital camera to a computer for further image processing using a Matlab routine and Surfer Software version 10. The Surfer software was used to digitize the control points from the reference image and to generate plots of the APL migration using Hue Saturation Intensity (HSI) values. The flow chart of the DIPT is shown in Figure 6. The digitization of the control points of the area of interest on the reference image is carried out to extract the coordinates of the control points on the circumference surface of the soil column, as shown in Figure 7. The area of interest refers to the pre-determined migration boundary area (front image and V-shaped mirror reflection image) for experiments 1 and 2 that contained the dyed water. A Matlab routine was then used to convert the area of interest into RGB and HSI image formats. The RGB and HSI intensity values from the image were extracted and saved in ASCII format. The Matlab routine loops over each subsequent digital image three times to extract and save the intensity values for all three areas of interest (front image and V-shaped mirror reflection images) of the acrylic soil column. Lastly, a contour plot of the migration pattern of the APL in fractured double-porosity soil under non-isothermal condition was generated using the HSI values. The HSI contour plots that describe the APL migration provide useful information such as the migration pattern and flow rate.
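The RGB-to-HSI conversion at the core of this processing chain can be sketched as follows. The original study used a Matlab routine, so this Python version with an assumed file name is an illustrative re-implementation of the standard conversion formulas rather than the authors' code.

```python
import numpy as np
from PIL import Image

def rgb_to_hsi(path):
    """Convert a captured JPEG snapshot to HSI planes (values in [0, 1])."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8                                     # avoid division by zero

    intensity = (r + g + b) / 3.0
    saturation = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)

    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b <= g, theta, 2.0 * np.pi - theta) / (2.0 * np.pi)

    return hue, saturation, intensity

# Hypothetical usage: extract HSI values for one snapshot of the soil column.
h, s, i = rgb_to_hsi("snapshot_001.jpg")
```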
Results and discussion
After the APL migration process, the image of the top soil surface was divided according to the actual measured column circumference to visualize the crack positions for experiments 1 and 2, as shown in Figure 8. Based on the observation of Figure 8, it is clear that the fracture pattern and the soil colour differ between the two samples. The soil colour of sample 2 is darker because the soil is wetter, having a higher moisture content than soil sample 1. The downward migration pattern, shown as HSI contour plots of the APL in the non-isothermal fractured double-porosity soil samples with 25% and 30% moisture content (samples 1 and 2, respectively), is presented in Figure 9. For both samples, 70 ml of dyed APL was poured instantaneously onto the top centre of the soil sample surface using a glass funnel, to ensure that the dyed APL penetrated consistently. At time t = 6 minutes (Figure 9a), the HSI plots appear as a two-dimensional shape: joining the curves at the left- and right-hand boundaries produces a circular shape reminiscent of the round shape of the whole soil column. However, the fluid migration in the acrylic soil column was actually one-dimensional.
For non-isothermal soil sample 1, selected HSI plots of the dyed APL migration at intervals of 0.5, 6, 36, and 62 minutes are shown in Figure 9a. Based on the HSI intensity contour plots, the flow of the dyed APL migration was not uniform, because rapid migration occurred where the soil surface was fractured compared to locations that did not display a cracked soil surface, as shown in Figure 8a. During the migration process, it took about 12 minutes for the dyed water to migrate completely from the whole top soil surface area into the non-isothermal fractured soil of sample 1. At 0.5 minutes after the commencement of the experiment, the dyed APL migration had reached only 1% of the test sample height. This is due to the non-isothermal heat imposed on the double-porosity soil, which slows down the migration flow. The deepest APL migration depth along the soil column was 23 mm out of the 100 mm soil sample column height. The duration of the dyed APL migration from the top surface to the stop point was 62 minutes, and further monitoring at 90 minutes showed no changes in the migration pattern. The APL migration stopped at 23 mm because, based on observation of the migration process, the heat imposed on the soil caused the APL to undergo evaporation. For non-isothermal soil sample 2, intervals of 0.5, 6, 40, and 90 minutes were selected for the APL migration HSI plots, as shown in Figure 9b. For soil sample 2, the migration pattern was slightly similar to that found for soil sample 1, since in both samples the APL did not migrate fully to the bottom of the soil column. For the dyed APL in sample 2 to completely recede from view over the whole top soil surface area into the non-isothermal fractured soil surface took about 25 minutes. During the initial 0.5 minutes of the experiment, the dyed water migration reached only 7% of the test sample height. This is similar to sample 1, where the soil sample was exposed to the heat effect during the migration process and the APL entry pressure was low in the heated soil. The overall duration of the dyed water migration from the top surface to the stop point was 90 minutes, and further observation at 120 minutes showed no changes in the migration pattern; the deepest dyed APL migration occurred at the 240 mm column circumference position along the x-axis and reached approximately 42% of the soil column depth.
In previous research, experiments on toluene migration in double-porosity soil with 25% and 30% moisture content were conducted by Sa'ari et al. [19] and Ngien et al. [10], respectively; their results showed non-aqueous phase liquid (NAPL) migration from top to bottom in about 38 minutes and 4 minutes, with 100% of the NAPL migrating into the soil. A significant difference was found in the present experiments, in which the APL migration from the top to the stop point took about 62 minutes and 90 minutes, with the APL not fully migrating into the soil, for 25% and 30% moisture content respectively. The viscosity of water is 0.00089 kg/(m.s), while the viscosity of toluene is 0.00055 kg/(m.s), a difference of about 38% [22]. Both experiments therefore showed that the migration was slowed down and remained incomplete because water has a higher viscosity than toluene; the dyed water experiences higher friction and resistance, producing gradual migration. The incomplete migration in both samples could also be because the physical bonding between toluene and soil is weaker than that between water and soil: the bonding between toluene and soil is attributed to Van der Waals forces, which are weaker than the hydrogen bonding that produces the stronger physical bonding between water and soil.
The measured values of APL saturation depth as a function of column circumference, for every 30 mm of column circumference, are shown in Figures 10 and 11 for soil samples 1 and 2, respectively. Based on Figure 10, the fastest and most critical downward migration to a cumulative APL saturation depth occurred at the 60 mm column circumference position within 6 minutes, as demonstrated by the steepest gradient of the graph lines within that duration. After 6 minutes, the curve showed a gradual downward decline until the end of the experiment. The 270 mm and 240 mm column circumference positions showed the second and third most critical rapid infiltration, respectively, within 6 minutes, as demonstrated by the sharp gradients of the graph lines within that duration, after which the curves gradually flattened towards the horizontal until the end of the experiment. The remaining column circumference positions displayed only a slight decline in the APL migration curves from the start until the end of the experiment. Based on Figure 11, the cumulative saturation depth of dyed APL migration at the 240 mm column circumference position showed the fastest critical infiltration within 6 minutes, as displayed by the steepest gradient of the graph lines within that duration; after 6 minutes, the curve moved only slightly downward until the end of the experiment. The 270 mm column circumference position showed the second fastest critical infiltration within 6 minutes, as demonstrated by the sharp gradient of the graph line within that duration. The 0 mm column circumference position showed the third fastest critical infiltration within 6 minutes, as shown by the steep gradient of the graph line, and then gradually flattened towards the horizontal until the end of the experiment. Soil sample 2 displayed a deeper APL saturation pattern and higher pressure in the initial condition of the non-isothermal fractured double-porosity soil compared to soil sample 1, because of the higher moisture content and the high capillary force occurring at the fracture positions in soil sample 2. The calculated migration speed rates, namely the highest and the overall average speed for every 30 mm column circumference zone, are displayed in Table 2. Soil sample 1 shows the highest APL migration speed rate from the start to 30 seconds at the 60 mm column circumference zone, with a migration speed rate of 0.13 mm/s. The other high APL migration speed rates occurred at the 270 mm and 240 mm column circumference positions, with values of 0.10 mm/s and 0.09 mm/s, respectively. The overall average dyed APL migration speed rate for non-isothermal fractured double-porosity soil sample 1 is 0.02 mm/s. The 60 mm column circumference zone also has the highest average APL migration speed rate, 0.04 mm/s, with a depth of 3 mm at 30 seconds, as shown in Figure 9a. This could be because, based on observation, the most severely fractured non-isothermal soil structure after the vibration was at the 60 mm position. The overall average dyed APL migration speed rate for fractured soil sample 2 is 0.04 mm/s. The highest average speed rate occurred at the 240 mm column circumference zone, 0.06 mm/s, together with the deepest migration along the soil column depth between the start and 30 seconds, with a depth of 10 mm. This occurred because the inter-aggregate pores at 30% moisture content in sample 2 were quite large compared to those of sample 1 at 25% moisture content; this condition caused faster migration in sample 2, although the migration took a longer overall duration because of the wettability of sample 2.
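The migration speed rates in Table 2 follow from dividing the change in saturation depth by the elapsed time. A minimal sketch of that calculation, using hypothetical depth readings rather than the measured data, is given below.

```python
import numpy as np

# Hypothetical example: APL saturation depth (mm) read from the HSI contour
# plots at successive times (s) for one 30 mm column-circumference zone.
times_s = np.array([0.0, 30.0, 360.0, 3720.0])    # 0 s, 30 s, 6 min, 62 min
depth_mm = np.array([0.0, 3.0, 18.0, 23.0])

# Speed over each interval and over the whole experiment (mm/s).
interval_speed = np.diff(depth_mm) / np.diff(times_s)
overall_speed = depth_mm[-1] / times_s[-1]

print("initial 30 s speed:", interval_speed[0], "mm/s")
print("overall average speed:", round(overall_speed, 4), "mm/s")
```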
Conclusion
A laboratory experiment on APL migration in non-isothermal fractured double-porosity soil with 25% and 30% moisture content has been carried out. The study was designed to monitor the characteristics and pattern of dyed APL migration in non-isothermal fractured double-porosity soil placed in a circular acrylic soil column model by using a digital image processing technique. The laboratory experiment successfully provided results on the behaviour and speed rate of dyed APL migration at the two moisture contents of 25% and 30%. From the results observed, both experiments indicate that the dyed water migrated more slowly at the initial stage because the heated soil, and the associated evaporation process, slowed the migration down. In both samples the APL did not migrate fully to the bottom of the soil column, since water has a higher viscosity than the toluene used by previous researchers. This is also due to the physical bonding between toluene and soil, attributed to Van der Waals forces, being weaker than the hydrogen bonding that gives the stronger physical bonding between water and soil. The total migration time for soil sample 1 was shorter than for soil sample 2 because sample 1 is drier under the non-isothermal condition; this allows the surrounding water to be absorbed, which then resists further infiltration towards the bottom of the soil. Soil sample 2 has a faster migration speed rate than soil sample 1 due to its bigger inter-aggregate pore size, which allows the APL to flow faster into the non-isothermal fractured double-porosity soil. This phenomenon is caused by the capillary force exerted by the dyed APL and the high water flow pressure under the condition in which the soil samples were fractured. This study indicates that DIPT with HSI plots can provide useful information for understanding non-isothermal APL migration, which can be used to identify remediation methods and support sustainable groundwater resources. | 2019-04-30T13:08:46.778Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "99ba86fd777b75415c9e21f72c25f53b80d8ae46",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/109/matecconf_sepka-iseed2018_01002.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "02fee2a0c6c58c151b308546cc477caeaad79589",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
118391345 | pes2o/s2orc | v3-fos-license | Curvaton Decay into Baryons, anti-Baryons and Radiation
This paper calculates the amount of baryon/radiation isocurvature fluctuation produced through the decay of a curvaton field. It is shown in particular that if curvaton decay preserves baryon number and the curvaton dominates the energy density at the time of decay, the initial curvaton/radiation isocurvature mode is entirely transfered into a baryon/radiation isocurvature mode. This situation is opposite to that previously studied in three fluid models of curvaton decay; this difference is related to the conservation of the pre-existing baryon asymmetry and to the efficiency of the annihilation of all baryon/anti-baryon pairs produced in the decay. We study in detail the relevant cases in which the curvaton decay preserves or not baryon number and provide analytical and numerical calculations for each situation.
II. DESCRIPTION OF THE MODEL
We consider a model where four fluids are present: baryons (denoted in what follows with the subscript "b"), anti-baryons ("b"), radiation ("r") and the curvaton field ("σ"). At the fundamental level, the curvaton is a priori a massive scalar field but can effectively be treated as a pressureless fluid. One assumes that it can decay into radiation, baryons and anti-baryons. Each of these processes is controlled by a partial decay width denoted Γ σr , Γ σb and Γ σb respectively. The curvaton decay occurs when the condition Γ σ ∼ H is met, where Γ σ is the total decay width, namely Γ σ = Γ σr + Γ σb + Γ σb and H the Hubble parameter.
We do not discuss the phenomenology of curvaton to dark matter decay in the present paper. It is fair to assume that by curvaton decay, dark matter is effectively decoupled from radiation and baryons/anti-baryons. Even though the decay of curvaton may induce a dark matter -radiation isocurvature mode (see Ref. [25] for a detailed analysis), or a baryon -radiation isocurvature mode, both baryonic and dark matter sector will evolve independently. In this sense, the constraints obtained on dark matter or baryon isocurvature modes give complementary constraints on the physics of curvaton decay.
The freeze-out of baryon/anti-baryon annihilations is controlled by the velocity averaged cross-section where m π = 135 MeV. This relation originates from the fact that, in the present context, the pion can be viewed as the gauge boson mediating the strong force. Freeze-out of bb annihilations occurs when Γ b|f ≡ nb |f σ bb v and/or Γb |f ≡ n b|f σ bb v are of the order of the expansion rate H which corresponds to a temperature ∼ 20 MeV in the absence of curvaton decay (that is to say, assuming that radiation always dominates the energy content of the Universe). Big-Bang Nucleosynthesis (BBN) puts rather extreme upper bounds on the amount of energy density injected at temperatures T 1 MeV (see Ref. [27] for a recent compilation). For all practical purposes, it suffices to impose that T d ≥ 1 MeV to satisfy these constraints. Furthermore, the late time decay of a scalar field at temperatures of order 1 − 10 MeV is a fairly generic case in the framework of moduli cosmology. As is well known, such fields generically possess a very large energy density and a very small decay width Γ σ ∼ m 3 σ /m 2 Pl , hence they decay after big-bang nucleosynthesis if their mass is of the order of the weak scale. Therefore, in order to reconcile the existence of such fields with the success of big-bang nucleosynthesis, one has two choices: either the energy density of these fields at the time of big-bang nucleosynthesis is very small or their mass is large, leading to early enough decay. The mass also cannot be arbitrarily large, otherwise one has to face a hierarchy problem, hence the generic decay temperature is 1 − 10 MeV. Supersymmetric models with anomaly mediated supersymmetry breaking provide an explicit realization of particle physics model building in which the masses of moduli is of the order of m σ ∼ 10 − 100 TeV, which leads to decay temperatures of the moduli/curvaton T d ∼ 1 − 10 MeV [28].
The above motivates the present study of the phenomenology of curvaton decay at temperatures of order 1−10 MeV. Out of simplicity, we keep this temperature fixed to a value T d = 5.9 MeV in our numerical analysis, which corresponds to a total decay width Γ σ = 1.6 × 10 −20 MeV. We will argue that the results obtained remain unchanged if the decay temperature is higher, in particular if T d 20 MeV. At the background level, following the approach of Ref. [25], the above situation can be modelled by the following set of equations Let us describe these equations in more detail. As usual, the parameters Ω (α) are defined as the ratio of the energy density of the fluid α to the critical energy density, Ω (α) ≡ ρ (α) /ρ cr . The time variable is the number of e-folds, N ≡ ln a, where a is the scale factor. The quantity Ω eq b is defined by Ω eq b ≡ m b n eq b /ρ cr , where n eq b is the particle density at thermal equilibrium, expressed as: with a similar expression for n eq b . The quantity µ b is the chemical potential of the baryons and one has µ b = −µb. The temperature T can be expressed in terms of the variables of the previous system of equations as: Note that the above description implicitly assumes that the curvaton decay products thermalize instantaneously. This assumption will be discussed at the end of Section III. One should already underline that the above ratios Γ σb /Γ σ , Γ σb /Γ σ and Γ σr /Γ σ should be understood as characterizing the fraction of curvaton energy that eventually goes into thermalized "b", "b" and "r", rather than the branching ratios associated with curvaton decay channels. For the sake of simplicity, we ignore any temperature dependence of the function g * and we take g * = 10.75. If we compare with the equations of motion established in Ref. [25] in the case where the curvaton can decay into dark matter χ (rather than baryons and anti-baryons), the only difference is that terms like Ω 2 χ or Ω 2 χ,eq are replaced by Ω b Ωb and Ω eq b Ω eq b . Notice that, as a consequence, the evolution of the system does not depend on the chemical potential which cancels out, thanks to the fact that µ b = −µb. Finally, there is a factor 2 in front of the last term in Eq. (4). This factor originates from the requirement that the total energy density be conserved.
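The full four-fluid background system, Eqs. (2)-(6), is not reproduced in the text above. As a reduced illustration of its structure, the sketch below integrates only a simplified curvaton + radiation background (a pressureless field decaying into radiation at the total rate Γ_σ), with e-folds as the time variable. The decay width value is the one quoted in the text; the Planck-mass units, initial densities and integration range are assumptions made for the sketch, which is not the system actually solved in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 1.6e-20   # total decay width in MeV (value quoted in the text)
MPL = 2.435e21    # reduced Planck mass in MeV (assumed unit convention)

def background(N, y):
    """Simplified two-fluid background: y = (rho_sigma, rho_r) in MeV^4."""
    rho_s, rho_r = y
    H = np.sqrt(max(rho_s + rho_r, 0.0) / (3.0 * MPL**2))
    drho_s = -3.0 * rho_s - (GAMMA / H) * rho_s    # matter-like dilution + decay
    drho_r = -4.0 * rho_r + (GAMMA / H) * rho_s    # radiation dilution + source
    return [drho_s, drho_r]

# Hypothetical initial conditions: subdominant curvaton in a radiation bath.
rho_r0 = 1.0e12          # assumed initial radiation density (MeV^4)
rho_s0 = 0.01 * rho_r0   # assumed initial curvaton density
sol = solve_ivp(background, (0.0, 12.0), [rho_s0, rho_r0],
                method="LSODA", rtol=1e-8)   # LSODA handles the stiff decay

Omega_sigma = sol.y[0] / (sol.y[0] + sol.y[1])   # curvaton density parameter
```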
Let us also discuss how the initial conditions are chosen. Initially, we start with thermal equilibrium and some baryons/anti-baryons asymmetry. This implies that These two relations lead to Therefore, if the initial values of Ω and δ can be expressed as where x ≡ m b /T and where the quantity ǫ b is defined by The present-day value of the baryon asymmetry is ǫ b ≃ 5.4 × 10 −10 [29]. The initial value of ǫ b well before the freeze-out of bb annihilations must therefore be tuned in order to reproduce the final value after curvaton decay and entropy transfer from e + e − to the photons. Curvaton decay may dilute any pre-existing asymmetry through entropy production or even produce net baryon number if the curvaton decay process violates baryon number. In all our calculations presented further below, we have tuned this initial asymmetry in order to match the observed present-day value. In order to establish Eq. (12), we have used the fact that the number of photons is given by n γ = ζ(3)gT 3 /π 2 . It is important to notice that the difference n b − nb is normalized with respect to the photon energy density (or number) and not to the total radiation energy density. In these formulas, x = x ini ∼ 10 (for instance) and m b ∼ 0.9GeV are known (or chosen). Moreover, the Hubble parameter and Ω r are related through Eq. (8). Then, using the fact that the space-like sections are flat, i.e. Ω σ + Ω r + Ω b + Ωb = 1, and considering the (initial value of) Ω σ,ini as a free quantity, one can derive the following expression Therefore, for a given value of Ω σ,ini , H ini can be computed and the other quantities Ω r,ini , Ω b,ini and Ωb ,ini , simply follow from the above equations.
Let us now consider the perturbations. In order to establish the gauge-invariant equations of motion, we follow the method of Ref. [25]. It consists in formulating the equations in a covariant way in order to be able to perturb them consistently. One can write where Q µ = ΓT µν u ν is the curvaton decay term, u ν being the four velocity of a fundamental observer, and the term Y µ is a phenomenological description of the interaction term. It reads Of course, a rigorous treatment of the problem would rely on the full Boltzmann equation but this phenomenological description will be sufficient for our purpose. In particular, one can check that Eq. (14) exactly reproduces the background equations (2)- (6). Moreover, it is straightforward to perturb Eq. (14). This leads to the following system where ∆ α ≡ ∆ρ α /ρ α is the gauge-invariant density contrast for the fluid α. The quantity ∆ eq is defined by the following expression ∆ eq ≡ 1 4 With this definition, it is easy to see that we deal with a "closed" system of equations since x must be viewed as a function of H and Ω r , see Eq. (8). Let us now turn to the discussion of the solutions of the two systems of equations presented in this section.
III. RESULTS
The main parameters that govern the cosmological consequences of curvaton decay into radiation and baryon/antibaryons are: (i) the time of decay of the curvaton, which is encoded in the total decay width Γ σ , (ii) the respective branching ratios Γ σb /Γ σ and Γ σb /Γ σ ; (iii) the magnitude of the curvaton energy density at the time of decay, i.e. Ω < d σ when H = Γ σ . The main parameters are therefore the respective branching ratios and Ω < d σ . Note that the branching ratios are constrained by the measured baryon asymmetry ǫ b ≃ 5.4 × 10 −10 . In particular the baryon asymmetry ǫ b measured immediately after curvaton decay should be equal to 1.5 × 10 −9 , in order to obtain the measured value after the reheating of the photon fluid by electron/positron annihilations. According to whether Γ σb = Γ σb or not, two possibilities may arise. In the case of symmetric decay, meaning Γ σb = Γ σb , the baryon asymmetry is generated by some unspecified mechanism acting at a higher energy scale; it is simply diluted during curvaton decay by the extra entropy brought by the curvaton. In the case of asymmetric decay, the curvaton contributes to the net baryon asymmetry. We note that direct baryogenesis at a low temperature T ∼ 10 MeV is very contrived; we will nevertheless study this case for the sake of completeness and discuss the robustness of the results for higher decay temperatures. These two scenarios indeed exhibit different consequences, as discussed in turn in the following.
We will use the standard definition of the curvature perturbation in fluid (α) [30,31,32]: where, in order to expressρ (α) , we have not considered the interaction term. The corresponding definitions for the isocurvature modes read In particular, we will be interested in the transfer of the initial curvaton/radiation isocurvature perturbation into the final baryon/radiation isocurvature mode, as expressed by the ratio S σr . The quantity indexed with (f) [resp. (i)] is evaluated well after the decay (resp. well before). In the following, we also express quantities evaluated immediately before (resp. after) decay with the superscript < d (resp. > d ).
A. Symmetric decay
In this sub-section, we explore the phenomenology of models in which the curvaton decays symmetrically into baryons and anti-baryons, i.e. Γ σb = Γ σb . We find that two situations may arise, according to whether the curvaton dominates the energy density at its decay, i.e. Ω < d σ ∼ 1, or not. Consider first the case in which the curvaton dominates the energy density at decay, Ω < d σ ∼ 1. We find that the transfer coefficient of the isocurvature mode is maximal, as exemplified for instance in Fig. 1 This result is quite different from a "standard" scenario of curvaton decay into radiation and dark matter, in which the domination of the curvaton at the time of decay ensures that only adiabatic modes subsist, as all fluids have inherited the same curvaton perturbations. This difference can be related to the annihilation of all bb pairs produced by the curvaton, which effectively reduces to zero the net energy transfer of the curvaton to the baryon fluid. In order to put this statement on quantitative footing, it is useful to evaluate the ratio of the annihilation rates of baryons and anti-baryons to the expansion rate immediately after curvaton decay (H = Γ σ ): The first equation gives the ratio Υ b of the annihilation rate of baryons to the expansion rate, while the second gives the corresponding ratio Υb of the annihilation rate of anti-baryons to the expansion rate. Considering Υ b , the above formula shows that if Ωb exceeds ∼ 10 −16 , annihilations are effective. In the absence of curvaton, the freeze-out of bb annihilations occurs as the abundance of anti-baryons is reduced to below this threshold. In the presence of a curvaton however, the decay of this field will regenerate the annihilations provided the amount of curvaton produced anti-baryons is sufficient, i.e.
Then all pairs of baryon/anti-baryon produced by curvaton decay will annihilate. Of course, if the branching ratio Γ σb /Γ σ 10 −16 , the regeneration of annihilations will not take place, but the curvaton will not exert any influence on the pre-existing baryon fluid either.
In fact, the behaviors of the different quantities plotted in Fig. 1 can be understood in more detail along the following lines. Consider the variables associated to net baryon number, in particular Ω b − Ωb. Its equation of motion reduces to: This composite fluid is isolated, as neither annihilation nor curvaton decay violates baryon number. Therefore, the curvature perturbation associated to this composite fluid is conserved, as predicted in Ref. [5]. Furthermore, one has Ωb ≪ Ω b before curvaton decay and after annihilation freeze-out, so that this fluid of "net baryon number" approximately corresponds to the baryon fluid. As annihilations of bb pairs produced by curvaton decay is efficient, the above inequality remains valid after curvaton decay, hence "net baryon number" remains a good approximation for the baryon fluid. All in all, the above indicates that the curvature perturbation of the baryon fluid should remain conserved if curvaton decay preserves baryon number and if annihilations of bb pairs are efficient. This implies that the above results remain unchanged if the decay temperature T d 20 MeV. The theorem of Ref. [33] stipulates that the isocurvature mode between two fluids sharing thermal equilibrium are erased on a small timescale, unless there exists a conserved charge. In the present case, baryon number is conserved, or more precisely net baryon number does not couple to radiation, hence the above theorem does not apply. Consequently, once the isocurvature mode is produced, it remains conserved unless baryon number violating processes take place. In other words, one can extrapolate the above results to temperatures at least as high as the electroweak scale. Let us also remark that, if baryons are relativistic (at temperatures above the QCD scale), the above equations are slightly modified, but the above results remain unmodified.
On a more formal level, one can follow the evolution of the different variables as follows. Neglecting Ωb in front of Ω b in Eq. (25) above indicates that Ω b scales as a when Ω r ∼ 1 (i.e. after curvaton decay), while Ω b remains approximately constant when Ω r ≪ 1. These trends are observed in Fig. 1.
The behavior of Ω_b̄ is less trivial to obtain (but its cosmological relevance is also much less). One can approximate Eq. (3) with the following, after curvaton decay: where the term Ω_r Ω_b̄ has been neglected as the annihilations are dominant. Using the fact that H ∝ a^{-2} and Ω_b ∝ a after curvaton decay, one derives the following late-time value of Ω_b̄: Since Υ_b^{>d} takes enormous values of order 10^9, the annihilations regenerated by curvaton decay essentially erase all trace of anti-baryons, and the corresponding plateau cannot be observed in Fig. 1 because it is too small.
Let us now turn to the perturbations and assume that curvaton decay is instantaneous. If the curvaton dominates the energy density before decay, and transfers its energy to radiation, then: This relation can be obtained through standard methods and corresponds to the conservation of the total curvature perturbation throughout curvaton decay. Similarly, one can build the variable associated with the perturbation of net baryon number, Ω_b Δ_b − Ω_b̄ Δ_b̄, which, for all practical purposes, can be approximated by Ω_b Δ_b. The equation of motion for this quantity reads: Since Φ is conserved both before and after curvaton decay, the first term on the r.h.s. can be neglected, and Ω_b Δ_b − Ω_b̄ Δ_b̄ is approximately conserved when Ω_r ≪ 1. Approximating Ω_b Δ_b − Ω_b̄ Δ_b̄ with Ω_b Δ_b, this implies that Δ_b is approximately conserved, since Ω_b is constant in this case (see before) and, hence, that ζ_b is also conserved. At late times, after curvaton decay, Ω_r ∼ 1 implies that Ω_b Δ_b scales as a, hence that Δ_b (and therefore ζ_b) is again approximately constant because Ω_b ∝ a. One thus finds that: As mentioned above, this property can be traced back to the fact that net baryon number behaves in the present case as an isolated fluid, hence its curvature perturbation is a conserved quantity. Finally, one derives from Eqs. (28) and (30) above the transfer of the isocurvature perturbation: These results match the numerical evolution observed in Fig. 1. Obviously, the above discussion suggests that S_br^{(f)} → 0 as Ω_σ^{<d} → 0, since the net baryon number must remain unaffected, while a decreasing curvaton energy density at the time of decay implies that a lesser amount of radiation is produced during the decay. In more detail, this trend is confirmed in Fig. 2, which provides an example with Ω_σ^{(i)} = 0.01, corresponding to Ω_σ^{<d} ≃ 0.09 at decay (T_d ≃ 5.9 MeV). The final transfer coefficient is of order −Ω_σ^{<d} as expected.
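For orientation, and as a generic sketch under the usual sudden-decay approximation rather than a reproduction of the exact Eqs. (28)-(32) referred to above, the conservation of the total curvature perturbation through an instantaneous decay of the curvaton into radiation reads

\[
\zeta_r^{>d} \;\simeq\; \Omega_\sigma^{<d}\,\zeta_\sigma \;+\; \bigl(1-\Omega_\sigma^{<d}\bigr)\,\zeta_r^{<d},
\]

while the baryon/radiation isocurvature perturbation is conventionally defined as

\[
S_{br} \;\equiv\; 3\,\bigl(\zeta_b - \zeta_r\bigr).
\]

With ζ_b conserved (as argued above) and ζ_r shifted towards ζ_σ in proportion to Ω_σ^{<d}, one can read off why the transfer is maximal when the curvaton dominates at decay and of order −Ω_σ^{<d} when it does not.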
B. Asymmetric decay
If the curvaton can decay asymmetrically, Γ_σb ≠ Γ_σb̄, the phenomenology is different, as not all "b" and "b̄" produced by the curvaton will be able to annihilate with each other. In particular, the production of net baryon number during curvaton decay comes with the transfer of the curvaton perturbations to the baryon fluid. As already mentioned, known models of baryogenesis produce baryon number at a much higher scale than 1−10 MeV. We nevertheless discuss this asymmetric case for the sake of completeness and because it provides useful insights into curvaton cosmology. Moreover, as we have argued in the previous section, the present results can be extrapolated to a higher decay temperature, possibly as high as the electroweak scale.
In the present case, one may expect cosmological consequences opposite to those found in the case of symmetric decay: if the curvaton dominates the energy density of the Universe shortly before decaying, and produces during its decay most of the baryon number, both baryon and radiation fluid will inherit its perturbations, hence there should be no final baryon/radiation isocurvature mode. On the contrary, if the curvaton energy density is small compared to the radiation energy density shortly before decay, but the curvaton still produces most of the baryon number, a maximal isocurvature mode between baryon and radiation should be produced.
In what follows, we will use the short-hand notation: Assuming that the initial baryon asymmetry vanishes and that curvaton decay is instantaneous, one can obtain an order of magnitude of the decay asymmetry needed to reach the observed value of ε_b as follows: which implies: Numerical calculations differ from this simple estimate by a factor of order unity. In order to understand these results, it is instructive to express the time evolution of the baryon asymmetry using the system of Eqs. (2), (3), (4) and (6). The baryon asymmetry can indeed be written as: Hence the time evolution of the baryon asymmetry is governed by the following equation: In order to obtain the above equation, we have neglected the baryon/anti-baryon annihilation term in the equation for Ω_r [Eq. (4)], which is justified insofar as the amount of radiation produced in baryon/anti-baryon annihilations is negligible at or after freeze-out. The above equation is interesting because it shows how the baryon number can be modified: either through baryon-number-violating curvaton decay (first term on the r.h.s.), or through dilution due to entropy production (second term on the r.h.s.). It also provides an estimate of the conditions under which the initial curvaton/radiation isocurvature mode is efficiently transferred to the baryon/radiation mixture. Such an efficient transfer can indeed be achieved if |Δε_b/ε_b| ≈ 1 at curvaton decay, without significant production of radiation by the curvaton. The latter condition amounts to negligible entropy production or, equivalently, to assuming that the second term on the r.h.s. of Eq. (38) is negligible compared to unity. The former condition then implies that the first term on the r.h.s. of Eq. (38) is larger than unity. All in all, efficient transfer of the isocurvature mode occurs if: It is interesting to remark that this situation is very similar to that encountered for curvaton decay in a three-fluid model incorporating radiation and dark matter. Borrowing from the method of Refs. [23,25], it is possible to express the final baryon/radiation isocurvature fluctuation in terms of the initial curvaton and radiation curvature modes, as follows. One first constructs a composite fluid that has the property of being isolated, with energy density: Notice that this construction is possible because each component of the composite fluid is pressureless. Its curvature perturbation, which is conserved by construction, is: Then, assuming that curvaton decay is instantaneous, one can match the value of ζ_comp after decay to that before decay, which gives: In order to obtain the first equality, we have used the fact that Ω_b̄^{>d} ≪ Ω_b^{>d} as a result of the efficient annihilation of b b̄ pairs after curvaton decay. Although the quantity ζ_comp^{<d} is evaluated here immediately before decay, it can be evaluated at any initial time, since it is conserved.
The radiation perturbation is given by Eq. (33), hence the final baryon/radiation isocurvature perturbation can be written as: where we used the fact that S_br^{(i)} = S_b̄r^{(i)} = 0. As expected, the isocurvature transfer vanishes as Ω_σ^{<d} → 0 (since this also implies Ω_σ^{(i)} → 0). When Ω_σ^{<d} → 1, one can see that the first term in the bracket on the r.h.s. of the above equation also tends to one, and therefore the transfer coefficient of the isocurvature mode also vanishes. The initial isocurvature fraction is transferred efficiently only if the conditions expressed in Eq. (39) are fulfilled. Note also that in the limit ΔB_{bb̄} → 0, one recovers the result of Section III A presented in Eq. (32).
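The generic property underlying such composite-fluid constructions may be worth recalling as a sketch, without reproducing the specific combination of curvaton and net-baryon-number densities used here. For a collection of pressureless components, each with individually conserved curvature perturbation ζ_i, the curvature perturbation of the composite is the density-weighted average

\[
\zeta_{\rm comp} \;=\; \frac{\sum_i \rho_i\, \zeta_i}{\sum_i \rho_i},
\]

which remains conserved as long as the composite exchanges no energy with the other fluids, which is precisely the "isolated" property invoked above.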
Finally, a last point is to be made concerning the assumption of instantaneous thermalization of the curvaton decay products. If the center-of-mass energy √s ∼ (E E_th)^{1/2} for an interaction between a high-energy particle of energy E and a thermalized particle of energy E_th is well above the QCD scale, then the ratio of the rates of thermalization processes to b b̄-producing ones is of the order of (α_em/α_s)^2 ≪ 1. It is even smaller if √s is below the QCD scale. Therefore, the above approximation is not, strictly speaking, justified. However, the neglect of these additional interactions would not modify our conclusions, for the following reason.
The only effect that could modify our conclusions is if one fluid (either radiation or baryons) were "contaminated" by the other fluid (respectively baryons or radiation) through the interaction of high-energy particles produced through curvaton decay with thermalized particles. One typical example is given by the transfer of energy from the photon to the baryon fluid through γ + γ_th → b + b̄, where γ stands for a high-energy photon. However, net baryon number does not couple to radiation, hence such transfers of energy between these two fluids cannot take place after curvaton decay (provided the latter occurs after any baryogenesis event).
Hence all the conclusions remain unaffected by these processes that occur between curvaton decay and thermalization. It is important to stress, however, that Γ_σr/Γ_σ, Γ_σb/Γ_σ and Γ_σb̄/Γ_σ should not be interpreted, strictly speaking, as the branching ratios of curvaton decay into radiation, baryons or anti-baryons, but rather as the fractions of curvaton energy eventually transferred into these fluids after all thermalization processes have occurred.
IV. CONCLUSIONS
In this section, we recap our main results. We have studied the production of isocurvature perturbations in the curvaton scenario where the curvaton field can decay into radiation, baryons and anti-baryons. Two different cases have been considered. The first one is the symmetric case in which the curvaton/baryon decay width equals the curvaton/anti-baryon one, i.e. curvaton decay preserves baryon number. We have found that if the curvaton dominates the energy density before decay, then a baryon/radiation isocurvature mode can be produced. In the opposite situation in which the curvaton contributes negligibly to the total energy density immediately before decaying, the isocurvature mode vanishes. This result is opposite to the standard prediction of the simplest curvaton scenario in which any pre-existing isocurvature mode is erased by curvaton decay if this latter dominates the energy density at the time of decay. This difference can be traced back to the conservation of baryon number and to the annihilation of all bb pairs produced during curvaton decay.
One noteworthy consequence of the above is to forbid the liberation of a significant amount of entropy by a late decaying scalar field at temperatures below any baryon violating processes, such as is often invoked for the dilution of unwanted relics.
Another consequence of the above is that a baryon-radiation isocurvature mode S br cannot co-exist with a (WIMP) dark matter -radiation isocurvature mode S χr , since the conditions to produce these modes are opposite to one another. Since S χb = S χr − S br , the existence of a baryon-dark matter isocurvature mode appears generic in this case (unless Ω < d σ is so small at the time of decay that the curvaton exerts essentially no influence on dark matter and baryon perturbations).
The asymmetric decay presents a different phenomenology. Since the curvaton decay does not produce the same number of baryons and anti-baryons, the annihilations cannot suppress all the baryonic decay product and, as a consequence, when the curvaton dominates at decay, the isocurvature perturbations are erased. In this case, most or all of the baryon and radiation fluctuations indeed originate from the curvaton. If the curvaton contribution to the energy density is smaller than unity at the time of decay, then radiation cannot be affected substantially, while the baryon fluid may be strongly affected; this situation results in a large baryon/radiation isocurvature fluctuation. In some sense, this case appears similar to the case of curvaton to dark matter decay studied in Ref. [25]. Contrary to the previous symmetric case, non vanishing S br and S χr can co-exist. We note however, that baryogenesis at low scales (below the electroweak phase transition) is rather contrived.
On more general grounds, the study presented in this article exemplifies how scenarios where scalar fields can decay at late times can be constrained not only at the background level, as it is usually done, but also by investigating the consequences at the perturbed level. It is clear that, if this type of information is taken into account, one can hope to improve our understanding of the feasibility of such theories. We hope to return to this question in future publications. | 2008-10-06T08:29:17.000Z | 2008-02-12T00:00:00.000 | {
"year": 2008,
"sha1": "80cad556a505bfaf5b3ffc298148a60d3522ab61",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0802.1601",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "80cad556a505bfaf5b3ffc298148a60d3522ab61",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
256461642 | pes2o/s2orc | v3-fos-license | Characteristics and treatment of congenital perineal groove in male patients
Background Congenital perineal groove (CPG) of male patients has rarely been reported before. The purpose of this study was to review our cases and describe their characteristics and treatment. Methods Four male patients diagnosed with CPG were included in this study. Medical records were retrospectively reviewed. Type of CPG and anal position index (API) of the patients were recorded. Follow-up was through outpatient visits. Results Their age ranged from 4 years and 2 months to 10 years and 9 months. Among the four patients, two complained of intermittent CPG mucosal hemorrhage and the other two had mucous secreting and soiling. The API was 0.24, 0.35, 0.36, and 0.40 for each patient, respectively, all represented anterior displacement. Type of CPG for the four patients were all partial, and the sulcus was from the posterior perineum to the edge of anus. Two patients were associated with hydrocele, imperforated anus, and rectoperineal fistula; one patient had left varicocele; the remaining patient had sacrum split. All the patients had no postoperative complication, and during the follow-up period of 5–8 months, no symptoms recurred in the four patients; they all had normal defecation. Conclusions Both genders share the common three characteristics. In addition, shortened perineum with anterior anus, association of perineal malformations, and partial type occurrence are the extra morphological features in male patients. Furthermore, CPG in males are rarely accompanied by urinary tract infection. Favorable prognosis could be reached after operation.
Introduction
Congenital perineal groove (CPG) is a rare malformation characterized by the presence of a moist sulcus with mucous membrane on the perineum since birth (1,2). Because of the rarity, exact incidence cannot be calculated. Articles concerning this topic were mainly case reports and an obvious female predominance was noticed (2,3). To date, there have only been four male patients documented in the literature (4)(5)(6)(7). Thus, characteristics and treatment of male patients have yet to be summarized.
Currently, CPG can be divided into two types based on the morphology of perineum, complete and partial (2,3,8). The complete type has the sulcus extended from the posterior fourchette to the anterior edge of the anus in female patients. In the partial type, the sulcus is incomplete and reaches either the posterior fourchette or the anus. This classification can be adopted in male patients when assessing the sulcus from the scrotum to the anus.
The purpose of this study was to retrospectively review our CPG male patients, describe their characteristics and treatment consideration, and discuss the potential pathogenesis.
Materials and methods
From January 2020 to March 2022, the General Surgery Department of our hospital accepted four male patients who were diagnosed with CPG. The clinical data were retrospectively reviewed with attention to age, manifestations, associated anomalies, and treatment-related information. The CPG type was categorized as complete when the sulcus extended the whole way from the scrotum to the anus, and noted partial if otherwise. Their follow-ups were held through regular outpatient visits.
The anal position index (API) is a quantitative measurement that defines the position of the anus on the perineum; in males it is calculated as the ratio of the anal-scrotum distance to the coccyx-scrotum distance, and a value <0.51 is indicative of anterior displacement (9). Each patient was measured three times, and the mean value was recorded for further analysis; a minimal sketch of this calculation is given below.
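The following short Python sketch illustrates the calculation only; the function names and the example measurements are hypothetical, and the 0.51 cut-off is the one cited above.

```python
def anal_position_index(anal_scrotum_mm, coccyx_scrotum_mm):
    """Return the anal position index (API) for a male patient."""
    return anal_scrotum_mm / coccyx_scrotum_mm

def mean_api(measurements, threshold=0.51):
    """Average repeated (anal-scrotum, coccyx-scrotum) measurements and flag
    anterior displacement when the mean API falls below the threshold."""
    apis = [anal_position_index(a, c) for a, c in measurements]
    api = sum(apis) / len(apis)
    return api, api < threshold

# Hypothetical example: three repeated measurements (in mm) for one patient.
measurements = [(21.0, 60.0), (20.5, 59.0), (21.5, 61.0)]
api, anterior = mean_api(measurements)
print(f"API = {api:.2f}, anterior displacement: {anterior}")
```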
Results
CPG was diagnosed in four male patients during the study period (Table 1). Their ages ranged between 4 years and 2 months and 10 years and 9 months. Among the four patients, two complained of intermittent CPG mucosal hemorrhage and the other two had mucous secretion and soiling since birth. No dermatitis, local complication, or urinary tract infection was found in the four patients, and they interacted well with their peers. None of the four patients had a history of constipation, perineal trauma, or sexual abuse. The API was 0.24, 0.35, 0.36, and 0.40 for each patient, all of which represented anterior displacement, and their perineal appearances were as shown in Figures 1-4, respectively. The CPG type was partial in all four patients, and the sulcus consistently ran from the posterior perineum to the edge of the anus. After analysis of the history and comprehensive examinations, one patient had a left varicocele; another patient had a sacrum split; the remaining two patients had imperforated anus with rectoperineal fistula and hydrocele, and one of them had previously undergone high ligation of a patent processus vaginalis. No other anomalies, such as hypospadias, penoscrotal transposition, bifid scrotum, or labioscrotal fold deformity, were found in the patients.
With the goal of smooth and efficient recovery, all the patients received preoperative intestinal preparation of daily colon irrigation for approximately 7-9 days with diet control. Surgery was through anterior perineal approach (Figures 2, 4) with lithotomy position after general anesthesia. First, we inspected the relationship of the rectum and anal sphincter position by using an electric stimulator to differentiate anterior anus and imperforated anus with rectoperineal fistula (Supplementary Video S1). When the rectum was located in the center of the sphincter, anterior displacement anus was diagnosed (Figure 1), if not it was diagnosed as rectoperineal fistula. In addition, if a pouch could be touched by rectum digital examination, the diagnosis would be rectoperineal fistula ( Figure 2B). When a rectoperineal fistula was identified, a cutback anoplasty was done on the first step ( Figure 2C). Second, mucous membrane of the sulcus was removed beginning from the perineum to the edge of anus ( Figure 4B), continued with anatomical reset and intermittent layered suture in situ ( Figures 2D, 4C). Urethra was well protected during surgery and catheter was utilized as an indicator for urethra recognition. Pathology of all four patients revealed squamous epithelium.
The patients fasted for the first postoperative week, combined with anus nursing by cleaning and local physiotherapy. They suffered no infection or dehiscence of the wound. From the second week, the two patients who had cutback anoplasty began regular anal dilation for up to 6 months. During a follow-up period of 5-8 months, no symptom recurred in the four patients, and they all had normal defecation.
Discussion
CPG is a well-delineated, erythematous, non-epithelized mucous membrane on the perineum (10, 11), which was classified as part of the miscellaneous rare group in the 1984 Wingspread classification, and redefined as a rare or regional variant in the 2005 Krickenbeck classification (12). Currently, CPG has limited case reports, and most of them concern female pediatric patients. After a thorough literature search, information on four male patients was found (4-7), but only two had details. Thus, it is crucial to report the characteristics and treatment of male CPG patients according to our single-center experience. Compared with the current literature, this study includes the largest number of male patients.

FIGURE 2 Pictures (A,B) of case 2 displaying the right hydrocele, shortened perineum, and imperforated anus with perineal fistula. There was a pouch behind the fistula (B). Operation was through cutback anoplasty (C) and sulcus mucous membrane removal (D).
Our results showed that the four patients had mucous-related symptoms such as hemorrhage or secretion. In contrast to female patients, the males had no urinary tract infection. This could be explained by the fact that the urethra and the sulcus have a closer anatomic relationship at the perineum in female patients, especially when the sulcus is of the complete type with the posterior fourchette open; mucous secretion can then easily contaminate the urethral orifice and induce infection. However, this hardly happens in males, as the sulcus is usually of the partial type and located more posteriorly, separated from the urethra by the scrotum.
In 1968, professor Stephens concluded that female CPG had normal formation of the vestibule, hypertrophy of the minoral tails, and the presence of a wet groove in the perineum between the posterior fourchette and the anus (13). From our study, males may have other extra morphological features. First, API of the four patients all reflected shortened perineum and anterior displacement anus, and two were imperforated anus with rectoperineal fistula. In addition, previous reported cases also had hypospadias, penoscrotal transposition, and bifid scrotum (5,6). Additionally, all four patients were classified as partial type CPG, and the sulcus was from the posterior perineum to the anus instead of from the scrotum to the anus. Shortened perineum with anterior anus, associated perineal malformations, and the occurrence of partial type CPG may be the morphological characteristics of male patients in addition to the three female features. This can only be further proved by enlarging the sample volume. Explanation for such a phenomenon can be stemmed from the interruption of embryology process of genital folds, labioscrotal folds, and urorectal septum. Normally in the 7th embryo week, development of the urorectal septum separates the cloacal into the anterior urogenital sinus and the posterior anus (13,14). Then, the labioscrotal folds grow to surround the urogenital sinus bilaterally, and from the 9th week, they begin to descend caudally and fuse at the midline, forming the scrotums in males (10,13,15). Meanwhile, the urorectal septum keeps developing, leading to the elongation of the perineum, which pushes the anus backward to its correct anatomical position (15). When various insults interrupt the process and cause defects in extension of the urorectal septum, it would result in a shortened perineum and anterior anus (10). The failure of genital folds fusion at midline would cause sulcus (1,3,8,10,11,16). Interruption of labioscrotal fold development contributes to penoscrotal transposition and bifid scrotum (3,10). Push force from the bilateral mesenchyme promotes the midline fusion (14). Thus, we hypothesized that the forming of scrotums with descending of testis had a larger push force for bilateral fuse than the labia fusion. As a result, it is more rare to encounter CPG male patients, and when diagnosed, they tend to have the partial type of CPG whereby the sulcus ranges from the perineum to the anus instead of scrotum-anus type or scrotum-perineum type. Of course, this hypothesis should be verified by embryological experiments in the future.
CPG is a clinical diagnosis, which is easy to make by inspection (3,8). It should be differentiated from anal fissure, dermatitis, trauma, or even sexual abuse (3,8,11). Although both present with a fissure and hemorrhage, patients with anal fissure usually have a definite history of constipation with hard and dry stool, as well as painful defecation that is relieved when the stool improves. Moreover, an anal fissure can occur anywhere around the anus in males, unlike the sulcus of CPG, which lies at the standard 12 o'clock anal position. New-onset dermatitis can be distinguished from persistent CPG by its onset. Trauma or sexual abuse can be ruled out with a clear injury history. Conservative management is recommended for CPG because it is usually asymptomatic and has the potential of self-resolving by epithelialization before the age of 2 years (1,4,8,17). Surgery is only considered when patients have other perineal anomalies that require operation, or are over 2 years old with bothersome symptoms (1,3,4), like the four patients included in this study. Some patients also received operations at their parents' request for cosmetic purposes. Unlike in females, the urethra should be protected carefully in male patients during surgery, as the sulcus is very close to it. Use of a urethral catheter before the excision is essential to better indicate and protect the urethra. The postoperative course of the four patients was uneventful, and an excellent prognosis was reached, in accordance with the literature.
In conclusion, CPG is a kind of regional variant of perineum anomalies, which results from incomplete perineum development. It has a self-resolving potential before the age of 2 years. Unlike females, appearance in male patients is rarer. They share the features of a normal urogenital system formation, hypertrophy of the minoral tails, and wet groove in the perineum. Extra morphological features in male patients include shortened perineum with an anterior anus, association of perineal malformations, and a higher likelihood of being partial type CPG. Furthermore, they are less prone to have a urinary tract infection, and special attention should be paid to the urethra during surgery in case of iatrogenic injury. Favorable prognosis could be reached after operation.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author. | 2023-02-02T14:27:30.129Z | 2023-02-02T00:00:00.000 | {
"year": 2023,
"sha1": "68a0faf11eaaf0bfbbda3661b42dcddfa098cb8c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "68a0faf11eaaf0bfbbda3661b42dcddfa098cb8c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
248840787 | pes2o/s2orc | v3-fos-license | Measuring the State Losses in Corruption Cases at the Village Credit Institutions in Bali
the initial capital for the establishment of the Village Credit Institution.
INTRODUCTION
In essence, whatever human beings in the community attempt and do, whether as an institution or as a community, actually aims to improve the welfare of themselves, their families and society. In addition, these efforts are also intended to ensure the realization of social order 1 . Unfortunately, in practice, there are many obstacles, challenges, and hindrances to realizing the intended goal. One of them is corrupt behavior and a mentality that ignores due process, which is very concerning. Criminal acts of corruption that steal the people's money have truly devastating results. They are considered extraordinary crimes 2 , because they can damage the democratic system and undermine the competitiveness of the nation, besides being able to weaken the spirit of the younger generation to study and struggle.
Speaking of corruption as an obstacle to development, by the end of 2021 a legal issue had emerged in Bali as a current, interesting, and much-discussed topic worth researching. In particular, it is the phenomenon of cases of criminal acts of corruption committed collectively by the management of the Village Credit Institutions in the area 3 . The Village Credit Institution (in Indonesian 'Lembaga Pengkreditan Desa', abbreviated as LPD) refers to a microfinance institution belonging to traditional villages and established based on local wisdom 4 5 6 .
In cases of criminal acts of corruption in the form of abuse of money at the Village Credit Institutions, the losses suffered are far from trivial; they reach hundreds of billions of rupiah 7 . Generally, these cases are resolved in a traditional manner in the spirit of local wisdom and a sense of kinship. They are resolved through deliberation, or Paruman Krama of Traditional Villages, to reach an agreement or consensus 8 . However, the prosecutor's office suddenly entered this customary realm and then took part in conducting preliminary investigations and investigations 9 . The result is actually counterproductive, because positive law enforcement by the prosecutor's office in these cases is considered not beneficial to the community 10 , especially since the state losses are calculated with very minimal findings, so that penalties for violators of this customary village mandate tend to be lightened.
The involvement of the prosecutor's office in resolving cases of criminal acts of corruption at the Village Credit Institution is only based on a document published in 1988 regarding the Bali Governor's Assistance amounting to two million rupiah as the initial capital for the establishment of the institution. In other words, the truth is not the material truth required in the realm of criminal law, but only the formalistic truth as required by the realm of civil law. It is because it appears that in the realm of empirical facts, the initial capital for the establishment of the Village Credit Institution originated from the collective funds collected by the karma (community) of Traditional Villages in the form of urunan or pecingkreman 11 . The matter that is even more concerning is the distortion of the gap regarding the amount of state losses determined by the prosecutor. For example, an audit conducted by a traditional village officer or administrator through the Village Credit Institution Asset Rescue Team found the potential loss to the community was IDR 3,000,000,000 (three billion rupiah). Then, the Prosecutor's Office claimed the state's loss was only IDR 1,000,000,000 (one billion rupiah). Thus, it became out of sync with the basis of the document for capital participation from the Governor of Bali which amounted to IDR 2,000,000 (two million rupiah) in 1988 with claims for state losses that reached IDR 1,000,000,000, and also with the results of the audit by the traditional village management or administrators who found the community concerned suffered losses of up to IDR 3,000,000,000 12 .
The prosecution's claim is considered a far-fetched and light act by the public; as if the practice of the legal mafia had occurred. Based on observations in the field, the reason why many traditional villages have not established a Village Credit Institution is they feel unable to manage it professionally. They doubt that an institution which in essence has a noble purpose for the welfare of the community turns out to serves as the cause of disastrous divisions and commotion for the community due to the wrong management and administration. If examined in more depth, it appears that this reasoning is true, considering that until the end of 2021, there had been many Village Credit Institution administrators who were detained by the prosecutor's office because they were indicated to have abused public funds collected in the institution. There is a lot of news regarding the Village Credit Institution that went viral on social media because its administrators were indicated to have committed criminal acts of corruption. Among them are the following: 1) The Head of LPD Anturan Buleleng, being indicated to have abused funds of approximately IDR 174 billion.
2) The Head of LPD Kapal Badung, being indicated to have abused public finances of no less than IDR 240 billion.
3) The Head of LPD Desa Tuwed, the Regency of Jembrana, being indicated to have abused public funds of up to IDR 2.4 billion.
4) The Head of LPD Sunantaya, the Regency of Tabanan, being indicated to have abused public funds of approximately IDR 125 million.
5) The Head of LPD Temukus, the Regency of Buleleng, being indicated to have abused public funds of approximately IDR 1.2 billion.
6) The Head of LPD Tamlang, the Regency of Buleleng, being indicated to have abused public funds amounting to IDR 700 million.
7) The Head of LPD Manik Liu Tampaksiring Gianyar, being indicated to have abused public funds of approximately IDR 240 million.
There are many more LPD heads who are entangled in legal cases alleging criminal acts of corruption of public money or harming the state.
The data above represent the data regarding the criminal acts of corruption committed at the Village Credit Institutions in Bali which went viral after reported by the mass media, both electronic and print media. Of course, with the present research, cases of corruption at other Village Credit Institutions experiencing a liquidity crisis will be revealed and the number will certainly be very large. This is mainly because the institution's financial management system is not yet based on Information Technology whose data can be accessed anytime, anywhere, and by anyone. This research is designed to answer the basic problems regarding cases of criminal act of corruption at the Village Credit Institution in Bali by measuring losses the state sufferred. Therefore, the problems examined in this study are formulated as follows: 1) Can the results of the audit of the regency/municipality inspectorate be used as a basis for prosecuting suspects in cases of financial abuse committed at the Village Credit Institutions? 2) What alternative solutions should be taken in an effort to resolve the cases of financial abuse at the Village Credit Institutions?
METHOD
This research uses a mixed legal research method, applying two research approaches. The purpose is to make the legal materials and data collected more accurate and to guarantee the validity of the research results. Legal materials were collected through document searches and library materials by means of identification and text analysis. The process was performed by collecting data and legal materials in the form of laws and regulations, minutes and notations, legal documents, scientific books, publications or journals, and other sources of information related to the problem under study, including field-based information collected through interviews. All legal materials and data collected were studied, reviewed, recorded, and clarified to fit the research problems. The research was conducted in Bali Province by taking one sample from the village credit institutions in each of five regencies: Buleleng Regency, Karangasem Regency, Klungkung Regency, Tabanan Regency, and Jembrana Regency.
The Audit Results of the Regency/Municipality Inspectorate Cannot Be Used as a Basis for Prosecuting Suspects in Financial Abuse Cases at Village Credit Institutions
Quo Vadis in this context appears in the form of putting aside, not respecting, and ignoring the decisions of indigenous peoples that are autonomous, unique, and religiously magical by law enforcers who work in the name of legal certainty, benefit, and justice 13 14 . Law enforcers carry out their duties at a cost of approximately fifty million rupiah to deal with one corruption case at the Village Credit Institutions 15 . However, it is as if the law enforcers are deliberately setting aside the miseries, screams, and cries of the people who save their money at the Village Credit Institution when the money has been abused by the administrators of the Village Credit Institution. Thus, it can be argued that the model of law enforcement applied by the prosecutor's office is out of a sense of justice and there is no sensitivity to public unrest and suffering; and therefore, it is useless.
In fact, the wider community and law enforcers do not need to be excessive in judging the Village Credit Institution or its administrators as a failed financial institution. They should always keep the atmosphere in the traditional village conducive, as the Village Credit Institution indeed belongs to the traditional village and not to the State 16 . The logical construction of legal thinking is that the problem has already been resolved in a customary manner, but it is then drawn into the realm of state law as a crime of corruption that is detrimental to the state. In fact, the capital used to establish a Village Credit Institution is the urunan or pacingkrem, the voluntary contribution of the local indigenous krama (community). In other words, the Village Credit Institution belongs to the indigenous peoples, not the State. Ideally, the alleged loss attributed to the management and the proceeds that have been recovered should be returned to the Village Credit Institution, because it would be very strange if the confiscated proceeds were returned to the State treasury in the name of legal certainty, truth, and justice based on the One Godhead.
Thus, law enforcement like this is "misguided". The point here is that the legal basis used to drag in and prosecute Village Credit Institution administrators who are experiencing problems is the wrong legal basis. Despite being wrong, the legal basis in question continues to be applied for prosecution, without the will and courage to make improvements. In this context, the legal basis referred to is the result of an audit conducted by the Regency/Municipality Inspectorate to ensnare the problematic management of the Village Credit Institution; that is the wrong basis, and thus it should be called "misguiding". In this context, there are at least three regulations that have been set aside by law enforcement 17 , namely:
1) The Minister of Home Affairs Regulation (PERMENDAGRI) Number 64 of 2007
concerning Organizational Technical Guidelines and Work Procedures for Provincial and Regency/Municipality Inspectors, in particular Article 3 paragraph (2) expressly states that, the Regency Inspectorate has absolutely no authority to audit the finances of traditional village institutions, and of course, including the village credit institutions. 2) The regency/municipality inspectorate has the task of supervising the implementation of administration affairs in the regency/city area, the implementation of guidance on the administration of village government and the implementation of village administration affairs.
The village referred to in this regulation refers to the "service village", not at all "traditional villages". Accusing and punishing the Village Credit Institution management on the basis of the audit results from the regency/municipality inspectorate is "misguiding". The Inspectorate is an institution that has absolutely no competence, no authority, and no right to audit the Village Credit Institution's finances. This was emphasized by the Head of the Inspectorate of Jembrana Regency saying "We have no right to audit the Village Credit Institution's finances. The Village Credit Institution does not belong to the State".
Thus, accusing and dragging Village Credit Institution administrators by law enforcement on charges of corruption is a wrong legal construction from the start. If it is forced, of course in the future law enforcement in Indonesia will no longer be trusted by the public because law enforcement is principally based on legal certainty, targets and budgets.
It is conceivable that punishing people, in this case the Village Credit Institution administrators, with a wrong legal basis will certainly cause the court process to lose the public's trust 18 .
As the law enforcement jargon says, "It is better to release a hundred criminals than to punish one innocent person". The implication is the law in Indonesia pays very serious attention to human rights and provides legal protection to citizens who do what is right and have good intentions.
Moreover, the Village Credit Institution management who abused the finances of the institution had a good intention to return client/public funds by voluntarily surrendering their assets. The state certainly has an obligation to provide protection to every citizen of good faith. This is the regulation that is violated and set aside by law enforcers with the argument for legal certainty without paying attention to the principles of expediency and justice as the goal of law enforcement.
Law Number 1 of 2013 concerning Microfinance Institutions, especially the provisions of Article 39 which expressly states that "This Village Credit Institution is an institution that is exempt from the supervision of the Financial Services Authority and is exempted from the obligation to pay taxes'. It means that financial problems that took place at the Village Credit Institutions are not within the authority of the State to manage, because no State finances are harmed, considering that the Village Credit Institutions do not belong to the State, but belong to traditional village institutions. The original quote of Law Number 1 of 2013 Article 39 Paragraph (3) is as follows: "Village Credit Institutions and Lumbung Pitih Nagari as well as similar institutions that existed before this Law came into effect shall be declared to be recognized for their existence based on customary law and not subject to this Law." This means, if there is a financial problem at the Village Credit Institution, it does not necessarily involve law enforcement in its resolution, but should be resolved in a traditional manner through deliberation to reach consensus in accordance with local wisdom. This does not mean that law enforcers intervene in the name of law and truth which are very normative.
Bali Provincial Regulation Number 4 of 2019 concerning Traditional Villages in Bali. Article 28 juncto Articles 30 and 36 of this regulation confirms that: "Baga Kerta Desa shall be obligated to resolve talks/problems in traditional villages based on local wisdom". The original quotation from Article 30 letter (e) and Article 4 Paragraph (2) of the Bali Provincial Regulation Number 4 of 2019 is as follows: Article 30 letter (e): "Prajuru Desa Adat shall be obliged to settle customary/talk cases that occur in the Wewidangan of the Traditional Village." Article 34: "In carrying out the task of resolving customary/talk cases as referred to in Article 30 letter (e), the Traditional Village Prajuru shall be accompanied by the Kerta of the Traditional Village." Therefore, the presence of law enforcement in this context, in addition to using the wrong legal basis in carrying out prosecutions, is also flawed in the way it protects the public. It means that law enforcement becomes like "wearing blinkers", completely insensitive to the suffering of the people whom the State is obliged to protect.
Alternative Solutions That Should Be Taken in Efforts to Resolve Financial Abuse Cases at Village Credit Institutions
In the event that an act of financial abuse occurred at a Village Credit Institution and was committed by the management of the institution, based on the Regional Regulation of the Province of Bali Number 4 of 2019 concerning Traditional Villages in Bali, the case shall be settled in a traditional manner based on local wisdom, a sense of kinship, and deliberation to reach consensus. If this approach cannot be achieved, the best way to take is to report to the police on the basis of the provisions of Article 372 of the Criminal Code on Embezzlement and Article 362 of the Criminal Code on Fraud.
Based on the findings of this study, five leading factors were identified behind the occurrence of financial abuse at the Village Credit Institutions amounting to billions of rupiah. They are:
a) The supervision of management is not optimal
b) The emergence of illegal credit, or lending of funds outside the existing mechanism and not recorded in the Village Credit Institution administration
c) The emergence of non-performing loans
d) Fraudulent credit collectors
e) Expansion of credit with incorrect calculations.
A great number of Village Credit Institution administrators manage finances by means of speculation, joining in the practice of "land grabbing". Land is purchased at a relatively low price and then offered at a high price. Moreover, these speculative lands were reported as high-value assets, leading supervisors to believe that the Village Credit Institution was earning a large and healthy profit. In fact, that is not true at all. Often the plots of land do not sell at all: they are not suitable for sale, the people's purchasing power has decreased due to the still-raging Covid-19 pandemic, and the public has little interest in purchasing land whose price is high and whose location is far and remote.
CONCLUSION
Based on the problems and research findings, under Law Number 1 of 2013 concerning Microfinance Institutions, the Inspectorate Service at the regency/municipality level is not permitted to conduct audits of financial abuse committed by the Village Credit Institution management. Yet the material used by law enforcement to accuse Village Credit Institution management personnel of committing financial abuse is the result of an audit conducted by the regency/municipality Inspectorate. In addition to using the wrong legal basis, the law enforcement actions referred to also fail to fulfil the elements of the articles of the law on corruption as confirmed in the provisions of Law Number 31 of 1999 concerning the Crime of Corruption, which requires state losses.
"year": 2022,
"sha1": "d192b01f04999275526b25316ae53ebf67b189d4",
"oa_license": "CCBYNC",
"oa_url": "https://fhukum.unpatti.ac.id/jurnal/sasi/article/download/837/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a879389845941f527d462e69a1736a2a6bba49c7",
"s2fieldsofstudy": [
"Law",
"Political Science"
],
"extfieldsofstudy": []
} |
255133760 | pes2o/s2orc | v3-fos-license | Metabolic profiling of a transgenic Caenorhabditis elegans Alzheimer model
Despite decades of research, no early-onset biomarkers are currently available for Alzheimer’s disease, a cureless neurodegenerative disease afflicting millions worldwide. In this study, transgenic Caenorhabditis elegans were used to investigate changes in the metabolome after induced expression of amyloid-β. GC- and LC–MS-based platforms determined a total of 157 differential features. Some of these were identified using in-house (GC–MS) or public libraries (LC–MS), revealing changes in allantoin, cystathionine and tyrosine levels. Since C. elegans is far better suited to metabolomics studies than most other model systems, the accordance of these findings with vertebrate literature is promising and argues for further use of C. elegans as a model of human pathology in the study of AD.
Introduction
Alzheimer's disease (AD) is the most common cause of dementia, accounting for over 70 % of all cases worldwide (World Alzheimer's Report, 2009). In addition, AD causes a severe social and financial impact on patients and their environment. The number of patients worldwide with AD is estimated at a current 24 million, a number that will have tripled by 2050 (World Health Organization, 2012). AD is a progressive disease with a long preclinical phase of 10-15 years (Tarawneh and Holtzman 2012), which creates opportunities for a biomarker detection approach. AD is characterized by inflammation, neuronal loss, intracellular aggregation of the protein tau and extracellular plaques of the peptide amyloid-b. Amyloid-b, generated from the amyloidb precursor protein (APP), is mainly processed in an antiamyloidogenic manner (Haass et al. 2012). However, in AD patients, APP is overly processed according to the amyloidogenic pathway, which leads to the release of aggregating amyloid peptides (Haass et al. 2012). Additionally, the microtubule stabilizing protein tau becomes hyperphosphorylated and forms intracellular neurofibrillary tangles (Mandelkow and Mandelkow 2012). Since there is no cure for AD, it is becoming increasingly more important to find early pathological markers which could easily be measured by a non-invasive method, prior to the emergence of clinical symptoms.
Current biomarkers in cerebrospinal fluid (CSF) are the amount of total tau, phosphorylated tau, amyloid-b (Blennow et al. 2012) and the 42/40 ratio of amyloid-b (Wiltfang et al. 2007). Although CSF removal is an uncomfortable, invasive procedure, it is often used to diagnose AD. A range of imaging techniques has been developed over the last couple of years to improve the diagnosis of AD (Johnson et al. 2012). Despite the improvement of fluid biomarker discovery and imaging techniques, AD is typically diagnosed when patients start displaying cognitive impairment. Recently, researchers have started turning towards metabolomics as a promising method to assist in the search for early biomarkers of AD (Trushina and Mielke 2013).
Metabolomics uses a range of sensitive and complementary analytical platforms to study the levels of small molecules in cells, tissues, bio-fluids and entire organisms, referred to as the metabolome (Roessner and Bowne 2009). Because gene expression, protein activity and environment all exert certain influences on the metabolome, metabolomic readouts closely reflect cellular processes and provide highly accurate snapshots of an organism's state. Two platforms are mainly used: gas or liquid chromatographic separation hyphenated to mass spectrometry (GC-MS and LC-MS) and nuclear magnetic resonance (NMR)-based spectroscopy (Temmerman et al. 2013). Current literature on metabolomics related to AD is rather limited and a distinct set of metabolic markers has not yet been discovered (Graham et al. 2013a, b;Kaddurah-Daouk et al. 2011Lin et al. 2013;Mapstone et al. 2014;Motsinger-Reif et al. 2013;Orešič et al. 2011;Salek et al. 2010;Sato et al. 2012;. This may in part be due to the underlying heterogeneity of the sample groups. To improve this issue, the use of a model can be advantageous. Model organisms are used to investigate the function of certain factors in a simplified system in comparison to humans. Many model organisms have been used in the study of AD (e.g. Saccharomyces cerevisiae (De Vos et al. 2011), Caenorhabditis elegans (Link 2006), Drosophila melanogaster (Iijima-Ando and Iijima 2010), Mus musculus (Elder et al. 2010). Here, C. elegans was chosen because it allows for the most stringent level of experimental control in the study of multicellular organisms. This addresses the importance of minimizing unwanted variation, especially necessary when using sensitive techniques like metabolomics, biomarker discovery and compound screens. A transgenic, temperature-sensitive strain expressing amyloid-b in the neurons was selected, enabling time-and site-controlled expression of the transgene. Such strains have already proven their value in AD research in the study of amyloid-b aggregation (Fay et al. 1998), gene expression (Link 2003), toxicity screening (Dostal and Link 2010), learning behavior (Dosanjh et al. 2010) and proteomic changes (Boyd-Kimball et al. 2006). In this study, a metabolic fingerprint was generated of a wellestablished (Boyd-Kimball et al. 2006;Dosanjh et al. 2010;Dostal and Link 2010;Link 1995, 203) transgenic AD strain in order to monitor metabolic changes due to expression of amyloid-b. Both non-targeted GC-MS and LC-MS analyses were performed, ensuring a broad detection of the extracted metabolites. LC-MS analysis was further refined using two different chromatographic separation methods (reversed phase (RP) and aqueous normal phase (ANP)). Because C. elegans can be used to screen a large amount of metabolites in a relatively short time, these findings will form the basis of future testing of drug efficiency and the mode-of-action during AD progression.
Age-synchronized transgenic and control strains were cultured in liquid medium with E. coli K12 as a food source, for 34 h at 16°C. Food availability was held constant (OD 600 = 1.68) by adding bacteria twice a day during culturing. After 34 h, both strains were shifted to 23°C, inducing the expression of Ab 1-42 in the AD strain only. Another 30 h later, worms were collected by pelleting on ice (Fig. 1). The removal of bacteria and debris was carried out using a sucrose-flotation (60 % sucrose). After collection, the worms were washed with S-buffer (Brenner 1974) for 5 times and partitioned into aliquots of maximum 0.3 ml per tube (Precellys lysing kit, Bertin Technologies). The aliquots were snap frozen in liquid nitrogen and stored at -80°C.
A total of 14 biologically independent samples for each condition were used. As negative controls, three samples of each condition without temperature upshift were produced, in addition to four supernatant samples. The latter were used to establish the metabolite baseline of the medium: features extracted from these negative controls were omitted from all further sample analyses.
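As a hedged illustration of how such a medium baseline can be applied (the article gives no code; the feature identifiers and tolerances below are hypothetical), features detected in the supernatant blanks can simply be subtracted from the sample feature lists:

```python
def remove_medium_features(sample_features, blank_features, mz_tol=0.01, rt_tol=0.1):
    """Drop sample features that match a blank (medium) feature within
    the given m/z and retention-time tolerances."""
    kept = []
    for mz, rt in sample_features:
        in_blank = any(abs(mz - bmz) <= mz_tol and abs(rt - brt) <= rt_tol
                       for bmz, brt in blank_features)
        if not in_blank:
            kept.append((mz, rt))
    return kept

# Hypothetical (m/z, retention time in min) pairs.
samples = [(180.0634, 5.2), (132.0768, 3.1), (255.2330, 8.7)]
blanks = [(255.2331, 8.68)]
print(remove_medium_features(samples, blanks))  # the blank-derived feature is removed
```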
Extraction
We opted for an extraction using ice-cold 80 % methanol, as described earlier for C. elegans (Geier et al., 2011). Ice-cold methanol, containing external standards (13C6-sorbitol (0.5 mg ml-1), 13C5,15N-valine (0.5 mg ml-1), 2-aminoanthracene (0.25 mg ml-1) and pentafluorobenzoic acid (0.25 mg ml-1)), was added to the worm pellet until 80 % methanol was reached. All samples were homogenized at low temperature (Precellys 24, Bertin Technologies; 2 × 30 s, 4800 rpm, -20°C) to avoid potential metabolite degradation. Subsequently, samples were held on ice for 15 min and were then centrifuged for 15 min at 13,000 rpm. Supernatants were transferred to new Eppendorf tubes and pellets were re-extracted with 80 % methanol. To avoid protein contamination, all samples were filtered using a 3 kDa cutoff filter (Amicon Ultra UFC500308). Both supernatants were combined and redivided into 3 aliquots for GC-MS and LC-MS (RP and ANP coupled to quadrupole time-of-flight mass spectrometry (Q-TOF)) analyses. These aliquots were dried using a speed-vacuum concentrator and then stored at -80°C until analysis.
GC-MS analysis
The dried samples were redissolved in 10 µl of 30 mg ml-1 methoxyamine hydrochloride in pyridine and derivatised at 37°C for 120 min with mixing at 500 rpm. The samples were then treated for 30 min with 20 µl N,O-bis-(trimethylsilyl)trifluoroacetamide (BSTFA) and 2.0 µl retention time standard mixture [0.029 % (v/v) n-dodecane, n-pentadecane, n-nonadecane, n-docosane, n-octacosane, n-dotriacontane, n-hexatriacontane dissolved in pyridine] with mixing at 500 rpm. Each derivatised sample was allowed to rest for 60 min prior to injection.
Samples (1 µl) were injected into a GC-MS system comprised of a Gerstel 2.5.2 autosampler, a 7890A Agilent gas chromatograph and a 7000 Agilent triple-quadrupole MS (Agilent). The MS was adjusted according to the manufacturer's recommendations using tris-(perfluorobutyl)-amine (CF43). The GC was performed on a 30 m VF-5MS column with 0.2 µm film thickness and a 10 m Integra guard column (J & W, Agilent). The injection temperature was set at 250°C, the MS transfer line at 280°C, the ion source adjusted to 250°C and the quadrupole at 150°C. Helium was used as the carrier gas at a flow rate of 1.0 ml min-1. For the polar metabolite analysis, the following temperature program was used: start at injection 70°C, a hold for 1 min, followed by a 7°C min-1 oven temperature ramp to 325°C and a final 6 min heating at 325°C, during which the data were acquired in full-scan mode. Both chromatograms and mass spectra were evaluated using AMDIS (NIST, www.chemdata.nist.gov) and Agilent MassHunter Workstation Software, Quantitative Analysis, Version B.05.00/Build 5.0.291.0 for GCMS. Mass spectra of eluting compounds were identified using the NIST08 database and the in-house Metabolomics Australia mass spectral library. All matching mass spectra were additionally verified by determination of the retention time and index in comparison to those of standard substances. Every six samples, a pooled instrument control sample consisting of 54 standard compounds was run to evaluate potential retention time shifts and loss in sensitivity (Supplemental Fig. 1). As can be expected for GC, all quality control runs overlapped clearly and no significant retention time shift was present. A significant loss in sensitivity was not observed. It can therefore be taken into account by normalization strategies (see below). None of the differential and identified metabolites had multiple TMS derivatives. All data were exported as a comma separated value file for further data analysis.

Fig. 1 Scheme of experimental setup. Both AD model and control strains were cultured for 34 h at 16°C. After this period, a temperature upshift to 23°C was performed, inducing amyloid-β 42 expression in the AD strain only. Thirty hours later, worms were collected and snap frozen with liquid nitrogen. An 80 % ice-cold methanol extraction was performed and samples were aliquoted for GC-MS, RP-QTOF or ANP-QTOF. A After data analysis, features that were putatively identified showed a similar pattern as seen in previous human AD research. B Some of our unidentified features could be linked to LC-MS AD metabolomics profiling literature.
Materials
LC-MS grade formic acid and ammonium acetate were purchased from Sigma-Aldrich (Sydney, Australia). Deionized water (18.2 MΩ) was used throughout all experiments. HPLC grade methanol and acetonitrile (ACN) were purchased from Burdick and Jackson (Ajax, Sydney, Australia). Reversed phase (RP) chromatography was done using a Zorbax Eclipse XDB-C18 column, 2.1 × 100 mm, 1.8 µm (Agilent, Santa Clara, CA, USA). The Cogent diamond hybrid 2.1 × 100 mm, 4 µm particle size ANP column was purchased from MicroSolv Technology (Brisbane, Australia).
LC-MS system
In this experiment an Agilent 1200 series HPLC (Santa Clara, CA, USA) was used, comprising a vacuum degasser, binary pump, thermostatted autosampler and column compartment. Extraction procedures, solvent gradients, sample concentration, column conditions and mass spectrometer settings were optimized using pooled samples, reflecting an averaged sample of the overall experiment.
The settings found to be optimal for C. elegans metabolites were then used for all subsequent runs. For RP chromatography, a 10 min linear gradient of 95:5 water/ACN to 5:95 water/ACN at 0.4 ml min⁻¹ was used while the column temperature was held at 50°C. Both mobile phases contained 0.1 % formic acid. For the complementary ANP procedure, solvents were made with utmost care and the system was thoroughly flushed to ensure a proper separation of the metabolites. The organic mobile phase solvent (B) was composed of 90 % ACN with 0.1 % (w/v) ammonium acetate and 0.1 % acetic acid. The aqueous mobile phase (A) was composed of 100 % deionized water with 0.1 % (w/v) ammonium acetate and 0.1 % acetic acid (pH 3.4). The column flow-rate was 0.4 ml min⁻¹ and column temperature was kept at 50°C. The optimal gradient started at 100 % B then linearly decreased to 40 % B over 10 min, followed by a 1 min hold at 40 % B. The column was then re-equilibrated at 100 % B for 6 min. For both modes, a washing step was added every run to control for unwanted carry-over. Every six samples, a pooled biological control sample was run to evaluate potential retention time shifts and variations in mass accuracy (Supplemental Fig. 2). Retention time shifts were never larger than 0.1 min and the average deviation of the mass accuracy always remained lower than 1.78 ppm (Supplemental Table 1). The mass spectrometer used was an Agilent 6520 QTOF MS system (Santa Clara, CA, USA) with a dual spray ESI source. The conditions for the source were: nebulizer pressure of 45 psi, gas flow-rate of 10 l min⁻¹, gas temperature 300°C, capillary voltage of 4 kV and skimmer 65 V. Measurements were performed in the extended dynamic range mode (m/z range of 70-1700 amu), both in positive and negative ion mode and collecting centroid data. Data were exported as .mzdata to be further analyzed in MZmine 2.10 (Pluskal et al. 2010). In addition, to increase the accuracy of the identification, high resolution (70,000), more accurate (<3 ppm) MS and MS/MS data of differential features were obtained by running pooled samples using a 'top 10' method on a Q Exactive Hybrid Quadrupole-Orbitrap mass spectrometer (Thermo Scientific).
LC-MS data analysis was performed using MZmine 2.10 (Pluskal et al. 2010). After centroid peak detection, all data points above the noise level were processed as pairs of m/z and intensity values. Peak lists were created using the chromatogram builder. The chromatograms were deconvoluted and isotopic peaks were grouped. Finally, peak lists were aligned using the random sample consensus (RANSAC) alignment method (Pluskal et al. 2010). After filtering and gap filling, the data matrix was exported as a comma separated value file for further processing. All parameters were optimized for each data collection mode. LC-MS identification was performed using public databases (HMDB, KEGG and Metlin). Based on mass value (Δppm < 10 ppm) and accurate mass (Δppm < 3 ppm), features were matched against these databases.
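The ppm-tolerance matching step can be written down compactly. The following Python sketch is only an illustration of the mass-tolerance criterion, not the MZmine or database code used in the study; the function names, the example database entries and their m/z values are assumptions made for the example.

# Minimal sketch of ppm-tolerance matching of measured feature masses
# against database entries (illustrative only, not the MZmine workflow).
def ppm_error(measured_mz, reference_mz):
    # Relative mass error in parts per million.
    return abs(measured_mz - reference_mz) / reference_mz * 1e6

def match_features(features, database, tol_ppm=10.0):
    # features: list of measured m/z values; database: {name: reference m/z}.
    matches = {}
    for mz in features:
        matches[mz] = [name for name, ref in database.items()
                       if ppm_error(mz, ref) <= tol_ppm]
    return matches

# Hypothetical example values, only to show the call pattern.
db = {"tyrosine [M+H]+": 182.0812, "allantoin [M-H]-": 157.0367}
print(match_features([182.0815], db, tol_ppm=10.0))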
Data analysis
Statistical analysis was performed using the R package metabolomics and the MetaboAnalyst webserver (Xia et al. 2012). An initial log₂ transformation was applied to obtain a normal distribution. After this transformation, the dataset was median normalized and a combination of multivariate and univariate statistical tests was performed. Principal components analysis (PCA), an unsupervised explorative data analysis method, was performed to evaluate the overall variance in the obtained datasets. Similarly, supervised partial-least squares discriminant analysis (PLS-DA) was conducted to better explore the variance differentiating the two experimental conditions (AD vs. control). Variable Importance in Projection (VIP) scores represent which of the features contribute most to the differentiation of the experimental groups in PLS-DA analysis. Significant differences in abundance of individual features between conditions were evaluated using a standard t test. All p values were adjusted using the Benjamini & Hochberg procedure to control the false discovery rate.
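For readers who want to reproduce the pre-processing and univariate part of this workflow, the steps translate directly into a few lines of code. The sketch below is a Python stand-in (numpy/scipy/scikit-learn) for the R and MetaboAnalyst analysis actually used; the orientation of the data matrix (samples in rows, features in columns) and the function names are assumptions.

# Sketch of the statistical workflow: log2 transform, per-sample median
# normalization, PCA overview, and per-feature t tests with Benjamini-Hochberg
# adjustment (illustrative stand-in for the R/MetaboAnalyst analysis).
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

def analyse(X, groups):
    # X: samples x features abundance matrix; groups: boolean array, True = AD.
    X = np.log2(X + 1.0)                             # log2 transformation
    X = X - np.median(X, axis=1, keepdims=True)      # median normalization per sample
    scores = PCA(n_components=2).fit_transform(X)    # unsupervised overview
    _, p = stats.ttest_ind(X[groups], X[~groups], axis=0)  # per-feature t test
    # Benjamini-Hochberg adjustment of the p values.
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    q_adj = np.empty_like(q)
    q_adj[order] = np.clip(q, 0, 1)
    return scores, p, q_adj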
Results and discussion
There are already a few reports on the metabolome of human AD cerebrospinal fluid (CSF) (Kaddurah-Daouk et al. 2011; Orešič et al. 2011; Graham et al. 2013a), plasma, and of CSF reflecting the pathological progress from mild cognitive impairment to AD (Orešič et al. 2011). Despite these research efforts, working with human samples implies a high level of inherent variation (age, sex, diet, medical history, etc.), which may mask relevant results. Therefore, use of more controllable model organisms can help to deliver a more delineated fingerprint, which can then be used for targeted studies in patients. Based on this reasoning, metabolic analyses on transgenic AD mice (Fukuhara et al. 2013; Graham et al. 2013b; Salek et al. 2010) revealed a widespread perturbation of metabolism in different tissues and biofluids. In the same vein, here C. elegans was used to discover metabolic changes due to AD by generating a metabolic fingerprint of a transgenic, pan-neuronal amyloid-β strain.
Caenorhabditis elegans is a suitable model organism to screen multiple drug compounds efficiently and has the potential to discover markers for diseases in a cheap, fast and controlled manner. Metabolomics has proven its value for C. elegans research (Fuchs et al. 2010; Hughes et al. 2009), although profiling, to our knowledge, has not been performed with transgenic C. elegans AD models. We used a metabolomic approach to evaluate the C. elegans amyloid-β AD model, relying on a combination of GC-MS and four LC-MS platforms (ANP-MS and RP-MS, each acquired in positive and negative ion mode).
Overall feature detection and sample separation
A clear chromatographic separation was achieved for all approaches (Fig. 2), resulting in a final total of 157 differential features (p value < 0.05) (Table 1). Mean normalized abundance and standard deviation of all samples in all modes were determined (Supplemental Table 2).
Unsupervised multivariate statistics did not always succeed in separating the experimental groups (Supplemental Figs. 3, 4). Principal components analysis (PCA) generally resulted in a modest separation of the experimental groups. Poor PCA separation has also been reported in similar studies (Lin et al. 2013), but these rely on less controllable model systems. This could possibly be explained by the sample type: as opposed to human CSF or brain tissue samples, we used whole-mount extracts for analysis. Therefore, the behavior of some differentiating features might be diluted or masked. Even though the central nervous system is the actual site of interest, it can currently not be distinguished from other tissues in C. elegans. Nevertheless, partial least squares discriminant analysis (PLS-DA) in combination with variable importance in projection tables illustrates that the observed differential features considerably contribute to the separation of the two groups. Evaluation of volcano plots, showing the significance and fold-changes of all data points, was also indicative of a defined set of robustly differential features.
Differential features as a result of AD pathology in C. elegans
For the GC approach every peak corresponding to a feature was manually selected and compared to a library of reference compounds, resulting in a final list of 76 unique compounds. Upon differential analysis, only a few known metabolites were significantly altered in AD conditions as compared to controls (Table 2). Of all features, 38 % could not be identified because of a lack of a corresponding library entry (in-house or NIST08) (Supplemental Fig. 17). LC-MS data were acquired in positive and negative ion mode, each combined with two separation methods (RP and ANP) (Fig. 2). As such, an elaborate spectrum of metabolites could be examined. RP and ANP methodologies are suitable for differing sub-populations of the metabolome, explaining why only a few features were seen in both approaches. Features detected in both ANP and RP delivered similar readouts (Supplemental Fig. 18). It is immediately clear that the RP negative mode dataset shows the highest number of differential features (Table 1, Supplemental Table 3). By looking in further detail at this mode, many features were observed to co-elute and have similar fold changes. This may imply that a single metabolite is present as a number of adducts, which after deconvolution appear as several related features. Therefore, an adduct search using the CAMERA package was performed on the RP negative mode data (Kuhl et al. 2012). Approximately 15 % of the features were identified as adducts. The most significant differential (p value < 0.01) features are presented in Table 3; the remaining differential results are shown in Supplemental Table 3. In sum, LC-MS analysis revealed 149 differential features, mostly detected from reversed phase LC in combination with negative ion mode.
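The co-elution argument can be made concrete with a small script. The sketch below is a generic illustration and not the CAMERA algorithm: the retention-time window, the fold-change tolerance, and the short list of negative-mode adduct mass differences are assumptions made for the example.

# Sketch: flag co-eluting feature pairs whose m/z difference matches a common
# negative-mode adduct spacing (generic illustration, not the CAMERA package).
ADDUCT_DIFFS = {                 # approximate mass differences relative to [M-H]-
    "[M+Na-2H]- vs [M-H]-": 21.9819,
    "[M+Cl]- vs [M-H]-": 35.9767,
    "[M+HCOO]- vs [M-H]-": 46.0055,
}

def putative_adduct_pairs(features, rt_tol=0.1, fc_tol=0.5, mz_tol=0.01):
    # features: list of (mz, retention_time, log2_fold_change) tuples.
    pairs = []
    for i, (mz1, rt1, fc1) in enumerate(features):
        for mz2, rt2, fc2 in features[i + 1:]:
            if abs(rt1 - rt2) > rt_tol or abs(fc1 - fc2) > fc_tol:
                continue  # must co-elute and show a similar fold change
            for label, diff in ADDUCT_DIFFS.items():
                if abs(abs(mz1 - mz2) - diff) <= mz_tol:
                    pairs.append((mz1, mz2, label))
    return pairs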
By comparing the obtained results with the AD LC-MS metabolomics literature (Lin et al. 2013; Trushina et al. 2013), some putative features were found with matching mass (Δppm < 50 ppm) and similar fold changes (Fig. 1; Table 3, Supplemental Table 3). These suggest that the same reactions might be (de)activated in C. elegans as in mouse or human. Since most metabolomics experiments conducted to date relied on targeted methods, it can be assumed that this number is an underestimation of the actual correspondence (i.e. when comparing to other non-targeted studies). In addition, more corresponding features can be expected if not only amyloid-β 1-42 but also the protein tau were expressed in the C. elegans model. Amyloid-β 1-42 probably only induces a part of the metabolic changes which occur in AD (Ittner and Götz 2011); therefore, adding tau might better reflect the biochemical changes related to AD progression.
Fig. 2 Representative UPLC ESI base peak chromatograms of C. elegans extracts from different platforms. X-axis: retention time (from 0 to 14 min); Y-axis: peak intensity (total ion count from 0 to 8.0E6).
Known metabolic markers of AD
In our C. elegans experiments, several putatively identified metabolites correspond to metabolites previously associated with AD pathology (Tables 2, 3). These are of special interest for further discussion. A higher level of allantoin was observed in our experiments in C. elegans, matching observations in human plasma and mice (Fukuhara et al. 2013; Zitnanová et al. 2004). Allantoin is produced in a non-enzymatic oxidation reaction when uric acid is exposed to reactive oxygen species (ROS). This is further supported by evaluation of the GC-MS VIP scores (Supplemental Fig. 7), which are high for both uric acid and allantoin itself, indicating that these features contribute strongly to the differentiation of both conditions. Allantoin is often used as an oxidative marker (Yardim-Akaydin et al. 2006). Oxidative stress is a frequently discussed topic in AD research, since it may precede the appearance of pathological hallmarks, e.g. senile plaques and neurofibrillary tangles (Perry et al. 2002). Oxidative stress in AD is probably the result of a disturbed redox balance due to malfunctioning of the mitochondria (Zhao and Zhao 2013). Both amyloid-β and tau can be found in the mitochondria, where they dysregulate the oxidative phosphorylation system (complexes IV and I, respectively) (Rhein et al. 2009), which is associated with an increased ROS production. This could then indeed set the scene for the observed increase in allantoin levels. Upon expression of amyloid-β, tyrosine was upregulated in both LC- and GC-MS analyses, indicating the robustness of this result. Alterations of the tyrosine pathway in CSF, serum and autopsy-confirmed brain tissue of AD patients were also previously observed (Kaddurah-Daouk et al. 2011). Tyrosine is an important precursor of the neurotransmitter dopamine, and of the catecholamines norepinephrine and epinephrine. When dopamine is formed, tyrosine is processed by tyrosine hydroxylase (TH). Reduced activity of TH (Trillo et al. 2013) and reduced norepinephrine/epinephrine levels (Kaddurah-Daouk et al. 2011) have been observed in AD patients. The upregulation of tyrosine observed here might therefore be due to a reduced activity of TH, but this remains to be confirmed.
A decreased cystathionine concentration was observed after the expression of amyloid-β in C. elegans. Polymorphisms in cystathionine beta synthase (CBS), catalyzing the conversion of homocysteine to cystathionine, are well-known risk factors for AD (Perluigi and Butterfield 2012). These gene polymorphisms are known to decrease CBS activity and cause a high concentration of homocysteine and a low concentration of cystathionine (Bi et al. 2010), in line with our observations in C. elegans. The clear correlations of these identified, differential metabolites with vertebrate AD pathology support the robustness of the C. elegans model system.
Long-lived C. elegans exhibit opposite alterations
Because C. elegans is also a well-established model system for aging research, there is added value in comparing information for long-lived, healthy strains with the AD strain used here. This is because the latter is hallmarked by a decreased lifespan and impaired learning behavior due to neurodegeneration (Dosanjh et al. 2010), therefore displaying opposite phenotypes. Proteomic analysis (Depuydt et al. 2014) of the long-lived daf-2 mutant revealed an increase in tyrosine catabolism. This contrasts with the higher levels of tyrosine observed in the AD model used here. Similarly, a strong upregulation of CBS is observed in the long-lived daf-2 mutant (Depuydt et al. 2014), implying increased concentrations of cystathionine. This again confirms the molecular basis for the opposite phenotypes. As more and more omics data become available, such comparisons could in the future assist in discriminating general, aging-related effects from more AD-specific perturbations.
Prospects
Although our results are promising and some comparisons with human AD pathology could be made, elaborate comparison of studies is not straightforward. This has two reasons. First, human studies rely on very distinct sample types. Brain tissue can only be obtained from post-mortem patients and often suffers from degradation. CSF from patients is an achievable alternative, but implies a change towards indirect results from a biofluid, rather than direct information from the affected tissue. Second, limited effort has been made towards the complete determination of the metabolic fingerprint. This is readily understood from the set of metabolic markers discussed above, which do not (yet) display any clear biological coherence or pathway logic. If we want to accurately map the pathological process, complete identified metabolic fingerprints, which represent the comprehensive status of all extracted metabolites, should be compared over time in profiling experiments. This strategy will allow for the robust discovery of potential biomarker candidates and grants an invaluable advantage to complementary compound screens. After administration of a certain lead compound, the fingerprint can indicate which (sub)processes are altered. Currently, essential information can easily be overlooked due to the partial identification of the fingerprints. Such analyses should ideally be performed preclinically, in a controlled model where metabolic fingerprints can be identified more easily.
Concluding remarks
Our results show that C. elegans has the potential to develop into an amenable model for AD metabolomics experiments. The set of metabolites described here provides a blueprint for future completion of the AD fingerprint, further refining our mechanistic insights into this devastating disease. Metabolomic analyses, compound screenings and biomarker discovery require an exceptionally high level of experimental control. Future experiments with optimized double transgenic worms, expressing amyloid-β and tau together, will therefore be invaluable to assist in the advances of metabolomics with regard to AD progression. | 2022-12-27T15:41:14.081Z | 2014-07-30T00:00:00.000 | {
"year": 2014,
"sha1": "dd04ffdc0db27a1a47cfb7fccf94bfcab08ea007",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11306-014-0711-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "dd04ffdc0db27a1a47cfb7fccf94bfcab08ea007",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
167023 | pes2o/s2orc | v3-fos-license | Pitx2-dependent Occupancy by Histone Deacetylases Is Associated with T-box Gene Regulation in Mammalian Abdominal Tissue*♦
The homeodomain transcription factor Pitx2 and the T-box transcription factors are essential for organogenesis. Pitx2 and T-box genes are induced by growth factors and function as transcriptional activators or repressors. Gene expression analyses on abdominal tissue were used to identify seven of the T-box genes of the genome as Pitx2 target genes in the abdomen at embryonic day 10.5. Pitx2 activated Tbx4, Tbx15, and Mga and repressed Tbx1, Tbx2, Tbx5, and Tbx6 expression. As expected, activated genes showed reduced expression patterns, and repressed T-box genes showed increased expression patterns in the abdomen of Pitx2 mutants. Pitx2 occupied chromatin sites near all of these T-box genes. Co-occupancy by coactivators, corepressors, and histone acetylation at these sites was frequently Pitx2-dependent. Genes repressed by Pitx2 generally showed increased histone acetylation and decreased histone deacetylase (HDAC)/corepressor occupancy in Pitx2 mutants. The lower N-CoR, HDAC1, and HDAC3 occupancy observed at multiple sites along Tbx1 chromatin in mutants is consistent with the model that increased histone acetylation and gene expression of Tbx1 may result from a loss of recruitment of corepressors by Pitx2. Genes activated by Pitx2 showed less consistent patterns in chromatin analyses. Reduced H4 acetylation and increased HDAC1/nuclear receptor corepressor (N-CoR) occupancy at some Tbx4 sites were accompanied by increased H3 acetylation and reduced HDAC3 occupancy at the same or other more distal chromatin sites in mutants. Pitx2-dependent occupancy by corepressors resulted in alteration of the acetylation levels of several T-box genes, whereas Pitx2-dependent occupancy by coactivators was more site-localized. These studies will provide the basic scientific underpinning to understand abdominal wall syndromes.
Definitive endoderm and lateral plate mesoderm cells are formed and begin to migrate laterally between the ectoderm and primitive endoderm during early gastrulation. Together, these cells form the abdominal wall that begins to enclose the internal organs shortly after the mouse embryo turns. The abdominal wall at mouse embryonic day (E) 9.5 is composed of lateral plate mesoderm-derived mesenchymal cells inserted between ectoderm- and endoderm-derived cell layers. Abdominal somites are beginning to extend ventrally into the lateral plate-derived mesenchyme to form abdominal wall muscle anlagen. Classic ventral body wall defects are characterized by a thin body wall, muscular dysplasia, and/or absence of midline fusion (1, 2).
Several sequence-specific DNA binding transcription factors (SSTFs) are involved in the pathogenesis of congenital body wall defects. These include the homeodomain transcription factor Pitx2, which is essential for organ formation and body wall closure (3-6). Pitx2 is a target of growth factor signaling pathways that mediate cell type-specific control of proliferation. Activation of the Wnt/β-catenin pathway results in the release of Pitx2-associated corepressors and mediates recruitment of specific coactivator complexes in myoblasts (7). T-box transcription factors are involved in induction of mesendoderm (8), subdivision of posterior mesoderm into rostral and caudal domains, organ patterning and formation (9, 10), and body wall development (11). T-box genes can be rapidly induced by growth factors (12), function as transcriptional activators or repressors (13-20), and interact with nucleosome assembly (21) or chromatin-modifying proteins (22).
Coactivators and corepressors mediate the regulatory actions of SSTFs by a series of exchanged cofactor complexes that execute enzymatic modifications of nucleosomes and chromatin and interact with the basal transcription apparatus. Coactivators and corepressors are components of complexes that exhibit diverse enzymatic activities. These typically involve the covalent modification of histone tails by adding or removing acetyl, methyl, phosphate, ADP-ribose, ubiquitin, or small ubiquitin-like modifier moieties (23). Acetylation of histones H3 and H4 counteracts the tendency of nucleosomal fibers to fold into higher order chromatin structures. Relaxed chromatin is more accessible to interacting proteins and active gene expression. Acetylation is mediated by a series of histone acetyl transferases including the CBP/p300 family. In contrast, histone deacetylases (HDACs) reduce gene expression by deacetylating lysine residues in the N-terminal tails of histone proteins, thereby encouraging chromatin compaction. HDAC1, HDAC2, and HDAC3 have also been found in both N-CoR and silencing mediator for retinoid or thyroid hormone receptor (SMRT) complexes (24-26).
Pitx2 is first expressed in the ventrally located somatopleure at E8.5, which will become the abdominal wall, and follows myogenic expression in the trunk (27). Pitx2 plays a critical role in jaw development by regulating Tbx1 (28). It is known that several T-box genes are expressed in and play essential roles in body wall development (11). We therefore examined T-box expression in Pitx2 mutants during abdominal wall development. Seven T-box genes were identified and validated as Pitx2 target genes. The genomic non-coding regions surrounding all of these genes showed clustered and conserved Pitx2 binding motifs within and outside of conserved and ultraconserved regions. Chromatin immunoprecipitation (ChIP) assays with abdominal tissue were used to demonstrate Pitx2 occupancy at all T-box target genes in vivo and compare Pitx2 occupancy at several sites scattered throughout the Pitx2-repressed Tbx1 and Pitx2-activated Tbx4 genes. The Pitx2 dependence of co-occupancy by coactivators (CBP and PCAF), corepressors (N-CoR, HDAC1, and HDAC3), and histone acetylation was also examined at these sites. Pitx2-dependent changes in chromatin were consistent with changes in expression for almost all sites in Pitx2-repressed T-box genes. In contrast, chromatin changes in Pitx2-activated T-box genes showed a less predictable pattern. The data were consistent with the model that Pitx2 represses its target genes by recruiting corepressors and HDACs at numerous genomic locations within these genes (7). The opposing Pitx2-dependent chromatin effects observed at different genomic sites in Tbx4 suggest that Pitx2 activates its target genes by more site-localized, or indirect, mechanisms.
EXPERIMENTAL PROCEDURES
Mice-All research was conducted in compliance with the Public Health Service (PHS) Policy on the Humane Care and Use of Laboratory Animals, the United States Department of Agriculture (USDA) Animal Welfare Act and Regulations (66), and the United States Government Principles for the Utilization and Care of Vertebrate Animals Used in Research, Teaching and Testing. All research that involved the use of vertebrate animals was reviewed and approved by the Oregon State University Institutional Animal Care and Use Committee (IACUC). The ICR Pitx2+/LacZ mouse line (6) was used. Pitx2+/LacZ mice were bred, and females were checked for the presence of vaginal plug (E0). Embryos were isolated at E10.5, and yolk sacs were used for genotyping.
RNA Preparation and Microarray Analysis-Total RNA was prepared using Qiagen RNeasy mini kit, labeled using Affymetrix one-step labeling, and used to probe the Affymetrix Mouse Genome 430a 2.0 array. Results were analyzed using FileMakerpro software (29,30).
Whole Mount RNA in Situ Hybridization-Mouse embryos free of membranes were fixed with 4% paraformaldehyde, dehydrated, and rehydrated with a graduated MeOH in phosphate-buffered saline, 0.1% Tween series. Embryos were treated with 10 µg/ml protease K (Invitrogen) and processed as described previously (28). Stained samples were photographed with a Discovery V8 Zeiss microscope and Axiocam camera system.
Quantitative Real-time PCR (qPCR)-cDNA or immunoprecipitated (IP) DNA (25 ng) was analyzed by qPCR using SYBR Green I methodology. DNA was preincubated at 95°C for 10 min followed by 40 cycles of denaturation at 95°C for 15 s, annealing at 60°C for 32 s, extension at 72°C for 32 s, and a final elongation step of 2 min at 72°C. The melting curves of all samples were routinely determined by melting curve analysis to ascertain that only the expected products had been generated. All samples were analyzed in triplicate and normalized by glyceraldehyde-3-phosphate dehydrogenase expression levels. Primers used are listed in supplemental Table 1.
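For the expression measurements, normalization to the glyceraldehyde-3-phosphate dehydrogenase reference can be illustrated with a short relative-quantification sketch. The 2^(-ΔΔCt) scheme shown here is one standard way to implement such a normalization and is an assumption on our part; the methods above only state that samples were run in triplicate and normalized to the reference gene.

# Illustrative 2^(-ddCt) relative quantification against a GAPDH reference
# (the exact normalization scheme is assumed, not stated in the methods).
from statistics import mean

def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    # ct_* are lists of triplicate crossing-threshold values;
    # *_cal are the calibrator (e.g. wild type) measurements.
    d_ct_sample = mean(ct_target) - mean(ct_reference)
    d_ct_calibrator = mean(ct_target_cal) - mean(ct_reference_cal)
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)   # fold change relative to the calibrator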
Immunoprecipitation and Immunoblotting-Nuclear extracts were prepared as described previously (34) from abdominal tissue of E10.5 Pitx2+/+ mice. 100 µg of nuclear extract were precleared with protein G-Sepharose (Amersham Biosciences) in Buffer IP150 (10 mM HEPES, pH 7.9, 10% glycerol, 1 mM EDTA, 150 mM NaCl, 0.1% Nonidet P-40) at 4°C for 1 h to reduce nonspecific protein binding. After centrifugation, the precleared samples were incubated with 1 µg of antibodies at 4°C overnight followed by the addition of protein G-Sepharose and incubation at 4°C for 1 h. Sepharose beads were collected by centrifugation, washed three times with buffer IP350 (10 mM HEPES, pH 7.9, 10% glycerol, 1 mM EDTA, 350 mM NaCl, 0.1% Nonidet P-40) and once with IP150, and resuspended in denaturing sample buffer. Immune complexes were separated by SDS-PAGE and analyzed by Western blotting with appropriate antibodies.
Identification of T-box Genes Regulated by Pitx2 in Abdominal Tissue
Gene expression analysis was used to identify Pitx2 target genes during development. Abdominal walls of E10.5 mice were obtained by cutting across embryos behind the forelimb and in front of the hindlimb. The neural tube and obvious internal organs were removed. The abdominal tissue from which total RNA was extracted therefore consisted predominantly of abdominal body wall and included somites. Total RNA was prepared from the abdominal walls of three pools of wild type (Pitx2+/+, WT), three pools of heterozygote (Pitx2+/LacZ, HET), and three pools of mutant (Pitx2LacZ/LacZ, MUT) embryos. Probes prepared from these RNA were applied to nine Mouse Genome 430a 2.0 microarrays. The results from all nine arrays were normalized by RMA. The average expression value obtained from three biological replicates was compared between genotypes. Array results have been deposited for public access at ArrayExpress under the accession number E-MEXP-2332.
The arrays demonstrate that Pitx2 RNA levels decline significantly in the MUT isolates. Pitx2 expression was measured by two probe sets. Probe set 1424797-a-at, which was located within the 3′-non-translated region of Pitx2, produced a robust signal and showed 1.8-fold lower signal in MUT than WT or HET. A second Pitx2 probe set (1450482-a-at), which was located within the coding portion of the last exon, produced a 3-fold lower signal than the first probe and showed 4.7- and 4.3-fold lower signal in MUT than in WT or HET, respectively. Pitx2 levels were not significantly different in WT and HET arrays. Only 10 of ~20,000 other genes showed higher fold changes than Pitx2.
Pitx2 affected the expression of many other genes at lower levels. Scatter plot comparisons of individual arrays showed fewer changes in internal comparisons, between replicate arrays from the same genotype, than in cross comparisons, between individual arrays from different genotypes (data not shown). The number of regulated genes can be estimated by permutation fold-scanning analysis (28, 29). This non-parametric method counts probe set comparisons that fall above each fold cutoff between conditions (cross comparisons) and within conditions (internal comparisons) and uses them to calculate the number of regulated genes as a function of fold change or false discovery rate (FDR) (29, 30). An FDR of 8% was calculated for WT versus MUT comparisons at a 1.7-fold cutoff. There were 600 probe sets that changed above this cutoff, corresponding to ~300 genes. An FDR of 9% was calculated for HET versus MUT comparisons at this fold cutoff, and 550 probe sets changed above the cutoff. In contrast, a minimum FDR of 55% was calculated at a 1.4-fold cutoff for WT versus MUT comparisons, and less than 50 probe sets changed above this cutoff (data not shown).
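A stripped-down version of this counting scheme can be written as follows. This is our reading of the published description (references 28, 29), not the authors' code; the use of linear-scale expression values, the way comparisons are averaged, and the exact FDR formula are assumptions made for the sketch.

# Sketch of permutation fold-scanning: compare the number of probe sets that
# exceed a fold-change cutoff between conditions (cross comparisons) with the
# number that exceed it within a condition (internal comparisons).
import numpy as np

def fold_scan(cond_a, cond_b, cutoff):
    # cond_a, cond_b: lists of replicate arrays (linear-scale expression values,
    # one value per probe set, same probe order in every array).
    def count_above(x, y):
        ratio = np.maximum(x, y) / np.minimum(x, y)   # symmetric fold change
        return int(np.sum(ratio >= cutoff))
    cross = np.mean([count_above(a, b) for a in cond_a for b in cond_b])
    internal = np.mean(
        [count_above(x, y)
         for cond in (cond_a, cond_b)
         for i, x in enumerate(cond) for y in cond[i + 1:]]
    )
    fdr = internal / cross if cross else float("nan")
    regulated = max(cross - internal, 0.0)            # estimated regulated probe sets
    return regulated, fdr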
Scatter plot comparisons between individual replicate arrays within conditions also showed that one of the three replicate arrays in each condition was different from the other two, which matched more closely. The outlier in each of the three conditions was the initial array that was produced for preliminary studies at a different time than the other two, suggesting a technical deviation. The latter two were therefore used to establish a list of putative T-box target genes (Table 1). T-box gene expression was consistently altered in all three replicates.
Previous studies have shown that Pitx2 regulates Tbx1 by direct interaction with its promoter during branchial arch development (30). Consequently, alteration of T-box gene expression was of particular interest to us in the abdominal wall context as well. All probe sets corresponding to T-box genes were therefore examined for Pitx2-dependent expression (Table 1). Tbx2, Tbx4, Tbx5, Tbx15, Tbx19, Tbx20, and Mga showed consistent, significant alterations of expression in different comparisons and/or with different probe sets. Six genes of the T-box family were selected as putative Pitx2 targets in the abdomen (Table 1, bold). Single probe sets for Tbx6 and Tbx1 showed little and/or insignificant change of expression in array data. These genes were selected for further analyses because of their known abdominal expression and interaction with Pitx2, respectively. Tbx1 is involved in craniofacial and cardiac development (35, 36). Tbx2 is involved in cardiac and limb development (37). Tbx4 is involved in allantois and hindlimb formation (38, 39). Tbx5 is involved in cardiac development and is required for forelimb formation (13, 40). Tbx15 is involved in skeletal development (41, 42). Mga is a dual specificity transcription factor that regulates both Max network and T-box genes with no known function yet during development (43). Tbx19 and Tbx20 were not selected for further evaluation because they showed very small changes in WT versus MUT comparisons, and they have predominantly been associated with pituitary (44) and cardiac development (45). Microarray data were initially validated by qPCR on reverse-transcribed RNA from E10.5 abdominal tissue. RNA levels for selected putative T-box target genes were always altered in the same direction but sometimes to a different extent, as in the microarray results. RNA was both from the same preparations used in array analyses and from fresh isolates. In triplicate analyses, RNA levels for Tbx15, Tbx4, and Mga were lower in MUT by 10-, 2.6-, and 2.4-fold, respectively (Fig. 1). Pitx2 therefore activated these three genes in the abdomen. Tbx1, Tbx2, and Tbx5 RNAs were higher in MUT by 2.4-, 2.3-, and 4.2-fold, respectively (Fig. 1). Additionally, qPCR showed that Tbx6, which showed no significant signal in the microarray data, produced 9-fold more RNA in mutants. Pitx2 therefore repressed these four genes in the abdomen.
TABLE 1. Microarray expression measurements on all T-box genes in the mouse genome. Genes in boldface were selected for further evaluation. *, difference not significant at p = 0.05 using Student's t test. Values are the average of two biological replicates from the probe set with the highest average signal across all six arrays. Fold changes were computed by dividing the higher value by the lower and then assigning direction; each fold change is for a different probe set for the same gene, organized from highest to lowest average signal for that gene over all six arrays.
Abdominal Expression Patterns of T-box Target Genes-The T-box genes, identified as Pitx2 target genes, were examined by RNA whole mount in situ hybridization of E10.5 WT and MUT mice. Tbx1 was expressed in the foregut but not in the abdominal wall of WT animals (Fig. 2A) (46). However, MUT had a severely underdeveloped and hypocellular intestine, and the elevated Tbx1 expression observed in the MUT may be in the body wall as well as in the intestine (Fig. 2B, asterisk). This would be consistent with the idea that Tbx1+ splanchnopleure cells fail to colonize an underdeveloped gut in MUT. Tbx2 was strongly expressed in the somatopleure (Fig. 2D), forelimbs, hindlimbs, and the surrounding area in MUT. This expression was not observed in WT (Fig. 2C). Tbx5 expression was apparent in the MUT somatopleure (Fig. 2F, arrow) but not in WT embryos (Fig. 2E). Tbx6 expression was higher in the intestine (Fig. 2H, asterisk) and somatopleure of MUT (Fig. 2H, arrow). Tbx4 expression was lower in the intestine (Fig. 2J, asterisk), somatopleure (Fig. 2J, asterisk), and hindlimbs (data not shown) of MUT. Tbx15 was expressed in the margin of hindlimbs (Fig. 2K, double arrows) and somatopleure (Fig. 2K, arrow) in WT mice. It was not expressed in any of these tissues in the MUT (Fig. 2L). Mga is expressed before gastrulation (43), and no changes of expression pattern were detected in the abdomen at E10.5 (Fig. 2, M and N). The expression patterns of all selected T-box genes, except Mga, were altered in the area between the posterior margin of the forelimb and the anterior margin of the hindlimb in a manner consistent with the array and qPCR results. Thus, Pitx2 regulates expression of at least six T-box genes in the abdomen.
Selection of Test Amplicons in T-box Genes
The genomic sequences surrounding Pitx2-dependent T-box genes were searched for possible cis-regulatory modules that could mediate the observed Pitx2-dependent expression. Several methods were used to identify PCR amplicons of genomic sequences for subsequent ChIP analyses. The mouse genomic sequences (mouse build 37) encompassing each T-box transcript were downloaded using the genomic representative sequences link on the Mouse Genome Informatics (MGI) web site. Genomic sequences 20 kb upstream and downstream of the transcription unit were included in the download to give the genomic loci to be searched. Initially, these loci were searched for the optimal consensus Pitx2 binding motif TAATCY (47, 48). Several PCR amplicons were selected from the region 20 kb upstream of the MGI transcription start site by looking for clusters of these motifs or individual motifs clustered with bicoid motifs. A bicoid motif was defined as CAATCC, TGATCC, TATTCC, or AAATCC (49).
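The motif search described here is simple enough to reproduce directly. The Python sketch below scans a sequence for the TAATCY consensus (Y = C or T) and the listed bicoid-like motifs on both strands and reports windows containing clustered hits; the 1 kb window size and the threshold of two motifs per cluster are illustrative assumptions, since the text does not give exact values.

# Sketch: scan a genomic sequence for Pitx2 (TAATCY, Y = C/T) and bicoid-like
# motifs on both strands and report clustered hits within a sliding window.
# Window size and cluster threshold are illustrative choices.
import re

PITX2 = re.compile(r"TAATC[CT]")
BICOID = re.compile(r"(?:CAATCC|TGATCC|TATTCC|AAATCC)")

def revcomp(seq):
    # Assumes an uppercase A/C/G/T sequence.
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def motif_hits(seq):
    hits = []
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        for pat, label in ((PITX2, "Pitx2"), (BICOID, "bicoid")):
            for m in pat.finditer(s):
                pos = m.start() if strand == "+" else len(seq) - m.end()
                hits.append((pos, label, strand))
    return sorted(hits)

def clusters(hits, window=1000, min_hits=2):
    # Report windows (starting at each hit) that contain at least min_hits motifs.
    out = []
    for pos, _, _ in hits:
        near = [h for h in hits if pos <= h[0] < pos + window]
        if len(near) >= min_hits:
            out.append((pos, near))
    return out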
All other amplicons were selected by the following method. The sequence of each genomic locus was submitted to GenomeVISTA, selecting Mouse Feb 2006 as the base genome (mm8). Results were viewed as a VISTA track on the UCSC Genome Browser (mm8). The sites containing the TAATCY motif were identified. Each of these sites was evaluated for the number of species (of mouse, rat, human, dog, opossum, chicken, Xenopus, Tetraodon) in which the motif was conserved and the number of species in which the surrounding block was conserved (Fig. 3). Amplicons were designed around sites that were conserved in multiple species.
Pitx2 Occupies Genomic Sites in All T-box Target Genes-ChIP assays of E10.5 abdominal tissue from WT and MUT mice were performed to determine whether Pitx2 occupies identified sites in T-box loci in vivo. Formaldehyde cross-linked chromatin was sonicated, immunoprecipitated with an anti-Pitx2 antibody, and analyzed by qPCR. The signal from immunoprecipitated chromatin was represented as a percentage of the signal obtained from the chromatin put into the immunoprecipitate (% input (% IP)). Test genomic amplicons were analyzed for each T-box gene to determine whether any enrichment of Pitx2 occupancy could be demonstrated in WT abdominal tissue (Table 2). At each T-box gene, Pitx2 occupancy was strongly enriched in the abdominal tissue of WT when compared with MUT mice. The crossing threshold (Ct) value in MUT was not significantly different from the no-antibody control. The E10.5 abdominal tissue preparations include a large fraction of Pitx2+ cells derived from the somatopleure, splanchnopleure, and dermomyotome. However, they also include a large fraction of Pitx2− cells derived from the sclerotome and other ectoderm-derived cells. Pitx2 occupancy indicated that it was physically present in the chromatin near the T-box target genes and that it may directly influence their expression at the transcriptional level.
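The percent-input representation used throughout the ChIP experiments follows from the Ct values in a single formula. The sketch below assumes the common approach of correcting the input Ct for the fraction of chromatin saved as input; the 1% input fraction in the example is an assumption, since the actual fraction is not stated here.

# Illustrative percent-input calculation for ChIP-qPCR signals.
# The input dilution (fraction of chromatin kept as input) is an assumed value.
from math import log2

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    # Adjust the input Ct to represent 100 % of the chromatin, then compare.
    ct_input_100 = ct_input - log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_100 - ct_ip)

# Example: an IP Ct of 28 against a 1 % input Ct of 24 gives roughly 0.06 % of input.
print(round(percent_input(28.0, 24.0), 2))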
Pitx2-dependent Recruitment of Coactivators to T-box Target Genes-A large number of complexes have been identified that serve as coactivators and corepressors of transcription. SSTFs can recruit distinct combinations of these cofactors depending on cell type, promoter, DNA binding site, and the activity of different signaling pathways. Thus, we tested whether Pitx2 affected the recruitment of coactivators or corepressors to T-box target genes. ChIP assays were used to assess the role of Pitx2 in recruiting coactivators or corepressors to T-box genes in the abdominal tissue at E10.5 (Fig. 4). The recruitment of the coactivators CBP (Fig. 4A) and PCAF (Fig. 4B) and of the corepressors HDAC1 (Fig. 4C), HDAC3 (Fig. 4D), and N-CoR (Fig. 4E) was determined at the same seven genomic sites shown to be occupied by Pitx2.
CBP and PCAF have histone acetyl transferase activity. If Pitx2 recruited these proteins to the three Pitx2-activated target genes (Tbx4, Tbx15, Mga), then one would expect higher occupancy of these proteins in WT than in MUT. CBP occupancy at the Pitx2-occupied amplicons was higher in WT for only one of the three activated target genes (Mga; Fig. 4A). The amplicons in the other two activated target genes showed significantly lower CBP occupancy in WT (Tbx4 and Tbx15; Fig. 4A). PCAF occupancy at all three activated genes was not significantly dependent on Pitx2. The levels of Pitx2 and CBP occupancy at the Mga amplicon in WT were 3 and 6%, respectively. Both Pitx2 and CBP occupancy drop below 0.5% in MUT, consistent with the model that Pitx2 binds to the Mga amplicon and helps recruit CBP.
If Pitx2 occupancy prevented recruitment of coactivators to genomic sites in the four repressed target genes, then one would expect lower occupancy of coactivators at these sites in WT. Clearly, the Tbx1, Tbx5, and Tbx6 amplicons have significantly lower occupancy of both CBP and PCAF in WT when compared with MUT (Fig. 4, A and B). However, the Tbx2 test amplicon had significantly less CBP recruitment in MUT, and PCAF recruitment was not Pitx2-dependent (Fig. 4, A and B). The recruitment of the coactivators CBP and PCAF was not Pitx2-dependent across all test amplicons but may contribute to the transcriptional regulation of a subset of T-box genes.
Figure legend (whole mount RNA in situ hybridization of WT and MUT E10.5 mice): A and B, Tbx1 expression in the foregut (asterisks). B, Tbx1 expression was higher in mutants. C and D, Tbx2 expression was higher in the somatopleure and limb mesenchyme of mutants. E and F, Tbx5 expression was higher in somatopleure (arrow) and intestine (asterisk) of mutants. G and H, Tbx6 expression was higher in somatopleure (arrow) and intestine of mutants. I and J, Tbx4 expression was lower in somatopleure (arrow) and intestine (asterisk) of mutants. K and L, Tbx15 expression was lower in the hindlimb margin (double arrows) and somatopleure of mutants. M and N, no alteration of Mga expression pattern was apparent.
Pitx2-dependent Occupancy by Corepressors to T-box Target Genes
Corepressors can reduce transcription in a chromatin region they occupy by recruiting histone deacetylase activity. If Pitx2 recruits corepressors to the sites it occupies on repressed T-box genes, then one would expect to see higher corepressor occupancy at these amplicons in WT tissue. HDAC1 occupancy at T-box genes in WT tissue was significantly higher in all four repressed target genes (Tbx1, Tbx2, Tbx5, Tbx6; Fig. 4C) and unaffected in all three activated target genes (Tbx4, Tbx15, Mga; Fig. 4C). The data were therefore consistent with Pitx2-dependent recruitment of HDAC1 at the repressed genes. HDAC3 occupancy was detected at three of the four repressed T-box genes, but it was not Pitx2-dependent (Tbx1, Tbx2, Tbx5; Fig. 4D). However, HDAC3 occupancy was not detected in MUT at test amplicons at Tbx15, Tbx4, Tbx6, and Mga (Fig. 4D). In the three activated target genes (Tbx4, Tbx15, Mga), all or nearly all of the Pitx2-occupied sites could also be occupied by N-CoR. N-CoR occupancy declined in all seven amplicons in MUT (Fig. 4E). The data were consistent with the model that Pitx2 recruits N-CoR to these sites in a subset of Pitx2-expressing abdominal cells. However, lower N-CoR occupancy in MUT at sites near activated target genes is inconsistent with the lower expression levels of these genes in MUT tissue.
Pitx2 Alters Histone Acetylation of T-box Genes In Vivo-The acetylation of histones H3 and H4 at each of the T-box gene amplicons was measured to determine whether Pitx2-dependent changes in gene expression were correlated with expected changes in histone acetylation (Fig. 4F). The Pitx2-dependent occupancy of HDACs at Pitx2-repressed loci suggested that histones at these loci would be less acetylated in the presence than in the absence of Pitx2. Immunoprecipitations with anti-ac-H3 or anti-ac-H4 antibodies demonstrated increased histone H3 and histone H4 acetylation in MUT at test amplicons of Pitx2-repressed genes. Thus, increased expression of Pitx2-repressed genes in MUT was clearly associated with increased histone acetylation. Similar tests on amplicons associated with the Pitx2-activated genes also showed the expected correlation. Both histones were less acetylated in the Tbx15 amplicon, and only H4 was less acetylated in the Tbx4 amplicon. No acetylation was detected in the Mga amplicon (data not shown).
These data suggest that Pitx2 is involved in regulating the acetylation state of chromatin in T-box genes in the abdominal wall. This is likely due to the recruitment of corepressor complexes including HDACs at Tbx1, Tbx2, Tbx5, and Tbx6, which were repressed by Pitx2. One gene that was activated by Pitx2, Tbx15, showed greater histone H3 and H4 acetylation in WT. This was surprising because WT tissue showed higher or equal occupancy by N-CoR, lower occupancy by CBP, and only modestly high occupancy by PCAF, suggesting that other factors are involved. The Tbx15 gene locus contains many potential Pitx2 sites, so it is possible that this is a result of the sites chosen for analysis. Tbx4 showed higher H4 acetylation and similar H3 acetylation in WT tissue. As with Tbx15, this was difficult to reconcile with the similar or higher corepressor HDAC occupancy and the similar or lower coactivator/histone acetyl transferase occupancy at this locus. Alternatively, in the absence of Pitx2, untested corepressors may increase occupancy at these genes, or contributions from other Pitx2 sites can be an indirect effect.
Chromatin Analyses along the Repressed Tbx1 Locus-The analyses described above use a single site to probe Pitx2, corepressor, and coactivator occupancy in each Pitx2-regulated T-box gene. The fact that occupancy of Pitx2 was observed at each selected site suggests either that the site selection was very good or that Pitx2 occupancy is a property that can generally be demonstrated in conserved sites that contain consensus DNA binding motifs for Pitx2. Pitx2-dependent occupancy by corepressors was therefore tested at four other genomic sites around the repressed Tbx1 gene in addition to the position -1335 examined above (Fig. 6A).
ChIP assays showed that Pitx2 occupied both intergenic and intronic regions of Tbx1 (Fig. 6B). There was no enrichment for any amplicon in MUT tissue. No Pitx2 enrichment was detected at a test amplicon in a region containing no Pitx2 sites (data not shown). It appeared that Pitx2 occupancy was higher in the upstream intergenic region and decreased significantly in the intronic and downstream intergenic region. HDAC1 occupancy decreased in MUT at all sites tested (Fig. 6C). Pitx2 dependence of HDAC1 occupancy was 4-fold higher at the position -9409 than at any of the other four amplicons. HDAC1 occupancy was greatest at the +1691 and +7153 amplicons, which showed the lowest Pitx2 occupancy. HDAC occupancy depended in part on Pitx2 at all amplicons and did so at some sites more strongly than at others. HDAC3 occupancy showed a far greater variation in level and Pitx2 dependence than HDAC1. All amplicons, except one (-1335), showed higher HDAC occupancy in WT than MUT. This further confirms the correlation between corepressor occupancy in WT and Pitx2-repressed gene expression that was described above. The extent of Pitx2-dependent HDAC3 occupancy seems to complement that of HDAC1. The -9409 amplicon showed the lowest fold change in HDAC3 occupancy and the highest fold change in HDAC1 occupancy. Conversely, the -2711, +1691, and +7153 amplicons showed less Pitx2 dependence in HDAC1 occupancy and far more Pitx2 dependence in HDAC3 occupancy. N-CoR occupancy at all amplicons was Pitx2-dependent, with no occupancy detected in mutants. N-CoR occupancy was higher at -9409, the amplicon that had the highest Pitx2 dependence in HDAC1 occupancy (Fig. 6E).
Acetylation of histone H3 and/or H4 increased at all five amplicons in MUT (Fig. 6F). The -9409 amplicon showed Pitx2 dependence for H3 but not for H4 acetylation. The converse was true at the +1691 amplicon. The lack of acetylation of either H4 or H3 at these amplicons appeared to be compensated by robust increases in the acetylation of the other histones. Clearly, increased acetylation of H3 and H4 occurred in mutant tissue in all regions of the Pitx2-repressed genes.
Chromatin Analyses along the Activated Tbx4 Locus-Similar studies were performed for Tbx4, a gene activated by Pitx2. 10 amplicons containing Pitx2 binding sites throughout the non-coding region of the Tbx4 gene were analyzed together with the -9923 amplicon analyzed above (Fig. 7A). The Pitx2 binding site at position -48,363 was within an ultraconserved region just downstream of the Tbx2 gene. Tbx2 and Tbx4 are neighboring genes. ChIP assays demonstrated that Pitx2 occupied numerous amplicons in WT tissue. However, Pitx2 occupancy was not detected at amplicons containing Pitx2 sites at positions -10,654 and +13,839 (data not shown), confirming that Pitx2 is not present at all Pitx2 elements. No occupancy was detected on any amplicons in MUT tissue. A control amplicon in a region with no Pitx2 binding sites found no enrichment above background in WT or MUT (data not shown). As with Tbx1, Pitx2 occupancy appeared to be greatest in amplicons located in the upstream intragenic region (Fig. 7B).
HDAC1 occupancy could also be detected at all sites but was dependent on Pitx2 at only 5 of 11 amplicons (Fig. 7C). Pitx2 knock-out resulted in increased occupancy at three proximal amplicons and decreased occupancy at two distal amplicons. HDAC1 occupancy was not correlated with Pitx2 occupancy at different sites. Moreover, the Pitx2 dependence of HDAC1 occupancy was also not correlated with Pitx2 occupancy.
HDAC3 occupancy was also strongly Pitx2-dependent but could not be detected on two amplicons. HDAC3 occupancy was lower in MUT at most amplicons. This would be expected to increase expression in MUT. Tbx4 expression decreases in MUT. Increased corepressor occupancy in MUT tissue was only sporadically observed for different corepressors at different amplicons. HDAC1 occupancy was increased at two amplicons (-13,589, +11,161), HDAC3 was increased only at +11,161, and N-CoR was increased at three amplicons (13,079, -2000, +11,161). Thus, corepressor occupancy of amplicons at the Pitx2-activated locus (Tbx4) was less consistent with changes in gene expression, less consistent across the locus, and less Pitx2-dependent than at the Pitx2-repressed locus (Tbx1).
Histone acetylation also did not change in as consistent a fashion along the entire locus in the Pitx2-activated gene as in the Pitx2-repressed gene. H3 acetylation decreased at one amplicon and increased at two amplicons in MUT tissue. H4 acetylation decreased at five amplicons. However, the hypoacetylation of chromatin in MUT was more consistent closer to the transcriptional start site. Tbx4 is downstream of Tbx2, a Pitx2-repressed gene, which may contribute to the inconsistent acetylation pattern in chromatin regions upstream of Tbx4. Histone acetylation was extremely low in amplicons surrounding the upstream end of the transcription unit.
DISCUSSION
Pitx2 is a bicoid-related homeodomain transcription factor that plays a critical role in the development of multiple organs, including heart, lung, intestine, pituitary gland, tooth, muscles, and body wall closure (3-6). Although it is obvious that Pitx2 is required for proper organ and tissue formation, the role and mechanisms by which Pitx2 influences body wall formation and closure are unclear. In the present studies, we provide evidence indicating that Pitx2 acts upstream of T-box gene family members by occupying specific sites on their promoters. Occupancy by coactivators and corepressors occurs by a Pitx2-dependent mechanism and results in modification of their histone acetylation status.
The most striking phenotype of the Pitx2 knock-out mice is the open body wall and the externalization of the thoracic and abdominal viscera during embryonic development (5,6). Pitx2 is expressed in the lateral plate mesoderm and somatopleure (27), tissues that contribute to body wall formation. Thus, Pitx2 is likely placed temporally upstream in gene regulatory networks involved in this developmental process, and there was a need to identify its target genes. Gene expression analysis was used to identify the genes that are regulated by Pitx2 during body wall formation. One of the families regulated by Pitx2 was the evolutionarily conserved family of T-box genes. T-box genes are essential for patterning embryonic mesoderm and forming organs. Tbx2 superfamily members Tbx2, Tbx4, and Tbx5 and the Tbx1 superfamily member Tbx15 are expressed in the body wall and in the anterior and posterior margins of the developing limbs (11). Expression of all four genes was altered in the absence of Pitx2. Tbx1, Tbx2, and Tbx5 expression was up-regulated in mutant embryos (Figs. 1A and 2). The differential gene regulation of T-box genes by Pitx2 in different tissues during development could be explained by changes in the transcriptional machinery in time and space. These experiments were performed at E10.5 when Pitx2 MUT already exhibit gross morphological defects. T-box gene expression is widespread during development. T-box genes are expressed in all stages of development from the oocyte to the adult. Pitx2 expression overlaps in the developing embryo with T-box genes from as early as E8.0. Future experiments should characterize the role of Pitx2 expression on T-box gene expression at other stages in development.
Multiple sites occupied by Pitx2 have been identified in the upstream genomic sequences and intronic sequences of T-box genes. Evolutionary alignment of T-box genes confirmed the presence of evolutionarily conserved Pitx2 binding motifs (Fig. 3). The identification of multiple sites that matched the optimal DNA binding sequence of Pitx2 suggests that Pitx2 directly binds to T-box genes and may regulate their expression. This was confirmed with in vivo ChIP assays, where Pitx2 was found to occupy its optimal sites (Table 2). To fully characterize Pitx2 occupancy at the T-box gene loci, genome-wide ChIP-sequencing would need to be used due to the number of potential Pitx2 binding sites at these loci. The exact role of Pitx2 occupancy in regulated transcription needs to be further characterized due to possible indirect effects from the altered expression or activity of other genes.
Regulated exchange of HDAC1/β-catenin converts Pitx2 from repressor to activator for cell cycle control genes, such as cyclin D2 (7). Pitx2-dependent HDAC1 occupancy was associated with decreased acetylation of histone H3 and H4 and repression of Tbx1, Tbx2, Tbx5, and Tbx6 (Figs. 5-7). However, occupancy by the coactivators CBP and PCAF and/or the corepressors HDAC3 and Pitx2-dependent N-CoR did not universally correspond to activation or repression of T-box genes. Corepressor occupancy in Pitx2-activated genes was unaffected or affected in the direction opposite of what was expected at some sites, suggesting that although Pitx2 alters the occupancy of these cofactors, different complexes may occupy different Pitx2 sites. Due to the number of Pitx2 sites at T-box loci, the occupancy of coactivators and corepressors throughout the loci needs to be characterized. The coactivators and corepressors targeted to these sites in the presence and absence of Pitx2 are likely influenced by other nearby transcription factor binding sites. In addition, intronic Pitx2 binding sites can regulate gene transcription (50). Pitx2 interacts with multiple coactivator and corepressor complexes (7). Pitx2 was physically interacting with N-CoR and either HDAC1 or HDAC3 to occupy its T-box target genes. The interaction of Pitx2 with N-CoR, HDAC1, and HDAC3 is one mechanism by which corepressors occupy Pitx2 targets (Fig. 5). Pitx2-dependent HDAC1, HDAC3, and N-CoR occupancy was detected at multiple sites in Tbx1 (Fig. 6). These data suggest that decreased histone acetylation results in Tbx1 repression due to occupancy of Pitx2-dependent corepressor complexes. Recent studies also indicated that Pitx2 induces acetylation of histone H4 at the promoters of smooth muscle cell (SMC) differentiation marker genes through exchange of HDAC2 and HDAC5 with p300 (51). In branchial arch mesoderm, Pitx2 activates Tbx1 (28) while repressing Tbx1 in the abdominal wall, further suggesting that Pitx2 can switch from an activator to a repressor due to interaction with different complexes in a tissue-specific manner.
In the absence of Pitx2, N-CoR and HDAC1 occupancy increased at several amplicons in the Tbx4 locus. Pitx2 has been shown to interact with several transcription factors such as Lef-1, β-catenin, and Pit-1 (7, 52, 53). Lef-1 can also act as a repressor in the absence of Wnt signaling by interacting with Groucho and HDACs (54), whereas β-catenin interacts with PCAF and CBP (55, 56). Pitx2 occupied amplicons in close proximity to the lymphoid enhancer factor/T-cell factor (Lef-1/TCF) and Pit-1 binding sites at Tbx4 and Tbx15, and our recent studies have shown that Lef-1 occupancy was lower in MUT (data not shown). Thus, Pitx2 may interact with Lef-1, β-catenin, Pit-1, and coactivators in Pitx2-activated genes.
The genomic sequence upstream of the Tbx4 transcriptional start site was hypoacetylated in MUT. Tbx4 was activated, and Tbx2 was repressed, by Pitx2 despite their proximity in the genome; Tbx4 and Tbx2 are closely positioned on chromosome 11. Analysis of the Tbx2 gene showed increased acetyl-H3 and acetyl-H4 in the absence of Pitx2. Analysis of the amounts of acetyl-H3 and acetyl-H4 at Tbx4 showed that, within the intergenic region between Tbx2 and Tbx4, there was a gradual shift from more acetylated histone H3 in the chromatin near Tbx2 to hypoacetylated histone H4 near Tbx4, suggesting that Pitx2 can regulate chromatin structure in a highly localized manner (Fig. 7). Histone modifications due to the presence of coactivators and corepressors are only one type of chromatin remodeling. Further studies will be necessary to determine whether other chromatin modifications are Pitx2-dependent at the T-box gene loci. These modifications can act alone or in concert, in a context-dependent manner, to facilitate or repress chromatin-mediated processes because they can influence one another (57).
Gene expression profiling of abdominal tissue and whole-mount in situ hybridization at E10.5 indicated that several signaling pathways and transcription factors that regulate the expression of T-box genes are Pitx2-dependent. Pitx2 acted as a repressor of Wnt10a, Wnt6, Dkk1, and Axin2 and as an activator of Wnt5b, Tcf12, and adenomatous polyposis coli (7). The Wnt/β-catenin pathway regulates the expression of several T-box genes (58-60). Expression of several bone morphogenetic proteins (BMPs) was Pitx2-dependent, and BMP2 and BMP4 regulate Tbx2, Tbx3, and Tbx4 expression (10,60). Additionally, T-box genes can regulate their own expression and the expression of other T-box genes through T-box binding elements. Tbx20 negatively regulates the expression of Tbx2 (10,61,63). Pitx1 negatively regulates Tbx1 expression in dental epithelium (64) and activates Tbx4 expression in the limb (65). Pitx1 expression was Pitx2-dependent in the abdominal tissue and dental epithelium (data not shown). These pathways and transcription factors may also contribute to Pitx2-dependent expression of T-box genes in the abdomen at E10.5.
We conclude that T-box gene expression depends on the Pitx2-dependent occupancy of coactivators and corepressors in the abdominal wall and that this reveals one of the pathways by which Pitx2 regulates development. However, Pitx2 is part of a complex gene network that can also regulate the expression of T-box genes via signaling pathways and other interactions. | 2017-12-27T18:06:52.160Z | 2010-02-03T00:00:00.000 | {
"year": 2010,
"sha1": "c2c3426642da9fff9c14ccf941158f4181d7b870",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/285/15/11129.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4eb4c600254029e7a60299b52e7ac88298feca3d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
220347684 | pes2o/s2orc | v3-fos-license | Gravitational Wave Test of the Strong Equivalence Principle
The Strong Equivalence Principle (SEP) holds the full essence and meaning of the General Theory of Relativity as the nonlinear relativistic theory of gravitation. It asserts the universal coupling of gravity to all matter and its interactions including the gravitational interaction and the gravitational self energy. We point out that confirming the gravitational coupling to gravitons, and hence to the gravitational waves, is the direct test of the SEP. We show that the near simultaneous detection of gravitational waves and gamma rays from the merger of binary neutron stars provides a unique and the most precise test of the SEP, better than a part in $10^{9}$, which is also the only test of the SEP in the radiation sector.
I. INTRODUCTION
The Einstein Equivalence Principle (EEP), which asserts the local equivalence of all physical phenomena in a uniform gravitational field g and an accelerated frame with a = −g, was postulated by Einstein on the basis of the weak equivalence principle (WEP), or the universality of the ratio of the gravitational to inertial mass. The WEP in turn rests empirically on the observed Universality of Free Fall (UFF), which is the fact that the gravitational acceleration does not depend on the mass or other material properties of the body in free fall. Due to the equivalence of mass and energy, UFF implies that the gravitational acceleration does not depend on the binding energies of the standard model interactions in the falling body. The formulation of the General Theory of Relativity as a geometrical theory presupposes that the EEP includes the gravitational phenomena as well. Then, one has to insist on the UFF of the gravitational binding energy of the falling bodies, leading to the Strong Equivalence Principle (SEP). Generalizing from the equivalence of the gravitational mass and the inertial mass, the SEP asserts the universal coupling of gravity to all matter and their interaction energy, including that of the gravitational interaction, highlighting the nonlinear nature of gravity. Hence, at the fundamental microscopic level, the SEP affirms the universal gravitational coupling to gravitons themselves. This completes the Einstein Equivalence Principle as the grand generalization that asserts the local equivalence of all physical phenomena, including gravitational phenomena, in a uniform gravitational field and in a uniformly accelerated frame. The SEP is the final frontier of explicit tests of the principle of equivalence [1,2].
A significant test of the SEP requires that the precision in the measurement of the differential acceleration δa/g in the gravitational field g is better than the fractional amount of the gravitational self energy relative to the total energy, ε_g/E. Since E is essentially the rest mass energy of a massive test body, ε_g/Mc^2 ≃ (GM^2/R)/(Mc^2) = GM/(Rc^2). Hence, a significant test requires observational access to the relative trajectories of a pair of planetary- to stellar-scale test objects in the gravitational field of a third body.
The SEP has been tested and verified directly in the 'free fall' of the Earth and the Moon towards the Sun employing the Nordtvedt effect [1,3], by monitoring the orbital distance of the Moon from the Earth in Lunar Laser Ranging (LLR) [4,5]. This feat was possible because of the impressive long term precision achieved, of several millimeters, in the determination of the orbital distance of 3.7 × 10^8 m. This translated to a test of the UFF and the WEP with a precision of 2 parts in 10^13. With the gravitational self energy of E_g/mc^2 ≃ 5 × 10^-10 for the Earth, and relatively negligible self energy for the Moon, one obtains the significant observational constraint on any violation of the SEP of < 0.04%, which is remarkable considering the smallness of the contribution of the gravitational self energy of these bodies to their total mass-energy.
The only better observational constraint on the SEP is obtained by comparison of the orbits in the triple stellar system PSR J0337+1715, consisting of a binary system of a millisecond radio pulsar (366 Hz) and a white dwarf, with a 1.6-day orbital period, that itself is in a much longer period orbit (327 days) with another white dwarf [6][7][8]. The orbits can be monitored with the radio pulses from the neutron star (pulsar). The gravitational self energy of a neutron star is about 10%, and it is comparatively negligible for the white dwarf. With the limit of 2 parts in 10^6 on the universality of free fall in this triple stellar system, the SEP is established with a precision of 10^-5, which is more than an order of magnitude tighter than the LLR result. However, the analysis involved in obtaining this constraint is necessarily more elaborate and complicated, owing to the fact that the entire information on the orbits is extracted from the observation of the pulses from the pulsar in the system.
There is evidently a paucity of accessible tests of the SEP, owing to the extreme weakness of the gravitational interaction. There is only one Earth-Moon-Sun system in this universe to which we have easy access for the study of such a deep and characteristic foundational issue of gravity. Pulsar-white dwarf systems provide test systems in which there is a large contribution of the gravitational self energy to the total mass-energy, but the precision is limited because the galactic gravitational field in which they free fall is very small. An observationally convenient and accessible triple stellar system is a rare chance that is very hard to come by. In this context, we have found an entirely new way for precision tests of the SEP and GTR, leveraging the realization that gravitational waves are the purest and manifest form of gravitational energy. Gravitational waves conceptually correspond to real gravitons, whereas the notional gravitational self energy in bulk matter corresponds to the virtual carriers of the interaction. In fact, the binding energy or the self energy, both in the case of electromagnetism and gravitation, is the absence of real energy, rather than its presence; that much positive energy is released in making the bound system. The manifest form of gravitational energy is the gravitational wave, just as the manifest form of electromagnetic energy is the electromagnetic wave. Therefore, testing for the gravitational coupling of the gravitons themselves is the ultimate test of the SEP. Photons can provide the ideal reference for the comparison. Clearly, a test of the universal coupling of gravitational waves to the gravity of bulk matter is the most direct test of the SEP.
A new window for this direct and reliable precision test of the SEP opened with the detection of near simultaneous gravitational waves (GW) and gamma rays from the merger of binary neutron stars [9,10]. This unique test has the clear possibility of scores of future detections, allowing crucial statistical reliability. The central idea of the test stems from the realization that gravitational waves are a propagating form of pure gravitational energy, released directly from the gravitational binding energy of the binary system. Then, the gravitational effects of massive structures, like galaxy clusters, on the propagation of pristine gravitational energy, relative to the same effects on photons, constitute a transparent, direct test of the SEP. The gamma rays serve as the reference matter-energy, without any significant gravitational energy, providing a complete test. The Shapiro delay is a first-order test, proportional to the integrated gravitational potential in the intervening space [11], and the gravitational bending is a second-order test, proportional to the gradient of the potential. Hence, the Shapiro delay provides a much better test of the SEP than the gravitational bending. But the latter is the one that resembles the 'free fall' tests. We examine both effects for the completeness of the discussion.
II. SENSITIVITY OF THE TESTS OF THE SEP
The classic test of the SEP, along the lines of the tests of the universality of free fall, involves comparing the relative trajectory of two bodies in gravitational fall towards a third massive body. What is measured is the difference in acceleration δa in the gravitational field g. The ratio δa/g is identical to the Eötvös WEP parameter η. When the relative contribution of the gravitational self energy is larger than the sensitivity η of the test, one gets a useful test of the SEP. There are three key quantities to consider when estimating the precision of the test of the SEP, which we denote by the symbol ∆. One is the gravitational acceleration at the location of the falling bodies, g. The second is the difference in the ratio of the gravitational energy in the falling bodies to their total energy; since the total energy is essentially the rest mass energy, δε_g ≃ ε_g1/(m_1 c^2) − ε_g2/(m_2 c^2). The third is of course the precision achieved in the test of the universality of free fall from factual observation (including both the statistical and systematic errors). This is the precision to which the differential acceleration of free fall is constrained relative to the local gravitational field, δa/g. Clearly, for a given precision in the measurement of the differential acceleration, a better test is obtained when the actual gravitational acceleration g is larger. The precision of the test of the WEP is determined by η = δa/g. Then the sensitivity of the test of the SEP is given by ∆ = η/δε_g. For the solar system tests, g ≃ 10^-2 − 10^-3 m/s^2 and the precision (2σ) achieved in observing δa/g (for LLR) is about 10^-13. But the quantity δε_g for the Earth-Moon system is only 4.6 × 10^-10. Therefore, the constraint on the violation of the SEP is limited to ∆ = η/δε_g ≃ 10^-13/(4.6 × 10^-10) ≃ 2 × 10^-4. The test employing the free fall of a neutron star-white dwarf system towards the galaxy can take advantage of the much higher gravitational self energy of the neutron star (δε_g > 10^-1), but much of that advantage is offset by the tiny galactic gravitational acceleration of only g ≃ 2 × 10^-10 m/s^2. Yet, impressive constraints on η have been obtained, of the order of η ≤ 10^-3, which translates to a constraint on the SEP of ∆ = η/δε_g ≤ 10^-2 [12].
In contrast, a triple stellar system like PSR J0337+1715, in which a neutron star-white dwarf binary orbits another white dwarf, allows a better constraint through painstaking analysis of the orbital data and careful modeling [7,8]. The gravitational acceleration (in the 327-day orbit) is comparable to the LLR case, g ≃ 5 × 10^-3 m/s^2, and the difference in the gravitational self energy is δε_g ≃ 10^-1. The precision achieved in estimating the differential acceleration is η ≤ 2 × 10^-6. This provides the strongest constraint on the SEP to date, ∆ ≤ 2 × 10^-5.
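As a quick numerical cross-check of the sensitivities discussed above, the following short sketch (Python; all input values are the ones quoted in the text) evaluates ∆ = η/δε_g for the three systems:

```python
# Order-of-magnitude comparison of the SEP tests discussed above.
# eta   = precision on delta_a / g (UFF / WEP test)
# d_eps = differential gravitational self-energy fraction of the falling bodies
tests = {
    "LLR (Earth-Moon-Sun)":          (1e-13, 4.6e-10),
    "NS-WD free fall in the Galaxy": (1e-3,  1e-1),
    "Triple system PSR J0337+1715":  (2e-6,  1e-1),
}

for name, (eta, d_eps) in tests.items():
    delta = eta / d_eps  # Delta = eta / delta_eps_g
    print(f"{name:30s}  eta = {eta:7.1e}  Delta = {delta:7.1e}")
```

The printed values reproduce the constraints quoted above: roughly 2 × 10^-4 for LLR, 10^-2 for the galactic neutron star-white dwarf test, and 2 × 10^-5 for the triple system.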
The GW test of the SEP is a class apart from the tests involving massive bodies in two respects. A gravitational wave is pristine gravitational energy, propagating over cosmological distances. Therefore, the ratio of the gravitational energy to the total energy is unity for GW, ε_g/E = 1. Then we need another entity that can co-propagate with the GW with negligible gravitational energy, for comparison, and that is light, for which ε_g/E ≃ 0. Thus δε_g = 1.
The differential comparison is possible if both the GW signal and the electromagnetic signal from the same source are observed. Fortunately, we have one confirmed astrophysical event satisfying these criteria, in the gravitational wave detection by the LIGO-Virgo detectors and the gamma-ray detection by the Fermi satellite, GW170817+GRB [9,10].
III. GW TESTS OF THE SEP
The triad of interferometric gravitational wave (GW) detectors, LIGO Hanford, LIGO Livingston, and Virgo, sensed the arrival of gravitational waves from the inspiral and merger of a binary neutron star (BNS) system on 17 August 2017 [9]. Named GW170817+GRB, it became a one-of-a-kind event (so far) because of the near simultaneous detection of gamma rays, within about 1.7 s, by the Fermi gamma-ray satellite [10]. The detection with the three GW detectors allowed the delineation of a relatively precise localization area in the sky, which led to the identification of the source galaxy as NGC 4993 at a distance of 40 megaparsecs (∼10^24 m). What is relevant for our test of the SEP is that the gravitational waves and the gamma rays have to traverse this vast distance in the gravitational presence of the mighty Virgo cluster of galaxies, before they pass through the gravitational field of our Milky Way galaxy and arrive on the Earth.
The geometrical configuration of the GW-GRB event relative to the Earth and the Virgo cluster is indicated in figure 1. The universal gravitational coupling of the Virgo cluster mass to the gravitational energy of the GW and the electromagnetic energy of the gamma rays would manifest in two physical effects. One is the Shapiro time delay, which is of first order in α = φ/c^2 ≃ GM/(Rc^2), and the second is the time delay due to the gravitational bending, which is of second order in α (the gravitational bending itself, which is first order in the potential, cannot be observed for the gravitational waves).
The Shapiro delay is the excess propagation delay of the waves due to the gravitational potential, δt = ∫ ds (2φ/c^3). Any violation of the SEP due to a nonuniversal coupling to the gravitational energy of the GW will alter this delay, relative to the Shapiro delay of the photons. One already knows that the electromagnetic energy in bulk matter obeys the UFF to the very high precision of η_em ≤ 10^-9, because the ratio of the total electromagnetic energy in the atom to its rest mass energy is E_em/mc^2 > 10^-5, and the best tests of the WEP have reached η ≤ 1.5 × 10^-14 [14,15]. Therefore, the gravitational coupling of the gamma rays should obey the WEP to better than a few parts in 10^9. With the galactic potential φ_mw/c^2 ≃ 10^-6, the Shapiro delay during the propagation over 30 kpc (∼10^21 m) in the Milky Way galaxy itself amounts to about 100 days, as has been analyzed in a comparison of the velocity of light and GW in the BNS merger event GW170817+GRB [10]. However, we should consider the Virgo cluster, because the main contribution to the gravitational potential in our cosmological neighbourhood is from this galaxy cluster. The Shapiro delay is a scalar, cumulative effect, steadily increasing with the same sign. There are no other theoretical complications, like fixing a metric, because we know with certainty that the propagation is in the unique k ≃ 0 FLRW metric of our factual universe. Therefore, the calculation with the mass of the Virgo cluster gives a reliable lower limit to the gravitational effect between NGC 4993 and the Earth. Because the propagation duration t (over the distance > 10 Mpc) in the average potential φ_V/c^2 ≃ 1.5 × 10^-6 of the Virgo cluster is more than 10^15 s, a very conservative value for the gravitational Shapiro delay is readily estimated from δt = ∫ ds (2φ/c^3) ≃ (2φ_V/c^2) t ≃ 3 × 10^9 s, which is to be compared with the observed 1.7 s between the arrival times of the gravitational waves and the gamma rays. This implies that the gravitational coupling of propagating gravitational energy to the source masses is the same as the gravitational coupling of electromagnetic energy to within ∆ ≃ 6 × 10^-10. Note that any kind of electromagnetic factor along the path can only introduce further delays in the propagation of the photons; hence the factual constraint is always better than our conservative constraint. The similar propagation of the GW and the gamma rays from GW170817+GRB has thus yielded a very high precision test of the SEP, at better than a part in a billion. We can also get a constraint on the SEP from the similar bending of the GW and light in the field of the Virgo cluster, albeit with lower precision. This is similar to the free fall test of two bodies in the gravitational field of a third body, with the additional aspect of relativistic propagation of the tested entities. If there are many massive structures distributed around the path of propagation, the precise calculation of the resultant gravitational bending is complicated and requires a detailed catalogue of the matter distribution. Unlike the scalar Shapiro delay, the gravitational bending by a large structure like the Virgo cluster can be partially nulled if there are several smaller structures that are closer to the path of propagation. If the gravitational waves and the gravitational energy in them did not experience the same 'free fall' as the photons in the gravitational field of the Virgo cluster, there would have been a significant discrepancy in the time of arrival of the gravitational waves and the gamma rays.
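The conservative Shapiro-delay bound above follows from only two numbers and can be checked with a minimal sketch (Python; φ_V/c^2 ≃ 1.5 × 10^-6 and the 10^15 s propagation time are the values quoted in the text):

```python
# Conservative Shapiro-delay estimate for GW170817+GRB.
phi_over_c2 = 1.5e-6   # average Virgo-cluster potential, phi_V / c^2
t_prop      = 1e15     # propagation time spent in that potential, s (> 10 Mpc path)
dt_observed = 1.7      # observed GW - gamma-ray arrival-time difference, s

dt_shapiro = 2.0 * phi_over_c2 * t_prop   # delta_t = integral ds (2 phi/c^3) ~ (2 phi/c^2) t
delta_sep  = dt_observed / dt_shapiro     # fractional bound on an SEP violation

print(f"Shapiro delay  ~ {dt_shapiro:.1e} s")   # ~ 3e9 s
print(f"SEP constraint ~ {delta_sep:.1e}")      # ~ 6e-10
```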
The angle of bending due to the gravity of a compact source is well known, α = 4GM/(c^2 R), where R is the impact parameter. Since the bending is small, the difference in the distance of propagation between the deflected path and the unperturbed path is δL ≃ Lα^2/2 (figure 1). The excess delay due to the gravitational bending is then δt = δL/c = Lα^2/(2c). Hence, this effect is of second order in α. A more accurate expression from the lensing equation is δt ≃ (1 + z) α^2 D_L D_S/(2c D_LS), which corrects for the redshift distance of the source. The time delay due to the bending of waves propagating at the velocity c in the gravitational field of a mass distribution of size smaller than the distance between the source and the observer can now be estimated by referring to figure 1.
Conservatively taking 3 × 10^14 M_⊙ as the total mass of the Virgo cluster of galaxies and its dark matter halo within a radius of about 3 Mpc (and an 'impact parameter' of 10 Mpc), the bending angle α is approximately 6 × 10^-6 rad [16]. This is consistent with the gravitational field of the Virgo cluster at the local group, estimated from the infall velocity of about 200 km/s of the local group towards the Virgo cluster. The deflection of the path translates to a gravitational bending delay Lα^2/(2c) > 3 × 10^4 s, where we have taken L as half the distance to NGC 4993 (this is consistent with the combination of distances appearing in the lensing equation). This has to be compared to the 1.7 s delay between the GW from the BNS merger and the GRB. Since the gravitational waves are pure gravitational energy in propagation, this observed universality of 'free fall' under the gravitational action of the mighty cluster of matter on both electromagnetic waves and gravitational waves readily proves the Strong Equivalence Principle, constraining any violation to ∆ < 6 × 10^-5. As expected, the Shapiro delay constraint on the SEP that we obtained in this work is by far the most stringent, ∆ ≤ 6 × 10^-10.
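The bending estimate can be reproduced in the same spirit; the sketch below (Python) uses only the cluster mass, impact parameter, and path length quoted above:

```python
# Gravitational-bending delay estimate in the field of the Virgo cluster.
G, c = 6.674e-11, 2.998e8          # SI units
Msun, Mpc = 1.989e30, 3.086e22     # kg, m

M_virgo = 3e14 * Msun              # conservative cluster mass incl. dark-matter halo
R_imp   = 10 * Mpc                 # 'impact parameter' of the ray
L       = 20 * Mpc                 # ~ half the 40 Mpc distance to NGC 4993

alpha   = 4 * G * M_virgo / (c**2 * R_imp)   # bending angle            ~ 6e-6 rad
dt_bend = L * alpha**2 / (2 * c)             # excess path delay        ~ 3e4 s
delta   = 1.7 / dt_bend                      # bound from the 1.7 s gap ~ 6e-5

print(f"alpha = {alpha:.1e} rad, dt_bend = {dt_bend:.1e} s, Delta < {delta:.1e}")
```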
IV. CONCLUDING REMARKS
A unique test of the strong equivalence principle, which asserts the universal gravitational coupling to the gravitational energy itself, is devised by recognizing that propagating gravitational waves are entirely gravitational energy. A comparison of the propagation of gravitational energy with gamma photons, from the merger event of the binary neutron stars detected by the LIGO-Virgo detectors, yielded the most stringent test of the SEP, with a constraint on any violation of ∆ ≤ 6 × 10^-10. Besides, this is the only test of the SEP in the radiation sector. The design sensitivity of the upgraded advanced GW detectors is 170-300 Mpc for binary neutron star mergers, which is about 3 times the sensitivity they had when the event GW170817 happened. This means that one can expect an order of magnitude higher event rate at the full sensitivity, expected by 2025, and about 3 to 10 BNS events/year with source identification. With many such events, the statistical precision and confidence in our unique test of the SEP will steadily improve. In such a scenario, there is no doubt that this gravitational wave test will remain the most precise confirmation of the strong equivalence principle. | 2020-07-06T01:01:30.714Z | 2020-07-02T00:00:00.000 | {
"year": 2020,
"sha1": "ba9adfe6bd6cae4641f5823d7c776e5ad0a6fd79",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "629d3f7ae513cd42ba49d7ab01843ac2e8d5b135",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
} |
242469283 | pes2o/s2orc | v3-fos-license | Factors generating ambivalent feelings in women who gave birth after receiving a negative result on non-invasive prenatal testing: A qualitative study
Background: Some women with negative results on non-invasive prenatal genetic testing (NIPT) may have ambivalent feelings later. This study aimed to investigate the factors generating ambivalent feelings among women who gave birth after having received negative results on NIPT.
Methods: A questionnaire was sent to women who received a negative NIPT result, and a content analysis was conducted for the 1562 women who responded to the open-ended question. The contents of these qualitative data were analyzed using the N-Vivo software package.
Results: Environmental factors, genetic counseling-related factors, and increased anticipatory anxiety affected the feeling of ambivalence among pregnant women. Furthermore, pregnant women desired more information regarding the detailed prognosis of individuals with Down syndrome and what living with them involves, and/or regarding abortion, under the assumption that the result could have been positive.
Conclusions: Three major interrelated factors affected the feeling of ambivalence in women. Highlighting and discussing such factors during genetic counseling may resolve some of these ambivalences, thereby enhancing the quality of decisions made by pregnant women.
Background
A prenatal test for fetal abnormalities may cause maternal anxiety [1,2]. Pregnant women who made an uninformed choice for non-invasive prenatal genetic testing (NIPT) feel more decisional regret, associated with prolonged anxiety [3,4].
Women typically have two conflicting thoughts at the early stage of pregnancy: the desire to know about the neonate via ultrasound examination and the reluctance to receive negative news. Many women reportedly take the NIPT to relieve their ambivalence regarding negative results [5].
Furthermore, certain women feel like their pregnancy is provisional or temporary until they learn their NIPT results, i.e., although they may feel physical changes, they may not acknowledge their pregnancy until they see their results [6]. Ambivalence has been identified among the attitudes of the general public regarding prenatal testing; however, the origins of this ambivalence remain unclear [7,8]. Lewis et al. reported that numerous women taking the NIPT have ambivalent feelings [9]. Ambivalence can be defined as the "simultaneous existence of positive and negative evaluations of an attitude object" [10].
In our previous study, we reported that women with negative results rated NIPT more highly than women with positive results, but with respect to the ethical aspects of NIPT, women with negative results responded unclearly [11]. Even if the NIPT result is negative, congenital diseases other than trisomies 13, 18, and 21 are not ruled out, and diseases other than these three chromosomal aneuploidies may later be identified [12].
Based on these findings, we thought that even women with negative test results could later have ambivalent feelings. This study aimed to investigate factors affecting ambivalent feelings in pregnant women after undergoing NIPT.
Design
The present study is based on a qualitative research methodology, involving content analysis of the free-form descriptions in the one-year-after questionnaire answered by women who received negative NIPT results.
Research procedure
All pregnant women who took the NIPT were asked to participate in a questionnaire-based survey of the Japan NIPT Consortium, conducted as (i) a survey before and immediately after the test, from April 2013 to March 2014, and (ii) a survey one year after the test, from April 2014 to March 2015. We have already reported on (i) [11]; the present study concerns (ii). The one-year-after research was conducted as a mail-in survey to determine whether the assessment of the NIPT and genetic counseling had changed between immediately after the test and one year later. The questionnaire consisted of a one-year-after evaluation and a free-form description concerning NIPT and genetic counseling; the free-form part used an open-ended question such as, "Please share your opinion or feedback on the NIPT." In the present study, we conducted a content analysis for the 1562 respondents (20.6%) who provided a free-form answer to the open-ended question (Fig. 1).
Participants
Pregnant women who were identified as being at an increased risk of chromosomal disorders due to advanced maternal age, the results of ultrasound marker or maternal serum marker tests, or a family history of chromosomal abnormalities or those who had a high risk of being a translocation carrier were eligible for NIPT. In most cases, the indication for NIPT was advanced maternal age (96.5%). In addition, 14.4% of the women underwent NIPT based on a recommendation from their family.
Because the responses to the one-year-after questionnaire were anonymous and it was impossible to obtain background data, the personal background of the respondents was unknown; however, the available background information of the 7740 women who underwent NIPT was reported in our previous study [11], whereas the present study was limited to those among the 7740 women who received negative NIPT results, and therefore the background data are not the same.
Data Analysis
The questionnaires were subsequently mailed to a third-party NIPT data center for data entry.
Qualitative methods are required in situations warranting detailed analysis and an understanding of the process to determine the nature of the issue being investigated. We undertook a combined content analysis (quantitative and qualitative) via text mining approaches, using N-Vivo Qualitative software for the free-form description part of the questionnaire. The advantage of computer-assisted content analysis of textual data is the coding reliability that helps to generate comparable results [13]. The text in the free-form description part was entered into the computer and used as input for the N-vivo software. N-vivo was used for line-by-line coding of the texts, to identify emerging themes.
Furthermore, all the text was read, and the coding process was discussed between two researchers. The sub-categories extracted with N-vivo were assessed by two reviewers. The codes of the lower categories determined after this evaluation were then grouped inductively into higher categories by the two reviewers. The coding process is shown below.
Step 1 Line-by-line coding with N-vivo, using an inductive approach.
Step 2 Thematic grouping. The transversal analysis showed new themes by two researchers.
Step 3 Creating higher-level categories and evaluating relationships.
Step 4 Reviewing these analyses. Thematic stabilization. All authors.
Ethics
To conduct this study, the participating centers obtained approval from their respective ethics committees.
Results
We identified three primary categories generating feelings of ambivalence among women: (1) factors related to genetic counseling; (2) environmental factors; and (3) increased anticipatory anxiety.
Factors related to genetic counseling
The category "Factors related to genetic counseling" comprised two subcategories: "Lack of information" and "Lack of psychological care adapted to individual needs." "Lack of information" further comprised two mid-level categories: "Inadequate support by medical staff up to taking NIPT" and "Lack of information in case of positive result." "Lack of psychological care adapted to individual needs" comprised "Lack of psychological care at genetic counseling" and "Lack of genetic counseling at the appropriate moment." Regarding "Lack of information," "Inadequate support by medical staff up to taking NIPT" referred to the lack of support from the family physician around the NIPT. Many family physicians not only failed to refer their patients for genetic counseling, but were themselves unaware of it. Owing to the limited number of NIPT facilities for pregnant women in Japan, some respondents also reported receiving a reproachful response from their family physician upon requesting information regarding NIPT or referrals.
Furthermore, certain women were criticized during genetic counseling for taking the NIPT. Regarding "Lack of information in case of positive result," many women wished that genetic counseling would be performed under the assumption that NIPT could give a positive result. They wished to receive detailed information regarding the methods of terminating a pregnancy as one of the alternatives in case of chromosomal abnormalities in the fetus.
I don't think it hurts to take some time to explain about the options for getting an artificial termination of pregnancy in the case of a positive result. There is this idea that an abortion is bad, but if the chance that you can love your child unconditionally is 0%, then I believe that it is an important option, even though it's not easy to discuss...
(if the pregnancy was terminated)
However, some women who considered continuing their pregnancy despite receiving positive results upon NIPT were seeking information regarding the exact quality of life for individuals living with such illnesses and the social resources available for them. (if the pregnancy was continued) These data indicated that respondents were seeking information during genetic counseling that would support the decision to continue or terminate their pregnancy. These women desired information regarding the daily lives of children with disabilities and other related materials to guide their decision of whether they could raise a child with disabilities; furthermore, they desired follow-up support for women reporting with positive results in the NIPT.
Environmental factors
Environmental factors included the comfort levels of women regarding raising a child with disabilities; this aspect comprised three subcategories: "Lack of awareness and education about diversity," "Insecurities on raising the child," and "Lack of social support systems for people with disabilities."
"Insecurities on raising the child" comprised "Insecurities for the future," "Vague insecurity," "Insecurities for the child-rearing environment," and "Familiarity with challenges of living with a disabled person."
The reason why many people have the image of prenatal testing as an "immoral thing" in Japan...
Regarding "Insecurities on raising the child," older parents felt a responsibility to raise the child: owing to their age at pregnancy, they were concerned about the life of the child after their own death and about burdening the child's siblings, and thus had "Insecurities for the future." Furthermore, certain respondents also exhibited "Vague insecurity" because raising a disabled child is unfathomable and difficult to imagine, owing to a lack of experience. Others, however, already had "Familiarity with challenges of living with a disabled person," because they knew somebody or had family members with disabilities, and so understood the merits and demerits and had discerned that it would be challenging. They also faced "Insecurities for the child-rearing environment" owing to their concern regarding inadequate financial support or an inadequate social environment for raising a child with disabilities.
Increased anticipatory anxiety
Increased anticipatory anxiety included two subcategories: "Time-related anxiety" and "Anxiety about test accuracy." Among the pregnant women who opted for NIPT, some were so anxious during the two weeks of waiting for the results that they could not sleep until they learned that their result was not positive. Also, knowing that the accuracy of NIPT is not 100%, some women were anxious that their result might be a false negative, and they could not eliminate this anxiety until the birth of the child.
Ambivalence
Ambivalent feelings comprised five subcategories: "Options in the case of a positive result," "Guilt towards the child," "Criticisms on NIPT from others," "Denial of disabled people," and "How to tell the child." Some respondents "Stated their decision" with regard to the course of action in case of a positive NIPT result, whereas others expressed "Difficulty stating their decision" regarding continuing or terminating their pregnancy. In both cases, respondents seemed to agonize over their decision in the case of a positive result (Fig. 3).
Discussion
The results of this study showed that factors related to genetic counseling and environmental factors greatly influence women's decision-making regarding taking the NIPT. Inadequate information before undergoing NIPT (Factors related to genetic counseling), together with an inadequate social environment that hampers women's independent decision-making (Environmental factors), increases the level of Anticipatory anxiety. The decision-making of pregnant women who receive a prenatal diagnosis is affected not only by individuals close to them, e.g., family members or friends, but also by the opinions of medical staff, the social environment, and uncertainty regarding fetal health conditions [13][14][15][16].
Furthermore, the guilt that women experience when deciding to give birth or to abort the fetus, or the guilt they experience regarding decision-making about disabled individuals, conflicts with the sentiments that endorse prenatal testing. Having to face this moral opposition induces a state of Ambivalence in some women even one year after taking the NIPT, whenever unresolved feelings associated with the thought "What if the result had been positive?" resurface in their minds.
There are problems related to genetic counseling that assumes positive NIPT results. Genetic counselors themselves are concerned that talking to parents about Down syndrome during prenatal testing is likely to elicit fear among them. Overly optimistic or negative information from medical staff, who largely influence the parents' ultimate decision-making upon receiving positive NIPT results, can also introduce a bias [17,18], as can genetic counselors who lack the necessary skills and knowledge [3]. Perhaps this focus on avoiding any negative effects of providing information under the assumption of a positive NIPT result is meant to protect the mental status of pregnant women, who seek emotional relief from undergoing the test. However, many women also seek information regarding the actual living conditions of children with the screened genetic disorders and regarding the types of social resources available to them.
Statistics have revealed that in England, slightly more women who received positive NIPT results chose to have an abortion compared to those who tested positive upon amniocentesis [3]. Concurrently, in Japan, many women considered continuing the pregnancy despite obtaining positive NIPT results. The present results also show that it is important to offer genetic counseling under the assumption of a potentially positive NIPT result.
Among numerous pregnant women, the motivation to undergo prenatal testing stems from their need to "feel relieved" [5]. However, among the women who inquire about prenatal diagnosis, many request information not only about the actual test, but also about the many anxieties they may have about their children during pregnancy. Pregnant women attended NIPT genetic counseling sessions at a time when they could openly speak and be listened to; hence, it is very important to provide psychological support to pregnant women in addition to information regarding testing methods or the disorders that can be screened by the test [19].
Fears regarding potential discrimination faced by the child, prejudice, the lack of understanding among family members or other people, and the lack of societal support for raising the child are some of the reasons that motivate women to wish for a "healthy" child. Japanese culture tends to be sensitive to, and easily affected by, the surrounding social environment. Furthermore, the present results suggest that the society is not very open and livable for individuals with disabilities. Simultaneously, numerous individuals strongly felt the need for adequate social support systems for people giving birth to children with disabilities. These results suggest a positive and supportive perspective towards disabled individuals, fostering the hope that in the future, diversity will be widely accepted and that society will be welcoming to all children, irrespective of being born with disabilities. Many of the present study subjects were mothers with late-life pregnancies. We believe that, before choosing to undergo NIPT, they weighed the potential impacts on family and siblings, societal prejudices regarding disabilities, and attachment towards the unborn child, with an understanding of the increased chance of having a child with an abnormal chromosome number owing to their age.
Similar to other types of prenatal testing, we believe that NIPT may also increase pregnant women's worries and anxieties about the unborn child. The rate of infertility treatment was high among the women who underwent NIPT (42.2%). Furthermore, women undergoing infertility treatment have strong fears regarding chromosomal abnormalities in their children [20-22]; however, informing women that results are negative for 98% of older mothers may well reduce the anxiety among women undergoing infertility treatment [23,24]. Offering detailed information to pregnant women during genetic counseling sessions, so that it can be weighed against the mothers' own experiences and values to make an informed decision on whether or not to undergo NIPT, lowers the amount of uncertainty, stress, and anxiety they experience during decision-making [25,26].
The present results indicate that numerous individuals seek detailed information regarding the lives of children corresponding to positive NIPT results, including social support services, which points to a lack of accurate information regarding these disorders. As such, administration of NIPT is accompanied by high levels of stress among mothers, which may explain their increased levels of anxiety until receiving the test results, or their anxieties regarding the accuracy of the test [27].
Pregnant women taking the NIPT are compelled to gain adequate knowledge and understanding of the NIPT and the disorders it screens for, and to decide between continuing or terminating their pregnancy upon receiving a positive result, within a very limited time. It is normal for all pregnant women to wish for a healthy child, which motivates them to take the NIPT to be "relieved" [28]. However, even when they were relieved upon receiving negative results, many women were still ambivalent about their decision, feeling guilty about having tried to decide their child's life, or having felt denial towards individuals with disabilities while also having to endorse the idea of the test. This is probably what made them recognize their ambivalence and internal conflict at having these unresolved, complex emotions.
The present study shows that even negative results can induce ambivalent feelings among pregnant women, and it is important to recognize that these feelings may be due to interrelated factors concerning genetic counseling, environmental factors, and increased anticipatory anxiety. Moreover, pregnant women have requested additional information regarding the detailed prognosis of individuals with Down syndrome or other congenital disorders and/or artificial abortion, assuming positive NIPT results. Therefore, genetic counseling and appropriate follow-up sessions are essential for the well-being of pregnant women taking the NIPT.
Limitations and future directions
There may be some limitations to this study. First, there may be a selection bias, because only some women filled in the free-description column of the questionnaire after one year, and the background of these women is not clear. A further prospective study is needed to investigate the psychosocial situations of women undergoing NIPT before and after the examination and after the passage of time. However, even in the case of a negative NIPT result, women may have various conflicts and ambivalent feelings, and recognizing the factors that elicit these feelings may help future genetic counseling.
Practice Implications
To reduce the internal conflict in pregnant women caused by ambivalent feelings, it is critical to actively provide them with detailed information about living with a child affected by one of the screened disorders during the genetic counseling sessions, and to discuss this information under the assumption of a potentially positive result, including specifics regarding medical pregnancy termination. Importantly, we believe that pregnant women should receive personalized psychological care so that they can make independent decisions based on factual knowledge. Thus, even if pregnant women experience feelings of ambivalence, they can be reassured by the fact that undergoing NIPT was their own decision after careful consideration, thereby reducing their potential internal conflicts.
Abbreviation
NIPT: Non-invasive prenatal testing
Declarations
Acknowledgement
All authors contributed to and have approved the final manuscript.
Junko Yotsumoto was supported by the Grant-in-Aid for Scientific Research (C), Japan
The funding source had no role in designing or conducting the study, nor in the collection, management, analysis or interpretation of data; in the preparation, review or approval of the manuscript; or in the decision to submit the paper for publication.
Ethics approval and consent to participate
Subjects have given their written informed consent.
The study protocol has been approved by the ethics committee of Center for Maternal-Fetal, Neonatal and Reproductive Medicine, National Center for Child Health and Development (No 602), and all research institute's committee on human research.
Consent for publication
Not applicable.
Figure legend: Components of the ambivalence. Footnote: n = number of occurrences of each category. Categories may partially overlap.
Supplementary Files
This is a list of supplementary files associated with this preprint. | 2019-10-10T09:27:00.795Z | 2019-10-07T00:00:00.000 | {
"year": 2019,
"sha1": "a0ba507daf3feb113164609ab9356199730fcc8f",
"oa_license": "CCBY",
"oa_url": "https://bmcpregnancychildbirth.biomedcentral.com/track/pdf/10.1186/s12884-020-2763-z",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "d3193fc976d68ac2e344836b77b160d3925aa0db",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
208550882 | pes2o/s2orc | v3-fos-license | Evidence of transcription at polyT short tandem repeats
Background Using the Cap Analysis of Gene Expression technology, the FANTOM5 consortium provided one of the most comprehensive maps of Transcription Start Sites (TSSs) in several species. Strikingly, ~72% of them could not be assigned to a specific gene and initiate at unconventional regions, outside promoters or enhancers. Results Here, we probe these unassigned TSSs and show that, in all species studied, a significant fraction of CAGE peaks initiate at short tandem repeats (STRs) corresponding to homopolymers of thymidines (T). Additional analyses confirm that these CAGEs are truly associated with transcriptionally active chromatin marks. Furthermore, we train a sequence-based deep learning model able to predict CAGE signal at T STRs with high accuracy (~81%). Extracting features learned by this model reveals that transcription at T STRs is mostly directed by STR length but also by instructions lying in the downstream sequence. Excitingly, our model also predicts that genetic variants linked to human diseases affect this STR-associated transcription. Conclusions Together, our results extend the repertoire of non-coding transcription associated with DNA tandem repeats and complexify STR polymorphism. We also provide a new metric that can be considered in future studies of STR-related complex traits.
Background
RNA polymerase II (RNAP-II) transcribes many loci outside annotated protein-coding gene (PCG) promoters [1, 2] to generate a diversity of RNAs, including for instance enhancer RNAs [3] and long non-coding RNAs [4]. In fact, > 70% of all nucleotides are thought to be transcribed at some point [1, 5, 6]. Non-coding transcription is far from being fully understood [7] and some authors suggest that many of these transcripts, often faintly expressed, can simply be 'noise' or 'junk' [8]. On the other hand, many non-annotated RNAP-II transcribed regions correspond to open chromatin [1] and cis-regulatory modules (CRMs) bound by transcription factors (TFs) [9]. Besides, genome-wide association studies showed that trait-associated loci, including those linked to human diseases, can be found outside canonical gene regions [10][11][12]. Together, these findings suggest that the non-coding regions of the human genome harbor a plethora of potentially transcribed functional elements, which can drastically impact genome regulation and function [12,13]. Notably, short tandem repeats (STRs), repeated DNA motifs of 2 to 6 bp, constitute one of the most polymorphic and abundant classes of repetitive elements [14]. STR polymorphism, which corresponds to variation in the number of repeated DNA motifs (i.e. STR length), is due to their susceptibility to slippage events during DNA replication. STRs have been shown to widely impact gene expression and to contribute to expression variation [15,16]. At the molecular level, they can affect transcription by inducing inhibitory DNA structures [17] and/or by modulating the binding of transcription factors [18,19]. Using the Cap Analysis of Gene Expression (CAGE) technology [20,21], the FANTOM5 consortium provided one of the most comprehensive maps of TSSs in several species [2]. Integrating multiple collections of transcript models with FANTOM CAGE datasets, Hon et al. built an atlas of 27,919 human lncRNAs, among them 19,175 potentially functional RNAs, and provided a new annotation of the human genome (FANTOM5 CAGE Associated Transcriptome, FANTOM CAT) [4]. Despite this annotation, many CAGE peaks remain unassigned to a specific gene and/or initiate at unconventional regions, outside promoters or enhancers, providing an unprecedented means to further characterize non-coding transcription within the genome 'dark matter' [13] and to decode part of the transcriptional noise.
Here, we probed CAGE data collected by the FANTOM5 consortium [2] and specifically looked for repeating DNA motifs around CAGE peak summits. In all species studied, we showed that a fraction of CAGE peaks (between 2.22% in rat and 6.45% in macaque) initiate at repeats of thymidines (Ts), i.e. STRs of Ts (or T STRs). Biochemical and genetic evidence demonstrates that many of these CAGEs do not correspond to technical artifacts, as previously suspected [22], but rather exhibit genuine features of TSSs. Our results not only extend the repertoire of non-coding transcription but also complexify T STR polymorphism, with T STRs of the same length having different transcription rates and, conversely, T STRs of different lengths having similar transcription rates. We further trained a sequence-based Convolutional Neural Network (CNN) able to predict this transcription with high accuracy (∼81% in human). Extracting the features learned by this model reveals that this transcription is triggered by the length of T STRs but also by instructions lying in the downstream sequence. Using our CNN model, we finally showed that many genetic variants linked to human diseases potentially affect this STR-associated transcription, thereby advancing our capacity to interpret several regulatory variants [15].
A significant fraction of CAGE peaks initiates at T homopolymers in various species.
Using the motif enrichment tool HOMER [23], we looked for repeating DNA motifs in 21 bp-long sequences centered around the FANTOM5 CAGE peak summits. As shown in Figure 1, the first motif identified in both human and mouse is the canonical initiator element INR [24,25], demonstrating the relevance of our strategy to unveil specific sequence-level, TSS-associated features. A second motif corresponding to a T homopolymer is identified (Figure 1A and B). This motif is present in 61,907 human and 8,274 mouse CAGE peaks (Figure 1A and C). Because homopolymers of Ts represent the most abundant class of STRs [14,26], we hereafter call these CAGEs T STR CAGEs. In human, the median size of CAGE-associated T STRs is 17 bp, with a minimum of 9 bp and a maximum of 64 bp. Similar results are obtained in mouse (median = 21 bp, minimum = 8 bp, maximum = 58 bp). Looking directly at the number of STRs of 9 Ts in the human genome, we found that 63,974 T STRs out of 1,337,561 exhibit a CAGE peak located at their 3' end + 2 bp. Thus, the vast majority of T STR CAGEs are identified by motif enrichment, and not all T STRs are associated with CAGE peaks. In mouse, only 8,825 T STRs out of 834,954 are associated with a CAGE peak but, compared to human CAGE data, mouse data are small-scaled in terms of number of mapped reads and diversity of CAGE libraries [2]. We further looked at CAGE peaks in dog, chicken, macaque and rat (Supplementary Figure S1), and the T homopolymer is invariably detected by HOMER (sometimes even before the INR motif).
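To make this kind of scan concrete, a minimal sketch in Python (forward strand only, with a hypothetical toy sequence and hypothetical summit coordinates rather than the actual FANTOM5 data, and not the exact pipeline used here) locates T homopolymers of at least 9 bp and asks whether a CAGE peak summit lies at a fixed offset downstream of the STR 3' end:

```python
import re

def t_homopolymers(seq, min_len=9):
    """Yield (start, end) of runs of >= min_len consecutive Ts (0-based, end-exclusive)."""
    for m in re.finditer("T{%d,}" % min_len, seq.upper()):
        yield m.start(), m.end()

def cage_at_3prime(str_end, summits, offset=2):
    """True if a CAGE summit sits exactly at str_end + offset (end-exclusive convention)."""
    return (str_end + offset) in summits

# toy example: a 17-bp T tract starting at position 3, one summit 2 bp downstream
seq = "ACG" + "T" * 17 + "GACGTAC"
summits = {22}
for start, end in t_homopolymers(seq):
    print(start, end, end - start, cage_at_3prime(end, summits))
```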
The position of the CAGE summit at -2 is presumably artificially introduced by the hybridization step on the flow cell (Supplementary Figure S2A): sequencing of any CAGE tags initiating within T homopolymers will preferentially start just after the polyT tract. The motif enrichment is therefore more indicative of transcription initiation at T STRs than of the existence of a true motif located precisely 2 bp after T STRs. In line with this, ENCODE CAGE data, which were generated using Illumina technology, confirmed several FANTOM CAGE peaks (generated by HeliscopeCAGE) but were not exactly aligned with those of FANTOM (Supplementary Figure S2B). Note that the FANTOM CAT annotation was actually shown to be more accurate in 5' end transcript definition compared to other catalogs [4]. Moreover, a small fraction of Start-seq [27] and DECAP-seq [28] TSSs can also initiate in sequences with a preference for T, and this clearly does not represent a genuine motif (Supplementary Figure S3A and B). A similar observation was made with TSS-seq data collected in Arabidopsis thaliana [29] (Supplementary Figure S4, motif #2).
Provided that the Heliscope sequencing used by FANTOM5 can be internally primed at polyT tracts (Supplementary Figure S2), we can legitimately question the relevance of T STR CAGEs [22]. Several features are indeed in agreement with the idea that T STR CAGEs could, for instance, arise from internal priming within introns of messenger RNAs (Supplementary Figures S5 to S9 and Supplementary Tables S1 and S2). However, this artefact scenario cannot explain all T STR CAGEs, as 8,926 (out of 63,974, > 14%) are 'intergenic', i.e. not located in the same orientation as one of the 53,220 genes of the FANTOM CAT annotation, one of the largest gene annotations so far. No major sequence difference distinguishes intergenic from intragenic T STR CAGEs (Supplementary Figure S10). Besides, we observed some concordance between several technologies that do not use oligo-dT priming (Supplementary Figures S2B, S3, S4 and S9). Together these observations raise the possibility that a fraction of T STR CAGEs represent genuine TSSs. The corresponding RNAs appear rather stable [30] according to the CAGE exosome sensitivity score previously computed by FANTOM [4] (median sensitivity score = 0.08, Supplementary Figure S6B), suggesting that they do not correspond to cryptic transcription [7]. To clarify the existence of these TSSs, we further investigated whether T STR CAGEs exhibit canonical TSS features.
Several T STR CAGE tags are truly capped. We used a strategy described by de Rie et al. [31], which compares CAGE sequencing data obtained with Illumina (ENCODE) vs. Heliscope (FANTOM) technologies. Briefly, the 7-methylguanosine cap at the 5' end of CAGE tags produced by RNA polymerase II can be recognized as a guanine nucleotide during reverse transcription. This artificially introduces mismatched Gs at Illumina tag 5' ends, which is not detected with Heliscope [31]. Although such a G bias is not clearly observed when considering the whole CAGE peak, it is readily detectable when considering the 3' end of T STRs (position -2 from the FANTOM CAGE summit) using Illumina ENCODE CAGE data produced in HeLa-S3 nuclei (Figure 2A). This bias is observed in other cell types and is comparable to that observed with CAGE tags assigned to gene TSSs (Figure 2B and Supplementary Figure S11). The G bias is even more pronounced in intergenic than intragenic T STR CAGEs, suggesting that reads corresponding to host gene mRNAs can mask transcription at STRs (Figure 2B and Supplementary Figure S11). Conversely, most CAGE tag 5' ends perfectly match the sequences of pre-miRNA 3' ends, as previously reported [31], or those of 61,907 randomly chosen genomic positions (Figure 2B and Supplementary Figure S11). Mismatched Gs at the 3' end of all T STRs located within the same genes as T STR CAGEs are also detected (Figure 2B and Supplementary Figure S11), though the abundance of tags is higher at T STR CAGEs (Figure 2B), suggesting the potential existence of false negatives (see below). Overall, these analyses show that T STR CAGEs are truly capped, as opposed to post-transcriptional cleavage products of pri-mRNAs [31] or to random genomic positions.
Many T STR CAGEs are associated with transcription-related epigenetic marks. We then determined whether T STR CAGEs are associated with epigenetic marks related to transcription at the chromatin level. Using the ENCODE DHS Master list, we confirmed that ∼ 11% of T STR CAGEs (7,028 out of 63,974) lie in DHSs, while this is true for only ∼ 6% of T STRs without CAGE peaks (76,616 out of 1,273,587, Fisher's exact test p-value < 2.2e-16). This difference was also found with another compendium of DHSs (https://www.meuleman.org/project/dhsindex/: 7,028 out of 63,974 T STRs with CAGE peaks were found in DHSs while only 76,616 out of 1,273,587 T STRs without CAGE peaks were, Fisher's exact test p-value < 2.2e-16) as well as with ATAC-seq data [32] (12% of T STR CAGEs (7,697 out of 63,974) but only 6.6% of T STRs without CAGE peaks (84,459 out of 1,273,587) were found in pan-cancer open regions, Fisher's exact test p-value < 2.2e-16).
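The enrichment comparisons above reduce to 2x2 contingency tables. As an illustration, the following minimal Python sketch reproduces this type of test with the counts quoted in the text for the ENCODE DHS Master list comparison; the variable names are ours and the software actually used by the authors is not specified here.

from scipy.stats import fisher_exact

# T STRs overlapping DHSs vs. not, split by presence of a CAGE peak (counts from the text)
in_dhs_with_cage, total_with_cage = 7028, 63974
in_dhs_without_cage, total_without_cage = 76616, 1273587

table = [
    [in_dhs_with_cage, total_with_cage - in_dhs_with_cage],
    [in_dhs_without_cage, total_without_cage - in_dhs_without_cage],
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.3g}")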
Since most T STR CAGEs are intronic, the differences observed may merely be due to the transcriptional state of the host gene and/or the genomic environment. We then investigated transcription at T STRs at the gene level and library-wise in order to preclude the effect of the global chromatin/gene environment. We created two sets of 'expressed' and 'non-expressed' T STR CAGEs: T STR CAGEs are considered 'expressed' if (i) associated with a detectable CAGE signal in the sample considered (TPM > 0) and (ii) located in a gene containing at least one 'non-expressed' T STR CAGE. Conversely, 'non-expressed' T STR CAGEs are (i) not detected in the sample considered but detected in other samples and (ii) located in a gene containing at least one 'expressed' T STR CAGE. Using these two sets of T STR CAGEs makes the analyses independent of host gene expression because the same genes are considered in both cases. Genome segmentation provided by combined ChromHMM and Segway [33,34] shows that 'expressed' T STR CAGEs are systematically more enriched in regions corresponding to predicted transcribed regions than 'non-expressed' T STR CAGEs (Figure 3A, Fisher's exact test p-value < 2.2e-16 in HeLa-S3 (CNhs12325) and in GM12878 (CNhs12331), p-value = 2.053e-13 in K562 (CNhs12334)). The ChromHMM/Segway segmentations integrate several ChIP-seq data sets corresponding to RNA polymerase II, 8 chromatin marks (H3K4me1, H3K4me2, H3K4me3, H3K27ac, H3K9ac, H3K36me3, H4K20me1, H3K27me3) and the CTCF transcription factor. Using the same comparison, transcription at T STRs was also confirmed with GRO-seq data (Figure 3B and Supplementary Figures S12 and S13).
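For concreteness, the following sketch shows one possible way to derive the two matched sets for a given library, assuming a table with one row per T STR CAGE, a 'gene_id' column and one TPM column per library; the column names and the pandas-based implementation are our own illustration, not the authors' code.

import pandas as pd

def split_expressed_sets(df: pd.DataFrame, sample: str) -> pd.DataFrame:
    # df: one row per T STR CAGE, with columns 'cage_id', 'gene_id' and one TPM column per library
    other = [c for c in df.columns if c not in ("cage_id", "gene_id", sample)]
    out = df.copy()
    out["status"] = "excluded"
    out.loc[out[sample] > 0, "status"] = "expressed"
    out.loc[(out[sample] == 0) & (out[other].sum(axis=1) > 0), "status"] = "non-expressed"
    # keep only genes containing at least one CAGE of each status,
    # so that host-gene expression is matched between the two sets
    has_both = out.groupby("gene_id")["status"].agg(
        lambda s: {"expressed", "non-expressed"} <= set(s)
    )
    keep_genes = has_both[has_both].index
    return out[out["gene_id"].isin(keep_genes) & (out["status"] != "excluded")]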
We then looked at epigenetic marks individually and used data collected by the Roadmap Epigenome consortium in H1 embryonic stem cells to compare the epigenetic status of 'expressed' vs. 'non-expressed' T STR CAGEs. In three replicates of untreated H1 cell CAGE libraries, H3K36me3 and H3K4me3 peaks are invariably enriched at 'expressed' T STR CAGEs (Figure 3C and Supplementary Figure S14). Similar profiles are obtained with ENCODE ChIP-seq data, although less pronounced in GM12878 (Supplementary Figure S15). H3K4me3 levels at T STR CAGEs are low (Supplementary Figures S14 and S15), as observed with lncRNAs [35]. H3K36me3 is a histone modification mark enriched in the gene body region and associated with transcription elongation [35]. H3K4me3 is a mark classically associated with active or poised transcription start sites [35]. Hence, 'expressed' T STR CAGEs are more associated with H3K4me3/H3K36me3 domains than 'non-expressed' ones. Interestingly, this type of 'K4-K36' domain has previously been used to characterize lncRNAs [36]. Figure 3C also shows an enrichment in H3K4me2 (see also Supplementary Figures S14 and S15), which was previously associated with intragenic cis-regulatory [37] and TF binding [38] regions. We concluded that, overall, detection of CAGEs at T STRs is associated with transcription-related chromatin marks.
Several T STR CAGEs correspond to annotated transcript and enhancer TSSs. We found that 11,180 T STRs (end+2bp) were associated with a 'robust' CAGE peak, i.e. peaks confirmed by external EST data and associated with H3K4me3 marks and DNase hypersensitivity sites (DHSs) [2], representing > 17% of all T STR CAGEs, and thereby confirming the results shown in Figure 3. We then assessed the presence of T STR transcription among annotated TSSs. Motif enrichment around FANTOM CAT TSSs [4] shows that ∼ 1.5% of them initiate downstream of T STRs (Figure 4A). Looking directly at the TSS coordinates of FANTOM CAT robust transcripts [4], we noticed that 6,734 TSSs, corresponding to 10,606 robust transcripts (out of 525,237, ∼ 2%), initiate 2bp after a T STR, with a clear enrichment in the lncRNA intergenic, lncRNA sense intronic and sense overlap RNA classes (hypergeometric test p-values < 2.2e-16 for the 3 RNA classes). A list of these TSSs is provided in Supplementary Table S3A. Similar results were obtained for 5,889 stringent transcripts (total n = 402,813) [4]. Importantly, transcript models in FANTOM CAT combine various independent sources (GENCODE release 19, Human BodyMap 2.0, miTranscriptome, ENCODE and an RNA-seq assembly from 70 FANTOM5 samples) and FANTOM CAT TSSs were validated with Roadmap Epigenome DHS and RAMPAGE data sets [4]. As expected, these TSSs only moderately contribute to gene expression (Supplementary Figure S16). No specific Open Reading Frame could be detected 2kb downstream of T STR CAGEs (Supplementary Figure S17).
The T homopolymer is also observed for ∼ 4% of FANTOM enhancer TSSs [3] (Figure 4B), though no enrichment in enhancer epigenetic marks is predicted by chromHMM/Segway (Figure 3A). Looking directly at genomic coordinates, 4,976 enhancers (out of 65,423, ∼ 7.6%) are defined with at least one T STR CAGE (Supplementary Table S3B) and 173 enhancers are defined by two T STR CAGEs [3]. Enhancer TSSs are therefore more often associated with T STRs than transcript TSSs (∼ 7.6% and ∼ 2% respectively, Fisher's exact test p-value < 2.2e-16). Similar results were obtained with mouse enhancers (Supplementary Figure S18), with 1,171 enhancers (out of 44,459, ∼ 2.6%) involving at least one T STR CAGE. These results strengthen the idea that a number of T STR CAGEs correspond to genuine TSSs.
T STR CAGEs interact with gene TSSs. We used ENCODE Chromatin Interaction Analysis by Paired-End Tag sequencing (ChIA-PET) directed against RNAP-II in the K562 cell line [39] to better characterize the genetic elements interacting with T STR CAGEs. As in Figure 3, we used the sets of 'expressed' and 'non-expressed' T STR CAGEs in K562 cells. The 'expressed' T STR CAGEs are slightly enriched in ChIA-PET data compared to 'non-expressed' T STR CAGEs (Figure 5A and Supplementary Figure S19). We further used the FANTOM5 CAGE TSS classification [2] and showed that the CAGE peaks interacting with 'expressed' T STR CAGEs are preferentially classified as 'strict' and 'weak' TSSs, compared to those interacting with 'non-expressed' T STR CAGEs (Figure 5C and Supplementary Figure S20). Note that ChIA-PET data contain too many intragenic interactions to clearly conclude that these TSSs correspond to host gene TSSs.
Transcription at T STRs can be predicted by a sequence-based deep learning model. We decided to probe transcription at T STRs using a machine learning approach. Specifically, we used a regression strategy to predict the mean raw tag count of each T STR (> 9 Ts) across all FANTOM libraries (Supplementary Figure S21). We first studied the link between tag count and STR length and observed that T STRs with the same length can exhibit different tag counts and, conversely, T STRs with different lengths can harbour similar tag counts (Supplementary Figure S22A). Plotting the densities of mean raw tag counts at T STRs associated or not with CAGE peaks indicates the existence of T STRs not associated with CAGE peaks but associated with CAGE tags nonetheless (Supplementary Figure S23). We computed the G bias, as in Figure 2, at T STRs not associated with a CAGE peak but associated with high or low tag counts (Supplementary Figure S24). As expected, T STRs without a Heliscope CAGE peak and with low tag count are associated with few Illumina CAGE reads compared to T STRs without a CAGE peak but with high tag count, despite being > 4 times more numerous (Supplementary Figure S24). However, Illumina reads detected at T STRs without a CAGE peak but with high tag count (> 18.45, n = 52,999) are clearly biased towards G, in contrast to T repeats without a CAGE peak but with low tag count (< 4, n = 218,074). These results support the existence of potential false negative FANTOM CAGE peaks and likely explain the results depicted in Figure 2B and Supplementary Figures S9 and S11.
The machine learning approach considered was a deep Convolutional Neural Network (CNN). The model architecture and the construction of the different sets used for learning are shown in Supplementary Figure S25. As input, we used sequences spanning ± 50bp around the 3' end of each T STR. Longer sequences were tested without improving the accuracy of the model (Supplementary Figure S26). The accuracy of our model, computed as the Spearman correlation between the predicted and the observed tag counts, is remarkably high (median Spearman r over 10 trainings = 0.81, Supplementary Figure S27A) and the error low (median absolute error for one model = 1.82, Supplementary Figure S27B). Note that we also trained CNN models to predict transcription at T STRs in mouse (Spearman r = 0.77 and median absolute error = 1.82) and chicken (Spearman r = 0.61 and median absolute error = 1.09) but, as previously mentioned, compared to human, mouse and chicken CAGE data are small-scaled in terms of the number of mapped reads and the diversity of CAGE libraries [2], making the signal less accurate than in human (see Supplementary Figure S28 for details).
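To make the setup concrete, the sketch below shows a small Keras/TensorFlow CNN regressing tag counts from one-hot encoded 101-bp windows and evaluating it with the Spearman correlation used in the text. The layer sizes and hyperparameters here are placeholders of our own; the actual architecture is the one described in Supplementary Figure S25.

import numpy as np
from scipy.stats import spearmanr
from tensorflow.keras import layers, models

def one_hot(seq):
    # one-hot encode an A/C/G/T sequence into a (length, 4) matrix
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    mat = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in idx:
            mat[i, idx[base]] = 1.0
    return mat

def build_model(seq_len=101):
    model = models.Sequential([
        layers.Input(shape=(seq_len, 4)),
        layers.Conv1D(64, kernel_size=8, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(32, kernel_size=4, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),  # regression output: mean raw tag count
    ])
    model.compile(optimizer="adam", loss="mae")
    return model

# After model.fit(X_train, y_train, ...), accuracy is assessed as in the text:
# rho, _ = spearmanr(model.predict(X_test).ravel(), y_test)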
Probing sequence-level instructions for transcription at T STRs. We further used our deep CNN model to probe the sequence-level instructions for transcription at T STRs. We noticed that the length of the T STRs associated with CAGE peaks is greater than that of T STRs not associated with CAGE peaks but located within the same genes (Figure 6A). Likewise, the CAGE mean TPM expression, computed in [2], slightly increases with T STR length (Figure 6B). We thus used a perturbation-based approach [40] and compared CNN predictions simulating varying lengths of STRs. We chose a T STR with a low error and increased the length of its polyT tract from 9 to 51 Ts. The predicted tag count increases with the length of the polyT tract (Figure 6C), revealing the influence of T STR length on CNN predictions. Furthermore, we randomly modified sequences in order to maximize their predicted tag count and to computationally generate an optimized sequence (see methods). Pairwise comparisons of these optimized sequences (2,000 comparisons) reveal that the main feature common to all optimized sequences is the length of the T repeat (Supplementary Figure S29).
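A minimal sketch of this length-perturbation analysis is given below, reusing the `model` and `one_hot` helpers from the illustrative CNN sketch above. The flanking sequences and the exact windowing convention (here, 50 bp ending at the tract plus 51 bp downstream) are assumptions of ours.

def window_with_tract(flank_up, flank_down, n_t, half=50):
    # build a 101-bp window whose upstream half ends with a tract of n_t Ts
    up = (flank_up + "T" * n_t)[-half:]
    return up + flank_down[: half + 1]

def predict_over_lengths(model, flank_up, flank_down, lengths=range(9, 52)):
    preds = []
    for n in lengths:
        seq = window_with_tract(flank_up, flank_down, n)
        x = one_hot(seq)[None, :, :]          # add batch dimension
        preds.append(float(model.predict(x, verbose=0).ravel()[0]))
    return list(lengths), preds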
It is important to note that the mean tag count used as input to learn our CNN model is based on CAGE data previously mapped along T STRs [2] (see Supplementary Figure S30 for a representative example). Consequently, this metric may be influenced by the length of the T STR itself (Supplementary Figure S22). To exclude the possibility that the importance of T STR length captured by our CNN model is merely linked to the tag count calculation, we built another model considering as input the mean tag count computed in an arbitrary window encompassing 20bp upstream and 5bp downstream of T STR ends, no matter the length of the T STR. In that case, the mean tag count does not necessarily reflect the CAGE signal mapped by FANTOM (see for instance Supplementary Figure S30) but it is insensitive to T STR length (Supplementary Figure S22B compared to Supplementary Figure S22A). This model yields similar accuracy (Spearman r between prediction/observation = 0.83) and the influence of T STR length is also captured, although, as expected, to a lesser extent than with the initial model (Supplementary Figure S31). Because this second model is built on an arbitrary window instead of a genomic annotation (i.e. the STR), and to stay in agreement with previous CAGE mapping [2], we decided to use the first model, trained on CAGE tag counts computed along STR length, for the rest of the study.
We next tested whether our model learns additional features lying downstream of the T STR. The accuracy of our model increases when considering increasing lengths of sequence located downstream of the T STR, with a plateau at 40-50 bp (Figure 6D). We further considered 50 sequences with low error and high tag count (set 'high') and 50 sequences with low error and low tag count (set 'low'). For all pairs of 'high' and 'low' sequences, we sequentially replaced 5-mers from one sequence with 5-mers from the other. We then predicted the tag count of the 2 new sequences for each 5-mer swapping and assessed transcription change as the difference between tag counts before and after 5-mer swapping. Inserting 5-mers of 'high' sequences in 'low' sequences induces overall positive changes (i.e. increased transcription) (Figure 6E), while inserting 5-mers of 'low' sequences in 'high' sequences overall decreases transcription (Figure 6F). Besides, this 5-mer segmentation reveals that the instructions located near the end of T STRs (i.e. close to the CAGE peak summit) are crucial (i.e. can induce more change in the prediction) compared to the rest of the sequence (Figures 6E and F). Looking for motif enrichment, we further noticed that several motifs are enriched in T STRs with high tag counts (top 5,000) compared to low tag counts (bottom 5,000) (Supplementary Figure S32). However, except for the T homopolymer (Supplementary Figure S32), no single motif clearly distinguishes T STRs with high tag count from T STRs with low tag count. Likewise, a simple CNN with only one convolutional layer (1 layer-CNN), which is similar to finding enriched k-mers or motifs, performs poorly (median Spearman r = 0.2). Hence, a single representation cannot model the intricate structure of the features captured by the CNN to predict transcription at T STRs.
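The 5-mer swapping procedure can be sketched as follows, again assuming the illustrative `model` and `one_hot` helpers from above; the choice of non-overlapping 5-mers and the sign convention (positive = increased predicted transcription) follow the description in the text.

def kmer_swap_effects(model, target_seq, donor_seq, k=5):
    # replace each k-mer of target_seq by the k-mer at the same position in donor_seq
    # and record the change in predicted tag count relative to the unmodified target
    base = float(model.predict(one_hot(target_seq)[None], verbose=0).ravel()[0])
    effects = []
    for start in range(0, len(target_seq) - k + 1, k):
        swapped = target_seq[:start] + donor_seq[start:start + k] + target_seq[start + k:]
        pred = float(model.predict(one_hot(swapped)[None], verbose=0).ravel()[0])
        effects.append((start, pred - base))  # positive = increased predicted transcription
    return effects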
Alternative methods were also tested, namely LASSO [41] and Random Forest [42], using as predictive variables the mono-, di- and tri-nucleotide rates in the regions located upstream (-50bp) or downstream (+50bp) of T STR ends, the T STR lengths and the first nucleotide (A, C, G or T) downstream of the T STR (position -1 when considering 0 as the CAGE summit). In that case, the model accuracies are 51% and 61% for LASSO and Random Forest respectively. These models are not as accurate as the CNN but they confirm that T STR length is the main feature being used from the sequence located upstream of the T STR end, while other features lie downstream of the T STR (Supplementary Table S4 and Supplementary Figure S33). These analyses confirm the existence of instructions for transcription lying downstream of T STRs, arguing against artifactual internal priming.
Clinically relevant variants are predicted to impact transcription at T STRs.
Previous studies showed that genomic variations at T STRs can impact gene expression. For instance, the variant rs10524523, located in TOMM40 intron 6, which corresponds to a T STR associated with a CAGE signal in several FANTOM libraries, is linked to the age of onset of cognitive decline, Alzheimer's disease and sporadic inclusion body myositis [43][44][45][46][47][48]. TOMM40 mRNA expression in brain appears to be linked to the length of the T STR, with longer variants giving higher expression [43,44]. Conversely, the KIAA0800/ENSG00000145041 promoter is repressed by an upstream element involving a T STR and two Alu repeats [49]. In that case, the CAGE signal detected at this particular T STR has been annotated as one TSS of KIAA0800/ENSG00000145041 in FANTOM CAT (namely ENCT00000303320.1, FTMT20900010597.1, MICT00000243080.1) [4]. More widely, the lengths of T STRs have been reported to impact gene expression (such STRs are called expression STRs or eSTRs) in a directional and strand-specific manner [50].
We further used our model to predict the effects of genomic variations located within sequences surrounding T STRs (end of T STR ± 50bp). First, we observed, in the case of the TOMM40 intron 6 variant, that predicted transcription is positively correlated with the length of the T STR (Figure 7A). Second, we compared the effect of variants listed in dbSNP, the world's largest database for nucleotide variations [51], and variants listed in ClinVar, which pinpoints medically important variants [52]. Almost all T STRs harbor variations listed in dbSNP (962,241 out of 1,169,145), while 2,577 T STRs are associated with variations listed in ClinVar. Strikingly, these T STRs tend to be associated with high tag counts compared to T STRs with dbSNP variants (Figure 7B), indicative of the potential clinical relevance of transcription at T STRs. The ClinVar variants were not equally distributed around T STRs (Figure 7C). Overall, ClinVar variants are frequently found around T STRs, with a peak at positions 0 and +1 (0 being the end of the T STR), close to the CAGE summit (+2 in that case). This particular distribution was not observed when considering all permissive CAGE peak summits (Supplementary Figure S34). The variants located in the vicinity of T STRs are predicted by our model to induce extensive changes in transcription, either positive (i.e. transcription increase) or negative (i.e. transcription decrease) when located in the vicinity of the T STR, and overall negative when located within the T STR (Figure 7D). Genomic variations within a T STR will more likely disrupt the repeat and decrease its length, thereby inducing an overall decrease in predicted transcription. The predicted changes are not linked to the nature of the variants (i.e. SNP or insertion/deletion), as any type of variant can have positive or negative effects on transcription, with insertions/deletions having broader effects (Supplementary Figure S35). Variations linked to certain diseases were enriched in the 101bp-long regions centered around T STR ends, but no clear enrichment for a specific type of disease was noticed (Supplementary Table S5).
Discussion
We looked at recurring DNA motifs associated with unassigned FANTOM CAGE peaks [2] and showed that a significant portion of them initiate at T STRs in several species. We do not exclude that some CAGE peaks initiating at T STRs are generated by internal priming during Heliscope sequencing. Yet we provide evidence that a fraction represents genuine TSSs as (i) T STR CAGEs are truly capped and associated with typical epigenetic marks, (ii) several of them correspond to annotated long non-coding transcripts or enhancer RNAs and are confirmed by various technologies not requiring oligodT priming, and (iii) their expression can be predicted by a deep learning model, which uses features located downstream of T repeats, unlikely to be involved in internal priming. We show that GRO-seq coincides with CAGE at T STRs (Figure 3B and Supplementary Figures S11 and S12). Li et al. suggested that CAGEs detected at polyA/T repeats correspond to technical artifacts because they are not detected by a GRO-seq method that enriches for 5'-capped RNAs (GRO-cap) [22]. However, in comparison to CAGE, GRO-cap generates fewer reads mapping to introns [53], making it hard to confirm the existence of faintly expressed and intronic CAGEs, such as T STR CAGEs, with this technology. In fact, non-coding transcription appears overall much less efficient than that of mRNAs [7] (Supplementary Figure S7A). It is also worth noticing that several features suggesting that T STR CAGEs are technical artifacts may constitute specific properties of non-coding RNAs, with, notably, the majority of lncRNAs being enriched in the nucleus [54][55][56].
The human genome is scattered with repetitive sequences, and the vast majority of the genomic DNA is supposed to be transcribed [1,5,6]. Implicitly, the human transcriptome should contain a large portion of RNAs derived from repetitive elements [57]. We provide here an example of such RNAs, which may represent the counterparts of satellite non-coding RNAs [58]. Homopolymers of > 10 Ts appear at frequencies well above chance in the non-coding regions of several organisms, suggesting selective advantages that lead to their overrepresentation [26]. Additionally, using FANTOM and GENCODE genome annotations, we noticed that T STRs are enriched in introns of coding, as opposed to non-coding, genes (Supplementary Tables S1 and S2).
In human and mouse genomes, T STRs can correspond to the polyA tails of short interspersed nuclear elements (SINEs) [59]. According to RepeatMasker, only a fraction of human T STRs associated with CAGE peaks correspond to SINEs (∼ 37% vs. ∼ 50% for T repeats without CAGE peaks, Supplementary Table S6). T STR CAGEs are also detected in chicken (Supplementary Figure S1), where SINE elements are rare [60] (note that repeats of 9 Ts are similarly abundant in chicken and human and represent 0.04% of the genome). These observations support the idea that transcription at T STRs exists independently of SINE elements. However, we do not exclude a possible co-evolution between T STR transcription and SINEs, at least in human and mouse, notably because of the importance of polyA tail length for SINE transposition on the one hand [61,62] and the influence of this length on transcription revealed in this study on the other hand.
At this stage, whether T STR-associated non-coding RNAs or the act of transcription per se is functionally important [7] remains to be clarified. This is all the more warranted as T STR transcription is associated with clinically relevant genomic variations (Figure 7B). This non-coding transcription, mostly sense and intronic, may specifically regulate certain coding, as opposed to non-coding, genes (Supplementary Figure S7, Supplementary Tables S1 and S2). In line with this idea, we show here that T STR CAGEs preferentially interact with gene TSSs (Figure 5).
T homopolymers have been shown to act as promoter elements favoring transcription by depleting repressive nucleosomes [63], specifically by orienting the displacement of nucleosomes [64]. As a consequence, T homopolymers can increase transcription of reporter genes to levels similar to those obtained with TF binding sites [65]. Besides, eSTRs located within the gene body are more likely to have T on the template strand and show higher gene expression with increasing repeat number [50]. This directional and strand-specific bias [50], which cannot be explained by mere nucleosome depletion [63,64], could very well be molecularly caused by transcription. These observations, in addition to the overrepresentation of long T STRs in eukaryotic genomes [26], suggest functional role(s) for transcription at T STRs. Dedicated experiments are now required to formally assess its overall contribution to gene expression.
Datasets and online resources.
Publicly available data used in this study can be found at the URLs provided in Supplementary Table S7. The genome assemblies considered are human hg19, mouse mm9, chicken galGal5, dog canFam3, rat rn6, and rhesus macaque rheMac8. Human intron and RepeatMasker coordinates were downloaded from the UCSC Table Browser.
Characterization of T repeats in human and mouse.
T STRs of > 9 Ts were identified in the human, mouse and chicken genomes using the tool developed by Martin et al. [66]. The human, mouse and chicken coordinates are provided as bed files available at https://gite.lirmm.fr/ibc/t-repeats.
Epigenetic marks at T repeats.
The average chromatin CAGE signal around T STR CAGEs (Figure S9) was computed using the agg subcommand of bwtool [68]. For RNAP-II ChIA-PET, interacting regions within the same chromosome were extracted using column $4 of the bed file. Roadmap and GENCODE ChIP-seq as well as GRO-seq data were extracted from the URLs indicated in Supplementary Table S7. To look at epigenetic marks around T repeats in the H1-hESC cell line, the coordinates of 'expressed' and 'non-expressed' T repeat ends +2 bp (see results for details) were intersected with Roadmap bed files using bedtools intersect [67].
intersectBed -wb -a epigenetic_mark.bed -b expressed_polyT_CAGEs.bed | sort -u | wc -l

The fractions of intersecting T repeats within each set were compared for each available epigenetic mark and each of the 3 H1-hESC samples using Fisher's exact test. The same procedure was used in other cell lines with ENCODE broadPeak data for H3K4me1, H3K4me2, H3K4me3 and H3K36me3. For GRO-seq data, we used bigwig files from the 'gro-seq.colorado' repository (Supplementary Table S7). In the H1-hESC cell line, we compared the fractions of intersecting 'expressed' and 'non-expressed' T repeats similar to the procedure used for GENCODE and Roadmap data. The GRO-seq signal around these two sets was also compared as the mean signal in a 100 bp window centered on the T repeat end using deeptools multiBigwigSummary [69]. We compared the fractions of T repeats with a non-null signal in both 'expressed' and 'non-expressed' sets using a Wilcoxon test. We also calculated the sum of these mean signals across all the samples for each T repeat, with similar results.
Convolutional Neural Network.
A custom Zenbu [73] XML script was used to calculate the mean raw tag count (Q3 filter, no normalization) of each base genome-wide in 988 FANTOM libraries. We summed these means along each T STR + 5bp. These values were used as the predicted variable in a CNN model built on sequences centered around each T repeat end (n = 1,169,236). The same procedure was repeated with mouse (397 libraries) and chicken (58 libraries) data using the same architecture. The coordinates of the T repeats with the mean tag count values (score column) used to train our models are provided as bed files at https://gite.lirmm.fr/ibc/t-repeats. The fasta file corresponding to T repeat sequences with their tag counts was parsed using the Biopython [74] library. To build the convolutional neural network, we used the Keras [75] library in Python with a TensorFlow [76] backend. The subsequent analyses altering the length of T repeats and swapping downstream sequences were performed by treating sequences as strings. The complete code can be found at https://gite.lirmm.fr/ibc/t-repeats.
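The summation step can be illustrated with a generic sketch, assuming a per-base vector of mean raw tag counts for one chromosome is already available (the study itself computed these values with a custom Zenbu XML script); the 0-based half-open coordinate convention and the helper names are ours.

import numpy as np

def str_tag_count(mean_per_base, start, end, pad=5):
    # mean_per_base: per-base mean raw tag count over the FANTOM libraries (one chromosome)
    # start, end: 0-based half-open coordinates of the T STR; pad extends past the tract end
    return float(np.asarray(mean_per_base)[start:end + pad].sum())

def tag_counts_for_strs(mean_per_base, str_coords, pad=5):
    # str_coords: iterable of (start, end) pairs; returns one target value per T STR
    return [str_tag_count(mean_per_base, s, e, pad) for s, e in str_coords]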
Feature extraction.
We looked for motifs by random optimization of the CNN prediction using the following algorithm: first, two sequences are randomly chosen in the test set. For each sequence, the effect of changing one nucleotide at a random position to A, T, C or G is assessed. The nucleotide with the maximum predicted tag count is kept and the procedure continues testing another position. This procedure is repeated until no further increase of the predicted tag count is observed over 3,500 randomly selected positions. The two optimized sequences are then compared position-wise in order to return 1 for each position with the same nucleotide in both sequences and 0 otherwise. This process was repeated for 2,000 pairs of random sequences. The same procedure was applied to minimize the tag count for each sequence. The sequence logos were generated using the seqlogo library https://pypi.org/project/seqlogo/. The complete code can be found at https://gite.lirmm.fr/ibc/t-repeats.
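A condensed sketch of this optimization loop is given below for orientation (the authors' full code is at the repository linked above); it reuses the illustrative `model` and `one_hot` helpers from the CNN sketch and simplifies the stopping rule to a fixed number of 3,500 tested positions.

import random

def optimize_sequence(model, seq, n_trials=3500, maximize=True):
    def predict(s):
        return float(model.predict(one_hot(s)[None], verbose=0).ravel()[0])
    best, best_score = seq, predict(seq)
    for _ in range(n_trials):
        pos = random.randrange(len(best))
        for base in "ATCG":
            candidate = best[:pos] + base + best[pos + 1:]
            score = predict(candidate)
            better = score > best_score if maximize else score < best_score
            if better:
                best, best_score = candidate, score
    return best

def positionwise_identity(seq_a, seq_b):
    # 1 where two optimized sequences carry the same nucleotide, 0 otherwise
    return [int(a == b) for a, b in zip(seq_a, seq_b)]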
Predicting impact of ClinVar variants
ClinVar and dbSNP vcf files were downloaded and then converted into bed files. They were intersected with the coordinates of all T repeats using bedtools intersect [67] as follows:

bedtools intersect -a clinvar_mutation.bed -b t_repeats.bed -wa -wb > t_repeats_clinvar.bed
bedtools intersect -a db_snp_mutation.bed -b t_repeats.bed -wa -wb > t_repeats_dbSNP.bed

The bed files generated above were then converted into a Pandas [77] dataframe in Python and the variants were introduced into the T repeat sequences. The CNN model developed previously was then used to predict the mean tag count of the mutated sequences. The change was computed as (a − b) / b, expressed as a percentage of the original tag count, where a is the predicted tag count after variation and b the predicted tag count before variation. The complete code can be found at https://gite.lirmm.fr/ibc/t-repeats.
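For a single substitution, the scoring step can be sketched as follows (SNVs only; indels would additionally require re-trimming the window to the model input length). It again assumes the illustrative `model` and `one_hot` helpers from the CNN sketch above, and the function and argument names are ours.

def variant_effect(model, ref_window, offset, ref, alt):
    # ref_window: reference sequence (T STR end +/- 50 bp); offset: 0-based position of the variant
    assert ref_window[offset:offset + len(ref)] == ref, "reference allele mismatch"
    mut_window = ref_window[:offset] + alt + ref_window[offset + len(ref):]
    b = float(model.predict(one_hot(ref_window)[None], verbose=0).ravel()[0])
    a = float(model.predict(one_hot(mut_window)[None], verbose=0).ravel()[0])
    return 100.0 * (a - b) / b  # change as a percentage of the original prediction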
Comparison with other models.
To compare our deep learning regression model with other parametric and non-parametric approaches, we used linear regression with an l1-norm penalty via LASSO [41] and Random Forest [42] to predict the mean tag count signal. For both approaches, the predictive variables are the nucleotide, dinucleotide and trinucleotide content computed in the 50 bp downstream and 50 bp upstream of the T homopolymer end, excluding the T homopolymer itself. We also integrated the length of the T repeat and the nature of the nucleotide following the T repeat end (A, C or G at position -1, with the CAGE summit at 0). LASSO inference was performed using the function cv.glmnet from the R package glmnet and Random Forest was performed using the randomForest R package. To compute the distribution of the minimal depth we used the min_depth_distribution function from the R package randomForestExplainer.
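For readers working in Python, the feature construction can be mimicked as below with scikit-learn; the analyses reported here used the R packages named above, so this is only an illustrative analogue, and the variable names and the 500-tree setting are our choices.

from itertools import product
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

def kmer_rates(seq, k):
    # rate of every A/C/G/T k-mer in seq (k = 1, 2, 3 for mono-, di- and tri-nucleotides)
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    n = max(len(seq) - k + 1, 1)
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in counts:
            counts[km] += 1
    return [counts[km] / n for km in kmers]

def features(upstream50, downstream50, t_len):
    # upstream/downstream 50 bp exclude the T homopolymer; t_len is the tract length
    first = downstream50[0]                      # nucleotide at position -1 relative to the summit
    feats = []
    for region in (upstream50, downstream50):
        for k in (1, 2, 3):
            feats.extend(kmer_rates(region, k))
    return np.array(feats + [float(t_len)] + [float(first == b) for b in "ACG"])

# X = np.vstack([features(u, d, n) for u, d, n in records]); y = mean tag counts
# RandomForestRegressor(n_estimators=500).fit(X, y); LassoCV(cv=5).fit(X, y)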
Declarations
Ethics approval and consent to participate Not applicable.
Consent to publish Not applicable.
Availability of data and materials All data used in this study are provided in Supplementary Tables. The code to predict transcription at T STRs with a CNN is available at https://gite.lirmm.fr/ibc/t-repeats.
Competing interests
The authors declare that they have no competing interests.

Figure 1 Motif discovery around CAGE peakmax. HOMER [23] was used to find 21bp-long motifs in 21bp-long sequences centered around the peak summit of FANTOM5 CAGEs [2]. C. Among 158,966 mouse CAGEs, 8,274 are associated with T homopolymers. Note that another repeat of 2-5 Gs is also identified in FANTOM CAGE data with strong similarities in both human and mouse (A and C). However, this repeat is less strict and is not conserved in dog, chicken, macaque and rat (Supplementary Figure S1) and is absent in Start-seq and DECAP-seq data (Supplementary Figure S3). Though this motif may represent a biologically relevant signal in human and mouse, these features make it less relevant for our study.

Figure 3 (see Supplementary Figures S12 and S13 for separated analyses). Fisher's exact test indicates that the fraction of T STR CAGEs with a non-null GRO-seq signal obtained for 'expressed' T STR CAGEs is higher than that obtained for 'non-expressed' T STR CAGEs (p-value < 2.2e-16). C. Intersection of 'expressed'/'non-expressed' T STR CAGE coordinates with those of Roadmap epigenetic data collected in H1 embryonic stem cells. The fractions obtained for 'expressed' and 'non-expressed' T STR CAGEs were compared using Fisher's exact test. The colors correspond to p-values < 5e-3 (green) or ≥ 5e-3 (grey).
Figure 4
Motif discovery around FANTOM CAT transcript and enhancer TSSs. HOMER [23] was used to find 21bp-long motifs in 21bp-long sequences centered around TSSs of FANTOM CAT robust transcripts (A) or enhancers (B). A. 525,237 transcript starts as defined in [4] were used. The top 5 motifs are shown. Motif enrichment is indicated by binomial test p-value. B. 65,423 enhancers defined in [3] were used, corresponding to 130,846 TSSs. Only 5 motifs were identified by HOMER.
Figure 6 Probing sequence-level instructions for transcription at T repeats A.
Length of T STRs associated (blue) or not (red) with CAGE peaks. For the sake of comparison, only T STRs without a CAGE peak and located within the same genes as T STRs with CAGE peaks were considered. B. Average CAGE TPM expression computed in [2] in five T STR length intervals as defined by the R quantcut function. The number of CAGE peaks considered is 10,632. The y-axis was limited to 0.5. C. A T STR whose transcription is predicted with low error was chosen and its length was gradually increased from 9 to 51 (x-axis). The prediction was assessed at each step (y-axis). D. Deep CNN models were trained using 50 bp upstream of T STR ends and varying lengths of sequence located downstream, ranging from 0 to 50 bp (x-axis). For each trained model, the Spearman correlation was computed on the test set (30% of the entire set of T STRs, i.e. 350,770 sequences) between the prediction vector and that of observed tag counts (i.e. the accuracy of the model, y-axis). E and F. 50 sequences with low error and high tag count (set 'high') and 50 sequences with low error and low tag count (set 'low') were chosen. For all pairs of 'high' and 'low' sequences, we sequentially replaced 5-mers from one sequence with 5-mers from the other and predicted the tag count of the 2 new sequences for each 5-mer. Transcription change was assessed as the difference between tag counts before and after 5-mer swapping. (E) shows the results of insertions of 5-mers of 'high' sequences in 'low' sequences, while (F) shows the results obtained inserting 5-mers of 'low' sequences in 'high' sequences.

Figure 7 Evaluating the effect of genetic variants on transcription at T STRs A. Increasing the length of the T STR located in TOMM40 intron 6 as in Figure 6C increases the predicted tag count. B. Tag count distribution of all T STRs (black), T STRs with dbSNP variants (red) and T STRs associated with ClinVar variants (green). C. Frequency of ClinVar variants around T STR ends (position 0). The y-axis shows the fraction of all ClinVar variants located at the position indicated on the x-axis. The low variant frequency within T STRs is presumably due to mapping issues. D. Prediction changes induced by ClinVar variants (see methods for details). The changes are shown as percentages of the original tag count in a position-wise manner (see 'methods' section).
Additional Files
Additional figures

Figure S1 Motif discovery around CAGE summit. The procedure described in Figure 1 was repeated with CAGE peak summits collected in chicken, dog, macaque and rat.

Figure S2 Heliscope vs. Illumina CAGE sequencing. A. In the HeliScopeCAGE protocol, capped RNA will be randomly primed and the RNA/DNA duplex will be captured on beads by the CAP Trapper reaction. The first-strand cDNAs are then released and a poly-A tail is added to prepare them for sequencing. The HeliScope platform primes the sequencing reactions with poly-dT oligonucleotides grafted to the surface of its flow cells. In HeliScope CAGE, it is intended that the sequencing reactions will start after the poly-A tail added to the first-strand cDNA, thus producing reads that align at the TSS. If the first-strand cDNA contains internal A-rich regions, the priming can also happen internally and yield CAGE peaks at internal poly-T tracts. B.
Figure S3
Motif discovery around Start-seq and DECAP-seq peak summits. A. The procedure described in Figure 1 was repeated on TSSs detected by Start-seq [27] (forward data, n = 1,086,787), normalizing by total GC-content with the -gc option. B. The same procedure was applied to DECAP-seq TSSs [28] (n = 106,742). GSM2422532 and GSM2422533, which are not stranded data, were merged. In that case, the INR motif is not detected with this procedure and motif 4 (poly tract) is labelled as a possible false positive. Note that the number of mouse CAGE summits [2] located within a 10bp window around Start-seq [27] and DECAP-seq [28] TSSs is low: ∼ 34.7% of CAGE TSSs have the same coordinates as Start-seq TSSs but only ∼ 6.8% of CAGE TSSs share coordinates with DECAP-seq TSSs and ∼ 7.2% of Start-seq TSSs share coordinates with DECAP-seq TSSs. However, we noticed that mouse STRs of 9 Ts associated with CAGE peaks are more associated with Start-seq and DECAP-seq peaks than T STRs not associated with CAGE peaks: ∼ 2% of CAGE-associated T STRs are also associated with Start-seq peaks (94 out of 4,600 T STRs, considering only forward data) versus only ∼ 0.3% of T STRs not associated with CAGE but associated with Start-seq peaks (1,256 out of 413,475, Fisher's exact test p-value < 2.2e-16). Likewise, ∼ 0.36% of CAGE-associated T STRs are also associated with DECAP-seq peaks (32 out of 8,825) versus only ∼ 0.03% of T STRs not associated with CAGE but associated with DECAP-seq peaks (307 out of 826,129, Fisher's exact test p-value < 2.2e-16).
Figure S4
Motif discovery around TSS-seq peak summits collected in Arabidopsis thaliana. The procedure described in Figure 1 was repeated on TSS-seq peaks [29], normalizing by total GC-content with the -gc option.

[4], of all CAGEs (red) and that of 10,926 T STR CAGEs (blue) are shown as boxplots. Directionality close to 1 indicates that sense transcription is more abundant than antisense transcription within a region of -800bp/+200bp centered around the CAGE summit, while a directionality of -1 indicates that antisense transcription is more abundant. These results argue against the presence of divergent (i.e. upstream antisense) RNAP-II transcription associated with T STR CAGEs, as widely observed for canonical TSSs [78]. B. Exosome sensitivity scores were extracted from [4]. Exosome sensitivity of a CAGE cluster is measured as the relative fraction of CAGE signal observed after exosome knockdown in HeLa-S3 cells, as previously described in [30]. The exosome sensitivity of all CAGEs (red) and that of 10,926 T STR CAGEs are shown as boxplots.

Total CAGE signal on the + strand is shown. Data are from ENCODE and were generated using Illumina technology. Chromatin-associated RNAs are detected even at T STRs without CAGE peaks, suggesting the existence of false negatives (see Supplementary Figure S24).
Figure S10 Motif discovery around intragenic and intergenic CAGE summits. HOMER [23] was used to find 21bp-long motifs in 21bp-long sequences centered around the summit of FANTOM5 CAGEs located (intragenic) or not (intergenic) in FANTOM CAT genes. Intergenic CAGE summits were used as foreground, while intragenic summits were used as background. The motif shown is found in only 0.13% of the intergenic CAGE summits, making it hardly discriminative.

A. We evaluated the usage of FANTOM CAT TSSs associated (red) or not (blue) with a polyT tract for each gene with more than 1 TSS. For each assigned TSS, we counted the number of samples wherein the considered TSS is expressed. We then sorted all TSSs for each gene according to this usage value and divided the rank of each polyT-associated TSS by the length of the list of all TSSs assigned to each gene (usage ratio indicated on the y-axis). TSSs associated with polyT repeats are slightly less used than TSSs not associated with polyT repeats (Wilcoxon test p-value < 2.2e-16). However, the median usage ratio for polyT-associated TSSs is 0.5, indicating that these TSSs are far from being the least used TSSs. B. We further evaluated the contribution of polyT-associated TSSs to gene expression. For each gene in each library, we divided the sum of the tag counts (CPM RLE normalized) of all TSSs associated (red) or not (blue) with a polyT tract by the sum of the tag counts of all TSSs assigned to the gene considered (contribution ratio indicated on the y-axis). As expected given the faint expression of T STR CAGEs, polyT-associated FANTOM CAT TSSs contribute only poorly to gene expression, with a median contribution ratio of 0 (the median contribution of TSSs not associated with polyT repeats logically equals 1).
Figure S17
No specific Open Reading Frame could be detected 2kb downstream of polyT CAGEs. The stop codon distribution was assessed along 2,000 nt downstream of polyT-associated CAGE summits (n = 63,974). The x-axis represents the distance to the transcript start and the y-axis represents the frequency of the three stop codons TAA, TAG and TGA. All 6 possible frames were considered (see y-axis labels).
Figure S18 Motif discovery around FANTOM mouse enhancer TSSs. HOMER [23] was used to find 21bp-long motifs in 21bp-long sequences centered around TSSs of 44,459 mouse enhancers defined in [3] and corresponding to 88,918 TSSs. Only the top 5 motifs are shown.

To compute the raw mean tag count of each T repeat, we first calculated the mean tag count of each base genome-wide in 988 FANTOM libraries. We then summed these values along the T repeat + 5bp. The distributions of these tag counts are plotted for T repeats initially defined as associated (red) or not (blue) with CAGE peaks as defined in [2]. The overlapping area likely represents the existence of potential false negatives, i.e. T repeats not associated with CAGE peaks but associated with CAGE tags nonetheless. The y-axis was limited to 100 for the sake of clarity.

Figure S24 G bias at T STRs without FANTOM CAGE peak. G bias in ENCODE CAGE reads (nuclear fraction, polyA-) was assessed at -2 of the CAGE peakmax as in Figure 2 in A549 (two replicates), GM12878, HeLa-S3 and K562. T STRs not associated with a CAGE peak but associated with high (tag count > 18.45, n = 52,999) or low (tag count < 4, n = 218,074) tag counts were distinguished. These thresholds were defined as the median tag count observed in T STRs with CAGE (18.45) and the first quartile of tag count observed in the case of T STRs without CAGE (4).
Figure S28
Predicting transcription at T repeats. We built a model in chicken, the species most distant from human considered in the FANTOM5 CAGE data. The prediction, though not null, is much less accurate than in human or in mouse (Spearman r = 0.61 and median absolute error = 1.09). Only 58 libraries are available in chicken and the mean signal across these libraries may not be as robust as in human. To confirm that the robustness of the CAGE signal is directly linked to the number of libraries considered, we computed the mean raw tag count in only 58 randomly chosen human libraries and learned a new model using the same architecture as in Supplementary Figure S25. In that case, the signal appears very sparse (bottom) compared to a signal computed on 988 libraries (top) and the model accuracy falls to 0.61. These results reveal that the CAGE signal at T STRs is noisy when considered library-wise but becomes robust when averaged over numerous libraries. Arrows (middle panel) represent T STRs.

Figure S29 Feature extraction. A. For each sequence, the effect of changing one nucleotide at a random position to A, T, C or G was assessed. The nucleotide with the maximum predicted tag count is kept and the procedure continues testing another position. This process is repeated over 3,500 randomly selected positions (x-axis) until no further increase of the predicted tag count (y-axis) is observed. B. Two sequences were randomly chosen in the test set. The two sequences optimized as in A are then compared position-wise in order to return 1 for each position with the same nucleotide in both sequences and 0 otherwise. This process was repeated for 2,000 pairs of random sequences. y-axis: fraction of identical pairs; x-axis: position (the end of the T repeat is located at position 50; the repeat of 9 Ts is indicated by red vertical lines). C. A sequence logo was designed from 100 optimized sequences.

Figure S30 Example of CAGE signal at a T STR. Example of CAGE peak calling from [2] (bottom track) overlapping a 25bp-long T STR (upper track). A significant fraction of the signal is lost when considering an arbitrary window encompassing -20bp/+5bp around the T STR end, as indicated by the dashed vertical line. Conversely, this fraction is considered when computing the tag count along the STR length.

Figure S31 Validating the importance of the T STR length. A. A 9T-long sequence, whose transcription rate is well predicted by the model learned on the arbitrary window (-20bp/+5bp around the T repeat end), is chosen (predicted transcription = 6.9967527 / observed transcription = 7). The number of Ts is progressively increased (x-axis) and a prediction is computed for each sequence (y-axis). This reveals that this second model is as sensitive to T repeat length as the model learned on tag counts computed along the STR length. B. The procedure described in Supplementary Figure S29 was repeated with the model learned with a tag count computed on an arbitrary window encompassing 20bp upstream and 5bp downstream of the T repeat end. A sequence logo was built on 30 optimized sequences, revealing that the T repeat length is also learned as a key feature by this second model.

Figure S32 Motif enrichment comparing T STRs with high and low tag count. A. Tag count distribution of the top 5,000 sequences with high (blue) and low (red) tag count. B. Motif enrichment was computed with HOMER [23] considering 101bp-long sequences centered around T STR ends with high and low tag count (top 5,000). The top 12 motifs out of 33 are shown.
Figure S33 Predicting transcription at T STRs using Random Forest. The predictive variables are the nucleotide, dinucleotide and trinucleotide content computed in the 50 bp downstream and 50 bp upstream of the polyT end, excluding the T STR. We also integrated the length of the T STR and the nature of the nucleotide following the T STR end (A, C or G at position -1, with the CAGE summit at 0). Variable importance was assessed by computing the minimal node depth, which reflects the predictiveness of a variable by a depth calculation relative to the root node of a tree (smaller values correspond to more predictive variables). The distribution of minimal depth is represented for the top 10 variables according to mean minimal depth. The mean minimal depth was calculated using only non-missing values.

Table S1 CAGE-associated T STRs are preferentially located in coding genes. The coordinates of CAGE-associated T STRs (T STR end + 2bp, n = 63,974) and those of all repeats of more than 9 Ts (n = 1,337,561) were intersected with the annotation provided by the FANTOM5 CAGE Associated Transcriptome ( | 2019-05-26T13:46:54.499Z | 2019-05-10T00:00:00.000 | {
"year": 2019,
"sha1": "003d58f30e9cf04af62bb74cebe404bc08b1a7c5",
"oa_license": "CCBYNC",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2019/10/23/634261.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "6a2c536c02744a6a0310ec210e35d38e90d85b0c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
38916710 | pes2o/s2orc | v3-fos-license | Performance Evaluation of Tergitol NP-7 and Triton X-114 for the Removal of Crystal Violet Using Cloud-point Extraction
In the present study, a cloud-point extraction process has been developed to remove crystal violet dye using two different non-ionic surfactants, Tergitol NP-7 and Triton X-114. Cloud-point temperatures were determined for Tergitol NP-7 and Triton X-114 at different concentrations of dye and surfactant. The effectiveness of the two surfactants for the removal of the dye from a synthetic solution was investigated. The effects of surfactant and dye concentrations, as well as operating temperature, on phase volume ratio, preconcentration factor, distribution coefficient, and extraction efficiency were studied. The optimum surfactant dosage was found to be 0.05 M for both surfactants for removing crystal violet. The results showed that 97 % of crystal violet can be quantitatively removed by cloud-point extraction at 0.05 M surfactant concentration in a single extraction. The performance of Tergitol NP-7 was found to be more effective than that of Triton X-114 for the removal of crystal violet using CPE.
Introduction
Removal of dyes from wastewater is one of the most significant problems faced by the dyestuff manufacturing, textile dyeing, and printing industries 1 . Disposal of this untreated wastewater into water bodies can cause serious health problems like allergies and skin cancer. It also gives rise to environmental problems, as the dyes interfere with photosynthetic activity and hence affect aquatic life 2 . Crystal violet (CV), a basic cationic dye, belongs to the brightest class of dyes and has a very high tinctorial value 3 . It is very difficult to remove because of its non-biodegradable synthetic nature and high solubility in water 4 . It is used as a histological stain and in Gram's method of classifying bacteria, in ball-point pens, as a dermatological agent, a veterinary medicine, an additive to poultry feed to inhibit propagation of mold, intestinal parasites and fungus, in inkjet printers, in fingerprinting, for dyeing paper, and in DNA gel electrophoresis 5 . Many methods are in practice for color removal, like membrane separation, coagulation, flocculation, ozonation, oxidation, adsorption, and photocatalysis [6][7][8] . However, every method has limitations, and many are economically infeasible. For example, in the case of adsorption, activated carbon is expensive, while in membrane separation there is a risk of fouling, which decreases the flux 9 .
Cloud-point extraction (CPE) is an effective separation technique for the removal of dissolved organic contaminants from effluents [10][11][12][13][14][15][16][17][18][19][20][21] . Aqueous solutions of non-ionic surfactants turn cloudy at a definite temperature, referred to as the cloud-point (CPT). By allowing the solution to settle at a temperature above the cloud-point, phase separation takes place. The smaller phase, called the surfactant-rich phase or coacervate phase, contains most of the surfactant and usually sinks to the bottom (in some cases it rises to the top), while the more voluminous aqueous supernatant, also called the dilute phase, has a surfactant content approximately equal to the critical micellar concentration (CMC) 22 . This is known as cloud-point extraction (CPE). The dilute phase, which contains a very low concentration of pollutant, can be discharged into the environment. Cloud-point extraction using non-ionic surfactants is an alternative method for the removal of dyes from wastewater 6,23 .
The exact mechanism of phase separation is not known. However, some researchers have proposed different explanations for phase separation in CPE.
Micellar interactions, which are repulsive at lower temperature, become attractive at higher temperature 24 . Dehydration occurs in the external layer of the micelles at higher temperature. The dielectric constant of water decreases at elevated temperature and water becomes a poor solvent for the hydrophilic part of the surfactant molecules, leading to phase separation 25 . At higher temperature, the micellar aggregation number and micellar size increase, causing phase separation 26 . The main advantage of CPE is that it uses a much smaller amount of surfactant, which is non-volatile and disposable in nature; another advantage is that it often uses water as the solvent 27 . Although the CPE technique has many advantages over the traditional liquid-liquid extraction process (it is reversible, flexible, biodegradable, easy to handle, more efficient, safe and non-toxic), very few works related to the separation of pollutants have been performed 28 . The cloud-point system can also be used for the separation and purification of metal ions, organic compounds, and bioactive materials 29 . It is also used to preconcentrate copper, nickel, zinc and iron ions in environmental samples and to remove oil from soil as well as heavy metals and toxic solutes like phenol from polluted water 30 . In order to make the cloud-point extraction process more economical, the organic compounds must be stripped from the non-ionic surfactant to enable its reutilization 1 .
Non-ionic surfactants are compatible with other surfactant types and are excellent candidates for complex mixtures, as found in many commercial products. They are much less sensitive to electrolytes, particularly divalent cations, than ionic surfactants, and can be used with highly saline or hard water. They exhibit a very low toxicity level and are used in pharmaceuticals, cosmetics, and food products. Non-ionic surfactants are found today in a large variety of domestic and industrial products, such as powdered or liquid formulations 31 .
In the present study, an attempt was made to compare the performance of two different non-ionic surfactants, Tergitol NP-7 and Triton X-114, for removing crystal violet from a synthetic dye solution by the cloud-point extraction process. Among various dyes, crystal violet (CV) is a well-known dye used for the purposes listed earlier, which is why it was selected for the present study. The cloud-points of Tergitol NP-7 and Triton X-114 are relatively low, so the experiments could be conducted at room temperature, which makes them suitable non-ionic surfactants for the present study. The effects of temperature, surfactant concentration, and dye concentration on various parameters like phase volume ratio, preconcentration factor, distribution coefficient, and extraction efficiency were studied.
Experimental

Materials
Triton X-114 (octyl phenol polyethylene glycol ether, density at 298 K: 1.058 g mL -1 , mol. wt.: 537 g mol -1 , λ max : 223 nm) and Tergitol NP-7 (nonyl phenol ethoxylate, density at 298 K: 1.044 g mL -1 , mol. wt.: 528 g mol -1 , λ max : 235 nm), purchased from Sigma Life Sciences India, were used as non-ionic surfactants. The critical micellar concentrations (CMC) of TX-114 and Tergitol NP-7 at 298 K are 2.1·10 −4 M and 7.3·10 -5 M, respectively. Crystal violet (mol. wt.: 497.979 g mol -1 , density: 1.198 g mL -1 , λ max : 590 nm) of analytical grade was purchased from Sigma Life Sciences, India. The surfactants and dye were used without further purification. A JASCO UV spectrophotometer was used for calibration and for measuring the dye concentration in the dilute phase after phase separation. A water bath, purchased from TECHNICO Laboratory Products, Chennai, was used for maintaining the temperature for cloud-point extraction. The chemical structures of the surfactants and crystal violet dye are given in Figure 1, Figure 2 and Figure 3.

The cloud-point of the aqueous surfactant solution was determined by heating 10 mL of such micellar solution in glass tubes. For heating the solution, a thermostatic bath was used. The rate of temperature increase in the water bath was set at 1 K per min. The cloud-point was determined by visual observation as the temperature at which the solution became obviously turbid. The solution was heated above the cloud-point and then separated into two phases. On cooling, the solution became transparent. The experiments were repeated for different combinations of surfactant and solute concentrations. All the cloud-points reported in this work are the average of at least triplicate measurements, and the variation of error is no more than 1 K.
Cloud-point extraction (CPE)
A volume of 50 mL of micellar solution containing crystal violet dye and either Tergitol NP-7 or Triton X-114 was taken. For different concentrations of crystal violet, namely 25, 50, and 75 mg L -1 , the concentration of surfactant was varied from 0.01 M to 0.1 M. Each set of samples was kept in the thermostatic bath maintained at the operating temperature for 30 minutes. The heated solution was allowed to settle for 1 hour. The volumes of the surfactant-rich phase and the dilute phase were noted. Calibration of the UV spectrophotometer for crystal violet was carried out by preparing dye solutions of known concentrations. The concentration of crystal violet in the dilute phase was then determined by UV spectrophotometry. The surfactant-rich phase concentration was obtained from material balance calculations. The phase volume ratio, preconcentration factor, distribution coefficient, and extraction efficiency were then determined for all solute and surfactant concentrations, as well as operating temperatures.
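As an illustration of these calculations, the short sketch below computes the design parameters from the measured volumes and the dilute-phase dye concentration. The phase volume ratio and preconcentration factor follow the definitions used later in this paper; the distribution coefficient and extraction efficiency are written in their commonly used forms (assumptions on our part), and the example numbers are hypothetical rather than measured values from this study.

def cpe_parameters(V_t, V_s, C_0, C_w):
    # V_t: total feed volume, V_s: surfactant-rich (coacervate) phase volume
    # C_0: feed dye concentration, C_w: dye concentration in the dilute phase
    V_w = V_t - V_s                               # dilute (aqueous) phase volume
    R_v = V_s / V_w                               # phase volume ratio
    f_c = V_t / V_s                               # preconcentration factor
    C_s = (C_0 * V_t - C_w * V_w) / V_s           # coacervate-phase concentration from a dye mass balance
    K_d = C_s / C_w                               # distribution coefficient (assumed definition)
    E = (1.0 - C_w * V_w / (C_0 * V_t)) * 100.0   # extraction efficiency in % (assumed definition)
    return {"R_v": R_v, "f_c": f_c, "C_s": C_s, "K_d": K_d, "E_percent": E}

# Hypothetical example: 50 mL feed at 50 mg/L dye, 5 mL coacervate, 1.5 mg/L left in the dilute phase
print(cpe_parameters(V_t=50.0, V_s=5.0, C_0=50.0, C_w=1.5))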
Results and discussions
Cloud-point temperature

The cloud-point of a non-ionic surfactant can be influenced by many factors, such as its own concentration and the number of ethylene oxide units in its molecules, as well as additives, like electrolytes 32 . To ensure the cloud-point extraction process at the desired temperature, it is important to have detailed information on the clouding behaviors and cloud-point temperatures of surfactants. The effect of surfactant and solute concentration on the cloud-points was analyzed in the present study. Figure 4 and Figure 5 show the effect of surfactant and solute concentrations on the cloud-points of Tergitol NP-7 and Triton X-114, respectively. Initially, the CPT of Triton X-114 decreases sharply with increasing surfactant concentration, and then gradually increases with further increase in surfactant concentration, both of which represent typical clouding behaviors of non-ionic polyethoxylated surfactants 33 . The decrease in CPT with increasing surfactant concentration for Triton X-114 is due to the increase in micellar concentration, while the phase separation results from increased micellar interaction. The CPT of Tergitol NP-7 increases gradually with increasing surfactant concentration. The increase in CPT is due to the structured water-surfactant system present in the micelles, which breaks at high surfactant concentrations. Even then, the molecules do not become free of the surfactant effect. They require more heat to become separated, and hence the CPT increases. Some of the water molecules are not attached to any individual micelle in particular, but to the micelle system, forming buffers between micelles and thereby decreasing micelle-micelle interaction. More energy is required to remove these 'free floating' water molecules, and hence the CPT increases with surfactant concentration. The cloud-point temperature increases with increase in solute concentration for both Triton X-114 and Tergitol NP-7. The increase in cloud-point may be due to the increased hydrophilic character of the surfactant micelle 34 . As the solute concentration increases, more heat is required for cloud formation, i.e. for the interaction between solute and surfactant, and to remove water molecules from the external layer of the micelle. Therefore, the cloud-points of Tergitol NP-7 and Triton X-114 increase with the concentration of crystal violet.
Effect of surfactant concentration on CPE
The surfactant and the dye substrate can bind together, and there is an equilibrium relation between them. Surfactant molecules can bind to the dye substrate either in monomeric or in micellar form (i.e. by one or more of their molecules) 35 . In addition to the interaction with substrate molecules, surfactants can sometimes form ion pairs with them. Non-ionic surfactants act on the cationic crystal violet dye and cause the formation of a dye-surfactant ion pair 36 .
The interaction between the crystal violet dye substrate and the non-ionic surfactants may be characterised by the Piszkiewicz model with some modifications 35 . According to this model, the dye substrate molecule associates with 'n' molecules of surfactant to form a micelle, where 'n' is called the cooperativity index and is a measure of the association of additional surfactant molecules to an aggregate over the whole surfactant concentration range. If the value of the cooperativity index is greater than unity, the interaction between dye substrate and surfactant molecule is positive; if the value is less than unity, the dye substrate-surfactant interaction is negative; and if the value is equal to 1, the interaction is non-cooperative.
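One hedged way to make the cooperativity index concrete is the Hill-type rearrangement commonly associated with the Piszkiewicz treatment, log[(A − A 0 )/(A max − A)] = n log[D] − log K D , from which 'n' is the slope of a log-log plot. The sketch below assumes this rearranged form and uses purely hypothetical absorbance values; neither the equation variant nor the numbers are taken from this study.

```python
# Minimal sketch: estimating the Piszkiewicz cooperativity index 'n' by linear
# regression on a Hill-type rearrangement of the model (an assumed working form,
# not necessarily the exact equation used by the authors).
import numpy as np

def cooperativity_index(surfactant_conc, signal, signal_0, signal_max):
    """Return (n, log10_KD) from spectroscopic observations.

    surfactant_conc : molar surfactant concentrations [D]
    signal          : observed signal (e.g. absorbance) at each [D]
    signal_0        : signal with no surfactant bound
    signal_max      : signal at complete binding
    """
    d = np.asarray(surfactant_conc, dtype=float)
    s = np.asarray(signal, dtype=float)
    y = np.log10((s - signal_0) / (signal_max - s))  # bound/free ratio on a log scale
    x = np.log10(d)
    n, intercept = np.polyfit(x, y, 1)               # slope = n, intercept = -log10(KD)
    return n, -intercept

# Purely hypothetical illustration values.
D = [1e-4, 2e-4, 5e-4, 1e-3, 2e-3]
A = [0.15, 0.24, 0.42, 0.58, 0.71]
n, log_kd = cooperativity_index(D, A, signal_0=0.10, signal_max=0.80)
print(f"cooperativity index n = {n:.2f}, log10(KD) = {log_kd:.2f}")
```

A slope greater than 1 in such a fit would correspond to the positively cooperative case described above.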
The aqueous solution of non-ionic surfactant containing crystal violet was transformed from a single isotropic phase into two isotropic phases when the temperature of the solution exceeded the CPT. The dilute phase and the coacervate phase were then analyzed. The effect of surfactant and solute concentrations on the design parameters was determined in the present work.
Phase volume ratio
The phase volume ratio, R V , is the ratio of the volume of the surfactant-rich phase (V s ) to the volume of the aqueous phase (V w ). Figure 6 shows the effect of Triton X-114 and Tergitol NP-7 concentration on the phase volume ratio at a temperature of 313 K and a dye concentration of 50 mg L -1 . From the figure, it is observed that the phase volume ratio increases with increasing surfactant concentration at a given operating temperature and dye concentration for both surfactants. The increase in phase volume ratio with surfactant concentration at a constant feed of dye may be due to the increased capability of the surfactant to solubilize the dye. At lower concentrations, surfactant molecules exist as monomers; in this range, Triton X-114 at 0.02 M or below does not bring about phase separation. At higher surfactant concentrations, i.e. above 0.02 M, there is more micellar interaction, and hence all the solute molecules present in the mixture settle along with the surfactant in the coacervate phase. As the concentration of surfactant increases, the volume of the coacervate phase also increases, thus decreasing the volume of the dilute phase, which results in an increased phase volume ratio, R v . Figure 6 also shows a higher phase volume ratio for Tergitol NP-7 than for Triton X-114. The higher the phase volume ratio, the greater the separation achieved for a constant feed concentration. The increase in phase volume is due to the interaction of more surfactant micelles with the dye in the coacervate phase after phase separation. As the concentration of surfactant increases, the number of micelles increases, leading to greater solubilization.
Preconcentration factor
The preconcentration factor, f c , is the ratio of the volume of the bulk solution before phase separation (V t ) to that of the surfactant-rich phase after phase separation (V s ). The preconcentration factor is an indication of the ratio of the solute concentration in the feed to that in the surfactant-rich phase. For a higher value of the preconcentration factor, the separation of solute will be lower, and vice versa. From Figure 7, it is clear that the preconcentration factor decreases with an increase in the concentration of Triton X-114 and Tergitol NP-7, and that Tergitol NP-7 gives lower preconcentration factors than Triton X-114. Because of the high solubility of the solute, the preconcentration factor decreases with an increase in surfactant concentration at constant operating temperature and dye concentration 36,37 .
Distribution coefficient
The distribution coefficient or equilibrium partition coefficient, K d or K p , is the ratio of the concentration of solute in the surfactant-rich phase (C s ) to the concentration of solute in the dilute phase (C w ). It is a measure of the difference in solubility of a component in two immiscible phases at equilibrium. A higher distribution coefficient indicates easier phase separation. The distribution of solutes depends on the specific solute-solvent interaction: the stronger the interaction, the higher the distribution coefficient.
As shown in Figure 8, the distribution coefficient increases with the concentration of Tergitol NP-7 and Triton X-114 at constant solute concentration and operating temperature. As the surfactant concentration increases, the system can extract more solute into the surfactant-rich phase, thereby increasing the concentration of solute in the coacervate phase for a constant feed dye concentration. The increased solute concentration in the coacervate phase leads to a higher distribution coefficient at higher Tergitol NP-7 and Triton X-114 concentrations. The results show that Tergitol NP-7 gives a better distribution coefficient than Triton X-114 for the extraction of crystal violet.
Extraction efficiency
For cloud-point extraction, the efficiency of dye extraction is defined as the ratio of the amount of dye in the coacervate phase to that in the feed. In other words, the recovery efficiency of the solute, η, can be characterized as the percentage of solute extracted from the bulk solution into the surfactant-rich phase. By material balance, it can be written as η (%) = [C 0 V t − C W (V t − V s )] / (C 0 V t ) × 100,
where C 0 is the initial concentration of solute in the micellar solution, C W is the concentration of solute in the dilute phase, V t is the total feed volume, and V s is the volume of the surfactant-rich phase. The extraction efficiency of the dye increases with surfactant concentration at constant solute concentration. As the concentration of surfactant increases, the number of micelles available for the extraction process also increases, which results in better phase separation and thus an increase in efficiency (Figure 9). Even at a concentration of 0.01 M, Tergitol NP-7 could extract more than 85 % of the dye molecules from the solution, which suggests that Tergitol NP-7 is more efficient than Triton X-114 at low surfactant concentrations. For the same degree of polyoxyethylation, polyoxyethylene nonylphenols (Tergitol NP-7) have lower CMCs than polyoxyethylene octylphenols (Triton X-114) 38 . The difference in alkyl group size leads to this large difference in CMC and supports the observation that Tergitol NP-7 is more effective than Triton X-114.
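The design parameters defined in this and the preceding subsections follow from the measured phase volumes and the dilute-phase concentration through a simple material balance. The sketch below illustrates that bookkeeping; the function name and the example values are illustrative placeholders, not measurements from this work.

```python
# Minimal sketch of the CPE design-parameter calculations described above.
# The example values are illustrative placeholders, not data from this study.

def cpe_parameters(c0_mg_per_l, cw_mg_per_l, vt_ml, vs_ml):
    """Phase volume ratio, preconcentration factor, distribution coefficient
    and extraction efficiency from a single CPE run."""
    vw = vt_ml - vs_ml                                  # dilute (aqueous) phase volume
    # surfactant-rich phase concentration from the material balance
    cs = (c0_mg_per_l * vt_ml - cw_mg_per_l * vw) / vs_ml
    rv = vs_ml / vw                                     # phase volume ratio, Rv
    fc = vt_ml / vs_ml                                  # preconcentration factor, fc
    kd = cs / cw_mg_per_l                               # distribution coefficient, Kd
    eta = 100.0 * (c0_mg_per_l * vt_ml - cw_mg_per_l * vw) / (c0_mg_per_l * vt_ml)
    return {"Rv": rv, "fc": fc, "Kd": kd, "efficiency_%": eta}

# Hypothetical example: 50 mL feed at 50 mg/L dye, 5 mL coacervate phase,
# 1.5 mg/L dye left in the dilute phase.
print(cpe_parameters(c0_mg_per_l=50, cw_mg_per_l=1.5, vt_ml=50, vs_ml=5))
```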
Effect of dye concentration on CPE
At a given operating temperature and surfactant concentration, the extraction efficiency of the dye initially decreases with increasing concentration of crystal violet. The extraction efficiency decreases with the concentration of crystal violet up to a surfactant concentration of 0.04 M (Table 1). Beyond this, the effect of dye concentration on extraction efficiency is negligible, because the maximum amount of solute is extracted at that surfactant concentration and the efficiency reaches 99 %. At lower surfactant concentrations, more dye molecules remain in the dilute phase as the solute concentration increases, thus decreasing the efficiency. For Triton X-114 as well as Tergitol NP-7, a surfactant concentration of 0.05 M may be considered the optimum dose for efficient CPE of crystal violet dye up to 50 mg L -1 . The introduction of dye molecules into the aqueous solution of a non-ionic surfactant will increase the critical micellar concentration of that surfactant. Therefore, with an increase in dye concentration, the number of micelles in the solution decreases, which results in more unsolubilized dye molecules in the dilute phase after phase separation. Due to this, the extraction efficiency decreases with an increase in dye concentration for a constant surfactant concentration and operating temperature 40 .
Effect of operating temperature on CPE
The phase volume ratio decreases with increasing temperature at constant dye concentration (Table 2). As the operating temperature increases, the micellar interaction becomes greater, which leads to dehydration of the external layers of the micelles, resulting in a reduction in the volume of the coacervate phase. Hence, the phase volume ratio decreases with operating temperature. Table 3 and Table 4 suggest that the operating temperature has a great impact on the distribution coefficient as well as the preconcentration factor. The feed volume remains constant at 50 mL. Due to the reduction in the volume of the surfactant-rich phase, the preconcentration factor increases with operating temperature. Figure 10 shows that the distribution coefficient increases with operating temperature for a particular dye concentration 32,39 . As the temperature increases, the micellar interaction, which was repulsive at lower temperatures, becomes attractive, and hence the micellar aggregation number increases. This leads to increased solubilization of dye in the surfactant-rich phase and thus an increased solute concentration in the coacervate phase, which leads to a higher distribution coefficient. As shown in Figure 11, the efficiency of extraction increases with operating temperature for Triton X-114 as well as Tergitol NP-7. However, the impact of operating temperature is greater on Triton X-114 than on Tergitol NP-7. The CMC of non-ionic surfactants decreases with temperature 41 . Because of an equilibrium shift that favors dehydration of the ether oxygen, these non-ionic surfactants become more hydrophobic at higher operating temperatures 42 . Hence, the efficiency of extraction of crystal violet increases with operating temperature because of the increased solubilization of dye resulting from the increased micellar size and aggregation number.
Conclusion
Cloud-point extraction was successfully implemented to remove crystal violet from a synthetic dye solution using Triton X-114 and Tergitol NP-7 as non-ionic surfactants. The clouding patterns of Tergitol NP-7 and Triton X-114 with respect to surfactant and solute concentrations were analyzed. The effect of dye concentration, surfactant concentration, and operating temperature on the various design parameters of CPE was evaluated. The phase volume ratio was found to increase with surfactant concentration, whereas it decreased with operating temperature. It was observed that the phase volume ratio varied inversely with the preconcentration factor with respect to surfactant concentration and operating temperature. The concentration of solute present in the surfactant-rich phase increased with operating temperature and surfactant concentration, which resulted in a higher distribution coefficient as well as higher extraction efficiency. From the experimental results, it was observed that, for dye concentrations of 25 mg L -1 and 50 mg L -1 , quantitative recoveries (>97 %) were obtained in a single extraction. It was concluded that cloud-point extraction with non-ionic surfactants is an effective method for the removal of dye from aqueous solution. From the performance of Tergitol NP-7 and Triton X-114, it was confirmed that Tergitol NP-7 could extract more crystal violet than Triton X-114, even at very low surfactant concentrations. The cloud-point extraction procedure described above is safe, rapid, effective, and inexpensive. Moreover, the cloud-point extraction strategy can easily be adopted for large-scale samples. | 2017-05-03T03:42:49.010Z | 2016-07-16T00:00:00.000 | {
"year": 2016,
"sha1": "d3fe1c924e767c1e9f8efd17744f3a0adc9b1850",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.15255/cabeq.2014.2143",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "d3fe1c924e767c1e9f8efd17744f3a0adc9b1850",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
55660153 | pes2o/s2orc | v3-fos-license | Building Rehabilitation with Dry and Wet Systems – Embodied Water Comparison
Fresh water is a scarce resource. Its availability, essential for life, is threatened by several menaces, mostly of human origin. Studies on water saving strategies in building construction, and on what procedures should be adopted for its efficient use, can respond to part of these problems, taking into account the relevance of construction activities, which are responsible for almost 40% of the overall environmental impact in Europe. Building rehabilitation assumes an important role in the construction activities of many European countries. The aim of this paper is to compare the water demand between dry and wet construction systems used in interior partition wall technologies existing in south European climates, specifically in Portuguese territory. This paper focuses on the advantages of dry and lightweight partition walls, and it is expected that it may contribute to the reduction of water use in the construction industry. The evolution of interior partition walls, namely in terms of weight and embodied water, is presented and discussed. Apart from the conclusion that dry construction systems, used in the rehabilitation of existing buildings, can contribute to a significant reduction of water demand, this analysis also intends to reveal how much these systems can reduce other environmental impact indicators in relation to wet conventional solutions. Dry construction strategies allow saving material and energy resources, increasing flexibility and reducing waste, which constitutes a relevant response to the growing concerns over environmental issues and to the uncertainty surrounding economic development.
Table 1. Water used in an office building during its life cycle of 50 years [5].
Building life cycle phase    Construction    Use        Deconstruction
Water (m3)                   3 000           228 000    0
Water use in the construction industry
A world population growth is expected until 2050, mostly in the cities of poor and developing countries. This fact will contribute to the increase of environmental pollution and to resource management problems. However, in developed countries the population is decreasing and policies are focused on a significant decrease in pollutant emissions. Knowing that building construction is responsible for a significant percentage of the world consumption of fossil fuels and of greenhouse gas emissions, it does not make sense to continue investing in new construction while so many buildings are unoccupied or under-occupied. The rehabilitation of buildings has assumed an important role in the construction activities of developed countries. According to published estimations, this activity represents about 45% of the total volume of construction investment in Europe [1][2][3]. Building rehabilitation will therefore be a key factor in achieving sustainability.
Although the demand for water has increased significantly over the last century, the infrastructure capacity to harvest adequate water to serve the Earth's growing population has not grown accordingly, and in many parts of the world water scarcity is becoming a critical issue. In addition to the daily operation of buildings, water is used in the production of building materials.
In Europe, the construction industry is one of the sectors that consumes the most fresh water [4]. Water consumption practically accompanies the entire life cycle of a building material, ranging from the extraction of the raw material to the demolition phase at the end of the building's life cycle. The building's water consumption management can be addressed at three levels: (1) selection of materials and components with low embodied water; (2) selection of more efficient sanitary appliances and devices; (3) provision of rainwater harvesting and water reuse solutions. There are few studies on water consumption in buildings over their complete useful life. One available case study corresponds to an office building in Finland, with a gross area of 15 600 m2, consisting of three towers with five floors each. According to Junnila [5], this building had its water consumption distributed over the different life cycle phases as indicated in Table 1.
Table 1 shows that the largest consumption occurs during the building use phase, corresponding to approximately 99% of the total water used. Residential buildings present consumptions of the same or a higher order of magnitude [5]. However, even though it corresponds to just about 1% of the total, the water consumption during the construction phase is still 192 litres per square meter of gross area, which gives a notion of the significant water demand during the construction phase.
A time period of fifty years is usually considered for the use phase, while the construction phase has a significantly shorter duration of about one year. Dividing the amount of water used during the use phase by the fifty years over which it was used gives a value of 4 560 m3 per year.
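The per-square-meter and per-year figures quoted above follow from simple arithmetic on the values of Table 1 for the Finnish office building; the short check below reproduces them (a sketch for illustration only).

```python
# Quick check of the figures derived from Table 1 for the Finnish office building.
gross_area_m2 = 15_600
construction_water_m3 = 3_000
use_phase_water_m3 = 228_000
use_phase_years = 50

litres_per_m2_construction = construction_water_m3 * 1000 / gross_area_m2
water_per_year_use_m3 = use_phase_water_m3 / use_phase_years

print(f"construction: {litres_per_m2_construction:.0f} L/m2")   # ~192 L/m2
print(f"use phase:    {water_per_year_use_m3:.0f} m3/year")     # 4 560 m3/year
```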
Contribute to the reduction of water use
With increasing fresh water scarcity, it is necessary to review the currently adopted construction systems. For example, construction based on concrete mortars generates additional water waste on construction sites. Masonry wall systems use approximately 500 litres of water per square meter of gross area, much more than dry systems, as shown in the following sections.
Advantages for buildings rehabilitation and consequent environmental impact reduction
Construction, and mainly the housing sector, consumes large quantities of materials, which results in large quantities of waste, according to Mateus et al [6]. Therefore, an efficient construction system requires the use of prefabricated elements that facilitate assembly and disassembly.
The growing use of dry construction systems in the building sector is due to the many advantages of these systems, primarily their short construction time, economy, and superior functional performance when compared to masonry and concrete systems of the same thickness [7]. Dry systems allow solutions that are lighter, flexible, quickly and easily assembled, do not require much space on site, and are easily transportable to the construction site, all of which are important aspects to safeguard in rehabilitation interventions.
Assuring comfort and energy efficiency during the use phase is one important goal of sustainable construction. According to Friedman [8], such an aim can be accomplished through, among other aspects, the rethinking of indoor space, with increased flexibility.
Lightness
Dry construction systems can be added to existing buildings in order to improve specific properties, an aspect that is especially important for infill development tasks (extra storeys, expansion, extensions) and alterations. The low weight of dry construction systems means that load-bearing components can be sized more economically than would be the case with a fitting-out scheme involving masonry and concrete. A clear reduction in mass and, at the same time, better sound and thermal insulation properties are readily achievable in wall systems (partitions, external walls, facades).
Flexibility.
A great portion of housing designs is not flexible in use and therefore not adjustable to the constant updating of lifestyles and the variations in household composition. Conventional practice uses heavyweight and static partition walls, and therefore it is difficult to rethink the use of indoor spaces. This problem is especially important in the refurbishment of existing buildings, but also when new buildings are designed.
According to Tichelmann & Pfau [9], these building elements were underestimated in the past. They also anticipate that the vast majority of buildings erected between 1950 and 1995 will become unusable in the long term and will become increasingly difficult to let or sell. The small-format interior layouts acceptable in those days are no longer popular with current users and buyers. Changing room sizes means an expensive intervention in the building.
The use of dry systems in lightweight construction, optimized for their particular functions, is generally associated with a gain in floor space and greater flexibility of usage, intrinsic properties of these systems that were underestimated in the past.
Flexibility of housing spaces can increase the lifetime of the building, extending the materials' life cycle [10]. Flexibility can be achieved through neutral and polyvalent spaces (passive flexibility) or through mobile elements that allow the transformation of the housing unit (active flexibility). The second strategy can produce daily, seasonal or annual transformations of the housing unit that increase its adaptability to the user and to changing needs. This allows saving energy and resources, since people can use their home over longer periods of time and in a way better suited to seasonal climate variations.
Innovation in building technology and products should be directed towards increasing flexibility. One of these innovations is the distinction between the structural elements and the other elements of a building [11]. A reduction in the use of load-bearing interior partitions and an increase in the number of pre-manufactured components provide the opportunity for greater choice and adaptability in design and construction [9].
Water consumption of interior partition wall systems
Interior partition walls are thin elements built to divide the indoor space into rooms or other compartments. The importance of these building elements is reflected in the global construction cost, estimated by Köning et al [12] to be around fifteen percent in office and administration buildings of European countries. Within the group of non-load-bearing technologies used in building construction, these elements represent 41% of the environmental impacts, as can be seen in Figure 1. Generally, partition walls are non-load-bearing; in terms of structural resistance, these elements need only be strong enough to support themselves under normal conditions of service. In general, lightweight solutions present lower embodied energy and lower energy costs for transport to the building site, and can be designed with smaller assembly fittings [13].
Many of the materials commonly used for interior partition walls require water as a resource for their manufacture. The following table lists some materials commonly used in the construction of interior partition walls, with the corresponding water demand of their manufacture per unit weight. It presents the amount of water whose quality will be drastically reduced as a result of the production process; water recycling or purification was not taken into consideration. The embodied energy and embodied water values are adapted from Graham et al [14] and Vassig, Ozer and Spiegelhalter [15].
The analysis of Table 2 makes it possible to conclude that the careful selection of materials in the design phase is important for water consumption reduction strategies in building construction. Materials such as galvanized steel consume a great deal of water in their manufacturing process (3400 litres/kg). By contrast, the production of hemp fibres and straw requires practically no water.
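Since Table 2 expresses water demand per unit mass, the embodied water of a partition solution can be estimated as the sum, over its constituent materials, of the mass per square meter multiplied by the tabulated water intensity. The sketch below illustrates this calculation; apart from the 3400 litres/kg figure for galvanized steel and the near-zero value for straw mentioned above, all masses and intensities shown are hypothetical placeholders rather than values from Table 2 or Table 3.

```python
# Minimal sketch: embodied water of a wall solution as the mass-weighted sum
# of per-material water intensities (litres of water per kg of material).
# Except for galvanized steel (3400 L/kg) and straw (~0 L/kg), which are cited
# in the text, every number below is a hypothetical placeholder.

def embodied_water_per_m2(bill_of_materials, water_intensity_l_per_kg):
    """bill_of_materials: {material: mass in kg per m2 of wall}"""
    return sum(mass * water_intensity_l_per_kg[m]
               for m, mass in bill_of_materials.items())

water_intensity = {            # litres of water per kg of material
    "galvanized steel": 3400,  # cited in the text
    "straw": 0,                # cited in the text (no water use)
    "plasterboard": 5,         # placeholder
    "timber": 10,              # placeholder
}

light_steel_plasterboard_wall = {"galvanized steel": 4.0, "plasterboard": 20.0}  # kg/m2, placeholders
timber_straw_wall = {"timber": 12.0, "straw": 8.0}                               # kg/m2, placeholders

for name, bom in [("plasterboard + steel frame", light_steel_plasterboard_wall),
                  ("timber frame + straw", timber_straw_wall)]:
    print(name, embodied_water_per_m2(bom, water_intensity), "L/m2")
```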
By analysing the evolution of internal partition walls, shown in Figure 3, it is possible to verify that an initial tendency towards dry and lightweight solutions evolved into wet and heavyweight solutions. However, nowadays there is a tendency to return to dry and lightweight solutions that can incorporate easier construction/deconstruction design principles.
Mediterranean climate -interior partition walls
This study takes as reference the constructive solutions of existing dividing walls in Mediterranean countries, and specifically in Portugal.
The most common solutions that can be considered vernacular [21] are sun-dried blocks of soil (adobe), characteristic of clay-rich zones, essentially in the southern regions of Alentejo and Algarve, and wattle and daub (Figure 4(c)), widespread throughout the country, known as "tabique" (in all of Portugal) and as "taipa de fasquio" (in the northern region of Minho). The latter is made up of wooden planks placed vertically or diagonally, over which small strips of trapezoidal section are placed horizontally, to which the mortar adheres. Another solution combines a timber frame filled with canes/reed. However, due to the loss of tradition in the techniques employed and the consequent rising cost of hand labour, these solutions gave way to other wall types, such as the massive brick (Figure 3(c)), which later evolved into the hollow brick wall (Figure 4(d)). The use of ceramic hollow brick in non-structural masonry became increasingly common, for both economic and ease-of-implementation reasons. With thicknesses, without finishing plaster, varying between 7 and 22 cm, but most frequently 11 cm in interior walls, this system became the preferred solution and remains so in present times. Even if new and more efficient solutions are appearing on the market, most of them cannot compete with hollow brick, not only because of the economic cost, but also due to the conservative mentality of the diverse agents present in the construction sector.
One exception is a solution that has experienced great development since the 1970s: the plasterboard panel. At first it had a timber structure filled with honeycomb cardboard to which the plasterboard plates were fixed (Figure 4(e)). Later, around the 1990s, the wood structure was replaced by light gauge steel frames (Figure 4(f)). This type of partition can be built with 51 × 76 mm or 51 × 100 mm studs, normally spaced at 400 to 600 mm, depending on the type and thickness of the wall finishing used.
Embodied water and environmental assessment
The environmental impact assessment of an internal partition wall solution can be made through several indicators, such as the Embodied Energy (EE), the Global Warming Potential (GWP), the Acidification Potential (AP), the Chemical Oxygen Depletion (COD), the Photochemical Ozone Creation Potential (POCP), the waste generated in the production process, the Coefficient of Heat Transmission, and the Embodied Water (EW). The embodied water is the amount of water used to manufacture the materials. The water cost of manufacturing the goods and services required by construction may differ significantly depending on regional differences in water availability.
Dry systems
This study compares seven different partition walls commonly used in Mediterranean climates: five dry solutions and two wet solutions. Figure 5 presents the dry solutions: mat canes filled with reed (Figure 5A), timber frame filled with straw (Figure 5B), wattle and daub (Figure 5C), wooden board with timber frame and rock wool insulation (Figure 5D), and plasterboard partition wall with light steel frame structure and rock wool insulation (Figure 5E).
Dry solutions are lightweight, allowing a lower embodied energy and more favourable values for other environmental impact indicators. Compared to wet and heavyweight solutions, such as hollow brick and solid brick walls, dry and lightweight solutions also allow easier deconstruction/reuse scenarios.
The values presented in Table 3 are based on bibliographical references [6,22,23]. The functional unit selected was one square meter of solution. In Table 3, superscript f denotes the waste generated in the production process and superscript g the coefficient of heat transmission [23].
Results and discussion
The aim of this research was to investigate the water demand in the production of interior partition wall systems for building rehabilitation. A considerable amount of water is used in the materials production phase and in the building construction phase. The highest values were found for the wet and heavyweight systems; the worst solution presents the largest polygon area and the highest value for each indicator.
The highest value was 793.4 litres per square meter of gross floor area. These findings suggest that the selection of elements and materials has a great impact on a building's water demand during the construction phase. This research makes it possible to select the partition wall solutions that might be more sustainable in terms of water use.
The aim of the analysis was to provide multiple criteria for decision-making according to the values of different impact categories for several building materials.
From the results presented in Table 3, comparing the dry and the wet constructive systems, it is possible to conclude that the embodied water, as well as the other environmental impact indicators, presents lower values for the dry solutions. The worst solution in all indicators is solution F. Among the dry traditional Mediterranean solutions (A, B, C and D), the best solution is A, except for the EE indicator. Solution D, the wooden board with timber frame filled with rock wool, presents the best values for the EE and COD/POCP indicators, but it presents the second worst level of water demand and generated waste when compared with the other dry systems. Analysing Figure 7, it can be seen that the dry solution with the worst overall impact is solution E and the best is A.
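Figures 7 and 8 compare the solutions on radar-type charts in which a smaller polygon area indicates a better overall profile. A minimal way to reproduce this kind of comparison is to normalise each indicator to its maximum across the solutions and compute the area of the resulting polygon, as sketched below; the indicator values used are hypothetical placeholders and not the Table 3 data.

```python
# Minimal sketch of the polygon-area comparison used in Figures 7 and 8:
# each indicator is normalised to [0, 1] across all solutions, placed on a
# radar axis, and the enclosed polygon area is used as an overall score
# (smaller area = lower overall impact). All values below are placeholders.
import math

def polygon_area(values):
    """Area of the radar polygon for one solution (axes equally spaced)."""
    k = len(values)
    theta = 2 * math.pi / k
    return 0.5 * math.sin(theta) * sum(values[i] * values[(i + 1) % k] for i in range(k))

def normalise(solutions):
    """Scale each indicator by its maximum across all solutions."""
    indicators = list(next(iter(solutions.values())).keys())
    maxima = {ind: max(sol[ind] for sol in solutions.values()) for ind in indicators}
    return {name: [sol[ind] / maxima[ind] for ind in indicators]
            for name, sol in solutions.items()}

# Hypothetical indicator values per m2 of wall (placeholders, not Table 3 data).
solutions = {
    "A (reed mat)":     {"EE": 50,  "GWP": 4,  "EW": 60,  "waste": 1},
    "E (plasterboard)": {"EE": 180, "GWP": 12, "EW": 250, "waste": 4},
    "F (hollow brick)": {"EE": 220, "GWP": 20, "EW": 790, "waste": 9},
}

for name, vals in normalise(solutions).items():
    print(f"{name}: polygon area = {polygon_area(vals):.2f}")
```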
From the analysis of the solutions presented, it can be concluded that, to achieve a solution with low environmental impact levels, it is necessary to reduce the quantity of materials used, especially those resulting from more industrialized and more energy-demanding production processes with higher EE, such as the light gauge steel frame or rock wool. For example, reed and cork present a significantly lower EE than rock wool.
Dry and mechanically detachable wall systems make a smaller contribution to the building's environmental impact, while on-site construction systems present a significantly higher impact.
Policy focused on reducing water consumption should consider the use of dry and lightweight systems.
Conclusions
Dry systems of interior partition walls are certainly a wiser option in many situations, as they can be more flexible and, in some cases, even portable. The lower quantity of materials used in these lightweight solutions allows lower water use as well as more favourable values for other environmental impact indicators.
The reduction of material inputs to the minimum is a way to achieve higher eco-efficiency in a building and thus opens the way to the development of efficient interior partition systems. The need for more sustainable construction using innovative solutions has also motivated this study and the development of materials and construction technologies using lightweight materials.
Efforts to reduce the construction industry's water consumption need to be focused both upstream and downstream of the actual construction process. However, the complexity of the supply chain makes it difficult to ascertain precisely where efforts should be focused. A demand-side view is required to solve the problem of scarce water supplies. The drivers of demand, both direct and indirect, need to be managed through all stages of production.
In conclusion, the rehabilitation of buildings with dry and lightweight systems constitutes an important contribution to sustainability. These systems allow buildings to be adaptable in order to remain up to date within the contemporary framework of sustainability.
Figure 1. Environmental impacts from materials use on non-load bearing construction elements of a typical house over 60 years (adapted from Addis and Schouten [11]).
Figure 3. Evolution of weight and water demand on Mediterranean partition systems: (a) animal skin coating vegetable poles or bones; (b1) mat canes filled with reed; (b2) wattle and daub filled with straw; (b3) wattle and daub with planks; (c) solid brick; (d) hollow brick; (e) wooden board with timber frame; (f) plasterboard panel with timber frame; (g) plasterboard panel with light gauge steel frames. Scheme adapted from Mendonça and Macieira [20].
Figure 7. Comparison between the dry partition wall systems analysed. The overall best solution presents the lowest polygon area and the lowest value for each indicator.
Figure 8. Comparison between all partition wall systems analysed. The worst solution presents the highest polygon area and the highest value for each indicator.
Table 1. Water used in an office building during its life cycle of 50 years [5].
Table 2. Water demand in the production phase of materials as a function of their weight.
Table 3. Environmental impacts of the dividing wall systems analysed. | 2018-12-07T00:49:56.592Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "7100cccbf8e07996f40c857e03e4f6cfb5892de4",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2016/31/matecconf_iciea2016_13009.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7100cccbf8e07996f40c857e03e4f6cfb5892de4",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
225656123 | pes2o/s2orc | v3-fos-license | Responsible or reckless? A critical review of the environmental and climate assessments of mineral supply chains
This paper critically reviews and identifies gaps in the methodologies used to analyze the environmental impacts of mineral and metal global supply chains. Of specific focus are assessments of the extraction and production of minerals and metals needed for a low-carbon energy future. Current trends and projections suggest that the future low-carbon energy system will have greater material needs than the current one. Thus, it is important to better understand the full impacts of increased resource extraction to help ensure a sustainable and just transition. This review reveals that existing methodologies are currently insufficient in capturing the full suite of environmental, social, and governance concerns. The copper supply chain is used as a case study to highlight areas that require refined or augmented methodologies, with an in-depth examination of the corporate practices of Freeport-McMoRan, Vale, and BHP. Together, this review of existing methodologies and examples from the copper supply chain highlight the incomplete and variable nature of environmental and climate reporting within the mining industry. Areas for future work are defined with the goal of advancing accounting frameworks for the mining industry and the associated supply chain.
Introduction
While low-carbon technologies have become economically competitive over the past decade, they also have relatively high material needs and different environmental impacts than the current energy system. A steadily growing body of reports and forecasts examining the demand for materials required for the energy transition has shown that an unprepared extractive industry could struggle to keep up with rapid increases in demand, and in some cases exceed current reserves by 2050 [1][2][3][4][5][6][7][8][9]. In particular, an analysis by the World Bank of total mineral demand for renewable power and energy storage shows that up to 200 million tons of iron, 100 million tons of aluminum, and the 30 million tons of copper might be required for wind, solar, and battery storage alone [1]. Similarly, increases in demand for other minerals and metals used in low-carbon technology may also increase dramatically [10,11]. Given the global climate goals tied to 2050, meeting the mineral requirements for the energy transition is of key strategic importance.
While material estimates serve as a first-order indication of scale, economists, e.g. Tilton et al (2018), argue that material shortages, if they occur at all, will be short-lived, and Mudd and Jowitt (2018) argue that the key factors governing mining and mineral extraction are not physical in nature, but social, environmental, and economic [13,14]. Following this, and the strong relationship between developing economies and the extractive industry, Environmental, Social, and Governance (ESG) reporting has become a focal point for mining corporations looking to address a legacy of environmental degradation and the numerous environmental uncertainties surrounding increasing extraction of resources [11].
As there is no universally accepted or enforced international regulation, nor any global market-driven certification scheme, the extractive industry has turned internally to ESG propositions, sustainability reports, and other voluntary disclosures to relay 'impact on critical sustainability issues such as climate change, human rights, governance and social well-being' [15, p 1, 11]. Public acknowledgement of these concerns has led to calls for increased transparency from mining operations, but they are still not well understood [11,16,17]. Among these, environmental impacts are perhaps the most mischaracterized despite their strong relationship with social and governance considerations. Climate agreements and advocacy groups for renewable technologies have presented the environmental impacts of resource extraction as a global problem that requires global solutions. To this end, virtually every nation on Earth has adopted the Paris Agreement with the understanding that emissions can be quantified, controlled, and regulated [18]. Nations troubled by conflicts and extreme poverty have signed the agreement with the understanding that the global environment can be addressed independently of their numerous social and governance issues, and that developmental changes can be implemented. Ultimately, companies are responsible for making changes to align their operations with the Paris Agreement; the mining industry has played a role in these developments, but its ability to report on, let alone address, these changes can no longer be considered adequate. While indicators and reporting systems exist for geological, technical, structural, political, regulatory, and economic supply risks within the industry, 'there is currently no holistic method and information system for environmental concerns associated with the mining of raw materials' [19, p 1].
With no universal methodology that can assess environmental impacts consistently for minerals and metals, how effective are the current frameworks at relaying relevant environmental concerns? How effectively are international environmental concerns being assessed and governed? What does this mean for the energy transition? To answer these questions, this review presents a problem-oriented perspective aimed at identifying gaps in the current reporting of environmental issues across the mineral and metal supply chains supporting the energy transition. Of specific concern are environmental impacts from extraction and production of minerals and metals.
Published ESG reports, greenhouse gas (GHG) calculation standards, reporting questionnaires, and academic literature were collected and evaluated to understand potential flaws and points of contention. CDP reports for the 2018 reporting year formed the initial data source due to their consistent structure and organization. The reports chosen were based on mineral production quantities. Of the top ten copper producers, only six reported to CDP, and only five are members of the ICMM [42,44,88]. From these five companies, 2018 Global Reporting Initiative (GRI) reports were used as secondary sources to compare required disclosure categories [89,[90][91][92][93][94][95][96][97]. Inconsistencies and changes in relative focus between companies and their reports were categorized and are outlined below. They were then further contextualized through full-length peer-reviewed academic papers comparing GRI reports, mining disclosures, and emissions from mineral sources. This review then focused on grey literature and white papers that were used within GRI methods, GHG protocol methods, and CDP questionnaires. From this, copper was chosen as a case study due to its critical role in renewable technologies, its production by many of the world's largest mining companies, and its complex emissions life cycle. Unlike steel or aluminum production, which has its primary climate impacts in processing, copper has GHG emissions associated with extraction, hauling, comminution, and processing. Discrepancies among reporting methods and relevant background information were organized into categories that form the sections of this paper.
The remainder of the paper is structured as follows: section 2 reviews the current environmental impacts of mining. Section 3 evaluates the state of corporate mining reports. Section 4 analyzes what is generally included in reports, whereas section 5 discusses missing information along the supply chain. To put all of this into context, and also offer a timely empirical case study, section 6 evaluates copper's representation on corporate mining reports. Section 7 concludes with areas for future investigation.
Current environmental impacts of mining
Environmental impacts from the mining industry include GHG emissions, ecotoxicity impacts, and human toxicity impacts, as outlined by the United Nations in their annual resources outlook report and figure 1 [12]. In the mining industry, these impacts primarily come from the common metals that account for >95% of global domestic extraction, namely iron, steel, aluminum, and copper [12]. However, the materials needed for the energy transition can compound these environmental concerns. With many renewable materials expected to rapidly increase in demand, new climate change impacts and toxicity sources are likely. It is therefore important to understand the potential environmental impacts of both common metals and specific materials needed for renewable technology.
Despite attempts at improved governance and better corporate management, procurement of many mineral and metal resources remains environmentally capricious and, in some cases, a source of conflict at the sites of resource extraction [20]. Due to lack of preventative strategies and measures, such as drilling with water and proper exhaust ventilation, many cobalt mines throughout the Democratic Republic of the Congo (DRC) contribute to deforestation, tailings pollution, landslides, dust, and fugitive emissions from diesel generators and trucks (figure 2) [21,22]. Mining for copper, needed for electric wires and circuits, thin-film solar cells, as well as lithium, used in batteries, has been criticized in Chile for depleting local water resources across the Atacama Desert, destroying fragile ecosystems, and converting meadows and lagoons into salt flats [23]. The extraction, crushing, refining, and processing of cadmium, a byproduct of zinc mining, into compounds for thin-film photovoltaic modules that use cadmium telluride or cadmium sulfide semiconductors, can lead to groundwater or agricultural soil contamination, or worker exposure to hazardous chemicals (cadmium chloride), and occupational air pollution [24]. Rare earth minerals, such as neodymium, are needed for magnets in electric generators and motors, electric vehicles, and the fluid catalysts for shale gas fracking. But their mining in China has resulted in chemical pollution from ammonium sulfate and ammonium chloride that now threaten rural groundwater aquifers as well as rivers and streams [25].
Mining and metals processing also have substantial carbon footprints. Due to its large ore volumes and high processing needs, the global iron-steel production chain is already responsible for as much as 7%-9% of direct GHG emissions [27,28]. When combined with aluminum's high energy requirements and copper's processing impacts, the three metals have come to represent more than one-quarter of global industrial energy demand and associated emissions (figure 1) [12]. Considered together with other commonly produced minerals and metals, the climate (global) and health (local) impacts from extraction and production have nearly doubled between 2000 and 2015, and represent over 10% of global GHG emissions and 12% of global particulate matter health impacts [12]. When including non-metallic minerals, the extractive industry accounts for as much as 20% of global GHG and 20% of global particulate matter emissions [12] (figure 1).
Looking at future trends, environmental impacts may grow, rather than recede, and large increases in mineral and metal demand specifically for the energy transition can compound environmental concerns (figure 3). The Institute for Sustainable Futures' report Responsible Minerals Sourcing for Renewable Energy explores some of these 'hotspots' [29, p 1], and found significant environmental impacts associated with the mining and processing of these metals [29]. For the 14 materials needed for renewable technologies that they explored, they found issues with large volumes of solid waste, harmful chemicals, heavy metal contamination (air, water, and soil), water shortages, tailing spills, and broader health impacts for workers and surrounding communities [29]. Increases in cumulative demand through 2050 for cobalt, lithium, and rare earths were found to be of specific concern due to the rapid growth of vehicle electrification and the acceleration of battery storage technology (figures 4-6) [29]. For cobalt, 65% of the world's supply comes from the Democratic Republic of the Congo (figures 4-6), which has a history of environmental and social abuse, and is part of one of the ten most polluted places on Earth (African Copper Belt) [29,30]. Rare earth elements (REEs) have also already caused significant problems in China due to toxic chemicals and the technologically-enhanced, concentrated, radioactive materials from REE processing and extraction [25,29]. Lithium brine extraction, while less energy intensive than other processes, poses potential problems due to an expected 1000% increase in demand and the lack of long-term environmental investigations into the extraction's effects on one of the most arid locations on Earth [29].
The ability to track mineral and metal sources is also becoming increasingly relevant, both to capture the full environmental impacts of a material and because impacts are often disproportionately felt by developing countries. Increases in mineral demand present numerous opportunities for low-income countries focused on resource extraction and processing, but can also mean that operations take place in environmentally sensitive areas such as forests, rivers, and coastlines, and sometimes without robust governance structures and regulations in place [31]. The 2017 Resource Governance Index found that across different minerals, on average, 37% of mineral reserves are in countries with a mix of strong and problematic areas of resource governance [31]. These problematic areas imply that resource extraction can help society, but it is likely that the eventual benefits to the surrounding areas will be weak [31]. A further 7% of minerals were in countries that have minimal procedures and practices to govern resources, where most of the elements necessary to ensure societal benefits were found to be missing [31]. Countries with weak or poor governance are less likely to adopt policies that can benefit citizens, communities, environmental health, and mining operations [31].
McKinsey and Company's 2020 report on climate risk for mining companies warns of climate change hazards 'increasing physical challenges to mining operations' [33, p 1], with water stress and flooding being direct challenges that operators will need to overcome [33]. Analysis of the MineSpans database for copper, gold, iron ore, and zinc found that 30%-50% of production already occurs in areas with high water stress, and that 'these hot spots will worsen in the coming decades' [33, p 3]. While more capital-intensive approaches and water intensity reductions can help to mitigate negative effects, shifting demand for minerals, and calls for the industry to decarbonize, present their own problems. With coal representing 50% of the global mining market and being 'the most obvious victim' [33, p 5] of shifts to global decarbonization, many mining companies will need to rebalance non-diverse mineral portfolios and begin considering the impacts of a circular economy [33]. Production of niche commodities can help to manage losses, but companies also need to look at decarbonizing through several operational levers (figure 7), with the understanding that 'building a climate strategy will not be quick or easy-but waiting is not an option' [33, p 1].
Background
A mix of emerging sustainable development concepts being applied within industry, high ESG sector risk analyses (figure 8), and global commodity markets, have all resulted in increasing economic pressure for mining companies to report and reduce their environmental impacts. For example, a KPMG survey of corporate sustainability reporting found that 93% of the world's largest 250 companies now publish sustainability reports, while the CDP claims that over 8400 companies have reported through them regarding climate change, water security, and forest health [34,35]. A core tenet of the literature on corporate social responsibility is that private firms must not only meet their fiduciary responsibility to shareholders and their legal responsibility to avoid fraud and illicit activities; they must also promote a broader social agenda. This agenda frequently includes facilitating the prosperity of communities, minimizing environmental degradation, and contributing to the creation of safe and peaceful societies with strong institutions and equitable distribution of costs and benefits [36]. Gallarotti (1995) even suggested, writing more than two decades ago, that the business community was beginning to shift towards 'green consumption,' [37, p 43] a transition that had the potential to create 'a new business ecosystem' [37, p 50] enhanced by the principles of human rights, transparency, and sound governance [37].
The mining industry's substantial role in sustainability reporting began after an economic and social crisis in the mid-to-late 1990s, which threatened the industry's 'social license to operate' [38, p 1] and resulted in the creation of the International Council on Mining and Metals (ICMM) [38].
Following its formalization in 1999, the Global Mining Initiative (soon to become the ICMM) began working closely with the Global Reporting Initiative (GRI) to develop a reporting supplement aimed at 'a clearer understanding of the positive role the mining and minerals industries can play in managing the transition to sustainable development' [38, p 18, 40]. Membership of the ICMM now requires that companies report their sustainability impacts in accordance with the GRI's Mining and Metals Sector Supplement and seek independent assurance of their reports [41]. The ICMM now includes 26 of the world's largest mining and metals companies, and 35 associations, aimed at addressing 'the core sustainable development challenges faced by the industry' [42, p 1]. The mining industry has also started reporting to independent organizations such as the CDP, Science Based Targets (SBT), and the Task Force on Climate-Related Financial Disclosures (TCFD), who help structure sustainability reporting and maintain the disclosures as a primary source for a company's ESG developments.
Numerous reporting initiatives and standards
The GRI and other reporting frameworks are meant to measure relevant environmental indicators, but corporate reports by the mining industry are not standardized and often struggle to weigh relevant inclusions and explanations. The numerous changes in reporting requirements (figure 9) have progressed through consumer/investor pressures to use disclosure programs that offer unique insights and indicators for specific environmental impacts or concerns related to the energy transition [42][43][44][45]. CDP's quantitative focus is meant to simplify and standardize GHG emissions reporting while empowering 'investors, companies, cities, and national and regional governments to make the right choices… for people and planet in the long term' [35, p 1]. The TCFD aims to 'help firms understand what financial markets want from disclosure in order to measure and respond to climate change risks' [46, p 1]. A company's GRI-guided sustainability report is meant to 'demonstrate(s) the link between its strategy and its commitment to a sustainable global economy', and promotes 'climate change, human rights, governance and social well-being' [15, p 1]. The issue with these developments is that there is no underlying framework to identify the specific interactions between the mining industry and the environment, or to enable the selection and operationalization of the most relevant environmental indicators [47]. Each initiative varies in status (legal requirement vs. voluntary disclosure), scope (climate vs. impacts), and ambition (discourse vs. strategy), which is meant to present an encompassing picture of ESG developments, but instead forces companies to focus on specific 'silos of sustainability' within their organization and to balance what is relevant for specific disclosures against what is financially best for their company [47,48]. Depoers et al (2016) illustrate this point in their study of SBF 120 (Société des Bourses Françaises 120 Index) firms, where it was found that managers adapt their disclosure strategy to address the information needs of different stakeholder groups by changing sources, traceability, and inclusions [49]. In promoting the analysis of dozens of indicators that neglect interactive effects and 'the state of the socioecological systems from which they are drawn' [47, p 73], there is an overt emphasis on individual metrics and not on a company's actual sustainability practices [47].
This focus on individual metrics and general indicators can be seen at the end of almost every published sustainability report, where mining companies provide content indices on what metric was addressed, the response, the page number where it can be found, what sustainability principles it was supposed to meet, whether it has external assurance, and why it counts as a sufficient response. This is especially relevant for the extractive industry, where the sale of anonymous, primary goods creates the 'essential conflict between financial and other bottom lines, which, for the foreseeable future at least, the financial will always win' [47, p 72, 50]. With no market differentiation, and operating under a collective industry reputation, the appearance of sustainability becomes just as valuable as actual practice. This often leads companies to report against numerous initiatives without the additional initiatives having dramatic effects on reporting. Mark Carney, Chairman of the Financial Stability Board, conveyed this best when introducing the TCFD in 2015 [51]. With nearly 400 initiatives aimed at relaying the costs, opportunities, and risks associated with climate change, meeting effective disclosure standards requires coordination, and 'the existing surfeit of existing schemes and fragmented disclosures means a risk of getting "lost in the right direction"' [51, p 1]. Similar challenges exist within the Extractive Industries Transparency Initiative (EITI), which seeks to foster accountability and minimize corruption in the oil, gas, and mining sectors, but often has mixed outcomes in its ability to promote broader improved governance or sustainability [52,53].
Comparability
The lack of contextualized disclosures for the mining industry limits sustainability reporting initiatives, and users of their data, in their ability to convey a company's progress towards sustainability. Constructive critiques of reporting methodologies specifically cite the lack of guidance for geographic variations, scales, and interactive effects as major drawbacks [47]. These are especially prominent when considering the environmental impacts of different production routes, as they can reflect everything from 'ore mineralogy and grade, mining type and available technologies, to resources for the mining and processing' [16, p 100]. Further, with a large number of multinational feed streams, waste streams, byproduct streams, and energy inputs associated with mining and processing, reporting frameworks are not capable of characterizing the entire supply chain of a mineral, or of specific production challenges. For the energy transition in particular, it is especially necessary to be aware of environmental reduction levers (figure 7), including the feasibility of adopting renewable technologies and the regional energy mixes.
Figure 10. Impacts of the iron and steelmaking sector [12].
The scope and scale of environmental concerns are not limited to minor metals but extend to minerals and metals with established markets. For the steel industry, with a large global production chain, these influences can be seen in figure 10. For example, the United Nations Global Resources Outlook reported that the first processing step in the primary production of steel accounts for more impacts than the iron ore extraction phase for all reported indicators, and that the climate change impacts of secondary steel production can vary by as much as 10%-38% due to the electricity mixes between countries (figure 10) [12]. This has led many companies to attempt to curb their Scope 3 emissions, both upstream and downstream. Companies like Rio Tinto, which sells large amounts of iron ore to China, have attempted to curb their Scope 3 emissions by pledging $400 million to help reduce the emissions of their customer China Baowu Steel Group [54]. They hope to reduce their own supply chain emissions through their clients' use of less metallurgical coal, transportation optimization, and possibly the unproven commercialization of hydrogen steel [54]. These changes and developments are in contrast to reports by American copper-producing companies, which state that truck haulage is a focused concern for both cost and GHG emissions, and that through site development and falling ore grades, 'trucks are required to move ore farther distances to processing facilities' [55, p 26, 56]. With different focuses on environmental concerns and different reduction levers (figure 7), using generalized reporting initiatives to compare environmental pathways and emission reductions is not comprehensive. In the context of global production networks and international supply chains, the dismissal of holistic and systemic perspectives diminishes comparability and assessment of progress [47,57].
Quantitative comparisons
The lack of contextualized disclosures can limit comparability between sustainability pathways, yet more focused and quantitative guidance can also fail to encompass relevant environmental considerations. Investigations into comparing quantitative aspects of sustainability reports for mining and processing companies have found that 'it is impossible… in a credible manner' and not useful 'to classify firms on this basis' [57, p 25]. Henri and Boiral's 2015 study of mining sustainability reports compared only A or A+ GRI reports (the highest rated) and still ran into significant issues. The study found that 'less than 50% of all GRI indicators focused on quantitative measurements' [57, p 16], while the rest were unmeasurable and unspecific [57]. The result was a sea of words or 'tower of Babel syndrome' [58, p 1] that makes it difficult to interpret differentiating criteria while giving firms a great deal of freedom in how they responded and what they presented [57, 58, p 1]. Critical information, such as the weight of transported hazardous waste, was often incomplete or completely absent, while companies like Codelco wrote as much as 20 pages on renewable energy, salmon farming, and educational projects in Chile (2008), and Newmont more than 60 pages on the firm's sustainability management system [57].
The complications of obtaining accurate, topical, and substantive measurements were further exacerbated by the existence of several affiliate organizations and different levels of data aggregation, even within single reports. Companies such as Rio Tinto were able to disclose information both with and without Alcan (a subsidiary purchased in 2007) depending on the indicator they were reporting, all while maintaining an A or A+ reporting score [57]. Even among CDP reports, Andre and Cortese (2012) examined disclosures of metals and mining companies operating in the Australasian region to determine whether voluntary information can provide a meaningful basis for climate change related decision making [59]. While acknowledging the important work the CDP undertakes, the combination of various disclosure methods and the absence of emissions data compounded difficulties and led to the conclusion that 'the CDP information is not comparable, and by extension, limited in its usefulness' [59, p 14].
Common evidence and themes within the literature
Notwithstanding these drawbacks, corporate mining reports serve a significant purpose. In this section we show that they do establish a common ground for evidence, albeit emerging and often incomplete, on GHG emissions, life-cycle assessment (LCA)-based calculations, and impacts in developing countries.
GHG emissions
Whereas calculation standards, protocols, and methodologies are well established for Scope 1 and Scope 2 emissions, the recent interest in Scope 3 emissions has presented a challenge in accurately mapping and reporting GHG emissions from global supply chains. The Greenhouse Gas Protocol Corporate Value Chain (Scope 3) Standard provides guidance and evaluation tools for Scope 3 emissions, but companies are still allowed to choose what values to report, the boundaries for emissions categories, and even what categories they consider relevant. Looking at figure 11, no mining company reported more than nine categories in 2018, with other inclusions being written off as not relevant [17]. Greene (2018) highlights the difficulties of comparing selective disclosures in her study of CDP reports and Scope 3 emissions, stating that 'incomplete reports make it difficult to track emissions reduction goals or implement sustainable supply chain improvements' [17, p 1]. These inclusions are especially relevant when considering that 'emissions reported by one company operating at one stage of the life cycle contribute to the value chain emissions of another' [17, p 7] and that GHG emissions from the value chain can amount to over 90% of total emissions for some companies [17].
Reported values become even more convoluted when considering that companies rely on default data to obtain their values and emissions factors, since supplier data is often difficult to obtain [47,17]. For minerals and metals specifically, being commodities with long life-cycles and further processing needs means that most reporting companies have to make assumptions about end product uses and general processing routes. This often means assuming a conversion factor for ore feedstock, that all ore is converted to metal, and that all produced metal has the same emissions factor. These values can vary greatly depending on life-cycle assumptions, where a product is sold to, and the purchasing companies' own subsequent emissions and policies.
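To make the effect of these simplifying assumptions concrete, the minimal Python sketch below estimates downstream Scope 3 emissions for a hypothetical ore sale under two plausible assumption sets. All tonnages, conversion factors, and emissions factors are illustrative placeholders, not values taken from any company's disclosure; the point is only that the choice of assumptions swings the reported total considerably.

```python
# Hypothetical illustration of how simplifying assumptions drive Scope 3 estimates.
# All numbers below are placeholders, not values from any actual corporate report.

def downstream_scope3(ore_tonnes, ore_to_metal_factor, emissions_factor_t_per_t):
    """Estimate downstream Scope 3 emissions (t CO2e) from ore sold.

    Assumes a single ore-to-metal conversion factor and that every tonne of
    produced metal carries the same emissions factor, as described above.
    """
    metal_tonnes = ore_tonnes * ore_to_metal_factor
    return metal_tonnes * emissions_factor_t_per_t

# Two assumption sets give very different totals for the same hypothetical sale.
sale = 1_000_000  # tonnes of ore sold (hypothetical)
low = downstream_scope3(sale, ore_to_metal_factor=0.8, emissions_factor_t_per_t=3.5)
high = downstream_scope3(sale, ore_to_metal_factor=1.0, emissions_factor_t_per_t=4.5)
print(f"Scope 3 estimate ranges from {low:,.0f} to {high:,.0f} t CO2e")
```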
LCA based calculations
With supplier data difficult to obtain, and global value chains becoming increasingly complex, many reported values are based on LCAs, which have their own caveats related to supply chains, metrics, and function. The use of LCAs as a tool to promote the sustainable design and redesign of products and processes is part of what has led to the adoption of renewable technologies and positive environmental changes in the mining industry. However, 'LCA is a relative tool intended for comparison and not absolute evaluation' [60, p 1], which can limit its effectiveness in supporting environmental disclosures and representation of entire supply chains [60]. While data on global warming potential and energy inputs are well documented in life cycle inventories (LCIs), 'other relevant impacts resulting from, e.g. acidification, heavy metal emissions, water or land use are fragmentary' [19, p 93]. Nuss and Eckelman's 2014 LCA of 63 metals in their major use forms was the most comprehensive life cycle comparison of metals to date, but 'impacts other than global warming potential and cumulative energy demand could not be further investigated … due to the limited availability of life cycle inventory data' [19, 61, p 4]. Nuss and Eckelman (2014) also note problems with LCI data being reported in aggregate form at 'either pre-allocated or at system process level' [61, p 2], which makes it difficult to make robust comparisons or to take co-production issues with minerals and metals into account [61]. Co-production is especially important for the energy transition as many of the materials needed for renewable technologies occur as secondary minerals that are embedded in base metals (including rare earths, indium, and tellurium) [62]. Manhart et al (2018) also warn that major obstacles for assessing the life-cycle environmental relevance of primary raw materials are the lack of 'representative data for the mining sector on a global level' [19, p 94] and 'the current lack of scientifically sound models for input categories on resources, which are particularly relevant for mining' [19, p 93].
These LCA considerations are apparent in the steel and lithium industries and highlight the need to consider entire supply chains. Greene (2017) conveys some of these shortcomings in writing that life cycle data commonly used in the steel sector is based on a small set of studies and geographic regions, and that the World Steel Association's choice of boundaries can leave out important emissions [17,64]. With coal supplying 75% of the energy demand for global steel production, and China being one of the world's worst coal users and polluters, it can be hard to evaluate how representative such aggregate values are, or how effectively they can be used in reporting, without considering emissions intensity by geography [63,66,67]. For steel specifically, a 2016 comparison of carbon dioxide emissions intensity of production methods between various countries found that 'if the German, Mexican, and U.S. steel industries were similar in structure to the Chinese steel industry… the CO2 emissions intensity of steel production in Germany, Mexico, and the U.S. would increase by 19%, 92%, and 56%, respectively' [68, p 16]. Together these variations show that LCAs need to be taken in context, but that they can also provide valuable insights into supply chains and possible points of intervention for environmental sustainability.
Environmental impacts in developing economies
Sustainability reports are meant to disclose negative environmental impacts in any developing economy, but this process has become increasingly convoluted. In the past, well publicized environmental disasters contributed to the industry 'crisis' [42, p 1] and led mining companies to position themselves as drivers of sustainable development in emerging economies [38,40,69,70]. The agreement that multinational mining companies should operate in accordance with international environmental standards, despite the lack of regulatory enforcement in developing countries, assuaged concerns that multinational mining corporations might exploit people and resources [40,71,72]. This is once again becoming a major concern due to the geographic concentration of minerals and metals for the energy transition, and their subsequent vulnerability to price fluctuations and detrimental environmental/social practices. This has already been seen with cobalt in the DRC and REEs in China, and further concentrated supply of cobalt, rare earths, and tellurium can be seen in figure 13 [29]. Australia, the DRC, and South Africa have large shares of the production of metals for Li-ion batteries [29]. Japan, Korea, Canada, and Russia have significant production levels of metals for photovoltaics, while Chile, Argentina, and Peru have over half of the world's lithium [29,73].
Impacts in developing countries are still underreported, despite the numerous published projects and initiatives by mining companies. True Footprint's analysis of annual reports from 23 mining companies found that 70% of material indicators used for reporting were of inputs and outputs (how much was spent, how many natural resources were used, what activities were conducted), while only 26% explained the actual outcomes, and 4.5% the impacts [34]. The same analysis showed that it was possible to report outcomes for all material sustainability topics, but that companies chose not to, despite the outcome being potentially positive in some cases, as it was not required for reporting [34]. A study by Boiral (2013) found sustainability reports to function mainly as 'simulacra' [74, p 1] due to their disconnect from reality, the distortion of information, and the use of images that were not considered relevant [74]. The 'emphasis on the firm's positive achievements,' 'emphasis on virtuous statements and commitments,' and 'showcasing of outside awards and distinctions' [74, p 25] limited their reliability in conveying relevant information, while pictures of unspoiled nature and stewardship led to further misrepresentation [74]. The counter accounting analysis showed that 'only 10% of significant news events concerning sustainable development were reported clearly and explicitly in the sustainability reports' [74, p 25], while the proliferation of images was largely disconnected from the firms' genuine impacts [74]. For developing countries, this presents a false image of sustainability and progress.
Common lacunae and missing information within the literature
While the common themes of included evidence are perhaps striking, just as significant is what is missing within the body of evidence, notably gaps in environmental reporting, scant focus on artisanal or small-scale mining, and an inability to capture illegal or criminal supply chains.
Gaps in environmental reporting within sustainability reports
Sustainability reports, and most environmental methodologies, do not include unreported, unregistered, or even illegal mineral and metal production, despite the large role that they play in global supply chains and environmental impacts. Introduction of these materials primarily occurs at the beginning of the supply chain, following extraction and prior to processing. The growth of artisanal or small-scale mining (ASM) is already an environmental concern for developing economies, even without the large increases in demand for minerals and metals that are expected with the energy transition. There were an estimated 40.5 million people engaged in ASM in 2017, up from 30 million in 2014, 13 million in 1999, and 6 million in 1993, compared to the 7 million working in industrial mining in 2013 [75]. With the majority of ASM workers using rudimentary tools and techniques, there are significant health and environmental impacts associated with their operations. Among environmental concerns are fine particles from blasting and drilling that degrade crops and farmland, streams and rivers becoming toxic, and exposure to mercury, zinc, vapor, cyanide, and even radioactive materials [25,75]. Scientific American's list of toxic pollution problems lists mining related activities as responsible for three of the top ten global issues, with mercury pollution from artisanal mining as the worst global toxic pollution problem [76]. While gold is not as tied to renewable technology as other materials, large price increases for renewable materials have already seen similar spikes in artisanal mining operations.
Artisanal and small-scale mining enterprises
Existing examples of these practices and their environmental implications can be seen in parts of Africa and China [77][78][79]. Fortune's 2018 report, Blood, Sweat, and Batteries, shows what increased prices for cobalt can do to insufficiently supported communities, while in 2019 the death of 43 miners once again demonstrated the impact of cobalt mining in the DRC [77][78][79][80]. Reports have found as many as 255 000 artisanal miners for cobalt in the Democratic Republic of the Congo, 35 000 of whom are children working in 'exceedingly harsh, hazardous, and toxic conditions' [79, 81, p 1, 82]. As for where these materials go, the Chinese middle men haggling over cobalt prices in the Mosomp cobalt market, and the children in a small village near Kolwezi greeting reporters with 'Ni hao!' [79, p 1], implicate China, the world's largest consumer of metals [79].
Illegal or criminal supply chains
Investigations into rare earth supply chains have found that illegal sources can add as much as 40% of official Chinese production, the results of which are tied to 'enormous social and environmental problems' [84, p 6744]. Due to rare earth mining in Jiangxi province, the region is facing a $5.5 billion cleanup bill, with a focal point of keeping the polluted water from reaching a wider area in neighboring provinces [25]. Ma Jun, a leading Chinese environmentalist and director of the Institute for Public and Environmental Affairs, fears that other regions around the world could suffer a similar fate if they become, like China, suppliers of cheap rare earth elements with little or no environmental price attached [25]. These concerns over legal and accountable supply chains are very real, with ASM employing whole percentage points of some national populations (figure 14) and believed to account for 15%-20% of global non-fuel mineral production [75,84]. With mining corporations only reporting on their own operations, LCAs not accounting for external variables, and supply chains being partially fueled by unreported ASM, environmental impact methodologies are once again proving to be non-inclusive. Minerals and metals are entering supply chains, but not being properly accounted for in a global context.
A case study in corporate governance: copper
In this section, to illustrate the tensions and challenges with corporate mining governance reported in the earlier sections, the GHG emissions as reported by copper producing companies are compared to evaluate environmental impact methodologies. GHG emissions are used due to their quantitative and comparable nature.
Copper as an exemplar for low-carbon transitions
Copper's role in both conventional and low-carbon energy transitions is well established, from conventional motors, wiring, and circuitry to its substantial use in wind turbines, solar panels, and energy storage technologies (figure 15) [1,85,86]. Estimates by the Institute for Sustainable Futures show that peak annual demand for copper in renewable energy and storage could account for as much as 29% of annual production by 2050 (2017 data) [29]. Copper is representative of numerous minerals and metals in demonstrating environmental concerns at numerous points throughout its lifecycle. Unlike steel or aluminum production, which have a large environmental focus on processing, copper has GHG emissions from extraction, hauling, comminution, and processing, which makes it representative of a greater number of GHG emissions from other minerals and mining practices. Calls for emissions transparency by Azadi et al (2020), and copper's historical role as an environmental polluter, make it a valuable case study to assess environmental impact methodology [16]. To ensure consistency in evaluated impact, only the reported GHG emissions will be compared for copper mining entities.
Calculating the environmental impacts of copper
Of the top ten copper producers, only six reported to CDP, and only five are members of the ICMM [42,44,88]. From these five companies, BHP, Vale, and Freeport were chosen to represent varied CDP and GRI reporting scores (table 1), and as an indication of completion. For 2018, all three companies created sustainability reports following GRI guidelines, reported to CDP's Climate Change 2018 questionnaire, and had third party verification for their emissions values [89][90][91]. As such, CDP and GRI-guided sustainability reports from BHP, Freeport-McMoRan, and Vale were used as the data source for this case study. Comparison of GRI reports, CDP reports, disclosed emissions factors, emissions sources, and reporting requirements was used to effectively compare reporting methods. From this, discrepancies and misalignments were identified and elaborated in the sections below. BHP is used as a best case scenario due to the external publication of a supplementary Scope 3 Emissions Calculation Methodology 2018 and separate GHG data [92,93]. Through these published supplements, the embodied emissions in one ton of copper were calculated with the assumption that the percentage share of mined material (by weight and excluding petroleum) was reflective of the percentage share of the upstream reported Scope 3 emissions (table 2). From these assumptions, the resulting calculated Scope 3 emissions factor was found to equal 4.27 tonnes CO2e per ton of copper produced by BHP. Calculated Scope 3 values were then added to BHP's Scope 1 and Scope 2 values to get the total emissions for one ton of copper produced by BHP and accounted for by their methodology. Using BHP's new calculated emissions factor, the reported production values and emissions of Freeport-McMoRan and Vale were compared as a means to evaluate environmental calculation methodologies.
Table 2. Scope 3 and embodied emissions in copper, iron ore, metallurgical coal, and energy coal (based on BHP data).
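The share-by-weight allocation described above can be expressed as a short calculation. The sketch below uses hypothetical tonnages and an illustrative upstream Scope 3 total; only the structure of the calculation (allocate upstream Scope 3 by weight share, then add Scope 1 and 2) follows the methodology described in the text. None of the numbers are BHP's, and the 4.27 t CO2e/t figure quoted above is the article's reported result, not an output of these placeholder values.

```python
# Sketch of the allocation logic described above: upstream Scope 3 emissions are
# split across commodities in proportion to their share of mined weight, then
# Scope 1 and Scope 2 are added to obtain a per-tonne total for copper.
# Tonnages and emissions totals below are hypothetical placeholders.

mined_weight_t = {"copper": 1.7e6, "iron_ore": 238e6, "met_coal": 29e6, "energy_coal": 28e6}
upstream_scope3_t_co2e = 20e6          # hypothetical upstream Scope 3 total
scope1_plus_2_per_t_copper = 4.3       # hypothetical t CO2e per tonne of copper

total_weight = sum(mined_weight_t.values())
copper_share = mined_weight_t["copper"] / total_weight          # share by weight
copper_scope3 = upstream_scope3_t_co2e * copper_share           # allocated to copper
scope3_per_t_copper = copper_scope3 / mined_weight_t["copper"]

total_per_t_copper = scope3_per_t_copper + scope1_plus_2_per_t_copper
print(f"Allocated Scope 3: {scope3_per_t_copper:.2f} t CO2e/t Cu")
print(f"Scope 1+2+3 total: {total_per_t_copper:.2f} t CO2e/t Cu")
```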
Freeport-McMoRan
Freeport reported on 3 of the 15 Scope 3 categories outlined by the Greenhouse Gas Protocol [94], but of specific interest are the Scope 3 categories relating to processing of sold products and use of sold products. When reporting 263 584 metric tonnes of CO2e emitted for the Processing of Sold Products, the calculation methodology states that 'because Freeport-McMoRan operates vertically integrated assets, many downstream processing emissions that would be considered Scope 3 emissions for other companies are Scope 1 emissions for Freeport-McMoRan' [95, p 16]. This is reflective of Freeport-McMoRan owned smelters, where the emissions reported on processing of sold products 'only represent emissions from the smelting of concentrate and the refining of copper anodes sold to third parties' [95, p 16] and were calculated by applying average emissions at Freeport smelters to the amount of material sold to third parties. Freeport also reported that they 'do not have access to emissions information for the broad spectrum of downstream manufacturing' [95, p 16]. Freeport's reported production figures were accordingly combined with BHP's calculated emissions factors (respectively) to get an estimate of Freeport's emissions using BHP's methodology. These values therefore represent a conservative estimate of what Freeport would emit using BHP's methodology. Figure 16 shows how, even ignoring potential emissions from Freeport's production of gold and molybdenum, BHP's methodology predicts much larger Scope 3 emissions. Scope 1 emissions and total operational emissions are slightly lower using BHP's methodology, which is reflective of Freeport's vertical integration, but overall, total reported emissions would be 64% higher for Freeport using BHP's methods.
Vale
Vale's emissions were relatively similar to BHP's, but its calculation methodology/strategy does not reflect its changes in operations and is indicative of drastic swings in reporting or problems with emission accounting. Vale does not take a vertical integration approach like Freeport, and its relative Scope 3 emissions changed markedly between reporting years [90,96].
A supplement could not be found to explain its reporting methodology, but Vale actively cites the Greenhouse Gas Protocol in both its 2017 and 2018 CDP reports. In its independently published 2018 sustainability report, Vale wrote of its Scope 3 emissions: 'In 2018, these emissions totaled approximately 586 million tCO2e in the year, a result very similar to that of 2017' [90, p 78]. The nearly 80% increase is not mentioned or explained. Operational changes also do not explain the shift, as Vale produced less manganese ore, nickel, copper, cobalt, and gold than in 2017. Small increases in iron ore, iron pellets, and coal could not have caused such a drastic shift, unless the embodied emissions were over 13 000 tonnes of CO2e per tonne of iron and over 27 000 tonnes of CO2e per tonne of coal (assuming the same emissions as calculated from BHP, table 2). The cumulative production change relative to the change in Scope 3 emissions is shown in figures 18 and 19, with BHP's relative production and emissions changes shown for contrast. From this drastic shift, it can be assumed that emissions methodologies are still evolving despite the numerous forms of guidance and Vale's commitment to reporting substantial Scope 3 values.
BHP
BHP has become a leader in disclosure transparency through its Scope 3 Emissions Calculation Methodology and unrestricted Scope 3 values (figure 20), but that does not mean it is without methodology concerns. For 2018, its Scope 3 emissions methodology relies on numerous assumptions and 'double counting' [92, p 4]. For its processing of sold products calculations, the Scope 3 Emissions Calculation Methodology 2018 used copper production of 1 237 648 tonnes, citing the BHP Operational Review for the year ended 30 June 2018 as the source, and with the assumption that 'production volumes approximate sales volumes; small year-end inventory volumes will be smoothed out over year-on-year calculations' [92, p 27]. However, BHP's Operational Review for the year ended 30 June 2018 (published on 18 July 2018) reports 1753 kt of copper production for FY18, leaving nearly 500 000 tonnes of copper unaccounted for in its Scope 3 calculation, or over 2 million tonnes of CO2e [92,97]. This review was not able to reconcile these values. Without a mapped supply chain, the methodology also assumed a 1.0 conversion factor between copper feedstock and end-use product, along with assuming that all copper is manufactured into copper wire [92]. This is not an inherently flawed assumption, but it illustrates how generalized even the most advanced calculation methodologies remain for GHG accounting along supply chains. Finally, the emissions factor used in the calculation is 4.2 tonnes CO2e per tonne of copper wire produced, which is based on a 2012 LCA that is meant to represent 'all emissions associated with mining and extracting ore to create copper cathodes, as well as subsequent manufacturing into copper wires' [92, p 28]. This emissions factor is presented as a 'conservative' assumption that is also meant to reasonably reflect 'local electricity emissions intensity and other factors' with no explanation of how, or why, other than reminders that it will 'provide a high-side estimation of emissions in BHP's value chain from this process' [92, p 28]. Together these values represent a calculated 8.57 tonnes of CO2e per ton of copper produced by BHP, which is in contrast to the International Copper Association's most recent LCA for cradle-to-gate copper, which cites the entire embodied emissions for 1 metric ton of copper cathode as 4100 kg CO2e.
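As a rough cross-check of the 'over 2 million tonnes' figure quoted above, the arithmetic can be reproduced from the two production numbers and the 4.2 t CO2e/t emissions factor cited in the text. The sketch below is our own back-of-envelope reconstruction, not a figure published by BHP.

```python
# Back-of-envelope reconstruction of the unaccounted Scope 3 emissions discussed above,
# using only figures quoted in the text.
scope3_basis_t = 1_237_648        # copper tonnage used in the Scope 3 methodology
operational_review_t = 1_753_000  # copper production reported in the Operational Review (1753 kt)
emissions_factor = 4.2            # t CO2e per tonne of copper wire (cited LCA-based factor)

unaccounted_t = operational_review_t - scope3_basis_t
unaccounted_emissions = unaccounted_t * emissions_factor
print(f"Unaccounted copper: {unaccounted_t:,} t")                       # ~515,000 t
print(f"Implied missing Scope 3: {unaccounted_emissions:,.0f} t CO2e")  # ~2.2 million t
```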
Conclusion and further research
Many committed governments, investors, companies, and consumers are searching for better ways to effectively identify and manage the environmental impacts of rapidly growing mineral supply chains. With the advent of a global economy, it has become difficult to track the impacts of the numerous inputs, processes, and activities involved with the use and production of minerals and metals. This lack of transparency has in turn highlighted the need for better methods and understanding of environmental impacts especially at the early stages of extraction and processing.
Our copper case study demonstrates that the mining industry's measurements of GHG emissions are likely inaccurate, incomplete, and differ dramatically from company to company. When looking at this variability for the same measurements, and for the same materials, it becomes clear that the process lacks cohesion and transparency. Even among what one would consider essential information, or highly quantitative values, such as CO2e, the variance in methodology, inclusion criteria, and what is considered relevant makes it almost impossible to compare or evaluate a company's effectiveness at reporting. BHP explicitly stated that they were overestimating their emissions to present themselves as taking a conservative approach. Other companies were able to select what emissions they chose to report or explain, and some left potentially enormous amounts of emissions unaccounted for. When considering these inaccuracies in the context of other minerals, metals, and mining companies, it is hard to claim that we have a clear understanding of the mining industry's GHG emissions and environmental impacts. When further buttressed by the assumptions made through LCAs, and the lack of accounting for ASM, it is clear that the environmental impacts of mineral and metal extraction are likely much larger than currently estimated. This has extremely troubling and sobering implications for those seeking to verify and validate the feasibility of a low-carbon transition.
In general, many of the world's largest mining companies have tried to demonstrate their commitment to reducing their environmental impacts, but without consistency, it is difficult to create trusted data. A standardized method for reporting environmental impacts is needed. This review contributes towards this goal by identifying gaps in current reporting mechanisms, as well as best practices, and key potential areas for further analysis. In identifying these contours, environmental reporting by mining companies can be improved to better represent the evolving shift towards accountability that is needed for a low-carbon environment.
This review offers a benchmark for future corporate governance and understanding of metal and mineral supply chains. By starting at the beginning of the supply chain and focusing on extraction and processing, it is possible to identify several areas for further research and a comprehensive path forward. These actionable steps, and further work needed to implement them, are as follows: (a) The development of a more uniform, widely accepted and consistent carbon accounting framework for the metals and minerals industry.
1. There needs to be alignment between companies on mineral-related standards and initiatives for accurate climate reporting. Discrepancies between inclusions, boundary definitions, and what type of data is considered viable all make it impossible to compare carbon emissions across companies.
2. A region- or supplier-specific open source database of carbon intensity factors for various products, processes, and activities would allow upstream users of these minerals to estimate supply chain emissions more reliably and accurately and could support sustainable procurement efforts.
(b) Integration and sourcing of information from previous measurement efforts of energy security and material development. These sources will prove valuable in both political and technical approaches as lessons on mineral development can be applied to developing industries.
3. Continued monitoring of governance metrics (through initiatives such as the Resource Governance Index) and open dialogue with developing nations will be essential in properly supporting countries involved in the energy transition.
4. More LCAs need to be used to raise awareness of governance shortcomings and environmental sustainability. Social LCAs can help companies understand how to maximize societal benefits from mining projects and raise awareness of potential pitfalls and lessons from other resource-rich nations. LCAs can also be used to help identify points of intervention within supply chains. The current top-down approach of having companies choose what to report is not conducive to mineral or environmental sustainability.
5. If more reliable data becomes available, a meta-analysis of existing environmental issues would be essential to progress environmental accountability within the mining industry.
(c) Transparency within supply chains and the implications of legislative actions need to be better considered when developing policy that affects the wellbeing of those in other nations.
6. Dialogues are needed to establish cooperation between mining companies and government agencies in charge of plans for governance and communal development. In most countries, these dialogues are not well established and must be linked to wider efforts of diversification and growth.
7. A method is needed for mining companies to relay their positive environmental developments and be recognized for their continued efforts. The initiatives outlined in ESG reports are relatively superficial as companies are limited in their ability to accurately convey information. When actual progress can be conveyed, larger developmental change can be justified for the benefit of both the company and communities.
These points underscore how accounting for carbon and environmental performance across mineral supply chains is a complex, polycentric endeavor. It involves accounting frameworks and corporate governance institutions, transnational mining firms, ASM groups, intergovernmental institutions, suppliers, policymakers, and of course consumers. But this complexity of mining supply chains is not only a curse; it can also be a blessing, given that any one of these stakeholder groups can exert influence and pressure across the entire sector. And if there is a concerted push across many stakeholder groups, a coalition of those willing to be sustainable and promote best practices, then mining may very well contribute towards our low-carbon and environmentally sustainable future, rather than risk diminishing it.
Data availability statement
All data that support the findings of this study are included within the article (and any supplementary information files). | 2020-10-28T19:19:49.014Z | 2020-06-18T00:00:00.000 | {
"year": 2020,
"sha1": "80c2560c38d2a41db376b4efd7fc4dfa4a1b402d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1748-9326/ab9f8c",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "87cfde4d35eace0de0f2c755a9333fe8a3647013",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Physics",
"Business"
]
} |
221292945 | pes2o/s2orc | v3-fos-license | Smart Healthcare for Diabetes: A COVID-19 Perspective
Diabetes is considered a critical comorbidity linked with the novel coronavirus disease 2019 (COVID-19), which is caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). Diabetic patients face a higher threat of infection from the novel coronavirus. Depending on the region of the globe, 20% to 50% of patients infected during the COVID-19 pandemic had diabetes. The current article discusses the risks associated with diabetic patients and also recommendations for controlling diabetes during this pandemic situation. The article also discusses case studies of COVID-19 in various regions around the globe and the preventive actions taken by various countries to control the effects of the virus. The article presents several smart healthcare solutions for diabetes patients to maintain glucose-insulin control for protection against COVID-19.
Introduction
Diabetes occurs when a person's body finds it difficult to balance glucose levels during various prandial states [1]. Diabetes cases have increased exponentially over the past decades around the world [2,3]. An unhealthy lifestyle is a prominent factor in magnifying the chance of becoming a diabetic patient, and an unbalanced diet is one of the main factors in the occurrence of diabetes mellitus [4]. An estimated 463 million adults worldwide have diabetes, and addressing their quality of life through smart healthcare technologies can have significant social impact [5,6]. The main cause of diabetes is a deficiency of insulin in the body relative to the generated glucose. Diabetes may also lead to reduced blood pressure and other cardiovascular diseases.
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has infected millions of people around the globe, leading to a large number of deaths [7]. The coronavirus disease (COVID-19) has been spreading among people worldwide at an exponential rate. SARS-CoV-2 affects populations of various age groups and persons with underlying health conditions. The relation of COVID-19 with underlying diseases is shown in Fig. 1.
Diabetes was associated as a major risk factor in past severe coronavirus diseases such as Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS), as well as in the severe influenza A H1N1 pandemic in 2009 [8]. People with diabetes face a bigger threat of infection from the novel coronavirus disease (COVID-19) [9]. In general, the mortality rate of diabetes patients is 2 to 3 times higher than that of others, and the chance of requiring intensive care is also higher than for non-diabetic patients. It has been observed that diabetes patients are at higher risk (up to 50%) from the virus [10].
Studies show higher mortality rates for COVID-19 patients with pre-existing diabetes. It is difficult to provide treatment for diabetic patients due to fluctuating blood glucose levels. The reasons for serious COVID-19 infections in diabetic patients include the following [11]: 1. High blood sugar affects the immunity of the patient, which makes him or her vulnerable to the coronavirus infection and leads to a longer recovery period.
2. The virus can thrive in an environment of increased blood glucose.
The rest of the article is organized in the following manner: Section 2 discusses the reasons for high risk among diabetic patients. Section 3 discusses the challenges for diabetic patients during COVID-19. A summary of various case studies of COVID-19 with respect to diabetic patients is provided in Section 4. Section 5 presents some thoughts on diabetes management during a pandemic outbreak to improve quality of life. We discuss the potential roles of selected emerging technologies in Section 6. Section 7 presents some solutions for the diabetic population during a pandemic outbreak like COVID-19. We present concluding thoughts in Section 8.
Why Are Diabetes Patients at High Risk
COVID-19 has fostered various challenges for one of the most vulnerable groups of society, diabetic patients [12]. A human body with an underlying infection and high temperature inhibits the normal production of insulin. The role of insulin in controlling the level of glucose in the blood is undoubted. This results in a serious diabetes complication known as Diabetic Ketoacidosis (DKA). Here, the body cells do not get the glucose needed for energy, whereby the body begins to burn body fat instead, which results in the production of excess blood acids (known as ketones) [13]. The impact of the virus on diabetes patients is shown in Fig. 2.
Figure 2 depicts unbalanced glucose-insulin levels, an increased level of DKA, SARS-CoV-2 connecting with ACE2, and high blood sugar.
Studies have shown that Angiotensin-Converting Enzyme 2 (ACE2) acts as a cellular receptor for SARS-CoV-2, with 10 to 20 times higher binding affinity than the previously existing SARS-CoV. ACE2 is a naturally occurring enzyme present in the cell membrane of the lung and the enterocytes of the small intestine. The virus and receptor form a bond with their proteins along with a sugar molecule. This dependency of the virus leads to hyperglycemia, wherein damage is caused to the pancreatic islets (the islet cells of the pancreas), which play a prominent role in insulin generation for blood sugar control [14].
Hyperglycemia is a characteristic of diabetes, and when this chronic metabolic disorder is left untreated it causes various other serious complications such as kidney failure, cardiac disorders, and respiratory complications. This hyperglycemic environment increases the virulence of some pathogens and puts patients at high risk. It is reported that phagocytosis and chemotaxis are also impaired. The overall immune response, which plays a vital role in fighting COVID-19, is hampered. Therefore, good monitoring of blood glucose levels in patients with COVID-19 is essential [15].
Challenges for Diabetic Patients during COVID-19 Pandemic
The current epidemic has influenced the psychological well-being of people around the world [16,17]. It has massively affected people's mental state, with thoughts like anxiety about being infected by the virus, the trauma of lockdown, and being away from loved ones over a long period. At the same time, depression has crept in due to news of the social damage done by COVID-19 around the globe. Moreover, diabetic people already suffer from anxiety and various forms of negative emotions that naturally get intensified amid an epidemic situation [18]. This troubled state of mind is likely reflected in the unbalanced glycemic control of people with diabetes during the COVID-19 period. Therefore, continuous counselling is required for self-management of blood glucose control during the ongoing pandemic, through collaboration with endocrinologists, psychiatrists, physicians, nutritionists, and diabetes educators. In addition, diabetes patients have to take a few self-measures and seek appropriate consulting through teleconsultation for close coordination with physicians, ensuring proper care during the COVID-19 pandemic.
Diabetes Self-Management Amid COVID-19 Pandemic
COVID-19 has emerged as a global pandemic which has affected millions of lives around the globe [19]. Diabetes mellitus (DM) has appeared as a serious comorbidity factor for increased mortality of infected people. Moreover, DM has played a major role in the need for invasive ventilation and/or intensive care in COVID-19 infected cases [20]. The lockdown and restrictions on movement imposed by the authorities have made it difficult for diabetic patients to control their glycemic profile.
Diabetes Self-Management: Barriers and Solutions
The various challenges faced by diabetes care during COVID-19 include the following: • Lack of Confidence in Self-care: It has been a difficult choice to rely on self-care devices for diabetes management. Proper counselling is required that addresses each individual's requirements.
• Technology Awareness: The patient should be aware of information and communication technologies (ICT) for the use of e-health, m-health and telemedicine technologies, which are grouped under the broader theme of the Internet-of-Medical-Things (IoMT), leading to the healthcare Cyber-Physical System (H-CPS) [6]. Good scientific materials should be available in the form of books or videos to guide patients properly.
• Proper Diet Plan: High-carbohydrate foods and saturated fats should be avoided in regular meals. Patients have to set their own diet plan to maintain their calorie goals.
• Economic and Social Obstruction: Proper action by the government to ensure cost-effective therapeutic materials and the provision of essential medical care should be a main focus.
• Legal Barrier: Healthcare providers have always been sceptical about remote monitoring and telemedicine due to legal barriers in many countries like India. Many government agencies have issued guidelines to enable teleconsultation services for chronic diseases such as diabetes.
In Asia
India has an estimated 77 million people with diabetes [21]. There are several reasons to have better control of diabetes during pandemic times compared to normal situations. First, doing so will almost certainly improve the outcomes in case someone does contract COVID-19. All diabetes is not the same. Poorly controlled diabetes in an elderly person with heart and kidney issues is very different from well-controlled diabetes in a fit, active 40-year-old. The outcome of COVID-19 in the latter's case could well be compared to those infected but without diabetes. The seriousness of COVID-19 infection is very likely to be attenuated with better control of diabetes [22]. This itself is enough reason to ensure good sugar readings.
The second reason why diabetes control is important is the usual one: it is necessary to have a fix on one's numbers (sugar, blood pressure, cholesterol) to prevent long-term complications [23]. With a population of 1.3 billion, it is very difficult to maintain social distancing and also stay in a lockdown (or stay-at-home) state for a longer period. It is really difficult to take precautions like social distancing and staying home for the prevention of COVID-19 spread, especially for diabetes patients. Diabetic people are at higher risk of COVID-19 infection and face a bigger threat in terms of mortality as well as morbidity. Extra precautions are required for such people to avoid any exposure to the outside world.
A study of patients in China showed that diabetes patients had a higher mortality risk of 7.3% compared with an overall rate of 2.3% [24]. Another study in China showed that COVID-19 infected people with diabetes had worse outcomes in comparison with gender- and age-matched patients without diabetes [25]. An analysis across six studies from China observed that, out of all COVID-19 patients, on average 9.7% (6.9%-12.5%) were pre-existing diabetic patients. A separate meta-analysis carried out on thousands of COVID-19 infected people found that around 8% had a history of diabetes. The main observation was that diabetic people with COVID-19 infection have a higher mortality rate.
Russia has thousands of COVID-19 infected people, along with the USA, Brazil and India; however, it has lower death rates. A report suggested that in Russia, diabetes was observed as one of the main chronic diseases, along with heart disease and chronic obstructive pulmonary disease, among the coronavirus victims in intensive care [26].
In Australia
The threat of the COVID-19 pandemic has been addressed in Australia [27]. Products for diabetic people have been provided, along with insulin supplies and important medicines, under the National Diabetes Services Scheme (NDSS) to reduce the risk of infection. People with underlying diabetes have been advised to obtain their regular medicines. Australia has already planned against COVID-19 risk with medical support through tele-healthcare, special assistance for old-age people, home delivery of medication, and better service for remotely located people.
Tele-healthcare would be provided by pharmacies to participants who seek any service via call. The telehealth consultancy service would be provided to all people in need (pregnant women, people aged over 65, and parents of newborn children) for chronic diseases like diabetes. The Australian government provides assistance to diabetic patients for their safety and health related issues. The government has created a web page for health related guidelines which is updated regularly [28].
New Zealand has announced some specific guidelines for diabetes patients [29]. Diabetic people can consult their personal doctor or general practitioner for individual recommendations during the COVID-19 situation. It is advisable for people with diabetes to have a flu vaccination to reduce any risk of co-infection. It is recommended to maintain glycaemic control to minimize the threat of contracting COVID-19. Age has also been considered an additional factor for coronavirus risk.
In Europe
It is estimated that over 59 million people are diabetic across 44 countries and territories in the region [30]. A study shows that out of the total deaths in the UK due to COVID-19, approximately 26% suffered from either type-1 or type-2 diabetes [31], making the recovery of these patients a challenging task.
Italy is one of the countries most affected by the COVID-19 pandemic [32]. A study of hospitalized COVID-19 patients at the University Hospital of Padova found a diabetes prevalence rate of 8.9%. The recovery of such patients was a particularly challenging task.
A study carried out in Finland showed that prevention can control the risk to an extent in people who are overweight and have impaired glucose metabolism [33]. It also revealed that the higher threat among such people could be reduced by up to 58% through daily physical activity and proper management of their diet.
The preventive actions for diabetes could easily apply to other non-communicable diseases (such as cancer and chronic respiratory diseases), which are at similar risk. There are over 60 million diabetes patients in European countries [34], with a prevalence of about 10.3% among men and around 9.6% among women aged 25 years and older. The prevalence rate has increased over the past several years among people of all ages, who are at greater risk in this COVID-19 epidemic. Studies revealed that daily lifestyles are often accompanied by unhealthy diet, obesity, overweight and low physical activity. These factors have made it even more difficult to recover easily from the coronavirus.
Spain, along with Italy, is also one of the countries in Europe most affected by COVID-19. As per one study, a few thousand deaths were reported in Spain, whereas the prevalence rate of diabetes was around 12% [35]. As per the same report, in Romania approximately 50% of the people who died of COVID-19 suffered from either cardiovascular diseases or diabetes mellitus [35].
In North America
In the USA, more than 34.2 million people suffer from either diabetes or pre-diabetes, including 14.3 million senior citizens, as per a 2018 report [36]. Elderly patients with pre-existing health conditions such as diabetes, heart disease and chronic lung disease have been reported to be at greater risk. It was observed that 32% of patients in the Intensive Care Unit (ICU) due to COVID-19 had diabetes, whereas 24% of hospitalized patients had diabetes [37]. Moreover, only 6% of diabetes patients with COVID-19 infection did not require any kind of medical attention. There is no conclusive evidence about whether type-1 or type-2 diabetes patients are at more risk for COVID-19; improper blood sugar management is, more than anything, the reason for severe COVID-19 infection.
People living with diabetes require uninterrupted access to essential medicines, supplies, technologies, and care. Without insulin, a person with type-1 diabetes can develop potentially fatal health issues within a few days, and the lack, or irregular supply, of other diabetes medication, supplies, devices and/or technologies will also adversely affect the ability of all people with diabetes to manage their diabetes optimally and prevent the development of potentially fatal, short- and long-term complications.
In South America
Brazil is one of the countries most affected by COVID-19 infection in the world. As per a 2019 report, approximately 16.8 million (around 11.4%) people in Brazil between the ages of 20 and 79 have diabetes, which is the fourth highest in the world [38]. Diabetes has been identified as one of the main threats, along with other non-communicable diseases (NCDs), in relation to SARS-CoV-2. Approximately 71.2% of diabetes patients in Brazil suffer from hyperglycemia, which puts them at higher threat of infection from COVID-19 [39].
Diabetes Management during COVID-19
Continuous monitoring of blood glucose levels in diabetic patients is required [40,41] for proper diagnosis. A person infected with the virus and having an underlying condition of diabetes will find it difficult to balance their glycemic profile during the ailment [42]. Proper insulin dosage becomes essential to manage the glucose level in order to protect against COVID-19 infection. With this vision, many countries have presented certain guidelines about self-management for diabetes patients.
Moreover, due to the pandemic, certain self-care practices need to be followed by homebound diabetes patients who are unable to carry out their routine visits [43]. One has to be extremely careful with elderly diabetes patients during this COVID-19 crisis who are suffering from comorbidities such as renal, pulmonary and cardiovascular diseases. Management of diabetes would be a real challenge in such cases, and further precautions should be taken for such people [44]. People with all types of diabetes (type-1, type-2 and gestational), together with some illness, are more prone to COVID-19.
There are several recommendations formulated for diabetic patients during COVID-19 by various organizations, as follows [45,46]: • One has to drink a substantial volume of fluid in order to avoid any sort of dehydration.
• The glycemic profile has to be maintained by everyone at the target value suggested by the concerned doctor. Especially, female patients with gestational diabetes need continuous measurement of the glucose level, which can be beneficial to control the glucose profile [47].
• Continuous monitoring of the blood glucose level throughout the day inhibits ketoacidosis and hypoglycemic conditions. In the case of type-1 diabetes patients, if the value goes beyond 180 mg/dl then it is recommended to pump insulin into the blood to maintain the glucose level (a minimal monitoring sketch is given after this list). Certain medicine doses could play a crucial role in glucose control [48]. Such medicine may work towards blocking the binding of sugars and proteins, so that high blood sugar values are avoided. This would theoretically impede the virus from interacting with its receptor and modulate the inflammatory response to the virus. Pre-diabetes patients are also requested to have a glycated haemoglobin (HbA1c) test, which calculates the average glucose level over a period of 3 months [49].
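The sketch below illustrates the kind of home monitoring logic described above, assuming the 180 mg/dl threshold mentioned for type-1 patients and the commonly used ADAG relation between HbA1c and estimated average glucose. The thresholds, alert messages, and sample readings are illustrative placeholders, not clinical guidance.

```python
# Illustrative home-monitoring sketch based on the recommendations above.
# Threshold values and the alert logic are placeholders, not clinical guidance.

HIGH_GLUCOSE_MG_DL = 180  # level above which an insulin correction is suggested above

def check_reading(glucose_mg_dl):
    """Flag a single self-monitored blood glucose reading."""
    if glucose_mg_dl > HIGH_GLUCOSE_MG_DL:
        return "above target: consider insulin correction / contact clinician"
    return "within target range"

def estimated_average_glucose(hba1c_percent):
    """Estimated average glucose (mg/dl) from HbA1c using the ADAG relation."""
    return 28.7 * hba1c_percent - 46.7

readings = [142, 168, 205, 176]  # hypothetical self-monitored values (mg/dl)
for r in readings:
    print(r, "->", check_reading(r))
print("HbA1c 7.0% ~", round(estimated_average_glucose(7.0)), "mg/dl average glucose")
```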
High blood glucose keeps the body in a state of low-grade inflammation, thus the recovery process in diabetic patients tends to be slower compared to others. Therefore, the recommendations listed above are most important for people with diabetes and for those who are in close contact with such people. A balanced glycemic profile is imperative as it helps to improve immunity [50].
Diabetes patients have to take precautions in order to avoid a major impact of the virus by monitoring blood glucose at regular intervals at home and scheduling a proper diet plan accordingly [12]. Good control of the glycemic profile helps to boost the immune system of this vulnerable population against COVID-19 [51] (see Fig. 3). Insulin secretion helps in developing the immune system by balancing glucose, thereby reducing the risk of infection from the virus. Poor control of the glycaemic profile may result in serious issues and also make treatment difficult for the patient. Poor glycaemic control in an infected person may lead to a hyperglycaemic state, which makes it hard to cure in the presence of high fever and abnormal respiratory issues. Thus, there is a need for continuous glucose monitoring and antidiabetic medication.
6 The Roles of Emerging Technologies during Pandemic Outbreak
Role of Robotics
Robotic applications have been used extensively in hospitals, restaurants, transportation, airports and hotels in various ways to ensure minimal human contact and avoid the spread of COVID-19 (see Fig. 4); a simple example is robots serving as waiters in restaurants. Autonomous vehicles, drones and intelligent robots help in sterilising public areas, delivering materials, measuring body temperature, collecting samples from coronavirus patients, providing value-added services to patients, and also addressing security and safety aspects. Advances in technology and various intelligent techniques have helped tourism and hospitality management; they are useful for providing services such as food delivery, housekeeping, concierge service and other related tasks during this pandemic. Tele-robots help in sensing the environment and automatically taking the necessary reactive action through various machine learning and deep learning algorithms; they are useful for pick-and-place, delivery and cleaning tasks. Teleoperated robots can be very useful in an epidemic, where tasks are performed under human control through a robotic control mechanism. Social robots aid in social activities by interacting with humans in an acceptable manner [52]; they can also assist in healthcare, the entertainment industry, teaching and providing communal comfort services. If the growth of robotic applications continues at the same pace, it has been predicted that approximately 600 million jobs around the world and one quarter of hospital staff in the USA will be replaced by robots [53]. The role of robots is crucial in tourism and hospitality during this crisis to improve service quality, safety measures and expectations. The embracing of robotic technology in situations where social distancing is desired is welcomed, and people's confidence in such technology will gradually grow with more trust and minimal risk. The spread of COVID-19 has accelerated the growth of robotic applications all over the world, and their role is expected to grow in many areas to improve sustainability and quality of life.
Figure 4: Role of robotics during the pandemic outbreak (panels: robots in healthcare, social robots, transportation with robotics, robotics in tourism/hospitality).
Role of Industry 4.0
The outbreak of COVID-19 has increased the demand for essential healthcare equipment (insulin pumps, glucometers) for diabetes patients, who are at higher risk than others (see Fig. 5). The fourth industrial revolution, Industry 4.0, has the potential to cater to this demand with advanced digital technology. It would be beneficial to provide smart systems in which real-time information is processed through Artificial Intelligence (AI), machine learning (ML), intelligent data analytics and the Industrial Internet of Things (IIoT) [54]. Automation has been made possible with the aid of smart manufacturing, and medical devices and systems can be developed rapidly with advances in manufacturing technologies such as 3D printing. Industry 4.0 makes possible a connected environment in which all the technologies are linked to exchange information for the development of vaccines, healthcare facilities, surveillance systems and other necessary measures without much human involvement. Industry 4.0 can detect and predict the prevalence of COVID-19 by acquiring data from smart systems, and it can also help tackle diabetes management in a better way through intelligent technological solutions. Security solutions based on blockchain technology, using cryptographic approaches for peer-to-peer systems, have resolved security concerns at various physical locations with suitable traceability mechanisms.
Trusted Food Supply Chain Management
The food supply chain is part of the Internet-of-Agro-Things (IoAT) based Agriculture Cyber-Physical System (A-CPS) that enables smart agriculture [55]. Food and diet are an important part of smart healthcare and need discussion. The consumption of counterfeit and adulterated foods can affect a person's immunity and may act as a gateway for COVID-19 [56]. Good food quality is very important for consumers, and technology is required to identify it easily; this is crucial in a pandemic, where a healthy diet helps maintain good immunity against any kind of viral infection. A number of fraudulent food products are available on the market purely for financial gain. It has also been observed that the contents of packaged food may not match what is stated on the label, while in some cases the product itself is counterfeit. At present, people are trying to consume immune-boosting products to maintain strong immunity, but the quality of these products is often compromised. Therefore, technology is required that can identify safe and authentic food products; the real challenge is to find a cost-effective solution that does not increase the cost of the product.
It is essential to develop a portable and economical device to provide quality assurance of food throughout its life cycle, from production to the final consumption stage. Such a solution would be useful not only in a pandemic but at all times for proper diet management. The device should be compact and user friendly at the consumer end, and the solution must also be robust enough to be used at various stages of the supply chain. No such solution is available on the market to date, although a few researchers have attempted it with Surface Enhanced Raman Scattering (SERS), and such solutions are expected to reach the market before long. Blockchain has the potential to make a great impact on the food supply chain for smart agriculture [57][58][59]. The technology can be used to verify the quality of a food product through a reliable supply chain management framework. A sustainable system built on blockchain technology helps to establish trust and transparency from production to consumption. This would be beneficial for diabetes patients in maintaining a proper, healthy diet plan during the COVID-19 outbreak.
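The following sketch illustrates, in very reduced form, the kind of tamper-evident traceability record that a blockchain-based supply chain framework relies on: each stage appends a block whose hash covers the previous block, so any later alteration of a record is detectable. The stage names and data fields are hypothetical; a production system would add distributed consensus, signatures and peer-to-peer replication, none of which is shown here.

```python
# Minimal hash-chain sketch of food traceability records (illustrative only).
import hashlib
import json
import time

def make_block(prev_hash: str, record: dict) -> dict:
    """Append-only block: the hash covers the previous hash and the record."""
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash,
            "record": record,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    for i, block in enumerate(chain):
        payload = json.dumps({"prev": block["prev"], "record": block["record"]},
                             sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

if __name__ == "__main__":
    chain, prev = [], "genesis"
    for stage in ("farm", "processing", "packaging", "retail"):
        block = make_block(prev, {"stage": stage, "time": time.time()})
        chain.append(block)
        prev = block["hash"]
    print("chain valid:", verify_chain(chain))       # True
    chain[1]["record"]["stage"] = "tampered"         # simulate fraud
    print("after tampering:", verify_chain(chain))   # False
```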
Smart Healthcare for Diabetes during Pandemic Outbreak
There are predictions that the novel coronavirus SARS-CoV-2 may not disappear from the world in the near future. Therefore, each individual has to change their lifestyle and take all preventive actions against the virus. As diabetes patients are more susceptible, they have to adopt additional healthcare measures in their daily life. A few smart healthcare solutions that are useful during the COVID-19 crisis are discussed in this section.
Telemedicine for Diabetes Patients
The COVID-19 pandemic has had a drastic impact on healthcare organizations all over the globe. One consequence of lockdown for diabetic patients is reduced availability of insulin and antihyperglycaemic medication. The mandate on social distancing has also restricted patients' routine visits to doctors. The role of telemedicine is significant in such an unparalleled situation, as it allows patients to manage their health profile for chronic diseases such as diabetes [60].
Telemedicine services can be classified according to the time frame and the mode of communication with the health service provider (see Fig. 7). Telemedicine can be provided through text messages, emails, fax, short messaging service, and conversation by chat on social network platforms; there are also several video conferencing tools and more traditional channels such as voice over internet protocol and phone. This kind of telemedicine service is important where few doctors per person are available, and it is able to reach rural areas where health services are scarce. Telemedicine will also become gradually more popular with the day-by-day increase of smart-phones. For diabetes patients, this kind of service is considered a blessing, as it reduces their vulnerability to COVID-19 infection.
Figure 7: Modes of telemedicine for diabetes care (panels: continuous glucose monitoring, tele-consultancy for glucose-insulin balance, tele-prescription for glucose control, medication for glycaemic profile control).
Through telemedicine, a physician can connect with the patient and give advice by examining various medical records such as the prevailing history, self-monitored blood glucose (SMBG) charts and self-monitored blood pressure (SMBP) values. Telemedicine has come a long way and its usage has guided diabetic patients. One review of 35 randomised controlled trials (RCTs) of telemedicine (such as video, phone and email) in China, with a pooled population of n = 3514 followed for up to 36 months, demonstrated a decrease in HbA1c of 0.37% (p < 0.001) in the telemedicine group compared with the others. A similar review was conducted by Flodgren and colleagues on 21 RCTs of patients with diabetes (n = 2768), who received interactive sessions (real-time video or remote monitoring) as an adjunct to standard care alone; this showed an overall decrease in HbA1c of 0.31% (p < 0.001) in patients on telemedicine compared with controls. A recently published review of 46 studies covering both types of diabetes, with type-2 diabetes mellitus (T2DM, n of about 24000) and type-1 diabetes mellitus (T1DM, n = 2052) patients exposed to different modes of telemedicine, reported an overall reduction in mean HbA1c of 0.12 to 0.86% for T1DM and 0.01 to 1.13% for T2DM patients, respectively.
India is at an early stage of research on telemedicine and diabetes. A pilot project known as "Diabetes Rath" (Hindi for a mobile vehicle) was started with the aim of spreading knowledge of diabetes care using a mobile van equipped with telemedicine facilities (a computer and a video-conferencing application for transmitting retinal images to an ophthalmologist and consulting a diabetes foot specialist and diabetologist at a tertiary care centre), especially in underprivileged areas [61]. This study has shown positive results in screening and managing diabetes through telemedicine.
There are several general guidelines for telemedicine services, as follows: • Confidential and private information should be maintained properly.
• Personal information related to the patient, such as name, age and address, should be maintained.
• The medical history and records of the patient should be kept together with their prescription and any testing records. • The consultancy charge for the patient can be decided according to his/her medical prescription and the procedure used for the tele-consultancy.
Telemedicine makes it possible to extend medical facilities and the healthcare sector to every part of the world without geographic barriers. Consequently, telemedicine platforms can help bring health-related services to places constrained by the lack of physical infrastructure. It is gaining massive popularity in the current epidemic, where social distancing is needed by society. Remote consultation with a doctor is also an attractive solution for urban tertiary medical centres that face a real challenge in accommodating each patient with proper point-of-care facilities. Telemedicine is also useful when complex multidisciplinary care has to be coordinated via a tumour-board conference format among specialists who are geographically separated.
Non-Invasive Glucose Measurement and Automatic Glucose Control
A non-invasive glucose measurement device (such as iGLU, see Fig. 8) would be a state-of-the-art solution for frequent glucose measurement [3,6,40,41,62]. The proposed solution is useful for continuous glucose monitoring and its control with insulin secretion, integrated into an Internet of Medical Things (IoMT) based healthcare Cyber-Physical System (H-CPS) framework. iGLU would help to provide instant diagnosis to remotely located diabetes patients through telemedicine in this crisis situation. The conventional blood-pricking process is not an ideal choice for frequent measurement in the elderly and children; it also carries the risk of trauma, and touching the needle can be a source of infection. For diabetes patients, if the blood sugar goes above 250, there is a risk of ketones, which are considered poisonous to the body; during this pandemic it is vital to keep ketone levels down. A continuous monitoring solution helps to balance the glucose profile in the body. A glucose-insulin model has been proposed to analyse the plasma insulin variation with a proper insulin secretion plan. The proposed insulin delivery system is integrated with the IoMT so that the prescribed diet and insulin plan can be supervised by a remotely located doctor. The insulin pump helps to maintain the proper glucose-insulin level, which is important to improve the immune system and avoid COVID-19 infection.
Intelligent Diet Control for Glucose Insulin Balance
An unhealthy diet can also cause several health issues, and diabetes patients have to plan their diet to control the glycaemic profile. Automatic IoMT-based mechanisms that allow users to monitor their food intake, create awareness about the right kind of food, and make suggestions for the next diet (see Fig. 9) can be crucial for diabetic people. iLog educates users about Normal-Eating and Stress-Eating, where Stress-Eating is defined as the uncontrollable consumption of high-calorie foods. It also provides a fully automated edge-level device that detects stress variation automatically for a healthy lifestyle. The iLog framework uses a mobile platform as a user interface: images of foods are detected automatically, and the quantified foods are then compared with images in a stored database [63,64].
This approach can help diabetes patients maintain a proper glucose-insulin balance, which ultimately helps to minimise the chance of COVID-19 infection. It provides a fully automated platform for a healthy lifestyle, monitoring stress behaviour without requiring any input from the user.
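A toy version of the matching step described for iLog is sketched below: a "food image" (here reduced to a synthetic feature vector) is compared against a small reference database to estimate the calorie content, and a day's total is labelled as normal eating or stress eating. The feature vectors, reference entries and the 2000 kcal threshold are all assumptions made purely for illustration and are not taken from the cited iLog papers.

```python
# Illustrative sketch: nearest-neighbour matching of a food feature vector
# against a small (made-up) reference database.
import numpy as np

# Hypothetical reference database: feature vector -> calories per serving.
REFERENCE = {
    "salad":  (np.array([0.2, 0.7, 0.1]), 150),
    "burger": (np.array([0.5, 0.3, 0.2]), 550),
    "cake":   (np.array([0.6, 0.2, 0.2]), 400),
}

def match_food(features: np.ndarray):
    """Return the closest reference item and its calorie estimate."""
    name, (_, kcal) = min(REFERENCE.items(),
                          key=lambda kv: np.linalg.norm(kv[1][0] - features))
    return name, kcal

def eating_pattern(daily_kcal: float, threshold: float = 2000.0) -> str:
    """Coarse label: above the assumed threshold counts as stress eating."""
    return "stress-eating" if daily_kcal > threshold else "normal-eating"

if __name__ == "__main__":
    meals = [np.array([0.52, 0.28, 0.20]), np.array([0.58, 0.22, 0.20]),
             np.array([0.21, 0.68, 0.11])]
    total = 0
    for m in meals:
        name, kcal = match_food(m)
        total += kcal
        print(f"detected {name} (~{kcal} kcal)")
    print("daily total:", total, "->", eating_pattern(total))
```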
Rapid Detection of COVID-19
The development of a rapid, accurate and portable diagnostic technique for the coronavirus is also required for diabetes patients, because if complications increase in diabetic patients they are in greater danger. Near-infrared (NIR) spectroscopy could be a breakthrough for this purpose. The focus is to develop a system for detecting the novel coronavirus (COVID-19) using an optical technique (see Fig. 10). The proposed system uses light of specific wavelengths for instant measurement and would sense the novel coronavirus from the saliva of the patient. The proposed system-on-chip (SoC), with specific emitters and detectors, an analog-to-digital converter (ADC) and an acquisition module, is embedded to process the saliva sample. The acquired values are then processed through machine learning models to detect the presence of the coronavirus, and the obtained values need to be calibrated against standard reference values. The novel system would have higher precision and measurement stability compared with traditional measurement, and it would be capable of testing patients within a few minutes. The system is to be integrated with an IoT framework for data storage, so that remote instant diagnosis is possible through the shared server data of the patient. The proposed solution is intended to be cost-effective and to provide instant measurement of COVID-19 infection. Similar technology has been applied successfully for the rapid detection of Zika virus and hepatitis B and C virus infections [65,66].
Figure 10: Rapid detection of COVID-19 using NIR spectroscopy, with post-processing using machine learning models.
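As an illustration of the post-processing step, the sketch below trains a simple classifier on synthetic "spectra" (random vectors standing in for ADC readings at a few NIR wavelengths) and predicts the presence or absence of infection for a new sample. The data are entirely synthetic and the choice of a logistic-regression model is an assumption; this shows only the shape of an acquire-calibrate-classify pipeline, not a validated diagnostic method.

```python
# Illustrative pipeline sketch: synthetic NIR absorbance vectors -> classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_wavelengths = 8                      # pretend ADC channels at 8 NIR wavelengths

# Synthetic training data: "positive" samples get a small offset on two channels.
X_neg = rng.normal(0.0, 1.0, size=(100, n_wavelengths))
X_pos = rng.normal(0.0, 1.0, size=(100, n_wavelengths))
X_pos[:, [2, 5]] += 1.5
X = np.vstack([X_neg, X_pos])
y = np.array([0] * 100 + [1] * 100)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)                        # "calibration" against reference samples

new_sample = rng.normal(0.0, 1.0, size=(1, n_wavelengths))
new_sample[:, [2, 5]] += 1.5
print("predicted class:", int(model.predict(new_sample)[0]))
print("probability of infection:", float(model.predict_proba(new_sample)[0, 1]))
```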
Wearable Safety-Aware Mobility Tracking Device
An IoMT device, easy-band, uses sensors to sense the presence of another COVID-19 patient within a radius of 6 to 13 feet [67] (see Fig. 11). Everyone must wear this device for safety-aware mobility tracking. The device uses three LED colours: red (highly suspected), green (safe) and yellow (mildly suspected) [68]. Every device can store local information such as the timestamp, time period and device ID of those devices that come within a 6-foot zone of the present device, and it can keep this information for up to the last 14 days. Persons who have tested healthy (with diabetes or pre-diabetes) would have access to this device as a mobility pass with an active green light. Once in possession of the device, the person has to wear it and must not attempt to remove it; any attempt at tampering is reported directly to the concerned authority. If a person comes into close contact with a COVID-19 patient, the device switches to the red-light state and records the details, and this information is updated on a server at regular intervals. The device automatically starts vibrating to alert the user when it comes within an area of 4 metres (13 feet) of a yellow or red device, and a further alert is generated by a beep (critical warning) when it comes into contact with a yellow or red device. When a green device spends a long time in close contact with a yellow or red device, its status automatically changes to yellow. The device also has a temperature sensor (e.g. an infrared/IR sensor) to sense the body temperature and measure the respiration of the person, so that necessary preventive actions can be issued and the status changed from green to yellow. In this way the device helps diabetic patients (or any citizen) to stay safe by automatically sensing suspected cases. The battery life of the device is enhanced by ultra-low-power operation. To keep a social distance while travelling, one has to watch the colour that pops up on the device, which warns people through its signals.
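The contact rules above amount to a small state machine, and the sketch below encodes one possible reading of them: a proximity event inside 6 feet of a red or yellow device is logged, repeated exposure turns a green band yellow, and a fever reading also forces yellow. The exposure-count threshold and the 38 C fever cutoff are assumptions, as is the whole interface; the real firmware of the device described in [67,68] is not given in this text.

```python
# Illustrative state-machine sketch of the easy-band status rules (assumed logic).
from dataclasses import dataclass, field

@dataclass
class EasyBand:
    status: str = "green"                      # green | yellow | red
    contact_log: list = field(default_factory=list)
    exposure_count: int = 0

    def proximity_event(self, other_status: str, distance_ft: float,
                        device_id: str, timestamp: float) -> str:
        """React to a nearby device; returns the alert raised, if any."""
        alert = "none"
        if other_status in ("yellow", "red") and distance_ft <= 13:
            alert = "vibrate"                  # warning zone (4 m / 13 ft)
        if other_status in ("yellow", "red") and distance_ft <= 6:
            alert = "beep"                     # critical warning, log the contact
            self.contact_log.append((device_id, timestamp))
            self.exposure_count += 1
            if self.status == "green" and self.exposure_count >= 3:
                self.status = "yellow"         # prolonged exposure (assumed: 3 events)
        return alert

    def temperature_event(self, body_temp_c: float) -> None:
        """Fever pushes a green band to yellow (38 C cutoff is an assumption)."""
        if body_temp_c >= 38.0 and self.status == "green":
            self.status = "yellow"

if __name__ == "__main__":
    band = EasyBand()
    for t in range(4):
        print(band.proximity_event("red", distance_ft=5, device_id="dev-42",
                                   timestamp=float(t)), band.status)
    band.temperature_event(38.5)
    print("final status:", band.status, "| contacts logged:", len(band.contact_log))
```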
With the pandemic around, quarantining a potential carrier of COVID-19 can curb the spread to a certain extent. However, given the difficulty of pinpointing a carrier of the virus, many countries are adhering to other measures such as shelter-in-place, stay-at-home, and lockdown. Prolonged stay-at-home measures have created many other problems such as economic crises, unemployment, food scarcity, and mental health problems for individuals. Contact tracing of positive cases and isolating them is a hectic, unreliable and error-prone task, and as a result re-imposition of lockdown becomes necessary. Nevertheless, if these measures are supported by a technological approach, it would help to stabilise the current scenario. SARS-CoV-2 affects elderly people, children and persons with pre-existing diseases, such as pre-diabetes and diabetes, more strongly, so this type of safety-aware device would be really helpful to the diabetic community for their self-care.
Conclusion and Future Directions
This article attempts to spread awareness among people (especially diabetes patients) with possible recommendations and future technologies for smart healthcare. Case studies around the globe suggest that diabetic people are more vulnerable to COVID-19 infection and also suffer more severe medical complications: diabetic patients have a two-to-three-times higher risk compared with non-diabetic persons. Studies show that better control over glycaemia can be advantageous to a patient simultaneously suffering from both diabetes and viral respiratory diseases such as COVID-19. Proper diabetic care helps to reduce the prevalence of COVID-19 infection, and a balanced glucose-insulin profile can improve both morbidity and mortality from SARS-CoV-2. It also has the potential to reduce the length of stay of infected COVID-19 patients and is useful in avoiding the wide spread of the virus in the community. We have discussed many devices and techniques available for diet management, stress management, and glucose-level management which can be helpful.
Presently, significant effort is going on around the globe for the development of vaccines and medicines against the novel coronavirus COVID-19, and many agencies across the world are working towards this. However, proper planning is required so that the most vulnerable people, such as those with diabetes, cardiovascular disease and hypertension, have early access. Although there has been substantial development of technological solutions, a smart long-term strategy is needed to fight such pandemics in the future and to minimise their social and economic impact. In this respect, the IoMT-driven healthcare Cyber-Physical System (H-CPS) will play a major role [69]. | 2020-08-26T01:00:35.012Z | 2020-07-29T00:00:00.000 | {
"year": 2020,
"sha1": "52102c772c03537724c435d4e3dcea1dda6842e8",
"oa_license": null,
"oa_url": "https://ieeexplore.ieee.org/ielx7/5962380/9281049/09174644.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "55270d9045726143a0b39883dd891692bed09096",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Engineering",
"Medicine"
]
} |
125117001 | pes2o/s2orc | v3-fos-license | Study of Nonlinear Characteristics and Model Based Control for Proportional Electromagnet
The nonlinear characteristics of a proportional electromagnet caused by hysteresis create great difficulties for accurate position tracking control by current. In order to enhance the practicability and reliability of long-stroke electromagnets in case of position sensor faults and to improve the position tracking performance during current closed-loop control, experimental investigations on the hysteresis characteristics of the electromagnet actuator of a diesel engine governor are carried out to analyze the system dynamics and the effects of hysteresis on actuator position tracking performance. It is clear that hysteresis can significantly hinder accurate position control of the electromagnet actuator; consequently, the fuel injection will be delayed, which leads to hysteresis in engine speed control as well as deterioration of engine performance. In this paper, the hysteresis phenomenon of an actuator and its influence on the control performance of the engine are investigated. A model of the proportional electromagnet actuator (PEA) is established and the hysteresis principle is analyzed. An inverse model control strategy based on a neural network (NN) is then proposed to linearize the transfer behavior of the electromagnet and compensate for the magnetic hysteresis. A rapid control prototyping (RCP) experiment based on MicroAutoBox is further implemented to validate the real-time performance of the proposed control strategy on a D6114 diesel engine. The results show that the speed fluctuation (SF) under steady-state conditions (especially under idle-speed conditions), and the recovery time as well as the overshoot under transient conditions, are significantly improved. This makes it possible to develop a redundant electromagnet driving control strategy.
Introduction
The PEA has been widely applied to position tracking control systems in engine automation due to its simple structure and high strength. The diesel engine governor is a typical application of a long-stroke electromagnet actuator. A position sensor is commonly used to measure the position of the actuator during position closed-loop control (PCL). It is fatal for the engine if a fault causes the position sensor to break down, so developing sensorless position control technology is of great significance for the proportional electromagnet actuator and, further, for the engine. The proportional electromagnet actuator is promising for sensorless position control because of its current-force correspondence; however, idle speed fluctuation (ISF) hinders its development [1]. Since the 1990s, mini-electromagnet actuator position tracking based on current closed-loop control (CCL) has been studied in many engineering applications such as valve actuators, fuel injectors, and braking systems [2][3][4]. In the field of accurate position control research, a typical nonlinear memory effect shared by most magnetic materials has been frequently discussed. Hysteresis is a phenomenon in which the associated output force depends not only on the instantaneous magnetic field strength but equally on the previous states. The existence of hysteresis may degrade the performance of the actuator and consequently increase position tracking error and energy loss. Analyzing and compensating for the hysteresis is a mandatory task for a satisfactory design of a feedback control system [5]. The Jiles-Atherton (J-A) hysteresis physical model was first proposed in 1983 [6], and the latest J-A model can even describe hysteresis in dependence on temperature [7]. The Preisach model is currently the most commonly used phenomenological model [8], and multiple inverse Preisach models have been proposed for the purpose of control [9,10]. Rosenbaum [12]. A compensating algorithm was proposed to improve the position tracking error and energy loss in a micro-positioning task [13]. A pseudo-compensator was used to compensate for a Terfenol-D actuator, and its effect on feedback control was analyzed [14]. However, the effect of hysteresis on the position control of large-stroke proportional electromagnet actuators, and especially on the control performance of the controlled object, was rarely mentioned in previous research. In this paper, the effect of hysteresis on the ISF of a diesel engine governor, in which a large-stroke electromagnet is used as the actuator, is analyzed. An inverse-model hysteresis compensator based on a neural network is developed to optimize the position tracking and ISF performance of the governor.
Experiment Setup
The stroke of the actuator is 20 mm, and a soft magnetic material is used for the armature. The diesel engine used in this experiment has four strokes, six cylinders, a mechanical fuel injection pump, and an electronic governor; the cylinder diameter is 114 mm, and the rated power is 90 kW. The engine load is applied directly to the crankshaft through a generator controlled by a PC-controlled load system. The diesel engine governor control strategy runs on an RCP system. A magnetoelectric tachometric transducer is mounted on the top of the flywheel, and a Hall sensor is inserted in the actuator to measure the rack position. The schematic diagram of the test bench and the validation route of the control strategy are shown in Figures 1 and 2, respectively. Figure 1 also exhibits the connections of the test bench. The push rod of the PEA is connected to the rack of the oil pump. According to the principle of the oil pump, the quantity of diesel fuel compressed into the fuel injector by the plunger is determined by the position of the rack, so the quantity of injected fuel can be indirectly controlled by the position of the PEA. Furthermore, the output torque of the crankshaft is positively correlated with the fuel quantity injected into the engine cylinders. The load torque generated by the generator is applied directly to the crankshaft through a coupling. The relation between crankshaft torque and load torque determines the angular acceleration of the crankshaft and generator main shaft, and hence their angular speed. The angular speed of the generator main shaft determines the frequency of the alternating current, so the frequency of the alternating current reflects the speed control performance of the diesel engine.
The PEA and Its Hysteresis
3.1. The PEA Model. Generally, dual-closed-loop PID control is used as the core algorithm of diesel engine speed and actuator position control. To make full use of the advantages of the PEA and improve reliability in case of a position sensor fault, a redundant control strategy is designed that switches between PCL and CCL depending on the position sensor state. From our previous tests [1], it was found that the instantaneous speed change rate of speed-and-current dual-closed-loop control (SCDCL) is much better than that of speed-and-position dual-closed-loop control (SPDCL); however, idle speed stability is a major challenge for SCDCL. As shown in Figure 3, the PEA is generally composed of a coil, a soft magnetic armature, and a return spring. There are many kinds of ferromagnetic materials, and hysteresis is an irreversible phenomenon that exists during the magnetizing process of magnetic materials. The magnetic field strength H and the magnetization M have the typical hysteresis relationship plotted in Figure 4: the curve increases rapidly at first and then approaches an asymptote, called magnetic saturation. If the magnetic field is then reduced monotonically, M follows a different curve; at zero field strength, the magnetization is offset from the origin by the so-called remanence. The hysteresis loop is formed by plotting the H-M relationship for all magnetic field strengths [15]. The proportional electromagnet is an executive element with strong nonlinearity, in which electrical, magnetic, and mechanical elements work together to achieve position control. During the working process of the electromagnet, the coil inductance is a function of the magnetic flux and the armature position, and the energy dissipation, which is caused by eddy currents and the magnetic after-effect during the establishment of the magnetic field and is characterized by a dissipating resistor, is a function of the electric force and the position of the armature. These nonlinear elements result in the hysteresis characteristic of the electromagnet output force versus current. In addition, due to the large friction force introduced by the armature of the actuator and the dry-friction structure of the liner, the actuator exhibits obvious dead-zone characteristics.
According to Figure 3, the incremental equation of motion of the PEA can be described as

m Δẍ + D Δẋ + k Δx = ΔF,    (1)

where m is the mass of the armature and push rod, kg; D is the damping coefficient, N/(mm·s⁻¹); k is the stiffness of the spring, N/mm; ΔF is the increment of the electromagnetic force, N; and Δx is the increment of displacement, mm. The characteristic of the PEA control force can be expressed as

ΔF = K_i Δi − K_F Δx,    (2)

where K_i = ∂F/∂i is the current gain, N/A, and K_F = ∂F/∂x is the displacement-force stiffness; K = K_F + k denotes the sum of the displacement-force stiffness and the spring stiffness, N/mm.
The Laplace transform is used to combine (1) and (2), giving

(m s² + D s + K) ΔX(s) = K_i ΔI(s).    (3)

The incremental voltage equation of the coil is

u = L di/dt + (R_c + R) i + e,    (4)

where L is the coil inductance, H; R_c is the coil resistance, Ω; R is the circuit resistance, Ω; and e is the counter electromotive force of the induction coil, V.
If we ignore the counter electromotive force (since it is very small), the Laplace transform of (4) can be expressed as

U(s) = [L s + (R_c + R)] I(s),    (5)

and the system transfer function can be represented by

ΔX(s)/U(s) = K_i / {[L s + (R_c + R)](m s² + D s + K)}.    (6)

If hysteresis is ignored, the system consists of the first-order element of the inductance coil and the second-order element of the armature mass and spring. By cascading a hysteresis element (represented by a symbolic transfer function) and feed-forward control, the hysteresis is taken into account and controlled. The logical relationship of each part of the system is shown in Figure 5.
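To make the linear part of this model concrete, the sketch below integrates the coil equation and the spring-mass-damper dynamics for a voltage step, ignoring hysteresis, dead zone and back-EMF exactly as the transfer function above does. All numerical parameter values are invented for illustration; the paper's actual constants are not given in the text.

```python
# Illustrative simulation of the linearised PEA model (hysteresis and back-EMF
# ignored). All parameter values below are assumptions for demonstration only.
import numpy as np

L_coil, R_total = 0.05, 4.0        # H, Ohm (coil plus circuit resistance)
m, D, K = 0.2, 40.0, 6000.0        # kg, N/(m/s), N/m (combined stiffness)
K_i = 15.0                         # N/A, current gain

def simulate_step(u_step=12.0, t_end=2.0, dt=1e-4):
    """Semi-implicit Euler integration of the coil current and armature motion."""
    n = int(t_end / dt)
    i = x = v = 0.0
    xs = np.empty(n)
    for step in range(n):
        di = (u_step - R_total * i) / L_coil        # coil: u = L di/dt + R i
        a = (K_i * i - D * v - K * x) / m           # mechanics: m x'' = F - Dv - Kx
        i += di * dt
        v += a * dt
        x += v * dt
        xs[step] = x
    return xs

if __name__ == "__main__":
    x = simulate_step()
    print(f"steady-state displacement ~ {x[-1]:.4f} m "
          f"(expected K_i*u/(R*K) = {K_i*12.0/(R_total*K):.4f} m)")
```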
Hysteresis is caused by the irreversible magnetization relation between magnetic field intensity and magnetization shown above. If we obtain the hysteresis model, invert it, and add it to the control system as a feed-forward controller, the hysteresis effect can be weakened or even offset.
Hysteresis Test and Analysis.
The existing hysteresis may decrease the accuracy of the control system and cause periodic fluctuation. The hysteresis of the PEA is tested in this paper; the testing process consists of two steps.
(1) Rise. The actuator stays at the initial position at the beginning, and the duty cycle is then increased gradually. The actuator rod keeps still in the initial stage because of the spring restoring force and the friction force. With increasing duty cycle, the actuator rod begins to move. The duty cycle, current, and actuator position are recorded until the actuator reaches its maximum position.
(2) Fall. The duty cycle is decreased once the actuator rod reaches saturation. Because of hysteresis and the mechanical structure, the actuator stays still until the duty cycle decreases to 40%; AB is the saturation phase, and A is the de-saturation point. After that, the actuator rod position decreases rapidly.
The same law is obtained when changing the initial condition. The duty cycle versus position hysteresis loop (including the dead zone and saturation) is thus presented in Figure 6, and Figure 8 gives a more detailed illustration of the loop. Figure 7 shows the hysteresis error, which resembles a bell curve, with the maximum error occurring at the de-saturation point. The test results are comprehensive results reflecting the electrical, magnetic, and mechanical properties. From the correspondence between PWM duty cycle and actuator position, a clear nonlinear characteristic including hysteresis and dead zone is demonstrated. In addition, all the data in this paper are normalized during the analysis.
The Effect of Hysteresis on Position Closed-Loop Control.
In order to further investigate the working character of the CCL, consider the experiment results in Figure 11. The set current begins to increase at 4 s, as the red line in Figure 11 shows, but the actuator position only starts to respond at about 7 s. The set current reverses its direction of change at 12 s, 16 s, and 21 s, and the maximum position tracking delay is as high as 3 s. From 25 s to 28 s a fast current change is applied, yet the position stays still without even a slight response, although the PWM duty follows the set value. CCL thus greatly extends the operating lag, which is unacceptable for a real-time position tracking control system such as a diesel engine governor. The PCL static experiment is conducted as follows: the actuator is stabilized at the set position by PID control. By observing the position fluctuation and the duty-cycle change in Figure 10, the following phenomena can be easily understood: (1) The actuator stabilizes at the set point; however, there is a long-period, ineradicable, small-amplitude fluctuation, while the duty cycle fluctuates over a large range accordingly.
(2) The wave amplitude decreases as the set position increases.
Hysteresis Fluctuation Loop Analysis.
Combining Figures 8 and 9, a brief analysis of the hysteresis-induced fluctuation is presented. Under PID control, the actuator position approaches the set point gradually; however, a small overshoot or static error is inevitable. The set point is indicated by the dark line in Figure 8. The actuator starts to increase from the initial point along the red line, but reaches point 1 because of a small overshoot. The control system aims to eliminate this error, so the control force starts to decrease along the green line from point 1 to point 2; because of the hysteresis loop, the actuator stands still until the PWM duty reaches point 2. The actuator then decreases along the green line until another overshoot occurs at point 3, where the actual actuator position is smaller than the set point. The control force increases along the purple line 3-4-1, and a periodic control output fluctuation results. Since this hysteresis-loop fluctuation derives from the natural characteristics of the magnetic material, a special method must be adopted to keep the system from falling into this limit cycle.
The impact of hysteresis on the control performance is serious, and if the fluctuating character of the diesel engine speed is also taken into consideration, the situation becomes even more complicated; this degrades the low-speed steady-state performance of the diesel engine governor. Besides, a huge instantaneous speed change rate may result from the steep duty-cycle and position curve during the descent stage. Based on the above conclusions and experiment results, a special control strategy is necessary to improve the steady-state performance of the diesel engine governor. This nonlinear coupling causes a large current-position hysteresis loop, as shown in Figure 8, which influences the fuel quantity control performance of the SCDCL. Moreover, good fuel quantity control performance is a prerequisite for good speed control performance. Therefore, compensating the current-position hysteresis characteristic is critical to the steady-state speed control performance of the SCDCL.
Optimization of PCL
The long-period, small-amplitude fluctuation of the PEA cannot be neglected, especially when it couples with the engine speed; the experiments also found a similar fluctuation at the rated speed. Further analysis should be performed in order to weaken or even eliminate the fluctuation. Based on these phenomena, a dead-zone module is added to the position error calculation in order to remove the small error and improve the regulation. The optimized experiment results are exhibited in Figure 12 and confirm the existence of the hysteresis loop.
Hysteresis Compensation and Control.
Being a strict nonlinearity with memory, hysteresis capturing, modeling and compensation have attracted the attention of researchers. Several physics-based and phenomenological hysteresis models suitable for capturing magnetic hysteresis have been proposed. Inverse model control, which reshapes the given reference value so that the control signal drives a hysteretic plant along trajectories that provide a linear input-output relationship [16,17], is an effective method of compensating for hysteresis. In this paper, intelligent neural network identification is used to identify the inverse hysteresis nonlinearity between position and current/voltage, rather than discussing a mathematical model. The NN-based inverse model is used as a feed-forward controller to compensate for the hysteresis, and a feedback controller is used to control the engine speed with zero steady-state error.
Neural Networks Based Inverse Model Control.
Swapping the roles of input and output, i.e. modeling the input required to produce a given output, yields the so-called inverse model. Hysteresis is a nonlinear phenomenon in which the output of the system relies not only on the current input but also on the past inputs of the system. According to this characteristic, the inputs of the model should include the current set position, the past set position, and the past state of the actuator. Thus, a 3-input, 1-output, 3-layer BP NN inverse model of position and duty cycle is developed to compensate for the impact of hysteresis. The input neurons are, respectively, the current set position, the last set position, and the last duty cycle; the output is the current duty cycle. The activation functions are TANSIG and PURELIN. The structure of the NN inverse model is shown in Figure 13. The NN is trained offline on the experimental data of the hysteresis characteristic. The NN performance is presented in Figure 14, with a learning rate of 0.0013 and a momentum factor of 0.11 [10,18]. In order to prevent overfitting, the data are divided into three parts for training, validation, and testing. The results in Figure 14 show that the regression coefficient of the testing process is nearly 0.95, and the all-data regression coefficient is nearly 0.98, which means that the NN-based model is capable of learning the hysteresis relationship between current and PEA position. The overall performance confirms that the NN-based current-position model is capable of compensating for the hysteresis loop as well as improving the PEA position tracking performance.
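The following sketch reproduces the shape of such an inverse model in plain NumPy: a 3-input, single-hidden-layer network with tanh and linear activations (the analogues of MATLAB's tansig/purelin) trained by gradient descent. The synthetic hysteresis-like data, the hidden-layer size and the training hyper-parameters are assumptions made for illustration; the paper's own model was trained offline on measured duty-cycle/position pairs.

```python
# Minimal 3-input, 1-output, one-hidden-layer network trained by backprop (sketch).
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training pairs: inputs = (set_pos, last_set_pos, last_duty),
# target = current duty. The mapping below is invented just to have data.
N = 2000
set_pos = rng.uniform(0, 1, N)
last_pos = np.roll(set_pos, 1)
last_duty = 0.4 * last_pos + 0.3
X = np.column_stack([set_pos, last_pos, last_duty])
y = (0.5 * set_pos + 0.2 * (set_pos - last_pos) + 0.3 * last_duty)[:, None]

n_hidden = 8
W1 = rng.normal(0, 0.5, (3, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden layer: tanh ("tansig")
    out = h @ W2 + b2                   # output layer: linear ("purelin")
    err = out - y
    # Backpropagation of the mean-squared error.
    gW2 = h.T @ err / N; gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / N; gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final MSE:", float(np.mean(err**2)))
```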
The speed variation tendency depends on the fuel injection quantity, which is determined by the actuator position; the calculation result of the speed loop controller is the desired rack position. A double PID controller combined with a feed-forward control strategy for engine speed regulation is designed as shown in Figure 15. The speed loop outputs a set position to the NN feed-forward module, the NN translates the target position into a target current, and the current loop then outputs a PWM duty cycle [19]. The initial weight coefficients of the NN inverse model are obtained from offline training; online learning of the NN is not necessary because the actuator hysteresis is static rather than time-varying. NN offline training is a proven technique and need not be discussed in detail in this paper.
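A compact sketch of how the two PID loops and the feed-forward block could be chained in each control period is given below. The PID gains, sample time and the stand-in feed-forward mapping (a simple polynomial in place of the trained network) are assumptions, and the signals are treated as normalized quantities; the point is only the order of computation: speed error, set position, feed-forward target, current error, PWM duty.

```python
# Sketch of the speed/current dual PID loop with a feed-forward block in between.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def feed_forward(set_position, last_set_position, last_duty):
    """Stand-in for the trained inverse model (purely illustrative mapping)."""
    return 0.5 * set_position + 0.1 * (set_position - last_set_position) + 0.3 * last_duty

# Assumed gains and sample time -- not taken from the paper.
speed_pid = PID(kp=0.8, ki=0.4, kd=0.02, dt=0.01)
current_pid = PID(kp=2.0, ki=5.0, kd=0.0, dt=0.01)

def control_step(set_speed, measured_speed, measured_current, state):
    set_position = speed_pid.step(set_speed - measured_speed)
    target = feed_forward(set_position, state["last_set_pos"], state["last_duty"])
    duty = target + current_pid.step(target - measured_current)
    duty = min(max(duty, 0.0), 1.0)            # saturate PWM duty to [0, 1]
    state.update(last_set_pos=set_position, last_duty=duty)
    return duty

if __name__ == "__main__":
    state = {"last_set_pos": 0.0, "last_duty": 0.0}
    print(control_step(set_speed=1.0, measured_speed=0.95,
                       measured_current=0.0, state=state))
```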
Simulink Based Control Strategy Model.
A diesel engine governor control strategy model is developed, and real-time control is then achieved by compiling the model and downloading it into the RCP system. The control strategy model includes hardware I/O and control algorithm modules. According to the functional requirements of the diesel engine governor, the hardware I/O includes a frequency measurement channel; three AD channels for acquiring the rack position, actuator position and target speed; a bit input channel for the stop switch; and a PWM output for actuator driving. The control strategy model is responsible for regulating the PWM duty cycle according to the inputs, and it mainly includes operating-mode judgement, speed closed-loop calculation, NN feed-forward calculation, and current closed-loop calculation. The actuator is driven by the RapidPro hardware of dSPACE.
Bench Experiments and Results
In order to validate the control performance of the inverse-model hysteresis compensator, diesel engine bench experiments are carried out. The experiments contain a CCL position tracking experiment and a diesel governor performance experiment. The position tracking results show nearly no delayed response in the case of reversing operation, as depicted in Figure 16, which proves that the designed control algorithm is capable of compensating for the actuator hysteresis.
The inverse model integrated with PID is capable of optimizing the position tracking accuracy of the hysteretic system. However, whether the hysteresis is responsible for the ISF needs further investigation. The main purpose of the bench experiments is to validate the performance of the proposed NN hysteresis compensator and to confirm the relationship between ISF and hysteresis. According to the requirements of the power plant, ISF is the main performance index of concern, while the proper instantaneous speed change rate of the original control algorithm is preserved. Figures 17 and 18 show, respectively, the speed and rack position curves for the start process and the idle-speed steady condition. The engine is successfully and stably started within 1.5 s, and the ISF is 2.21%. Compared with the results of SCDCL without hysteresis compensation shown in Figure 19, where the SF is 1.4% at 1050 r/min [1], the ISF is far better than that of the original strategy without compensation. The reason the original strategy is validated at 1050 r/min instead of 700 r/min is that it cannot achieve stable operation at the idle speed of 700 r/min. Figure 20 shows the stability performance under the rated-speed condition, where the SF rate is 0.2%.
Load Sudden Change Experiments.
The instantaneous speed change rate is an essential performance index of a diesel engine generator, and the alternating current frequency is directly related to the main shaft angular speed. When the engine operates at 1500 r/min, the generator has two pairs of poles and the alternating current frequency is 50 Hz, so the frequency of the alternating current can be taken as a performance index of the governor. Figures 21 and 22 show the generator frequency fluctuation. It can be seen that the minimum transient frequency is 47.76 Hz, the instantaneous speed change rate is 4.8%, and the recovery time is 1.8 s when full load is applied suddenly. In addition, the maximum transient frequency is 52.3 Hz, the instantaneous speed change is 4.6%, and the recovery time is 1.7 s when full load is removed suddenly.
Conclusions
In this paper, a feed-forward control strategy based on an NN is proposed to compensate for the PWM duty cycle and the hysteresis of a 20 mm long-stroke proportional electromagnet. The control force fluctuation, the hysteresis loop and the limit-cycle behavior are described, and hysteresis is considered responsible for the ISF of SCDCL. A 3-input, 1-output NN inverse model is designed to linearize and compensate for the hysteresis, and an inverse-model feed-forward calculation together with standard speed-current closed-loop control is implemented in the RCP system. Experiment results indicate that the hysteresis significantly deteriorates the position tracking accuracy of the system, leading to a large ISF. With the proposed NN feed-forward compensator, the position tracking response time is greatly shortened, the ISF decreases to 2.21%, and
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2019-04-22T13:12:29.936Z | 2018-09-06T00:00:00.000 | {
"year": 2018,
"sha1": "d1fdd32b15ee0e43d4b20f8644fef9e000d778f0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2018/2549456",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2ca648c7e7e7bc4abe93cef109553802079e019d",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
118874271 | pes2o/s2orc | v3-fos-license | Thermally assisted ordering in Mott insulators
Ginzburg-Landau theory describes phase transitions as the competition between energy and entropy: The ordered phase has lower energy, while the disordered phase has larger entropy. When heating the system, ordering is reduced entropically until it vanishes at the critical temperature. This established picture implicitly assumes that the energy difference between ordered and disordered phase does not change with temperature. We show that for the Mott insulator KCuF3 this assumption is strongly violated: thermal expansion energetically stabilizes the orbitally-ordered phase to such an extent that no phase transition is observed. This new mechanism explains not only the absence of a phase transition in KCuF3 but even suggests the possibility of an inverted transition in closed-shell systems, where the ordered phase emerges only at high temperatures.
Mott insulators with orbital degrees of freedom often exhibit orbitally ordered phases [1]. There are two established explanations for this: (i) electron-phonon coupling induces cooperative Jahn-Teller distortions [2] that lead to orbital ordering or (ii) Kugel-Khomskii superexchange [3] gives rise to orbital order that leads to a cooperative lattice distortion. Since both mechanisms tend to result in the same type of ordering, identifying which one drives it is a 'chicken-and-egg problem' [4]. Even though they strongly differ, these two mechanisms have one fundamental aspect in common: Their hallmark is a conventional Ginzburg-Landau-type transition [5] between a low-temperature ordered structure and a symmetric high-temperature phase [6]. Here we show that this conventional picture overlooks a key aspect: Taking thermal expansion into account leads to a novel scenario, and to a third, unconventional, ordering mechanism. We find that, as a consequence, the order is not necessarily destroyed by temperature. In fact, for the prototypical orbital-ordering perovskite KCuF3 [3] we explain how thermal expansion favors the symmetry-broken phase with an order parameter that increases with temperature. The key feature of the new mechanism is the strong dependence of the energy gained by breaking the symmetry on the lattice constant, and ultimately, via thermal expansion, on temperature. We anticipate that this thermally-assisted ordering can operate even in closed-shell systems. This would result in an inverted Ginzburg-Landau transition, with symmetry-breaking above a critical temperature. These surprising conclusions are based on very general arguments. We thus expect that similar effects will play a key role in other ordering phenomena of totally different nature.
Following the seminal work of Kugel and Khomskii [3], the fluoride KCuF3 is considered the prototype of an orbitally ordered material. Its structure [7], shown in Fig. 1, derives from a cubic perovskite with Cu in d^9 configuration with one hole in the e_g orbital, surrounded by an octahedron of fluoride ions in a cage of potassium ions. The actual crystal shows a tetragonal compression, slightly lifting the e_g degeneracy. The fluoride ions in the a-b plane move along the lines connecting the Cu ions such that long (ℓ) and short (s) bonds alternate in the x and y directions, leading to a cooperative Jahn-Teller distortion and a competing splitting of the e_g orbital. The distortion pattern also alternates in the z direction, resulting in an antiferrodistortive orbital ordering. The tetragonal distortion is parametrized by c/(a√2) and the Jahn-Teller distortion by δ = (ℓ − s)/(a√2). The mechanism driving the distortion δ and the orbital ordering has been the subject of intense controversy. As early as 1960, Kanamori noted that the structure of KCuF3 could arise from the cooperative Jahn-Teller effect [2]. Later, Kugel and Khomskii showed that orbital order in KCuF3 can originate from electronic superexchange even in the absence of distortions [3]. As one of the first applications of the density-functional theory plus U method (DFT+U) [8], Liechtenstein et al. found that a Hubbard U is necessary to stabilize the distorted structure and concluded that an electronic Kugel-Khomskii mechanism drives the transition. In the same year Towler et al. found that Hartree-Fock also gives reasonable agreement with experiment, despite the complete lack of correlations [9]. Eventually, the question was settled by density-functional theory plus dynamical mean-field theory (DFT+DMFT) calculations, which showed that Kugel-Khomskii superexchange alone can only account for orbital order below T_KK ≈ 350 K [10], while experimentally it persists to much higher temperatures [7]. In fact, the transition to the undistorted high-temperature phase has never been seen experimentally, and the analysis of the crystal structure up to 900 K showed that, contrary to conventional wisdom, the distortion increases with temperature instead of vanishing above some critical value [11]. Applying hydrostatic pressure dramatically reduces the distortion as the lattice constant decreases [12], while expanding the lattice by chemical pressure, substituting K by Rb [13] or NH4 [14], results in an increase of the distortion parameter δ following the same trend as in thermally expanded KCuF3. All this points to the lattice constant as the key player in determining the size of the distortion (see Fig. 2). Remarkably, the importance of this has been missed so far.
FIG. 1. (Color online) Crystal structure of KCuF3. Inequivalent atoms inside the I4/mcm unit cell (thick black lines) are shown in color (Cu: brown, F: green, K: violet). The additional atoms in grey show the pseudocubic setting in which the network of corner-sharing octahedra becomes apparent. The pseudocubic axes are defined as x = (a + b)/2, y = (−a + b)/2, and z = c/2. For clarity, lattice distortions are exaggerated twofold.
FIG. 2. Jahn-Teller distortion δ measured for KCuF3 [11] and under hydrostatic pressure [12], for RbCuF3 [13], and (NH4)CuF3 [14], compared to our calculations and the values obtained for constant short Cu-F distance s_min.
To understand the role of the lattice, we have performed DFT+U calculations for KCuF3 with the experimental lattice constants at different temperatures. Calculations were performed using the Vienna ab-initio Simulation Package (vasp) [15] within the generalized gradient approximation (GGA) of Perdew, Burke, and Ernzerhof [16] to density functional theory using the projector augmented-wave (PAW) [17] pseudopotentials of Kresse and Joubert [18]. We apply onsite Coulomb interactions on the Cu 3d orbitals through the "+U" correction of Liechtenstein et al. [8] with double-counting corrections in the fully localized limit. It is known that DFT+U describes structural properties remarkably well [19]. We find that also the energy gained by moving the fluorine ions agrees with both experimental estimates [20] and calculations explicitly including many-body effects [21,22]. Moreover, extracting the frequency of the A_1g mode, we find excellent agreement with Raman data [23]. Our results are fairly independent of the model parameters U and J, as long as they are large enough to open a gap. The main effect of increasing U is to slightly increase the effective radius of the cation [24]. Fig. 3 shows the energy gained by distorting the lattice for the experimental unit cell parameters at increasing temperatures [11,25] calculated using the established values U = 7 eV and J = 0.9 eV [8]. We find that the distortion δ increases with lattice constant in good agreement with the experimental values (see Fig. 2).
While our calculations reproduce the observed increase of the distortion very well, neither of the established theoretical mechanisms can explain it: The orbital superexchange coupling decreases with distance so that the Kugel-Khomskii mechanism weakens as the lattice expands [3]. The Jahn-Teller mechanism could in principle explain a distortion that increases with volume. Writing the energy gained by displacing the fluorine ions by ∆ = (ℓ − s)/2 = aδ/√2 from their symmetric position as E_JT(∆) = −g∆ + C∆²/2, where g gives the splitting of the e_g level and C the elastic constant, the energy is minimized for ∆_JT = g/C [2]. The dependence of ∆_JT on the lattice constant a is thus given by the change of g and C. Both will decrease with a, and if C decreases much faster than g, ∆_JT could become arbitrarily large. From crystal-field theory we know that g scales with 1/a^4. To obtain the experimentally observed increase in ∆_min the elastic constant C would have to decay faster than 1/a^14. This contradicts, however, the observed temperature (volume) dependence of the A_1g mode [23], ruling out the Jahn-Teller mechanism as well.
The complete failure of the established distortion mechanisms calls for a change of perspective. Instead of focussing on the displacement from the high-symmetry position, we consider the Cu-F distance. The shortest distance s should be determined by the point at which the ions touch. In fact, for KCuF3 it is practically independent of temperature [11], so that the increase of ∆ is simply a consequence of thermal expansion. The same is true when applying pressure [12], substituting K by Rb [13], or NH4 [14]. We can make this picture quantitative by plotting the energy curves of Fig. 3 as a function of the Cu-F distance (see Fig. 4): pushing the ions closer together than the optimal distance results in a strong repulsion, a Born-Mayer potential [26].
This suggests a straightforward model: we describe E(∆) using a Born-Mayer repulsion energy E_BM = A exp(−r_Cu−F/ρ), the Ewald energy E_Ewald of the periodic arrangement of ions, and a term E_CF = Γ − √(Γ² + (g∆)²) describing the crystal-field splitting of the e_g level due to the tetragonal compression and the displacement ∆. Since the ionic charges are practically independent of the lattice constant, as are the Born-Mayer coefficients A and ρ, the couplings Γ and g are the only parameters that depend on the lattice constant. As expected, the crystal-field splitting scales as 1/a^4, while the change in Ewald energy can be approximated by −C_Ewald ∆²/2, where C_Ewald scales as 1/a^3. The resulting expression gives not only excellent fits to the DFT+U energies for KCuF3, as shown by the curves in Figs. 3 and 4 with the parameters given in Table I, but should also describe strongly ionic Jahn-Teller-active compounds in general.
TABLE I. Model parameters obtained from fits to the DFT+U energies (Figs. 3 and 4); the columns are T (K), a (Å), c (Å), Γ (eV), g (eV/Å), and C_Ewald (eV/Å²). Γ increases with the tetragonal distortion, while g decreases with a. The Born-Mayer parameters B = 9188 eV and ρ = 0.2186 Å are independent of the lattice constants. This is also true for the charges of the ions entering the Ewald energy: Z_Cu = 1.86 e = 2Z_K = −2Z_F.
For larger cations, ω_0 will be harder, leading to smaller distortions and a more Jahn-Teller-like picture. There is, however, a crucial difference: ω_0 softens dramatically with the expansion of the lattice, leading to a robust distortion even as the temperature increases. Remarkably, this is what is actually observed in the tetragonal phase of another fluoride, KCrF 3 , up to the volume-collapse transition at 973 K [27]. The persistence of the distortion with increasing temperature is expected for all strongly ionic Jahn-Teller-active compounds with significant thermal expansion coefficients. This thermally assisted ordering mechanism should be particularly useful for engineering materials, where we want the symmetry-broken phase to survive to high temperatures [28]. Moreover, it suggests an intriguing scenario: When a exceeds the critical value a_c = 2√2 ρ ln(2B/(ρ^2 C_Ewald(a_c))), the frequency ω_0 becomes imaginary, so that even a system with a non-Jahn-Teller-active cation would start to distort. When a_c is crossed in thermal expansion, such a system could show an inverted Ginzburg-Landau transition from a high-symmetry phase at low temperatures to an ordered high-temperature structure. Ideal candidates are compounds with a large lattice constant and a small B-site cation, as shown in Fig. 5. While it might be difficult to find a material where a_c can be reached by thermal expansion alone, it is conceivable to additionally increase the lattice constant by strain [29] or negative pressure [30] to just beyond the critical value, so that this unusual phase transition can be reached.
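For orientation, the critical lattice constant can be estimated directly from this expression; in the sketch below C_Ewald is a placeholder value treated as fixed, whereas in the full model it scales as 1/a^3.

```python
import numpy as np

B, rho = 9188.0, 0.2186      # Born-Mayer parameters from the text (eV, Angstrom)
C_ewald = 6.0                 # placeholder value in eV/Angstrom^2 (assumed)

# Lattice constant at which m*omega_0^2 changes sign, treating C_Ewald as constant
a_c = 2 * np.sqrt(2) * rho * np.log(2 * B / (rho**2 * C_ewald))
print(f"a_c ~ {a_c:.2f} Angstrom")
```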
It turns out, then, that Ginzburg-Landau theory is oversimplified in that it assumes a temperature-independent electronic Hamiltonian. We have identified a striking example which highlights the failure of this standard model of symmetry breaking: in the Mott insulator KCuF 3 orbital ordering is stabilized by thermal expansion. In fact, this is just an instance of a more general principle: When the effective Hamiltonian describing symmetry breaking has a significant temperature dependence, we can expect to observe more exotic phenomena than predicted by Ginzburg-Landau theory.
This work has been supported by the Deutsche Forschungsgemeinschaft through FOR 1346. Calculations have been done on JURECA at the Jülich Supercomputer Centre under grant GRS300. * e.koch@fz-juelich. de [1] Y. Tokura and N. Nagaosa, Science 288, 462 (2000). | 2019-04-13T16:32:15.964Z | 2016-11-16T00:00:00.000 | {
"year": 2016,
"sha1": "7eedffc2bc2012ebae5530a467480ef45a954da2",
"oa_license": null,
"oa_url": "http://juser.fz-juelich.de/record/836738/files/PhysRevB.96.054107.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "52e0201911ff71385c04e10f59bb19a1cdc2ced9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
253038112 | pes2o/s2orc | v3-fos-license | Exploration of a Novel Circadian miRNA Pair Signature for Predicting Prognosis of Lung Adenocarcinoma
Simple Summary Identifying new prognostic markers can provide a reference for the treatment of lung adenocarcinoma (LUAD) to improve its prognosis. Circadian rhythm disturbances are closely linked to the initiation, progression and prognosis of lung cancer. We aimed to explore the value of circadian miRNA (cmiRNA) as a prognostic marker of LUAD. A prognostic signature comprising seven pairs of cmiRNAs was established, and it exhibited excellent predictive value for overall and progression-free survival. High-risk patients showed higher sensitivity to primary chemotherapy drugs and targeted medicine compared with low-risk patients. Overall, the novel circadian-related miRNA pair signature could provide a precise prognostic evaluation with the potential capacity to guide individualized treatment regimens for LUAD. The cmiRNA–Cgenes network and corresponding enrichment analysis might provide clues for studying the underlying circadian dysregulation mechanisms involved in the progression of LUAD in the future. Abstract Lung adenocarcinoma (LUAD) is the primary histological subtype of lung cancer with a markedly heterogeneous prognosis. Therefore, there is an urgent need to identify optimal prognostic biomarkers. We aimed to explore the value of the circadian miRNA (cmiRNA) pair in predicting prognosis and guiding the treatment of LUAD. We first retrieved circadian genes (Cgenes) from the CGDB database, based on which cmiRNAs were predicted using the miRDB and mirDIP databases. The sequencing data of Cgenes and cmiRNAs were retrieved from TCGA and GEO databases. Two random cmiRNAs were matched to a single cmiRNA pair. Finally, univariate Cox proportional hazard analysis, LASSO regression, and multivariate Cox proportional hazard analysis were performed to develop a prognostic signature consisting of seven cmiRNA pairs. The signature exhibited good performance in predicting the overall and progression-free survival. Patients in the high-risk group also showed lower IC50 values for several common chemotherapy and targeted medicines. In addition, we constructed a cmiRNA–Cgenes network and performed a corresponding Gene Ontology and Gene Set enrichment analysis. In conclusion, the novel circadian-related miRNA pair signature could provide a precise prognostic evaluation with the potential capacity to guide individualized treatment regimens for LUAD.
Introduction
Lung cancer remains the leading cause of cancer-related deaths in the world [1,2], with lung adenocarcinoma (LUAD) being the primary histological subtype [3,4]. The prognosis of patients with LUAD is markedly heterogeneous [5]. Therefore, the identification of effective biomarkers that are related to prognosis and drug efficacy in patients with LUAD is important for: (1) providing early intervention and assisting clinical strategies to improve prognosis; and (2) revealing the underlying mechanisms that may contribute to the discovery of new potential therapeutic targets for preventing lung adenocarcinoma recurrence or progression.
Chronic circadian rhythm disturbances are closely linked to the initiation, progression and prognosis of tumors [6,7]. For example, a long period (more than 20 years) of shift work is related to an increased risk of breast and prostate cancers [8]. Disturbed daily sleep-activity cycles often occur in patients with lung cancer [9]. Mice harboring alterations of key circadian rhythm genes show accelerated initiation and progression of lung cancer [6,10]. A study by Ye et al. revealed extensive alterations in clock genes across multiple cancer types. The correlations between clock genes, key oncogenic pathways, and clinical features might explain the underlying mechanisms of circadian rhythm disruption and tumor progression [11]. In addition, metastatic colorectal cancer patients with circadian rhythm disruptions have significantly lower survival rates and poorer quality of life than those with normal circadian rhythms [7,12]. The rhythm of cortisol is relatively "flat" in some lung cancer patients with poor survival [13]. Therefore, circadian rhythm disturbance may serve as a potential prognostic risk factor for lung adenocarcinoma.
MicroRNAs participate in maintaining crucial physiological balance and in pathological processes, including the malignant characteristics of cancer, and have been identified as excellent signatures associated with the diagnosis and prognosis of cancer [14][15][16]. Increasing evidence indicates that miRNAs play a major role in circadian rhythm regulation [17][18][19]. However, the value of circadian miRNAs (cmiRNAs) in lung adenocarcinoma has not yet been investigated. Moreover, a gene pair-based strategy is a promising approach that is compatible with data from various gene detection platforms and has a wide range of applicability [20]. Therefore, our study aimed to develop a personalized prognostic signature for patients with LUAD based on cmiRNA pairs. We also preliminarily explored the value of the signature in guiding treatment approaches and the potential mechanism.
Study Design and Patients
We downloaded the miRNA and RNA-seq expression data of LUAD from the TCGA and GEO databases. Patients lacking corresponding clinical information and those who survived for <30 days were excluded. Postoperative tumor and adjacent normal tissues of 11 patients with LUAD at Union Hospital (Wuhan, China; WHUH cohort) were collected. All the patients signed an informed consent form. The study complied with the Declaration of Helsinki and was approved by the Institutional Review Board of Union Hospital, Tongji Medical College, Huazhong University of Science and Technology (protocol [2020] (S363), 30 December 2020).
The samples and data were collected from 10 March to 10 July 2022. Figure 1 shows the overall design of this study. The TCGA dataset was randomly split into training and testing datasets. The GSE63805 dataset was selected for independent validation.
Pairing of Differentially Expressed Circadian miRNAs (DEcmiRNA)
Circadian genes (Cgenes) were retrieved from the circadian gene database (CGDB, http://cgdb.biocuckoo.org (accessed on 5 March 2022)) [21]. Circadian miRNAs (cmiRNAs) were predicted according to the Cgenes through the microRNA data integration portal (miRDIP) database (http://ophid.utoronto.ca/mirDIP/ (accessed on 20 March 2022)) [22] and the microRNA target prediction (miRDB) database (http://www.mirdb.org (accessed on 21 March 2022)) [23]. To improve the accuracy of predictions, we set strict filter criteria as follows: mirDIP (score class: very high, integrated score more than 0.5 and number of sources more than 8); and miRDB (Include functional miRNAs only and exclude those with a target prediction score below 80, and those with more than 800 predicted targets in the genome). The "edgeR" package was employed in screening out the differentially expressed miRNAs (DEmiRNAs) between tumor and normal tissues (|log 2 FC| > 1 and FDR < 0.05). Then the differentially expressed cmiRNAs (DEcmiRNAs) were obtained via the intersection of DEmiRNA sets with cmiRNAs predicted by the mirDIP and miRDB databases for subsequent pairing and construction of the model.
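The differential-expression step itself was run with the R package edgeR; purely as an illustration of the filtering and intersection logic described above, a Python sketch with assumed file and column names could look as follows.

```python
import pandas as pd

# Assumed inputs: per-miRNA statistics exported from the differential-expression
# analysis, and the circadian-miRNA lists predicted by miRDB and mirDIP.
de = pd.read_csv("demiRNA_stats.csv")            # columns: miRNA, log2FC, FDR (assumed)
mirdb = set(pd.read_csv("cmiRNA_miRDB.csv")["miRNA"])
mirdip = set(pd.read_csv("cmiRNA_mirDIP.csv")["miRNA"])

# Thresholds used in the text: |log2FC| > 1 and FDR < 0.05
de_mirnas = set(de.loc[(de["log2FC"].abs() > 1) & (de["FDR"] < 0.05), "miRNA"])

# DEcmiRNAs: differentially expressed miRNAs predicted by both databases
decmirnas = de_mirnas & mirdb & mirdip
print(f"{len(decmirnas)} candidate DEcmiRNAs")
```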
This pairing strategy was consistent with that reported in previous studies [20,24]. Specifically, cmiRNAs were randomly paired. If the value of cmiRNA x was greater than that of cmiRNA y, then the score of the cmiRNA pair (cmiRNA x/cmiRNA y) was defined as 1, and 0 otherwise. Moreover, cmiRNA pairs with frequencies of 0 or 1 above 80% or below 20% were considered unrelated to prognosis and therefore excluded.
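A minimal sketch of this pairing and pre-filtering rule is shown below; the expression-table layout (samples in rows, cmiRNAs in columns) and the function name are assumptions made for illustration.

```python
import itertools
import pandas as pd

def pair_matrix(expr: pd.DataFrame, low=0.2, high=0.8) -> pd.DataFrame:
    """expr: expression table with samples in rows and cmiRNAs in columns (assumed).
    Returns the 0/1 pair-score matrix, keeping only pairs whose frequency of 1s
    lies strictly between `low` and `high` (i.e. between 20% and 80%)."""
    pairs = {}
    for x, y in itertools.combinations(expr.columns, 2):
        score = (expr[x] > expr[y]).astype(int)   # 1 if cmiRNA x > cmiRNA y, else 0
        freq = score.mean()
        if low < freq < high:                      # drop nearly constant pairs
            pairs[f"{x}|{y}"] = score
    return pd.DataFrame(pairs, index=expr.index)
```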
Establishment and Verification of a Prognostic cmiRNA Pair Signature
Clinical data (including survival information) were integrated into the cmiRNA pairing matrix. In the training dataset, prognostic cmiRNA pairs were filtered by univariate Cox regression (p < 0.01). We further performed LASSO to minimize the risk of overfitting. Finally, the stepwise regression method (method = "both") and the Akaike Information Criterion (AIC) were applied to screen cmiRNA pairs to construct the optimal multivariate Cox regression model, with the formula as follows: Risk score = h_0(t) × exp(β_1 × score of cmiRNA pair 1 + β_2 × score of cmiRNA pair 2 + … + β_n × score of cmiRNA pair n). The risk scores of the samples in the testing and validation datasets were calculated on the basis of the above formula. The median risk score was set as the cutoff point for the high- or low-risk group classification.
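Because h_0(t) × exp(·) is monotone in its exponent, ranking and grouping patients only requires the linear predictor; a sketch of this step, with hypothetical coefficients for the seven pairs, is given below.

```python
import pandas as pd

# Hypothetical coefficients for the seven selected cmiRNA pairs (illustrative only)
beta = pd.Series({"pair1": 0.8, "pair2": -0.5, "pair3": 0.6, "pair4": 0.4,
                  "pair5": -0.7, "pair6": 0.3, "pair7": 0.5})

def risk_groups(pair_scores: pd.DataFrame, cutoff=None):
    """pair_scores: 0/1 matrix of samples x selected cmiRNA pairs.
    Returns the linear predictor (risk score) and a high/low label per sample;
    the cutoff defaults to the median, as in the training dataset."""
    lp = pair_scores[beta.index].mul(beta, axis=1).sum(axis=1)  # sum_i beta_i * score_i
    cutoff = lp.median() if cutoff is None else cutoff
    group = (lp > cutoff).map({True: "high", False: "low"})
    return lp, group
```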
Receiver operating characteristic (ROC) curve and Kaplan-Meier survival analysis were performed to evaluate the correlation between the signature and prognosis in the training, testing, total TCGA, and independent GEO (GSE63805) datasets. Moreover, we assessed the predictive value of the signature for progression-free survival (PFS) in the TCGA dataset. To further validate the clinical implications of the cmiRNA pair signature, we conducted multivariate analyses to explore the correlation between the signature and clinical features. The predictive values of the signature and the clinical variables for prognosis were also compared.
Formulation and Assessment of the Nomogram
Based on the results of multivariate analyses, we integrated smoking, age, sex, stage, and risk score into a composite nomogram by applying Cox proportional hazards regression to the TCGA dataset. The prognostic accuracy of the nomogram is shown by the ROC and calibration curves. The "M" stage (28% missing) was not included to ensure consistency. The R package "rms" was employed in drawing the nomogram plot.
Analysis of Tumor Immune Microenvironment
Immune cells in the tumor microenvironment were estimated using seven previously reported algorithms [25][26][27][28][29][30][31]. Spearman correlation analyses were conducted to investigate the correlation between the risk score and the immune cell composition. The Wilcoxon signed-rank test was performed to explore the differences in immune cells between the high-and low-risk patients.
Assessment of Drug Sensitivity
The IC50 values of common antitumor medicines for each sample were predicted using pRRophetic [32]. The Wilcoxon signed-rank test was applied to compare IC50 between high-and low-risk patients.
MiRNA Extraction and Quantification
Total RNA from postoperative tumor and normal tissues was extracted by RNAiso Plus (#9109 Takara Bio, Beijing, China). The RNA concentration was measured via NanoDrop 2000 (NanoDrop Technologies, Wilmington, DE, USA). The miRNA first-strand cDNA was synthesized using a tailing reaction (#B532451, Sangon Biotech, Shanghai, China) with total RNA (1 µg). Forward primers for the miRNAs (listed in Table S1) were obtained from Sangon Biotech. Quantitative real-time polymerase chain reaction (qRT-PCR) was carried out using the TB Green® Premix Ex Taq™ II (#RR820A, Takara Bio, Beijing, China) and the CFX96 Real-Time PCR Detection System (Bio-Rad, Hercules, CA, USA). The relative expression of miRNAs was calculated using the 2^(−ΔΔCt) method, with U6 as an internal reference.
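For reference, the relative-quantification step with U6 as the internal control reduces to the following computation (the Ct values below are made up for illustration).

```python
def relative_expression(ct_target_tumor, ct_u6_tumor, ct_target_normal, ct_u6_normal):
    """2^(-ddCt) with U6 as the internal reference."""
    dd_ct = (ct_target_tumor - ct_u6_tumor) - (ct_target_normal - ct_u6_normal)
    return 2.0 ** (-dd_ct)

# Example with made-up Ct values: relative expression of a miRNA in tumor vs. normal tissue
print(relative_expression(26.1, 18.0, 27.9, 18.2))   # ~3.0-fold higher in the tumor sample
```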
Key cmiRNA-Cgene Network Construction and Gene Ontology (GO) Enrichment Analysis
The Cgenes matched to the cmiRNAs of the signature were obtained by the intersection of the above cmiRNA-target Cgene prediction analysis from the mirDIP and miRDB databases. Differential expression of the cmiRNAs and Cgenes in tumor and normal tissues in the TCGA cohort was calculated. Experimentally confirmed cmiRNA-target Cgene relationships were retrieved from the miRWalk database for miRNA-target interactions (which includes data from the miRTarBase database) [33]. We then used Cytoscape software to construct the cmiRNA-Cgene network. Spearman correlation analysis between cmiRNA expression and target Cgene in the TCGA-LUAD cohort was conducted using the Origin 2021 software. GO enrichment analysis of these target Cgenes was conducted using the DAVID database (https://david.ncifcrf.gov (accessed on 18 April 2022)).
Gene Set Enrichment Analysis (GSEA)
GSEA was performed using GSEA v4.2.1 software to explore the enrichment pathways and biological processes between the high-and low-risk groups in the TCGA dataset. Cancer hallmark and circadian rhythm-related gene sets were also investigated.
Construction of the Prognostic Signature Based on the cmiRNAs Pairs
The flow diagram of the study is shown in Figure 1. First, we retrieved the miRNA-seq data of 46 normal and 521 tumor samples from the TCGA-LUAD cohort and a list of the circadian genes from the CGDB database (Table S2). A total of 368 and 995 cmiRNAs were predicted based on the circadian genes using miRDB and miRDIP databases, respectively (Table S3). A total of 362 DEmiRNAs between normal and tumor samples were screened out (Figure 2A,B). A total of 90 alternative DEcmiRNAs were obtained by the intersection of cmiRNAs (predicted by miRDB and miRDIP databases) and DEmiRNAs ( Figure 2C). Then, 1038 valid cmiRNA pairs were identified and filtered after random pairing of the 90 DEcmiRNAs.
A total of 450 LUAD patients with complete clinical data and survival time > 30 days in the TCGA dataset were randomly divided into a training dataset (225 cases) and a testing dataset (225 cases). The clinicopathological features of patients in each dataset are shown in Table 1. In the training dataset, 26 survival-related cmiRNA pairs were extracted using univariate Cox regression analysis (p < 0.01, Table S4). Then, nine cmiRNA pairs were screened out by LASSO regression analysis to prevent overfitting (Figure 2D,E). Finally, seven cmiRNA pairs were selected using the stepwise regression method (method = "both") based on AIC to construct a prognostic signature using the multivariate Cox proportional hazard analysis (Figure 2F).
Evaluation and Validation of the Prognostic Signature
The testing dataset and GSE63805 cohort were used for internal and external independent verification of the prognostic signature, respectively. Clinicopathological data and the corresponding miRNAs in the GSE63805 cohort are presented in Table 1 and Table S5, respectively. The risk scores of the cases in the testing and GSE63805 datasets were calculated using the prognostic model. The ROC curves of the signature for 1-, 3-, and 5-year overall survival in the TCGA training dataset, TCGA testing dataset, TCGA total dataset, and GSE63805 dataset were drawn (Figure 3A-C,G). The AUC values for the training, testing and independent GSE63805 dataset were 0.794, 0.693 and 0.730, respectively. We determined the median risk score of the training dataset as the critical value for high- or low-risk classification. Kaplan-Meier curves indicated that the low-risk patients had significantly longer overall survival than the high-risk patients across all datasets (p < 0.001) (Figure 3D-F,H). Additionally, the risk score model also showed a certain capacity to predict 1-, 2-, and 3-year PFS, and low-risk patients had longer PFSs (Figure 3I,J). All of the above results showed the robust prognostic power of the established signature.
We further explored the association between the risk score and clinicopathological features. Multivariate Cox regression analysis of the risk score together with the clinical variables, including clinical stage, is shown in Figure 4A. The risk score showed the optimal AUC value compared with sex, age, smoking, clinical stage, and T and N stage (Figure 4B). Strip illustration (Figure 4C) and scatter drawings (Figure 4D,E) showed that there were significant correlations between the risk score and clinical stage and T stage.
Integrated Nomogram Combining the Risk Score with Clinical Variables
To further improve accuracy and practicality, we combined significant clinical factors (smoking, age, sex, and clinical stage) and risk score to fit a Cox proportional hazards regression model using the total TCGA dataset ( Figure 5A). As shown in the nomogram, each factor was assigned a point and the total nomogram point was calculated from the sum of the individual points of all predictors. In association with the total points, the survival rate of patients can be estimated by projecting the total points downward. The calibration ( Figure 5B) and ROC curves ( Figure 5C) indicated that the integrated nomogram achieved a higher accuracy of survival estimation than the risk score alone.
Correlation between Risk Score and Tumor Immune Microenvironment
The immunosuppressive microenvironment is one of the main factors contributing to poor prognosis in patients with LUAD. Therefore, we tested the correlation between the risk score and the tumor immune landscape ( Figure S1A-I). The results indicated that the risk score had a negative correlation with the majority of immune cells, especially crucial anti-tumor immune cells (such as CD8 + T and NKT cells), but had a positive relationship with tumor-promoting Th2 cells in the tumor microenvironment.
Application of Risk Score in Predicting Primary Drug Efficacy
Based on the pRRophetic algorithm, we explored the correlation between the risk score and drug sensitivity in the TCGA dataset. As shown in Figure 6, high-risk patients had lower IC50 values than low-risk patients for several common chemotherapy drugs, indicating that the former were more susceptible to cisplatin, docetaxel, gemcitabine, and vinorelbine ( Figure 6A-D). Similar results were observed for targeted therapy drugs, including the tyrosine kinase inhibitors erlotinib and gefitinib and the farnesyltransferase inhibitor tipifarnib (Figure 6E,F). The above results indicate the potential of this signature to predict drug sensitivity and guide treatment.
cmiRNA Quantitative Verification and Key cmiRNA-Cgene Network Construction
The relative expression of the 11 cmiRNAs between cancerous and adjacent normal tissues of patients with LUAD was further quantified by qRT-PCR. Only eight out of 11 miRNAs were confirmed to be significantly different between tumor and normal tissues, which might be explained by the low number of samples (Figure 7A). The 48 targeted Cgenes matched to the 11 cmiRNAs were extracted, and the key cmiRNA-Cgene network was established (Figure 7B). The relative expression levels (tumor vs. normal tissues) of cmiRNAs and their target Cgenes are also displayed in Figure 7B and Table S6. In addition, we searched the miRWalk database for experimentally validated target genes of these miRNAs (sourced from the miRTarBase database), which are marked by the red arrow in Figure 7B and listed in Table S7. It is clear that most relationships between these cmiRNAs and target Cgenes require experimental verification. Therefore, we performed Spearman correlation analysis between the expression of these cmiRNAs and their target Cgenes in the TCGA-LUAD cohort (Figure 7C). The expression correlation between miR-138-5p and the ROCK2 and RMND5A genes was consistent with the classical negative regulation relationship between miRNA and target gene expression. GO enrichment analysis suggested that these targeted Cgenes were enriched in pathways including TOR signaling, response to DNA damage by P53 class mediator, and regulation of centrosome cycle (Figure 7D). GSEA between the high-risk and low-risk groups showed that most of the HALLMARK pathways, circadian regulation of gene expression, and cell cycle-related signaling pathways were active in high-risk patients (Figure S2A-C). Overall, these results might provide clues to explore new potential mechanisms of cmiRNAs and Cgenes that promote lung cancer progression.
Discussion
The five-year survival of lung cancer is 10-20% in most countries [34]. It has increased in some countries over the past 20 years due to early diagnosis, improved treatment methods, and the development of precision medicine. Prognostic biomarkers that allow risk stratification of patients with lung cancer and provide a reference for clinical interventions are an important part of precision medicine. Chronic circadian rhythm disturbances are closely linked to the initiation and progression of tumors [6]. Some studies have shown that circadian genes have promising potential as prognostic markers of lung cancer. The miRNAs have unique advantages as prognostic markers. In this study, we established, for the first time, a signature consisting of seven cmiRNA pairs to predict the prognosis of lung cancer. The prognostic model has good predictive performance for the overall and progression-free survival of patients with LUAD in the TCGA dataset, and it has been validated in the GEO dataset (external validation). In addition, it has important reference significance in guiding the treatment of patients with LUAD. Nomograms also increase the usability and readability of the results.
Studies have suggested that clock genes and other genes related to the killing function (e.g., cytolytic factors, perforin, and granzyme B) have a circadian oscillatory rhythm, which is altered by chronic shift-lag. These alterations may be associated with the inhibited circadian rhythm of the cytolytic activity of NK cells and the progression of lung cancer [35]. Another study showed that lung cancer can induce deep reprogramming of the liver circadian rhythm at both the transcript and metabolite levels [36]. Further studies on the mechanism by which circadian rhythm disorders lead to poor prognosis of lung cancer can provide potential therapeutic targets. The role of all 11 cmiRNAs in our signature in NSCLC has been studied to varying degrees. MiR-539-3p [37], miR-584-5p [38], miR-138-5p [39], miR-335-5p [40], miR-376a-3p [41], miR-133b [42], miR-382-5p [43], and miR-215 [44] were suggested to suppress the proliferation, migration, and invasion of NSCLC, while miR-153-3p [45] and miR-31-5p [46] were shown to promote the metastasis and invasion of lung cancer cells. In addition, studies have shown that miR-539-3p [47] and miR-138-5p [48] enhance chemosensitivity to cisplatin in NSCLC, and miR-136-5p promotes anlotinib resistance in NSCLC [49]. These studies suggest that these miRNAs play important roles in cancer development. However, whether they are involved in circadian rhythm regulation and how they contribute to the progression of lung cancer requires further study.
The classical biological role of miRNAs is to degrade target mRNAs or to inhibit their translation, thereby reducing the expression of protein-coding genes [50]. Apart from a small number of verified cmiRNA-Cgene pairs, the remaining relationships were only predicted by the databases. Although the relative expression levels between tumor and normal tissues and the Spearman correlation analysis preliminarily showed a significant negative correlation between the expression of some miRNAs and their predicted target genes, these relationships require further experimental verification in the future. It is worth noting that miRNAs can also function outside the classical paradigm, for example by coding for peptides, directly activating transcription, up-regulating protein expression, and activating toll-like receptors [51]. Overall, the constructed cmiRNA-Cgene network, GO enrichment analysis and GSEA showed potentially important cmiRNA-Cgene regulation pathways involved in the progression of LUAD, which should be the focus of future in-depth research.
Several shortcomings are present in this study. First, the cmiRNAs were predicted from circadian genes using databases. Although we set strict criteria to increase the reliability of this targeting regulation, and part of the targeting relationships have been validated by other studies, their true targeting relationships require further verification. Second, owing to the difficulty in collecting clinical samples and the limited time, no large-scale clinical cohort study was carried out to verify the prediction model. Finally, the cmiRNA-Cgene network pathway and its target regulation mechanism were only preliminary explorations and require further in-depth validation.
Conclusions
To the best of our knowledge, this is the first study to explore the role of cmiRNAs in lung adenocarcinoma. The established prognostic model based on cmiRNA pairs exhibited good performance in predicting the overall survival, progression-free survival, and sensitivity to chemotherapy and targeted therapy.
Supplementary Materials: The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/cancers14205106/s1. Table S1. The forward primer sequences of 11 cmiRNA in the prognostic model used in quantitative real-time polymerase chain reaction (qRT-PCR) analysis. Table S2. The list of circadian genes downloaded from the Circadian Gene Database (Supplied in a separate Excel file, Table S2). Table S3. The list of miRNAs in TCGA-LUAD and GSE63805 cohort and circadian miRNAs predicted by miRDB and miRDIP databases (Supplied in a separate Excel file, Table S3). Table S4. cmiRNA pairs associated with prognosis obtained by Univariate COX regression analysis (p < 0.01). Table S5. Corresponding cmiRNAs in the prognostic model in TCGA-LUAD and GSE63805 cohort. Table S6. Expression levels of miRNA of the model and its targeted genes in tumor tissue (compared with normal tissues) in TCGA-LUAD cohort. Table S7. cmiRNAs of the prognostic model and corresponding target Cgenes (Supplied in a separate Excel file, Table S7). Figure S1. Analysis of immune cells between the high-and low-risk group. Figure S2. Gene Set Enrichment Analysis (GSEA) between the high-and low-risk group. | 2022-10-21T15:11:29.927Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "39de72bef12ee955e68ed4a0714324236d790824",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/14/20/5106/pdf?version=1666095594",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6c9df24f6d22466c2e9cedf4067548fd2051ac9d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
54075220 | pes2o/s2orc | v3-fos-license | ANALYSIS OF MATERIAL NON-LINEARITY OF STEEL GIRDERS SUBJECTED TO PATCH LOADING
Subject review The paper presents an analysis of the behaviour of plate girders (welded steel I girders) under patch loading. Concentrated or uniformly distributed loading over a short length of the flange, acting in the plane of the web, is known as patch loading. The behaviour of the girder in the non-linear range, as well as the ultimate load capacity, which is manifested by buckling in the loaded zone, was analysed. Girder modelling was performed in the ANSYS Workbench 15 software [1]. In accordance with the standing regulations EN 1993-1-5 [2] and BSK 07 [3], six different material models were employed in the numerical analysis. The obtained results were compared to the experimentally obtained results.
Introduction
Concentrated or uniformly distributed loading over a short length of the flange, acting in the plane of the web of welded steel I girders (see Fig. 1), is known as patch loading [4]. In practical applications, this type of load can be found in the case of crane girders or during the assembly of bridges, when the bridge is being slid into its designed position over temporary or permanent supports. The forces occurring then can locally surpass the bearing capacity of the structure because of the patch loading, so the assembly case can be relevant for design [5].
Due to its complexity, a comprehensive theoretical solution of this problem has not been found until now. For this reason, experimental research is of great importance, as well as numerical modelling. As a basis for problem solving, experimental results obtained by N. Marković will be used [6].
According to the experimental models, a numerical simulation of the girders was performed in the ANSYS Workbench 15 software. Initial geometrical imperfections on the web were assumed for the girders, based on the experimental model. Numerical modelling of steel girders loaded by patch loading was performed in compliance with the standing regulations EN 1993-1-5 [7], and six different material models were considered in the non-linear analysis.
We shall consider the girder behaviour in non-linear area, as well as the ultimate load capacity which is manifested by the occurrence of buckling in the loaded zone.
Two increases of the ultimate load capacity were analysed: the case when there is a longitudinal stiffener on the web, and the case the length of loading is increased from 50 mm to 150 mm.
The results obtained by the numerical analysis were compared with experimentally obtained results.
Experimental model
The research presented in this paper included a part of the experimental research conducted in 2003 at the Faculty of Civil Engineering of Podgorica by N. Marković from the Faculty of Civil Engineering of Belgrade [6]. Four types of girders of the A series were considered, having spans of 500 mm, either without a longitudinal stiffener or with a longitudinal stiffener near the upper flange of the girder. The load was applied along the entire width of the top flange with lengths of the loaded section of 50 mm and 150 mm. A girder of the A series with characteristic notations is presented in Fig. 1, and the dimensions are given in Tab. 1.
Preparation of the girders for the tests included marking of the grid on the web of the girders (Fig. 2), measuring of initial deformations, and material characteristics testing. The measured initial deformations (geometrical imperfections) in characteristic points (see Fig. 3) were used for the formation of the initial geometry of a girder in numerical modelling.
Figure 2 Grid marked on the girder (see [6])
Figure 3 Grid plot of the points on the web where initial geometrical imperfections were measured
Material property tests were performed and the obtained results were used for the formation of non-linear material models.
The girders were tested in a specially designed closed frame (see Fig. 4). The load was applied by a press connected to a hydraulic pump. Initially the force was increased in large, and later in smaller, load increments.
In the course of testing, lateral displacements were registered at the characteristic points of the web, as well as deflections at the middle of the top and bottom flange. Strains were measured at a number of chosen points on the web and flange, as well as the lateral displacements in the midspan cross section of the web, after each load variation.
The load was increased until the ultimate load was reached. The ultimate load was manifested as a progressive increase of the registered deformations of the flanges and the web and of the strains without further increase of the force. In the process, visible buckling of the web below the zone where the load was applied was observed. In the case of the girders with a longitudinal stiffener, the buckling is dominant between the loaded flange and the longitudinal stiffener.
Figure 4 Girder test frame
After the maximum load was kept for some time (different for the individual tests), the girders were unloaded and the residual deformations of the webs and loaded flanges were measured. The deformations outside the web plane were measured in as many points as the previously measured geometrical imperfections (see Fig. 3).
The results of the obtained ultimate loads are presented in Tab. 2.
Girder numerical modelling
Numerical modelling was performed in ANSYS Workbench 15 software [1].
Girder modelling included:
• Assigning of material properties and formation of material models;
• Formation of girder geometry with initial geometrical imperfections;
• Assigning of appropriate limit conditions as it was done in the experimental models;
• Assigning of loads as it was done in the experimental models;
• Discretization of the model using a mesh of finite elements;
• Assigning of appropriate analysis model parameters.
Formation of numerical models was performed so as to describe experimental models in the best possible way.
Material models within non-linear analysis
It is very important to assign the appropriate material behaviour parameters in a proper way in order to obtain relevant data using numerical analysis. Since a steel girder is considered, its mechanical properties are determined on the basis of the modulus of elasticity, the Poisson coefficient, and the compressive and tensile yield strength. It is assumed that the material is isotropic, with identical values of the mechanical properties in all directions.
For the purpose of obtaining as precise data as possible, steel coupons cut out from the girder webs were tested. The coupons had a width of 12,5 mm and a length of 65 mm. The obtained test results were used for the formation of simplified material models of σ-ε curves.
The tests proved that the material corresponds to the steel class S275, and the σ-ε diagram of the material characteristics of one of the coupons is presented in Fig. 5. The σ-ε diagrams of the other coupons have the same shape and similar values. The recommendations for design in EN 1993-1-5, Annex C, also concern the material properties. The behaviour of structural steels is elasto-plastic. After reaching the yield point f y, the material no longer behaves in a linear-elastic manner, but becomes plasticized and strengthened, meaning that material non-linearity occurs. Depending on the accuracy and availability of the data, the use of the following material models is proposed: a bilinear stress-strain curve, where, in cases when material data are not available, the yield is theoretically assumed as a horizontal line or as a slope with the value E/10000; the other case is an approximation of the curve with the slope E/100, where strengthening of the material is considered. When as realistic a representation as possible is desired, it is possible to approximate the real curve by a multilinear curve, which will present the material characteristic in the best way, if those data are available. According to BSK 07, the Swedish standard [3], a multilinear curve obtained by the calculation presented in Fig. 6 is recommended. In this paper, bilinear curves with the tangent moduli E t = E/10000 and E t = E/1000 will be used, as given in the European standard EN 1993-1-5, as well as E t = E/100 and E t = E/12,5 for an additional analysis. Multilinear curves recommended by EN 1993-1-5 and BSK 07 were used, in accordance with the data obtained in the mentioned laboratory tests in terms of the characteristic values of stresses and strains.
The material modulus of elasticity is 210 GPa, and the Poisson coefficient is ν = 0,3. The material models used are based on the isotropic rule and the von Mises theory of plasticity.
Within the non-linear analysis, the values of stresses and strains were calculated on the basis of various material models in the elastic-plastic range. This means that, for instance, for the model A3, for the same girder, a numerical simulation with six different behaviours of the same material during loading until failure was performed, according to the following:
• EN 1993-1-5 - multilinear curve corresponding to the real curve (Fig. 7);
• BSK 07 (Swedish standard) - multilinear curve according to Fig. 6, presented for the used models in Fig. 8;
• EN 1993-1-5 - bilinear curve with tangent modulus E t = E/10000 (Fig. 9);
• bilinear curve with tangent modulus E t = E/1000 (Fig. 10), for additional analysis;
• EN 1993-1-5 - bilinear curve with tangent modulus E t = E/100 (Fig. 11);
• bilinear curve with tangent modulus E t = E/12,5 (Fig. 12), for additional analysis.
The multilinear curve according to BSK 07, the Swedish standard, demonstrated satisfactory results in the work of Braun [8]. For our research, it is assumed in the form of the formulae presented in Fig. 6, where the average values of stress for all the coupons tested in the experiment were considered characteristic, those being f y = 329,33 MPa and f u = 440,83 MPa [9]. For the bilinear curves, further in the text, the average value of the yield stress, f y = 329,33 MPa, was also assumed.
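To make the bilinear models explicit, the following sketch generates the σ-ε curves for the tangent moduli listed above, using the average yield stress from the coupon tests; the maximum strain and the number of points are assumptions made only for illustration.

```python
import numpy as np

E = 210_000.0        # modulus of elasticity in MPa (210 GPa)
f_y = 329.33         # average yield stress of the tested coupons, MPa

def bilinear_curve(tangent_ratio, eps_max=0.05, n=200):
    """Bilinear stress-strain curve: linear-elastic up to f_y, then hardening
    with the tangent modulus E_t = E * tangent_ratio."""
    e_t = E * tangent_ratio
    eps_y = f_y / E
    eps = np.linspace(0.0, eps_max, n)
    sigma = np.where(eps <= eps_y, E * eps, f_y + e_t * (eps - eps_y))
    return eps, sigma

for ratio in (1/10000, 1/1000, 1/100, 1/12.5):
    eps, sigma = bilinear_curve(ratio)
    print(f"E_t = E/{1/ratio:g}: stress at 5 % strain = {sigma[-1]:.1f} MPa")
```

The stress reached at 5 % strain already indicates why the E t = E/12,5 curve is problematic: it hardens far beyond the measured tensile strength f u = 440,83 MPa.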
For each model, ultimate load capacity for the different material models was presented (see Tab. 2).
Geometry of girders, limit conditions, load and finite elements grid
The girder geometry was assigned according to the experimental models, as presented in Fig. 1, and the dimensions of the girder were presented in Tab. 1. Initial geometrical imperfections are assigned only on the girder web, while the flanges and stiffeners have no imperfections. In Fig. 13 there is a numerical model with initial imperfections on the web. The girder supports have a hemi-cylindrical form and they are restrained on the bottom side. The girder load was applied through the load plate, as in the experiment, and it was permitted to move only in the vertical direction.
As for the values, the load was applied in steps, more precisely in six steps, using the step controls function. Depending on the girder, the value of the load differed, because different values of the ultimate loads were expected for each one of them, in accordance with the experimental data.
The finite element mesh was discretized and an element size of 15 mm was adopted, see Fig. 14. The finite elements are of the SOLID 186 type. This finite element has 20 nodes with three degrees of freedom per node, namely the displacements in the axial directions x, y and z. It is very useful for non-linear analysis. The model of the girder with the adopted finite element mesh is presented in Fig. 14. One of the main goals of this modelling is the determination of the girder ultimate load value. In this section, the determination of the ultimate load of the four types of girders with six different material models is presented. The obtained results are compared with the experimental results, and an analysis of the obtained results was performed. The increase of the ultimate load capacity with the increase of the length of loading and with the influence of stiffeners is presented.
In the numerical analysis, the criterion for the ultimate load is the load at which the solutions start to diverge and the girder starts to lose stability and exhibits an abrupt build-up of deformations. Fig. 15 displays the development of the total deformation with a gradual increase of the force for the A1 girder type, for the material model in accordance with EN 1993-1-5 (see Fig. 7). Deformations gradually increase up to the value of 9,5 mm, which corresponds to the load of 178,174 kN, and after that the deformation abruptly increases and the solution diverges. Since after this value the girder loses stability, exactly this value of 178,174 kN was assumed to be the ultimate load. In Fig. 16 we can see a display of the total deformation at the ultimate load on the numerical model. In Fig. 17 it can be observed that at the force value of 92,96 kN (52,17 % of the ultimate load) the yield stress (f y = 329,97 MPa) was reached in certain points. The stress reaches its maximum value for the obtained value of the ultimate load, and abruptly decreases after that. A presentation of the state of the Von-Mises stresses on the numerical model for the ultimate load of the A1 girder is provided in Fig. 18. In the same way, the ultimate loads were obtained for the other types of girders, as well as for the other material models.
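The divergence criterion described above can be approximated in post-processing of a force-deflection history, for example as in the sketch below; the jump factor and the sample data are assumptions made for illustration, not the measured A1 results.

```python
def ultimate_load(forces, deflections, jump_factor=5.0):
    """Return the last force before the deflection increment per load increment
    grows abruptly (a simple proxy for the divergence of the solution)."""
    for i in range(2, len(forces)):
        d_prev = deflections[i - 1] - deflections[i - 2]
        d_curr = deflections[i] - deflections[i - 1]
        if d_prev > 0 and d_curr > jump_factor * d_prev:
            return forces[i - 1]
    return forces[-1]

# Illustrative force (kN) and deflection (mm) history with a loss of stability at the end
F = [0, 40, 80, 120, 150, 170, 178, 179]
w = [0.0, 1.0, 2.2, 3.8, 5.6, 7.8, 9.5, 30.0]
print(ultimate_load(F, w))   # -> 178
```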
The analysis of the values of ultimate loads for all the types of girders and various material models was performed, by comparing these results with experimental results of the material models which are proposed in current European standards, whereby both the bilinear curve with the tangent modulus E t = E/1000 and the bilinear curve with the tangent modulus E t = E/12,5 were taken into account because of additional research.
Values of the ultimate loads in kN for all four types of girders (girder A1, A2, A3 and A7) are presented in Tab. 2. Primarily, what is presented is the value of ultimate load obtained experimentally.
Other rows present ultimate loads obtained by the numerical simulation for each of various material models, and immediately below one may see the deviation from the experimentally obtained value.
The numbers display the values of the ultimate loads, expressed in kN, and the load difference with respect to the ultimate loads obtained experimentally is presented in percent. The last column presents the average value of the deviation, expressed in percent, for each of the assigned material models. In Tab. 2 it can be seen that all material models provided satisfactory congruence, except for the material model with the tangent modulus E t = E/12,5 (see Fig. 12), so it can be concluded that it is not favourable for this type of analysis and it will not be further considered.
On the other hand, the best congruence was exhibited by the material model with the multilinear curve according to EN 1993-1-5 (Fig. 7), whose deviation is 4,84 % on average for all the girder types.
The analysis of the obtained results provided the values of the increase of the ultimate load capacity for the cases when a longitudinal stiffener is added to the girder. The increase of the ultimate load capacity for the girders having a patch loading length of 50 mm (A1 and A3) and of 150 mm (A2 and A7), for different material models, is presented in Fig. 19 and Fig. 20. The following diagrams present the increase of the ultimate load capacity when the length of patch loading is increased from 50 mm to 150 mm. In the first diagram, Fig. 21, the girders without the longitudinal stiffener are presented (A1 and A2), and in the second diagram, Fig. 22, the girders with the longitudinal stiffener are presented (A3 and A7), for different material models.
Tab. 3 presents an analysis of the results of the increase of the ultimate load capacity. The first two columns present the increase of the ultimate load capacity, in percent, if a longitudinal stiffener is added to the girder: the girders with a patch loading length of 50 mm are in the first column and those with a patch loading length of 150 mm in the second column. The third and fourth columns present the increase of the ultimate load capacity when the length of patch loading is increased from 50 mm to 150 mm; the girders without a stiffener are in the third column and the girders with a stiffener are in the fourth. By analysing the obtained results, it was concluded that all material behaviour curves provided satisfactory results apart from the curve with the tangent modulus E t = E/12,5.
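The entries of Tab. 3 are simple relative increases of the ultimate loads from Tab. 2; with placeholder loads (not the measured values), the computation reads:

```python
def increase_percent(p_reference, p_modified):
    """Relative increase of the ultimate load capacity, in percent."""
    return 100.0 * (p_modified - p_reference) / p_reference

# Placeholder ultimate loads in kN -- illustrative only, not the values of Tab. 2
p_without, p_with = 178.0, 214.0       # without / with longitudinal stiffener, c = 50 mm
print(round(increase_percent(p_without, p_with), 1), "% increase due to the stiffener")
```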
The best congruence with the experimental results was exhibited by the material model corresponding to the multilinear curve of the real material, with an average deviation for all types of girders of 4,84 %. The material model which corresponds to the bilinear curve with the tangent modulus E t = E/12,5 deviates by 30,80 %, so it can be concluded that it is not favourable for this type of analysis.
When analysing the results, the following facts must be taken into account: the initial geometrical imperfections are assumed to be present only on the web; the characteristics of the material for the real curve, as well as the yield stress f y and tensile strength f u, are assumed as the average values of the tested coupons; the material characteristics of the flanges and the webs were assumed to be the same (which was not the case in the experiment); the impact of the welds and of the initial geometrical imperfections of the flanges and stiffeners was ignored; and it is impossible to accurately set the boundary conditions, etc.
By taking this into account, it can be said that satisfactory congruence of the results obtained experimentally and by the numerical analysis was achieved.
The behaviour of plate girders under the action of patch loading is very complex and depends on various parameters [10, 11]. Also, the stress analysis conducted previously indicates that the onset of plasticization starts as early as at 58,5 % of the ultimate load (on average, for all the girders). As presented, the deformations occurring then need not be considerable. Plasticization develops at the most loaded section of the web, firstly only on the surface, and then expanding across the thickness of the web [12].
This indicates that two combinations should be analysed for every structure:
- a girder constructed of high grade steel, with a higher yield strength and a slender web;
- a girder constructed of low grade steel, with a thick web.
The parameters leading to the increase of the ultimate load were analysed, and the conclusion is that the ultimate load is increased more when the length of the load is increased from 50 mm to 150 mm than when a longitudinal stiffener is added to the girder [13].
This conclusion can have a practical application. In the event of assembling bridges, when a bridge is being slid into its designed position over temporary or permanent supports, it is desirable to increase the contact surface of the girder and the support, which would increase the ultimate strength of the girders.
Figure 1
Figure 1 Girder model
Figure 5
Figure 5 Material property diagram for a coupon (steel S275)
Figure 6
Figure 6 Multilinear material model according to the BSK 07 standard [3]. An approximation of the real σ-ε curve (see Fig. 5) in several points was made. The values are increased to σ_true = σ(1 + ε).
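The conversion indicated in the caption can be written directly in code; the companion relation for the true strain, ε_true = ln(1 + ε), is the usual assumption and is added here only for completeness.

```python
import math

def to_true(sigma_eng, eps_eng):
    """Convert engineering stress and strain to true values.
    sigma_true = sigma * (1 + eps) as in the Fig. 6 caption;
    eps_true = ln(1 + eps) is the standard companion relation (assumed here)."""
    return sigma_eng * (1.0 + eps_eng), math.log(1.0 + eps_eng)

print(to_true(440.83, 0.20))   # tensile strength f_u at an assumed 20 % engineering strain
```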
Figure 7
Figure 7 EN1993-1-5-multilinear curve which corresponds to the real curve of material according to Fig. 5
Figure 13
Figure 13 Initial geometrical imperfections on the numerical model
Figure 14
Figure 14 Finite element mesh, SOLID 186, with the element size of 15 mm 4 Determination of the girder ultimate load
Figure 15
Figure 15 Development of total deformation with the increase of force for the girder A1
Fig. 17
Fig. 17 presents the diagram of the increase of the Von-Mises stresses with the increase of force until the ultimate load for the A1 girder type is reached. At the force value of 178,174 kN, the highest stress value of 440,59 MPa is reached, while the tensile strength of the steel is f u = 449,6 MPa.
Figure 16
Figure 16 Total deformation of the girder A1 for the value of ultimate load of 178,174 kN
Figure 17
Figure 17 Diagram of Von-Mises stresses with the increase of force for the A1 girder
Figure 18
Figure 18 Presentation of Von-Mises stress state of A1 girders for the force of 178,174 kN Figs. 15 and 17 display the values of deformations and stresses for A girders for the material model with multilinear curve in accordance with EN1993-1-5 (see Fig. 7).
Figure 19 Impact of longitudinal stiffener on the ultimate load capacity for c = 50 mm
Table 1
Basic geometrical characteristics of girder model
Table 2
Ultimate load values for all types of girders in kN
Table 3
Increase of ultimate load capacity when the length of patch loading is increased and longitudinal stiffener is present, expressed in percents | 2018-11-30T23:48:04.787Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "20c7e71b97c044767a0f8186f2c66de27eab69b9",
"oa_license": "CCBY",
"oa_url": "https://hrcak.srce.hr/file/280328",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "20c7e71b97c044767a0f8186f2c66de27eab69b9",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
111179172 | pes2o/s2orc | v3-fos-license | PROJECT OF ON-BOARD CONTROL SYSTEM WITH AIR-TASK EFFICIENCY ESTIMATION SUBSYSTEM BASED ON FUZZY LOGIC FOR UNMANNED COMBAT AERIAL VEHICLE ROCKETS
In this article, a project of an on-board control system with an air-task efficiency estimation subsystem based on fuzzy logic for UCAV rockets is presented. Moreover, some features allowing the use of fuzzy logic subsystems in unmanned aviation will be described as well. The complexity of the project and some mathematical calculations required relevant assumptions, which are also reported. Finally, a prognosis about the operation of the system and conclusions are provided. This article is an integral part of the authors' scientific research.
Introduction
Aircraft on-board combat systems are crucial in modern warfare. Air defense systems (guns, missiles and rockets) are continuously being modernized and have become the most dangerous threat in air combat missions (Grzesik 2005). The localization and character of the Iraq and Afghanistan conflicts are strong indications that UCAVs are commonly used. There is no guarantee of completing a task if the on-board defensive and offensive systems are not improved.
Research conducted by the authors and an analysis of the available literature (Piegat 1999; Wójtowicz 2012; Rutkowska et al. 1997; Mulawka 1996) show that fuzzy logic can be used in UCAV air-task efficiency estimation. Fuzzy logic controllers based on expert knowledge can be used in autonomous non-programmed missions.
Fuzzy logic can be used to evaluate the effectiveness of an air-task (completion of pre-flight analysis and the ability to select optimal solutions) and also in the on-board decision-making system employed during combat missions (Tomaszek, Wróblewski 2001; Grzesik 2012).
In this article, the authors present their vision of an on-board control system with a fuzzy air-task efficiency estimation subsystem for UCAV rockets. It will provide offensive and defensive modes with the highest (optimal) accuracy. Pictures and schematic diagrams illustrate the operational and functional ideas of the system's operation.
The project is characterized by reliable operation, and the ability to adjust to changing environmental conditions.
Due to the complexity of the problem of air-task efficiency estimation (Tomaszek, Wróblewski 2001), it was necessary to apply appropriate simplifying assumptions.
Expectations for the on-board control system with fuzzy air-task efficiency estimation subsystem for UCAV rockets:
- small weight;
- high level of accuracy;
- survivability;
- high level of reliability;
- all weather conditions and day/night use;
- ease to maintain and isolate faults;
- modular construction;
- real-time calculations;
- fully autonomous action with pilot/operator control option (touch screen monitor);
- the on-board guided control system with fuzzy air-task efficiency estimation subsystem should not significantly change the UCAV construction.
Features allowing the use of fuzzy logic for UCAV on-board systems
The variability and dynamism of phenomena and situations that occur on today's battlefield make them vague and imprecise. This nonlinearity would significantly reduce the time available and necessary to decide what kind of armament to use during an air-task. Therefore, it is necessary to construct on-board systems operating at higher speed and with higher reliability. These features are characteristic of fuzzy logic theory (Zadeh 1965, 1975; Yager, Filev 1995). Fuzzy controllers based on expert knowledge (Mulawka 1996) could be a significant factor in the decision-making process of employing a specific weapon in autonomous, previously non-programmed or programmed combat missions. The controllers could be used to evaluate the effectiveness of various kinds of air-tasks (completion of pre-flight analysis and the ability to select optimal solutions), as well as in the on-board decision-making system. The second solution needs to be tested and analyzed more precisely, because at any time during a mission a pilot/operator should have the opportunity to change his decision.
Conception
On-board computer and fuzzy control subsystem are the main elements of the project. Gathering and analysis of air navigation data and mission control are its main tasks. All necessary information is sent to ground control station displays (air data and particular system efficiency estimation). It is the pilot/operator's duty to control mission progress and make corrections if needed. The corrections are transferred to UCAV computer (for example, defining accumulated angle corrections for targeting: target speed, self-speed, altitude, angle of attack, etc.) and then to proper control systems (for example, execution mechanism controllers).
Outside of fire zones, cockpit signalization (information on displays for the pilot/operator in ground control station) is indispensable.
The UCAV receives all mission data directly from the ground control station or via satellite (depending on distance and geographic conditions). If the target is unidentified, the pilot/operator sends information (position, image of the target) to headquarters via digital transmission channels and waits for further decisions (Fig. 1).
Fig. 1. Schematic diagram of the on-board control system with fuzzy air-task efficiency estimation subsystem and the ground control station - conception for UCAV rockets
A typical ground control station is controlled by two pilot/operators. It consists of three modules (Wójtowicz 2012):
- A navigator module responsible for:
  - planning of the UCAV flight program and its uploading to the on-board computer,
  - making changes in the flight program during a mission,
  - receiving the flight program (mission plan) from headquarters and sending it to the UCAV,
  - locating the UCAV and making reports;
- A pilot/observer module, the primary task of which is to observe the UCAV location and control flight parameters;
- An image analyzer module.
There are usually two work stations in a ground control station. The station supports real-time work of the pilot/operators (radio communication with the UCAV provides reception of all indispensable air data). The vision monitor (Fig. 2, 1) displays flight data, navigation information, armament status and readiness, etc.
Fig. 2. Conception of ground control station
Air, navigation and armament data are displayed in the same way as in manned aircraft (on head-up display and color multifunction display). This enables rapid response and more accurate control of a UCAV.
During an air-task the display shows the parameters and position of a UCAV (aircraft status and progress of a mission) on the camera image or map.
UCAV communication system and telephone are located between work stations. Stick control, keyboard, mouse and integrated armament control panel support control of flight and on-board weapon.
Ground control station is also equipped with flight indicators. Flight indicator readings are the same as in manned aircraft. This introduces an impression of real flight.
The fuzzy control subsystem provides information about the air-task efficiency estimation and a suggestion on which system should be used (for highest efficiency). The pilot/operator has to decide whether he/she approves or refuses the suggestion. Schematic diagrams of the fuzzy air-task efficiency estimation subsystem and the control surfaces are presented below. The project was made using Matlab and the Fuzzy Logic Toolbox (Figs 3 and 4) (Mrozek, B., Mrozek, Z. 1994; The Math… 1995-1998).
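The article prototypes the subsystem with Matlab and the Fuzzy Logic Toolbox; purely as an illustration of the kind of inference involved, the following Python sketch evaluates a crisp air-task efficiency score with a hand-rolled Mamdani-style scheme. The inputs (target range, threat level), the membership functions and the rules are hypothetical and are not taken from the article.

```python
# Toy Mamdani-style fuzzy sketch (illustrative only; inputs, membership
# functions and rules are hypothetical, not from the article).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def estimate_efficiency(target_range_km, threat_level):
    """Return a crisp air-task efficiency estimate in [0, 1] (toy example)."""
    # Fuzzify the two hypothetical inputs.
    range_close = tri(target_range_km, -1.0, 0.0, 10.0)
    range_far   = tri(target_range_km, 5.0, 20.0, 21.0)
    threat_low  = tri(threat_level, -0.1, 0.0, 0.5)
    threat_high = tri(threat_level, 0.3, 1.0, 1.1)

    # Two toy rules (min as the AND operator), each firing an output fuzzy set.
    rule_high = min(range_close, threat_low)   # close target, low threat -> high efficiency
    rule_low  = min(range_far, threat_high)    # distant target, high threat -> low efficiency

    # Aggregate the clipped output sets and defuzzify by centroid over [0, 1].
    y = np.linspace(0.0, 1.0, 101)
    agg = np.maximum(np.minimum(rule_high, tri(y, 0.5, 1.0, 1.1)),
                     np.minimum(rule_low, tri(y, -0.1, 0.0, 0.5)))
    return float((agg * y).sum() / (agg.sum() + 1e-9))

print(estimate_efficiency(target_range_km=4.0, threat_level=0.2))
```

A Mamdani controller of this kind is easy to inspect and tune by an expert, which is one reason fuzzy logic is attractive for on-board decision support.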
Conclusions
This paper describes the authors' vision of using fuzzy control sets in UCAV on-board armament systems. After close research, the authors conclude that the primary problem is to assure communication (between the mission computer and the fuzzy subsystem), real-time data computing and visualization of the calculation results. Selection of an adequate fuzzy inference system and of the defuzzification method are secondary problems to be solved. The weight of all necessary units should be a crucial factor.
In further analysis of the project, the authors will try to construct a model of the fuzzy air-task efficiency estimation subsystem and test it in laboratory conditions. The results of the tests should give all required data.
Complexity of the project requires the involvement of many specialists. | 2019-04-13T13:05:51.467Z | 2014-04-03T00:00:00.000 | {
"year": 2014,
"sha1": "511a6204bc4e35d85e913fd93b31f8f0c49b2235",
"oa_license": "CCBY",
"oa_url": "https://journals.vilniustech.lt/index.php/Aviation/article/download/3002/2481",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f1565fde827934803da08b54a4de8c3cff52c297",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
18897535 | pes2o/s2orc | v3-fos-license | Associations of recurrent miscarriages with chromosomal abnormalities, thrombophilia allelic polymorphisms and/or consanguinity in Saudi Arabia
Background Recurrent pregnancy loss (RPL) or recurrent spontaneous abortion is an obstetric complication that affects couples at reproductive age. Previous reports documented a clear relationship between parents with chromosomal abnormalities and both recurrent miscarriages and infertility. However, limited data is available from the Arabian Peninsula which is known by higher rates of consanguineous marriages. The main goal of this study was to determine the prevalence of chromosomal abnormalities and thrombophilic polymorphisms, and to correlate them with RPL and consanguinity in Saudi Arabia. Methods Cytogenetic analysis of 171 consent patients with RPL was performed by the standard method of 72-h lymphocyte culture and GTG banding. Allelic polymorphisms of three thrombophilic genes (Factor V Leiden, Prothrombin A20210G, MTHFR C677T) were performed using PCR-RFLP (restriction fragment length polymorphism) and gel electrophoresis. Results Data analysis revealed that 7.6 % of patients were carrier of numerical or structural chromosomal abnormalities. A high rate of translocations (46 %) was associated to increased incidence of RPL. A significant correlation between consanguineous RPL patients and chromosomal abnormalities (P < 0.05) was found. Both Factor V Leiden and Prothrombin A20210G allelic polymorphisms were significantly associated with a higher prevalence of RPL. Conclusions This study demonstrated a strong association between RPL and the prevalence of chromosomal abnormalities and inherited thrombophilia. Given the high rate of consanguineous marriages in the Saudi population, these results underline the importance of systematic cytogenetic investigation and genetic counseling preferably at the premarital stage or at least during early pregnancy phase through preimplantation genetic diagnosis (PGD).
Keywords: Recurrent pregnancy loss, Chromosomal aberrations, Thrombophilia, Consanguinity, Cytogenetic analysis
Background
Recurrent miscarriages (RM) are clinically detectable pregnancies that fail to progress. They are common pregnancy complications that affect 15-20 % of couples [1]. RM is a common obstetric health concern that affects around 5 % of women at the reproductive age [2,3]. The RM etiology comprises several causes including chromosomal, genetic, anatomic, immune and infective factors [1]. Although it is still controversial whether RM should be considered after 2 pregnancy losses versus 3 or more, most clinicians recommend initiating evaluations from the onset of the second miscarriage, since no significant difference in RM susceptibility was found between patients with 2 versus 3 or more pregnancy losses [4].
Despite worthy studies in Obstetrics/Gynecology clinics and IVF (in vitro fertilization) centers worldwide of this sporadic complication of early pregnancy, RM etiology remains poorly understood [5,6]. Therefore, effective diagnosis and prevention/treatment approaches are still lacking [7,8]. Several causes underlying this failure to deliver a normal and viable embryo were reported including aged mothers, uterine abnormalities, placental abruption, incompetent cervix, parents' chromosomal and genetic background, immune disorders and/or endocrine imbalances. Among these causes, three in particular were considered as the major factors of RM including: (i) structural and numerical chromosomal abnormalities, (ii) inflammatory and autoimmune disorders, and (iii) allelic polymorphisms of some pro-thrombophilic genes [6,[9][10][11][12]. In fact, positive correlations were reported between chromosomal abnormalities in the embryos and higher RM [13]. This fetal genomic incompatibility to life was associated to chromosomal aneuploidies and mosaicism within the embryo/abortus [14][15][16][17]. Moreover, couples who carry chromosomal abnormalities were found to be at risk for repeated miscarriages and therefore have lower chances to deliver a viable offspring [7,[18][19][20][21]. Spontaneous miscarriages caused by the chromosomal abnormalities may arise from one of the parents producing defective gametes that will lead to fetal abnormalities and mental disorders. In fact, 3-6 % of RM were due to chromosomal abnormalities of one of the two partners [22][23][24]. Trisomies in chromosomes 13-16, 21 and 22 were the most common chromosomal aneuploidies followed by monosomy X (45, X). Thus, parental karyotyping is a recommended procedure to assess the cause of recurrent pregnancy losses [23,[25][26][27].
Pro-thrombophilic factors have also been suggested as one of the major causes of RM. In fact, some genetic polymorphisms of prothrombin (FII G 20210A), Factor V (Factor V Leiden, FVL) and methylene tetrahydrofolate reductase (MTHFR, C677T gene variant) genes were strongly associated with recurrent miscarriages [3,28,29]. These factors of inherited thrombophilia disturb normal placental vascularization and formation leading to fetal growth restrictions, pregnancy failure, placental abruption and therefore miscarriages or stillbirth [30].
In addition to the RM incidence worldwide, Saudi Arabia is well known by a high level of consanguineous marriages driven by ethnic or tribal considerations [31]. These consanguineous marriages significantly increase the incidence of inherited recessive disorders and affect some reproductive and developmental health parameters such as infertility rates, recurrent miscarriages, and congenital disorders (e.g. thrombophilia) [32][33][34][35]. The objective of this study is to assess possible correlations between chromosomal abnormalities and couples with history of RM. Additional investigations were also carried out to assess the presence of some thrombophilia genetic risk factors including factor V Leiden, Prothrombin A20210G, MTHFR C677T mutations that may further explain the RM outcomes.
Study approach
A cohort study of patients with a history of two or more miscarriages up to 20 weeks was conducted at King Abdulaziz University Hospital in the Western region of Saudi Arabia between 2008 and 2013. The study included couples with repeated pregnancy losses who had presented themselves during this period for further investigations. Pregnancy was confirmed by a positive human chorionic gonadotropin (HCG) test using serum or urine, in combination with ultrasound.
Patients
Following King Abdulaziz University board approval and informed consent of the patients, a cohort of 171 RM patients (73 couples in addition to 25 women only because their husbands were not available for cytogenetic analysis) were selected for peripheral blood collection and a detailed counseling questionnaire covering their personal details, family history and any laboratory results or past investigations that had been conducted. Only patients who experienced two or more miscarriages up to 20 weeks of gestational age were included in this study. Patients' anonymity and data confidentiality were preserved.
Cytogenetic analysis
The peripheral blood of each patient (5-10 ml) was subjected to standard 72-h lymphocyte culture to produce Metaphases for cytogenetic analysis (karyotyping) using standard harvesting protocols. GTG banding (G banding) was performed by a pretreatment of chromosomes with trypsin followed by Giemsa staining. Chromosomes' analysis was done using Cytovision software, a semi-automatic Applied Imaging Karyotyper, and karyotypes were designed according to International System for Human Cytogenetic Nomenclature [36]. Karyotype analysis was performed using at least 20 cells for each patient. In case of suspected mosaicism, this number was expanded to 100 metaphases.
Genetic risk factors causing thrombophilia
Since associations between thrombophilia and RM were previously reported [37,38], we proposed in this study to further explore the thrombophilic genetic polymorphisms in our patients' cohort. The main genes investigated using PCR-RFLP (restriction fragment length polymorphism) were Factor V Leiden (FVL), Prothrombin A20210G and MTHFR C677T. Briefly, following DNA extraction (QIAamp DNA Blood Maxi kit, Qiagen), a Taq-polymerase based PCR using specific primers for each gene was performed (Table 1). The PCR product for each gene was then fragmented using a specific restriction enzyme in order to target potential Single Nucleotide Polymorphisms (SNPs) and separated according to size in base pairs by gel electrophoresis, as summarized in Table 1.
Statistical analysis
Associations between the patients' clinical features and the cytogenetic analysis in the cohort of patients were assessed using χ2 analysis and Fisher's exact test. The statistical analysis was carried out using MATLAB R2012a (Version 7.14; The MathWorks, Natick, MA, USA).
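As an illustration of this kind of analysis (not the study's code or data), the equivalent tests can be run in Python with scipy on a 2 × 2 contingency table; the counts below are placeholders.

```python
# Illustrative only: the counts below are placeholders, not the study data.
import numpy as np
from scipy import stats

# 2x2 contingency table: rows = consanguineous / non-consanguineous couples,
# columns = abnormal / normal karyotype (hypothetical counts).
table = np.array([[6, 45],
                  [3, 117]])

chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)

print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
```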
Patients' cohort and RM
This study reports the chromosomal analysis of the 171 RM patients (73 couples + 25 women). There were 492 documented pregnancies in this cohort (mean pregnancies per couple = 5.02, SD = 2.79). The age of the subjects ranged from 18 to 48 (mean 32.17, SD = 6.39) (Table 2) and the number of miscarriages ranged from 2 to 14 (mean miscarriages per couple = 4.18; SD = 2.578). Out of 98 women, 32 were able to achieve successful pregnancies which led to a viable baby, whereas the remaining 67.35 % were unable to achieve any successful full-term pregnancy despite several attempts. Overall, most clinical pregnancies (79.59 %) were terminated in their first trimester while only 12.24 % and 8.16 % ended at the second and third trimesters, respectively.
Around 30 % of couples in this patients' cohort had a family history of consanguineous marriages. Interestingly, a significant positive correlation (P = 0.046) between consanguineous marriages and chromosomal abnormalities was recorded. However, the correlation between the number of miscarriages and consanguinity was not statistically significant (Table 4). A relatively higher number of spontaneous miscarriages was observed in couples with an abnormal karyotype (mean 4.78, SD = 3.11) when compared to those with a normal karyotype (mean 4.12, SD = 2.57). Out of 52 pregnancy attempts performed by just 9 couples with chromosomal abnormalities, only 9 (17 %) ended up with a live birth.
Thrombophilia allelic polymorphisms and RM
Results of the SNP analysis of the 3 thrombophilic genes showed the frequencies of FV Leiden, Prothrombin and MTHFR mutations among RM patients compared to the general incidence reported in the Saudi population. Allelic polymorphisms, mainly of the FVL and Prothrombin genes, were relatively high, thus supporting our hypothesis of considering these as RM genetic factors (Table 5).
Discussions
Recurrent miscarriages are clinically detectable pregnancies that fail to progress due to several causes including chromosomal, genetic, anatomic, immune or infective factors [1]. Chromosomal and genetic abnormalities are among the most common factors leading to recurrent miscarriages and pregnancy demise [23,39,40]. Amongst these genetic factors, thrombophilia was shown to be a main cause leading to recurrent miscarriages [3,29]. Moreover, Saudi population is marked by a high rate of consanguineous marriages (about 58 %), the majority of them were between first-degree cousins [33,34]. In this particular context, the current study was designed to assess two major parameters know to be causative of RM: (i) the chromosomal abnormalities; and (ii) inherited thrombophilia.
Chromosomal abnormalities analysis
Following cytogenetic analysis of our patients' cohort, 77 % of carriers of numerical or chromosomal abnormalities were women. It means that a high female to male ratio of 4: 1 in couples carriers of chromosomal abnormalities was recorded, which is higher than previously reported ratios (around 2:1) in Saudi Arabia and elsewhere [18,41,42]. This higher ratio in women facing recurrent miscarriages could be explained by the heavy involvement of their oocytes in the fertilization process and early embryo development by providing all the molecular machinery of the new embryo start-up and early development [43][44][45]. Therefore, these molecular or cytogenetic imbalances look to affect the onset and/or the stability of the pregnancy with higher incidence in women compared to men. These findings are in line with previous reports describing associations between the maternal chromosomal status and RM [46,47]. Among the carriers' cohort, more than 46 % of patients showed various types of translocations. This is consistent with previous reports where chromosomal aberrations, mainly translocations in the parents or the abortus (embryo), were shown to be strongly associated with higher incidence of miscarriages [15,46,[48][49][50][51][52].
The presence of chromosomal polymorphisms were associated with abnormalities of the heterochromatin constitutively located in different loci of the chromosomes and might be associated with some diseases as infertility and RM [53][54][55].
Turner Syndrome (TS) is another chromosomal disorders reported in our patients' cohort ( Fig. 1) which usually is marked by a total or partial loss of one of the two X chromosomes. In fact, recurrent miscarriages, fetal perinatal death or malformed newborns are known to be frequent in TS patients [56][57][58]. Our data also confirm this strong association between women with TS and RM although the number of patients carrying this syndrome is not as expected in our cohort, since patients with TS are known to face RM and are not routinely referred for that. Concomitant with our results, other chromosomal disorders, including duplications and deletions have been also reported in RM couples [17,59].
Age, consanguinity and RM
The age range of the patients' group involved in this study was wide (from 18 to 48) ( Table 2). One striking finding is that the ages of all of the 13 patients (100 %) identified as carriers of chromosomal abnormalities were below 35 years (Table 3). Despite their relative young age of marriage, these couples were struggling to conceive since they (both or one of them) are carrier of chromosomal abnormalities leading to higher risk of repeated miscarriages and lower chances to deliver a viable and healthy offspring [18,19]. These patients are also facing an important social pressure to procreate [60] especially in conservative societies as Saudi Arabia. Such particular context may explain the high average (≈5) of pregnancy attempts per couple even at relatively young age. Such social pressure to procreate is very interesting to highlight compared to other societies where the childfree is one of the new reproductive lifestyle trends [60,61]. However, the high incidence of population inbreeding and consanguineous marriages could explain the high rate of RM at this early age [31,62]. In fact, around one third of the couples involved in this study were consanguineous (Table 4). Although it is somehow decreasing, this consanguinity driven by an old Arabic tradition is known to rise the frequency of recessive genetic diseases as well as reproductive and developmental disorders such as infertility rates, recurrent miscarriages, and congenital disorders (e.g. thrombophilia) [32][33][34][35]. A significant positive correlation between consanguinity and reported chromosomal aberrations and polymorphic variants (P = 0.046) documented in this study supports that the RM is mainly due to genomic instability manifested in several chromosomal abnormalities in this group (age ≤ 35) rather than reproductive aging or other known factors. On the other hand, no significant correlation between the number of RM and consanguinity was reported. This result might be explained by the fact that the consanguinity negative effect is observed mainly in the presence of [68] carried chromosomal and/or genetic abnormalities in one or both partners, and only very large cohorts of patients might detect such impact. However, the aging process seems to be the main cause of RM in couples beyond 35 years since no chromosomal disorders have been detected. In fact, the miscarriage frequency and subsequent reproductive failure were positively correlated to the increase of paternal and/or maternal age [63][64][65]. Such aging process (beyond 35 and 40 years respectively for women and men) is known to cause genetic and chromosomal disorders during gametogenesis, fertilization and early embryonic development [15,46,65]. Such fetal genetic alterations induce a genomic instability and therefore RM.
Out of 52 pregnancies among couples carrying chromosomal abnormalities, only 9 (17 %) ended up with a live birth. This rate is lower than that reported in other studies, which showed up to 45 % live births among couples with structural chromosomal abnormalities. This could be explained by the predisposition of the Saudi society to other RM risk factors such as thrombophilia [35,66].
Thrombophilia genetic polymorphisms and RM
Coagulation anomalies are reported to induce important pregnancy complications. In this context, prothrombophilic factors were reported to be involved in RM, including FVL, Prothrombin A20210G and MTHFR C677T. These factors are known to disrupt key events associated with placentation, fetal development and pregnancy progression until delivery [3,[28][29][30]67]. In this study, PCR-RFLP was used for molecular analysis of potential SNPs in three (3) thrombophilia-associated genes: FVL, Prothrombin A20210G and MTHFR C677T, as detailed in Table 1. The screening for potential SNPs in these 3 genes showed that the frequencies of FVL, Prothrombin and MTHFR mutations (including both homozygous and carriers) were respectively 15.5 %, 6 % and 25.5 % (Table 5). These results confirm the atypical and relatively high incidence of thrombophilic gene polymorphisms among the Saudi population reported in previous studies [68]. Besides national awareness campaigns, these findings support more preventive measures to be considered at the premarital stage and/or before IVF procedures in ART clinics.
The prevalence of FVL and Prothrombin A20210G mutations reported in patients' cohort support a strong relationship between these traits and RM. Following analysis of the most important studies about inherited thrombophilia, our data are in line with previous findings where both mutations have been known as common genetic disorders that predispose to early and late RM [3,29,[69][70][71]. A recent study in Saudi Arabia also confirmed the involvement of mutations in these two particular genes (FVL and Prothrombin A20210G) in increasing RM incidence [72]. Additionally, these two thrombophilic genetic traits were associated with obstetric complications including miscarriages, placental abruption, intrauterine growth retardation (IUGR) or death [69,[73][74][75]. These findings explain the significant correlations with RM reported in this study and support the assumption that both factor V Leiden and Prothrombin mutations are major risk factors for RM. Suitable treatment of inherited and acquired thrombophilia will improve the pregnancy outcomes as discussed elsewhere [69]. Therefore, a national medical program for routine screening of these two genes in patients with repeated pregnancy failure in Saudi Arabia is highly recommended.
For the MTHFR C677T mutation in RM patients, and in agreement with other studies, no significant (P > 0.05) association with RM was found [29]. The general practice of folic acid supplementation during pregnancy seems to mask the effect of the MTHFR mutation in RM patients, as reported elsewhere [76].
Since RM is a challenging obstetric complication with various psychological, societal and economic burdens on both couples and the health care system in general, our study contributed to report an effect of both structural and numerical effects of chromosomal abnormalities on RM, which were amplified by consanguinity mainly for couples under 35 years. Moreover, thrombophilic polymorphisms of FVL and Prothrombin A20210G were significantly associated to higher prevalence of RM.
Conclusions
This study demonstrated a strong association between RPL and the prevalence of chromosomal abnormalities and inherited thrombophilia, and confirms the high incidence of RM in the Western region of Saudi Arabia, thus suggesting some main but non-exclusive causes of this condition. Our findings also lay the foundation for larger cohort-based studies to further validate and confirm the impact of thalassemia gene polymorphisms and hemoglobinopathies in general, chromosomal aberrations and consanguinity, but also to predict the involvement of other anatomic, endocrine or auto-immune factors.
Our study highlights the importance of including cytogenetic and thrombophilia testing as part of the routine clinical investigation of RM and during Preimplantation Genetic Screening (PGS) in IVF clinics in order to suggest suitable management and/or treatment approaches. Such genetic testing along with standard karyotyping are highly recommended to be included in premarital test especially for consanguineous partners. In the genomic era, further studies focusing on the molecular mechanism of thrombophlic polymorphisms on placental pathologies and pregnancy loss using high-throughput technologies as array Comparative Genomic Hybridization (aCGH) and Next Generation Sequencing (NGS) are highly recommended. We believe that a multidisciplinary and collaborative approach between obstetricians, geneticists, hematologists, scientists and bioethicists combined with effective awareness program will contribute to relieve the burden of RM. | 2017-08-03T01:40:48.501Z | 2016-10-10T00:00:00.000 | {
"year": 2016,
"sha1": "56bc3c8f8f1956f95dc1493e8c58590d741ca212",
"oa_license": "CCBY",
"oa_url": "https://bmcmedgenet.biomedcentral.com/track/pdf/10.1186/s12881-016-0331-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "56bc3c8f8f1956f95dc1493e8c58590d741ca212",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
236428455 | pes2o/s2orc | v3-fos-license | In Defense of the Learning Without Forgetting for Task Incremental Learning
Catastrophic forgetting is one of the major challenges on the road for continual learning systems, which are presented with an on-line stream of tasks. The field has attracted considerable interest and a diverse set of methods have been presented for overcoming this challenge. Learning without Forgetting (LwF) is one of the earliest and most frequently cited methods. It has the advantages of not requiring the storage of samples from the previous tasks, of implementation simplicity, and of being well-grounded by relying on knowledge distillation. However, the prevailing view is that while it shows a relatively small amount of forgetting when only two tasks are introduced, it fails to scale to long sequences of tasks. This paper challenges this view, by showing that using the right architecture along with a standard set of augmentations, the results obtained by LwF surpass the latest algorithms for task incremental scenario. This improved performance is demonstrated by an extensive set of experiments over CIFAR-100 and Tiny-ImageNet, where it is also shown that other methods cannot benefit as much from similar improvements.
Introduction
The phenomenon of catastrophic forgetting (CF) of old concepts as new ones are learned in an online manner is well-known. The approaches to overcome it can be categorized, as suggested by De Lange et al. [3], into three families: (i) replay-based methods, which store selected samples of previously encountered classes, (ii) regularization-based methods, that limit the freedom to learn new concepts, and (iii) parameter isolation methods, which directly protect the knowledge gained in the past, by dividing the network parameters into separate compartments.
The field of continual learning is very active, with dozens of methods that have emerged in the last few years. However, it seems that the growing interest leads to confusion rather than to the consolidation of knowledge. As practitioners looking to find out which online learning method would be suitable for a real-world application, we were unable to identify the solid methods of the field and could not infer from the literature the guiding principles for tackling catastrophic forgetting.
Indeed, reviewing the literature, one can find many insightful ideas and well-motivated solutions. However, there is little data regarding the generality of continual learning methods, the sensitivity of the methods to the specific setting and hyperparameters, the tradeoff between memory, run-time and performance, and so on. Ideally, one would like to find a method that is not only well-grounded and motivated, but also displays a set of desired properties: (i) work across multiple datasets, (ii) be stable to long sequences of on-line learning tasks, (iii) benefit from additional capacity, (iv) display flexibility in network architecture that allows the incorporation of modern architectures, (v) display an intuitive behavior when applying regularization, and (vi) present robustness to hyperparameters.
We demonstrate that these properties hold for one of the first methods to be proposed for tackling CF, namely the Learning without Forgetting (LwF) method [22]. This is a bit surprising, since this method, as a classical method in a fast-evolving field, has been repeatedly used as an inferior baseline. However, we show that unlike many of the more recent methods, this scapegoat method can benefit from residual architectures and further benefits from simple augmentation techniques. Moreover, while the original LwF implementation employed techniques such as warmup and weight decay, we were able to train without these techniques and their associated hyperparameters. Overall, we find LwF, which is a simple data-driven regularization technique, to be more effective than the most promising regularization-based and parameter-isolation methods.
Related work
It is often the case that new methods are presented as having clear advantages over existing ones, based on empirical evidence. The inventors of these methods have little incentive to explore the underlying reason for the performance gap. Without a dedicated effort to do so, the literature can quickly become misleading.
In our work, we demonstrate that the task-incremental learning methods that have emerged since the 2016 inception of the LwF method are not more accurate than this straightforward method. This demonstration is based on changing the underlying neural network architecture to a ResNet [10] and on employing a simple augmentation technique during training. Moreover, we show that LwF benefits from more capacity, width wise.
A recent related attempt by De Lange et al. [3] also addresses the need to compare multiple continual learning algorithms in task-incremental settings. That study has employed multiple architectures, and, similar to us, has noted that the LwF method benefits from the additional capacity given by extra width but not from extra depth. However, neither ResNets nor augmentations were employed, and the conclusion was that LwF is not competitive with the more recent techniques. This conclusion is in sheer contrast to ours, demonstrating the challenge of comparing methods in a way that exposes their full potential, and the need to perform such comparative work repeatedly.
Task-incremental learning
CF in neural networks has been observed from the beginning. However, there is no consensus regarding the proper settings and metrics for comparing different techniques. In this work, we adopt a setting definition from the work of [33,12], who define three different settings for continual learning -task incremental, domain incremental, and class incremental. In all scenarios, the system is presented with a stream of tasks and is required to solve all tasks that are seen so far. In task incremental, the task identifier is provided both in train and inference time. In domain incremental, the task identifier is provided only in train time, and the classifier does not need to infer the task identifier but rather just solve the task at hand. In class incremental, the learner also needs to infer the task identifier in inference time.
We focus on the task incremental setting. Moreover, we do not consider replay-based methods since these rely heavily on accessing data retained from the previous tasks, which is not desirable in real-world scenarios, and depends on an additional parameter that is the size of the memory.
The literature has a great number of methods, further emphasizing the need for comparative work. In this work, we focus on the methods that are repeatedly reported in the literature [3,29,13,21]. These include: Elastic Weight Consolidation (EWC; [16], online version), Incremental Moment Matching (IMM; [20], both Mean and Mode variants), overcoming CF with Hard Attention to the Task (HAT; [29]), continual learning with Hypernetworks (Hyper-CL; [34]) and Adversarial Continual Learning (ACL; [4]).
Both the EWC and IMM variants, belong to a regularization-based family and add a structural, weight-based, regularization term to the loss function to discourage changes to weights that are important for previous tasks. IMM performs a separate model-merging step after learning a new task, which EWC does not. Although this family of methods is very rich, IMM and EWC are among the leading methods and are often cited as baselines.
The HAT approach belongs to the parameter isolation family and applies a light-weight, unit-based, learnable, and 'soft' masks per task. HAT is a successor to various works, including (i) progressive neural networks (PNNs; [27]), which applies a complete and separate network for each task (columns) with adapters between columns, (ii) PathNet [5] that also pre-assigns some amount of network capacity per task but, in contrast to PNNs, avoids network columns and adapters and instead suggests to learn evolutionary the paths between modules, and (iii) PackNet [24], which uses weight-based pruning heuristics and a retraining phase to maintain a binary mask for each task. Since HAT was shown to have both performance and computational advantages over (i)-(iii), we focus on it as a representative method from this line of work.
Hyper-CL [34], a recent addition to the parameter isolation family, belongs to a different branch in this family than HAT. Instead of using a fixed pre-determined capacity, Hyper-CL suggests learning the weights of a target network for each task. Hyper-CL employs a variant of Hypernetworks [8], called Chunked-Hypernetworks [25], which generates different subsets of the target network's parameters using the same generator. To do so, the method learns both the task embedding and the "chunk" embedding. This variant makes it possible to maintain a much smaller hypernetwork than the target network. To overcome CF, they apply regularization that constrains the weights of the previously seen target task from changing.
Some methods belong to more than one category. ACL [4] employs both parameter isolation, using a small private network for each task, and regularization for a shared network across tasks. This regularization contains two parts: an adversarial loss that makes the shared encoding task-independent [6] and a disentanglement loss that acts to remove the overlap between the private and the shared encoding [28].
Naturally, given the number of relevant methods, it is not feasible to compare with all of them. The regularization-based family presents two additional methods that we considered: Encoder Based Lifelong Learning (EBLL; [26]) and Memory Aware Synapses (MAS; [1]). EBLL extends LwF by adding a per-task auto-encoder, requiring further hyperparameter tuning. The literature shows that it only marginally improves over LwF for AlexNet-like architectures [3,1], and our attempts to apply it together with ResNets led to poor results. MAS was also shown in [3] to only slightly improve over LwF.
The LwF method and its modifications
The LwF method by Li et al. [22], belongs to the regularization-based family. However, unlike EWC and IMM, its regularization is data-driven. The method seeks to utilize the knowledge distillation loss [11] between the previous model and the current model to preserve the outputs of the previous task. Since maintaining the data of previous tasks is not desirable and rather not scalable, LwF uses only the current task data for knowledge distillation.
In the task-incremental setting, the learner is given a new set of labels to learn at each round. This set of classes is called a task. In LwF, the classifier is composed of two parts: the feature extractor $f$ and a classifier head $c_i$ for each task $i = 1, 2, \ldots, T$.
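Before turning to the losses, a minimal PyTorch sketch of this structure is given below (the class and method names are ours; the backbone would be one of the networks discussed in the Architecture section below).

```python
# Sketch of a shared feature extractor f with one linear head c_i per task.
import torch.nn as nn

class MultiHeadClassifier(nn.Module):
    """Shared backbone f with task-specific heads c_1, ..., c_T."""
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone      # shared feature extractor f
        self.feat_dim = feat_dim
        self.heads = nn.ModuleList()  # task-specific heads c_i

    def add_task_head(self, num_classes: int) -> int:
        """Register a new head for an incoming task; returns its task id."""
        self.heads.append(nn.Linear(self.feat_dim, num_classes))
        return len(self.heads) - 1

    def forward(self, x, task_id: int):
        # Task-incremental setting: the task id is known at inference time.
        return self.heads[task_id](self.backbone(x))
```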
Let $\{(x_j^t, y_j^t)\}$ be the set of training samples for task $t$. The cross-entropy loss is used as the primary loss for training the classifier $c_t \circ f$:
$$\mathcal{L}_{CE} = -\sum_j \log\left[c_t(f(x_j^t))\right]_{y_j^t},$$
where the subscript $y_j^t$ is used to denote the pseudo-probability of the classifier for the ground truth label. When learning a new task $t$, to maintain previous task knowledge, we employ knowledge distillation between the "old" feature extraction and the previous task classifier heads and the new ones. These are denoted by $f^o$ for the previous feature extractor network (as learned after task $t-1$), and $c_i^o$ for $i = 1, 2, \ldots, t-1$ for the previous heads. The learned feature extraction is denoted by $f$ and the updated task classifiers are denoted by $c_i$, for $i = 1, 2, \ldots, t$.
For simplicity, we describe the knowledge distillation process for one previous task $i$ and one sample $(x, y) \in \{(x_j^t, y_j^t)\}$ from the current task $t$. However, the process is repeated for the classifier heads of all previous tasks and all samples of task $t$, while summing up the individual losses.
Let $Y^o := [y_1^o, y_2^o, \ldots]$ be the vector of probabilities that the old classifier of task $i$, $c_i^o \circ f^o$, assigns to sample $x$. Similarly, let $Y := [y_1, y_2, \ldots]$ be the vector of probabilities for the same training sample obtained with $c_i \circ f$. To apply the knowledge distillation loss, these vectors are modified in accordance with some temperature parameter $\theta$:
$$\hat{y}_k^o = \frac{(y_k^o)^{1/\theta}}{\sum_l (y_l^o)^{1/\theta}}, \qquad \hat{y}_k = \frac{(y_k)^{1/\theta}}{\sum_l (y_l)^{1/\theta}}.$$
The temperature is taken to be larger than one, to increase small probability values and reduce the dominance of the high values. The knowledge distillation loss is defined as
$$\mathcal{L}_{KD} = -\sum_k \hat{y}_k^o \log \hat{y}_k,$$
where the summation is done over all labels of task $i$. We followed the authors' suggestions and in all our experiments set $\theta = 2$ and the regularization weight to one, i.e., the knowledge distillation loss had the same weight as the classification loss of the new task. It is worth mentioning that although the original LwF work [22] evaluated the method in the two-task scenario, it can be readily extended to any number of tasks by using the knowledge distillation loss over all $c_i^o$, $i = 1, 2, \ldots, t-1$. This further highlights the need for performing our research, since such an extension was previously done in the context of attempting to present the preferable performance of a new method. We also note that it was suggested in [22] to use a warmup phase at the beginning of training for each new task, in which both $f$ and $c_i$, $i = 1, 2, \ldots, t-1$, are frozen and one trains $c_t$ with the cross-entropy loss until convergence. However, since the effect of this seems negligible even in the original paper, we do not perform this. The authors also used regularization in the form of weight decay during training, which we remove to avoid the need to fit a regularization hyperparameter for each experiment. Moreover, in our initial experiments weight decay tended to hurt the accuracy of new tasks.
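The following PyTorch sketch illustrates the resulting objective for task $t$ (it assumes the MultiHeadClassifier sketch above and a frozen copy old_model of the network from task $t-1$; the function and variable names are ours, not the authors'). Dividing the logits by $\theta$ before the softmax is equivalent to raising the probabilities to the power $1/\theta$ and renormalising.

```python
# Illustrative sketch of the LwF loss (not the authors' code). theta = 2 and a
# distillation weight of 1 follow the choices stated in the text.
import torch
import torch.nn.functional as F

def distillation(new_logits, old_logits, theta=2.0):
    """Temperature-scaled cross-entropy between old (target) and new outputs."""
    log_p_new = F.log_softmax(new_logits / theta, dim=1)
    p_old = F.softmax(old_logits / theta, dim=1)
    return -(p_old * log_p_new).sum(dim=1).mean()

def lwf_loss(model, old_model, x, y, task_id, theta=2.0, lam=1.0):
    """Cross-entropy on task `task_id` plus distillation over all previous heads."""
    feats = model.backbone(x)
    loss = F.cross_entropy(model.heads[task_id](feats), y)   # new-task loss
    with torch.no_grad():                                     # frozen snapshot from task t-1
        old_feats = old_model.backbone(x)
        old_logits = [old_model.heads[i](old_feats) for i in range(task_id)]
    for i in range(task_id):                                  # sum the distillation terms
        loss = loss + lam * distillation(model.heads[i](feats), old_logits[i], theta)
    return loss
```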
Architecture
Li et al. [22] employed AlexNet [18] and VGGNet [30] to evaluate the performance of the method. Interestingly, even the recent review work by De Lange et al. [3] uses AlexNet as a reference network, despite ongoing advances in network architectures. There is also a key difference between the different versions of AlexNet-like architectures employed in [22] and [29]. The latter use Dropout [31], which as we show empirically, is detrimental.
We also offer to use the ResNet [10] architecture. We are not the first to attempt to use ResNets for LwF. Mallya et al. [24] employed LwF with a ResNet-50 network as an underperforming baseline. However, our experiments demonstrate that LwF mostly benefits from a Wide-ResNet [35] network rather than from deeper ones.
Data augmentation
Using a method with a shared model presents a challenge. On the one hand, the shared part must have enough capacity to learn new tasks. On the other hand, bigger networks are more vulnerable to overfitting when training on the first tasks. The parameter isolation family works around this problem by dynamically changing the capacity of the network as in PNNs [27] or learning a specific target network for each task with enough capacity for each task, like in Hyper-CL [34].
In addition to the capacity needs, another challenge that the LwF method faces is the need to mitigate the difference between the input distributions for different tasks. In the extreme, where the input distributions are very dissimilar, the knowledge distillation loss is no longer constraining the network to success on previous tasks.
Data augmentation, which is a well-studied technique for overcoming overfitting by virtually expending the dataset at hand, also has the potential to close the gap between different input distributions and therefore reduce forgetting. In our experiments, we employ a very basic set of augmentation consisting of random horizontal flips, color jitter (randomly change the brightness, contrast, saturation, and hue), and translation. As it turns out, these are sufficient to reduce the forgetting almost to zero, while substantially increasing the average accuracy for all tested settings.
Experiments
The common datasets for evaluating CF in classification problems include permutations of the MNIST data [32], a split of the MNIST data [20], incrementally learning classes of the CIFAR data sets [23], or on considering two datasets and learning the transfer between them [22]. Serrà et al. [29] points out the limitations of the MNIST setups, since these do not well represent modern classification tasks. The two-task scenario is criticized for being limited and does not enable the evaluation of CF for sequential learning with more than two tasks. CIFAR-100 splits are criticized for having tasks that are relatively similar in nature. However, in our experiments, performance on CIFAR-100 splits discriminates well between different methods and between different settings of the same method.
In addition to CIFAR-100 [17], we employ Tiny-ImageNet [19] in our experiments. The latter presents a higher diversity with more classes and the ability to challenge methods with longer and more meaningful sequences of tasks. To obtain a generic estimate, we shuffle the order of classes in each dataset and repeat each experiment setup five times with different seeds.
A common CIFAR setup, introduced in [36] offers to use CIFAR-10 as a first task, then split CIFAR-100 into five distinct tasks with 10 disjoint classes each. However, it may introduce a bias in evaluating task-incremental methods, since it makes the first task much larger and, therefore, conceals the problem of first-task overfitting. In this work, we consider a different setting, in which CIFAR-100 is divided into 5-Splits (i.e., 5-tasks), 10-Splits, and 20-Splits with 20, 10, and 5 classes in each task, respectively. Each class in CIFAR-100 contains 500 training images and 100 testing images. Each image size is 3 × 32 × 32. As a validation set, we shuffle the training data and use 90% as training examples and 10% as validation examples.
A recent work by De Lange et al. [3] employed Tiny-ImageNet as a benchmark using a similar setup to the CIFAR-100 setup above. However, they split the dataset to 20 disjoint tasks with 10 classes each. Since we opt for a longer sequence of tasks while still keeping them meaningful, we split the dataset into 40 disjoint tasks with 5 classes each. As our results will show, this setting pushes the limits of the task-incremental methods.
Each class in Tiny-ImageNet contains 500 training images, 50 validation images, and 50 testing images. The original image size for this dataset is 3 × 64 × 64. Since the test set is not publicly available, we use the validation set as a test set and as a validation set, we shuffle the training data and use 90% for training and 10% for validation.
To evaluate performance, we adopt the metrics of [23], average accuracy (ACC) and backward transfer (BWT):
$$\mathrm{ACC} = \frac{1}{T}\sum_{i=1}^{T} R_{T,i}, \qquad \mathrm{BWT} = \frac{1}{T-1}\sum_{i=1}^{T-1} \left(R_{T,i} - R_{i,i}\right),$$
where $T$ is the number of tasks and $R_{i,j}$ is the test accuracy score for task $j$ after the model learned task $i$. We note that $BWT < 0$ reports CF, while $BWT > 0$ indicates that learning new tasks helped the preceding tasks.
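The two metrics can be computed directly from the matrix of per-task accuracies, as in the following numpy sketch (indexing is 0-based here, unlike the 1-based notation above).

```python
import numpy as np

def acc_and_bwt(R: np.ndarray):
    """R[i, j] = test accuracy on task j after training on task i (shape T x T)."""
    T = R.shape[0]
    acc = R[T - 1].mean()                                  # average final accuracy
    bwt = (R[T - 1, :T - 1] - np.diag(R)[:T - 1]).mean()   # backward transfer
    return acc, bwt
```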
The effect of the network architecture
We first present experiments for LwF with various network architectures and no data augmentation. The AlexNet-like architecture [18] we use follows [29] and has three convolutional layers of 64, 128, and 256 filters with 4 × 4, 3 × 3, and 2 × 2 kernel sizes, respectively. On top, there are two fully-connected layers of 2048 units each. This network employs rectified linear units (ReLU) as activations, and 2 × 2 max-pooling after the convolutional layers. A Dropout of 0.2 is applied for the first two layers and 0.5 for the rest. All layers are randomly initialized with Xavier uniform initialization [7].
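As an illustration, the following PyTorch sketch reconstructs the AlexNet-like backbone described above; the spatial bookkeeping (no padding, 32 × 32 inputs, a 1024-dimensional flattened map) and the exact placement of the Dropout layers are our assumptions where the text leaves them open. Setting p1 = p2 = 0 corresponds to the no-Dropout variant discussed below.

```python
# Sketch of the AlexNet-like baseline (our reconstruction, not the paper's code).
import torch.nn as nn

class AlexNetLike(nn.Module):
    """AlexNet-like feature extractor used as a baseline backbone."""
    def __init__(self, p1: float = 0.2, p2: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4), nn.ReLU(), nn.MaxPool2d(2), nn.Dropout(p1),
            nn.Conv2d(64, 128, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2), nn.Dropout(p1),
            nn.Conv2d(128, 256, kernel_size=2), nn.ReLU(), nn.MaxPool2d(2), nn.Dropout(p2),
            nn.Flatten(),
            nn.Linear(256 * 2 * 2, 2048), nn.ReLU(), nn.Dropout(p2),
            nn.Linear(2048, 2048), nn.ReLU(), nn.Dropout(p2),
        )
        for m in self.modules():  # Xavier uniform initialisation, as in the text
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                nn.init.xavier_uniform_(m.weight)
                nn.init.zeros_(m.bias)

    def forward(self, x):          # x: (N, 3, 32, 32) CIFAR-sized input
        return self.features(x)    # 2048-dim features; task heads attach on top
```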
While LwF is commonly used with an AlexNet-like architecture [21,29,3], we opt to use more modern architectures. We choose to use the popular architecture family of ResNets. In this work, we use ResNet-20 (RN-20), ResNet-32 (RN-32) and ResNet-62 (RN-62) [10], as well as Wide-ResNet-20 networks with width factors 2 or 5 [35] (WRN-20-W2 and WRN-20-W5 respectively). Those networks employ ReLU activations and Batch Normalization layers [14]. All convolutional layers were randomly initialized with Kaiming normal inits with fan-out mode [9], and the normalization layers were initialized as constants with 1 and 0 for weight and bias, respectively. All architecture tested use separated fully-connected layers with a softmax output for each task as a final layer. More details can be found in the appendix.
In all experiments, LwF is trained up to 200 epochs for each task. We use a batch size of 64 and an SGD optimizer with a learning rate of 0.01 and a momentum of 0.9. We used the validation set to schedule the learning rate, where we drop the learning rate by a factor of 3 if there is no improvement in the validation loss for five consecutive epochs.
Training is stopped when the learning rate becomes lower than $10^{-4}$.
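This protocol can be expressed with a standard plateau scheduler, as in the sketch below (the model, data loaders and the train/evaluate helpers are placeholders, not code from the paper).

```python
# Sketch of the per-task optimisation protocol: SGD, plateau-based LR decay by a
# factor of 3, and stopping once the LR drops below 1e-4.
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=1.0 / 3.0, patience=5)

for epoch in range(200):
    train_one_epoch(model, old_model, train_loader, optimizer, task_id)  # placeholder helper
    val_loss = evaluate(model, val_loader, task_id)                      # placeholder helper
    scheduler.step(val_loss)
    if optimizer.param_groups[0]["lr"] < 1e-4:
        break  # stop training once the learning rate falls below 10^-4
```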
The results are depicted in Tab. 1. Our clearest and most significant result is that the underlying network has a great effect on LwF performance. While LwF with the AlexNet-with-Dropout architecture greatly suffers from forgetting, which results in low ACC, just removing the Dropout from the network results in a sizable performance boost. This makes sense: while using Dropout on the teacher side creates a strong teacher that can be viewed as a large ensemble of models that share weights [11], on the student side it weakens the regularization of LwF. Randomly choosing which weights to regularize ignores their importance for older tasks, which results in high forgetting.
Next, switching to RN-20 with an order of magnitude fewer parameters shows preferable performance. This change reveals the potential of LwF to obtain competitive ACC and BWT.
Following [3] we investigate the effect of width and depth of the architecture with the ResNet network on LwF performance. We used two deeper networks (RN-32 and RN-62) and two wider networks (WRN-20-W2 and WRN-20-W5). Our results (Tab. 1) show that while using a deeper network gives similar or inferior results compare to RN-20, using wider networks increases performance.
The effect of data augmentation
We conjectured in Sec. 3.2 that LwF performance can be further increased by using data augmentations. In this section, we conduct experiments on WRN-20-W5, which is the best performer among the tested architectures, with a relatively simple set of random augmentations: random horizontal translation of up to 3 pixels with reflection padding, random horizontal flip, and color jitter (brightness, contrast and saturation with jitter of 0.3 and hue with jitter of 0.2).
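Expressed with torchvision transforms, this augmentation set looks roughly as follows (a sketch for 32 × 32 CIFAR inputs; RandomCrop with reflection padding approximates the ±3-pixel translation, although it also shifts vertically, and the exact ordering and any normalisation are our choices).

```python
# Approximate torchvision equivalent of the augmentation set described above.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=3, padding_mode="reflect"),  # ~ +/-3 px translation
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.2),
    transforms.ToTensor(),
])
```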
The results are summarized in Tab. 2. As can be observed, applying augmentation in this setting leads to improvement in both ACC and BWT. Therefore, there is no trade-off between accuracy and forgetting. We emphasize that even though no augmentations protocol search was conducted and that the set of augmentations in use is rather small and simple, the performance boost is substantial.
Comparison with other methods
We consider two regularization-based methods: EWC [16] and IMM [20] and two parameter isolation methods: HAT [29] and Hyper-CL [34]. ACL [4] is considered as a recent hybrid method. As an upper bound for overall performance we consider a joint training method (JOINT), which for each incoming task, trains on the data of all tasks seen so far. The hyper-parameters for EWC, IMM and HAT were the best found in [29] and for Hyper-CL to the best found in [34]. For ACL, we quote the results mentioned in the paper, i.e. for AlexNet-like architecture with Dropout (both private and shared) and no augmentations at all. Following our findings for LwF, we opt to use all baseline methods with WRN-20-W5. However, we found that none of the baseline methods performs well with it. We found that some of the baseline methods are tightly coupled with the architecture originally presented in the paper. The authors of Hyper-CL [34] did an extensive hyperparameter search for both the hypernetwork and target architectures. They conclude that it is crucial to choose the right combination since it has a great effect on performance. Therefore, we used the best Hypernetwork-Target pair they found for the "chunked", more effective, version. This pair consists of a hypernetwork which has a linear layer that maps task and chunk embedding of size 32 each to a chunk of size 7000 of a ResNet-32 target network. Another coupling we found was for the HAT method, we could not achieve reasonable performance with an underlying ResNet architecture. We conjecture that the masking process in HAT needs to be adapted for usage with batch normalization layers, and report results with the AlexNet-like network presented by Serrà et al. [29].
Both EWC and IMM, although not coupled with specific architecture, were found to be under-performing with Table 3. Comparison between multiple methods. BWT and ACC in %. *JOINT does not adhere to the task incremental setup, and is performed in order to serve as the upper bound for LwF. **Slightly different AlexNet-like architecture than used in HAT with a similar capacity. o results reported in [4]; all other results are reproduced by us and are averaged over five runs with standard deviations. D=Dropout, RN=ResNet, WRN=WideResNet, Lin=a linear layer, H=Hypernetwork, M=Target network. WRN-20-W5, see appendix. We conjecture that the difference from LwF lies in the type of regularization term used by each method. LwF employs a 'soft' regularization on the network output for previous tasks, which handles statistical shift due to batch normalization better than the weightbased regularization. For the comparison table we use the best evaluated architecture for each method.
All methods, except Hyper-CL and ACL, use separated fully-connected layers with a softmax output for each task as a final layer. Hyper-CL employs a separate generated network for each task, and ACL employs a separate 3-layer MLP with softmax output for each task on top of private and shared concatenation.
Training
We made an effort to find the best training protocol for each method, based on the existing literature and initial experiments. For all methods except Hyper-CL we followed the same training protocol described in Sec. 4.1. For Hyper-CL, we use a batch size of 32 with the Adam optimizer [15] and a learning rate of 0.001. As for learning rate scheduling, Hyper-CL uses the validation accuracy to schedule the learning rate, dropping it by a factor of $(\sqrt{0.1})^{-1}$ if there is no improvement in the validation accuracy for 5 consecutive epochs. The Hyper-CL implementation further employs a custom multi-step scheduler adapted from Keras [2]. However, there is no early stopping in Hyper-CL. Also, no other regularization is used in any of the methods, except for the ones that are inherent to the method itself.
The Hyper-CL official implementation and the author's experiments use the test set for parameter selection in lieu of a proper validation set. We were able to fix and rerun the experiments in time only for the Hyper-CL experiments on CIFAR and not for the Hyper-CL experiments on Tiny-ImageNet. We observed that moving to an independent validation set reduces the performance of Hyper-CL on CIFAR by a significant margin. We, therefore, view the results obtained for this method on Tiny-ImageNet as an upper bound for the method's performance. We note that (i) Hyper-CL is by far the slowest method out of all methods tested, and (ii) On Tiny-ImageNet even though the results of this method are positively biased, the method is not competitive.
The comparison to the literature methods is provided in Tab. 3 and summarized in Fig. 1 for the best configuration of each method. Evidently, in contrast to the picture the literature paints, when a proper architecture and added augmentations are used, LwF, which is a simple regularization-based method, outperforms all other methods. The results also show that although IMM has evolved from EWC, both its variants are not competitive with EWC except for the smallest split (CIFAR 5-Split). When considering the augmentation mechanism, we have mixed results. Although augmentations increase ACC, they also increase forgetting for EWC and IMM-MEAN and only slightly reduce forgetting for IMM-MODE, which is still quite high. In contrast, for LwF, augmentations help both ACC and BWT.
HAT as originally conceived (recall that it is not compatible with ResNets), has a very competitive ACC in CIFAR and even outperforms Hyper-CL for the longer and more challenging sequence of tasks from Tiny-ImageNet. It also further benefits from the augmentation. For Hyper-CL, we can see that although it has a smaller capacity (considering only the hypernetwork learnable parameters for capacity computation) it outperforms all of the baselines for CIFAR when augmentation is used. However, this advantage does not generalize to the Tiny-ImageNet dataset, and it falls behind HAT, and even EWC, for a longer sequence, which further emphasizes the need for comparison over a diverse set of experiments. To check if this shortcoming is a result of the capacity of the model, we experimented with larger models, both for the hypernetwork and target network. We observed that the performance drops significantly in all experiments for the larger network. This result emphasizes the need for careful tuning of the Hyper-CL method, which is challenging since unlike other methods it requires the tuning of two architectures at once, which enlarges the space of possible hyper-parameters dramatically. We note also that [34] reported that out of many architectures tried, the smallest ones showed the best performance-compression ratio.
For ACL, we quote the results for CIFAR 20-Split with no augmentation from the paper itself [4]. The network used in the paper was similar to the one used by HAT. As the results show, ACL outperforms both HAT and Hyper-CL when no augmentation is used. LwF is not considered as a baseline in [4]. However, LwF outperforms ACL with WRN-20-W5 even without augmentation. We emphasize that the difference does not come from capacity, since both networks have a similar capacity as described in Tab. 1.
We further analyze the performance by evaluating ACC and BWT after learning each task. Fig. 2 shows the results for the longer sequences of tasks, 20 for CIFAR and 40 for Tiny-ImageNet (the results for the other experiments can be found in the appendix). One can observe that the methods differ in substantial ways. First, the non-LwF regularization methods, namely EWC and IMM, are not competitive with LwF from the early stages of the online training. The results also indicate that although more careful tuning between the primary loss and the regularization loss could be made, there is a high degree of trade-off between forgetting and new learning in these methods: EWC and IMM-MEAN favor old tasks (low forgetting, low ACC), while IMM-MODE favors new tasks (high forgetting, with a final ACC comparable to or higher than IMM-MEAN). Second, the same trade-off exists for HAT: while there is almost no forgetting, the accuracy on new tasks is lower. Since HAT is a parameter isolation method, we conjecture that it struggles to utilize the underlying architecture for learning new tasks. Third, while Hyper-CL and LwF seem close on CIFAR, an important difference is evident on Tiny-ImageNet. The ACC profile for Tiny-ImageNet, Fig. 2 (c), shows that Hyper-CL struggles to learn new tasks after task 34 is learned, and the drop in accuracy is not due to forgetting, as is evident from the BWT plot in Fig. 2 (d). Interestingly, this drop also enables EWC to outperform Hyper-CL through more consistent performance after the drop at task 8. Last, LwF, in both CIFAR and Tiny-ImageNet, enjoys the capability of learning new tasks and almost does not forget previous tasks. We conclude that, although LwF is a regularization-based method, given the right architecture and augmentation, it can maintain both the ability to learn new tasks and to not forget old ones, even at the tail of long task sequences.
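For readers who wish to reproduce such per-task curves, the sketch below computes ACC and BWT from a matrix of task accuracies, assuming the standard definitions (average accuracy after the final task, and the average change relative to the accuracy measured right after each task was learned). It is illustrative rather than the exact evaluation code used here.

```python
import numpy as np

def acc_bwt(R):
    """R[i, j] = accuracy on task j evaluated after training on task i (i >= j)."""
    R = np.asarray(R, dtype=float)
    T = R.shape[0]
    acc = R[T - 1].mean()  # average accuracy over all tasks after the last task
    bwt = np.mean([R[T - 1, j] - R[j, j] for j in range(T - 1)])  # backward transfer
    return acc, bwt

def curves(R):
    """ACC and BWT after learning each task t >= 2, as plotted over a task sequence."""
    R = np.asarray(R, dtype=float)
    return [acc_bwt(R[: t + 1, : t + 1]) for t in range(1, R.shape[0])]
```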
This emphasizes the need for a careful evaluation of each method. While EWC, IMM, HAT, and ACL outperform the AlexNet-based LwF with a Dropout architecture, they fall short when Dropout is removed and when more appropriate architectures are selected. The reason that these other methods do not suffer from Dropout is that they employ hard regularization on the weights, which takes their importance into account. However, as Fig. 2 shows, this type of regularization quickly results in a network utilization problem for fixed-size backbones.
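For contrast with the output-based regularizer sketched earlier, the following snippet shows the 'hard' weight-based penalty in its standard EWC form (a Fisher-weighted quadratic pull toward the parameters learned on previous tasks). It is a generic textbook formulation; the regularization strength lam is a placeholder, not the value used in our experiments.

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """Sum over parameters of lambda/2 * F_i * (theta_i - theta*_i)^2.

    fisher and old_params are dicts keyed by parameter name, holding the
    diagonal Fisher estimate and the parameter values after the previous task.
    """
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# total loss for the current task:
# loss = task_loss + ewc_penalty(model, fisher, old_params)
```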
Conclusions
Many of the recent task-incremental publications [21,29,1] compare with LwF and find their methods to be superior. These conclusions seem to arise from the little incentive authors have to explore the effect of the evaluation settings on prior work, or to invest effort in modernizing the form (e.g., architecture) of baseline methods. However, LwF itself is built on top of solid knowledge-distillation foundations and, as we show, can be upgraded to become extremely competitive.
We demonstrate that the LwF method can benefit from a higher capacity (width-wise) and a network that employs residual connections as well as from augmentations. It is not obvious that the method would benefit from these changes, as many of the other methods cannot benefit from ResNets due to the challenges of applying batch normalization and the need to carefully control the capacity. Moreover, not all methods benefit from augmentations in both ACC and BWT.
Overall, our contributions are two-fold. First, we provide strong baselines for task-incremental methods, which form a solid foundation for comparing future methods. Second, we show the effect of added capacity, residual architectures, and regularization in the form of augmentation on task-incremental methods, demonstrating sometimes paradoxical behavior in which changes expected to improve performance instead deteriorate it. We believe that LwF's ability to benefit from such improvements is a strong indication that this method will stand the test of time.
A. ResNets architectures
In section 4.1 of the main paper, we proposed using various ResNet architectures for LwF: RN-20, RN-32, RN-62, WRN-20-W2, and WRN-20-W5. All these networks share a common structure but differ in width or depth. This structure starts with a single convolutional layer of 16 filters with a kernel size of 3x3 and stride 1, followed by 3 groups of "blocks". Each group is parameterized by the number of blocks, the width, and the stride of the first block in the group. The baseline widths (width factor equal to 1) of the groups are 16, 32, and 64, with strides of 1, 2, and 2, respectively.
To implement the blocks, the BasicBlock class from the PyTorch framework is employed. Each block contains 2 convolutional layers with a kernel size of 3x3 and a skip connection. The structure ends with an adaptive average pooling of size 1x1. Moreover, each convolutional layer is followed by a batch normalization layer and a ReLU activation function. The parameters of the architectures in our work are listed below (a schematic construction sketch follows the list):
• RN-20: a width factor of 1 and 3 blocks in each group.
• RN-32: a width factor of 1 and 5 blocks in each group.
• RN-62: a width factor of 1 and 10 blocks in each group.
• WRN-20-W2: a width factor of 2 and 3 blocks in each group.
• WRN-20-W5: a width factor of 5 and 3 blocks in each group.
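The sketch below shows one way to instantiate these variants from the two hyper-parameters above (blocks per group and width factor). It is a schematic re-implementation for illustration, not the exact code used in our experiments, and it defines a minimal BasicBlock rather than importing the framework class; task-specific heads are added on top of the returned backbone.

```python
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Two 3x3 conv layers, each with batch norm and ReLU, plus a skip connection."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.shortcut(x))

def make_resnet_backbone(blocks_per_group=3, width=1):
    """RN-20: (3, 1); RN-32: (5, 1); RN-62: (10, 1); WRN-20-W2: (3, 2); WRN-20-W5: (3, 5)."""
    layers = [nn.Conv2d(3, 16, 3, stride=1, padding=1, bias=False),
              nn.BatchNorm2d(16), nn.ReLU(inplace=True)]
    in_ch = 16
    # Three groups with baseline widths 16, 32, 64 and strides 1, 2, 2 for the first block.
    for base_width, stride in zip((16, 32, 64), (1, 2, 2)):
        out_ch = base_width * width
        for b in range(blocks_per_group):
            layers.append(BasicBlock(in_ch, out_ch, stride if b == 0 else 1))
            in_ch = out_ch
    layers.append(nn.AdaptiveAvgPool2d(1))
    layers.append(nn.Flatten())  # feature vector; per-task softmax heads go on top
    return nn.Sequential(*layers)
```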
B. LwF with AlexNet and data augmentations
In the main text, the best architecture, namely WRN-20-W5, is tested for LwF with data augmentations. In this section we provide results for AlexNet-like architectures with augmentations as well; the results are given in Tab. 4. We observe that the data augmentations do not provide recovery from the harmful Dropout component in AlexNet-D. However, they do provide a performance boost for AlexNet-ND, as expected.
C. Width vs. depth for LwF
In Fig. 3 we offer another view of the effect of depth and width on LwF. The results are provided for the baseline ResNet architecture, RN-20, and two architectures of comparable capacity: one with greater depth, RN-62, and another with greater width, WRN-20-W2. The results show that although RN-62 and WRN-20-W2 exhibit a similar amount of forgetting, from task 2 onward RN-62 underperforms with respect to ACC.
This suggests that LwF with a deeper ResNet network is struggling to acquire new knowledge while keeping the previous one. Comparing RN-62 with RN-20 highlights a more severe problem where LwF is struggling to utilize deeper networks both in terms of ACC and BWT. However, increased width has a positive effect on performance over time, even at the price of increased forgetting. Fortunately, we were able to mitigate this increased forgetting with data augmentations, which not only reduced forgetting substantially but also increased ACC.
D. EWC and IMM with WRN-20-W5
In our experiments we found EWC and IMM (both MEAN and MODE variants) to perform poorly with ResNet architectures and specifically with WRN-20-W5. The results, for this architecture, can be found in Tab. 5. As can be seen, using WRN-20-W5 the methods are not competitive and perform lower than when using the AlexNet-like architecture, as quoted in the main paper. This performance gap suggests that the methods require modifications in order to enjoy more modern architecture, like ResNet. We attribute this to the challenge imposed by the batch normalization layers.
E. ACC and BWT over time
In Fig. 4 we provide the BWT and ACC scores after learning each task for CIFAR-100 with 5 and 10 splits. These results were omitted from the main text for brevity and provided here as complementary results.
Similarly to the results shown in the paper (main text Fig. 2), the advantage of LwF over the baseline methods is evident. LwF can learn new tasks with a level of performance similar to the previous ones while maintaining the knowledge from the previous tasks. In contrast, both EWC and IMM fail to do so. For HAT, the difference in performance between different CIFAR-100 splits, where the performance is more stable for a short sequence of tasks, could point to an insufficient per-task capacity problem. However, since LwF can both learn new tasks and maintain old ones with a similar capacity, this points to an under-utilization of the network capacity. Thus, we suspect that HAT is not scalable to long task sequences even with larger networks. Although Hyper-CL seems to have very competitive results for these splits, its shortcoming is revealed in the main paper when looking at a longer sequence of tasks, such as Tiny-ImageNet.
"year": 2021,
"sha1": "5943648779544910b0590676f1c78b35dcfd8ac7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2107.12304",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5943648779544910b0590676f1c78b35dcfd8ac7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
miR-150-Based RNA Interference Attenuates Tubulointerstitial Fibrosis through the SOCS1/JAK/STAT Pathway In Vivo and In Vitro
We investigated whether microRNA-150 (miR-150)-based RNA interference (RNAi) ameliorates tubular injury and tubulointerstitial fibrosis. Mice injected with folic acid developed tubulointerstitial fibrosis at day 30. miR-150 levels were increased at day 7 and peaked at day 30. At day 30, protein levels of α-smooth muscle actin, fibronectin (FN), and collagen 1 (COL-1) were increased, while suppressor of cytokine signal 1 (SOCS1) was decreased. Kidneys manifested increased macrophage numbers and increased expression of potential mediators: interferon-γ, interleukin-6, and tumor necrosis factor-α. Locked nucleic acid-anti-miR-150, started prior to or after tubular injury and administered twice weekly for 4 weeks, reversed renal inflammation and fibrosis. In HK-2 cells, co-culture with macrophages increased miR-150 expression and decreased SOCS1. Janus kinase (JAK) and signal transducer and activators of transcription (STAT) pathway-related proteins p-JAK1, p-JAK2, p-STAT1, p-STAT3, and pro-fibrotic genes encoding α-smooth muscle actin, FN, and COL-1 were all upregulated. The miR-150 antagonist reversed these transcriptional changes. Lastly, in renal biopsies from patients with chronic interstitial fibrosis, renal miR-150, and pro-fibrotic gene expression and macrophage numbers were increased, while SOCS1 expression was decreased. In conclusion, miR-150-based RNAi is as a potential novel therapeutic agent for tubulointerstitial fibrosis, suppressing the SOCS1/JAK/STAT pathway and reducing macrophage influx.
INTRODUCTION
The prevalence of chronic kidney disease (CKD) is approximately 8%-16% worldwide and CKD is the 16th most common cause of death. 1 Renal tubulointerstitial fibrosis (TIF) is common pathologic feature of all CKD. 2 To date, there are few effective therapeutics to prevent or retard progression of TIF. We may reduce the prevalence of CKD through a deeper understanding the pathogenesis of TIF and by identifying novel therapeutic agents that target key mediators of renal fibrosis.
MicroRNAs (miRNAs or miRs) are recognized as critical contributors to kidney diseases. [3][4][5] We have identified miR-150 as the most overexpressed miRNA in renal biopsies from patients with lupus nephritis and have reported that miR-150 promotes renal fibrosis by downregulating suppressor of cytokine signal 1 (SOCS1) in kidney resident cells in vitro. 6 Subsequently, Guan et al. 7 reported that in renal ischemia/reperfusion model, miR-150 promotes kidney fibrosis by activating fibroblasts as a signal from injured tubular epithelial cells. Transfection of miR-150 into young mesangial cells promotes cell aging. 8 Acute kidney injury (AKI) induced by ischemia/reperfusion is attenuated in miR-150 knockout mice. 9 Renal miR-150 levels are increased in minimal change disease cases with poor prognosis compared, to those with good outcome; as these cases often represent unsampled focal segmental glomerulosclerosis, this suggests an association of miR-150 with renal fibrosis. 10 In addition, plasma and urinary miR-150 have emerged as potential progression biomarkers in CKD patients, including those with diabetic kidney disease. [11][12][13] Taken together, these findings suggest that miR-150 contributes to the pathogenesis of various scarring kidney diseases and is an intriguing and novel therapeutic target.
Here, we aimed to determine whether LNA-anti-miR-150 attenuates TIF induced by folic acid in mice. We investigated the underlying mechanisms by silencing overexpression of miR-150 in HK-2 cells co-cultured with macrophages. We show for the first time the effect of miR-150 based RNAi to reduced tubulointerstitial fibrosis and suggest a possible mechanism of action.
RESULTS
Renal Expression of miR-150 from AKI to TIF following Folic Acid Administration to Mice
AKI developed 2 days after folic acid injection, as indicated by the elevated ratio of kidney weight to heart weight, serum creatinine (Scr), and blood urea nitrogen (BUN). These parameters improved by day 7 and returned to baseline by day 30 (Figures 1A-1C). At day 2, features of AKI were present: loss of the proximal tubular cell brush border, detachment of tubular epithelial cells, and a modest inflammatory cell infiltrate were seen on periodic acid-Schiff (PAS)-stained sections. At day 7, cortical tubular dilation remained and interstitial inflammatory cell numbers were increased, and at day 30, medullary rays displayed patchy fibrosis and severe infiltration of inflammatory cells (Figure 1D). The severe histological fibrosis suggests transition of AKI to chronic interstitial fibrosis. Importantly, miR-150 levels were increased at day 7 and remained increased through day 30, as assessed by fluorescence in situ hybridization (FISH) and qPCR (Figures 1E and 1F).
Efficacy and Safety of LNA-Anti-miR-150 in TIF Mice
We have previously demonstrated that LNA-anti-miR-150 is delivered to mouse kidneys following systemic administration. 25,26 Therefore, we investigated the efficacy and safety of LNA-anti-miR-150 in folic acid-induced TIF on day 30. Mice received eight LNA anti-miR-150 doses, either prophylactically (on day 0 before folic acid) or therapeutically (on day 2 after folic acid injection). Review of Masson and PAS stained sections, as well as semiquantitative analysis (Figures 2A-2D), revealed that both courses of LNAanti-miR-150 therapy decreased renal fibrosis, as indicated by reductions in the area of scarred cortex, regions of patchy fibrosis in the medulla, and the severity of inflammatory cell infiltrates. Further, both prophylactic exposure to LNA-anti-miR-150 and therapeutic use (starting after AKI) reduced renal overexpression of miR-150 on day 30 ( Figure 2E). LNA therapy appeared to lack toxicity, as judged by stable body weight and stable kidney and liver function ( Figures S1A-1C).
Effect of LNA-Anti-miR-150 on Renal Pro-Fibrotic Genes in TIF Mice
To investigate the underlying anti-fibrotic mechanisms of LNA-anti-miR-150 in mice with TIF, we examined the expression of selected pro-fibrotic gene mRNAs and proteins, including α-smooth muscle actin (α-SMA), fibronectin (FN), and collagen 1 (COL-1), in kidney tissues. Levels of α-SMA, FN, and COL-1 mRNA were all increased in TIF mice compared to control mice. These increased mRNA levels were reduced by LNA-anti-miR-150, compared to the scrambled LNA, with both prophylactic treatment and therapeutic use (Figure 3A). The same effect of LNA-anti-miR-150 was seen on the renal protein levels of these pro-fibrotic genes in TIF mice on western blotting and immunofluorescence staining, compared to the scrambled LNA mice (Figures 3B and 3C).
Effect of LNA-Anti-miR-150 on Renal SOCS1 in TIF Mice
Our previous study reported that miR-150 promotes renal fibrosis by downregulating anti-fibrotic SOCS1 as assessed by SOCS1 luciferase reporter activity in human proximal tubular cells. 6 In present study, we further characterized renal SOCS1 expression in TIF mice. Renal SOCS1 protein expression was downregulated up to 44% in folicacid-induced TIF compared to control mice. Both early and delayed treatment with LNA-anti-miR-150 restored SOCS1 levels to normal, as assessed by western blotting and immunofluorescence staining, while scrambled LNA had no effect ( Figures 4A and 4B).
Effect of LNA-Anti-miR-150 on Renal Inflammation in TIF Mice
Macrophages are key contributors to the pathogenesis of tissue inflammation and fibrosis. 27 In the present study, we examined the effects of LNA-anti-miR-150 on macrophages. We measured renal RNA expression of the cytokines interferon-γ (IFN-γ), interleukin-6 (IL-6), and tumor necrosis factor-α (TNF-α), all of which are produced by inflammatory cells including macrophages in mice with interstitial fibrosis. Both early and delayed treatment with LNA-anti-miR-150 abolished the increases in renal IFN-γ, IL-6, and TNF-α mRNAs (Figures 5A-5C). We further examined the expression of two macrophage proteins, CD68 and F4/80, in mouse kidneys manifesting interstitial fibrosis. Renal CD68 expression increased on immunofluorescence staining (Figure 5D) and F4/80 protein levels were upregulated on western blotting of tissue from TIF mice. LNA-anti-miR-150 reversed these changes, while scrambled LNA had no effect (Figure 5E). T lymphocytes also secrete cytokines, and we found that both early and delayed treatment with LNA-anti-miR-150 reduced renal infiltration of CD3+ T cells (Figure S2).

To further investigate the mechanism by which LNA-anti-miR-150 attenuated folic acid-induced tubular damage and the initiation of renal fibrosis, we performed an in vitro study. Previously we found that LNA-anti-miR-150 reduced macrophage infiltration to achieve anti-fibrotic effects in lupus nephritis mice with glomerular damage leading to renal fibrosis. 25 Therefore, we co-cultured immortalized human kidney proximal tubular cells (HK-2 cells) with macrophages. As we have previously demonstrated by luciferase reporter assay that miR-150 decreases SOCS1 gene expression, 6 we next studied expression of the Janus kinase (JAK)/signal transducer and activators of transcription (STAT) genes, which are mechanistically downstream of SOCS1, and transfected a miR-150 antagonist into HK-2 cells. We found that miR-150 and SOCS1 were co-expressed in untreated HK-2 cells (Figure 6A). We further found that miR-150 levels were increased in HK-2 cells following co-culture with macrophages, compared to HK-2 cells cultured alone. The miR-150 antagonist reversed the increase in miR-150 induced by macrophages, compared to scrambled negative control oligonucleotides (Figure 6B). Among the mRNA levels of fibrosis-related genes, SOCS1 was increased, while α-SMA, FN, and COL-1 were all reduced toward normal by the miR-150 antagonist (Figures 6C and 6D). With regard to protein expression of SOCS1 and phosphorylation of its downstream JAK/STAT targets in HK-2 cells co-cultured with macrophages, we found the following changes: decreased SOCS1; increased p-JAK1, p-JAK2, p-STAT1, and p-STAT3; and increased pro-fibrotic proteins α-SMA, FN, and COL-1. Following transfection of the miR-150 antagonist into HK-2 cells, the mRNA and protein expression levels of these genes were similar to the normal control (Figure 6E).
Renal Expression of miR-150 and Its Regulated Proteins in Human Subjects
To extend our findings to the clinical setting, we established biopsies from ten patients with chronic tubulointerstitial nephritis (cTIN). We selected ten kidney tissue samples obtained from age-and sexmatched controls (Table S1). Renal fibrosis was present in renal biopsies from cTIN patients on Masson staining ( Figure 7A). FISH showed increased expression of miR-150 in tubular epithelial cells ( Figure 7B). In addition, double staining of miR-150 and aquaporin-1 (AQP1), a proximal tubule marker, confirmed localization of miR-150 to proximal tubules ( Figure S3). Increased expression of pro-fibrotic proteins a-SMA, FN, and collagen-1 was present (Figure 7C). Infiltration of CD68 + macrophages and CD3 + T cells was notable ( Figures 7D and S2B). Expression of anti-fibrotic protein SOCS1 was decreased in cTIN renal biopsies compared to normal control kidneys ( Figure 7E).
DISCUSSION
The major findings of this study are as follows. In a mouse model of tubulointerstitial fibrosis induced by folic acid, (1) LNA-anti-miR-150 attenuated renal fibrosis, and (2) the decrease of SOCS1 and increase of a-SMA, FN, and COL-1, and macrophage infiltration were reversed by LNA-anti-miR-150. (3) The addition of macrophages to HK-2 cells increased miR-150 expression and regulated the SOCS1/ JAK/STAT pathway, while miR-150 antagonist reversed the changes of those proteins, and (4) the changes of miR-150, fibrosis-related proteins, and macrophages present in mice with tubulointerstitial fibrosis and in cultured HK-2 cells were also observed in renal biopsies from patients with chronic tubulointerstitial nephritis.
We previously found an increased expression of miR-150 in renal biopsies from lupus nephritis patients and showed that miR-150 promotes renal fibrosis by downregulating SOCS1 in resident kidney cells. 6 Subsequently, transfection of an miR-150 mimic to young mesangial cells was shown to promote cell aging by Liu et al. 8 Renal miR-150 levels are increased in minimal change disease cases with a poor prognosis, suggesting that these may represent biopsies that from cases of focal segmental glomerulosclerosis in which the biopsy did not retrieve any scarred glomeruli. 10 In mice with AKI following experimental myocardial infarction, miR-150 deficiency alleviated renal fibrosis occurring 8 weeks later. 28 Guan et al. 7 reported that overexpression of miR-150 in tubular epithelial cells promotes kidney fibrosis by promoting the activation and proliferation of fibroblasts. Taken together, these data suggest that miR-150 contributes to renal fibrosis associated with glomerular injury or tubular damage. Renal fibrosis is a complex process and many genes participate in the pathogenesis. 2 It remains unclear whether miR-150 is a key upstream gene that can target multiple pro-fibrotic genes. Inhibition of miR-150 activity is one approach to gain insight into this issue.
Recent clinical trials have used miRNA-based RNAi to treat human diseases. 14 These include LNA-anti-miR-155 for lymphoma and leukemia and LNA-anti-miR-9 for heart failure. 14 In terms of miRNAbased RNAi in kidney diseases, anti-miR-21 is being tested in a phase II clinical trial targeting hereditary nephritis. 29 However, these clinical trials are all ongoing clinical studies and the results have not been disclosed. These clinical trials do suggest a promising era of miRNA-based RNAi serving as a therapeutic approach and the importance of preclinical studies in CKD. Such studies include miRNA-based RNAi including miR-192, miR-34a, miR-29a, and miR-25 in experimental diabetic nephropathy. 9,15-20 Further, miR-132 inhibitor attenuated kidney injury in mice of unilateral ureteral obstruction, 21 anti-miR30a ameliorated lipopolysaccharide-induced AKI, and anti-miR-709 improved cisplatin-induced AKI in mice. 22,23 Most of the above studies have used a single miRNA to treat a specific disease. Most recently, we found that LNA-anti-miR-150 ameliorates renal fibrosis initialed from glomerular injury in mouse models of lupus nephritis and focal segmental glomerulosclerosis mice. 25,26 These findings encouraged us to determine whether miR-150-based RNAi can attenuate tubulointerstitial fibrosis initiated following renal tubular damage.
We focused here on renal fibrosis, a common feature of all end-stage kidney diseases. We selected folic acid-induced a progressive severe renal fibrosis from AKI in mice for in vivo study. We tested initiating treatment with LNA-anti-miR-150 at day 0 before folic acid injection versus at day 2 after AKI onset, with obvious early renal tubular damage. The former experiment model is a prevention approach that could be used in patients at high risk for AKI, while the latter experiment model is a treatment approach for those patients with established AKI. With regard to folic acid nephrotoxicity, we found that both early treatment and delayed treatment attenuated renal fibrosis ( Figure 2). These findings are consistent with our previous work in experimental glomerular diseases. 25,26 To our knowledge, this is the first experimental study examining the therapeutic effect of miR-150 inhibitor in renal tubulointerstitial fibrosis using both preventive and therapeutic strategies.
To investigate the mechanisms by miR-150 contributes to renal fibrosis in the setting renal tubular damage, we examined fibrosisrelated genes, inflammatory cytokines, and kidney resident macrophages. The proteins a-SMA, FN, and COL-1 are characteristic of progressive renal fibrosis. 30 We found that both early and delayed treatment with LNA-anti-miR-150 decreased the expression of fibrotic mRNAs ( Figure 3) and increased the renal mRNA levels of anti-fibrotic gene SOCS1 (Figure 4). These data are supported by our previous data showing that SOCS1 is a target of miR-150. 6 miR-150 has been demonstrated to play a key regulatory role in many immune cells. For example, miR-150 may control B cell development or function by targeting c-Myb; 31 miR-150 regulates the development of NK cells and induced NKT cells; 32 and miR-150 promotes differentiation and cytolytic function of CD8 + T cells and suppresses function of CD4 + T cells. 33,34 In the present study, LNA-anti-miR-150 reversed the renal infiltration of inflammatory cells ( Figure 2B) including CD3 + T cells in folic acid-induced renal fibrosis ( Figure S2). We speculate that the different effects of miR-150 on different T cell subsets might be due to its cell-specific features. We also found increased numbers of renal macrophages and tubulointerstitial fibrosis, and LNA-anti-miR-150 reversed the increase of macrophages ( Figure 5). Macrophages play diverse roles renal inflammation and fibrosis. 27 We have explored mechanisms of miR-150 regulation of macrophages.
We conducted co-culture studies involving renal tubular HK-2 cells and macrophages, followed by transfection of a miR-150 antagonist. We found co-expression of miR-150 and SOCS1 in HK-2 cells by double staining with FISH for miR-150 and IF for SOCS1. These results suggested the interaction between miR-150 and SOCS1 (Figure 6A). We also found that miR-150 antagonist reversed the macrophages-induced decrease of SOCS1, the increase of a-SMA, FN, and COL-1, as well as overexpression of macrophage CD68. In addition, the miR-150 antagonist also partially suppressed the macrophagesinduced upregulation of p-JAK1, p-JAK2, p-STAT1, and p-STAT3 in HK-2 cells ( Figure 6). Our findings suggest that the miR-150/ SOCS1/pro-fibrotic protein pathway was not the only pathway contributing to the pathogenesis of folic acid-induced renal fibrosis. In this model, SOCS1/JAK/STAT pathway may also act downstream of miR-150. Consistent with the findings from other researchers, a dysregulated SOCS1/JAK/STAT pathway also contributes to the pathogenesis of inflammatory disorders in murine diabetic nephropathy and in rats of cisplatin-induced AKI, as shown from in vivo and in vitro data. 35,36 Macrophages are known to activate the SOCS1/JAK/STAT pathway. 37 Our findings suggest that macrophages activate the SOCS1/JAK/STAT pathway following kidney injury and thus contribute to the pathogenesis of renal fibrosis. The renal protective role of miR-150 inhibitor in renal fibrosis might be mediated by regulating SOCS1/JAK/STAT pathway in addition to inhibiting macrophage functions, as shown in the graphic abstract. To our knowledge, this is the first time the addition of JAK/STAT has been shown to act downstream genes whose expression is activated by miR-150 in renal fibrosis in folic acid-induced tubular injury in a mouse model and in HK-2 cells.
Lastly, we compared findings obtained from mice with folic-acidinduced interstitial fibrosis and co-cultured HK-2 cells with macrophages in renal biopsies from patients with chronic tubulointerstitial nephritis. In the renal biopsies, we found that renal overexpression of miR-150 and pro-fibrotic proteins including of a-SMA, FN, and COL-1, suppression of SOCS1, and the increased infiltration of macrophages indicated by CD68 positive staining (Figure 7). The findings in renal biopsies from patients with chronic interstitial nephritis were similar to those in murine with tubulointerstitial fibrosis and in HK-2 cells co-cultured with macrophages. It is well known that the injured proximal tubules can also elicit an extensive pro-inflammatory response via multiple mechanisms. 38 The limitation of our current study is that we did not investigate the possible differentiation between proximal tubule-specific inflammation and macrophageinduced inflammation. This will be conducted in our future project.
In conclusion, the renal protective mechanism of miR-150-based RNAi may be mediated by regulating pro-fibrotic genes and inflammation via infiltration of macrophages and SOCS1/JAK/STAT pathway. miR-150 based RNAi appears to be a novel promising therapeutic agent for renal tubulointerstitial fibrosis.
TIF Mouse Model
Male ICR mice (12-16 weeks old, 30-35 g) were purchased from Beijing Vital River Laboratory Animal Technology (Beijing, China), housed 4 per cage, and allowed free access to standard food and drinking water. Mice were maintained under a 12 h light/dark cycle with fixed temperature at 24 C and humidity (40%-55%). Folic acid (Sigma-Aldrich, MO, USA) was peritoneally injected to male ICR mice to induce AKI on day 2 and TIF on day 30 after one bolus of injection. 39 Animal studies were approved in advance by the Animal Care and Use Committee of China Medical University (15052111), and were performed following NIH Animal Care and Use Guidelines.
Animal Experimental Design and Samples Collection
Natural History of Folic-Acid-Induced Kidney Injury
Male ICR mice (n = 24) were used to determine the natural history of folic-acid-induced kidney injury. Folic acid was dissolved in 0.3 mM NaHCO 3 solution. Approximately 0.2 mL of folic acid (250 mg/kg body weight) or vehicle were injected intraperitoneally into each mouse on day 0. Mice were anesthetized and blood samples were collected from abdominal aorta and the kidneys were collected after perfusion with PBS to remove intrarenal blood on day 2, day 7, and day 30 ( Figure S4A).
The Effect of LNA-Anti-miR-150 Treatment on TIF in Mice on Day 30
We administered LNA-anti-miR-150 or scrambled LNA (Exiqon, MA, USA), each at a dose of 2 mg/kg, twice weekly for a total of eight doses. ICR mice (n = 36) were divided into 6 groups: normal control, folic acid alone, prophylactic treatment (LNA-anti-miR-150 or scrambled LNA) starting at day 0 before AKI induction, and delayed treatment (LNA-anti-miR-150 or scrambled LNA) starting at day 2 after AKI induction, for 28 or 30 days, respectively. As described above, blood samples and kidneys were collected on day 30 after folic acid injection (Figure S4B).
For all experimental animals, serum was isolated from blood samples and stored at −80 °C. The kidneys were divided into four parts: one piece was fixed in 4% paraformaldehyde and embedded in paraffin, one piece was placed in optimal cutting temperature compound (OCT) (Sakura, CA, USA) and stored at −80 °C, and the remaining two tissue pieces were stored at −80 °C for isolation of total protein and RNA.
Cell Lines
HK-2 cells and the human monocytic cell line THP1 were purchased from ATCC (Manassas, VA, USA). HK-2 cells were cultured in DMEM/F12 medium supplemented with 10% FBS. THP1 cells were cultured and differentiated into macrophages for the co-culture experiments.
Co-culture of HK-2 Cells and Macrophage Cells and Transfection of miR-150 Inhibitor
HK-2 cells (10 5 /mL) were seeded in the isolated upper compartments of a Transwell system plate and were allowed to grow to 70%-80% confluence. Small interfering RNA (siRNA; Ribobio, Guangdong, China) including siRNA against miR-150 and its negative control were transfected to HK-2 cells using Lipofectamine 3000 (Invitrogen, CA, USA) according to the manufacturer's instructions. The lower compartment of the Transwell plate with cultured macrophages covering 90% disc was set on the lower chamber of the co-culture system. HK-2 cells were harvested 24 h after the transfection and co-culture ( Figure S4C).
Kidney Tissues from Human Subjects
Renal biopsies were from ten cTIN patients and were obtained between June 2018 and December 2019 in the Department of Nephrology. cTIN was diagnosed by two renal pathologists. The exclusion criteria for cTIN were the patients with age <18 or >75, hypertension, diabetes, possible hepatitis B infection, tumors, and pregnancy. Normal kidney tissues were obtained from the nephrectomy of patients with renal tumor at The Department of Urology. These tissues located at least 5 cm away from the tumors. 25 A human subject research protocol was approved in advance by the Institutional Review Board of Shengjing Hospital of China Medical University. All subjects provided the written consent form. Renal function of human subjects was measured by clinical central laboratory of the hospital. Age and gender of human subjects were matched (Figure S4D; Table S1).
Histology and Immunofluorescence Studies
Quantitative scoring of tubular injury was assessed on PAS-stained sections using a semiquantitative scale as described previously. 40 Tubulointerstitial fibrosis was assessed on Masson-stained sections as the estimated percentage of blue area relative to total area. 41 Frozen OCT-fixed kidney tissue sections (5 μm) were incubated with antibodies against α-SMA, FN, COL-1, SOCS1, CD68, and CD3 at 4 °C overnight, followed by incubation with secondary antibodies conjugated with Alexa 568 or Alexa 488 (Table S2) at room temperature for 1 h. After three washes with PBS, slides were mounted with 4',6-diamidino-2-phenylindole (DAPI) medium for 10 min. Images were captured using immunofluorescence microscopy (Nikon, Tokyo, Japan). Staining intensity was semi-quantified with Image-Pro Plus 6.0 (Media Cybernetics, MD, USA) as previously described. 25,26

Western Blotting
Equal amounts of total protein from kidney tissues (50 μg) or HK-2 cells (30 μg) were separated by SDS-PAGE and transferred onto polyvinylidene fluoride (PVDF) membranes (Millipore Immobilon-P, MA, USA). The membranes were incubated at 4 °C overnight with primary antibodies against α-SMA, FN, COL-1, SOCS1, F4/80, and JAK/STAT pathway-related proteins, including JAK1, p-JAK1, JAK2, p-JAK2, STAT1, p-STAT1, STAT3, and p-STAT3 (Table S2). After washing, goat anti-mouse/rabbit immunoglobulin G (IgG) secondary antibodies were added for 1 h at room temperature (Table S2). Antibody-antigen binding was detected with high-sig ECL western blotting substrate and visualized by the Tanon 5500 imaging system (Shanghai, China). Protein loading variation was normalized to α-tubulin or GAPDH. Blot density was analyzed with NIH ImageJ software (Bethesda, MD, USA).
FISH of Kidney Tissues
For FISH, sections from paraffin-embedded human kidney biopsies, normal control kidney tissues, and mouse kidney tissues were cut at 4 mm thickness. Sections were deparaffinized, rehydrated, digested with trypsin, and hybridized with a digoxigeninhorseradish peroxidase (DIG-HRP)-labeled oligonucleotide probe complementary to miR-150 (Table S3) at 37 C overnight following by stained with anti-DIG-HRP antibody for 50 min, FITC-tyramine signal amplification for 5 min, and DAPI for 5 min. Images were captured by immunofluorescence microscopy (Nikon, Tokyo, Japan). Positive staining areas were semi-quantified as reported by Huang et al. 42
Double Staining of FISH and IF
For double staining of miR-150 and AQP1 in human kidneys, FISH of miR-150 was performed with the protocol described above. After washing three times with PBS, slides were incubated with primary antibody against AQP1at 4 C overnight (Table S2). Following three washes with PBS, the slides were incubated with secondary antibody labeled with Alexa 568 for 1 h in the dark at room temperature, followed by a PBS wash. Slides were mounted with DAPI medium for 5 min. Images were captured by immunofluorescent microscopy (Nikon, Tokyo, Japan).
For double immunofluorescence staining of miR-150 FISH and SOCS1, HK-2 cells were cultured on slides placed in culture dishes with HK-2 medium. At confluence, cells were fixed using in situ hybridization fixative (Servicebio, Wuhan, China) for 15 min at room temperature. FISH to detect miR-150 and IF of SOCS1 were performed with the same protocol as described for kidney tissues.

Total RNA was isolated from the frozen kidney tissues and harvested HK-2 cells using TRIzol reagent (Life Technologies, CA, USA) according to the manufacturer's instructions. Primers were designed using Primer Express (Applied Biosystems, CA, USA) and synthesized by Life Technologies (Shanghai, China). The real-time fluorescence signal was detected with the QuantStudio 6 Flex quantitative real-time PCR system (Applied Biosystems). Actin and sno202/U6 were used as endogenous controls for mRNA and mouse/human miR-150, respectively (Table S3). The relative level of each target gene was calculated using the 2^(−ΔΔCt) method (ΔCt: Ct of endogenous control − Ct of individual target gene).

Statistical Analysis
SPSS 22.0 (SPSS, IL, USA) and Prism 8.0 (GraphPad, CA, USA) software were used for statistical analysis and graphing. Quantitative data are expressed as mean ± SD. Differences between groups were analyzed for statistical significance by one- or two-way ANOVA for animal and cell data, and t test for human data. A value of p < 0.05 was considered statistically significant.
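As an aside on the quantitative real-time PCR analysis above, the short sketch below illustrates the 2^(−ΔΔCt) arithmetic. Note that sign conventions for ΔCt differ between write-ups; this snippet follows the common Livak target-minus-reference convention for illustration and is not the analysis script used in this study.

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene relative to a calibrator (control) sample.

    Livak method: delta_ct = Ct(target) - Ct(reference gene);
    delta_delta_ct = delta_ct(sample) - delta_ct(control); fold = 2 ** (-ddCt).
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example: target Ct 24.0 vs. reference Ct 18.0 in the treated sample, and
# 26.0 vs. 18.0 in the control, gives 2 ** (-(6.0 - 8.0)) = 4-fold up-regulation.
```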
AUTHOR CONTRIBUTIONS
J.L. contributed to conducting experiments and manuscript writing. H.Z. designed, supervised the experiments, and finalized the manuscript. D.W., X.C., and D.L. participated in data collection and analysis. C.J., Y.Z., and Y.W. contributed to human kidney collection. J.F., P.S.T.Y., and J.P. participated in advisory roles. J.B.K. provided advice and edited the manuscript.
CONFLICTS OF INTEREST
The authors declare no competing interests.
"year": 2020,
"sha1": "dd54d91f723af3ce5597bc4b68bbc687f3d8d0cc",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2162253120303206/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d3267b6ea225ca75240d3ef12c8780fcbf0503dc",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Dysregulated RNA processing and metabolism: a new hallmark of ageing and provocation for cellular senescence
The human genome is capable of producing hundreds of thousands of different proteins and non‐coding RNAs from <20 000 genes, in a co‐ordinated and regulated fashion. This is achieved by a collection of phenomena known as mRNA processing and metabolism, and encompasses events in the life cycle of an RNA from synthesis to degradation. These factors are critical determinants of cellular adaptability and plasticity, which allows the cell to adjust its transcriptomic output in response to its internal and external environment. Evidence is building that dysfunctional RNA processing and metabolism may be a key contributor to the development of cellular senescence. Senescent cells by definition have exited cell cycle, but have gained functional features such as the secretion of the senescence‐associated secretory phenotype (SASP), a known driver of chronic disease and perhaps even ageing itself. In this review, I will outline the impact of dysregulated mRNA processing and metabolism on senescence and ageing at the level of genes, cells and systems, and describe the mechanisms by which progressive deterioration in these processes may impact senescence and organismal ageing. Finally, I will present the evidence implicating this important process as a new hallmark of ageing, which could be harnessed in the future to develop new senotherapeutic interventions for chronic disease.
The hallmarks of ageing
The hallmarks of ageing are a series of basic health maintenance mechanisms that together describe the molecular, cellular and systemic effects that drive or result from the ageing process in multiple species. At the time of writing, nine interconnected and multifaceted hallmarks have been described, which are as follows: genomic instability, epigenetic alterations, mitochondrial dysfunction, altered intercellular communication, deregulated proteostasis, deregulated nutrient sensing, telomere attrition, stem cell exhaustion and cellular senescence [1]. The interconnections between hallmarks mean that impacting one hallmark can have significant effects on other hallmarks. The criteria for a phenomenon to be defined as a hallmark are (a) it should occur during normal ageing, (b) its experimental induction should result in accelerated ageing and (c) its experimental abrogation should bring about improvement to aspects of the ageing phenotype. The hallmarks of ageing may represent useful points of future therapeutic intervention for the diseases of ageing.

Cellular senescence; a driver of organismal ageing

Cellular senescence is one of the most intensively studied hallmarks of ageing and is described as a permanent cell cycle arrest, which occurs in response to cellular damage or cell stress [2]. Senescent cells have important functions in young systems, including roles in protection from tumorigenesis, and tissue remodelling and repair during development and wound healing [3]. However, during the ageing process, senescent cells accumulate in response to cumulative cellular insult and impaired immune clearance. There are multiple types of senescent cells and senescence can be provoked by multiple stimuli [4]. Senescent cells have many features that differentiate them from their nonsenescent counterparts [5], which are depicted in Fig. 1 and described in Table 1. Importantly, senescence has since been linked with multiple common, chronic diseases of ageing in animal models and in humans, including lung fibrosis [6], osteoarthritis [7], age-related macular degeneration [8], neurodegeneration [9], vascular dysfunction [10], cardiovascular disease [11], chronic kidney disease [12], diabetes [13] and nonalcoholic fatty liver disease (NAFLD) [14]. Selective removal of senescent cells has been demonstrated to result in improved lifespan and healthspan in animal models [15]. Subsequent work in these systems has demonstrated beneficial effects on brain ageing and neurodegenerative disease [16] and musculoskeletal function [17]. More recently, selective removal of senescent cells by induced apoptosis (senolysis) has been demonstrated to result in clinical improvement in humans in the context of idiopathic lung fibrosis [18] and diabetic kidney disease [19].

Fig. 1. The characteristics of senescent cells. This figure illustrates the characteristics of senescent cells. These include mitochondrial dysfunction, secretion of the senescence-associated secretory phenotype, characteristic changes to gene expression, the presence of senescence-associated heterochromatic foci (SAHF), increased resistance to apoptosis, altered size and morphology, the high lysosomal content indicated by high levels of senescence-associated beta galactosidase, genetic damage and exit from cell cycle.
Dysregulated mRNA processing and metabolism: a link between splicing and stress

A unifying feature of most of these provocations for senescence is an aberrant response to different types of cellular stress. Eukaryotic cells have several mechanisms to deal with internal and external stresses, but one of the most important is alternative mRNA processing and metabolism [20][21][22][23]. This term refers to the collection of phenomena that happen to an RNA molecule from its transcription to its degradation, to ensure correct regulation of gene expression. Newly transcribed mRNAs are processed to add a 5′ cap structure, undergo splicing to remove non-protein-coding intronic sequences and are subject to the addition of a poly A tail prior to export from the nucleus. Following export, RNAs can be transported to specific subcellular localisations, stored and sequestered, or they may be translated (if they code for proteins). Their stability is regulated post-transcriptionally by the action of microRNAs and other non-coding RNA (ncRNA) species, or RNA-binding proteins (RBPs). At the end of their lifecycle, they are then degraded by the exosome (Fig. 2). Ribosomal RNAs (rRNAs), transfer RNAs (tRNAs) and messenger RNAs (mRNAs) all undergo processing, and there is evidence that suggests that metabolism of all three RNA species is associated with ageing and/or cellular senescence [24,25]. Messenger RNA (mRNA) processing specifically is the collective set of phenomena that allow most eukaryotic genomes to adjust their transcriptomic output in response to internal and external environmental cues. It brings exceptional adaptability and plasticity to the human genome, and accordingly, over 95% of all human genes express more than one isoform [26]. Alternative isoforms may have differential spatial or expression patterns, and often confer alternative or antagonistic function. For example, the VEGFA gene produces two main classes of isoform, some of which promote angiogenesis, and some of which are anti-angiogenic [27]. There are seven primary forms of alternative mRNA isoform production. These are alternative promoter usage, alternative polyadenylation, cassette exon usage, mutually exclusive exon usage, alternative 5′ splice site usage, alternative 3′ splice site usage and retained introns. Alternative mRNA splicing is regulated at two levels: at the level of DNA or pre-mRNA sequence, and by the binding of a series of splicing regulator proteins (Fig. 3). There are a number of sequence elements that regulate splicing: the core elements (the 5′ splice site, the 3′ splice site, the polypyrimidine tract and the branch point), which are binding platforms for elements of the spliceosomal machinery and are essential for splice site usage. Mutations in these regions are often causal for inherited genetic disease [28]. Then, there are a set of auxiliary binding sites termed exon and intron splicing silencers (ESS, ISS) and enhancers (ESE, ISE) [29]. These motifs are responsible for more regulated splicing, and are primarily responsible for the plasticity of splicing. These sequences are important because they bind important spliceosomal components or splicing regulatory proteins responsible for alternative splicing. Serine arginine-rich (SRSF) splicing factors bind to ESE and ISE elements and usually, but not exclusively, promote splice site usage.
Conversely, heterogeneous nuclear ribonucleoprotein particles (hnRNPs) bind to ISE and ISS elements and usually, but again not exclusively, inhibit splice site usage. The combinatorial balance of splicing activators and inhibitors determines whether a given splice site is used and a given isoform expressed [30].

Table 1. Characteristics of senescent cells. The characteristics of senescent cells are given below. BrdU, bromodeoxyuridine; EdU, 5-ethynyl-2'-deoxyuridine; MMP, matrix metalloproteinase; ROS, reactive oxygen species; SAHF, senescence-associated heterochromatic foci; SA-β-Gal, senescence-associated beta galactosidase.
Regulation of splicing factors
Splicing factors themselves are regulated at multiple levels. They are regulated at the level of transcription by repeated and constitutive activation of cellular signalling pathways such as ERK and AKT [31], but also interface with other signalling pathways, including AMPK, FOXO1 and mTOR [32], due to crosstalk between pathways. They are thus influenced by a great many of the stimuli classically associated with ageing, including inflammation, DNA damage, mitochondrial dysfunction and dysregulated nutrient sensing [33][34][35]. Many SASP-associated cytokines associated with paracrine senescence are regulated by the NF-κB pathway [36], which in turn is also influenced by ERK signalling. Splicing factor expression may initially increase in acute response to inflammatory factors, but chronic inflammation exerts a negative influence on their expression. Many splicing factors also feed forward and regulate the processing and stability of inflammatory genes [37,38]. Although individual splicing factors exert patterns of temporal and spatial specificity of expression, they are co-ordinately regulated at the level of transcription by 'master control genes' such as FOXO1 and ETV6, which lie downstream of ERK and AKT [31], and also by the DNA damage response gene ATM [39]. Splicing factors are also regulated by phosphorylation at the protein level; their subcellular localisation is controlled by kinases of the CLK and SRPK classes [40], and by the action of AKT [41]. Finally, splicing factors frequently regulate their own expression by the inclusion or exclusion of poison exons which promote degradation by the nonsense-mediated decay pathway [42,43].

Fig. 2 legend: Gene expression is co-transcriptional, so these processes may occur simultaneously. Following transcription, messenger RNA transcripts are processed to add a 5' cap structure, spliced to remove introns and undergo the addition of a poly A tail. Processed mRNAs are then exported from the nucleus to the cytoplasm, where they may be translated (if they code for proteins) or act as non-coding RNAs that regulate other genes. RNAs may be sequestered at specific cellular locations for later translation. RNA transcripts can be dynamically regulated by the action of non-coding RNAs or RNA-binding proteins and are degraded at the end of their functional lifespan. Exonic sequences are shown in blue, intronic sequences in red. The nuclear membrane and nuclear pore are indicated by a curved coral line. The cap structure is given by a blue circle and the ribosome is given in pale orange. The nascent polypeptide is given by green circles.
Splicing factors, ageing and senescence
Gene set enrichment analyses (GSEA) of age-related gene expression signatures deriving from human peripheral blood indicates that the pathways that regulate alternative splicing are amongst the major pathways disrupted by age in multiple human populations [44]. Splicing factor genes are usually, but not exclusively, downregulated during ageing and senescence, but this may differ from tissue to tissue [45]. Each individual splicing event, however, is determined by combinatorial binding of splicing factors to individual splicing regulatory sequences [30], the directionality of effect for different splicing factors in different contexts may thus vary. Changes in splicing factor expression are manifested as changes to programs of alternative splicing in different organ systems in ageing people [44,46,47], and can be explained at least in part by upstream changes in splicing regulation [48]. Splicing factor changes have also been extensively reported by ourselves and others in senescent cells of multiple cell and tissue types [39,[49][50][51][52][53][54][55]. Splicing factor expression has been shown to associate with lifespan in humans [56] and in other species [56,57], and similarly are causally involved in response to other lifespan-extending phenomena such as dietary restriction (DR) in humans [58] and in other species [57]. Interestingly, changes in splicing factor expression and the consequent downstream effects on splicing are not a feature of ageing in naked mole-rats, a species with exceptionally long life and negligible senescence [59]. Disrupted patterns of alternative splicing are a major characteristic of many common chronic age-related diseases, such as cancer [60], neurodegenerative disease [61], osteoarthritis and cardiovascular disease [62], and also more systemic diseases of the elderly, such as frailty and sarcopenia [63]. Most of these diseases are also characterised by senescence as described above. Splicing changes are also prevalent in rare diseases associated with premature ageing such as idiopathic pulmonary fibrosis [64] and Hutchinson Gilford Progeria Syndrome (HGPS) [65]. There are also links between common chronic disease and dysregulation of splicing factors. Levels of the HNRNPA0, HNRNPM and AKAP17A are predictively associated with multiple ageing phenotypes in humans, where splicing factor levels measured at baseline are associated with later cognitive decline and hand grip strength, a measure of sarcopenia [66]. There is a well-characterised increase in transcriptional noise with ageing, but recent findings suggest that measures of expression at the level of mRNA processing may be a better determinant of the ageing process [67].
Potential mechanistic links between disrupted splicing regulation and senescence
The likely candidacy of disrupted mRNA processing regulators as a major driver of senescence stems from the multifunctional nature of these proteins ( Fig. 4 and Table 2). The global nature of splicing-associated changes to the transcriptome means that effects are likely to be far reaching for the transcriptome as a whole. A total of 98% of genes undergo alternative splicing [26], and are represented in every cellular process. Progressive and irreversible dysregulation of splicing regulation will therefore inevitably lead to farreaching consequences for cells and systems. Furthermore, the auto-regulatory nature of splicing factor regulation means that disruption to the homeostasis of splicing regulation is likely to result in ongoing and increasing disruption.
There are also isoforms of known senescence genes with altered functionality or expression. Isoforms of CDKN1A (which encodes p21) demonstrate different temporal dynamics in response to doxorubicin, with p21 variant 2 showing a slower, but more marked response to genotoxic stimuli [68]. Isoforms of ANRIL, derived from the CDKN2A locus which also encodes p14, p15 and p16, also demonstrate altered functionality in response to different senescence-inducing provocations [69]. Splicing response is also tightly coupled to DNA damage [70]. Many genes with important roles in damage repair have alternatively spliced isoforms. The cyclin D1 gene encodes several isoforms with different abilities to initiate the DNA damage response (DDR), with the cyclin D1a isoform able to initiate DDR, but the cyclin D1b isoform lacking this ability [71]. Similarly, an age-related increase in the expression of the truncated Tp53 isoform D40p53 is associated with an accelerated ageing phenotype and increased levels of senescence [72]. Disrupted splicing is likely to result not only in changes to the abundance of canonical isoforms, but also in the occurrence of aberrant splicing events. Recent evidence has also suggested that dynamic retained intron events occur in senescence and in aged tissues, and are negatively correlated with the expression of their host genes [73]. Accumulation of retained introns has also been described in the pre-symptomatic stage of ageing in wild-type mouse models [74]. Aberrant isoforms may have dominant negative properties, altered functional characteristics or misdirected subcellular localisation, which may have profound implications for cells, tissues and systems and contribute to ageing and senescence phenotypes.
Fig. 4. The mechanisms by which dysregulated splicing factor expression could induce and maintain cellular senescence. These include genome-wide disruption of alternative splicing, dysregulation of mRNA export, dysregulation of RNA stability, disruption of mRNA surveillance pathways such as nonsense-mediated decay, increase in aberrant splice products, stabilisation of mRNAs encoding inflammatory factors, impaired transcriptomic response to cellular insult resulting in reduced molecular resilience, failure of telomere maintenance and uncoupling of transcription-coupled DNA damage and splicing interactions.
Altered levels of splicing factors may also compromise the ability of an organism to react to challenging stimuli in its environment and reduce transcriptomic resilience, leading to multiple forms of cellular stress. As described above, cellular stress is a major provocation for senescence. Many genes use mRNA surveillance pathways such as nonsense-mediated decay (NMD) as part of their normal regulation, in addition to its role in mRNA quality control [75]. Proper splicing is essential for NMD, as the deposition of the exon-junction-complex (EJC) at the site of spliced exons is a signal for initiation of degradation [76]. Without effective NMD, RNA quality control mechanisms may be compromised, as has been observed in nematode models [77] and in humans [78].
Many splicing factors also have roles in RNA stabilisation or destabilisation [79,80]. In addition to their role in splicing, splicing factors such as hnRNPD (also known as AUF1) act to destabilise their targets through binding to A-rich elements (AREs) in their 3′ untranslated regions [81]. Other splicing factors such as hnRNPA1 can stabilise, rather than destabilise, their targets [82]. The aberrant production of transcripts with alternative 3′ untranslated regions can have consequences for their regulation by microRNAs or RNA-binding proteins, another stress-responsive gene regulatory mechanism [83]. This has particular relevance when the role of splicing factors in regulation of multiple components of the SASP is considered. Many pro-inflammatory cytokines are regulated by A-rich elements in their 3′ untranslated regions [84,85]. Dysregulated expression of splicing factors can thus alter the negative regulation of SASP factors, leading to induction of chronic levels of inflammation.
Splicing factors also have additional roles in RNA export [86,87]. The ability to shuttle processed RNAs to the cytoplasm for translation or storage is a key component of the quality control of gene expression. Defects in this process lead to the accumulation of aberrant or mislocalised RNAs in the cell and are commonly observed in neurodegenerative disease [88].
Many splicing factors have roles in telomere maintenance [89][90][91]. HNRNPF and H associate with the G quadruplex elements in the hTERC component of the telomerase holoenzyme and modulate telomerase activity and telomere length [92]. Other splicing factors can promote recruitment of telomerase to telomeres [93] or modulate the accessibility of the telomerase promoter to its transcription factors [91]. Telomerase itself is also regulated during development by alternative splicing [94]. Finally, there are known interactions between mRNA processing factors and the transcription-coupled DNA damage response. ATM, an initiator of the DNA damage response, is known to be a negative regulator of splicing factor expression [39]; the mRNA processing pathway is, however, co-transcriptional, not linear. RNA molecules may exist in a state where there may be interactions not only between RNA and mRNA processing factors, but also between nascent transcripts and DNA. Evidence is mounting that dysregulated coordination between different RNA processing steps may introduce defects in genome stability by disrupting the interactions between nascent RNA and the DNA template [95]. DNA damage is also able to directly modify splicing proteins via PARylation, arginine methylation, acetylation, ubiquitination/sumoylation or phosphorylation [70].
Table 2. Mechanisms by which dysregulation of splicing factor expression or activity can modulate senescence phenotypes. The progressive dysregulation of splicing factor gene expression and activity over time is predicted to interface with many of the molecular mechanisms underpinning cellular homeostasis and cellular health, by virtue of their multifunctional nature. Some of the pathways that may be affected by dysregulated expression of splicing factors are given below.
Mechanism | Consequence | References
Changes to patterns of canonical splicing | Expression of prosenescence isoforms | [68,69,71]
Increase in aberrant splicing | Increase in transcriptional noise and production of isoforms with altered functionality | [73,74]
Disruption to RNA surveillance | Failure of RNA quality control and fine tuning of gene expression | [77,115]
Altered dynamics of RNA turnover | Enhanced stability of SASP mRNAs | [81,82]
Defective RNA export | Altered subcellular localisation of mRNAs, nuclear accumulation and impaired translation of mRNAs | [88,116]
Compromised telomere maintenance | Telomere shortening and initiation of replicative senescence | [89][90][91]
Decreased plasticity and adaptability of transcriptome | Decreased molecular stress resilience | [117]
Uncoupling of DNA damage response and splicing regulation | Accumulation and faulty repair of DNA damage | [70,95]
Splicing factor dysregulation may thus act at many points in the development of senescence, and contribute to multiple senescence phenotypes. These proteins play multiple and interlinked roles in many aspects of the initiation and persistence of the senescence phenotypes, including DNA repair, telomere shortening, decreased molecular stress resilience and secretion of the SASP. As such, this phenomenon is poised to interface with multiple hallmarks of ageing, with influence on many aspects of ageing biology.
Splicing factors as potential therapeutic targets for age-related disease
Many important gene regulatory proteins are tightly regulated to maintain correct homeostatic levels, and splicing factors are no exception. Splicing factors are normally held in a narrow homeostatic range by a combination of autoregulation and transcriptional control via cellular signalling pathways, many of which are responsive to age-associated stimuli [31,39]. During the ageing process, repeated and constitutive activation of signalling pathways such as AKT and ERK exerts negative regulatory pressure on splicing factor expression via the transcription factors ETV6 and FOXO1 [31]. Splicing factors also commonly auto-regulate via the inclusion or exclusion of a poison exon, which promotes mRNA degradation [96]. The expression of most splicing factors declines with age [44]. It therefore follows that restoration of splicing factor levels back within their normal homeostatic constraints may confer an advantage to the cell. Early studies suggested that some splicing factors may be upregulated by treating cells with the polyphenol resveratrol [97], a molecule long associated with healthspan benefits [98][99][100]. Subsequent studies demonstrated that resveratrol and associated analogues produced a global restoration of splicing factor levels, which was associated with reversal of multiple aspects of cellular senescence in primary human dermal fibroblasts [101]. Similar results were obtained in studies where primary human dermal fibroblasts were treated with specific inhibitors of the negative upstream signalling pathways ERK and AKT, or their downstream effectors FOXO1 or ETV6 [31]. Other studies demonstrated that it was possible to uncouple reversal of different aspects of the senescence phenotype and produce reversal and attenuation of the SASP, without re-entry to the cell cycle, in primary human endothelial cells by treating cells with mitochondria-targeted hydrogen sulphide (H2S) donors [102]. These data clearly demonstrate the potential utility of small molecule or genetic modulation of splicing factor expression for attenuation of cellular senescence, but challenges still remain in terms of specific delivery to senescent cells, translation to systemic models and precise dosing to maintain homeostasis. In this respect, the presence of auto-regulatory feedback loops may actually be an advantage, as overstimulation will elicit a negative regulatory response and attenuation of splicing factor mRNA levels. The development of splicing regulatory modulators is in its infancy, but nevertheless holds great potential for future senotherapeutics.
Dysregulated mRNA processing: the 10th hallmark of ageing?
Dysregulation of mRNA processing fulfils all the criteria for categorisation as a new hallmark of ageing. The first criterion is that a hallmark must occur during normal ageing. Data suggest that splicing factor dysregulation does occur during human ageing, both at the level of populations [44] and in multiple cell types at the individual cellular level in terms of senescence [31,39,49,50,101,102]. This is echoed by disruption to patterns of alternative splicing in multiple tissues in multiple species during ageing [48,56,103,104], except in animals that have negligible senescence [59]. The second criterion is that experimental induction of defects in RNA processing should result in accelerated ageing. Data on this aspect are harder to find, as complete knockout of many splicing factors is lethal and experimentally mimicking transcriptome-wide patterns of splicing defects consistent with those found in ageing is difficult. Mice where the HNRNPD gene has been ablated do show evidence of accelerated ageing, however, including kyphosis (hunched back), reduced subcutaneous fat and reproductive organ atrophy [91]. Experimental depletion of HNRNPD or SRSF2 expression also yields cellular senescence [102]. Similarly, genetic perturbation of HNRNPA3, SRSF7 and SRSF4 expression levels was sufficient to provoke senescence, as was disruption of transcriptome-wide splicing patterns using a pharmacological inhibitor of SF3B1, an important component of the U2 snRNP involved in branch site recognition [53]. Similarly, depletion of the pre-mRNA processing factor Prp19 promotes cellular senescence and premature ageing in mouse skin [95]. The third criterion for a hallmark of ageing is that its experimental abrogation should bring about improvement to aspects of the ageing phenotype. Restoration of splicing factor expression using small molecule or genetic means has been shown to be capable of rescuing multiple aspects of the senescent cell phenotype [31,101,102]. Similarly, overexpression of the RNA processing factor PRP19 has been shown to extend human endothelial cell lifespan in vitro by increasing stress resilience and DNA repair capacity [105]. Examples of the beneficial effect of systemic restoration of splicing regulation on in vivo ageing models are not so well-documented, but overexpression of Prp19 has been shown to increase lifespan in a Drosophila systemic model [106]. It should also be noted that the original description of the hallmarks of ageing states that not all the hallmarks are fully supported yet by interventions that succeed in ameliorating ageing [1].
Conclusion and open questions
Given the evidence presented here, it should be considered that dysregulated mRNA processing fulfils at least as many of the criteria as some of the phenomena already designated as hallmarks of ageing, and as such should be designated accordingly (Fig. 5). The links between dysregulated mRNA processing, cellular senescence and ageing are now beginning to coalesce into a clearer picture, which is perhaps unsurprising given that correct alternative splicing is a fundamental prerequisite to cellular health. Approaches to restore splicing regulation and transcriptomic plasticity may therefore represent useful new tools in our senotherapeutic armoury, although there are still questions to answer. First, what is the most facile and effective methodology to restore splicing factor levels to their correct homeostatic positions? Is it sufficient to do this systemically, or do we need to restrict our interventions to target organs and cell types? The answer to this will largely depend upon the desired outcome; restoring splicing regulation in accessible target organs in the context of relieving aspects of age-related disease is likely lower hanging fruit than tackling bigger and more systemic issues. A second question concerns the impact of splicing-targeted senotherapies on the non-senescent cells resident in the same organs. In this scenario, it may be advantageous to consider senescent cell targeting, or strategies to attenuate only aspects of the senescent cell phenotype such as the SASP. There is still work to do here and this field is in its infancy, but it represents an exciting new future therapeutic avenue. | 2022-04-24T06:17:15.133Z | 2022-04-23T00:00:00.000 | {
"year": 2022,
"sha1": "f3b7ccb9c9014cd71aedc15e35731f4517d021fa",
"oa_license": "CCBY",
"oa_url": "https://ore.exeter.ac.uk/repository/bitstream/10871/129598/1/The%20FEBS%20Journal%20-%202022%20-%20Harries%20-%20Dysregulated%20RNA%20processing%20and%20metabolism%20%20a%20new%20hallmark%20of%20ageing%20and%20provocation%20(1).pdf",
"oa_status": "GREEN",
"pdf_src": "Wiley",
"pdf_hash": "0d89c91ee87281644cf5572b337c17431d242b74",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266466851 | pes2o/s2orc | v3-fos-license | Epidemiological, Clinical, and Laboratory Findings of 235 Hospitalized COVID-19 Adult Patients (Survivors and Non-Survivors) at Sohar Hospital in Oman
Objectives: The aim of this study was to describe the epidemiological and clinical characteristics and laboratory findings of coronavirus disease 2019 (COVID-19) patients at the Sohar Hospital, Sohar, Oman. Methods: This retrospective study of admitted COVID-19 patients at Sohar Hospital in Oman was carried out from March to October 2020. Demographics and laboratory data of 19 tests for 235 COVID-19 patients, of whom 202 were survivors and 33 were non-survivors, were collected from the hospital information system after ethics approval. Results: Thirteen factors were significantly correlated with in-hospital mortality, including older age, having chronic disease, high neutrophil count, high troponin T, high creatinine, low albumin (p < 0.0001), high white blood cell (WBC) count, low hemoglobin, high D-dimer (p < 0.001), high C-reactive protein (CRP) (p < 0.002), low lymphocyte count (p < 0.003), high alkaline phosphatase (ALP) enzyme (p < 0.007) and high ferritin (p < 0.045). The most common laboratory blood test abnormalities that were highly correlated with mortality were increased values of CRP (100% of non-survivors), D-dimer (94.1% of non-survivors), ferritin (88.2% of non-survivors), and troponin T (85% of non-survivors) and reduced lymphocyte count (73.9% of non-survivors). Conclusion: These findings could help in categorizing COVID-19 patients for risk-based assessment and early identification of patients with poor prognosis.
Introduction
The novel strain of coronavirus, called severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2), first emerged in Wuhan, China, in late 2019 [1]. After around one month, the new virus had been transmitted to 19 different countries [1]. Human-to-human transmission outside of China was first recorded at the end of January 2020. The World Health Organization (WHO) declared a public health emergency of global concern due to the rapid spread of coronavirus disease 2019 (COVID-19). After less than two months (March 11, 2020), as the spread of COVID-19 increased dramatically worldwide, WHO declared COVID-19 a pandemic [2,3]. In Oman, the first case of COVID-19 was recorded in February 2020 and the first death was confirmed in March 2020 [4]. Up to October 11, 2022, this new virus had infected 627,018,121 people and killed 6,562,012 worldwide [5].
COVID-19 has different clinical presentations, ranging from asymptomatic, mild, or severe symptoms to death [6]. The early identification of COVID-19 patients who are at high risk for poor outcomes is crucial for better management [6]. Therefore, it is vital to identify risk factors for early prediction of the severity of COVID-19 in patients.
Several studies stated that blood biomarkers could play a significant role in the diagnosis, staging, and treatment strategies of patients with COVID-19 [7,8]. Generally, it has been found that patients with a viral infection have low neutrophil count, low to normal concentration of C-reactive protein (CRP), and elevated concentration of myxoma resistance protein (MxA1) compared to patients with bacterial pneumonia [7]. A recent study revealed that levels of ferritin, procalcitonin (PCT), and CRP in serum were correlated with the severity of the COVID-19 infection [8]. In Chinese patients, a study showed that COVID-19 patients had increased concentrations of interleukin (IL) 1B, interferon (IFN) γ, interferon gamma-induced protein 10 (IP10), and monocyte chemoattractant protein 1 (MCP1) [2]. It has also been shown that age, serum lactate dehydrogenase (LDH), CRP, red blood cell distribution width (RDW), blood urea nitrogen (BUN), albumin, and direct bilirubin can be used as accurate prognostic indicators for COVID-19 patients [9].
A recent meta-analysis mentioned that studying alterations in values of biochemical and hematological tests can help in prognosis prediction of COVID-19 patients [10]. In line with these findings, several laboratory investigations have been identified as useful prognostic predictors and could help clinicians make decisions regarding the management of this disease [11,12]. Another method is to use a panel of blood biomarkers combined with clinical symptoms [7]. According to a study conducted by Huang et al., the most common symptoms of COVID-19 infection in Chinese patients were fever (98% of patients), cough (76% of patients), and myalgia or fatigue (44% of patients) [2]. Other less common symptoms were sputum production (28% of patients), headache (8% of patients), hemoptysis (5% of patients), and diarrhea (3% of patients) [2].
Blood biomarkers of COVID-19 and their link with in-hospital mortality have not yet been investigated amongst patients in the northern regions of Oman. Al Harthi et al. carried out a study in central Oman (Al Nahdha Hospital) to understand the sociodemographic, clinical, radiological, and laboratory features of confirmed cases of COVID-19, comparing patients who required critical care and those who did not [13].
In order to further improve our understanding of the disease, this research was conducted at Sohar Hospital, Sohar, Oman, with the aim of identifying significant laboratory investigations that could play a role in early prediction of COVID-19 severity and their possible correlation with mortality.
Study design
This study is a retrospective study of COVID-19 patients admitted to Sohar Hospital, Oman, between March 1, 2020 and October 30, 2020.
Study population
All adult COVID-19 patients (>14 years old) admitted at Sohar Hospital between March 1, 2020 and October 30, 2020 were eligible to be included in this study. COVID-19 was confirmed by positive results of real-time reverse transcriptase-polymerase chain reaction (RT-PCR) (BD MAX™, Becton, Dickinson and Company, Franklin Lakes, New Jersey, United States) and Gene-Xpert (Cepheid, Sunnyvale, California, United States). Exclusion criteria were pregnant women, post-partum women, hospital staff, patients younger than 14 years old, patients discharged within 24 hours, and patients with less than two blood tests performed.
Data collection
The patients' data were generated from their electronic medical records. They included demographic information such as age and gender, clinical presentation, and laboratory results. The clinical information consisted of the patient's presenting symptoms such as fever, cough, myalgia or fatigue, sputum production, headache, diarrhea, difficulty in breathing, sore throat, and runny nose, in addition to other information such as the presence of chronic illness, COVID-
Statistical analysis
Data were analyzed using IBM SPSS Statistics for Windows, Version 24.0 (Released 2016; IBM Corp., Armonk, New York, United States). Data were summarized as frequencies and numbers for categorical variables or medians with interquartile ranges (IQRs) for continuous variables. Groups were compared using Mann-Whitney tests for continuous variables and the Chi-squared test for categorical data, using a 95% confidence interval (CI); p < 0.05 was considered statistically significant.
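As a rough illustration of this analysis plan, the sketch below shows how the group comparisons described above could be reproduced in R rather than SPSS. The data frame `covid` and its column names (`outcome`, `age`, `sex`) are hypothetical placeholders rather than the study's actual variable coding, and the simulated values carry no clinical meaning.

```r
# Minimal sketch of the described analysis, assuming a hypothetical per-patient table
set.seed(42)
covid <- data.frame(
  outcome = sample(c("survivor", "non-survivor"), 235, replace = TRUE, prob = c(0.86, 0.14)),
  age     = round(runif(235, 16, 91)),
  sex     = sample(c("male", "female"), 235, replace = TRUE)
)

# Continuous variable: Mann-Whitney (Wilcoxon rank-sum) test with a 95% CI
wilcox.test(age ~ outcome, data = covid, conf.int = TRUE, conf.level = 0.95)

# Categorical variable: Chi-squared test on the contingency table
chisq.test(table(covid$sex, covid$outcome))

# Medians and IQR bounds by group, as reported for continuous variables
aggregate(age ~ outcome, data = covid, FUN = quantile, probs = c(0.25, 0.5, 0.75))
```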
Epidemiological and clinical characteristics
Characteristics of hospitalized COVID-19 patients (survivors and non-survivors) are summarized in Table 1.
Data were collected from 235 COVID-19 patients admitted to Sohar Hospital in the study period. Their ages ranged from 16 to 91 years with an average of 53 years. Of those, 81% were Omanis, the male and female distribution was 55% and 45%, respectively, and 47.2% had no chronic diseases. The mortality rate was 14%, and it was significantly higher in older COVID-19 patients compared to younger ones (p < 0.0001). It was noticed that the mortality rate increased with age, reaching 72.7% for patients over 50 years old. There was a significant association between the mortality rate and having a chronic disease. The percentage of non-survivors with a chronic disease was 87.9%, which is higher than that of non-survivors with no chronic disease (12.1%). Similarly, the percentage of non-survivors with chronic diseases such as diabetes, hypertension, renal disease, cardiovascular disease or chronic obstructive pulmonary disease was more than double the percentage of survivors who had chronic diseases (40.6%). The most common symptoms in COVID-19 survivors and non-survivors were fever (72.3%, 75.8%), cough (72.3%, 63.6%), and myalgia (37.1%, 39.4%), respectively (Table 2). Difficulty in breathing was a less common symptom in both groups. Other rare symptoms were headache, runny nose, sputum production, sore throat, and diarrhea.
TABLE 3: Laboratory investigations for admitted COVID-19 patients (survivors and non-survivors)
In non-surviving patients, WBC count, neutrophil count, D-dimer, CRP, ferritin, troponin T, ALP, and creatinine levels were significantly higher compared with surviving COVID-19 patients. On the other hand, lymphocyte count, hemoglobin, and albumin levels were significantly lower. P < 0.05 was considered statistically significant.
Discussion
The results of laboratory blood tests can be used with clinical data to predict the severity of COVID-19 and could improve prognosis via better management of the disease. Nevertheless, more research is required to better elucidate these alterations in the laboratory tests and their links with prognosis [14]. In this study, characteristics and factors that could be related to in-hospital mortality among 235 COVID-19 patients at Sohar Hospital were described, focusing on the alterations in 19 blood tests. The study found that factors associated with in-hospital mortality were older age, having chronic disease, high neutrophil count, high troponin T, high creatinine, low albumin, high WBC count, low hemoglobin, high D-dimer, high CRP, low lymphocyte count, high ALP, and high ferritin. The most common laboratory blood test abnormalities that were linked with mortality were increased values of CRP, D-dimer, ferritin, and troponin T and reduced lymphocyte count.
In line with another study [6], the current study found that the rate of in-hospital mortality was 14%, while other national [15] and international studies reported that the mortality rate ranged from 11.9% to 28% [16,17]. These differences are also seen in other international studies of hospital-admitted COVID-19 patients. For example, in the United Kingdom, the mortality rate was 26% [18] and in China it ranged from 1.4% [19] to 17% [20]. These variations in the rate of mortality could be due to the differences in the presence of comorbidities, demographic properties, methods used for the diagnosis, and ways of identification of COVID-19-linked death [21]. Other causes could be related to variations in the inclusion criteria of patients in each study, health resources availability, and criteria of hospital admission [6].
The mortality rate was the highest among patients over 50 years old, which is similar to studies conducted in China [19] and in Oman [13,22]. A British study [18] and Spanish studies [6,16,17] showed the highest mortality in older age groups ranging from 66 to 73 years. Similar to the current study, international [6,23] and national [15,22] studies found that the mortality rate is higher in older patients compared to younger patients. These variations could be related to the differences in average life span in each country.
Contrary to this study, it has been noted elsewhere that sex was significantly correlated with poor prognosis of COVID-19; thus, the rate of mortality was higher in males compared to females [16][17][18]. In the current study, the non-significant difference between the sexes could be related to the small sample size.
Having one or more chronic diseases was described as a risk factor for the severity of COVID-19 [24]. Similar to previous international studies [16][17][18], our study showed that the rate of mortality was high for patients with comorbidity, with around 88% of non-survivors having one or more chronic diseases including diabetes, hypertension, renal disease, cardiovascular disease, or chronic obstructive pulmonary disease.
Unfortunately, the correlation between each single chronic disease and mortality rate was not studied in this research. Data related to obesity or dyslipidemia was also not collected in this study.
Fever and cough were the most common symptoms in COVID-19 patients. Other uncommon symptoms were myalgia, difficulty in breathing, and sore throat. These findings agree with previous national studies [13,15,22,25] and an international study [2].
This study showed that the most common alterations were increased values of CRP, ferritin, and D-dimer, a reduced amount of CO2 in serum, and a reduced lymphocyte count, and these could be considered as significant biomarkers for COVID-19 prognosis. These findings agree with recent national [15] and international studies [6,26]. Similar to our findings, Wang et al. reported that increased values of CRP, D-dimer, ferritin, and troponin T and reduced lymphocyte count were the most frequent laboratory alterations that were highly associated with mortality [27]. Alterations in these inflammatory biomarkers could be linked with the cytokine storm which has already been identified in COVID-19 patients with poor prognosis [27]. Furthermore, a significant number of patients show abnormal values of other blood tests even in the early stages of the disease [27]. These alterations were seen with the following parameters: WBC, neutrophil, ALP, creatinine, hemoglobin, and albumin [6,17]. Unfortunately, the current study could not investigate the blood levels of procalcitonin and lactate dehydrogenase (LDH), which were found to be good biomarkers involved in the prognosis of COVID-19 [6,15,17,18]. This suggests that these two markers may play a role in determining the outcome of the disease. Similar to the current study, Bhargava et al. investigated the role of creatinine as a significant biomarker for poor prognosis of COVID-19 [28]. They reported that acute renal disease is a predictor of poor outcomes of COVID-19 [28]. Another study attempted to explain the mechanism of acute renal injury in patients with COVID-19 [29]. They suggested that the virus uses the angiotensin-converting enzyme 2 (ACE2) receptor to enter renal cells and that cytokines induced in response to this virus cause indirect effects on these cells through hypoxia or shock [29].
To our knowledge, this is the first study that compared the results of laboratory blood tests between a reasonable number of COVID-19 survivors and non-survivors in the northern region of Oman. The use of a single analyzer for each group of blood tests ensured comparability of results. Nevertheless, the current study has some limitations, for instance, possible errors in clinical data entry, sample selection, and inclusion criteria. Requests for blood tests varied from one doctor to another. Furthermore, a shortage of some reagents resulted in incomplete test profiles for some patients, and the number of cases tested for a specific laboratory test was not the same. In addition, serum procalcitonin, IL-6, IL1B, INF γ, IP10, and MCP1, and LDH, which were reported earlier as significant predictors for the severity of COVID-19 [2,8,9,15,30], were not investigated in the current study. Most of these tests are not carried out at the laboratory of Sohar Hospital except LDH, which was tested in a minor number of patients and excluded from the data analysis.
Conclusions
This study identified significant predictors for COVID-19 severity, including older age, presence of chronic disease, high WBC count, high neutrophil count, low lymphocyte count, low hemoglobin, high D-dimer, high CRP, high ferritin, high troponin T, high creatinine, low albumin, and high ALP. Besides, this study showed that the most common laboratory blood test alterations that were strongly associated with in-hospital mortality are increased levels of CRP, D-dimer, ferritin, and troponin T and reduced lymphocyte count. These findings suggest that these blood biomarkers could help in categorizing COVID-19 patients for risk-based assessment, early identification of patients at a higher risk for poor prognosis, better patient management, and improved admission criteria.
interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
TABLE 1: Epidemiological data of 235 COVID-19 patients grouped according to patient outcome during the first seven months of the COVID-19 pandemic at Sohar Hospital, Oman. Data presented as n (%); p < 0.05 was considered statistically significant. *Diabetes, hypertension, renal disease, cardiovascular disease or chronic obstructive pulmonary disease. COVID-19: coronavirus disease 2019.
Table 2 shows that no significant difference was identified between the symptoms of survivors and non-survivors.
TABLE 2 : Symptoms of COVID-19 survivors and non-survivors
Results of the laboratory tests for COVID-19 survivors and non-survivors admitted to Sohar Hospital in the study period are presented in Table 3. The most common abnormal laboratory results in both COVID-19 survivors and non-survivors were increased values of CRP (87%), ferritin (69.9%), and D-dimer (63.8%), and reduced levels of serum CO2 (50%) and lymphocyte count (49%). | 2023-11-23T16:18:56.520Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "59b4a662547ac61dc3a35915dd67610b6acc6c85",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/199391/20231121-21582-nx4wy4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7c0637e4c76ae661bb9fc0109110e3332833ddd0",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237472048 | pes2o/s2orc | v3-fos-license | Additive quantile mixed effects modelling with application to longitudinal CD4 count data
Quantile regression offers an invaluable tool to discern effects that would be missed by other conventional regression models, which are solely based on modeling conditional mean. Quantile regression for mixed-effects models has become practical for longitudinal data analysis due to the recent computational advances and the ready availability of efficient linear programming algorithms. Recently, quantile regression has also been extended to additive mixed-effects models, providing an efficient and flexible framework for nonparametric as well as parametric longitudinal forms of data analysis focused on features of the outcome beyond its central tendency. This study applies the additive quantile mixed model to analyze the longitudinal CD4 count of HIV-infected patients enrolled in a follow-up study at the Centre of the AIDS Programme of Research in South Africa. The objective of the study is to justify how the procedure developed can obtain robust nonlinear and linear effects at different conditional distribution locations. With respect to time and baseline BMI effect, the study shows a significant nonlinear effect on CD4 count across all fitted quantiles. Furthermore, across all fitted quantiles, the effect of the parametric covariates of baseline viral load, place of residence, and the number of sexual partners was found to be major significant factors on the progression of patients’ CD4 count who had been initiated on the Highly Active Antiretroviral Therapy study.
by nonlinear estimation of families of conditional quantile functions that relax the independence assumption 2 . The use of parametric and nonparametric regression models for analyzing patients' CD4 count in most applications implies that the estimated effects describe the average CD4 count. However, it is of even greater interest to examine quantiles of the outcome distribution, such as the lower ( ≤ 25%) quantile, which identifies patients at higher risk of developing illnesses.
Quantiles, commonly symbolized by the Greek letter τ , describe both location and scale simultaneously. For a given τ ∈ (0, 1), the τth quantile is the value of a random variable below which τ × 100% of its values lie; in other words, it is the value above which at most (1 − τ) × 100% of the values lie. Thus, the τ = 0.5 quantile gives the median, which is a well-known location parameter, while quantiles close to zero or one give an idea of the scale. For instance, the interquartile range (IQR) is defined as the 0.75 quantile minus the 0.25 quantile: IQR = Q3 − Q1.
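For concreteness, the verbal definition above can be written compactly as follows; this is the standard textbook formulation, not a quotation from the cited sources.

```latex
% tau-th quantile of a random variable Y with distribution function F_Y
Q_Y(\tau) \;=\; F_Y^{-1}(\tau) \;=\; \inf\{\, y : F_Y(y) \ge \tau \,\}, \qquad \tau \in (0,1),
\qquad
\mathrm{IQR} \;=\; Q_Y(0.75) - Q_Y(0.25).
```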
Quantile regression (QR) solutions are computed for a selected number of quantiles, typically the three quartiles along with two extreme quantiles, that is, for τ = {0.05, 0.25 (Q1), 0.5 (Q2), 0.75 (Q3), 0.95}. This necessitates the search for a suitable compromise between the amount of output to manage and the results to interpret and summarize. Although in many practical applications of QR the focus is on estimating a subset of quantiles, it is worth noting that it is possible to attain estimates across the entire interval of conditional quantiles, in particular the set {β_τ : τ ∈ (0, 1)} 2 .
QR is a versatile statistical method with many applications that complement mean regression 3,4 . Thus, it emerged as an effective analytic technique in numerous areas of science due to its ability to draw inferences about individuals that rank below or above the conditional population mean and/or to focus on features of the response beyond its central tendency [4][5][6][7][8][9][10][11][12][13] . QR is specifically appropriate when covariate effects are heterogeneous, as it yields inferences that can be legitimate irrespective of the true underlying distribution 4,14 . QR techniques look further into the data, extract more information, and have become more important 15 . By fitting models for more percentiles, one can detect the covariates' heterogeneous effects across the conditional distribution of the response, rather than just at the conditional mean. That is especially useful when valuable information lies at the bottom or top quantiles. "QR also enjoys several properties, including equivariance to monotone transformations and robustness to outliers" 2,16 . A semiparametric extension of quantile regression models with different types of nonlinear effects included in the model equation leads to an additive quantile regression model (AQM) 12 . Such a model may reveal systematic differences in dispersion, tail behavior, and other features for covariates 2 .
Additive mixed models (AMMs), an extension of additive models, have been developed precisely to incorporate linear and nonlinear effects, as well as random terms, when the data are sampled according to longitudinal designs 4,17 . AMMs have been integrated into QR methods to obtain robust results, not only at the central tendency of the longitudinal outcome, which may not be the best location to characterize the data, especially when the errors are non-normally distributed and the location-shift hypothesis of the normal model is violated, but also at conditional quantiles of the longitudinal outcome, with no assumption about the distribution of the response or errors apart from the restriction that the error distribution has its τth quantile equal to zero. Thus, additive quantile mixed models, which have gained popularity recently as a general method for longitudinal data, bring a comprehensive and more complete picture of the nonparametric as well as the parametric effects 1,4 .
CD4 cell count levels signify the well-being of an individual's immune system (the body's natural defense system against pathogens, infections, and illnesses). The CD4 cell count of a person who does not have HIV can be between 500 and 1500 per cubic millimeter. Individuals living with HIV who have a CD4 count over 500 and whose immune response is still strong are usually in good health. However, individuals living with HIV who have a CD4 count below 200 are at high risk of developing severe illnesses and death 18,19 .
With the CD4 count at deficient levels, patients' immunity is weak. If HIV-infected patients are not on treatment or not virally suppressed, they become vulnerable to acquiring opportunistic infections (OIs), putting them at risk from the new and ongoing coronavirus disease 2019 (COVID-19) infection and underlying illness 18 . The best strategy to avoid these infections and diseases is to enhance the level of immune function through HAART, a combination of multiple antiretroviral (ARV) drugs. HAART's fundamental goal is to delay or stop the progression to AIDS and loss of life for those infected with HIV by suppressing and preventing the virus from making copies of itself. When the level of the virus in the blood (viral load) is low or undetectable, there is less damage to the body's immune system and fewer HIV infection complications. Even though HIV treatment is prescribed for all individuals living with HIV, it is particularly critical for patients with low CD4 counts to start treatment sooner rather than later and adhere to the treatment schedule 18,20 . While researchers believe that early diagnosis and effective treatment are essential to effective control, more research is needed to better understand the adaptive, innate, and host responses that alter viral load set-point and consequently prognosis and infectiousness 18,20 .
The need for good health is one of each human being's fundamental rights, without qualification of race, religion, gender, political conviction, or financial or social condition. Women's health includes their emotional, social, and physical welfare and is determined by these factors and the economic setting of their lives, as well as by biology. However, good health eludes a large proportion of women. In national and international forums, women have emphasized that equality, the sharing of family duties, development, and peace are necessary conditions to achieve good health all through the life cycle. Women are biologically and socially more vulnerable to HIV infection, especially in developing countries [21][22][23][24] .
HIV/AIDS and other sexually transmitted diseases (STD) have a devastating effect on women's health, especially that of young women. The consequences of HIV/AIDS go beyond women's health to include their families' economic support and livelihoods. Thus, the social, development, and health consequences of HIV/AIDS and other sexually transmitted diseases have strong gender dimensions that cannot be ignored [23][24][25] . Understanding the changing epidemiology of HIV using statistical disease models will allow the clinician to decide who may be at high risk and clarify the application of rules to avoid sequential HIV transmission 18 . Modelling the CD4 count or the evolution of the viral load using data-driven models will allow the clinician to interpret potential information accurately and cope with misdirection or distortion of the information due to patient-specific effects 18,[26][27][28] . This study is a continuation of our previous work in Yirga et al. 18 . This study aims to analyze the longitudinal CD4 count of HIV-infected patients involved in a CAPRISA study using AQMM and to demonstrate how the method developed can be used to attain robust nonparametric as well as parametric effects at various locations of the conditional distribution, bringing a comprehensive and more complete picture of the covariate effects. The use of AQMM has many advantages. Additive nonparametric effects models are not new in the applied statistics literature; to implement these methods, Koenker et al. 47 introduced smoothing penalties for total variation, especially for the nonparametric components of the model. Researchers are also eager to learn what factors influence the CD4 count (high or low) in HIV studies, and AQMMs are the best way to answer this question 18,20,[29][30][31][32] .
Once HIV-infected women were enrolled in CAPRISA's AI Phase II study, their CD4 count and viral load were measured and assessed regularly. When their CD4 count was ≤ 350 cells/mm3 for more than two consecutive visits within six months, or if they presented with an AIDS-defining illness (WHO clinical stage 3-5), they would be referred to a public government clinic for ARV treatment. However, until 2015, according to the South African National Department of Health, these patients would only start HAART once their CD4 count was ≤ 200 cells/mm3. With effect from the 1st of January 2015, according to the National Department of Health, the criterion for early initiation of ART in HIV-infected patients is a CD4 count of 500 cells/mm3 or less 20 . HIV-infected women in Phases II-IV were followed up until they started HAART. After that, they would be transitioned to Phase V and followed up for a minimum of five years, or eligible participants could join Phase V immediately 33 . After the five years of follow-up had been accomplished, an optional annual follow-up of up to fifteen extra years would be offered to participants who remained in Phase V 33 . Figure 1 illustrates the screening and enrolment process of the study data set. One can find further detail on the study population's design, development, and procedures here [29][30][31][32][33] .
Methods
Parametric regression models typically use a linear function to connect the conditional values of the response variable to the covariates. In real-world applications, however, biased or invalid results might result from such a linearity assumption. Many studies use nonlinear assumptions between variables 34-37 . One may consider various modeling techniques when dealing with nonlinearity. The choice among the most popular approaches (nonparametric models, smoothing splines, and transformation models) depends on considerations such as the sampling design (cross-sectional or longitudinal), the outcome (discrete or continuous), the distributional assumptions (parametric or nonparametric), and so on 2 . In choosing which method to follow, the amount of effort expended during the investigation may have a significant influence. Likewise, a lack of theory or of programming tools can lead to a certain decision being made over another 2 . Nonparametric regression permits the presumption of linearity to be relaxed 34,35,38 and limits the analysis to smooth and continuous functions 39 . Nonparametric regression, also known as scatter smoothing, aims to identify the best regression function according to the data distribution instead of estimating parameters 39 .
The nonparametric regression model is given by
y_i = f(x_i) + ε_i,   i = 1, . . . , n,   (1)
where the function f(·) is unknown, and it is commonly assumed that the errors are normally and identically distributed: ε_i ∼ NID(0, σ²) 39 . Several methods have been introduced to fit nonparametric regression models; the most used techniques that have been extended to QR include local polynomial regression 40 and smoothing splines 47 . The parametric QR model is given by
Y_i = x_i′β_τ + ε_τi,   (2)
where Y_i is the response variable, the x_i are covariates, the β_τ are the quantile-specific linear effects, and ε_τi is a random variable assumed to be an unknown error term on which no specific distributional assumptions are made except that its distribution is restricted to have the τth quantile equal to zero 12,48,49 . For this reason, the parametric QR model aims at describing the quantile function Q_{Y_i}(τ|x_i) of the continuous outcome Y_i conditional on the covariate vector x_i at a given quantile τ, which can be expressed as
Q_{Y_i}(τ|x_i) = x_i′β_τ.   (3)
For a comprehensive overview of QR, see, for example, Koenker 2 , Koenker and Bassett 3 , Buchinsky 5 , Yu et al. 9 , or Koenker and Hallock 50 .
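As a small illustration of the parametric QR model, the sketch below fits the model at the grid of quantiles discussed earlier using the quantreg package in R; the data frame `d` and variables `y` and `x` are hypothetical placeholders, and the simulated heteroscedastic errors merely make the quantile slopes differ.

```r
library(quantreg)

# Hypothetical data with heteroscedastic errors so that beta_tau varies with tau
set.seed(1)
d <- data.frame(x = runif(200))
d$y <- 1 + 2 * d$x + (1 + d$x) * rnorm(200)

# Linear QR fitted at a set of quantiles; one coefficient vector per tau
taus <- c(0.05, 0.25, 0.50, 0.75, 0.95)
fit  <- rq(y ~ x, tau = taus, data = d)
summary(fit)
```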
As much as the parametric QR model enjoys a simple structure, convenience of interpretation, and lower computational cost, it is not flexible enough and hence carries the risk of model misspecification for complex problems 51 . Nonparametric QR has become a viable alternative to avoid restrictive parametric assumptions. Koenker et al. 47 explored nonparametric QR in spline models (quantile smoothing splines), which they defined as solutions to a penalized check-function minimization problem, in which ρ_τ(u) = u{τ − I(u < 0)} is the so-called check (loss) function, p ≥ 1 indexes the roughness penalty, the parameter τ ∈ (0, 1) controls the quantile of interest, and λ ∈ R+ is a smoothing parameter 3,47 .
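A sketch of this penalized criterion is given below, written with the total variation roughness penalty (the p = 1 case) advocated by Koenker et al.; the exact form used in the original display is assumed rather than reproduced.

```latex
% Quantile smoothing spline: check-function fit plus total variation penalty (p = 1 sketch)
\hat{g}_\tau \;=\; \arg\min_{g}\;
  \sum_{i=1}^{n} \rho_\tau\{\, y_i - g(x_i) \,\} \;+\; \lambda\, V(g^{\prime}),
\qquad
V(g^{\prime}) \;=\; \int \lvert g^{\prime\prime}(x)\rvert \, dx .
```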
Closely analogous to the parametric QR model (3), Koenker 2 generalized QR to a nonparametric specification of the conditional quantile function and formulated the corresponding τth nonparametric QR estimator. Several techniques were proposed for nonparametric QR modelling, such as bivariate quantile smoothing splines 52 and kernel quantile regression 53 . However, nonparametric QR is an important yet challenging topic that needs to be addressed in-depth 51 . One can find a brief account of nonparametric QR strategies in numerous studies; see, for example, Koenker 2 or Davino et al. 39 . To account for nonlinear relationships between quantiles of the outcome and covariates, Rigby and Stasinopoulos 54 also proposed generalized additive models for location, scale, and shape (GAMLSS). GAMLSS enable additional flexibility to fit the covariates' nonlinear effects; however, they do not result in easily interpretable expressions for the quantiles, as they are based on specifying distinct distributional parameters 12 . Instead, additive quantile regression models (AQMs) allow for the inclusion of nonlinear covariate effects and give more flexibility 12 .
Additive models, introduced by Hastie and Tibshirani 41 , Stone 55 , and Breiman and Friedman 56 , are flexible regression tools that handle linear as well as nonlinear terms. The nonlinear terms in additive models are modeled through smoothing splines 4 . They provide pragmatic approaches to nonparametric (nonlinear in parameters) regression modeling: by restricting nonlinear covariate effects to be composed of low-dimensional additive pieces, we can overcome some of the worst aspects of the notorious curse of dimensionality 11 . The literature on additive models is vast 17,41,55,57,58 . However, most of the work has been done based on estimating conditional mean functions. The additive quantile regression model (AQM) provides an attractive framework for parametric as well as nonparametric regression illustrations focused on features of the response beyond its central tendency 4,11,12 . Fenske et al. 12 defined the τth AQM, which extends the linear predictor, x_i′β_τ, with a sum of nonlinear functions of continuous covariates, f_τj(·), as follows.
where the f_τj denote generic functions of the covariates z_i for the ith observation; this allows for the inclusion of different model terms such as nonlinear effects (smooth functions) of z_k, f_τ(z_k), and varying-coefficient terms, z_k′ f_τ(z_k), where the effect of the covariate z_k′ varies smoothly over the domain of z_k according to some function f_τ. However, the underlying assumption on the error term, ε_τi, remains the same as in the QR model (3); see Fenske et al. 12 for more details.
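Collecting the terms just defined, the τth AQM can be sketched as below, following Fenske et al.; the number of smooth terms q and their indexing are assumptions made for the sketch, not taken from the original display.

```latex
% Sketch of the tau-th additive quantile regression model (AQM)
Y_i \;=\; x_i^{\prime}\beta_\tau \;+\; \sum_{j=1}^{q} f_{\tau j}(z_{ij}) \;+\; \varepsilon_{\tau i},
\qquad Q_{\varepsilon_{\tau i}}(\tau \mid x_i, z_i) = 0,
\quad\text{equivalently}\quad
Q_{Y_i}(\tau \mid x_i, z_i) \;=\; x_i^{\prime}\beta_\tau \;+\; \sum_{j=1}^{q} f_{\tau j}(z_{ij}).
```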
The AQM estimates the additive effects using linear programming algorithms as in the conventional QR model 12 . However, in the AQM case, determining adequate numbers and positions of knots is challenging. To avoid these challenges, Fenske et al. 12 used penalty methods such as the quantile smoothing splines of Koenker et al. 47 . Thus, the minimization problem of the AQM consists of the check-function fit criterion plus an extra penalty term on the total variation of the derivative of each smooth function 12 , where the sup defining the total variation is taken over all partitions a ≤ z_1 < . . . < z_n < b, and λ is a tuning parameter that controls the smoothness of the estimated function, an approach also known as "total variation regularization"; see Koenker 2 , Fenske et al. 12 , or Koenker et al. 47 for more details.
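Total variation regularization of this kind is implemented in the R package quantreg through rqss() and qss(); the toy data below and the chosen value of the smoothing parameter lambda are illustrative only and are not connected to the study data.

```r
library(quantreg)

# Toy data with a nonlinear signal in x and a linear effect of z
set.seed(2)
d <- data.frame(x = sort(runif(300, 0, 10)), z = rnorm(300))
d$y <- sin(d$x) + 0.5 * d$z + rnorm(300, sd = 0.3)

# Additive QR at tau = 0.5: total-variation-penalized smooth in x plus a linear term in z
fit <- rqss(y ~ qss(x, lambda = 1) + z, tau = 0.5, data = d)
summary(fit)
plot(fit)   # fitted smooth component for x
```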
Fenske et al. 1 proposed extending AMMs to the QR model for longitudinal data with fixed individual-specific intercepts and slopes modeled through the penalized splines of Ruppert et al. 59 . However, their model did not include random-effect terms and did not allow for individual-specific effects to have a general covariance structure 4 . Geraci's 4 version of the additive QR model for longitudinal data includes linear and nonlinear terms, as well as multiple random effects to account for the correlation at the individual level with a general variance-covariance matrix, and allows for automatic smoothing selection within the mixed model framework of Ruppert et al. 59 . Thus, as pointed out by Geraci 4 , because of the following two basic ideas, his model was shown to have superior performance compared with the approach of Fenske et al. 1 : the first point concerns the ith unit effects, which he assumed to be random instead of fixed, so that a covariance structure between effects can be introduced; the second point is that instead of prior specification, the smoothing of the nonparametric terms is automatically estimated from the data 4 .
Geraci 4 defined the τth additive QR model for longitudinal data in terms of the conditional quantile function of y_ij, where x_ij′ is the jth row of a known n_i × p matrix X_i, z_ij′ is the jth row of a known n_i × q matrix Z_i, y_ij is the jth observation of the response vector y_i = (y_i1, . . . , y_in_i)′ for the ith unit, f_kτ(·) is a τ-specific, centered, twice-differentiable smooth function of the kth component of x, and u_τ,i is a q × 1 vector that collects the ith unit's random effects associated with z_ij, whose distribution is assumed to depend on a τ-specific parameter 4 .
Geraci 4 considered a spline model for each smooth term, so that model (9) is then expressed in terms of the spline basis expansions 4 . In matrix notation, the ith unit of expression (10), which is then called the additive quantile mixed model (AQMM), is given by a linear predictor combining the fixed effects, the spline coefficients, and the random effects 4 , where B^(k)(x_ijk) is the H_k × 1 vector of values taken by the kth spline basis evaluated at x_ijk, v_τ,k = (v_τ,1, . . . , v_τ,H_k)′ is the H_k × 1 vector of spline coefficients for the kth covariate, and H = Σ_k H_k.
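Putting these pieces together, the ith-unit AQMM predictor can be sketched in matrix form as below, following Geraci's formulation; the exact symbols of the original display are assumed, and the Gaussian assumptions on the random terms anticipate the statement in the next paragraph.

```latex
% Sketch of the i-th unit AQMM: fixed effects + spline coefficients + random effects
Q_{y_i}\!\left(\tau \mid X_i, B_i, Z_i, u_{\tau,i}\right)
  \;=\; X_i\,\beta_\tau \;+\; B_i\, v_\tau \;+\; Z_i\, u_{\tau,i},
\qquad
u_{\tau,i} \sim N_q\!\left(0, \Sigma_\tau\right), \quad
v_\tau \sim N_H\!\left(0, \Phi_\tau\right).
```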
Furthermore, B_i and v_τ are defined, respectively, as the n_i × H matrix whose rows collect the concatenated spline basis evaluations and as the H × 1 vector obtained by stacking the spline coefficient vectors v_τ,k. The objective function of the AQMM, in which the vectors u_τ,i and v_τ are assumed to follow zero-centered multivariate Gaussian distributions with variance-covariance matrices Σ_τ and Φ_τ = ⊕_{k=1}^{s} φ_τ,k I_{H_k}, respectively, and in which ρ_τ(r) = Σ_{j=1}^{n} r_j{τ − I(r_j < 0)} for a vector r = (r_1, . . . , r_n)′, is given by Geraci 4 , where "u_τ,i's are assumed to be independent for different i (but may have a general covariance matrix) and are independent of v_τ, and φ_τ,k's determine the amount of smoothing for the nonparametric terms" 4 . Minimizing the objective function of expression (12) proceeds in the same way as minimizing the objective function of quantile mixed-effects models 49,60,61 , where the asymmetric Laplace distribution with a location parameter µ, scale parameter σ > 0, and skewness parameter τ ∈ (0, 1) 60,62-64 is employed as a quasi-likelihood for the fidelity term 4 . Further discussion of the AQMM is provided by Geraci 4 .
Ethical approval and consent to participate. The study was approved by the Research Ethics Committee of the University of KwaZulu-Natal (E013/04), the University of the Witwatersrand (MM040202), and the University of Cape Town (025/2004). All participants provided written informed consent. All methods were performed following the relevant guidelines and regulations expressed in the Declaration of Helsinki.
Geraci 4 illustrated the full range of the AQMM that is described above. The purpose of this analysis is to model the CD4 count of patients from KwaZulu-Natal, South Africa, as part of a comprehensive study of HIV/AIDS. The results of this study illustrate longitudinal CD4 counts among HIV-infected patients enrolled in the CAPRISA 002 AI study by employing an AQMM. The median age of our sample of 235 women was 25 years. Our sample consisted of 7019 measurements on 235 women from 18 to 59 years of age. There were multiple visits for all participants, ranging from 2 to 61, with a median of 29. Tables 1 and 2 show descriptive measures for the variables studied. Low (upper) quantiles are those where at least 25% (75%) of the observations are at or below it, or 75% (25%) are at or above it 2 . In Table 1, it is shown that the median BMI for the participants was 26.84 (range 17.89-54.89). The median square root CD4 count and baseline viral load were 22.98 cells/mm3 and 26,600 copies, respectively. Of a total of 235 women, 105 (44.7%) lived around Vulindlela (rural area), and 130 (55.3%) lived around eThekwini (Durban, urban area) in KwaZulu-Natal, South Africa (see Table 2). The majority of the women, 182 (77.4%), were in a stable partnership, 224 (95.3%) completed secondary school (Table 2), and most of them (78.8%) were self-reported sex workers 18,29,31 . Additional details are available here [29][30][31][32] concerning the CAPRISA 002 AI study. We analyze this data set with the aim of explaining different parts of the conditional distribution of the CD4 count by considering two covariates entered as nonparametric additive effects, time and baseline BMI, as well as discrete (baseline viral load), continuous (age), and categorical covariates (place of residence, educational level, and the number of sexual partners) entered in the model as parametric effects (see Tables 1, 2). Figure 2 shows observed square root transformed CD4 counts by treatment time and baseline BMI, respectively, for a total of 7019 observations.
The nonlinear patterns, which connect the sample quantiles, are estimated conditionally on time and baseline BMI for six quantile levels. The curves (nonlinear patterns) suggest the requirement of some degree of smoothing (Fig. 2). Following the AQMM of Geraci 4 , we used a transformed continuous form of the outcome (i.e., square root CD4 count) for fitting purposes. Thus, the proposed τth AQMM form of our study, using expression (10), can be specified as
Q_{y_ij}(τ) = β_τ,0 + f_τ,1(time_ij) + f_τ,2(BMI_i) + β_τ,1 ART_ij + β_τ,2 VL_i + β_τ,3 residence_i + β_τ,4 education_i + β_τ,5 partner_i + β_τ,6 age_i + u_τ,0 + u_τ,1(time_ij),   (13)
where y_ij is the square-root-transformed form of the outcome (√CD4 count) at the jth time point for the ith subject, time is the time variable measured in months from the start of the study, BMI indicates the patient's baseline BMI, ART is the dichotomous HAART initiation indicator (0 = pre-ART, 1 = post-ART), VL is the patient's baseline viral load, residence is the patient's place of residence, education is the educational level of participants, partner indicates the number of sexual partners of the participant, age is the participant's age at enrolment, u_τ,0 indicates the random intercept, and u_τ,1 indicates the random slope. The symbol τ specifies the quantile of interest; we made the estimation at τ = 0.05, 0.25, 0.5, 0.75, 0.85, 0.95, and 0.99 to get a complete picture of the effects.
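A simplified sketch of fitting a model of this kind at several quantiles is shown below using the CRAN package lqmm (the aqmm add-on used in the paper extends this interface with smooth terms). The data frame `caprisa` and its column names follow the variable descriptions above but are simulated placeholders, and the two nonparametric terms are entered here as ordinary linear terms for brevity, so this is not the full model (13).

```r
library(lqmm)

# Simulated stand-in for the study data (placeholder names and values only)
set.seed(2002)
n_id <- 50; n_obs <- 10
caprisa <- data.frame(
  id        = rep(1:n_id, each = n_obs),
  time      = rep(seq(0, 54, length.out = n_obs), n_id),
  BMI0      = rep(runif(n_id, 18, 40), each = n_obs),
  ART       = rbinom(n_id * n_obs, 1, 0.5),
  VL0       = rep(round(rlnorm(n_id, 10, 1)), each = n_obs),
  residence = rep(sample(c("rural", "urban"), n_id, replace = TRUE), each = n_obs),
  education = rep(sample(c("secondary", "other"), n_id, replace = TRUE), each = n_obs),
  partner   = rep(sample(c("stable", "multiple"), n_id, replace = TRUE), each = n_obs),
  age       = rep(round(runif(n_id, 18, 59)), each = n_obs)
)
caprisa$sqrtCD4 <- 20 + 0.05 * caprisa$time + 2 * caprisa$ART +
  rep(rnorm(n_id, 0, 2), each = n_obs) + rnorm(n_id * n_obs, 0, 2)

# Linear quantile mixed model with random intercept and random slope in time,
# fitted at the set of quantiles used in the paper
taus <- c(0.05, 0.25, 0.50, 0.75, 0.85, 0.95, 0.99)
fit <- lqmm(
  fixed  = sqrtCD4 ~ time + BMI0 + ART + VL0 + residence + education + partner + age,
  random = ~ 1 + time,
  group  = id,
  tau    = taus,
  data   = caprisa
)
summary(fit)   # fixed effects with standard errors, one set per quantile
```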
Results
Geraci 4 implemented the AQMM as an add-on to the R package lqmm to fit additive quantile mixed models. As with the smooth-term specification in the R package mgcv 17, continuous covariates can be entered within the s (smooth) function to control the model smoothness using splines when fitting an AQMM 4. Furthermore, the shrinkage smoothers obtained using the bs option inside the s command in the R package mgcv are constructed so that smooth terms can be penalized away altogether and thus contribute nothing to the model 17,65. The thin plate smoother provides statistical and computational efficiency and stable optimal approximations (especially for large data sets), and it can be constructed for smooths of more than one covariate at a time 4,66. It was therefore used as a shrinkage spline to fit the proposed model (13). The remaining parametric terms in the aqmm function 4 are specified in the same way as in other R mixed-model fitting functions such as lqmm() and the functions in lme4. The output is separated into two parts: a parametric part, which includes the estimated fixed effects with their standard errors (SE) in parentheses, and the mixed-effect representation of the smoothing splines (see Table 3). Since the smooth coefficients are mostly uninterpretable, we focus on their variances to evaluate the spline coefficients' penalty at various quantiles (see Table 4 and Supplementary information). However, their estimated smoothed effects are depicted in Fig. 3. Table 4 also presents the estimated variance of the random effects from the fitted model (13).
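As a point of reference, the sketch below shows how a fit of this kind might be set up in R. The aqmm() call signature shown here is an assumption patterned after the lqmm-style syntax and the description above (smooth terms via s() with the shrinkage basis bs = "ts"); the data frame caprisa and its column names are hypothetical, so the exact arguments should be checked against the package documentation.

```r
# Hypothetical sketch of fitting one quantile level; the aqmm() arguments are
# assumed to mirror lqmm()-style syntax and are not a verified interface.
library(lqmm)   # quantile mixed models; Geraci's aqmm add-on builds on this
library(mgcv)   # provides s() and the thin plate shrinkage basis bs = "ts"
# library(aqmm) # Geraci's add-on package (distribution channel is an assumption)

fit_q75 <- aqmm(
  fixed  = sqrt_cd4 ~ s(time, bs = "ts") + s(bmi0, bs = "ts") +
           art + vl0 + residence + education + partner + age,   # terms of model (13)
  random = ~ time,     # random intercept and random slope in time
  group  = id,         # subject identifier
  tau    = 0.75,       # quantile level of interest
  data   = caprisa     # hypothetical data frame
)
summary(fit_q75)
```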
According to Table 3, the age effect is positive and significant at the bottom, median, and τ = 0.75 quantile levels (see also Supplementary information). On the other hand, the effect of education on the square root CD4 count does not seem to be significant across all quantiles after the patient had been initiated on HAART. The square root CD4 count across all quantiles is affected by post-HAART initiation, as expected. A significant positive effect of HAART initiation on CD4 cell counts is observed at the median quantile and becomes roughly constant at higher quantiles (see Table 3 and Supplementary information). In addition, patients with stable sexual partners showed significant improvements in their CD4 cell count across all quantiles. The CD4 cell count is significantly lowered in patients who have many sexual partners, especially at the bottom ( τ = 0.05 ) and at the top ( τ = 0.95, 0.99 ) quantiles (Table 3). Furthermore, we found a clear indication, at the bottom ( τ = 0.05 ) and more extreme quantiles ( τ = 0.85, 0.95, 0.99 ), that there is a significant negative effect on the CD4 cell count for patients residing in the urban area (see Table 3 and Supplementary information). Table 3 also shows that the negative effect of baseline viral load on the CD4 cell count is higher at the lower quantiles than at the median and higher quantiles (see also Supplementary information). In addition, sample outputs from the R function aqmm() using CAPRISA 002 AI study data at τ = 0.25, 0.75, 0.85, and 0.99 can be found in the Supplementary information.
The variance of the first smooth term ( φ Time ) indicates a stronger penalty on the spline coefficients at the τ = 0.25, 0.5, 0.75, 0.85 quantiles than at the bottom and at the top quantiles (Table 4). Similarly, the variance of the second smoother ( φ BaselineBMI ) shows a stronger penalty on the spline coefficients at the τ = 0.25, 0.5, 0.75, 0.85 quantiles than at the bottom and at more extreme quantiles. Table 4 shows that the random-effects variances indicate roughly constant variability of the subjects' linear trends across the fitted quantiles (see also Supplementary information).
Based on the seven fitted quantile levels ( τ = 0.05, 0.25, 0.5, 0.75, 0.85, 0.95, 0.99 ), Fig. 3 depicts the two estimated smoothed covariate effects on patients' CD4 counts. Patients enrolled in the CAPRISA 002 AI study exhibit nonlinear time effects on CD4 counts that are prominent at all quantile levels. As the quantile level increases, the time effect becomes stronger. However, it is only after several treatment visits that such progress towards higher CD4 counts occurs. The progression is slow until about 50 months and then increases steadily thereafter across all quantile levels (Fig. 3).
Furthermore, over all fitted quantile levels, the significant smoothed baseline BMI effect on patients' CD4 counts is roughly constant for patients with a baseline BMI of up to about 40 but gradually improves from there. Because of this, patients with low BMI need to be monitored carefully before and after HAART initiation. Despite this, physicians should not ignore patients with high BMI. According to our study and other findings, a plausible explanation may be that BMI affects drug metabolism and, thus, the progress of HAART and its immunological responses 20,67,68. Moreover, higher levels of BMI have a greater effect than lower levels (Fig. 3).
Discussion and conclusion
Quantile regression is a widely used, cutting-edge statistical method for modeling percentiles of a response variable conditional on covariates. While regression for medians may be seen as more robust than regression for the mean, QR, a generalization of median regression, allows a fuller exploration of the data by modeling conditional quantiles at low or high levels, such as the 5th and 95th percentiles. As a result, QR is becoming more common in clinical, biomedical, and other health-related research. Mixed-effects models are typically formulated using mean-based regression to estimate covariate effects on the response variable. In some cases, this centrality-based inference method may not be the optimal way of dealing with the data, since the mean may not adequately represent the distribution of the data. It has recently been demonstrated that QR has the potential to be extended to a mixed-effects modeling setting, even though QR was initially developed in a univariate setting 48,60,61. Studies of quantile mixed-effects models have received increasing attention 15,48,60,61,[69][70][71][72][73][74][75][76]. Quantile mixed-effects models have been extended to additive models to obtain robust results across various quantile levels of the longitudinal outcome, which yields a rigorous assessment of covariate effects [74][75][76]. The additive version of the quantile mixed-effects model has gained a great deal of popularity, as discussed above, because it offers an efficient and flexible framework for nonlinear and linear longitudinal forms of data analysis focused on features of the outcome beyond its central tendency 1,4,11,12,47,73,75,76.

Table 3. Parameter estimates followed by results of the smoothing terms from the AQMM for the CAPRISA 002 AI study data across different quantiles. *Significance codes: 0 '***', 0.001 '**', 0.01 '*', 0.05 '.', 0.1 ' ', 1. The reference categories are given in Table 2.
In this study, we applied the multivariate additive quantile mixed model of Geraci 4 to the longitudinal CD4 count of HIV-infected patients to assess parametric and nonparametric covariate effects across different quantile levels. By using this recently developed model, robust results are obtained not only at the central location of the longitudinal outcome, which may not be the best place to analyze the data, but also at different points of the conditional distribution, giving a more inclusive and complete picture of the parametric as well as the nonparametric covariate effects.
A series of AQMMs at τ = 0.05, 0.25, 0.5, 0.75, 0.85, 0.95, and 0.99 was fitted, and the results were discussed. According to the results, patients' CD4 count is markedly increased after HAART initiation, and their baseline viral load shows a negative effect on the progression of their CD4 count over time, as we would have expected. Significant nonlinear effects of time and baseline BMI on the response variable were found at all fitted quantiles. The study results suggest that, across all fitted quantile levels, the patient's education level does not significantly influence the progression of CD4 counts over time. At all but the most extreme quantiles, HIV-positive patients showed a significant difference in the CD4 count with regard to their age. In addition, CD4 cell recovery was found to be significant across all quantiles among patients with a stable sexual partner. In contrast, having many sexual partners during the treatment period had a negative effect on the CD4 cell count across all fitted quantile levels.
As we expected, the patient's CD4 count increased significantly after HAART was initiated, and their baseline viral load also showed a significant negative effect on the patient's CD4 count over time. Baseline BMI and time also had significant nonlinear effects in our analysis. Further, patients with higher BMIs at baseline have improved CD4 cell counts over time after treatment. Despite this, patients with higher BMI should not be ignored clinically. This study instead suggests that BMI can influence drug metabolism and, consequently, the immunological responses to HAART. According to the nonlinear time effect, patients' CD4 counts do not increase rapidly over time; the growth starts after multiple treatment visits. Hence, the study suggests that HIV patients who are not clinically and immunologically stable on HAART could experience increased risks if exposed to COVID-19, especially if they are not on HAART immediately after HIV exposure. One can estimate the covariate effects over a grid of τ ∈ (0, 1), depending on the aims of the analysis. An investigator, however, should be cautious when using the AQMM, since the method needs some adjustment to control the estimation algorithm and demands more computing time to estimate the random effects 4. For instance, in this study it took 2-3 h to fit the proposed model (13) at a single τ, in line with Geraci 4. To overcome this computational burden, Geraci 4 noted the necessity of further improvement to the AQMM. As the data set comes from an ongoing study, there is a plan to extend the application of the AQMM to genetics in future work, since it produces satisfactory results.
Data availability
The dataset used for this study can be obtained by requesting Dr. Nonhlanhla Yende-Zuma (Head of Biostatistics Unit, CAPRISA, Email: Nonhlanhla.Yende@caprisa.org) on reasonable request. | 2021-09-11T06:17:06.695Z | 2021-09-09T00:00:00.000 | {
"year": 2021,
"sha1": "a34c1696fc7e2dd22f3e928e59538d47adb03bdd",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-97114-9.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4daab86563cda48239c5a439f005f54764706eed",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231964895 | pes2o/s2orc | v3-fos-license | Secondary bile acid ursodeoxycholic acid alters weight, the gut microbiota, and the bile acid pool in conventional mice
Ursodeoxycholic acid (commercially available as ursodiol) is a naturally occurring bile acid that is used to treat a variety of hepatic and gastrointestinal diseases. Ursodiol can modulate bile acid pools, which have the potential to alter the gut microbiota community structure. In turn, the gut microbial community can modulate bile acid pools, thus highlighting the interconnectedness of the gut microbiota-bile acid-host axis. Despite these interactions, it remains unclear if and how exogenously administered ursodiol shapes the gut microbial community structure and bile acid pool in conventional mice. This study aims to characterize how ursodiol alters the gastrointestinal ecosystem in conventional mice. C57BL/6J wildtype mice were given one of three doses of ursodiol (50, 150, or 450 mg/kg/day) by oral gavage for 21 days. Alterations in the gut microbiota and bile acids were examined in stool, ileal, and cecal content. Bile acids were also measured in serum. Significant weight loss was seen in mice treated with the low and high doses of ursodiol. Alterations in the microbial community structure and bile acid pool were seen in ileal and cecal content compared to pretreatment, and longitudinally in feces following the 21-day ursodiol treatment. In both ileal and cecal content, members of the Lachnospiraceae Family significantly contributed to the changes observed. This study is the first to provide a comprehensive view of how exogenously administered ursodiol shapes the healthy gastrointestinal ecosystem in conventional mice. Further studies to investigate how these changes in turn modify the host physiologic response are important.
In healthy humans administered ursodiol (15 mg/kg/day) for 3 weeks, biliary and duodenal bile acid concentrations of UDCA and its conjugates (glycoursodeoxycholic acid, GUDCA and tauroursodeoxycholic acid, TUDCA) increased by 40% compared to baseline [31]. A decrease in primary bile acids (CA and CDCA) and their glycine and taurine conjugates, as well as a decrease in the secondary bile acid DCA and its conjugates (glycodeoxycholic acid, GDCA and taurodeoxycholic acid, TDCA), was observed within biliary and duodenal bile [31]. An increase in conjugates of the secondary bile acid LCA (glycolithocholic acid, GLCA and taurolithocholic acid, TLCA) was observed after UDCA treatment within biliary and duodenal bile samples [31]. Ursodiol can alter liver and biliary bile acid pools, but gastrointestinal contents and feces have not been well studied, thus limiting our understanding of how ursodiol shapes the microbial niche and bile acid profiles within the gastrointestinal ecosystem.
Evidence is mounting that bile acids, through TGR5 and FXR signaling, are capable of altering the host physiologic response (recently reviewed in Wahlstrom et al. [3] and Fiorucci et al. [4]). Bile acids can also directly and indirectly, through activation of the innate immune response, alter the gut microbial composition [3,4]. Together, these observations highlight the interconnectedness and complexity of the gut microbiota-bile acid-host axis and emphasize that exogenously administered bile acids will likely modulate this axis. Additionally, our laboratory recently demonstrated that ursodiol pretreatment was able to significantly alter the bile acid metabolome, host edema, and the host inflammatory transcriptome during Clostridioides difficile infection (CDI), thus highlighting the clinical potential of ursodiol to mitigate an overly robust host inflammatory response in CDI patients [32]. Hence, our rudimentary knowledge of how ursodiol modulates the gut microbial community structure, bile acid pool, and host physiology warrants further characterization to better understand the complex role of bile acids within the gastrointestinal ecosystem.
This study aims to define how ursodiol alters the gastrointestinal ecosystem in conventional mice. Mice were administered three different doses of ursodiol (50, 150, and 450 mg/kg) via daily oral gavage for 21 days. The gut microbial community structure and bile acid pool were evaluated. Fecal samples were obtained longitudinally, and ileal and cecal content was collected pretreatment and after 21 days of ursodiol. Serum bile acid profiles were also evaluated after 21 days of ursodiol treatment. Collectively, ursodiol treatment resulted in biogeographically distinct alterations within the indigenous gut microbiota and bile acid metabolome in conventional mice. These findings support that ursodiol administration impacts the indigenous gastrointestinal ecosystem and thus modulates the gut microbiota-bile acid-host axis.
Ethical statement
The Institutional Animal Care and Use Committee (IACUC) at North Carolina State University College of Veterinary Medicine (NCSU) approved this study. The NCSU Animal Care and Use policy applies standards and guidelines set forth in the Animal Welfare Act and Health Research Extension Act of 1985. Laboratory animal facilities at NCSU adhere to guidelines set forth in the Guide for the Care and Use of Laboratory Animals. The animals' health statuses were assessed daily, and moribund animals were humanely euthanized by CO 2 asphyxiation followed by secondary measures (cervical dislocation). Trained animal technicians or a veterinarian performed animal husbandry in an AAALAC-accredited facility during this study.
Animals and housing
C57BL/6J wildtype mice (females and males) were purchased from Jackson Laboratories (Bar Harbor, ME) and quarantined for 1 week prior to starting the ursodiol administration to adapt to the new facilities and avoid stress-associated responses. Following quarantine, the mice were housed with autoclaved food, bedding, and water. Cage changes were performed weekly by laboratory staff in a laminar flow hood. Mice had a 12 hr cycle of light and darkness.
Ursodiol dosing experiment and sample collection
Groups of 5-week-old C57BL/6J WT mice (male and female) were treated with ursodiol at three distinct doses (50, 150, and 450 mg/kg dissolved in corn oil; Ursodiol U.S.P., Spectrum Chemical, CAS 128-13-2) given daily via oral gavage for 21 days (Fig 1). These distinct doses were selected for a proof-of-concept experiment in order to achieve sufficient intestinal concentrations of ursodiol to alter the life cycle of Clostridioides difficile in vivo [32].

Fig 1. Ursodiol dosing experiment: three distinct doses (50, 150, and 450 mg/kg) given daily via oral gavage for 21 days. Fecal collection was performed twice daily throughout the experiment. Two independent experiments were performed, with a total of n = 8 (4 females/4 males) mice per treatment group. Mice were monitored and weighed daily throughout the experiment. A control group of mice was necropsied prior to initiating any treatments (pretreatment group). Necropsy was performed at day 21 for all ursodiol treated mice (open circles). https://doi.org/10.1371/journal.pone.0246161.g001

The total volume gavaged was consistent between the three distinct doses in order to control for the volume of corn oil administered. Ursodiol dosing was adjusted once weekly, based on current weight. Two independent experiments were performed, with a total of n = 8 mice (female/male) per treatment group. Mice were weighed daily over the course of the experiment. Fecal pellets were collected twice daily, flash-frozen, and stored at -80˚C until further analysis. A control group of mice was necropsied prior to initiating any treatments (pretreatment group). This pretreatment group serves as a microbiome and bile acid metabolome baseline prior to mice receiving ursodiol treatment. An additional control group of mice underwent daily handling similar to the treatment groups, but was not administered ursodiol (no treatment control). Necropsy was performed at day 21 in all ursodiol treated mice and the no treatment control mice. Gastrointestinal contents and tissue from the ileum and cecum were collected, flash frozen in liquid nitrogen, and stored at -80˚C until further analysis. Serum and bile aspirated from the gallbladder were obtained, flash frozen in liquid nitrogen, and stored at -80˚C until further analysis.
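Because dosing was weight-based (mg/kg) and adjusted weekly while the gavage volume was held constant, the concentration of ursodiol in the corn oil vehicle necessarily differed between groups. The short sketch below illustrates the arithmetic only; the body weight and gavage volume used here are assumptions for the example, not values reported in the study.

```r
# Illustrative dose arithmetic; body weight and gavage volume are assumed values.
dose_mg_per_kg <- c(ursodiol_50 = 50, ursodiol_150 = 150, ursodiol_450 = 450)
body_weight_g  <- 22      # assumed current weekly body weight of one mouse
gavage_vol_ml  <- 0.1     # assumed fixed gavage volume

dose_mg    <- dose_mg_per_kg * body_weight_g / 1000   # mg ursodiol per gavage
conc_mg_ml <- dose_mg / gavage_vol_ml                  # required mg/mL in corn oil
round(rbind(dose_mg, conc_mg_ml), 2)
```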
On several occasions, mice had evidence of corn oil within the oral cavity or on their muzzles immediately after the gavage. These mice were monitored closely for signs of aspiration pneumonia for 36 hr following this event. Two mice, one from the ursodiol 50 mg/kg group and another from the ursodiol 450 mg/kg group, inadvertently aspirated gavaged ursodiol, containing corn oil, and subsequently developed respiratory distress within 12-24 hr following the aspiration event. The clinical signs were most consistent with lipid induced pneumonitis and both mice were humanely euthanized and excluded from the study.
Targeted metabolomics of murine bile acid by UPLC-MS/MS
Targeted analysis of bile acids in ileal and cecal content, fecal pellets, serum, and bile was performed with an ACQUITY ultraperformance liquid-chromatography (UPLC) system using a C8 BEH column (2.1 × 100 mm, 1.7 μm) coupled with a Xevo TQ-S triple-quadrupole mass spectrometer equipped with an electrospray ionization (ESI) source operating in negative ionization mode (All Waters, Milford, MA), as previously described [33]. Each sample was thawed on ice and 25 mg was added to 1 mL of pre-cooled methanol containing 0.5 μM stable-isotope-labeled bile acids as internal standards (IS), followed by homogenization (1.0-mm-diameter zirconia/silica beads added) and centrifugation. Supernatant (200 μl) was transferred to an autosampler vial. 20 μL of serum was extracted by adding 200 μL pre-cooled methanol containing 0.5 μM IS. 5 μL of gall bladder bile was extracted with 500 μL pre-cooled methanol containing 0.5 μM IS. Following centrifugation, the supernatant of the extract was transferred to an autosampler vial for quantitation. Bile acids were detected by either multiple reaction monitoring (MRM) (for conjugated bile acids) or selected ion monitoring (SIM) (for non-conjugated bile acids). MS methods were developed by infusing individual bile acid standards. Calibration curves were used to quantify the biological concentration of bile acids. Bile acid quantitation was performed in the laboratory of Dr. Andrew Patterson at Penn State University.
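The quantitation step above relies on calibration curves together with stable-isotope-labeled internal standards. The sketch below shows the generic internal-standard calibration logic in R with simulated numbers; none of the values or object names come from the study.

```r
# Generic internal-standard calibration sketch (simulated values only).
cal <- data.frame(
  conc_uM = c(0.05, 0.1, 0.5, 1, 5, 10),           # standard concentrations
  ratio   = c(0.02, 0.04, 0.21, 0.42, 2.1, 4.2)    # analyte / internal-standard peak-area ratio
)
cal_fit <- lm(ratio ~ conc_uM, data = cal)          # linear calibration curve

# Back-calculate the concentration of an unknown sample from its area ratio
unknown_ratio <- 1.3
(unknown_ratio - coef(cal_fit)[1]) / coef(cal_fit)[2]   # estimated concentration (uM)
```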
Random Forest analysis was performed in MetaboAnalyst 3.0 (http://www.metaboanalyst.ca/faces/ModuleView.xhtml) [34]. Briefly, the data were uploaded in the Statistical Analysis module with default settings and no further data filtering. For the Random Forest analysis, the Ward clustering algorithm and Euclidean distance were used to identify the top bile acids within ursodiol treatment groups. Heatmaps and box and whisker plots of bile acid concentrations, and nonmetric multidimensional scaling (NMDS) plots depicting the dissimilarity indices via Horn distances between bile acid profiles, were generated using R packages (http://www.R-project.org).
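For orientation, the sketch below shows how the ordination and Random Forest steps described above are commonly run in R with the vegan and randomForest packages under default settings; bile_acids (a samples-by-bile-acids concentration matrix) and treatment (group labels) are hypothetical object names, and MetaboAnalyst may apply slightly different defaults internally.

```r
# Hedged sketch of the ordination and variable-importance steps described above.
library(vegan)         # vegdist(), metaMDS()
library(randomForest)  # randomForest(), varImpPlot()

d_horn <- vegdist(bile_acids, method = "horn")   # Horn dissimilarities between samples
nmds   <- metaMDS(d_horn, k = 2, trymax = 50)    # NMDS ordination on the dissimilarities
plot(nmds)                                       # sample configuration in two dimensions

rf <- randomForest(x = bile_acids, y = factor(treatment), importance = TRUE)
varImpPlot(rf)   # ranks bile acids by mean decrease in accuracy (MDA)
```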
Illumina MiSeq sequencing of bacterial communities
Microbial DNA was extracted from murine fecal pellets and ileal and cecal tissue snips that also included luminal content using the PowerSoil-htp 96-well soil DNA isolation kit (Mo Bio Laboratories, Inc.). The V4 region of the 16S rRNA gene was amplified from each sample using a dual-indexing sequencing strategy [35]. Each 20 μl PCR mixture contained 2 μl of 10× Accuprime PCR buffer II (Life Technologies), 0.15 μl of Accuprime high-fidelity Taq (catalog no. 12346094) high-fidelity DNA polymerase (Life Technologies), 2 μl of a 4.0 μM primer set, 1 μl DNA, and 11.85 μl sterile double-distilled water (ddH 2 O) (free of DNA, RNase, and DNase contamination). The template DNA concentration was 1 to 10 ng/μl for a high bacterial DNA/host DNA ratio. PCR was performed under the following conditions: 2 min at 95˚C, followed by 30 cycles of 95˚C for 20 sec, 55˚C for 15 sec, and 72˚C for 5 min, followed by 72˚C for 10 min. Each 20 μl PCR mixture contained 2 μl of 10× Accuprime PCR buffer II (Life Technologies), 0.15 μl of Accuprime high-fidelity Taq (catalog no. 12346094) high-fidelity DNA polymerase (Life Technologies), 2 μl of 4.0 μM primer set, 1 μl DNA, and 11.85 μl sterile ddH 2 O (free of DNA, RNase, and DNase contamination). The template DNA concentration was 1 to 10 ng/μl for a high bacterial DNA/host DNA ratio. PCR was performed under the following conditions: 2 min at 95˚C, followed by 20 cycles of 95˚C for 20 sec, 60˚C for 15 sec, and 72˚C for 5 min (with a 0.3˚C increase of the 60˚C annealing temperature each cycle), followed by 20 cycles of 95˚C for 20 sec, 55˚C for 15 sec, and 72˚C for 5 min, followed by 72˚C for 10 min. Libraries were normalized using a Life Technologies SequalPrep normalization plate kit (catalog no. A10510-01) following the manufacturer's protocol. The concentration of the pooled samples was determined using the Kapa Biosystems library quantification kit for Illumina platforms (KapaBiosystems KK4854). The sizes of the amplicons in the library were determined using the Agilent Bioanalyzer high-sensitivity DNA analysis kit (catalog no. 5067-4626). The final library consisted of equal molar amounts from each of the plates, normalized to the pooled plate at the lowest concentration.
Sequencing was done on the Illumina MiSeq platform, using a MiSeq reagent kit V2 with 500 cycles (catalog no. MS-102-2003) according to the manufacturer's instructions, with modifications [35]. Libraries were prepared according to Illumina's protocol for preparing libraries for sequencing on the MiSeq (part 15039740 Rev. D) for 2 or 4 nM libraries. The final load concentration was 4 pM (but it can be up to 8 pM) with a 10% PhiX spike to add diversity. Sequencing reagents were prepared according to Illumina's protocol for 16S sequencing with the Illumina MiSeq personal sequencer [35]. (Updated versions of this protocol can be found at http://www.mothur.org/wiki/MiSeq_SOP.) Custom read 1, read 2, and index primers were added to the reagent cartridge, and FASTQ files were generated for paired-end reads.
Microbiome analysis
Analysis of the V4 region of the 16S rRNA gene was done using mothur (version 1.40.1) [35,36]. Briefly, the standard operating procedure (SOP) at http://www.mothur.org/wiki/MiSeq_SOP was followed to process the MiSeq data. The paired-end reads were assembled into contigs, aligned to the SILVA 16S rRNA sequence database (release 132) [37,38], and classified to the mothur-adapted RDP training set v16 [39] using the Wang method and an 80% bootstrap minimum to the family taxonomic level. All samples with <500 sequences were removed. Chimeric sequences were removed using UCHIME [40]. Sequences were clustered into operational taxonomic units (OTU) using a 3% species-level definition. The OTU data were then filtered to include only those OTU that made up 1% or more of the total sequences. The percentage relative abundance of bacterial phyla and family members in each sample was calculated. A cutoff of 0.03 (97%) was used to define operational taxonomic units (OTU), and the Yue and Clayton dissimilarity metric (θYC) was utilized to assess beta diversity. In addition to NMDS ordination, principal coordinate analysis (PCoA) biplots using Spearman correlation were used to examine differences in microbial community structures between ursodiol treatments and compared to pretreatment. Standard packages in R (http://www.R-project.org) were used to create NMDS ordinations of serial fecal samples.
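The final filtering and relative-abundance steps lend themselves to a short R illustration; otu_counts (a samples-by-OTU matrix of raw sequence counts) is a hypothetical object name, and the 1% threshold is applied here to the total sequences across the data set, which is one reading of the text above.

```r
# Sketch of the OTU filtering and relative-abundance calculation described above.
keep_samples <- rowSums(otu_counts) >= 500                 # drop samples with < 500 sequences
counts       <- otu_counts[keep_samples, , drop = FALSE]

rel_abund <- sweep(counts, 1, rowSums(counts), "/") * 100  # % relative abundance per sample

keep_otus <- colSums(counts) / sum(counts) >= 0.01         # OTUs making up >= 1% of all sequences
rel_abund_filtered <- rel_abund[, keep_otus, drop = FALSE]
```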
Statistical analysis
Statistical tests were performed using Prism version 7.0b for Mac OS X (GraphPad Software, La Jolla, California, USA) or using R packages (http://www.R-project.org). To assess weight loss, a two-way ANOVA with Dunnett's multiple comparisons post hoc test comparing ursodiol treatment groups and untreated mice was performed. For microbiome analysis, analysis of molecular variance (AMOVA) was used to detect significant microbial community clustering of treatment groups in NMDS plots, and principal coordinate analysis (PCoA) biplots using Spearman correlation were used to examine differences in microbial community structures between ursodiol treatments and compared to pretreatment [41]. For the bile acid metabolome, an NMDS illustrates dissimilarity indices via Horn distances between bile acid profiles. To assess the comprehensive bile acid profiles, a two-way ANOVA followed by Dunnett's multiple comparisons post hoc test was used to compare ursodiol treatment groups to pretreatment bile acid profiles. A Kruskal-Wallis one-way ANOVA test followed by Dunn's multiple comparisons test was used to calculate the significance of individual bile acids within each ursodiol treatment group compared to pretreatment. Statistical significance was set at a p value of < 0.05 for all analyses (*, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001).
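As an illustration of the nonparametric comparison described above, the sketch below uses the FSA package's implementation of Dunn's test; the data frame df, with columns conc (an individual bile acid concentration) and treatment (pretreatment, ursodiol 50/150/450), is hypothetical, and the rows of interest in the output are the comparisons of each ursodiol dose against pretreatment.

```r
# Hedged sketch of the Kruskal-Wallis / Dunn's test workflow described above.
library(FSA)   # dunnTest()

kruskal.test(conc ~ treatment, data = df)                     # overall group effect
dunnTest(conc ~ treatment, data = df, method = "bonferroni")  # pairwise follow-up comparisons;
# inspect the rows comparing each ursodiol dose with the pretreatment group
```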
Availability of data and material
Raw 16S sequences have been deposited in the Sequence Read Archive (SRA) under accession numbers PRJNA622536 and PRJNA640596. Bile acid metabolomics is provided in S1 Table.
Ursodiol treatment results in weight loss
C57BL/6J conventional mice were administered three different doses of ursodiol (50, 150, 450 mg/kg/day; denoted from here on as ursodiol 50, ursodiol 150, and ursodiol 450, respectively) via oral gavage for 21 days (Fig 1). Mice were monitored and weighed daily. Over the 21 days, mice had no signs of diarrhea or appreciable alterations in food or water consumption. Mice in the 50 and 450 mg/kg ursodiol treatment groups sustained significant weight loss within a week of administration of ursodiol compared to untreated mice (Fig 2A and 2C). For the ursodiol 50 mg/kg treatment group, this weight loss persisted over the course of the experiment (Fig 2A). For the ursodiol 450 mg/kg treatment group, weight loss was noted during the first and third weeks of ursodiol administration (Fig 2C). The ursodiol 150 mg/kg treatment group did not have significantly different weights compared to the untreated mice (Fig 2B). No other clinical signs were noted during ursodiol administration. In general, mice tolerated daily gavage with diminishing stress related to the procedure over the course of the experiment.
Ursodiol alters the gut microbial community structure in conventional mice
Paired fecal samples were collected from the same mice serially over the 21-day experiment to facilitate simultaneous evaluation of the microbial community structure and bile acid metabolome. Mice were sacrificed at day 21 and gut content from the ileum and cecum were collected at necropsy, and stored for later analysis. 16S rRNA gene sequencing was performed to define the gut microbiota.
Within the ileum, the gut microbial community structure of the ursodiol 150 and ursodiol 450 treatment groups were significantly different from pretreatment (Fig 3A; AMOVA; p = 0.02 and p = 0.009, respectively). Bar plots were utilized to visualize relative composition of ileal microbial communities, which are different across each ursodiol dose and compared to pretreatment (Fig 3C). However, the overall gut microbial community structure between treatments was not significantly different based on AMOVA. A biplot of the correlating OTUs towards PCoA axes 1 and 2 revealed OTU 109 (classified as Lachnospiraceae unclassified) as the only significant member contributing to ileal microbial community alterations seen with ursodiol treatment (S1A Fig and Fig 3C).
Within the cecum, the gut microbial community structure of the ursodiol 450 treatment group was significantly different from pretreatment (Fig 3B; AMOVA; p = 0.002). Bar plots were utilized to visualize relative composition of cecal microbial communities, which were marginally different across each ursodiol dose and compared to pretreatment (Fig 3D). In accordance, the overall gut microbial community structure between treatments was not significantly different based on AMOVA. A biplot of the top 10 OTUs towards PCoA axes 1 and 2 revealed OTU 86 (classified as Lachnospiraceae unclassified) as a significant member contributing to cecal microbial community alterations seen with ursodiol treatment (S1B Fig).
Within the feces, the gut microbial community structures of all ursodiol treatment groups were significantly different from pretreatment (S1C Fig; AMOVA; p = 0.004, p <0.001, p <0.001, respectively). A biplot of the top 10 correlating OTUs towards PCoA axes 1 and 2 revealed OTU 24 (classified as Ruminococcaceae) as a significant member contributing to fecal microbial community alterations seen with ursodiol treatment over time, together with eight OTUs correlating in the opposing direction (S1C Fig).
Ursodiol alters the bile acid pool in conventional mice
To determine the extent to which ursodiol alters the bile acid pool, an assessment of 47 bile acids was conducted on paired ileal, cecal, and fecal samples used in the preceding microbial community structure evaluation. In addition to NMDS ordination and comprehensive bile acid profile heatmaps, Random Forest analysis was applied to identify bile acids that are important for distinguishing between ursodiol treatments.
Ileal content bile acid profiles revealed segregation of the ursodiol 150 and ursodiol 450 treatments from pretreatment bile acid profiles (Fig 4A). A total of 35 distinct bile acids were quantified within murine ileal content (Fig 4C). When assessing the ileal bile acid profile, 3 bile acids (TUDCA, tauro-β-muricholic acid (TβMCA), and TCA) were significantly different compared to pretreatment using a two-way ANOVA followed by Dunnett's multiple comparisons post hoc test. For TUDCA, all three ursodiol treatments were significantly different from pretreatment (all treatments, p = 0.0001). For TβMCA, only the ursodiol 50 treatment was significantly different from pretreatment (p = 0.0001). For TCA, all three ursodiol treatments were significantly different from pretreatment (ursodiol 50, p = 0.0002; ursodiol 150, p = 0.0040; and ursodiol 450, p = 0.0374). Within the ileal content, the two highest MDA scores from the Random Forest analysis were UDCA and TUDCA, with high concentrations of both these bile acids in the ursodiol 450 treatment group (S2A Fig). A Kruskal-Wallis one-way ANOVA test followed by Dunn's multiple comparisons test was used to calculate the significance of an individual bile acid within each ursodiol treatment group compared to pretreatment. For ileal content, UDCA, TUDCA, GUDCA, and LCA were significantly higher in the ursodiol 450 treatment compared to pretreatment (p = 0.0007, p = 0.0013, p = 0.0022, and p = 0.0218, respectively; S3A Fig).
Cecal content bile acid profiles revealed segregation of the ursodiol treatments from pretreatment bile acid profiles (Fig 4B). A total of 38 distinct bile acids were quantified within murine cecal content (Fig 4D). When assessing the cecal bile acid profile, 2 bile acids (TUDCA and TβMCA) were significantly different compared to pretreatment using a two-way ANOVA followed by Dunnett's multiple comparisons post hoc test. For TUDCA, the ursodiol 50 and 450 treatment groups were significantly different from pretreatment (both treatments, p = 0.0001). For TβMCA, only the ursodiol 50 treatment was significantly different from pretreatment (p = 0.0219). The two highest MDA scores from the Random Forest analysis were TCDCA and TUDCA, with high concentrations of both these bile acids in the ursodiol 450 treatment group (S2B Fig). A Kruskal-Wallis one-way ANOVA test followed by Dunn's multiple comparisons test was used to calculate the significance of an individual bile acid within each ursodiol treatment group compared to pretreatment. For cecal content, LCA, 3-ketocholanic acid, and α-muricholic acid (αMCA) were significantly higher in the ursodiol 150 treatment compared to pretreatment (p = 0.0143, p = 0.0255, and p = 0.0280, respectively; S3B Fig).
Serial fecal bile acid profiles revealed distinct segregation of the ursodiol treatments from each other and from pretreatment bile acid profiles (Fig 5A). A total of 38 distinct bile acids were quantified within murine feces (Fig 5B). When assessing fecal bile acid profiles, 4 bile acids, UDCA, TUDCA, MCA, and TβMCA were significantly different compared to pretreatment using a two-way ANOVA followed by Dunnett's multiple comparisons post hoc test performed at each sampling day (Day 5, 8, 10, 12, and 15). Within the ursodiol 50 treatment group, UDCA and TUDCA were significantly different from pretreatment only at Day 8
(p = 0.0296 and p = 0.0001, respectively). Within the ursodiol 150 treatment group, UDCA and TUDCA were significantly different from pretreatment only at Day 15 (p = 0.0001 and p = 0.0107, respectively). Within the ursodiol 450 treatment group, UDCA was significantly different from pretreatment at Days 5 (p = 0.0020), 8 (p = 0.0007), 10 (p = 0.0044), and 15 (p = 0.0001). TUDCA was also significantly different from pretreatment in the ursodiol 450 group at all sampling days (p = 0.0001 for all days). Additionally, MCA and TβMCA in the ursodiol 450 treatment group on Day 15 were significantly different from pretreatment (p = 0.0001 for both).
Within serum, aside from a single ursodiol 50 treatment serum sample, the ursodiol treatments segregated distinctly from the pretreatment samples with ursodiol treatments clustering together at day 21 (S4A Fig). A total of 35 distinct bile acids were quantified within murine serum samples (S4B Fig). The two highest MDA scores from the Random Forest analysis were TUDCA and UDCA, with high concentrations of both these bile acids in the ursodiol 450 treatment group (S4C Fig). A Kruskal-Wallis one-way ANOVA test followed by Dunn's multiple comparisons test was used to calculate the significance of an individual bile acid within each ursodiol treatment group compared to pretreatment. UDCA, TUDCA, GUDCA, and LCA were significantly higher in ursodiol 450 treatment compared to pretreatment (p = 0.0008, p = 0.0007, p = 0.0230, and p = 0.0065, respectively; S4D Fig).
Discussion
This study is the first to provide a comprehensive examination of how exogenously administered ursodiol shapes the gastrointestinal ecosystem in conventional mice. By evaluating the gut microbial community structure and bile acid pool throughout the gastrointestinal tract and in feces, we obtained a biogeographical view of the ursodiol mediated ecological impact. Our findings indicate distinct ursodiol mediated alterations in the ileum, cecum, and feces, likely attributed to biogeographical differences in the intestinal physiology and microbial ecology of each region [42]. Our results are in accordance with studies of ursodiol mediated microbiota changes observed in disease models/states such as murine nonalcoholic fatty liver disease (high fat diet models) and primary biliary cirrhosis in humans [43,44].
Dose dependent ursodiol mediated alterations in the gut microbial community structures were observed in the ileum and cecum (Fig 3). In both the ileum and cecum, members of the Lachnospiraceae Family (Phylum Firmicutes, Class Clostridia) significantly contributed to the observed alterations (S1 Fig). Lachnospiraceae are Gram-positive obligate anaerobes, which are highly abundant in the digestive tracts of many mammals, including humans and mice [45,46]. Members of the Lachnospiraceae have been linked to obesity [47][48][49] and may provide protection from colon cancer [50,51], mainly due to their association with butyric acid production [52], which is essential for microbial and host cell growth [45]. Additionally, monocolonization of germfree mice with a Lachnospiraceae isolate resulted in greatly improved clinical outcomes and partial restoration of colonization resistance against the enteric pathogen Clostridioides difficile [53]. Collectively, these observations emphasize the varied disease states in which members of the Lachnospiraceae family are important and demonstrate potential applications of ursodiol mediated Lachnospiraceae expansion to precisely modulate microbially mediated disease states.
Ursodiol administration resulted in global increases of several key bile acid species, namely UDCA, TUDCA, GUDCA, LCA, TCA, and TβMCA. Each of these bile acids can interact with bile acid activated receptors, including TGR5 and FXR, and thus is able to regulate and alter host physiologic responses [3][4][5]. Activation of either bile acid receptor has distinct physiologic consequences. For example, FXR regulates bile acid, glucose, and lipid homeostasis, and insulin signaling and immune responses [3,4]. TGR5 regulates energy homeostasis, thermogenesis, insulin signaling, and inflammation [3,4]. In terms of innate immune regulation, the overall response of FXR and TGR5 activation is maintenance of a tolerogenic phenotype within the intestine and liver (recently reviewed in Fiorucci et al.) [4]. Each bile acid species differs in its agonistic or antagonistic effects and affinity for FXR and TGR5 (see Table 1). This intensifies the complexity of unraveling the cumulative host physiologic responses resulting from ursodiol mediated bile acid metabolome alterations.
Additionally, bile acid species can directly and indirectly, through activation of the innate immune response, alter the gut microbial composition [3,4], further adding to the interconnectedness and complexity of the gut microbiota-bile acid-host axis. Evaluation of the host intestinal transcriptome may elucidate local ursodiol mediated impacts on host physiology and complete our examination of the gut microbiota-bile acid-host axis. Acquisition of such data, in combination with the comprehensive microbiome and bile acid metabolome data obtained in this study, could be integrated using bioinformatics and mathematical modeling to further illustrate these intricate interactions between the gut microbiota, bile acids, and the host in an ursodiol altered intestinal ecosystem.
During ursodiol administration, significant weight loss was noted in the ursodiol 50 and ursodiol 450 treatments compared to untreated mice (Fig 2). We speculate that this weight loss was attributed to bile acid TGR5 activation resulting in alterations to energy metabolism. A similar pathophysiology of weight loss attributed to bile acid activation of TGR5 is documented in patients following bariatric surgery [59]. Circulating bile acids can activate TGR5 receptors within enteroendocrine cells, skeletal muscle, and brown adipose tissue [60]. Aside from TGR5 mediated glucagon-like peptide-1 (GLP-1) release, which can improve glycemic control by increasing insulin secretion and sensitivity [61], TGR5 can facilitate weight loss by increasing resting energy expenditure through promotion of the conversion of inactive thyroxine (T4) into active thyroid hormone (T3) [62]. In our study, global large-scale increases in TUDCA, a TGR5 receptor agonist [55], were observed and may explain why weight loss occurred in our ursodiol treated mice. It is unclear why weight loss was not observed in the ursodiol 150 treatment group. Further investigation into TGR5 activation and subsequent modulation of energy expenditure with ursodiol administration would be of interest.
In this study, we reported that daily ursodiol administration in conventional mice significantly impacts the gastrointestinal ecosystem, with alterations in the microbial composition and bile acid pool. Such substantial ecological changes are likely to modify host physiology. Ecological succession after ursodiol discontinuation was not evaluated in the present study, thus how durable the ursodiol mediated changes in the mouse gastrointestinal system are remains unclear. Therefore, although ursodiol is generally well tolerated and safe to administer for various hepatic diseases [6,[21][22][23][24][25][26][27][28][29], the long-term consequences of ursodiol | 2021-02-20T06:16:15.869Z | 2021-02-18T00:00:00.000 | {
"year": 2021,
"sha1": "a6371e60265a4b3d5b03785225eab98524bc4fe9",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0246161&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fb15c2efa5ef3db2ba7659282575ed848edeb1ce",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
251757231 | pes2o/s2orc | v3-fos-license | Home Oral Care Domiciliary Protocol for the Management of Dental Erosion in Rugby Players: A Randomized Clinical Trial
People performing regular physical activity are at high risk of dental erosion, especially in cases of high sports drink intake. Biomimetic hydroxyapatite-based oral hygiene products, like toothpastes and mouthwashes, have been investigated in recent years for their remineralizing activity on the teeth. The aim of the present study was to evaluate the efficacy of two different oral hygiene protocols, respectively consisting of the combination of a hydroxyapatite-based toothpaste plus mouthwash (Trial group) or the toothpaste alone (Control Group). At baseline (T0), as well as at 15 days (T1), 30 days (T2), and 90 days (T3), the following clinical indexes were assessed: Basic Erosive Wear Examination (BEWE), Schiff Air Index (SAI), Visual Analogue Scale (VAS), Plaque Index (PI), and Bleeding Index (BI). In general, for all the indexes assessed, a progressive intragroup reduction was noticed from baseline to the subsequent timepoints, with no intergroup differences. Accordingly, the use of the hydroxyapatite-based toothpaste, alone or in combination with the mouthwash containing hydroxyapatite as well, is an effective method for the domiciliary management of dental erosion in physically active individuals like rugby players.
Introduction
Enamel demineralization is one of the major concerns in dentistry, and several studies have been conducted on this topic, both in children and in adults [1,2]. The appearance of white spot lesions (WSLs) on the tooth surface is the first sign of dental caries. This alteration of the enamel structure, besides being inaesthetic, might also be responsible for dental hypersensitivity [3]. Considering its supersaturation with calcium (Ca 2+ ) and phosphate (PO 4 3− ) ions, saliva has a protective role against enamel erosion, with these ions being able to diffuse into deficient lesions, thus promoting remineralization [4]. However, this process is insufficient to completely repair the enamel, therefore additional remineralizing agents are required [5]. Until now, fluoride has been extensively used for both caries prevention and the remineralization of early lesions; however, some concerns are related to its use [6,7]. For instance, its efficacy is decreased when the oral cavity pH is under 4.5; moreover, an adequate concentration of Ca 2+ and PO 4 3− is necessary for fluoride to exert its action [4]. Additionally, only the outer layer of the enamel takes advantage of this remineralizing process, whereas the core of the lesion is not altered [5]. Finally, another shortcoming associated with fluoride use is the risk of fluorosis and toxicity [8]. On the basis of these considerations, new agents have been proposed, and one of the most recent technologies is represented by the use of biomimetic hydroxyapatite (HAP) in the form of microclusters or in nanocrystalline form [9,10]. This substance is chemically similar to the apatite constituting the human enamel crystals, and previous research has demonstrated the deposition of biomimetic hydroxyapatite on the tooth [11].
Dental erosion is a form of tooth wear that can be found in both deciduous and permanent dentition; its prevalence has been increasing in recent years, and it involves a varied etiology: dietary habits, lifestyle, drugs, exposure to acidic substances, and elevated gastric acidity due to comorbidities. Regular dental visits can help in preventing this condition. Treatment should involve not only dental specialists but also medical specialists, given that it is a multifactorial issue [12]. Almost half of professional players have been found to suffer from dental erosion, with a significant correlation with sports drink use [13]. In detail, rugby players exhibit poor oral conditions, with a higher prevalence of DMF scores, gingivitis, and a dysbiotic oral microbiota with a prevalence of Streptococcus [14]. Current guidelines agree on adopting a multidisciplinary approach to the management of dental erosion in these kinds of patients [15].
To date, the impact of sports training on oral health has already been addressed in the literature [16]. However, due to the lack of studies investigating approaches to the management of dental erosion in athletes, the aim of the present study was to compare the efficacy of two different domiciliary oral hygiene protocols based on biomimetic hydroxyapatite in physically active people like rugby players. The null hypothesis of the study was that no significant differences occurred between the two different protocols tested.
Trial Design
This was a single-center, parallel-group, randomized clinical trial with a 1:1 allocation ratio, approved by the Unit Internal Review Board (registration number: 2021-0908) and registered on Clinicaltrials.gov (NCT number: NCT05140538).
Participants
Patients registered for periodontal care at the Unit of Dental Hygiene, Section of Dentistry, Department of Clinical, Surgical, Diagnostic and Pediatric Sciences of the University of Pavia (Pavia, Italy) were enrolled for the study. The aim of the study was explained to patients who fulfilled the inclusion criteria, and they were asked to participate. After signing the informed consent, they were enrolled from December 2021 to January 2022. The study ended in May 2022. Both the interventions and the outcomes assessment were conducted at the same Unit.
The inclusion criteria were being aged between 18 and 70 years; being a rugby player that used a mouthguard; and being willing to participate in the study. The following were the exclusion criteria: patients with a cardiac pacemaker; patients suffering from neurological disorders; patients suffering from psychological disorders; pregnant or breastfeeding women.
Interventions and Outcomes
At the first appointment (T0), patients were asked to sign the informed consent document to participate in the study. An instructed operator collected the following periodontal clinical indexes by means of a probe (UNC probe 15; Hu-Friedy, Chicago, IL, USA): Plaque Index (PI) (values from zero to three, respectively representing no plaque, thin plaque layer, moderate plaque layer, and abundant plaque layer) [17]; Bleeding on Probing (BOP) (values from zero to three, respectively representing no inflammation with no bleeding, mild inflammation with no bleeding, moderate inflammation with bleeding on probing, and severe inflammation with tendency to spontaneous bleeding) [11]; BEWE index assessed with the Intact Tooth ® smartphone application (values from zero to three, respectively representing no erosion, initial loss of surface texture, hard tissue loss less than 50%, and hard tissue loss more than 50%) [17]; Visual Analogue Scale (VAS) for dental sensitivity (values from zero to 10 reported by the patient) [18]; Schiff Air Index (values from zero to three, respectively representing no response to air stimulus, response to air stimulus but with no request of discontinuation, response to air stimulus and request of air discontinuation, and response to air stimulus perceived as painful and request of discontinuation) [18]. Then, a professional supragingival and subgingival oral hygiene procedure was conducted using a piezoelectric instrument (Multipiezo, Mectron S.p.a, Carasco, Italy) and Gracey curettes (Hu-Friedy, Chicago, IL, USA).
Participants received verbal instructions on domiciliary oral hygiene with a soft-bristled electric toothbrush to be used twice a day for 2 min. At this stage, they were divided into two groups according to the assigned home treatment: in the Trial group, Biorepair Total Protective Repair toothpaste and Biorepair Triple Action Mouthwash were used twice a day; in the Control group, only Biorepair Total Protective Repair toothpaste was used twice a day. The planned follow-ups were conducted after 15 days (T1), 30 days (T2), and 90 days (T3). At each appointment, the periodontal examination with the collection of clinical indexes was performed.
Participants were recommended to regularly and correctly clean their teeth [19]. The composition of the products that were tested is shown in Table 1.
Sample Size
The sample size (Alpha = 0.05; Power = 80%) for two independent study groups and a continuous primary endpoint was calculated considering the "BEWE index" variable. The expected mean was supposed to be 1.07 with a standard deviation of 1.82, and the expected difference between the means was supposed to be 1.15; therefore, 20 patients per group were required for the study [20].
Randomization and Blinding
By means of a block randomization table, the data analyst provided a randomization sequence, considering a permuted block of 40 total participants. An operator enrolled the participants and executed the professional oral hygiene procedures, collecting all the abovementioned indexes. On the basis of previously prepared sequentially numbered, opaque, sealed envelopes (SNOSE), an assistant assigned each participant to the respective group, concealing the products for home use. The data analyst was blinded to the allocation and outcomes.
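A permuted-block sequence of this kind can be generated with a few lines of base R, as sketched below; the seed is arbitrary, and the sketch assumes a single block of 40 participants with a 1:1 ratio, as described above.

```r
# Minimal permuted-block randomization sketch (base R); the seed is arbitrary.
set.seed(2021)
allocation <- sample(rep(c("Trial", "Control"), each = 20))  # one permuted block of 40
table(allocation)     # 20 per arm by construction
head(allocation, 8)   # order in which the sealed envelopes would be prepared
```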
Statistical Methods
Data were submitted for statistical analysis using R Software (R version 3.1.3, R Development Core Team, R Foundation for Statistical Computing, Wien, Austria). For each group and variable, descriptive statistics (mean, standard deviation, minimum, median, and maximum) were calculated. BEWE, SAI, and VAS were calculated as pure values; BOP and PI were calculated as percentages. Data normality was assessed using the Kolmogorov-Smirnov test. For each variable, an ANOVA test was performed, detecting significant differences among the groups. Lastly, inferential comparisons using a post hoc Tukey test were performed.
Significance was predetermined at p < 0.05 for all the tests performed.
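For each clinical index, the workflow described above can be reproduced along the lines of the R sketch below; df, with columns score (e.g., the BEWE value), group, and timepoint, is a hypothetical long-format data frame, and the exact model formula used in the study is not reported, so the group-by-timepoint ANOVA shown here is an assumption.

```r
# Hedged sketch of the per-index analysis described above (group and timepoint as factors).
ks.test(df$score, "pnorm", mean(df$score), sd(df$score))   # Kolmogorov-Smirnov normality check

fit <- aov(score ~ group * timepoint, data = df)           # ANOVA across groups and timepoints
summary(fit)
TukeyHSD(fit)                                              # post hoc pairwise comparisons
# Letter-based displays (as used in the Results) can be derived from the Tukey
# output with, e.g., multcompView::multcompLetters4(fit, TukeyHSD(fit)).
```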
Participant Flow and Baseline Data
A total of 40 patients fulfilled the inclusion criteria and were asked to participate in the study. They all agreed to participate and received the allocated interventions.
No patient was excluded from the analysis. The flow chart of the study is shown in Figure 1.
The study sample consisted of male patients, showing a mean age of 26.28 ± 5.55 years (20 patients for the Trial group, with a mean age of 26.9 ± 6.01; 20 patients for the Control group, with a mean age of 25.65 ± 5.12). Descriptive and inferential statistics for the five variables tested are reported in the following sections. Intergroup and intragroup comparisons are shown using a documented letter-based comparison system so that the presence of the same letter/letters for compared means shows that no significant differences are present between them [21]. Additionally, the letter-based significance description has been widely used in previous recent reports [22][23][24].
Basic Erosive Wear Examination (BEWE)
The results of the BEWE examination are shown in Table 2 and Figure 2. For both groups, a decrease in the BEWE scores was observed across all the time frames of the study. No significant intergroup differences were found (p > 0.05). A significant decrease was found between the T2 and T3 time frames for both groups (p < 0.05).
Schiff Air Index (SAI)
The results of the SAI are shown in Table 3 and Figure 2. A reduction of the tested index was assessed along the time frames of the study. A significant intragroup reduction can be found from T1 to T2 for both the groups, and from T2 to T3 only for Trial Group (p < 0.05). No intergroup differences can be found among the groups except that during T3, the Trial Group significantly differs from all the time frames (p < 0.05).
Visual Analogue Scale for Dental Sensitivity (VAS)
The results of the VAS are shown in Table 4 and Figure 2. The VAS scores decreased in both groups during the study. A significant intragroup reduction was found only between T2 and T3 for the Control Group, and between all the time frames for the Trial Group (p < 0.05). Regarding the intergroup differences, a significant difference was found between the two groups only for the T3 time frame (p < 0.05).
Plaque Index (PI%)
The results of the PI are shown in Table 5 and Figure 2. A reduction in the PI scores was assessed in both groups. Lower values were found in the Control Group for all the time frames of the study; however, no significant intergroup or intragroup differences were found (p > 0.05).
Bleeding Index (BI%)
The results of the BI are shown in Table 6 and Figure 2. The mean BI values decreased significantly between T0 and T1 in both groups (p < 0.05). No significant intergroup differences were found at any time frame (p > 0.05).
Discussion
Dental erosion consists of the loss of enamel and dentine structure, which are chemically dissolved by exogenous or endogenous acids. One of the groups most affected by tooth erosion is people who undertake regular physical activity and consume large amounts of sports drinks [25,26]. Such beverages mainly contain citric acid, an organic acid which lowers the salivary pH and dissolves the calcium ions constituting the enamel [27,28]. Moreover, other factors related to the specific sport may also have an influence; for instance, water sports professionals are particularly prone to erosive lesions due to exposure to chlorinated water [29].
In their recent systematic review with meta-analysis aimed at establishing a relationship between dental erosion and regular physical activity with or without sports drink consumption, Nijakowski and colleagues [25] included 16 studies reporting the prevalence of erosion in physically active individuals and determined that the aggregate prevalence was approximately 46.55% [95% CI: 36.10-57.15%]. Similarly, in this group, the pooled frequency of sports drink consumption was estimated to be around 56.02% [95% CI: 29.70-80.64%]. Altogether, physically active individuals who report regular consumption of sports drinks have a more than 2.5-fold increase in the odds of developing erosive lesions; in general, almost half of the people who practice sports suffer from tooth erosion, and more than half frequently consume sports beverages. Despite the limitations of their review, such as the heterogeneity of the included studies, which encompassed different age groups and various sport disciplines, the authors concluded that regular physical activity was associated with an increased risk of dental erosion, especially in combination with frequent consumption of sports drinks.
The aim of the present study was to compare the efficacy of two different domiciliary oral hygiene protocols based on biomimetic hydroxyapatite in the management of dental erosion in rugby players. The null hypothesis was partially rejected, considering that significant differences were found. For both groups, respectively assigned to the combination of hydroxyapatite-based toothpaste plus mouthwash and to toothpaste alone, a significant decrease in the BEWE scores was found between T2 and T3, whereas no significant intergroup differences were found. Considering the Schiff Air Index (SAI), a significant intragroup reduction was found between T1 and T2 for both groups and between T2 and T3 only for the Trial Group; conversely, no intergroup differences were found except for T3 of the Trial Group. Regarding the VAS scores, they decreased in both groups during the study: a significant intragroup reduction was found only between T2 and T3 for the Control Group and between all the time frames for the Trial Group. Additionally, a significant intergroup difference was found between the two groups only at the T3 time frame.
In addition to the abovementioned parameters, indexes related to oral hygiene, such as the Plaque Index (PI) and the Bleeding Index (BI), were also assessed. A reduction in the PI scores was observed in both groups. Lower values were found in the Control Group at all the time frames of the study; however, no significant intergroup or intragroup differences were found. Finally, the mean BI values decreased significantly between T0 and T1 in both groups, and no significant intergroup differences were found at any time frame.
On the basis of the abovementioned results, both the combination of the hydroxyapatite-based toothpaste plus mouthwash and the toothpaste alone were generally effective in improving enamel erosion and oral hygiene parameters. To the best of our knowledge, this is the first study evaluating the action of biomimetic hydroxyapatite on dental erosion in rugby players; therefore, no direct comparisons of the results obtained can be made.
Previous studies have demonstrated the deposition of the biomimetic hydroxyapatite contained in products for oral care. In particular, in a previous work by our group, the deposition of calcium, phosphorus, and silicon ions on the surfaces of bulk-filled polymeric composite resins in the oral environment after one month of daily oral hygiene with a toothpaste containing microRepair® (Zn-carbonate hydroxyapatite) was demonstrated by means of SEM and EDS analyses [30]. In the study by Lelli and colleagues [31], two groups of patients between 18 and 75 years old were assigned to the use of a Zn-CHA nanocrystal-based toothpaste (experimental group) or a potassium nitrate/sodium fluoride toothpaste (active control group) for 8 weeks. At the end of this period, extractions were performed in five subjects per study group. Negative controls were represented by two subjects treated with a non-specified fluoride toothpaste. Teeth were then analyzed by means of Scanning Electron Microscopy together with elemental analysis, X-ray diffraction analysis, and infrared analysis. The results showed that the use of the Zn-CHA nanocrystal toothpaste led to remineralization/repair of the enamel surface, by deposition of a hydroxyapatite-rich coating. Conversely, the use of both the potassium nitrate/sodium fluoride and the non-specified fluoride toothpastes did not appreciably alter the enamel surface. Although our study was based on a different methodology, the results obtained by our group confirm those mentioned above.
The main limitation of the present work is the assessment of clinical indexes only, with no morphological or chemical analyses on extracted teeth. Moreover, no negative controls consisting of other chemical compounds (such as fluoride) were considered. Future studies with longer follow-ups and taking into account further remineralizing agents are thus required to confirm the results obtained here. In particular, SEM and EDS analyses are necessary to demonstrate the actual deposition of hydroxyapatite crystals on the tooth surfaces in order to strengthen the findings obtained here clinically.
Conclusions
People practicing sports, such as rugby players, are exposed to the risk of tooth erosion due to sports drink intake. The findings of the present study show that the combination of a hydroxyapatite-based toothpaste plus mouthwash, or the toothpaste alone, could be effective in remineralizing enamel, as well as in improving oral hygiene indexes.
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, approved by the Unit Internal Review Board (2021-0908), and registered on clinicaltrials.gov (NCT05140538).
Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: All data are available upon request to corresponding authors. | 2022-08-24T15:02:17.138Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "443c289c5bcf0c6cd9421d064bae52181b98caa7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/11/16/4893/pdf?version=1660988045",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4387b0ba84e85280f808569e2df896a353313571",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3308900 | pes2o/s2orc | v3-fos-license | Upregulation of arylsulfatase B in carotid atherosclerosis is associated with symptoms of cerebral embolization
The aim of this study was to identify genes for which the expression within carotid atherosclerosis was reproducibly associated with the symptoms of cerebral embolization. Two publically available microarray datasets E-MEXP-2257 and GSE21545 were analysed using GeneSpring 11.5. The two datasets utilized a total of 22 and 126 carotid atherosclerosis samples, obtained from patients with and without symptoms of cerebral embolization, respectively. To assess whether the findings were reproducible we analysed carotid atherosclerosis samples from another 8 patients with and 7 patients without symptoms of cerebral embolization using real-time PCR. In vitro studies using VSMC were performed to assess the functional relevance of one of the validated genes. We identified 1624 and 135 differentially expressed genes within carotid atherosclerosis samples of symptomatic compared to asymptomatic patients using the E-MEXP-2257 and GSE21545 datasets, respectively (≥1.15-absolute fold-change, P < 0.05). Only 7 differentially expressed genes or 0.4% (7/1,752) were consistent between the datasets. We validated the differential expression of ARSB which was upregulated 1.15-fold (P = 0.029) in atherosclerosis from symptomatic patients. In vitro incubation of VSMCs with the ARSB inhibitor L-ascorbic acid resulted in marked upregulation of SIRT1 and AMPK. This study suggests that ARSB may represent a novel target to limit carotid embolization.
The prevalence of carotid artery stenosis is approximately 4% to 8% in adults aged 50 to 79 years [1][2][3] . Carotid atherosclerosis is estimated to be responsible for ~20% of all ischemic strokes 4,5 . Atherosclerotic plaque rupture and cerebral embolization is believed to be the mechanism by which carotid atherosclerosis leads to cerebral symptoms, such as transient ischemic attack and stroke 1 . However, the identification of the so-called "vulnerable plaque" has been elusive. The first whole-genome gene expression study of stroke was published a decade ago 6 . The authors at that time profiled peripheral blood mononuclear cells of stroke patients and compared them with those of healthy donors 6 . Almost 200 differentially expressed genes were identified; however, those assessed had limited diagnostic value with estimated specificity and sensitivity less than 80% 6 . A number of previous studies have examined differential gene expression in symptomatic carotid atherosclerosis; however, findings have not been consistent across the studies [7][8][9][10][11][12][13][14][15] . All these results question the value of differential gene expression in representing true molecular determinants of stroke. The current study re-examined previously published microarray datasets of carotid artery atherosclerosis to gain further insight. An attempt has been made to identify consistent and reproducible differentially expressed genes using publically available microarray datasets that utilized carotid endarterectomy samples from symptomatic and asymptomatic patients.
Methods
Data preparation. Two human microarray datasets were included in this study to establish a consensus set of differentially expressed genes in carotid atherosclerosis associated with recent symptoms of cerebral embolization 12,13 . Suitable datasets were required to be publicly available, to assess whole-genome gene expression within carotid plaque biopsies from patients both with and without symptoms of cerebral embolization, and to use commercially available microarrays of the same chip platform. The publicly available raw array data were downloaded from the Gene Expression Omnibus (GEO) provided by the National Center for Biotechnology Information (NCBI) or ArrayExpress provided by the European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI) public repositories.
Re-analysis of the original array data. In order to identify differentially expressed genes in carotid atherosclerotic tissue between symptomatic and asymptomatic patients, we analysed each original microarray dataset as described previously 16 . Briefly, the raw data matrix downloaded from the GEO or ArrayExpress public repositories was uploaded into GeneSpring GX 11.5 (Agilent Technologies Pty Ltd) and the standard normalization procedures recommended for the Affymetrix GeneChip arrays were followed. Expression values were normalized using percentile shift normalization with default settings, which included normalization to the 75th percentile. The expression profile of carotid atherosclerosis samples obtained from participants with symptoms of cerebral embolization was compared to that of participants without symptoms. Since all samples represented advanced atherosclerotic tissue, only small differences in gene expression were expected between patients with and without symptoms of cerebral embolization. Previous evidence suggested that small changes in gene expression are able to induce significant phenotypic differences 17 . In line with this, genes with ≥1.15 absolute fold differential expression between groups and a P-value < 0.05 by the non-parametric Mann-Whitney U-test, with no mathematical correction for multiple testing, were considered to be differentially expressed.
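As a concrete illustration of the differential-expression criterion above (≥1.15 absolute fold change combined with an uncorrected Mann-Whitney P < 0.05), a minimal sketch of the filtering step is given below. The actual analysis was performed in GeneSpring; the expression matrix, file name, and sample identifiers here are hypothetical.

# Hedged sketch of the fold-change / Mann-Whitney filter described above.
# The real analysis was done in GeneSpring GX 11.5; the data layout here is assumed.
import pandas as pd
from scipy.stats import mannwhitneyu

expr = pd.read_csv("normalized_expression.csv", index_col=0)   # genes x samples (hypothetical)
symptomatic = ["S1", "S2", "S3", "S4"]                          # hypothetical sample IDs
asymptomatic = ["A1", "A2", "A3", "A4"]

results = []
for gene, row in expr.iterrows():
    s, a = row[symptomatic].values, row[asymptomatic].values
    fc = s.mean() / a.mean()                 # fold change on the linear scale
    abs_fc = max(fc, 1.0 / fc)               # absolute fold change
    _, p = mannwhitneyu(s, a, alternative="two-sided")
    if abs_fc >= 1.15 and p < 0.05:          # no multiple-testing correction, as in the text
        results.append((gene, fc, p))

de_genes = pd.DataFrame(results, columns=["gene", "fold_change", "p_value"])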
Validation of microarray findings. Using total RNA obtained from atherosclerosis removed from the proximal internal carotid (PIC) arteries of 8 patients with and 7 patients without symptoms of cerebral embolization, we assessed the validity of microarray findings (validation group). Total RNA was extracted from PIC biopsies stored in RNAlater (Sigma-Aldrich) at −80 °C using the RNeasy Mini Kit (Qiagen) according to the manufacturer's instructions. Symptomatic patients presented with focal neurological symptoms related to their anterior cerebral circulation, such as transient ischemic attack (TIA) or stroke, within 6 weeks of surgery; asymptomatic patients presented with no history of neurological symptoms 1 . A quantitative real-time reverse transcription PCR (qRT-PCR) assay was performed to assess the relative expression of arylsulphatase B (ARSB), as this was consistently differentially expressed in the array analyses. The relative expression of ARSB in each sample was calculated by using the Concentration-Ct-standard curve method and normalized using the average expression of the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) gene for each sample using the Rotor-Gene Q operating software version 2.0.24 (Qiagen). GAPDH was chosen as the "housekeeping" gene since analyses showed its expression to be similar in carotid biopsies from symptomatic and asymptomatic patients. The QuantiTect SYBR Green one-step RT-PCR Kit (Qiagen) was used according to the manufacturer's instructions with 20 ng of total RNA as template. All reactions were independently repeated in duplicate to assess the repeatability of the results, and the mean of the two values for each sample was used for analyses. The QuantiTect Primer Assays QT00026684 and QT00079247 (Qiagen) were used for the ARSB and GAPDH assessments, respectively. The Mann-Whitney U test was performed to identify differences in expression levels of ARSB between carotid atherosclerosis biopsies of patients with and without symptoms of cerebral embolization. Data were presented as box-and-whisker plots with median and interquartile range with maximum and minimum data points (whiskers) for relative expression. Statistical significance was defined at the conventional 5% level. All computations were performed using the Stata/MP 13.1 statistical software (StataCorp LP, USA). Ethical approval was granted by The Townsville Hospital (TTH) and Health Services Committees, written informed consent was obtained from each participant, and the protocols conformed to the ethical guidelines of the Declaration of Helsinki.
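The relative-expression calculation described above (target quantity interpolated from a standard curve and normalized to GAPDH, followed by a Mann-Whitney comparison) can be summarized by the following sketch; the standard-curve parameters and Ct values are hypothetical placeholders, and the original quantification was performed in the Rotor-Gene Q software with statistics in Stata.

# Hedged sketch of standard-curve quantification with GAPDH normalization.
# Slopes, intercepts, and Ct values below are hypothetical placeholders.
from scipy.stats import mannwhitneyu

def conc_from_ct(ct, slope, intercept):
    # Standard curve: Ct = slope*log10(conc) + intercept  ->  conc = 10**((Ct - intercept)/slope)
    return 10.0 ** ((ct - intercept) / slope)

def relative_expression(ct_arsb, ct_gapdh, curve_arsb=(-3.3, 38.0), curve_gapdh=(-3.4, 36.5)):
    # Normalize the ARSB quantity to the GAPDH quantity for the same sample
    return conc_from_ct(ct_arsb, *curve_arsb) / conc_from_ct(ct_gapdh, *curve_gapdh)

# Hypothetical mean-of-duplicate Ct values (ARSB, GAPDH) for the two patient groups
symptomatic_ct = [(24.1, 18.2), (23.8, 18.0), (24.5, 18.4)]
asymptomatic_ct = [(25.0, 18.1), (25.3, 18.3), (24.9, 18.0)]

sympt = [relative_expression(a, g) for a, g in symptomatic_ct]
asymp = [relative_expression(a, g) for a, g in asymptomatic_ct]

stat, p = mannwhitneyu(sympt, asymp, alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p:.3f}")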
Cell culture. We investigated the effects of chondroitin sulphate, a natural substrate for ARSB, and L-ascorbic acid, an ARSB inhibitor, on the expression of sirtuin 1 (SIRT1) and protein kinase AMP-activated catalytic subunit alpha 1 (PRKAA1 or AMPK, 5′-prime-AMP-activated protein kinase) under inflammatory conditions in vitro. These two ARSB-associated bioactive molecules were specifically selected to test their individual and combined ability to upregulate the SIRT1/AMPK metabolic pathway that exerts strong anti-inflammatory, anti-atherogenic, and plaque-stabilizing effects 18,19 . We used human vascular smooth muscle cells (VSMC; Clonetics) that were plated at a seeding density of 1 × 10 5 cells/well and maintained at 37 °C, 5% CO 2 , in Dulbecco's Modified Eagle Medium (DMEM; Sigma-Aldrich) containing 10% fetal bovine serum (FBS; Gibco). Cells were growth arrested at 90% confluency by incubation in DMEM + 0.1% FBS overnight (18 hours). Control cultures (n = 6) were exposed to DMEM + 10% FBS comprising 10% v/v conditioned media generated from human monocytic THP-1 cells exposed to 10 µg/ml endotoxin (Lipopolysaccharide; Sigma-Aldrich) over a period of 24 hours. Experimental cultures were exposed to the same pro-inflammatory media but supplemented with either L-ascorbic acid (Sigma-Aldrich; 400 μM; n = 6 cultures) or chondroitin sulphate sodium salt (Sigma-Aldrich; 300 μM; n = 6 cultures), or a combination of both (n = 6 cultures). All cells were harvested after a 24-hour experimental period and subjected to RNA extraction followed by qRT-PCR using the Qiagen's QuantiTect Primer Assay QT00009436 (AMPK) and QT00051261 (SIRT1) as outlined above.
Results
Datasets characteristics. Two whole-genome gene expression datasets were included in this study to determine the consensus set of differentially expressed genes in carotid atherosclerosis associated with the symptoms of cerebral embolization ( Table 1). The E-MEXP-2257 dataset utilized 22 carotid plaque biopsies obtained from 13 and 9 patients with and without symptoms of cerebral embolization, respectively. The summary characteristics of participants included in the E-MEXP-2257 dataset are presented elsewhere 12 . In brief, the mean age of patients was 64 ± 8 years, ~27% of participants were females (6/22), and ~27% of participants (6/22) had a positive history of smoking ( Table 1). The second dataset included in this study, GSE21545, utilized 126 carotid plaque biopsies obtained from 25 and 101 symptomatic and asymptomatic patients, respectively ( Table 1). The summary characteristics of participants included in the GSE21545 dataset are presented elsewhere 13 . Briefly, the mean age of patients was 71 ± 9 years, ~22% of participants were females (28/126), and ~49% of participants (62/126) had a positive history of smoking (Table 1). No individual patient's clinical characteristics, including medication and severity of carotid atherosclerosis, were publically available for both datasets.
Numerical assessment of differentially expressed genes. The E-MEXP-2257 and GSE21545 microarray datasets were individually re-analysed to identify differentially expressed genes and the overlap between the findings. A total of 1,624 and 135 genes were found to be differentially expressed (≥1.15 absolute fold change, uncorrected P < 0.05) within carotid plaques of symptomatic compared to asymptomatic patients in E-MEXP-2257 and GSE21545, respectively (Fig. 1). Full lists of differentially expressed genes are given in Supplemental Table I and Supplemental Table II for E-MEXP-2257 and GSE21545, respectively. Although E-MEXP-2257 and GSE21545 collectively identified 1,752 differentially expressed individual genes, only 7 genes or 0.4% were consistently differentially expressed in the two datasets (7/1,752; Fig. 1). The arylsulphatase B (ARSB) gene was the only gene upregulated in symptomatic compared to asymptomatic patients in both datasets (Table 2). The coagulation factor III, tissue factor (F3) gene and the ubiquitin-like modifier activating enzyme 6 (UBA6) gene were found to be upregulated in symptomatic compared to asymptomatic patients in the E-MEXP-2257 dataset but downregulated in symptomatic compared to asymptomatic patients in the GSE21545 dataset (Table 2). The growth arrest specific 6 (GAS6) gene, the uncharacterized LOC730101 (LOC730101), and the secretory leukocyte peptidase inhibitor (SLPI) gene were identified to be downregulated in symptomatic compared to asymptomatic patients in both datasets (Table 2). Finally, the G protein-coupled receptor 135 (GPR135) gene was found to be downregulated in symptomatic compared to asymptomatic patients in the E-MEXP-2257 dataset but upregulated in symptomatic compared to asymptomatic patients in the GSE21545 dataset (Table 2).
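The consensus step above amounts to intersecting the two lists of differentially expressed genes and checking whether the direction of change agrees; a trivial sketch with hypothetical list contents is shown below.

# Hedged sketch of the consensus step: intersect the two DE gene lists and
# check whether the direction of change agrees. The list contents are hypothetical.
de_emexp = {"ARSB": +1.20, "F3": +1.30, "GAS6": -1.20}   # gene -> signed fold change (hypothetical)
de_gse = {"ARSB": +1.15, "F3": -1.20, "GAS6": -1.18}

consensus = {}
for gene in de_emexp.keys() & de_gse.keys():
    same_direction = (de_emexp[gene] > 0) == (de_gse[gene] > 0)
    consensus[gene] = "concordant" if same_direction else "discordant"

print(consensus)   # e.g. ARSB concordant (up), F3 discordant, GAS6 concordant (down)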
Validation of microarray findings. The validity of microarray findings was further assessed using carotid atherosclerosis biopsies obtained from 8 symptomatic and 7 asymptomatic patients undergoing carotid endarterectomy (validation group; Table 3). The risk factors and medications of symptomatic and asymptomatic patients were similar (Table 3). The relative expression of ARSB, the only gene consistently upregulated in symptomatic compared to asymptomatic patients in both microarray datasets, was also found to be significantly increased within the carotid atherosclerotic tissue of symptomatic compared to asymptomatic patients of the validation group using qRT-PCR (*P = 0.029; Fig. 2).
Cell culture. Finally, we investigated the effect of chondroitin sulphate (ARSB substrate) and L-ascorbic acid (ARSB inhibitor) on important anti-atherogenic pathways represented by the AMPK and SIRT1 genes using human VSMC in vitro under inflammatory conditions. We found that incubation of VSMC with L-ascorbic acid was associated with upregulation of both AMPK (Fig. 3A) and SIRT1 (Fig. 3B). The incubation of VSMC with chondroitin sulphate was associated with upregulation of SIRT1 (Fig. 3B) but not AMPK (Fig. 3A). Importantly, the simultaneous incubation of VSMC with chondroitin sulphate and L-ascorbic acid resulted in synergistic upregulation of AMPK (Fig. 3A) and additive upregulation of SIRT1 (Fig. 3B). These findings suggest that the combination of chondroitin sulphate and L-ascorbic acid may represent a potent activator of the AMPK/SIRT1 pathways.
Table 2. Genes for which the expression in carotid atherosclerosis was consistently associated with the symptoms of cerebral embolization in the two microarray datasets included in this study. P-value, calculated with non-parametric Mann-Whitney test without mathematical correction for multiple comparisons.
Discussion
We analysed publicly available microarray datasets from previous gene expression studies that utilised carotid plaque biopsies from patients with and without symptoms of cerebral embolization 12,13 . By focusing on genes that were simultaneously differentially expressed in the datasets analysed, we were able to discover and validate the upregulation of ARSB in carotid plaques of symptomatic compared to asymptomatic patients, which was not acknowledged in the original microarray studies. It is important to note that the protein product of this gene is involved in degradative processes of sulphated proteoglycans, the major component of virtually all extracellular matrices (ECMs) 20 . Furthermore, previous data suggest an inverse association between the enzymatic activity of ARSB and the stability of sulphated proteoglycans within the ECM 21 . In line with this, Kolodgie et al. found that plaque rupture sites contain very little proteoglycan content relative to stable lesions 22 , consistent with a degradative process.
Although the upregulation of ARSB within the carotid plaque biopsies of symptomatic patients identified in both microarray datasets was relatively small, we were able to confirm these findings by qRT-PCR using carotid plaque biopsies obtained from another group of symptomatic and asymptomatic patients. This led us to hypothesize that upregulation of ARSB may represent an important pathological mechanism associated with symptoms of cerebral embolization, consistent with previous findings that even small changes in gene expression can induce major phenotypic differences 17 . The ARSB enzyme catalyses de-sulphation of ubiquitous glycosaminoglycans such as chondroitin sulphate 23 . Published evidence suggests that plasma concentration of under-sulphated chondroitin is elevated in patients with carotid artery disease 24 , while sulphated chondroitin has been long known to exhibit anti-atherogenic properties in rodents, primates, and humans [25][26][27][28] . Several historical studies from the 1960s and 1970s report reduced incidence of coronary events and cardiovascular mortality in atherosclerotic subjects treated with chondroitin sulphate [28][29][30] . Recent data suggests that anti-atherogenic actions of chondroitin sulphate may occur through interfering with the pro-inflammatory activation of monocytes and endothelial cells by tumor necrosis factor (TNF) alpha 31 , a cytokine thought to be crucially involved in the pathogenesis of atherosclerotic plaque 32 . Although the authors did not elucidate the exact mechanism of action of chondroitin sulphate 31 , it is possible that upregulation of anti-inflammatory microRNAs, the negative regulators of gene expression, could play a role [33][34][35] . Previous studies suggest an inhibitory effect of chondroitin sulphates on gene expression through modification of microRNAs 36 . Importantly, the ARSB enzyme is inhibited by L-ascorbic acid 21,37 . Due to the lack of a well-developed animal model of carotid atherosclerosis associated with cerebral embolization, we further investigated the effect of L-ascorbic acid and chondroitin sulphate on important anti-atherogenic pathways in vitro. We found that chondroitin sulphate and L-ascorbic acid administered together induced a remarkable upregulation in the expression of SIRT1 and AMPK genes in VSMCs exposed to inflammatory conditions in vitro. These findings suggest that chondroitin sulphate formulated with L-ascorbic acid may serve as a potent activator of the SIRT1/AMPK pathway. This may hold promise as a novel therapeutic approach for carotid atherosclerosis since the SIRT1/AMPK pathway is key to a number of vasculoprotective processes. In particular, SIRT1 is the nicotinamide adenosine dinucleotide (NAD)-dependent deacetylase that has been associated with inhibition of the proatherogenic VSMC foam cell formation possibly through the suppressing of the nuclear factor kappa B (NF-κB) signalling pathway 38 . AMPK is the main energy-sensing kinase in all eukaryotic cells and has been implicated in stabilizing atherosclerotic plaques through the inhibition of the mammalian target of rapamycin (mTOR) signalling pathway 39 . The downregulation of genes for secretory leukocyte peptidase inhibitor (SLPI), uncharacterized LOC730101, and growth arrest specific 6 (GAS6) in carotid atherosclerosis associated with the symptoms of cerebral embolization in both datasets included in this study was also demonstrated. The role of SLPI and LOC730101 in human carotid atherosclerosis is largely unknown. 
Some evidence suggests that more severe atherosclerosis in humans is associated with an increase in GAS6 expression 40 , while similar expression of GAS6 in human carotid arteries with and without atherosclerosis has been reported 41 . Further investigation of the role of these three genes in carotid atherosclerosis is needed.
The limitations of this study include the relatively small number of patients included in the original microarray datasets. In particular, although both datasets collectively included 148 carotid atherosclerosis tissue samples, the total number of 38 ischemic events was relatively small, and the findings need to be substantiated by larger studies. We also observed very limited overlap and consistency between the genes differentially expressed in the two datasets, suggesting the heterogeneous nature of the patients investigated. Further genome-wide gene expression studies involving histologically standardized sets of patients are needed. In view of these limitations, we sought to validate important microarray findings using another set of carotid artery biopsies obtained from patients with and without symptoms of cerebral embolization. The assessment of independent samples helps to minimize the possibility that selection biases adversely affected the generalizability of the findings. We did, however, only assess mRNA, not protein, levels due to the limited availability of carotid artery biopsies. Finally, the exact mechanism by which chondroitin sulphate combined with L-ascorbic acid upregulated the SIRT1/AMPK pathway, as well as the therapeutic doses of these two bioactive molecules, remains to be elucidated.
In conclusion, a decade after the first microarray studies of stroke, its molecular determinants are still poorly understood. The outcome of this study highlights a potential role for arylsulfatase B in promoting atherosclerosis-related stroke and warrants its further investigation as a therapeutic target that could be of clinical benefit. | 2018-04-03T01:05:21.193Z | 2017-06-28T00:00:00.000 | {
"year": 2017,
"sha1": "b63fb72a8ae70749876f3198f3d057354a0d4004",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-04497-9.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b63fb72a8ae70749876f3198f3d057354a0d4004",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118610940 | pes2o/s2orc | v3-fos-license | Phase Diagram of Two-dimensional Polarized Fermi Gas With Spin-Orbit Coupling
We investigate the ground state of the two-dimensional polarized Fermi gas with spin-orbit coupling and construct the phase diagram at zero temperature. We find that phase separation exists when the binding energy is low. As the binding energy increases, the topologically nontrivial superfluid phase coexists with the topologically trivial superfluid phase, which constitutes a topological phase separation. The spin-orbit coupling enhances the triplet pairing and destabilizes the phase separation against the superfluid phase.
INTRODUCTION
Topological properties have been investigated extensively in condensed matter systems such as topological insulators (TIs) [1,2] and topological superconductors (TSCs) [3][4][5][6], which are described by topological order [7] instead of the traditional Landau symmetry-breaking theory. In ultracold atomic systems, effective spin-orbit coupling (SOC) has recently been realized by utilizing spatially varying laser fields [8,9]. Together with the technique of Feshbach resonance [10,11], spin-orbit coupled ultracold atomic systems provide a clean platform to investigate the topological properties of condensed matter systems.
The SOC significantly changes the Fermi surface and largely enhances the low-energy density of states [12,13]. Therefore, many interesting phases and intriguing phenomena become possible. The triplet pairing and the transition temperature are largely enhanced [14], while the pair coherence lengths are suppressed by the SOC [15]. In three dimensions, the ground state of the Fermi system is enriched by the SOC [16][17][18][19][20][21][22][23]. In two dimensions, the superfluid phase of the spin-orbit coupled Fermi gas can be topologically nontrivial [24][25][26][27]. Furthermore, there is a topologically nontrivial phase separation (TPS), i.e. the coexistence of superfluid phases with different topological order, in trapped SOC Fermi systems with population imbalance [28,29].
In this paper, we investigate the uniform polarized two-dimensional (2D) Fermi gas with SOC near a wide Feshbach resonance at zero temperature. Phase separation is possible for a polarized Fermi gas even without SOC, due to the competition between the polarization and the pairing interaction. To map out an accurate phase diagram, we determine the ground state by minimizing the thermodynamic potential of the phase separation [30].
In the presence of SOC, the Fermi surface is topologically changed. The topological phase transition (TPT) takes place when the excitation gap closes. Therefore, the topologically nontrivial superfluid phase (TSF) shows up in the phase diagram alongside the topologically trivial superfluid phase (NSF). For the phase-separated state, the topological phase transition is much more likely to take place in the component with the smaller pairing gap; thus the phase separation becomes topologically nontrivial.
This paper is organized as follows. In Sec. 2, we introduce the Hamiltonian of the uniform polarized 2D Fermi gas, obtain the zero-temperature thermodynamic potential by mean-field theory, and then give the gap equation and the number equations for the superfluid phase. In Sec. 3, we investigate the ground state by minimizing the thermodynamic potential of the phase-separated state and map out the phase diagram in detail. A brief conclusion is given in Sec. 4.
FORMALISM OF THE SYSTEM
We consider the uniform 2D polarized Fermi gas with SOC, which is described by the Hamiltonian H = H_0 + H_SO + H_int, where H_0 is the kinetic term, H_SO is the spin-orbit interaction, and H_int is the s-wave interaction between the two fermionic species. Here ξ_{k,σ} = k²/(2m) − µ_σ, c†_{k,σ} (c_{k,σ}) denotes the creation (annihilation) operator for a fermion with momentum k and spin σ = {↑, ↓}, λ is the strength of the Rashba spin-orbit coupling, ϕ_k = arg(k_x + ik_y), and g is the bare s-wave interaction strength, which can be renormalized in terms of the two-body binding energy E_b. By transforming to the helicity basis, the single-particle part of the Hamiltonian becomes diagonal, where a†_{k,±} (a_{k,±}) is the creation (annihilation) operator for the state with helicity (±), ξ_{k,±} = ξ_k ± λk with ξ_k = ε_k − µ, and the chemical potentials of the two species enter through the average chemical potential µ and the chemical potential difference h. The Hamiltonian can be rewritten in the helicity basis Ψ_k = (a_{k,+}, a_{k,−}, a†_{−k,+}, a†_{−k,−})^T in Bogoliubov-de Gennes (BdG) form. The classification of this 2D BdG Hamiltonian, which breaks time-reversal symmetry but preserves particle-hole symmetry, is the Z class [5]. The topological numbers which characterize the topological properties of the superfluid phases are integers. There is a topological phase transition at the gap-closing point h = √(µ² + ∆²), and the topologically nontrivial superfluid phase shows up when h > √(µ² + ∆²).
The Hamiltonian can be diagonalized in terms of quasiparticle operators α†_{k,±} (α_{k,±}), the creation (annihilation) operators for quasiparticles with the corresponding excitation spectra. The pairing gap should be determined self-consistently together with the chemical potentials by minimizing the thermodynamic potential, ∂Ω/∂∆ = 0, and by the particle number equations n_σ = −∂Ω/∂µ_σ, where n = n_↑ + n_↓ is the total particle number and p = (n_↑ − n_↓)/n is the polarization. In the presence of SOC, the Fermi surface is topologically changed and triplet pairing becomes possible. The condensate fraction therefore includes both singlet and triplet contributions, n_c = n_0 + n_1.
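For orientation, the standard mean-field expressions for a 2D Rashba-coupled s-wave superfluid with average chemical potential µ, effective Zeeman field h, and pairing gap ∆ are sketched below in LaTeX form; the notation E_{k,±} and the sign conventions are assumptions made here for illustration, chosen to be consistent with the gap-closing condition h = √(µ² + ∆²) quoted above.

\begin{align}
E_{\mathbf{k},\pm} &= \sqrt{\xi_k^2 + \lambda^2 k^2 + h^2 + \Delta^2
    \pm 2\sqrt{h^2\Delta^2 + \xi_k^2\left(\lambda^2 k^2 + h^2\right)}}, \\
\frac{1}{g} &\;\rightarrow\; -\sum_{\mathbf{k}} \frac{1}{2\epsilon_k + E_b}
    \qquad \text{(2D regularization of the bare coupling)}, \\
E_{\mathbf{k}=0,-} &= \left|\sqrt{\mu^2+\Delta^2} - h\right|
    \;\Rightarrow\; \text{gap closing at } h = \sqrt{\mu^2+\Delta^2}.
\end{align}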
THE PHASE DIAGRAM IN THE p − λk_F/E_F PLANE
There is no guarantee that the ground state of the polarized Fermi gas corresponds to one of the spatially homogeneous states. Owing to the competition between the population imbalance and the pairing interaction, phase separation becomes possible. For the polarized Fermi gas, the stability of the phase separation against the superfluid should therefore be considered, as in the case without SOC. By introducing a mixing coefficient x (0 ≤ x ≤ 1) and ignoring the interface energy between the two coexisting phases, the thermodynamic potential of the phase-separated state can be written as a weighted sum of the potentials of the two components, which is minimized self-consistently; in this way we construct the phase diagram in the p − λk_F/E_F plane for different binding energies. First, we show the phase diagrams in the p − λk_F/E_F plane for different binding energies in Fig. 1(a)-(d). In the absence of SOC, when the polarization is larger than 0.32, the phase separation cannot be sustained against the topologically trivial superfluid (NSF) for the E_b = 0.1E_F case. The critical polarization increases with the binding energy, as shown in Fig. 1. This is consistent with the recent result without SOC [31].
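Written out explicitly (with conventions assumed here for illustration rather than reproduced from the original equations), the mixed-phase construction referred to above is a lever-rule minimization:

\begin{align}
\Omega_{\mathrm{PS}}(x,\Delta_1,\Delta_2) &= x\,\Omega(\Delta_1;\mu,h) + (1-x)\,\Omega(\Delta_2;\mu,h), \\
n &= x\,n(\Delta_1) + (1-x)\,n(\Delta_2), \qquad
p\,n = x\,\delta n(\Delta_1) + (1-x)\,\delta n(\Delta_2),
\end{align}

where ∆_1 and ∆_2 label the two locally stable minima of Ω, δn = n_↑ − n_↓, and the ground state is the configuration (x, ∆_1, ∆_2, µ, h) that minimizes Ω_PS at fixed total density n and polarization p.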
In the presence of SOC, the Fermi surface is topologically changed and other interesting topologically nontrivial phases become possible. There is a topological phase transition when the excitation gap closes at the critical point h = √(µ² + ∆²). The topological phase transition tends to take place in the high-polarization region, in which the pairing gap is small and the imbalance of the chemical potentials is large. Therefore, the phases at high polarization in the phase diagrams are TSF, as shown in Fig. 1.
For the phase-separated state, the topological phase transition is much more likely to take place in the component with the smaller pairing gap. The phase separation becomes topologically nontrivial when this smaller-gap component becomes topologically nontrivial, as shown in Fig. 1(b),(c),(d). As the binding energy increases, the topological phase separation becomes more likely. The entire phase separation region is topologically trivial for E_b = 0.1E_F (shown in Fig. 1(a)) and nontrivial for E_b = 1.0E_F (shown in Fig. 1(d)). The boundary (the red solid line) between the TSF and NSF merges with the phase separation boundary (the blue dashed line) as the binding energy increases. Fig. 1 also shows that the SOC destabilizes the phase separation against the superfluid phase. When the SOC strength increases to a critical value, the phase separation disappears.
Second, we show the behavior of the thermodynamic potential as a function of the pairing gap in different regions of the phase diagram for E_b/E_F = 0.6 in Fig. 2. The thermodynamic potential has two degenerate minima in the phase separation regions, as shown in Fig. 2(a),(c). Two distinct superfluid phases can thus show up and coexist in the phase diagram. The two coexisting states are both topologically trivial in Fig. 2(a), whereas in Fig. 2(c) the smaller-gap component state is topologically nontrivial while the other is topologically trivial. The thermodynamic potential in the superfluid region has only one minimum, as shown in Fig. 2. Finally, we show the variation of ∆, µ, h and the condensate fractions for very low polarization (p = 0.01) with E_b = 0.5E_F and E_b = 1E_F in Fig. 3. The triplet condensate fraction is enhanced by the SOC. The SOC enhances the triplet pairing by virtue of the topological change of the Fermi surface. Therefore, the system cannot sustain the phase separation against the superfluid phase as the triplet pairing increases along with the SOC strength.
It should be pointed out that the gap equation diverges as the pairing gap ∆ reduces to zero when µ < −(λ⁴ + 4h²)/(4λ²) or µ < min(−|h|/2, −λ²/2); hence there is no boundary between the normal phase and the superfluid phase. We map out the boundaries (dotted lines) for ∆ = 0.001E_F, as shown in Fig. 1. Above the curve, the pairing gap is ∆ < 0.001E_F and decreases exponentially as the SOC is reduced to zero [29].
CONCLUSIONS
We construct the phase diagram for the two-dimensional Fermi gas with spin-orbit coupling and population imbalance near a wide Feshbach resonance. We map out in detail the stability regions of the topologically trivial and nontrivial superfluid phases and of the phase separation. As the spin-orbit coupling increases, a topological phase transition takes place. Therefore, a topologically nontrivial phase separation is possible. The spin-orbit coupling enhances the triplet pairing and suppresses the phase separation. The phase separation cannot be sustained against the superfluid phase when the spin-orbit coupling is large.
We are very grateful to Wei Yi and Zhong Wang for helpful discussions. This work is supported by NSFC Grant No.10675108. | 2011-11-18T04:08:56.000Z | 2011-11-18T00:00:00.000 | {
"year": 2011,
"sha1": "28a9ec9fc96e309eb96968b87a06064e37b5847f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "28a9ec9fc96e309eb96968b87a06064e37b5847f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
4790854 | pes2o/s2orc | v3-fos-license | A Retrospective Analysis of Oral Langerhans Cell Histiocytosis in an Iranian Population: a 20-year Evaluation.
STATEMENT OF THE PROBLEM
Langerhans cell histiocytosis is a rare disease with unknown pathogenesis and is characterized by local or disseminated proliferation of Langerhans cells. There is no previous investigation on the prevalence of oral Langerhans cell histiocytosis in the Iranian population.
PURPOSE
The purpose of this study was to assess the relative frequency of oral Langerhans cell histiocytosis in an Iranian population and to compare the data with previous reports.
MATERIALS AND METHOD
Pathology files of the Oral and Maxillofacial Pathology Department of the Dental School of Shahid Beheshti University of Medical Sciences from 1992 to 2012 were searched for cases recorded as oral Langerhans cell histiocytosis. A total of 20 cases were found, and the clinical information of the patients was recorded.
RESULTS
The relative frequency of oral Langerhans cell histiocytosis was 0.34% and the most common location was the posterior mandible. In addition, the mean age of patients was 27 years and there was a definite male predominance. Most lesions were localized and tooth mobility was the most common oral presentation.
CONCLUSION
In the Iranian population, as in many other countries, the relative frequency of oral Langerhans cell histiocytosis is low. Moreover, tooth mobility and periodontal lesions are frequent early signs of the disease. Therefore, in patients with periodontal problems, good oral health, and no response to treatment, Langerhans cell histiocytosis must be considered. Additionally, although most cases of oral Langerhans cell histiocytosis are localized, systemic involvement must also be considered, and dental professionals have an important role in early detection of the disease.
Introduction
Langerhans cell histiocytosis (LCH) is caused by an uncontrolled pathogenic clonal proliferation of dendritic cells with Langerhans cell characteristics. [1] No definite etiology has been identified for this disease, and it can be triggered by environmental agents and viruses, in particular Epstein-Barr virus. [2] Three clinical subtypes of LCH are recognized.
The unifocal subtype (single system, single site), previously referred to as eosinophilic granuloma, usually affects the bones, lymph nodes, or lungs as the primary sites. [3] The multifocal subtype (single system, multiple sites) affects several sites in any particular organ system. Formerly this subtype was recognized as Hand-Schuller-Christian disease with bone lesions, diabetes insipidus, and exophthalmos. [4] The multiple-organ-system subtype, previously referred to as disseminated histiocytosis or Letterer-Siwe disease, affects multiple sites in different organ systems and is seen in the first year of life and has the worst prognosis. [4] The jaws are affected in 10% to 20% of all cases.
Dull pain and tenderness often accompany bone lesions. Bone destruction and tooth loosening may clinically resemble severe periodontitis. The lesions in periapical sites may mimic periapical inflammatory lesions. The involved gingival tissues are often inflamed, hyperplastic, or ulcerated. Oral mucosal lesions in the form of submucosal nodules, ulcers, and leukoplakia have also been described. [5][6] Since there was no previous research on oral LCH in the Iranian population, we assessed the relative frequency of oral LCH and compared the results with those from other countries.
Materials and Method
The files of the Oral and Maxillofacial Pathology Department of the Dental School of Shahid Beheshti University of Medical Sciences, Tehran, Iran, served as the source of materials during a 20-year period from 1992 to 2012. All lesions diagnosed as oral LCH were subjected to microscopic re-evaluation. Then, immunohistochemical examination for the CD1a protein was performed to confirm the diagnosis. Information including the patients' age and gender, the lesions' location, and the patients' clinical symptoms was also recorded. The Chi-square test was used, and results with p < 0.05 were considered significant.
Result
During this period, 5744 specimens were referred to the oral pathology department. Twenty cases (0.34% of all cases) were identified as LCH. The mean age of the patients was 27 years (range 3 months to 51 years), and the majority of patients were aged between 21 and 30 years. Fifteen lesions occurred in males and 5 in females (ratio 3:1). The Chi-square test showed a significant difference between males and females. In addition, all the lesions were intraosseous. 45% of all lesions occurred in the mandible, 45% in both the mandible and the maxilla, and only 5% in the maxilla (Table 1). The ratio of mandibular to maxillary involvement was statistically significant (9 to 1). The posterior regions (especially in the mandible) were the predominant sites of occurrence. The most common oral presenting symptom was tooth mobility (50%), followed by pain (35%) and swelling (35%) (Table 2, Table 3). Previous series reported over a 30-year period [10][11][12] likewise indicate the low prevalence of the disease.
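For illustration, the sex distribution reported above (15 male vs. 5 female cases) can be checked with a one-sample Chi-square test against an even split; a minimal Python sketch, assuming a 50/50 expected distribution under the null hypothesis, is:

from scipy.stats import chisquare
# 15 male vs. 5 female cases; expected 10/10 under the null of no sex predominance
stat, p = chisquare(f_obs=[15, 5], f_exp=[10, 10])
print(f"chi2 = {stat:.2f}, p = {p:.3f}")   # chi2 = 5.00, p ≈ 0.025 (< 0.05)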
LCH can affect any age group; however, it is more frequent in children. [10] Nevertheless, the mean age of patients in our research was 27 years. In fact, bony lesions of LCH usually occur in patients older than 20 years. Therefore, the mean age of patients with jaw involvement is higher than that of other groups.
In this study, oral LCH was more frequent in males, generally affecting the posterior region of the mandible. This was in agreement with other investigations. [4,13] The presenting signs and symptoms of oral LCH may be pain, swelling, oral mucosal ulceration, periodontal problems, non-healing extraction sites, and granulomatous or hyperkeratotic lesions. [10,[14][15] These various presentations of oral LCH are due to the fact that either the jaws or the oral mucosa may be involved. Oral LCH in this case series most frequently presented with tooth mobility, pain, and swelling.
Oral LCH lesions may be localized or may be the earliest sign of a disseminated disease [3]. Various frequencies of the disseminated form (10-66%) have been reported in the literature. [15][16] Most patients in our research (75%) had localized lesions, and the remaining minority (25%) had systemic involvement. Since oral lesions may be the initial manifestation of systemic involvement, dental professionals have an important role in early detection of the disease. [17] Once the diagnosis of LCH is established, the patient should be referred by the dentist to an internist for a thorough physical examination, particularly of the lymph nodes and abdominal organs, to rule out a more disseminated disease. It must be noted that any delay in the diagnosis of LCH may lead to a poor response to treatment.
Conclusion
As in many other countries, the relative frequency of oral Langerhans cell histiocytosis is low in the Iranian population, and the great majority of cases occur in males. In addition, tooth mobility and periodontal lesions are frequently the early signs of the disease. Therefore, in patients with periodontal problems, good oral health, and no response to treatment, LCH must be considered. Although most cases of oral LCH are localized, a thorough examination should be performed to rule out any systemic involvement.
Conflict of Interest
None declared | 2018-04-03T00:26:30.370Z | 2015-03-01T00:00:00.000 | {
"year": 2015,
"sha1": "8ea55c34a8288af8120840adb93762f9cd6355d1",
"oa_license": "CCBY",
"oa_url": "http://europepmc.org/articles/pmc4623830?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "8ea55c34a8288af8120840adb93762f9cd6355d1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |